diff --git a/README.md b/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..a8fbc62116a29f95ab62ae769de34050d1e9b843
--- /dev/null
+++ b/README.md
@@ -0,0 +1,165 @@
+
+
+# 🔥 Flame
+
+
+
+A minimal framework for training FLA (flash-linear-attention) models, either from scratch or by finetuning.
+
+Built on the robust infrastructure of 🤗, `flame` enables you to train large language models with just a few lines of code:
+we use `datasets` for data processing, `transformers` for model definitions, and `accelerate`[^1] for seamless distributed training.
+
+In this README, we will guide you through the process of using `flame` to train GLA models.
+
+## Setup
+
+To get started, you'll need to install the required packages.
+Both `fla` and `flame` have minimal dependencies.
+Clone the `fla` repository and install the necessary packages as follows:
+
+```bash
+git clone https://github.com/sustcsonglin/flash-linear-attention.git
+cd flash-linear-attention
+pip install .
+pip install accelerate wandb deepspeed
+```
+
+> [!CAUTION]
+> The 🤗 `tokenizers` library has known [memory leak issues](https://github.com/huggingface/tokenizers/issues/1539) when processing very long documents.
+> To avoid them, make sure you install `tokenizers>=0.20.4`.
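+>
+> For example, upgrade in place with:
+>
+> ```bash
+> pip install -U "tokenizers>=0.20.4"
+> ```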
+
+## Preprocessing
+
+Before training, you need to download and pre-tokenize your dataset.
+We provide a straightforward script for this.
+For instance, to tokenize the 10B-token sample (`sample-10BT`) of the `fineweb-edu` dataset, run:
+
+```bash
+python preprocess.py \
+ --dataset HuggingFaceFW/fineweb-edu \
+ --name sample-10BT \
+ --split train \
+ --context_length 2048
+```
+For an even smaller dataset, handy for quick testing, run:
+```bash
+python preprocess.py \
+ --dataset alturing/gutenberg-texts \
+ --split train \
+ --context_length 2048
+```
+
+The first command caches the processed dataset at `data/HuggingFaceFW/fineweb-edu/sample-10BT/train`.
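+
+Conceptually, pre-tokenization tokenizes every document and packs the token ids into fixed-length blocks. The sketch below is purely illustrative (the tokenizer choice and the `text` column name are assumptions; `preprocess.py` remains the supported path):
+
+```python
+from datasets import load_dataset
+from transformers import AutoTokenizer
+
+context_length = 2048
+tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
+
+dataset = load_dataset("alturing/gutenberg-texts", split="train")
+
+def tokenize_and_pack(batch):
+    # tokenize all documents, concatenate the ids, then split them into
+    # blocks of exactly `context_length` tokens (the remainder is dropped)
+    ids = tokenizer(batch["text"])["input_ids"]
+    flat = [token for doc in ids for token in doc]
+    total = len(flat) - len(flat) % context_length
+    return {"input_ids": [flat[i:i + context_length] for i in range(0, total, context_length)]}
+
+packed = dataset.map(tokenize_and_pack, batched=True, remove_columns=dataset.column_names)
+print(packed)
+```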
+
+In the [GLA paper](https://proceedings.mlr.press/v235/yang24ab.html), a subset of SlimPajama is used for pretraining.
+Given the dataset's size, the fastest way to download it is via `git lfs` (see [this issue](https://huggingface.co/datasets/cerebras/SlimPajama-627B/discussions/2)):
+```bash
+git lfs install
+git clone https://huggingface.co/datasets/cerebras/SlimPajama-627B
+python preprocess.py \
+ --dataset SlimPajama-627B \
+ --split train \
+ --context_length 2048
+```
+
+## Training from scratch
+
+To train a 340M GLA model from scratch, execute the following command:
+
+```bash
+bash train.sh \
+ type=gla \
+ lr=3e-4 \
+ steps=20480 \
+ batch=8 \
+ update=1 \
+ warmup=1024 \
+ context=2048 \
+ path=exp/gla-340M-10B \
+ project=fla \
+ model=configs/gla_340M.json \
+ data=HuggingFaceFW/fineweb-edu \
+ name=sample-10BT \
+ cache=data/HuggingFaceFW/fineweb-edu/sample-10BT/train
+```
+Or, to launch a quick SCAN test run:
+```bash
+bash train.sh \
+ type=scan \
+ lr=3e-4 \
+ steps=1000 \
+ batch=8 \
+ update=1 \
+ warmup=100 \
+ context=2048 \
+ path=exp/scan-340M-test \
+ project=fla \
+ model=configs/scan_340M.json \
+ data=alturing/gutenberg-texts \
+ name=sample-10BT \
+ cache=data/alturing/gutenberg-texts/train
+```
+
+`flame` also supports resuming interrupted training by specifying the checkpoint path.
+Simply use the following command to resume training:
+
+```bash
+bash train.sh \
+ type=gla \
+ lr=3e-4 \
+ steps=20480 \
+ batch=8 \
+ update=1 \
+ warmup=1024 \
+ context=2048 \
+ path=exp/gla-340M-10B \
+ project=fla \
+ model=configs/gla_340M.json \
+ data=HuggingFaceFW/fineweb-edu \
+ name=sample-10BT \
+ cache=data/HuggingFaceFW/fineweb-edu/sample-10BT/train \
+ checkpoint=exp/gla-340M-10B/checkpoint-8192
+```
+
+You can also use `wandb` to monitor your training process effectively.
+
+![wandb](https://github.com/user-attachments/assets/05ca031c-1cae-41c9-bfcb-5b6b6d0df729)
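+
+If `wandb` has not been set up on the machine yet, authenticate once before launching training (this assumes you already have a `wandb` account):
+
+```bash
+wandb login
+```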
+
+## Continual Pretraining
+
+`flame` supports continual training from a pretrained checkpoint.
+Below, we provide an example of how to finetune Mistral-7B to GLA.
+You can follow similar steps to reproduce the results in the [GSA paper](https://arxiv.org/abs/2409.07146):
+
+1. Initialize a brand-new GLA-7B model from the config and copy the matched pretrained weights from Mistral-7B (a conceptual sketch of this weight copying is given after these steps):
+```bash
+cd ../utils
+python convert_from_llama.py \
+ --model mistralai/Mistral-7B-v0.1 \
+ --config ../training/configs/gla_7B.json \
+ --output ../training/converted/gla-7B
+cd -
+```
+
+2. Directly launch training from the converted checkpoint:
+```bash
+bash train.sh \
+ type=gla \
+ lr=3e-5 \
+ steps=10240 \
+ batch=4 \
+ update=8 \
+ warmup=512 \
+ context=2048 \
+ path=exp/gla-7B-20B \
+ project=fla \
+ model=converted/gla-7B \
+ data=SlimPajama-627B \
+ cache=data/SlimPajama-627B/train
+```
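+
+Conceptually, the weight copying in step 1 amounts to the sketch below. It is illustrative only: the class and path names are assumptions, and `convert_from_llama.py` is the supported way to do the conversion.
+
+```python
+import torch
+from transformers import AutoModelForCausalLM
+
+from fla.models import GLAConfig, GLAForCausalLM
+
+mistral = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16)
+gla = GLAForCausalLM(GLAConfig.from_json_file("configs/gla_7B.json")).to(torch.bfloat16)
+
+# copy every pretrained tensor whose name and shape match the GLA model
+# (embeddings, MLPs, norms); attention-specific weights stay freshly initialized
+gla_state = gla.state_dict()
+matched = {
+    name: param
+    for name, param in mistral.state_dict().items()
+    if name in gla_state and param.shape == gla_state[name].shape
+}
+gla.load_state_dict(matched, strict=False)
+gla.save_pretrained("converted/gla-7B")
+```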
+
+Please be aware that finetuning a 7B model on a single node may not be the most efficient approach.
+If multiple nodes are available, consider a multi-node run for better throughput.
+You can find guidance on how to launch a multi-node job in the [accelerate tutorial](https://github.com/huggingface/accelerate/blob/main/examples/slurm/submit_multinode.sh).
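+
+For reference, a two-node launch with `accelerate` looks roughly like the sketch below. Run it on every node with its own `--machine_rank`, and replace `train.py <training arguments>` with whatever entry point and arguments `train.sh` actually invokes:
+
+```bash
+accelerate launch \
+  --config_file configs/deepspeed.yaml \
+  --num_machines 2 \
+  --num_processes 16 \
+  --machine_rank 0 \
+  --main_process_ip <node-0-ip> \
+  --main_process_port 29500 \
+  train.py <training arguments>
+```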
+
+[^1]: The `accelerate` library supports various distributed frameworks, such as `deepspeed` and `megatron`, for large-scale training. We use `deepspeed` in our case.
diff --git a/config.json b/config.json
new file mode 100644
index 0000000000000000000000000000000000000000..1bcdb7580c0dd61927f7e2596d2e4b1af15cac2c
--- /dev/null
+++ b/config.json
@@ -0,0 +1,42 @@
+{
+ "_name_or_path": "configs/scan_16M_8192.json",
+ "architectures": [
+ "SCANForCausalLM"
+ ],
+ "attn": null,
+ "attn_mode": "parallel",
+ "bos_token_id": 1,
+ "clamp_max": null,
+ "clamp_min": null,
+ "elementwise_affine": true,
+ "eos_token_id": 2,
+ "expand_k": 1,
+ "expand_v": 1,
+ "fuse_cross_entropy": true,
+ "fuse_norm": true,
+ "gate_act": "softmax",
+ "gate_logit_normalizer": 8,
+ "hidden_act": "swish",
+ "hidden_ratio": 4,
+ "hidden_size": 256,
+ "initializer_range": 0.02,
+ "intermediate_size": null,
+ "max_position_embeddings": 8192,
+ "model_type": "scan",
+ "norm_eps": 1e-06,
+ "norm_first": true,
+ "num_heads": 4,
+ "num_hidden_layers": 10,
+ "num_kv_heads": null,
+ "state_size": 16,
+ "tie_word_embeddings": true,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.47.0",
+ "use_cache": true,
+ "use_gk": true,
+ "use_gv": false,
+ "use_norm": true,
+ "use_output_gate": false,
+ "vocab_size": 32000,
+ "window_size": 128
+}
diff --git a/configs/deepspeed.yaml b/configs/deepspeed.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..4d3ad966736455a6fb7ffeef1803784d2a6bf997
--- /dev/null
+++ b/configs/deepspeed.yaml
@@ -0,0 +1,10 @@
+compute_environment: LOCAL_MACHINE
+distributed_type: DEEPSPEED
+deepspeed_config:
+ deepspeed_config_file: configs/ds_config.json
+ zero3_init_flag: true
+machine_rank: 0
+main_training_function: main
+num_machines: 1
+num_processes: 1
+use_cpu: false
diff --git a/configs/ds_config.json b/configs/ds_config.json
new file mode 100644
index 0000000000000000000000000000000000000000..272589af1272826bce4abe07ba41e22471f276db
--- /dev/null
+++ b/configs/ds_config.json
@@ -0,0 +1,19 @@
+{
+ "train_batch_size": "auto",
+ "train_micro_batch_size_per_gpu": "auto",
+ "gradient_accumulation_steps": "auto",
+ "gradient_clipping": "auto",
+ "zero_allow_untested_optimizer": true,
+ "bf16": {
+ "enabled": true
+ },
+ "zero_optimization": {
+ "stage": 2,
+ "allgather_partitions": true,
+ "allgather_bucket_size": 5e8,
+ "reduce_scatter": true,
+ "reduce_bucket_size": 5e8,
+ "overlap_comm": false,
+ "contiguous_gradients": true
+ }
+}
diff --git a/configs/gla_16M.json b/configs/gla_16M.json
new file mode 100644
index 0000000000000000000000000000000000000000..ba97225ae9ffac48f79c486a3802f7ff59290320
--- /dev/null
+++ b/configs/gla_16M.json
@@ -0,0 +1,26 @@
+{
+ "attn_mode": "chunk",
+ "bos_token_id": 1,
+ "clamp_min": null,
+ "eos_token_id": 2,
+ "expand_k": 0.5,
+ "expand_v": 1,
+ "fuse_cross_entropy": true,
+ "fuse_norm": true,
+ "hidden_act": "swish",
+ "hidden_ratio": 4,
+ "hidden_size": 256,
+ "initializer_range": 0.02,
+ "intermediate_size": null,
+ "max_position_embeddings": 2048,
+ "model_type": "gla",
+ "num_heads": 4,
+ "num_hidden_layers": 10,
+ "norm_eps": 1e-06,
+ "tie_word_embeddings": true,
+ "transformers_version": "4.38.2",
+ "use_cache": true,
+ "use_gk": true,
+ "use_gv": false,
+ "vocab_size": 32000
+}
\ No newline at end of file
diff --git a/configs/gla_1B.json b/configs/gla_1B.json
new file mode 100644
index 0000000000000000000000000000000000000000..b727f4e7a749055886d98ae452e2093fc954cdfe
--- /dev/null
+++ b/configs/gla_1B.json
@@ -0,0 +1,26 @@
+{
+ "attn_mode": "chunk",
+ "bos_token_id": 1,
+ "clamp_min": null,
+ "eos_token_id": 2,
+ "expand_k": 0.5,
+ "expand_v": 1,
+ "fuse_cross_entropy": true,
+ "fuse_norm": true,
+ "hidden_act": "swish",
+ "hidden_ratio": 4,
+ "hidden_size": 2048,
+ "initializer_range": 0.02,
+ "intermediate_size": null,
+ "max_position_embeddings": 2048,
+ "model_type": "gla",
+ "num_heads": 4,
+ "num_hidden_layers": 24,
+ "norm_eps": 1e-06,
+ "tie_word_embeddings": false,
+ "transformers_version": "4.38.2",
+ "use_cache": true,
+ "use_gk": true,
+ "use_gv": false,
+ "vocab_size": 32000
+}
diff --git a/configs/gla_340M.json b/configs/gla_340M.json
new file mode 100644
index 0000000000000000000000000000000000000000..bcb0beec65bf9abebf2bfbd08c1f6864f57e5fef
--- /dev/null
+++ b/configs/gla_340M.json
@@ -0,0 +1,26 @@
+{
+ "attn_mode": "chunk",
+ "bos_token_id": 1,
+ "clamp_min": null,
+ "eos_token_id": 2,
+ "expand_k": 0.5,
+ "expand_v": 1,
+ "fuse_cross_entropy": true,
+ "fuse_norm": true,
+ "hidden_act": "swish",
+ "hidden_ratio": 4,
+ "hidden_size": 1024,
+ "initializer_range": 0.02,
+ "intermediate_size": null,
+ "max_position_embeddings": 2048,
+ "model_type": "gla",
+ "num_heads": 4,
+ "num_hidden_layers": 24,
+ "norm_eps": 1e-06,
+ "tie_word_embeddings": true,
+ "transformers_version": "4.38.2",
+ "use_cache": true,
+ "use_gk": true,
+ "use_gv": false,
+ "vocab_size": 32000
+}
diff --git a/configs/gla_7B.json b/configs/gla_7B.json
new file mode 100644
index 0000000000000000000000000000000000000000..48107c45a764967b6e9cd544a1b5f026bcdc3c0b
--- /dev/null
+++ b/configs/gla_7B.json
@@ -0,0 +1,29 @@
+{
+ "attn_mode": "chunk",
+ "bos_token_id": 1,
+ "clamp_min": null,
+ "eos_token_id": 2,
+ "expand_k": 1,
+ "expand_v": 1,
+ "feature_map": "relu",
+ "fuse_cross_entropy": true,
+ "fuse_norm": true,
+ "hidden_act": "swish",
+ "hidden_ratio": 4,
+ "hidden_size": 4096,
+ "initializer_range": 0.02,
+ "intermediate_size": 14336,
+ "max_position_embeddings": 32768,
+ "model_type": "gla",
+ "num_heads": 32,
+ "num_kv_heads": 8,
+ "num_hidden_layers": 32,
+ "norm_eps": 1e-05,
+ "tie_word_embeddings": false,
+ "transformers_version": "4.40.0",
+ "use_cache": true,
+ "use_output_gate": false,
+ "use_gk": true,
+ "use_gv": false,
+ "vocab_size": 32000
+}
diff --git a/configs/gsa_16M.json b/configs/gsa_16M.json
new file mode 100644
index 0000000000000000000000000000000000000000..caa6f36f1b368671006986d13a2bfc4f0c95923f
--- /dev/null
+++ b/configs/gsa_16M.json
@@ -0,0 +1,27 @@
+{
+ "attn_mode": "chunk",
+ "bos_token_id": 1,
+ "clamp_min": null,
+ "eos_token_id": 2,
+ "expand_k": 1,
+ "expand_v": 1,
+ "fuse_cross_entropy": true,
+ "fuse_norm": true,
+ "hidden_act": "swish",
+ "hidden_ratio": 4,
+ "hidden_size": 256,
+ "initializer_range": 0.02,
+ "intermediate_size": null,
+ "max_position_embeddings": 2048,
+ "model_type": "gsa",
+ "num_slots": 16,
+ "num_heads": 4,
+ "num_hidden_layers": 10,
+ "norm_eps": 1e-06,
+ "tie_word_embeddings": true,
+ "transformers_version": "4.38.2",
+ "use_cache": true,
+ "use_gk": true,
+ "use_gv": false,
+ "vocab_size": 32000
+}
\ No newline at end of file
diff --git a/configs/scan_16M.json b/configs/scan_16M.json
new file mode 100644
index 0000000000000000000000000000000000000000..acbf33ec6b88f754f95d6af7f30fd8f4145bbb30
--- /dev/null
+++ b/configs/scan_16M.json
@@ -0,0 +1,29 @@
+{
+ "attn_mode": "parallel",
+ "bos_token_id": 1,
+ "clamp_min": null,
+ "eos_token_id": 2,
+ "expand_k": 1,
+ "expand_v": 1,
+ "fuse_cross_entropy": true,
+ "fuse_norm": true,
+ "hidden_act": "swish",
+ "gate_act": "softmax",
+ "hidden_ratio": 4,
+ "hidden_size": 256,
+ "window_size": 128,
+ "state_size": 16,
+ "initializer_range": 0.02,
+ "intermediate_size": null,
+ "max_position_embeddings": 2048,
+ "model_type": "scan",
+ "num_heads": 4,
+ "num_hidden_layers": 10,
+ "norm_eps": 1e-06,
+ "tie_word_embeddings": true,
+ "transformers_version": "4.38.2",
+ "use_cache": true,
+ "use_gk": true,
+ "use_gv": false,
+ "vocab_size": 32000
+}
\ No newline at end of file
diff --git a/configs/scan_16M_8192.json b/configs/scan_16M_8192.json
new file mode 100644
index 0000000000000000000000000000000000000000..c084a9e83f632b73aeed436b3563af739eca97c6
--- /dev/null
+++ b/configs/scan_16M_8192.json
@@ -0,0 +1,29 @@
+{
+ "attn_mode": "parallel",
+ "bos_token_id": 1,
+ "clamp_min": null,
+ "eos_token_id": 2,
+ "expand_k": 1,
+ "expand_v": 1,
+ "fuse_cross_entropy": true,
+ "fuse_norm": true,
+ "hidden_act": "swish",
+ "gate_act": "softmax",
+ "hidden_ratio": 4,
+ "hidden_size": 256,
+ "window_size": 128,
+ "state_size": 16,
+ "initializer_range": 0.02,
+ "intermediate_size": null,
+ "max_position_embeddings": 8192,
+ "model_type": "scan",
+ "num_heads": 4,
+ "num_hidden_layers": 10,
+ "norm_eps": 1e-06,
+ "tie_word_embeddings": true,
+ "transformers_version": "4.38.2",
+ "use_cache": true,
+ "use_gk": true,
+ "use_gv": false,
+ "vocab_size": 32000
+}
\ No newline at end of file
diff --git a/configs/scan_20M.json b/configs/scan_20M.json
new file mode 100644
index 0000000000000000000000000000000000000000..075bbeff341d0d206f544356b88c25516cba1af2
--- /dev/null
+++ b/configs/scan_20M.json
@@ -0,0 +1,29 @@
+{
+ "attn_mode": "parallel",
+ "bos_token_id": 1,
+ "clamp_min": null,
+ "eos_token_id": 2,
+ "expand_k": 1,
+ "expand_v": 1,
+ "fuse_cross_entropy": true,
+ "fuse_norm": true,
+ "hidden_act": "swish",
+ "gate_act": "softmax",
+ "hidden_ratio": 4,
+ "hidden_size": 384,
+ "window_size": 128,
+ "state_size": 16,
+ "initializer_range": 0.02,
+ "intermediate_size": null,
+ "max_position_embeddings": 2048,
+ "model_type": "scan",
+ "num_heads": 6,
+ "num_hidden_layers": 10,
+ "norm_eps": 1e-06,
+ "tie_word_embeddings": true,
+ "transformers_version": "4.38.2",
+ "use_cache": true,
+ "use_gk": true,
+ "use_gv": false,
+ "vocab_size": 32000
+}
\ No newline at end of file
diff --git a/configs/scan_340M.json b/configs/scan_340M.json
new file mode 100644
index 0000000000000000000000000000000000000000..702a09d8c55d49ddd1b12bf41fc39c433624a74e
--- /dev/null
+++ b/configs/scan_340M.json
@@ -0,0 +1,29 @@
+{
+ "attn_mode": "parallel",
+ "bos_token_id": 1,
+ "clamp_min": null,
+ "eos_token_id": 2,
+ "expand_k": 1,
+ "expand_v": 1,
+ "fuse_cross_entropy": true,
+ "fuse_norm": true,
+ "hidden_act": "swish",
+ "gate_act": "softmax",
+ "hidden_ratio": 4,
+ "hidden_size": 1024,
+ "window_size": 128,
+ "state_size": 32,
+ "initializer_range": 0.02,
+ "intermediate_size": null,
+ "max_position_embeddings": 2048,
+ "model_type": "scan",
+ "num_heads": 4,
+ "num_hidden_layers": 24,
+ "norm_eps": 1e-06,
+ "tie_word_embeddings": true,
+ "transformers_version": "4.38.2",
+ "use_cache": true,
+ "use_gk": true,
+ "use_gv": false,
+ "vocab_size": 32000
+}
\ No newline at end of file
diff --git a/configs/transformer_16M.json b/configs/transformer_16M.json
new file mode 100644
index 0000000000000000000000000000000000000000..1a77d74d76b4d835cc2d530360390e631fe368a7
--- /dev/null
+++ b/configs/transformer_16M.json
@@ -0,0 +1,26 @@
+{
+ "model_type": "transformer",
+ "attention_bias": false,
+ "bos_token_id": 1,
+ "clamp_min": null,
+ "eos_token_id": 2,
+ "fuse_cross_entropy": true,
+ "fuse_norm": true,
+ "hidden_act": "swish",
+ "hidden_ratio": 4,
+ "hidden_size": 256,
+ "state_size": 16,
+ "initializer_range": 0.02,
+ "intermediate_size": null,
+ "max_position_embeddings": 2048,
+ "num_heads": 4,
+ "num_kv_heads": 4,
+ "num_hidden_layers": 10,
+ "norm_eps": 1e-06,
+ "tie_word_embeddings": true,
+ "transformers_version": "4.38.2",
+ "use_cache": true,
+ "use_gk": true,
+ "use_gv": false,
+ "vocab_size": 32000
+}
\ No newline at end of file
diff --git a/configs/transformer_16M_8192.json b/configs/transformer_16M_8192.json
new file mode 100644
index 0000000000000000000000000000000000000000..b7af0ae009161ebe9df0567bcb5cbb71dabdde68
--- /dev/null
+++ b/configs/transformer_16M_8192.json
@@ -0,0 +1,26 @@
+{
+ "model_type": "transformer",
+ "attention_bias": false,
+ "bos_token_id": 1,
+ "clamp_min": null,
+ "eos_token_id": 2,
+ "fuse_cross_entropy": true,
+ "fuse_norm": true,
+ "hidden_act": "swish",
+ "hidden_ratio": 4,
+ "hidden_size": 256,
+ "state_size": 16,
+ "initializer_range": 0.02,
+ "intermediate_size": null,
+ "max_position_embeddings": 8192,
+ "num_heads": 4,
+ "num_kv_heads": 4,
+ "num_hidden_layers": 10,
+ "norm_eps": 1e-06,
+ "tie_word_embeddings": true,
+ "transformers_version": "4.38.2",
+ "use_cache": true,
+ "use_gk": true,
+ "use_gv": false,
+ "vocab_size": 32000
+}
\ No newline at end of file
diff --git a/fla/__init__.py b/fla/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..265ee395c4ecd87345fa7f9a1849f94c570c86aa
--- /dev/null
+++ b/fla/__init__.py
@@ -0,0 +1,58 @@
+# -*- coding: utf-8 -*-
+
+from fla.layers import (ABCAttention, Attention, BasedLinearAttention,
+ BitAttention, DeltaNet, GatedLinearAttention,
+ GatedSlotAttention, HGRN2Attention, HGRNAttention,
+ LinearAttention, MultiScaleRetention,
+ ReBasedLinearAttention)
+from fla.models import (ABCForCausalLM, ABCModel, BitNetForCausalLM,
+                        BitNetModel, DeltaNetForCausalLM, DeltaNetModel,
+                        GLAForCausalLM, GLAModel, GSAForCausalLM, GSAModel,
+                        HGRN2ForCausalLM, HGRN2Model, HGRNForCausalLM,
+                        HGRNModel, LinearAttentionForCausalLM,
+                        LinearAttentionModel, RetNetForCausalLM, RetNetModel,
+                        RWKV6ForCausalLM, RWKV6Model, TransformerForCausalLM,
+                        TransformerModel)
+from fla.ops import (chunk_gla, chunk_retention, fused_chunk_based,
+                     fused_chunk_gla, fused_chunk_retention)
+
+__all__ = [
+ 'ABCAttention',
+ 'Attention',
+ 'BasedLinearAttention',
+ 'BitAttention',
+ 'DeltaNet',
+ 'HGRNAttention',
+ 'HGRN2Attention',
+ 'GatedLinearAttention',
+ 'GatedSlotAttention',
+ 'LinearAttention',
+ 'MultiScaleRetention',
+ 'ReBasedLinearAttention',
+ 'ABCForCausalLM',
+ 'ABCModel',
+ 'BitNetForCausalLM',
+ 'BitNetModel',
+ 'DeltaNetForCausalLM',
+ 'DeltaNetModel',
+ 'HGRNForCausalLM',
+ 'HGRNModel',
+ 'HGRN2ForCausalLM',
+ 'HGRN2Model',
+ 'GLAForCausalLM',
+ 'GLAModel',
+ 'GSAForCausalLM',
+ 'GSAModel',
+ 'LinearAttentionForCausalLM',
+ 'LinearAttentionModel',
+ 'RetNetForCausalLM',
+ 'RetNetModel',
+ 'RWKV6ForCausalLM',
+ 'RWKV6Model',
+ 'TransformerForCausalLM',
+ 'TransformerModel',
+ 'chunk_gla',
+ 'chunk_retention',
+ 'fused_chunk_based',
+ 'fused_chunk_gla',
+ 'fused_chunk_retention'
+]
+
+__version__ = '0.1'
diff --git a/fla/layers/__init__.py b/fla/layers/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..5f1879d844ed10d3515a4fe744fa017ce01d100d
--- /dev/null
+++ b/fla/layers/__init__.py
@@ -0,0 +1,31 @@
+# -*- coding: utf-8 -*-
+
+from .abc import ABCAttention
+from .attn import Attention
+from .based import BasedLinearAttention
+from .bitattn import BitAttention
+from .delta_net import DeltaNet
+from .gla import GatedLinearAttention
+from .gsa import GatedSlotAttention
+from .hgrn import HGRNAttention
+from .hgrn2 import HGRN2Attention
+from .linear_attn import LinearAttention
+from .multiscale_retention import MultiScaleRetention
+from .rebased import ReBasedLinearAttention
+from .rwkv6 import RWKV6Attention
+
+__all__ = [
+ 'ABCAttention',
+ 'Attention',
+ 'BasedLinearAttention',
+ 'BitAttention',
+ 'DeltaNet',
+ 'GatedLinearAttention',
+ 'GatedSlotAttention',
+ 'HGRNAttention',
+ 'HGRN2Attention',
+ 'LinearAttention',
+ 'MultiScaleRetention',
+ 'ReBasedLinearAttention',
+ 'RWKV6Attention',
+]
diff --git a/fla/layers/abc.py b/fla/layers/abc.py
new file mode 100644
index 0000000000000000000000000000000000000000..ad114ea1c7a75fd11f6af1bd655e82281971de44
--- /dev/null
+++ b/fla/layers/abc.py
@@ -0,0 +1,207 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from __future__ import annotations
+
+import warnings
+from typing import TYPE_CHECKING, Optional, Tuple
+
+import torch
+import torch.nn as nn
+from einops import rearrange
+
+from fla.modules import (FusedRMSNormSwishGate, RMSNorm, RotaryEmbedding,
+ ShortConvolution)
+from fla.modules.activations import swiglu, swish
+from fla.ops.abc.chunk import chunk_abc
+
+if TYPE_CHECKING:
+ from fla.models.utils import Cache
+
+
+class ABCAttention(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int = 1024,
+ expand_k: float = 0.5,
+ expand_v: float = 1.0,
+ num_heads: int = 4,
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+ conv_bias: bool = False,
+ num_slots: Optional[int] = None,
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-5,
+ gate_low_rank_dim: int = 16,
+ gate_logit_normalizer: int = 16,
+ use_input_gate: bool = False,
+ use_output_gate: bool = True,
+        use_norm: bool = True,
+        use_rope: bool = True,
+ clamp_min: Optional[float] = -32,
+ clamp_max: Optional[float] = 32,
+ layer_idx: Optional[int] = None,
+ **kwargs
+ ) -> ABCAttention:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.num_heads = num_heads
+ self.key_dim = int(self.hidden_size * self.expand_k)
+ self.value_dim = int(self.hidden_size * self.expand_v)
+ self.head_k_dim = self.key_dim // self.num_heads
+ self.head_v_dim = self.value_dim // self.num_heads
+
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.conv_bias = conv_bias
+
+ self.gate_low_rank_dim = gate_low_rank_dim
+ self.gate_logit_normalizer = gate_logit_normalizer
+
+ self.use_input_gate = use_input_gate
+ self.use_output_gate = use_output_gate
+        self.use_norm = use_norm
+        self.use_rope = use_rope
+
+ if num_slots is None:
+ num_slots = self.head_k_dim
+ self.num_slots = num_slots
+
+ self.norm_eps = norm_eps
+
+ self.clamp_min = clamp_min
+ self.clamp_max = clamp_max
+ self.layer_idx = layer_idx
+
+ if layer_idx is None:
+ warnings.warn(
+ f"Instantiating {self.__class__.__name__} without passing `layer_idx` is not recommended and will "
+ "to errors during the forward call, if caching is used. Please make sure to provide a `layer_idx` "
+ "when creating this class."
+ )
+
+ self.q_proj = nn.Linear(self.hidden_size, self.key_dim, bias=False)
+ self.k_proj = nn.Linear(self.hidden_size, self.key_dim, bias=False)
+ self.v_proj = nn.Linear(self.hidden_size, self.value_dim, bias=False)
+
+ if use_output_gate:
+ self.g_proj = nn.Linear(self.hidden_size, self.value_dim, bias=False)
+ self.s_proj = nn.Linear(self.hidden_size, self.num_heads * self.num_slots, bias=False)
+ self.o_proj = nn.Linear(self.value_dim, self.hidden_size, bias=False)
+
+ if use_short_conv:
+ self.conv_size = conv_size
+ self.q_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu')
+ self.k_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu')
+ self.v_conv1d = ShortConvolution(self.value_dim, conv_size, activation='silu')
+
+ if self.use_norm:
+ if self.use_output_gate:
+ self.g_norm = FusedRMSNormSwishGate(self.head_v_dim, elementwise_affine, norm_eps)
+ else:
+ self.g_norm = RMSNorm(hidden_size=self.head_v_dim, elementwise_affine=elementwise_affine, eps=norm_eps)
+
+ if self.use_rope:
+ self.rotary = RotaryEmbedding(self.head_k_dim)
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]:
+ if attention_mask is not None:
+ assert len(attention_mask.shape) == 2, (
+ "Expected attention_mask as a 0-1 matrix with shape [batch_size, seq_len] "
+ "for padding purposes (0 indicating padding). "
+ "Arbitrary attention masks of shape [batch_size, seq_len, seq_len] are not allowed."
+ )
+
+ last_state = None
+ if past_key_values is not None and len(past_key_values) > self.layer_idx:
+ last_state = past_key_values[self.layer_idx]
+
+ if self.use_short_conv:
+ conv_state_q, conv_state_k, conv_state_v = None, None, None
+ if last_state is not None:
+ conv_state_q, conv_state_k, conv_state_v = last_state['conv_state']
+ conv_mask = attention_mask[:, -hidden_states.shape[1]:] if attention_mask is not None else None
+ q, conv_state_q = self.q_conv1d(x=self.q_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_q,
+ output_final_state=use_cache)
+ k, conv_state_k = self.k_conv1d(x=self.k_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_k,
+ output_final_state=use_cache)
+ v, conv_state_v = self.v_conv1d(x=self.v_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_v,
+ output_final_state=use_cache)
+ else:
+ q = self.q_proj(hidden_states)
+ k = self.k_proj(hidden_states)
+ v = self.v_proj(hidden_states)
+
+ if self.use_input_gate:
+ q, k, v = map(lambda x: swish(x), (q, k, v))
+ # dealing with left-padding
+ if attention_mask is not None:
+ v = v.mul_(attention_mask[:, -v.shape[-2]:, None])
+
+ q, k, v = map(lambda x: rearrange(x, '... (h d) -> ... h d', h=self.num_heads), (q, k, v))
+ if self.use_rope:
+ seqlen_offset = 0
+ if past_key_values is not None:
+ seqlen_offset = past_key_values.get_seq_length(self.layer_idx)
+ q, k = self.rotary(q, k, seqlen_offset)
+
+ s = rearrange(self.s_proj(hidden_states), '... (h m) -> ... h m', h=self.num_heads)
+ s = s.clamp_(self.clamp_min, self.clamp_max)
+
+ recurrent_state = last_state['recurrent_state'] if last_state is not None else None
+ o, recurrent_state = chunk_abc(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ if past_key_values is not None:
+ past_key_values.update(
+ recurrent_state=recurrent_state,
+ conv_state=(conv_state_q, conv_state_k, conv_state_v) if self.use_short_conv else None,
+ layer_idx=self.layer_idx,
+ offset=q.shape[2]
+ )
+
+ if self.use_norm and not self.use_output_gate:
+ o = self.g_norm(o)
+ elif self.use_output_gate:
+ g = rearrange(self.g_proj(hidden_states), '... (h d) -> ... h d', h=self.num_heads)
+ o = self.g_norm(o, g) if self.use_norm else swiglu(g, o)
+ o = rearrange(o, '... h d -> ... (h d)')
+ o = self.o_proj(o)
+
+ return o, None, past_key_values
+
+ def state_size(self, seq_len: int = 2048):
+ return self.num_heads * self.key_dim * self.head_v_dim
diff --git a/fla/layers/attn.py b/fla/layers/attn.py
new file mode 100644
index 0000000000000000000000000000000000000000..b6659b73be25944ffed9905140e358a3f450be50
--- /dev/null
+++ b/fla/layers/attn.py
@@ -0,0 +1,182 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from __future__ import annotations
+
+import warnings
+from typing import TYPE_CHECKING, Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.utils.checkpoint
+from einops import rearrange
+from transformers.utils import logging
+
+from fla.modules import RMSNorm, RotaryEmbedding
+
+if TYPE_CHECKING:
+ from fla.models.utils import Cache
+
+try:
+ from flash_attn import flash_attn_func, flash_attn_varlen_func
+ from flash_attn.bert_padding import (index_first_axis, pad_input,
+ unpad_input)
+except ImportError:
+ warnings.warn(
+ "Flash Attention is not installed. Please install it via `pip install flash-attn --no-build-isolation`",
+ category=ImportWarning
+ )
+ flash_attn_func = None
+
+logger = logging.get_logger(__name__)
+
+
+class Attention(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int = 2048,
+ num_heads: int = 32,
+ num_kv_heads: Optional[int] = None,
+ window_size: Optional[int] = None,
+ rope_theta: Optional[float] = 10000.,
+ max_position_embeddings: Optional[int] = None,
+ norm_first: bool = False,
+ norm_eps: float = 1e-5,
+ layer_idx: int = None
+ ):
+ super().__init__()
+
+ self.num_heads = num_heads
+ if num_kv_heads is None:
+ self.num_kv_heads = self.num_heads
+ else:
+ self.num_kv_heads = num_kv_heads
+ self.num_kv_groups = num_heads // self.num_kv_heads
+ self.hidden_size = hidden_size
+ self.head_dim = self.hidden_size // self.num_heads
+        self.kv_dim = self.num_kv_heads * self.head_dim
+ self.window_size = window_size
+ self.rope_theta = rope_theta
+ self.max_position_embeddings = max_position_embeddings
+ self.norm_first = norm_first
+ self.layer_idx = layer_idx
+
+ if norm_first:
+ self.norm = RMSNorm(self.hidden_size, eps=norm_eps)
+ self.q_proj = nn.Linear(self.hidden_size, self.hidden_size, bias=False)
+ self.k_proj = nn.Linear(self.hidden_size, self.kv_dim, bias=False)
+ self.v_proj = nn.Linear(self.hidden_size, self.kv_dim, bias=False)
+ self.o_proj = nn.Linear(self.hidden_size, self.hidden_size, bias=False)
+
+ self.rotary = RotaryEmbedding(dim=self.head_dim, base=self.rope_theta)
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.LongTensor] = None,
+ past_key_values: Optional[Cache] = None,
+ output_attentions: bool = False,
+ use_cache: bool = False,
+ **kwargs,
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+ if attention_mask is not None:
+ assert len(attention_mask.shape) == 2, (
+ "Expected attention_mask as a 0-1 matrix with shape [batch_size, seq_len] "
+ "for padding purposes (0 indicating padding). "
+ "Arbitrary attention masks of shape [batch_size, seq_len, seq_len] are not allowed."
+ )
+
+ batch_size, q_len, _ = hidden_states.size()
+
+ if self.norm_first:
+ hidden_states = self.norm(hidden_states)
+
+ q = rearrange(self.q_proj(hidden_states), '... (h d) -> ... h d', h=self.num_heads)
+ k = rearrange(self.k_proj(hidden_states), '... (h d) -> ... h d', h=self.num_kv_heads)
+ v = rearrange(self.v_proj(hidden_states), '... (h d) -> ... h d', h=self.num_kv_heads)
+
+ seqlen_offset, max_seqlen = 0, q_len
+ if past_key_values is not None:
+ seqlen_offset = past_key_values.get_seq_length(self.layer_idx)
+ max_seqlen = q.shape[1] + seqlen_offset
+
+ if attention_mask is not None:
+            # subtract the number of left-padding tokens from the offsets
+ seqlen_offset = (seqlen_offset + attention_mask.sum(-1) - attention_mask.shape[-1]).clamp(min=0)
+ max_seqlen = q.shape[1] + max(seqlen_offset)
+
+ if self.max_position_embeddings is not None:
+ max_seqlen = max(max_seqlen, self.max_position_embeddings)
+ q, k = self.rotary(q, k, seqlen_offset, max_seqlen)
+
+ if past_key_values is not None:
+ k, v = past_key_values.update(
+ attn_state=(k.flatten(-2, -1), v.flatten(-2, -1)),
+ layer_idx=self.layer_idx,
+ offset=q_len,
+ cache_kwargs=dict(window_size=self.window_size)
+ )['attn_state']
+ k = rearrange(k, '... (h d) -> ... h d', h=self.num_kv_heads)
+ v = rearrange(v, '... (h d) -> ... h d', h=self.num_kv_heads)
+
+ if flash_attn_func is None:
+ raise ImportError("Please install Flash Attention via `pip install flash-attn --no-build-isolation` first")
+
+ # Contains at least one padding token in the sequence
+ if attention_mask is not None:
+ q, k, v, indices_q, cu_seq_lens, max_seq_lens = self._upad_input(q, k, v, attention_mask, q_len)
+ cu_seqlens_q, cu_seqlens_k = cu_seq_lens
+ max_seqlen_q, max_seqlen_k = max_seq_lens
+ o = flash_attn_varlen_func(
+ q, k, v,
+ cu_seqlens_q=cu_seqlens_q,
+ cu_seqlens_k=cu_seqlens_k,
+ max_seqlen_q=max_seqlen_q,
+ max_seqlen_k=max_seqlen_k,
+ causal=True,
+ window_size=(-1, -1) if self.window_size is None else (self.window_size-1, 0)
+ )
+ o = pad_input(o, indices_q, batch_size, q_len)
+ else:
+ o = flash_attn_func(
+ q, k, v,
+ causal=True,
+ window_size=(-1, -1) if self.window_size is None else (self.window_size-1, 0)
+ )
+ o = o.reshape(batch_size, q_len, self.hidden_size)
+ o = self.o_proj(o)
+
+ if not output_attentions:
+ attentions = None
+
+ return o, attentions, past_key_values
+
+ def _upad_input(self, q, k, v, attention_mask, q_len):
+ seqlens = attention_mask.sum(-1, dtype=torch.int32)
+ indices_k = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
+ max_seqlen_k = seqlens.max().item()
+ cu_seqlens_k = F.pad(torch.cumsum(seqlens, dim=0, dtype=torch.int32), (1, 0))
+ batch_size, seq_len, num_key_value_heads, head_dim = k.shape
+
+ k = index_first_axis(k.reshape(batch_size * seq_len, num_key_value_heads, head_dim), indices_k)
+ v = index_first_axis(v.reshape(batch_size * seq_len, num_key_value_heads, head_dim), indices_k)
+ if q_len == seq_len:
+ q = index_first_axis(q.reshape(batch_size * seq_len, self.num_heads, head_dim), indices_k)
+ cu_seqlens_q = cu_seqlens_k
+ max_seqlen_q = max_seqlen_k
+ indices_q = indices_k
+ elif q_len == 1:
+ max_seqlen_q = 1
+ # There is a memcpy here, that is very bad.
+ cu_seqlens_q = torch.arange(batch_size + 1, dtype=torch.int32, device=q.device)
+ indices_q = cu_seqlens_q[:-1]
+ q = q.squeeze(1)
+ else:
+ # The -q_len: slice assumes left padding.
+ attention_mask = attention_mask[:, -q_len:]
+ q, indices_q, cu_seqlens_q, max_seqlen_q = unpad_input(q, attention_mask)
+
+ return q, k, v, indices_q, (cu_seqlens_q, cu_seqlens_k), (max_seqlen_q, max_seqlen_k)
diff --git a/fla/layers/based.py b/fla/layers/based.py
new file mode 100644
index 0000000000000000000000000000000000000000..77cc09570e3cdd56b17f773141ee964afd7be04e
--- /dev/null
+++ b/fla/layers/based.py
@@ -0,0 +1,105 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+"""
+Linear attention in Based.
+https://github.com/HazyResearch/zoology/blob/main/zoology/mixers/based.py
+"""
+
+import torch
+import torch.nn as nn
+from einops import rearrange
+
+from fla.modules.feature_map import TaylorFeatureMap
+from fla.ops.based import parallel_based
+from fla.ops.linear_attn import chunk_linear_attn, fused_chunk_linear_attn
+
+
+class BasedLinearAttention(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ feature_dim: int = 16,
+ num_key_value_heads: int = 12,
+ num_heads: int = 12,
+ feature_name: str = "taylor_exp",
+ eps: float = 1e-12,
+ causal: bool = True,
+ mode: str = "parallel",
+ ):
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ self.mode = mode
+ self.feature_name = feature_name
+ self.feature_dim = feature_dim
+ self.num_key_value_heads = num_key_value_heads
+ self.num_heads = num_heads
+ self.head_dim = self.hidden_size // self.num_key_value_heads
+ self.causal = causal
+
+ self.q_proj = nn.Linear(self.hidden_size, self.feature_dim * self.num_heads, bias=False)
+ self.k_proj = nn.Linear(self.hidden_size, self.feature_dim * self.num_heads, bias=False)
+ self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=False)
+ self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
+ self.dropout = nn.Identity()
+ self.feature_map = TaylorFeatureMap(feature_dim)
+ self.eps = eps
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(self, hidden_states: torch.Tensor, **kwargs):
+ mode = self.mode
+ q, k, v = self.q_proj(hidden_states), self.k_proj(hidden_states), self.v_proj(hidden_states)
+ q, k, v = map(lambda x: rearrange(x, "... (h d) -> ... h d", h=self.num_heads), [q, k, v])
+ if mode == "fused_chunk":
+ q, k = self.feature_map(q), self.feature_map(k)
+ o = fused_chunk_linear_attn(q, k, v, normalize=True, scale=1, head_first=False)
+ elif mode == 'chunk':
+ q, k = self.feature_map(q), self.feature_map(k)
+ o = chunk_linear_attn(q, k, v, normalize=True, scale=1, head_first=False)
+ elif mode == 'parallel':
+ assert q.shape[-1] <= 128
+ o = parallel_based(q, k, v, True, True, head_first=False)
+ o = self.o_proj(o)
+ o = self.dropout(o)
+ return o
+
+ # https://github.com/HazyResearch/zoology/blob/main/zoology/mixers/based.py#L119
+
+ def forward_reference(self, hidden_states: torch.Tensor, filters: torch.Tensor = None, *args, **kwargs):
+ """
+ x (torch.Tensor): tensor of shape (b, d, t)
+ y (torch.Tensor): tensor of shape (b, d, t)
+ """
+ # hidden_states = hidden_states.transpose(1, 2)
+ b, t, _ = hidden_states.size()
+ q, k, v = self.q_proj(hidden_states), self.k_proj(hidden_states), self.v_proj(hidden_states)
+
+ q = q.view(b, t, self.num_heads, self.feature_dim).transpose(1, 2)
+ k = k.view(b, t, self.num_key_value_heads, self.feature_dim).transpose(1, 2)
+ v = v.view(b, t, self.num_key_value_heads, self.head_dim).transpose(1, 2)
+
+ # Linear attention
+ q, k = self.feature_map(q), self.feature_map(k)
+ q, k, v = q.unsqueeze(-2), k.unsqueeze(-2), v.unsqueeze(-1)
+
+ # Compute attention
+ if self.causal:
+ y = ((q * (k * v).cumsum(2)).sum(-1) / ((q * k.cumsum(2)).sum(-1) + self.eps))
+ else:
+ y = ((q * (k * v).sum(2, True)).sum(-1) / ((q * k.sum(2, True)).sum(-1) + self.eps))
+ y = rearrange(y, 'b h t d -> b t (h d)')
+ y = self.o_proj(y.to(hidden_states.dtype))
+ y = self.dropout(y)
+ return y.to(hidden_states.dtype)
diff --git a/fla/layers/bitattn.py b/fla/layers/bitattn.py
new file mode 100644
index 0000000000000000000000000000000000000000..0faea1a2e91fabb2da787bd6f07315712154ec07
--- /dev/null
+++ b/fla/layers/bitattn.py
@@ -0,0 +1,183 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from __future__ import annotations
+
+import warnings
+from typing import TYPE_CHECKING, Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.utils.checkpoint
+from einops import rearrange
+from transformers.utils import logging
+
+from fla.modules import RMSNorm, RotaryEmbedding
+from fla.modules.fused_bitlinear import FusedBitLinear
+
+if TYPE_CHECKING:
+ from fla.models.utils import Cache
+
+try:
+ from flash_attn import flash_attn_func, flash_attn_varlen_func
+ from flash_attn.bert_padding import (index_first_axis, pad_input,
+ unpad_input)
+except ImportError:
+ warnings.warn(
+ "Flash Attention is not installed. Please install it via `pip install flash-attn --no-build-isolation`",
+ category=ImportWarning
+ )
+ flash_attn_func = None
+
+logger = logging.get_logger(__name__)
+
+
+class BitAttention(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int = 2048,
+ num_heads: int = 32,
+ num_kv_heads: Optional[int] = None,
+ window_size: Optional[int] = None,
+ rope_theta: Optional[float] = 10000.,
+ max_position_embeddings: Optional[int] = None,
+ norm_first: bool = False,
+ norm_eps: float = 1e-5,
+ layer_idx: int = None
+ ):
+ super().__init__()
+
+ self.num_heads = num_heads
+ if num_kv_heads is None:
+ self.num_kv_heads = self.num_heads
+ else:
+ self.num_kv_heads = num_kv_heads
+ self.num_kv_groups = num_heads // self.num_kv_heads
+ self.hidden_size = hidden_size
+ self.head_dim = self.hidden_size // self.num_heads
+        self.kv_dim = self.num_kv_heads * self.head_dim
+ self.window_size = window_size
+ self.rope_theta = rope_theta
+ self.max_position_embeddings = max_position_embeddings
+ self.norm_first = norm_first
+ self.layer_idx = layer_idx
+
+ if norm_first:
+ self.norm = RMSNorm(self.hidden_size, eps=norm_eps)
+ self.q_proj = FusedBitLinear(self.hidden_size, self.hidden_size, bias=False)
+ self.k_proj = FusedBitLinear(self.hidden_size, self.kv_dim, bias=False)
+ self.v_proj = FusedBitLinear(self.hidden_size, self.kv_dim, bias=False)
+ self.o_proj = FusedBitLinear(self.hidden_size, self.hidden_size, bias=False)
+
+ self.rotary = RotaryEmbedding(dim=self.head_dim, base=self.rope_theta)
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.LongTensor] = None,
+ past_key_values: Optional[Cache] = None,
+ output_attentions: bool = False,
+ use_cache: bool = False,
+ **kwargs,
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+ if attention_mask is not None:
+ assert len(attention_mask.shape) == 2, (
+ "Expected attention_mask as a 0-1 matrix with shape [batch_size, seq_len] "
+ "for padding purposes (0 indicating padding). "
+ "Arbitrary attention masks of shape [batch_size, seq_len, seq_len] are not allowed."
+ )
+
+ batch_size, q_len, _ = hidden_states.size()
+
+ if self.norm_first:
+ hidden_states = self.norm(hidden_states)
+
+ q = rearrange(self.q_proj(hidden_states), '... (h d) -> ... h d', h=self.num_heads)
+ k = rearrange(self.k_proj(hidden_states), '... (h d) -> ... h d', h=self.num_kv_heads)
+ v = rearrange(self.v_proj(hidden_states), '... (h d) -> ... h d', h=self.num_kv_heads)
+
+ seqlen_offset, max_seqlen = 0, q_len
+ if past_key_values is not None:
+ seqlen_offset = past_key_values.get_seq_length(self.layer_idx)
+ max_seqlen = q.shape[1] + seqlen_offset
+
+ if attention_mask is not None:
+            # subtract the number of left-padding tokens from the offsets
+ seqlen_offset = (seqlen_offset + attention_mask.sum(-1) - attention_mask.shape[-1]).clamp(min=0)
+ max_seqlen = q.shape[1] + max(seqlen_offset)
+
+ if self.max_position_embeddings is not None:
+ max_seqlen = max(max_seqlen, self.max_position_embeddings)
+ q, k = self.rotary(q, k, seqlen_offset, max_seqlen)
+
+ if past_key_values is not None:
+ k, v = past_key_values.update(
+ attn_state=(k.flatten(-2, -1), v.flatten(-2, -1)),
+ layer_idx=self.layer_idx,
+ offset=q_len,
+ cache_kwargs=dict(window_size=self.window_size)
+ )['attn_state']
+ k = rearrange(k, '... (h d) -> ... h d', h=self.num_kv_heads)
+ v = rearrange(v, '... (h d) -> ... h d', h=self.num_kv_heads)
+
+ if flash_attn_func is None:
+ raise ImportError("Please install Flash Attention via `pip install flash-attn --no-build-isolation` first")
+
+ # Contains at least one padding token in the sequence
+ if attention_mask is not None:
+ q, k, v, indices_q, cu_seq_lens, max_seq_lens = self._upad_input(q, k, v, attention_mask, q_len)
+ cu_seqlens_q, cu_seqlens_k = cu_seq_lens
+ max_seqlen_q, max_seqlen_k = max_seq_lens
+ o = flash_attn_varlen_func(
+ q, k, v,
+ cu_seqlens_q=cu_seqlens_q,
+ cu_seqlens_k=cu_seqlens_k,
+ max_seqlen_q=max_seqlen_q,
+ max_seqlen_k=max_seqlen_k,
+ causal=True,
+ window_size=(-1, -1) if self.window_size is None else (self.window_size-1, 0)
+ )
+ o = pad_input(o, indices_q, batch_size, q_len)
+ else:
+ o = flash_attn_func(
+ q, k, v,
+ causal=True,
+ window_size=(-1, -1) if self.window_size is None else (self.window_size-1, 0)
+ )
+ o = o.reshape(batch_size, q_len, self.hidden_size)
+ o = self.o_proj(o)
+
+ if not output_attentions:
+ attentions = None
+
+ return o, attentions, past_key_values
+
+ def _upad_input(self, q, k, v, attention_mask, q_len):
+ seqlens = attention_mask.sum(-1, dtype=torch.int32)
+ indices_k = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
+ max_seqlen_k = seqlens.max().item()
+ cu_seqlens_k = F.pad(torch.cumsum(seqlens, dim=0, dtype=torch.int32), (1, 0))
+ batch_size, seq_len, num_key_value_heads, head_dim = k.shape
+
+ k = index_first_axis(k.reshape(batch_size * seq_len, num_key_value_heads, head_dim), indices_k)
+ v = index_first_axis(v.reshape(batch_size * seq_len, num_key_value_heads, head_dim), indices_k)
+ if q_len == seq_len:
+ q = index_first_axis(q.reshape(batch_size * seq_len, self.num_heads, head_dim), indices_k)
+ cu_seqlens_q = cu_seqlens_k
+ max_seqlen_q = max_seqlen_k
+ indices_q = indices_k
+ elif q_len == 1:
+ max_seqlen_q = 1
+ # There is a memcpy here, that is very bad.
+ cu_seqlens_q = torch.arange(batch_size + 1, dtype=torch.int32, device=q.device)
+ indices_q = cu_seqlens_q[:-1]
+ q = q.squeeze(1)
+ else:
+ # The -q_len: slice assumes left padding.
+ attention_mask = attention_mask[:, -q_len:]
+ q, indices_q, cu_seqlens_q, max_seqlen_q = unpad_input(q, attention_mask)
+
+ return q, k, v, indices_q, (cu_seqlens_q, cu_seqlens_k), (max_seqlen_q, max_seqlen_k)
diff --git a/fla/layers/delta_net.py b/fla/layers/delta_net.py
new file mode 100644
index 0000000000000000000000000000000000000000..cd9737a83dfd8fa9f58d8b53dcd38609993d239f
--- /dev/null
+++ b/fla/layers/delta_net.py
@@ -0,0 +1,267 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+# Sec. 4.2 of Linear Transformers Are Secretly Fast Weight Programmers (https://arxiv.org/abs/2102.11174)
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Optional, Tuple
+
+import torch
+import torch.nn as nn
+from einops import rearrange
+from torch.nn import functional as F
+
+from fla.modules import FusedRMSNormSwishGate, RMSNorm, ShortConvolution
+from fla.modules.l2norm import l2_norm
+from fla.ops.delta_rule import (chunk_delta_rule, fused_chunk_delta_rule,
+ fused_recurrent_delta_rule)
+
+if TYPE_CHECKING:
+ from fla.models.utils import Cache
+
+
+def elu_p1(x):
+ return (F.elu(x, 1., False) + 1.).to(x)
+
+
+def sum_norm(x):
+ return (x / x.sum(-1, keepdim=True)).to(x)
+
+# https://github.com/IDSIA/recurrent-fwp/blob/master/algorithmic/layers.py#L86C1-L146C1
+
+
+class DeltaNet(nn.Module):
+ def __init__(
+ self,
+ d_model: int = None,
+ hidden_size: int = 1024,
+ expand_k: float = 1.0,
+ expand_v: float = 1.0,
+ num_heads: int = 4,
+ mode: str = 'chunk',
+ use_beta: bool = True,
+ use_gate: bool = False,
+ use_output_norm: bool = True,
+ use_elu: bool = False,
+ use_short_conv: bool = True,
+ conv_size: int = 4,
+ conv_bias: bool = False,
+ layer_idx: int = None,
+ qk_activation: str = 'silu',
+ qk_norm: str = 'l2',
+ norm_first: bool = False,
+ norm_eps: float = 1e-5,
+ **kwargs
+ ) -> DeltaNet:
+ super().__init__()
+
+ self.mode = mode
+ self.qk_activation = qk_activation
+ self.qk_norm = qk_norm
+
+ assert self.qk_activation in ['silu', 'relu', 'elu', 'identity']
+ assert self.qk_norm in ['l2', 'sum']
+
+ if d_model is not None:
+ hidden_size = d_model
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.num_heads = num_heads
+ self.use_gate = use_gate
+ self.use_output_norm = use_output_norm
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.conv_bias = conv_bias
+
+ self.key_dim = int(hidden_size * expand_k)
+ self.value_dim = int(hidden_size * expand_v)
+ self.head_qk_dim = self.key_dim // num_heads
+ self.head_v_dim = self.value_dim // num_heads
+ self.norm_first = norm_first
+ self.layer_idx = layer_idx
+
+ self.silu = nn.SiLU()
+
+        assert mode in ['chunk', 'fused_chunk', 'fused_recurrent'], f"Not supported mode `{mode}`."
+ assert self.key_dim % num_heads == 0, f"key dim must be divisible by num_heads of {num_heads}"
+ assert self.value_dim % num_heads == 0, f"value dim must be divisible by num_heads of {num_heads}"
+
+ if norm_first:
+ self.norm = RMSNorm(self.hidden_size, eps=norm_eps)
+
+ self.q_proj = nn.Linear(hidden_size, self.key_dim, bias=False)
+ self.k_proj = nn.Linear(hidden_size, self.key_dim, bias=False)
+ self.v_proj = nn.Linear(hidden_size, self.value_dim, bias=False)
+
+ self.use_beta = use_beta
+ self.use_elu = use_elu
+ if self.use_beta:
+ self.b_proj = nn.Linear(hidden_size, self.num_heads, bias=False)
+ if use_short_conv:
+ self.conv_size = conv_size
+ self.q_conv1d = ShortConvolution(
+ hidden_size=self.key_dim,
+ kernel_size=conv_size,
+ activation='silu' if qk_activation == 'silu' else None
+ )
+ self.k_conv1d = ShortConvolution(
+ hidden_size=self.key_dim,
+ kernel_size=conv_size,
+ activation='silu' if qk_activation == 'silu' else None
+ )
+ self.v_conv1d = ShortConvolution(
+ hidden_size=self.value_dim,
+ kernel_size=conv_size,
+ activation='silu'
+ )
+ else:
+ raise UserWarning(
+ "ShortConvolution is crucial to the performance. "
+ "Do not turn it off, i.e., setting `use_short_conv=False` unless you know what you are doing."
+ )
+ if use_gate:
+ self.g_proj = nn.Linear(hidden_size, self.value_dim, bias=False)
+ self.o_norm = FusedRMSNormSwishGate(self.head_v_dim, eps=norm_eps)
+ else:
+ self.o_norm = RMSNorm(self.head_v_dim, eps=norm_eps)
+
+ self.o_proj = nn.Linear(self.value_dim, hidden_size, bias=False)
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]:
+ if attention_mask is not None:
+ assert len(attention_mask.shape) == 2, (
+ "Expected attention_mask as a 0-1 matrix with shape [batch_size, seq_len] "
+ "for padding purposes (0 indicating padding). "
+ "Arbitrary attention masks of shape [batch_size, seq_len, seq_len] are not allowed."
+ )
+
+        # fall back to the fused recurrent kernel for short sequences (e.g. during decoding)
+ mode = 'fused_recurrent' if hidden_states.shape[1] < 64 else self.mode
+
+ if self.norm_first:
+ hidden_states = self.norm(hidden_states)
+
+ last_state = None
+ if past_key_values is not None and len(past_key_values) > self.layer_idx:
+ last_state = past_key_values[self.layer_idx]
+
+ if self.use_short_conv:
+ conv_state_q, conv_state_k, conv_state_v = None, None, None
+ if last_state is not None:
+ conv_state_q, conv_state_k, conv_state_v = last_state['conv_state']
+ conv_mask = attention_mask[:, -hidden_states.shape[1]:] if attention_mask is not None else None
+ q, conv_state_q = self.q_conv1d(x=self.q_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_q,
+ output_final_state=use_cache)
+ k, conv_state_k = self.k_conv1d(x=self.k_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_k,
+ output_final_state=use_cache)
+ v, conv_state_v = self.v_conv1d(x=self.v_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_v,
+ output_final_state=use_cache)
+ else:
+ q = self.q_proj(hidden_states)
+ k = self.k_proj(hidden_states)
+ v = self.silu(self.v_proj(hidden_states))
+
+ q, k, v = map(lambda x: rearrange(x, 'b t (h d) -> b t h d', h=self.num_heads), (q, k, v))
+ if self.qk_activation != 'silu':
+ if self.qk_activation == 'relu':
+ q, k = q.relu(), k.relu()
+ elif self.qk_activation == 'elu':
+ q, k = elu_p1(q), elu_p1(k)
+ elif self.qk_activation == 'identity':
+ pass
+ else:
+ raise NotImplementedError
+
+ if self.qk_norm is not None:
+ if self.qk_norm == 'l2':
+ q = l2_norm(q)
+ k = l2_norm(k)
+ elif self.qk_norm == 'sum':
+ q = sum_norm(q).to(q)
+ k = sum_norm(k).to(k)
+
+ if self.use_beta:
+ beta = self.b_proj(hidden_states).sigmoid()
+ else:
+ beta = q.new_ones(q.shape[0], q.shape[1], q.shape[2])
+
+ # dealing with padding
+ if attention_mask is not None:
+ beta = beta.mul(attention_mask[:, -beta.shape[-2]:, None])
+ recurrent_state = last_state['recurrent_state'] if last_state is not None else None
+ if mode == 'fused_recurrent':
+ o, recurrent_state = fused_recurrent_delta_rule(
+ q=q,
+ k=k,
+ v=v,
+ beta=beta,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ elif mode == 'fused_chunk':
+ o, recurrent_state = fused_chunk_delta_rule(
+ q=q,
+ k=k,
+ v=v,
+ beta=beta,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ elif mode == 'chunk':
+ o, recurrent_state = chunk_delta_rule(
+ q=q,
+ k=k,
+ v=v,
+ beta=beta,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ else:
+ raise NotImplementedError(f"Not supported mode `{mode}`.")
+
+ if past_key_values is not None:
+ past_key_values.update(
+ recurrent_state=recurrent_state,
+ conv_state=(conv_state_q, conv_state_k, conv_state_v) if self.use_short_conv else None,
+ layer_idx=self.layer_idx,
+ offset=q.shape[2]
+ )
+
+ if self.use_gate:
+ g = rearrange(self.g_proj(hidden_states), 'b t (h d) -> b t h d', h=self.num_heads)
+ o = self.o_norm(o, g)
+ else:
+ o = self.o_norm(o)
+ o = rearrange(o, 'b t h d -> b t (h d)')
+ o = self.o_proj(o)
+
+ return o, None, past_key_values
diff --git a/fla/layers/gla.py b/fla/layers/gla.py
new file mode 100644
index 0000000000000000000000000000000000000000..8cc22d5069819309a4ef4aa2b361013fbd4ee47a
--- /dev/null
+++ b/fla/layers/gla.py
@@ -0,0 +1,280 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from einops import rearrange, repeat
+
+from fla.modules import FusedRMSNormSwishGate, RMSNorm, ShortConvolution
+from fla.modules.activations import ACT2FN
+from fla.ops.gla import chunk_gla, fused_chunk_gla, fused_recurrent_gla
+
+if TYPE_CHECKING:
+ from fla.models.utils import Cache
+
+
+class GatedLinearAttention(nn.Module):
+ r"""
+    The layer implementation for [Gated Linear Attention Transformers with Hardware-Efficient Training](https://arxiv.org/abs/2312.06635). # noqa
+
+ Args:
+ mode (str, Optional):
+ Which GLA kernel to use.
+ Currently available: `chunk`, `fused_recurrent`, and `fused_chunk`.
+ Default: `chunk`.
+ hidden_size (int, Optional):
+ The hidden size of the input. Default: 1024.
+ expand_k (float, Optional):
+ The expansion ratio for the key dim. Default: 0.5.
+ expand_v (float, Optional):
+ The expansion ratio for the value dim. Default: 1.0.
+ num_heads (int, Optional):
+ The number of heads. Default: 4.
+ num_kv_heads (int, Optional):
+ The number of key/value heads, used for MQA. Default: None.
+ feature_map (str, Optional):
+ Feature map function applied to queries/keys. Default: None.
+ use_short_conv (bool, Optional):
+ Whether to use short convolutions. Default: `False`.
+ conv_size (int, Optional):
+ The kernel size of the short convolution, only used when `use_short_conv` is `True`. Default: 4.
+ conv_bias (bool, Optional):
+ Whether to use bias in the short convolution, only used when `use_short_conv` is `True`. Default: `False`.
+ use_output_gate (bool, Optional):
+ Whether to use output gate. Default: `True`.
+ gate_fn (str, Optional):
+ The activation function for the output gate. Default: `swish`.
+ elementwise_affine (bool, Optional):
+ If `True`, applies elementwise affine to LayerNorm with learnable parameters. Default: `True`.
+ norm_eps (float, Optional):
+ The epsilon value for the layernorm/rmsnorm layer. Default: 1e-5.
+ gate_logit_normalizer (int, Optional):
+            The normalizer for the gate logits, applied after `logsigmoid`. Default: 16.
+ gate_low_rank_dim (int, Optional):
+ The low rank dim for the gate projection. Default: 16.
+ clamp_min (float, Optional):
+ The minimum value for the gate logits. Default: None.
+ fuse_norm (bool, Optional):
+ Whether to fuse the norm and the output gate for better memory footprint. Default: `True`.
+ layer_idx (int, Optional):
+ The index of the layer. Default: None.
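+
+    Example:
+        A minimal usage sketch (illustrative; the fused Triton kernels require a CUDA device):
+
+        >>> import torch
+        >>> from fla.layers import GatedLinearAttention
+        >>> layer = GatedLinearAttention(hidden_size=1024, num_heads=4, layer_idx=0).bfloat16().cuda()
+        >>> x = torch.randn(2, 128, 1024, dtype=torch.bfloat16, device='cuda')
+        >>> o, _, _ = layer(x)
+        >>> o.shape
+        torch.Size([2, 128, 1024])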
+ """
+
+ def __init__(
+ self,
+ mode: str = 'chunk',
+ hidden_size: int = 1024,
+ expand_k: float = 0.5,
+ expand_v: float = 1.0,
+ num_heads: int = 4,
+ num_kv_heads: Optional[int] = None,
+ feature_map: Optional[str] = None,
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+ conv_bias: bool = False,
+ use_output_gate: bool = True,
+ gate_fn: str = 'swish',
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-5,
+ gate_logit_normalizer: int = 16,
+ gate_low_rank_dim: int = 16,
+ clamp_min: Optional[float] = None,
+ fuse_norm: bool = True,
+ layer_idx: int = None,
+ ) -> GatedLinearAttention:
+ super().__init__()
+
+ self.mode = mode
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads if num_kv_heads is not None else num_heads
+ self.num_kv_groups = self.num_heads // self.num_kv_heads
+ self.feature_map_fn = ACT2FN[feature_map] if feature_map is not None else None
+
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.conv_bias = conv_bias
+ self.use_output_gate = use_output_gate
+
+ self.key_dim = int(hidden_size * expand_k)
+ self.value_dim = int(hidden_size * expand_v)
+ self.key_dim_per_group = self.key_dim // self.num_kv_groups
+ self.value_dim_per_group = self.value_dim // self.num_kv_groups
+ self.clamp_min = clamp_min
+ self.layer_idx = layer_idx
+
+        assert mode in ['chunk', 'fused_recurrent', 'fused_chunk'], f"Not supported mode `{mode}`."
+ assert self.key_dim % num_heads == 0, f"key dim must be divisible by num_heads of {num_heads}"
+ assert self.value_dim % num_heads == 0, f"value dim must be divisible by num_heads of {num_heads}"
+
+ self.head_qk_dim = self.key_dim // num_heads
+ self.head_v_dim = self.value_dim // num_heads
+
+ self.q_proj = nn.Linear(hidden_size, self.key_dim, bias=False)
+ self.k_proj = nn.Linear(hidden_size, self.key_dim_per_group, bias=False)
+ self.v_proj = nn.Linear(hidden_size, self.value_dim_per_group, bias=False)
+ if self.use_output_gate:
+ self.g_proj = nn.Linear(hidden_size, self.value_dim, bias=False)
+
+ if use_short_conv:
+ self.conv_size = conv_size
+ self.q_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu')
+ self.k_conv1d = ShortConvolution(self.key_dim_per_group, conv_size, activation='silu')
+ self.v_conv1d = ShortConvolution(self.value_dim_per_group, conv_size, activation='silu')
+
+ self.gk_proj = nn.Sequential(nn.Linear(hidden_size, gate_low_rank_dim, bias=False),
+ nn.Linear(gate_low_rank_dim, self.key_dim_per_group, bias=True))
+ self.o_proj = nn.Linear(self.value_dim, hidden_size, bias=False)
+
+ if gate_fn == 'swish' and fuse_norm and use_output_gate:
+ self.g_norm_swish_gate = FusedRMSNormSwishGate(self.head_v_dim, elementwise_affine, norm_eps)
+ self.fuse_norm_and_gate = True
+ else:
+ self.fuse_norm_and_gate = False
+ self.g_norm = RMSNorm(hidden_size=self.head_v_dim, elementwise_affine=elementwise_affine, eps=norm_eps)
+ self.gate_fn = ACT2FN[gate_fn]
+
+ self.gate_logit_normalizer = gate_logit_normalizer
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]:
+ if attention_mask is not None:
+ assert len(attention_mask.shape) == 2, (
+ "Expected attention_mask as a 0-1 matrix with shape [batch_size, seq_len] "
+ "for padding purposes (0 indicating padding). "
+ "Arbitrary attention masks of shape [batch_size, seq_len, seq_len] are not allowed."
+ )
+
+ # launching the triton kernel for just one token will actually be slower
+ mode = 'fused_recurrent' if hidden_states.shape[1] == 1 else self.mode
+
+ last_state = None
+ if past_key_values is not None and len(past_key_values) > self.layer_idx:
+ last_state = past_key_values[self.layer_idx]
+ if self.use_short_conv:
+ conv_state_q, conv_state_k, conv_state_v = None, None, None
+ if last_state is not None:
+ conv_state_q, conv_state_k, conv_state_v = last_state['conv_state']
+ conv_mask = attention_mask[:, -hidden_states.shape[1]:] if attention_mask is not None else None
+ q, conv_state_q = self.q_conv1d(x=self.q_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_q,
+ output_final_state=use_cache)
+ k, conv_state_k = self.k_conv1d(x=self.k_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_k,
+ output_final_state=use_cache)
+ v, conv_state_v = self.v_conv1d(x=self.v_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_v,
+ output_final_state=use_cache)
+ else:
+ q = self.q_proj(hidden_states)
+ k = self.k_proj(hidden_states)
+ v = self.v_proj(hidden_states)
+ gk = self.gk_proj(hidden_states)
+
+ if self.feature_map_fn is not None:
+ q, k = map(self.feature_map_fn, (q, k))
+ # dealing with left-padding
+ if attention_mask is not None:
+ v = v.mul_(attention_mask[:, -v.shape[-2]:, None])
+ q = rearrange(q, 'b t (h d) -> b t h d', h=self.num_heads)
+ if self.num_kv_groups > 1:
+ k, v, gk = (repeat(x, 'b t (h d) -> b t (h g) d', h=self.num_kv_heads, g=self.num_kv_groups) for x in (k, v, gk))
+ else:
+ k, v, gk = (rearrange(x, 'b t (h d) -> b t h d', h=self.num_kv_heads) for x in (k, v, gk))
+ gk = F.logsigmoid(gk) / self.gate_logit_normalizer
+
+ if self.clamp_min is not None:
+ gk = torch.clamp_min(gk, self.clamp_min)
+
+ recurrent_state = last_state['recurrent_state'] if last_state is not None else None
+ if mode == 'fused_recurrent':
+ o, recurrent_state = fused_recurrent_gla(
+ q=q,
+ k=k,
+ v=v,
+ gk=gk,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ elif mode == 'fused_chunk':
+ o, recurrent_state = fused_chunk_gla(
+ q=q,
+ k=k,
+ v=v,
+ g=gk,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ elif mode == 'chunk':
+ o, recurrent_state = chunk_gla(
+ q=q,
+ k=k,
+ v=v,
+ g=gk,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ else:
+ raise NotImplementedError(f"Not supported mode `{mode}`.")
+
+ if past_key_values is not None:
+ past_key_values.update(
+ recurrent_state=recurrent_state,
+ conv_state=(conv_state_q, conv_state_k, conv_state_v) if self.use_short_conv else None,
+ layer_idx=self.layer_idx,
+ offset=q.shape[1]
+ )
+
+ if self.use_output_gate:
+ g = self.g_proj(hidden_states)
+ if self.fuse_norm_and_gate:
+ g = rearrange(g, 'b t (h d) -> b t h d', h=self.num_heads)
+ o = self.g_norm_swish_gate(o, g)
+ o = rearrange(o, 'b t h d -> b t (h d)')
+ else:
+ o = rearrange(self.g_norm(o), 'b t h d -> b t (h d)')
+ o = o * self.gate_fn(g)
+ else:
+ o = rearrange(self.g_norm(o), 'b t h d -> b t (h d)')
+ o = self.o_proj(o)
+
+ return o, None, past_key_values
+
+ def state_size(self, **kwargs) -> int:
+ state_size = self.key_dim * self.head_v_dim
+ for module in self.children():
+ if isinstance(module, ShortConvolution):
+ state_size += module.state_size
+ return state_size
diff --git a/fla/layers/gsa.py b/fla/layers/gsa.py
new file mode 100644
index 0000000000000000000000000000000000000000..0b0c5f77659bd2ee0ef512c082465eb647a2253c
--- /dev/null
+++ b/fla/layers/gsa.py
@@ -0,0 +1,233 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from __future__ import annotations
+
+import warnings
+from typing import TYPE_CHECKING, Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from einops import rearrange
+
+from fla.modules import RMSNorm, ShortConvolution
+from fla.modules.activations import swish
+from fla.modules.feature_map import (ReLUFeatureMap, SwishFeatureMap,
+ T2RFeatureMap)
+from fla.modules.layernorm import rms_norm_linear
+from fla.ops.gsa import chunk_gsa, fused_recurrent_gsa
+
+if TYPE_CHECKING:
+ from fla.models.utils import Cache
+
+
+class GatedSlotAttention(nn.Module):
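+ # Gated Slot Attention: keys/values are read from and written to a fixed number of
+ # memory slots (`num_slots`, defaulting to the per-head key dim); `f_proj` produces a
+ # per-slot forget gate that controls how quickly each slot decays.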
+
+ def __init__(
+ self,
+ mode: str = 'chunk',
+ hidden_size: int = 1024,
+ expand_k: float = 1.,
+ expand_v: float = 1.,
+ num_heads: int = 4,
+ num_kv_heads: Optional[int] = None,
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+ conv_bias: bool = False,
+ num_slots: Optional[int] = None,
+ elementwise_affine: Optional[bool] = True,
+ norm_first: bool = True,
+ norm_eps: float = 1e-5,
+ gate_logit_normalizer: int = 8,
+ feature_map: str = 'swish',
+ use_output_gate: bool = False,
+ use_norm: bool = True,
+ layer_idx: Optional[int] = None,
+ scale: Optional[float] = 1.,
+ **kwargs
+ ) -> GatedSlotAttention:
+ super().__init__()
+
+ self.mode = mode
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.num_heads = num_heads
+ self.num_kv_heads = num_heads if num_kv_heads is None else num_kv_heads
+ self.num_kv_groups = self.num_heads // self.num_kv_heads
+ self.key_dim = int(hidden_size * expand_k)
+ self.value_dim = int(hidden_size * expand_v)
+ self.key_dim_per_group = self.key_dim // self.num_kv_groups
+ self.value_dim_per_group = self.value_dim // self.num_kv_groups
+ self.head_k_dim = self.key_dim // self.num_heads
+ self.head_v_dim = self.value_dim // self.num_heads
+
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.conv_bias = conv_bias
+
+ self.gate_logit_normalizer = gate_logit_normalizer
+
+ self.use_output_gate = use_output_gate
+ self.use_norm = use_norm
+ self.scale = scale
+
+ if num_slots is None:
+ num_slots = self.head_k_dim
+ self.num_slots = num_slots
+ self.norm_first = norm_first
+
+ self.layer_idx = layer_idx
+
+ if layer_idx is None:
+ warnings.warn(
+ f"Instantiating {self.__class__.__name__} without passing `layer_idx` is not recommended and will "
+ "to errors during the forward call, if caching is used. Please make sure to provide a `layer_idx` "
+ "when creating this class."
+ )
+
+ if norm_first:
+ self.norm = RMSNorm(self.hidden_size, eps=norm_eps)
+ self.register_module('feature_map', None)
+ if feature_map == 'swish':
+ self.feature_map = SwishFeatureMap()
+ elif feature_map == 'relu':
+ self.feature_map = ReLUFeatureMap()
+ elif feature_map == 't2r':
+ self.feature_map = T2RFeatureMap(self.head_k_dim, self.head_k_dim)
+ else:
+ raise NotImplementedError(f"Feature map `{feature_map}` is not supported now.")
+
+ self.q_proj = nn.Linear(self.hidden_size, self.key_dim, bias=False)
+ self.k_proj = nn.Linear(self.hidden_size, self.key_dim_per_group, bias=False)
+ self.v_proj = nn.Linear(self.hidden_size, self.value_dim_per_group, bias=False)
+ self.f_proj = nn.Linear(self.hidden_size, self.num_kv_heads * self.num_slots, bias=False)
+
+ if use_short_conv:
+ self.conv_size = conv_size
+ self.q_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu')
+ self.k_conv1d = ShortConvolution(self.key_dim_per_group, conv_size, activation='silu')
+ self.v_conv1d = ShortConvolution(self.value_dim_per_group, conv_size, activation='silu')
+
+ self.g_norm = RMSNorm(self.hidden_size, elementwise_affine, eps=norm_eps)
+ self.o_proj = nn.Linear(self.value_dim, self.hidden_size, bias=False)
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]:
+ if attention_mask is not None:
+ assert len(attention_mask.shape) == 2, (
+ "Expected attention_mask as a 0-1 matrix with shape [batch_size, seq_len] "
+ "for padding purposes (0 indicating padding). "
+ "Arbitrary attention masks of shape [batch_size, seq_len, seq_len] are not allowed."
+ )
+
+ # launching the triton kernel for just one token will actually be slower
+ mode = 'fused_recurrent' if hidden_states.shape[1] == 1 else self.mode
+
+ if self.norm_first:
+ hidden_states = self.norm(hidden_states)
+
+ last_state = None
+ if past_key_values is not None and len(past_key_values) > self.layer_idx:
+ last_state = past_key_values[self.layer_idx]
+
+ if self.use_short_conv:
+ conv_state_q, conv_state_k, conv_state_v = None, None, None
+ if last_state is not None:
+ conv_state_q, conv_state_k, conv_state_v = last_state['conv_state']
+ conv_mask = attention_mask[:, -hidden_states.shape[1]:] if attention_mask is not None else None
+ q, conv_state_q = self.q_conv1d(x=self.q_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_q,
+ output_final_state=use_cache)
+ k, conv_state_k = self.k_conv1d(x=self.k_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_k,
+ output_final_state=use_cache)
+ v, conv_state_v = self.v_conv1d(x=self.v_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_v,
+ output_final_state=use_cache)
+ else:
+ q = self.q_proj(hidden_states)
+ k = self.k_proj(hidden_states)
+ v = self.v_proj(hidden_states)
+ f = self.f_proj(hidden_states)
+
+ q = rearrange(q, 'b t (h d) -> b t h d', h=self.num_heads)
+ k = rearrange(k, 'b t (h d) -> b t h d', h=self.num_kv_heads)
+ v = rearrange(v, 'b t (h d) -> b t h d', h=self.num_kv_heads)
+ f = rearrange(f, 'b t (h m) -> b t h m', h=self.num_kv_heads)
+
+ if self.feature_map is not None:
+ q, k = map(lambda x: self.feature_map(x), (q, k))
+ v = swish(v)
+
+ f = F.logsigmoid(f) / self.gate_logit_normalizer
+ s = (1 - f.exp()).to(f.dtype)
+ # dealing with left-padding
+ if attention_mask is not None:
+ s = s.mul_(attention_mask[:, -s.shape[1]:, None, None])
+ v = v.mul_(attention_mask[:, -v.shape[1]:, None, None])
+
+ recurrent_state = last_state['recurrent_state'] if last_state is not None else None
+ if mode == 'fused_recurrent':
+ o, recurrent_state = fused_recurrent_gsa(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ g=f,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ scale=self.scale,
+ head_first=False
+ )
+ elif mode == 'chunk':
+ o, recurrent_state = chunk_gsa(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ g=f,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ scale=self.scale,
+ head_first=False
+ )
+ else:
+ raise NotImplementedError(f"Not supported mode `{mode}`.")
+
+ if past_key_values is not None:
+ past_key_values.update(
+ recurrent_state=recurrent_state,
+ conv_state=(conv_state_q, conv_state_k, conv_state_v) if self.use_short_conv else None,
+ layer_idx=self.layer_idx,
+ offset=q.shape[1]
+ )
+
+ o = rearrange(o, 'b t h d -> b t (h d)')
+ o = rms_norm_linear(swish(o), self.g_norm.weight, self.g_norm.bias, self.o_proj.weight, self.o_proj.bias)
+ return o, None, past_key_values
+
+ def state_size(self, *args, **kwargs) -> int:
+ return 2 * self.num_slots * self.hidden_size
diff --git a/fla/layers/hgrn.py b/fla/layers/hgrn.py
new file mode 100644
index 0000000000000000000000000000000000000000..97549de05f313cd41fdd918074c37985f7a0edcd
--- /dev/null
+++ b/fla/layers/hgrn.py
@@ -0,0 +1,153 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+# "Hierarchically Gated Recurrent Neural Network for Sequence Modeling" [https://arxiv.org/abs/2311.04823]
+
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+from fla.modules import FusedRMSNormSwishGate, ShortConvolution
+from fla.modules.activations import swiglu
+from fla.ops.hgrn import chunk_hgrn, fused_recurrent_hgrn
+
+if TYPE_CHECKING:
+ from fla.models.utils import Cache
+
+
+class HGRNAttention(nn.Module):
+
+ def __init__(
+ self,
+ mode: str = 'chunk',
+ hidden_size: int = 1024,
+ expand_ratio: Optional[int] = 1,
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+ conv_bias: bool = False,
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-5,
+ layer_idx: Optional[int] = None
+ ) -> HGRNAttention:
+ super().__init__()
+
+ self.mode = mode
+ self.hidden_size = hidden_size
+ self.expand_ratio = expand_ratio
+ self.input_dim = int(hidden_size * expand_ratio)
+
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.conv_bias = conv_bias
+
+ self.layer_idx = layer_idx
+
+ assert mode in ['chunk', 'fused_recurrent'], f"Not supported mode `{mode}`."
+
+ self.i_proj = nn.Linear(hidden_size, self.input_dim, bias=False)
+ self.f_proj = nn.Linear(hidden_size, self.input_dim, bias=False)
+ self.g_proj = nn.Linear(hidden_size, self.input_dim, bias=False)
+
+ if use_short_conv:
+ self.conv_size = conv_size
+ self.q_conv1d = ShortConvolution(self.input_dim, conv_size, activation=None)
+ self.f_conv1d = ShortConvolution(self.input_dim, conv_size, activation=None)
+ self.i_conv1d = ShortConvolution(self.input_dim, conv_size, activation=None)
+
+ self.g_norm = FusedRMSNormSwishGate(self.input_dim, elementwise_affine, norm_eps)
+ self.o_proj = nn.Linear(self.input_dim, hidden_size, bias=False)
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ lower_bound: Optional[torch.Tensor] = None,
+ **kwargs
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]:
+ if attention_mask is not None:
+ assert len(attention_mask.shape) == 2, (
+ "Expected attention_mask as a 0-1 matrix with shape [batch_size, seq_len] "
+ "for padding purposes (0 indicating padding). "
+ "Arbitrary attention masks of shape [batch_size, seq_len, seq_len] are not allowed."
+ )
+
+ # launching the triton kernel for just one token will actually be slower
+ mode = 'fused_recurrent' if hidden_states.shape[1] == 1 else self.mode
+
+ last_state = None
+ if past_key_values is not None and len(past_key_values) > self.layer_idx:
+ last_state = past_key_values[self.layer_idx]
+
+ if self.use_short_conv:
+ conv_state_i, conv_state_f = None, None
+ if last_state is not None:
+ conv_state_i, conv_state_f = last_state['conv_state']
+ conv_mask = attention_mask[:, -hidden_states.shape[1]:] if attention_mask is not None else None
+ i, conv_state_i = self.i_conv1d(x=self.i_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_i,
+ output_final_state=use_cache)
+ f, conv_state_f = self.f_conv1d(x=self.f_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_f,
+ output_final_state=use_cache)
+ else:
+ i = self.i_proj(hidden_states)
+ f = self.f_proj(hidden_states)
+
+ # the lower bound for the first layer is zero
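+ # deeper layers receive a layer-specific `lower_bound`, so their forget gates cannot
+ # fall below it and longer-range information is retained higher in the stack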
+ if lower_bound is None or self.layer_idx == 0:
+ i, f = swiglu(i, 1 - f.sigmoid()), F.logsigmoid(f)
+ else:
+ g = lower_bound + (1 - lower_bound) * f.sigmoid()
+ i, f = swiglu(i, 1 - g), g.log()
+
+ # dealing with left-padding
+ if attention_mask is not None:
+ i = i.mul_(attention_mask[:, -i.shape[-2]:, None])
+
+ recurrent_state = last_state['recurrent_state'] if last_state is not None else None
+ if mode == 'chunk':
+ o, recurrent_state = chunk_hgrn(i, f, recurrent_state, use_cache)
+ elif mode == 'fused_recurrent':
+ o, recurrent_state = fused_recurrent_hgrn(i, f, recurrent_state, use_cache)
+ else:
+ raise NotImplementedError(f"Not supported mode `{mode}`.")
+
+ if past_key_values is not None:
+ past_key_values.update(
+ recurrent_state=recurrent_state,
+ conv_state=(conv_state_i, conv_state_f) if self.use_short_conv else None,
+ layer_idx=self.layer_idx,
+ offset=i.shape[1]
+ )
+
+ o = self.g_norm(o, self.g_proj(hidden_states))
+ o = self.o_proj(o)
+
+ return o, None, past_key_values
+
+ def state_size(self, **kwargs) -> int:
+ state_size = self.hidden_size
+ for module in self.children():
+ if isinstance(module, ShortConvolution):
+ state_size += module.state_size
+ return state_size
diff --git a/fla/layers/hgrn2.py b/fla/layers/hgrn2.py
new file mode 100644
index 0000000000000000000000000000000000000000..769c19b7d2ed12f97ebe4cb44f474d3eba6f72fc
--- /dev/null
+++ b/fla/layers/hgrn2.py
@@ -0,0 +1,207 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+# "HGRN2: Gated Linear RNNs with State Expansion"[https://arxiv.org/abs/2404.07904]
+
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from einops import rearrange
+
+from fla.modules import RMSNorm, ShortConvolution
+from fla.modules.activations import swish
+from fla.modules.layernorm import rms_norm_linear
+from fla.ops.gla import chunk_gla, fused_chunk_gla, fused_recurrent_gla
+
+if TYPE_CHECKING:
+ from fla.models.utils import Cache
+
+
+class HGRN2Attention(nn.Module):
+
+ def __init__(
+ self,
+ mode: str = 'chunk',
+ hidden_size: int = 1024,
+ num_heads: Optional[int] = None,
+ expand_ratio: Optional[int] = 128,
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+ conv_bias: bool = False,
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-5,
+ layer_idx: Optional[int] = None
+ ) -> HGRN2Attention:
+ super().__init__()
+
+ self.mode = mode
+ self.hidden_size = hidden_size
+
+ if expand_ratio is None and num_heads is not None:
+ expand_ratio = hidden_size // num_heads
+ elif expand_ratio is not None and num_heads is None:
+ num_heads = hidden_size // expand_ratio
+ elif expand_ratio is None and num_heads is None:
+ raise RuntimeError("One of `expand_ratio` or `num_heads` should be provided.")
+ self.num_heads = num_heads
+ self.expand_ratio = expand_ratio
+
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.conv_bias = conv_bias
+
+ self.forget_dim = int(self.num_heads * self.expand_ratio)
+ self.input_dim = hidden_size
+ self.layer_idx = layer_idx
+
+ assert mode in ['chunk', 'fused_recurrent', 'fused_chunk'], f"Not supported mode `{mode}`."
+ assert self.forget_dim % num_heads == 0, f"forget dim must be divisible by num_heads of {num_heads}"
+ assert self.input_dim % num_heads == 0, f"input dim must be divisible by num_heads of {num_heads}"
+
+ self.head_f_dim = self.expand_ratio
+ self.head_i_dim = self.hidden_size // num_heads
+
+ self.q_proj = nn.Linear(hidden_size, self.forget_dim, bias=False)
+ self.f_proj = nn.Linear(hidden_size, self.forget_dim, bias=False)
+ self.i_proj = nn.Linear(hidden_size, self.input_dim, bias=False)
+
+ if use_short_conv:
+ self.conv_size = conv_size
+ self.q_conv1d = ShortConvolution(self.forget_dim, conv_size, activation=None)
+ self.f_conv1d = ShortConvolution(self.forget_dim, conv_size, activation=None)
+ self.i_conv1d = ShortConvolution(self.input_dim, conv_size, activation=None)
+
+ self.g_norm = RMSNorm(hidden_size=self.hidden_size, elementwise_affine=elementwise_affine, eps=norm_eps)
+ self.o_proj = nn.Linear(self.input_dim, hidden_size, bias=False)
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ lower_bound: Optional[torch.Tensor] = None,
+ **kwargs
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]:
+ if attention_mask is not None:
+ assert len(attention_mask.shape) == 2, (
+ "Expected attention_mask as a 0-1 matrix with shape [batch_size, seq_len] "
+ "for padding purposes (0 indicating padding). "
+ "Arbitrary attention masks of shape [batch_size, seq_len, seq_len] are not allowed."
+ )
+
+ # launching the triton kernel for just one token will actually be slower
+ mode = 'fused_recurrent' if hidden_states.shape[1] == 1 else self.mode
+
+ last_state = None
+ if past_key_values is not None and len(past_key_values) > self.layer_idx:
+ last_state = past_key_values[self.layer_idx]
+
+ if self.use_short_conv:
+ conv_state_q, conv_state_f, conv_state_i = None, None, None
+ if last_state is not None:
+ conv_state_q, conv_state_f, conv_state_i = last_state['conv_state']
+ conv_mask = attention_mask[:, -hidden_states.shape[1]:] if attention_mask is not None else None
+ q, conv_state_q = self.q_conv1d(x=self.q_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_q,
+ output_final_state=use_cache)
+ f, conv_state_f = self.f_conv1d(x=self.f_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_f,
+ output_final_state=use_cache)
+ i, conv_state_i = self.i_conv1d(x=self.i_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_i,
+ output_final_state=use_cache)
+ else:
+ q = self.q_proj(hidden_states)
+ f = self.f_proj(hidden_states)
+ i = self.i_proj(hidden_states)
+
+ # dealing with left-padding
+ if attention_mask is not None:
+ i = i.mul_(attention_mask[:, -i.shape[-2]:, None])
+
+ q = swish(q)
+
+ # improve precision
+ f = f.float()
+
+ # the lower bound for the first layer is zero
+ if lower_bound is None or self.layer_idx == 0:
+ k, g = 1 - f.sigmoid(), F.logsigmoid(f)
+ else:
+ g = lower_bound + (1 - lower_bound) * f.sigmoid()
+ k, g = 1 - g, g.log()
+
+ q, k, i, g = map(lambda x: rearrange(x, '... (h d) -> ... h d', h=self.num_heads), (q, k.to(i), i, g))
+
+ recurrent_state = last_state['recurrent_state'] if last_state is not None else None
+ if mode == 'fused_recurrent':
+ o, recurrent_state = fused_recurrent_gla(
+ q=q,
+ k=k,
+ v=i,
+ gk=g,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ elif mode == 'fused_chunk':
+ o, recurrent_state = fused_chunk_gla(
+ q=q,
+ k=k,
+ v=i,
+ g=g,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ elif mode == 'chunk':
+ o, recurrent_state = chunk_gla(
+ q=q,
+ k=k,
+ v=i,
+ g=g,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ else:
+ raise NotImplementedError(f"Not supported mode `{mode}`.")
+
+ if past_key_values is not None:
+ past_key_values.update(
+ recurrent_state=recurrent_state,
+ conv_state=(conv_state_q, conv_state_f, conv_state_i) if self.use_short_conv else None,
+ layer_idx=self.layer_idx,
+ offset=q.shape[2]
+ )
+
+ o = rearrange(o, '... h d -> ... (h d)')
+ o = rms_norm_linear(o, self.g_norm.weight, self.g_norm.bias, self.o_proj.weight, self.o_proj.bias)
+ return o, None, past_key_values
+
+ def state_size(self, **kwargs) -> int:
+ state_size = self.forget_dim * self.head_i_dim
+ for module in self.children():
+ if isinstance(module, ShortConvolution):
+ state_size += module.state_size
+ return state_size
diff --git a/fla/layers/linear_attn.py b/fla/layers/linear_attn.py
new file mode 100644
index 0000000000000000000000000000000000000000..7aae4e4371963e61e8bd52ffd7a3c97eaf81ee49
--- /dev/null
+++ b/fla/layers/linear_attn.py
@@ -0,0 +1,171 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional
+
+import torch.nn as nn
+import torch.nn.functional as F
+from einops import rearrange, repeat
+
+from fla.modules import RMSNorm
+from fla.modules.feature_map import (DPFPFeatureMap, HadamardFeatureMap,
+ HedgehogFeatureMap, T2RFeatureMap)
+from fla.ops.linear_attn import (chunk_linear_attn, fused_chunk_linear_attn,
+ fused_recurrent_linear_attn)
+
+
+class LinearAttention(nn.Module):
+ def __init__(
+ self,
+ mode: str = 'chunk',
+ hidden_size: str = 1024,
+ expand_k: int = 1.0,
+ expand_v: int = 1.0,
+ num_heads: int = 8,
+ num_kv_heads: Optional[int] = None,
+ feature_map: str = 'elementwise_product',
+ tie_feature_map_qk: bool = False,
+ output_norm: str = 'rmsnorm',
+ norm_q: bool = False,
+ norm_k: bool = False,
+ # standard linear attention normalization
+ do_feature_map_norm: bool = False,
+ elementwise_affine: bool = True,
+ norm_eps: float = 1e-5,
+ **kwargs
+ ):
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ self.mode = mode
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads if num_kv_heads is not None else num_heads
+ self.num_kv_groups = self.num_heads // self.num_kv_heads
+ self.key_dim = int(hidden_size * expand_k)
+ self.value_dim = int(hidden_size * expand_v)
+ self.key_dim_per_group = self.key_dim // self.num_kv_groups
+ self.value_dim_per_group = self.value_dim // self.num_kv_groups
+
+ assert mode in ['chunk', 'fused_chunk', 'fused_recurrent'], f"Not supported mode `{mode}`."
+ assert self.key_dim % num_heads == 0, f"key dim must be divisible by num_heads of {num_heads}"
+ assert self.value_dim % num_heads == 0, f"value dim must be divisible by num_heads of {num_heads}"
+
+ self.head_qk_dim = self.key_dim // num_heads
+ self.head_v_dim = self.value_dim // num_heads
+ self.do_feature_map_norm = do_feature_map_norm
+
+ if feature_map == 'hedgehog':
+ if tie_feature_map_qk:
+ self.feature_map_q = self.feature_map_k = HedgehogFeatureMap(head_dim=self.head_qk_dim)
+ else:
+ self.feature_map_q = HedgehogFeatureMap(head_dim=self.head_qk_dim)
+ self.feature_map_k = HedgehogFeatureMap(head_dim=self.head_qk_dim)
+
+ elif feature_map == 't2r':
+ if tie_feature_map_qk:
+ self.feature_map_q = self.feature_map_k = T2RFeatureMap(head_dim=self.head_qk_dim)
+ else:
+ self.feature_map_q = T2RFeatureMap(head_dim=self.head_qk_dim)
+ self.feature_map_k = T2RFeatureMap(head_dim=self.head_qk_dim)
+
+ elif feature_map == 'elementwise_product':
+ if tie_feature_map_qk:
+ self.feature_map_q = self.feature_map_k = HadamardFeatureMap(head_dim=self.head_qk_dim)
+ else:
+ self.feature_map_q = HadamardFeatureMap(head_dim=self.head_qk_dim)
+ self.feature_map_k = HadamardFeatureMap(head_dim=self.head_qk_dim)
+
+ elif feature_map == 'dpfp':
+ self.feature_map_q = DPFPFeatureMap(head_dim=self.head_qk_dim)
+ self.feature_map_k = DPFPFeatureMap(head_dim=self.head_qk_dim)
+
+ elif feature_map == 'elu':
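+ # elu(x) + 1, the positive feature map used in "Transformers are RNNs"
+ # (Katharopoulos et al., 2020)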
+ def elu(x):
+ return F.elu(x) + 1
+ self.feature_map_q = elu
+ self.feature_map_k = elu
+
+ elif feature_map == 'relu':
+ self.feature_map_q = nn.ReLU()
+ self.feature_map_k = nn.ReLU()
+
+ elif feature_map == 'identity':
+ self.feature_map_q = nn.Identity()
+ self.feature_map_k = nn.Identity()
+ else:
+ raise NotImplementedError(f"Not supported feature map `{feature_map}`.")
+
+ self.q_proj = nn.Linear(hidden_size, self.key_dim, bias=False)
+ self.k_proj = nn.Linear(hidden_size, self.key_dim_per_group, bias=False)
+ self.v_proj = nn.Linear(hidden_size, self.value_dim_per_group, bias=False)
+
+ if output_norm == 'rmsnorm':
+ self.norm = RMSNorm(hidden_size=self.head_v_dim, elementwise_affine=elementwise_affine, eps=norm_eps)
+ elif output_norm == 'identity':
+ self.norm = nn.Identity()
+ else:
+ raise NotImplementedError(f"Not supported output norm `{output_norm}`.")
+
+ self.o_proj = nn.Linear(self.value_dim, hidden_size, bias=False)
+
+ self.norm_q = norm_q
+ self.norm_k = norm_k
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(self, x):
+ mode = self.mode
+ q = self.q_proj(x)
+ k = self.k_proj(x)
+ v = self.v_proj(x)
+
+ q = rearrange(q, '... (h d) -> ... h d', h=self.num_heads)
+ if self.num_kv_groups > 1:
+ k, v = (repeat(x, '... (h d) -> ... (h g) d', h=self.num_kv_heads, g=self.num_kv_groups) for x in (k, v))
+ else:
+ k, v = (rearrange(x, '... (h d) -> ... h d', h=self.num_kv_heads) for x in (k, v))
+
+ q = self.feature_map_q(q)
+ k = self.feature_map_k(k)
+
+ if self.norm_q:
+ q = q / (q.sum(-1, True) + 1e-4)
+ if self.norm_k:
+ k = k / (k.sum(-1, True) + 1e-4)
+
+ if mode == 'chunk':
+ o, final_state = chunk_linear_attn(
+ q=q,
+ k=k,
+ v=v,
+ normalize=self.do_feature_map_norm,
+ head_first=False
+ )
+ elif mode == 'fused_chunk':
+ o, final_state = fused_chunk_linear_attn(
+ q=q,
+ k=k,
+ v=v,
+ normalize=self.do_feature_map_norm,
+ )
+ elif mode == 'fused_recurrent':
+ o, final_state = fused_recurrent_linear_attn(
+ q=q,
+ k=k,
+ v=v,
+ normalize=self.do_feature_map_norm,
+ )
+ else:
+ raise NotImplementedError
+ o = self.norm(o)
+ o = self.o_proj(o)
+ return o
diff --git a/fla/layers/multiscale_retention.py b/fla/layers/multiscale_retention.py
new file mode 100644
index 0000000000000000000000000000000000000000..45312bec92bd592e61ce646e2d6036b07e76fa41
--- /dev/null
+++ b/fla/layers/multiscale_retention.py
@@ -0,0 +1,282 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Optional, Tuple
+
+import torch
+import torch.nn as nn
+from einops import rearrange, repeat
+from transformers.activations import ACT2FN
+
+from fla.modules import FusedRMSNormSwishGate, RMSNorm, ShortConvolution
+from fla.modules.rotary import RotaryEmbedding
+from fla.ops.retention import (chunk_retention, fused_chunk_retention,
+ fused_recurrent_retention, parallel_retention)
+
+if TYPE_CHECKING:
+ from fla.models.utils import Cache
+
+
+class MultiScaleRetention(nn.Module):
+ r"""
+ The layer implementation for [Retentive Network: A Successor to Transformer for Large Language Models](https://arxiv.org/pdf/2307.08621.pdf). # noqa
+
+ Args:
+ mode (str, Optional):
+ Which Retention kernel to use.
+ Currently available: `chunk`, `fused_recurrent`, `parallel`, and `fused_chunk`.
+ Default: `chunk`.
+ hidden_size (int, Optional):
+ The hidden size of the input. Default: 1024.
+ expand_k (float, Optional):
+ The expansion ratio for the key dim. Default: 1.0.
+ expand_v (float, Optional):
+ The expansion ratio for the value dim. Default: 2.0.
+ num_heads (int, Optional):
+ The number of heads. Default: 8.
+ num_kv_heads (int, Optional):
+ The number of key/value heads, used for MQA. Default: None.
+ feature_map (str, Optional):
+ Feature map function applied to queries/keys. Default: None.
+ use_short_conv (bool, Optional):
+ Whether to use short convolutions. Default: `False`.
+ conv_size (int, Optional):
+ The kernel size of the short convolution, only used when `use_short_conv` is `True`. Default: 4.
+ conv_bias (bool, Optional):
+ Whether to use bias in the short convolution, only used when `use_short_conv` is `True`. Default: `False`.
+ use_output_gate (bool, Optional):
+ Whether to use output gate. Default: `True`.
+ gate_fn (str, Optional):
+ The activation function for the output gate. Default: `swish`.
+ elementwise_affine (bool, Optional):
+ If `True`, applies elementwise affine to LayerNorm with learnable parameters. Default: `True`.
+ norm_eps (float, Optional):
+ The epsilon value for the layernorm/rmsnorm layer. Default: 1e-5.
+ fuse_norm (bool, Optional):
+ Whether to fuse the norm and the output gate for better memory footprint. Default: `True`.
+ layer_idx (int, Optional):
+ The index of the layer. Default: None.
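+
+ Example (illustrative only; assumes a CUDA device with Triton available):
+ >>> import torch
+ >>> from fla.layers.multiscale_retention import MultiScaleRetention
+ >>> layer = MultiScaleRetention(hidden_size=1024, num_heads=8, layer_idx=0).cuda()
+ >>> x = torch.randn(2, 128, 1024, device='cuda')
+ >>> o, _, past_key_values = layer(x)
+ >>> o.shape  # values are expanded internally (expand_v=2.0) but projected back
+ torch.Size([2, 128, 1024])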
+ """
+
+ def __init__(
+ self,
+ mode: str = 'chunk',
+ hidden_size: int = 1024,
+ expand_k: float = 1.0,
+ expand_v: float = 2.0,
+ num_heads: int = 8,
+ num_kv_heads: Optional[int] = None,
+ feature_map: Optional[str] = None,
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+ conv_bias: bool = False,
+ use_output_gate: bool = True,
+ gate_fn: str = 'swish',
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-5,
+ fuse_norm: bool = True,
+ layer_idx: Optional[int] = None,
+ **kwargs
+ ) -> MultiScaleRetention:
+ super().__init__()
+
+ self.mode = mode
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads if num_kv_heads is not None else num_heads
+ self.num_kv_groups = self.num_heads // self.num_kv_heads
+ self.feature_map_fn = ACT2FN[feature_map] if feature_map is not None else None
+
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.conv_bias = conv_bias
+ self.use_output_gate = use_output_gate
+
+ self.key_dim = int(hidden_size * expand_k)
+ self.value_dim = int(hidden_size * expand_v)
+ self.key_dim_per_group = self.key_dim // self.num_kv_groups
+ self.value_dim_per_group = self.value_dim // self.num_kv_groups
+ self.layer_idx = layer_idx
+
+ assert mode in ['chunk', 'fused_chunk', 'parallel', 'fused_recurrent'], f"Not supported mode `{mode}`."
+ assert self.key_dim % num_heads == 0, f"key dim must be divisible by num_heads of {num_heads}"
+ assert self.value_dim % num_heads == 0, f"value dim must be divisible by num_heads of {num_heads}"
+
+ self.head_qk_dim = self.key_dim // num_heads
+ self.head_v_dim = self.value_dim // num_heads
+
+ self.q_proj = nn.Linear(hidden_size, self.key_dim, bias=False)
+ self.k_proj = nn.Linear(hidden_size, self.key_dim_per_group, bias=False)
+ self.v_proj = nn.Linear(hidden_size, self.value_dim_per_group, bias=False)
+ if self.use_output_gate:
+ self.g_proj = nn.Linear(hidden_size, self.value_dim, bias=False)
+
+ if use_short_conv:
+ self.conv_size = conv_size
+ self.q_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu')
+ self.k_conv1d = ShortConvolution(self.key_dim_per_group, conv_size, activation='silu')
+ self.v_conv1d = ShortConvolution(self.value_dim_per_group, conv_size, activation='silu')
+
+ self.o_proj = nn.Linear(self.value_dim, hidden_size, bias=False)
+
+ if gate_fn == 'swish' and fuse_norm and use_output_gate:
+ self.g_norm_swish_gate = FusedRMSNormSwishGate(self.head_v_dim, elementwise_affine, norm_eps)
+ self.fuse_norm_and_gate = True
+ else:
+ self.fuse_norm_and_gate = False
+ self.g_norm = RMSNorm(hidden_size=self.head_v_dim, elementwise_affine=elementwise_affine, eps=norm_eps)
+ self.gate_fn = ACT2FN[gate_fn]
+
+ # TODO: fix this issue
+ # https://github.com/Dao-AILab/flash-attention/blob/main/flash_attn/ops/triton/rotary.py#L180
+ # Ideally, we would want to support arbitrary d_head_qk
+ assert self.head_qk_dim <= 256, "head_qk_dim must be less than or equal to 256"
+ self.rotary = RotaryEmbedding(dim=self.head_qk_dim)
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]:
+ if attention_mask is not None:
+ assert len(attention_mask.shape) == 2, (
+ "Expected attention_mask as a 0-1 matrix with shape [batch_size, seq_len] "
+ "for padding purposes (0 indicating padding). "
+ "Arbitrary attention masks of shape [batch_size, seq_len, seq_len] are not allowed."
+ )
+
+ # launching the triton kernel for just one token will actually be slower
+ mode = 'fused_recurrent' if hidden_states.shape[1] == 1 else self.mode
+
+ last_state = None
+ if past_key_values is not None and len(past_key_values) > self.layer_idx:
+ last_state = past_key_values[self.layer_idx]
+
+ if self.use_short_conv:
+ conv_state_q, conv_state_k, conv_state_v = None, None, None
+ if last_state is not None:
+ conv_state_q, conv_state_k, conv_state_v = last_state['conv_state']
+ conv_mask = attention_mask[:, -hidden_states.shape[1]:] if attention_mask is not None else None
+ q, conv_state_q = self.q_conv1d(x=self.q_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_q,
+ output_final_state=use_cache)
+ k, conv_state_k = self.k_conv1d(x=self.k_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_k,
+ output_final_state=use_cache)
+ v, conv_state_v = self.v_conv1d(x=self.v_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_v,
+ output_final_state=use_cache)
+ else:
+ q = self.q_proj(hidden_states)
+ k = self.k_proj(hidden_states)
+ v = self.v_proj(hidden_states)
+
+ # dealing with left-padding
+ if attention_mask is not None:
+ v = v.mul_(attention_mask[:, -v.shape[-2]:, None])
+ q = rearrange(q, '... (h d) -> ... h d', h=self.num_heads)
+ k = rearrange(k, '... (h d) -> ... h d', h=self.num_kv_heads)
+ if self.feature_map_fn is not None:
+ q, k = map(self.feature_map_fn, (q, k))
+
+ seqlen_offset, max_seqlen = 0, q.shape[1]
+ if past_key_values is not None:
+ seqlen_offset = past_key_values.get_seq_length(self.layer_idx)
+ max_seqlen = q.shape[1] + seqlen_offset
+
+ if attention_mask is not None:
+ # to eliminate the offsets of padding tokens
+ seqlen_offset = (seqlen_offset + attention_mask.sum(-1) - attention_mask.shape[-1]).clamp(min=0)
+ max_seqlen = q.shape[1] + max(seqlen_offset)
+
+ q, k = self.rotary(q, k, seqlen_offset, max_seqlen)
+ if self.num_kv_groups > 1:
+ k = repeat(k, 'b t h d -> b t (h g) d', h=self.num_kv_heads, g=self.num_kv_groups)
+ v = repeat(v, 'b t (h d) -> b t (h g) d', h=self.num_kv_heads, g=self.num_kv_groups)
+ else:
+ k, v = rearrange(k, 'b t h d -> b t h d'), rearrange(v, 'b t (h d) -> b t h d', h=self.num_kv_heads)
+
+ recurrent_state = last_state['recurrent_state'] if last_state is not None else None
+ if mode == 'chunk':
+ o, recurrent_state = chunk_retention(
+ q=q,
+ k=k,
+ v=v,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ elif mode == 'fused_chunk':
+ o, recurrent_state = fused_chunk_retention(
+ q=q,
+ k=k,
+ v=v,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ elif mode == 'parallel':
+ o, recurrent_state = parallel_retention(q, k, v, head_first=False)
+ elif mode == 'fused_recurrent':
+ o, recurrent_state = fused_recurrent_retention(
+ q=q,
+ k=k,
+ v=v,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ else:
+ raise NotImplementedError(f"Not supported mode `{mode}`.")
+
+ if past_key_values is not None:
+ past_key_values.update(
+ recurrent_state=recurrent_state,
+ conv_state=(conv_state_q, conv_state_k, conv_state_v) if self.use_short_conv else None,
+ layer_idx=self.layer_idx,
+ offset=q.shape[1]
+ )
+
+ if self.use_output_gate:
+ g = self.g_proj(hidden_states)
+ if self.fuse_norm_and_gate:
+ g = rearrange(g, 'b t (h d) -> b t h d', h=self.num_heads)
+ o = self.g_norm_swish_gate(o, g)
+ o = rearrange(o, 'b t h d -> b t (h d)')
+ else:
+ o = rearrange(self.g_norm(o), 'b t h d -> b t (h d)')
+ o = o * self.gate_fn(g)
+ else:
+ o = rearrange(self.g_norm(o), 'b t h d -> b t (h d)')
+ o = self.o_proj(o)
+
+ return o, None, past_key_values
+
+ def state_size(self, **kwargs) -> int:
+ state_size = self.key_dim * self.head_v_dim
+ for module in self.children():
+ if isinstance(module, ShortConvolution):
+ state_size += module.state_size
+ return state_size
diff --git a/fla/layers/rebased.py b/fla/layers/rebased.py
new file mode 100644
index 0000000000000000000000000000000000000000..b55a1df64ee1f4e8373c3be0d822043f0f635c25
--- /dev/null
+++ b/fla/layers/rebased.py
@@ -0,0 +1,136 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+"""
+https://github.com/corl-team/rebased/blob/main/flash_linear_attention/fla/layers/rebased_fast.py
+"""
+
+from __future__ import annotations
+
+from typing import Optional
+
+import torch
+import torch.nn as nn
+from einops import rearrange
+
+from fla.modules.feature_map import RebasedFeatureMap
+from fla.ops.linear_attn import chunk_linear_attn, fused_chunk_linear_attn
+from fla.ops.rebased import parallel_rebased
+
+
+class ReBasedLinearAttention(nn.Module):
+ def __init__(
+ self,
+ hidden_size: int,
+ l_max: int = 2048,
+ feature_dim: int = 16,
+ num_key_value_heads: int = 16,
+ num_heads: int = 16,
+ use_gamma: Optional[bool] = True,
+ use_beta: Optional[bool] = True,
+ normalize: Optional[bool] = True,
+ causal: bool = True,
+ eps: float = 1e-5,
+ mode: str = "parallel",
+ layer_idx: Optional[int] = None,
+ **kwargs
+ ) -> ReBasedLinearAttention:
+ super().__init__()
+ self.hidden_size = hidden_size
+ self.l_max = l_max
+ self.mode = mode
+ assert self.mode in ["fused_chunk", "parallel", 'chunk']
+
+ # linear attention
+ self.feature_dim = feature_dim
+ self.num_key_value_heads = num_key_value_heads
+ self.num_heads = num_heads
+ self.head_dim = self.hidden_size // self.num_key_value_heads
+ self.use_gamma = use_gamma
+ self.use_beta = use_beta
+ self.normalize = normalize
+ self.causal = causal
+
+ self.feature_map = RebasedFeatureMap(self.feature_dim, use_gamma, use_beta, normalize)
+ self.q_proj = nn.Linear(self.hidden_size, self.feature_dim * self.num_heads, bias=False)
+ self.k_proj = nn.Linear(self.hidden_size, self.feature_dim * self.num_heads, bias=False)
+ self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=False)
+ self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
+ self.dropout = nn.Identity()
+ self.eps = eps
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(self, hidden_states: torch.Tensor, **kwargs):
+ mode = self.mode
+ q, k, v = self.q_proj(hidden_states), self.k_proj(hidden_states), self.v_proj(hidden_states)
+ q, k, v = map(lambda x: rearrange(x, "... (h d) -> ... h d", h=self.num_heads), [q, k, v])
+ q, k = self.feature_map(q, flatten=(mode != 'parallel')), self.feature_map(k, flatten=(mode != 'parallel'))
+ if mode == "fused_chunk":
+ o = fused_chunk_linear_attn(
+ q=q,
+ k=k,
+ v=v,
+ normalize=True,
+ scale=1,
+ head_first=False
+ )
+ elif mode == 'chunk':
+ o = chunk_linear_attn(
+ q=q,
+ k=k,
+ v=v,
+ normalize=True,
+ scale=1,
+ head_first=False
+ )
+ elif mode == 'parallel':
+ assert q.shape[-1] <= 128
+ o = parallel_rebased(
+ q=q,
+ k=k,
+ v=v,
+ eps=self.eps,
+ use_scale=True,
+ use_normalize=True,
+ head_first=False
+ )
+ o = self.o_proj(o)
+ o = self.dropout(o)
+ return o
+
+ # https://github.com/HazyResearch/zoology/blob/main/zoology/mixers/based.py#L119
+ def forward_reference(self, hidden_states: torch.Tensor, filters: torch.Tensor = None, *args, **kwargs):
+ """
+ hidden_states (torch.Tensor): input tensor of shape (b, t, d)
+ returns (torch.Tensor): output tensor of shape (b, t, d)
+ """
+ b, t, _ = hidden_states.size()
+ q, k, v = self.q_proj(hidden_states), self.k_proj(hidden_states), self.v_proj(hidden_states)
+
+ q = q.view(b, t, self.num_heads, self.feature_dim).transpose(1, 2)
+ k = k.view(b, t, self.num_key_value_heads, self.feature_dim).transpose(1, 2)
+ v = v.view(b, t, self.num_key_value_heads, self.head_dim).transpose(1, 2)
+
+ # Linear attention
+ q, k = self.feature_map(q), self.feature_map(k)
+ q, k, v = q.unsqueeze(-2), k.unsqueeze(-2), v.unsqueeze(-1)
+
+ # Compute attention
+ if self.causal:
+ y = ((q * (k * v).cumsum(2)).sum(-1) / ((q * k.cumsum(2)).sum(-1) + self.eps))
+ else:
+ y = ((q * (k * v).sum(2, True)).sum(-1) / ((q * k.sum(2, True)).sum(-1) + self.eps))
+ y = rearrange(y, 'b h t d -> b t (h d)')
+ y = self.o_proj(y.to(hidden_states.dtype))
+ y = self.dropout(y)
+ return y.to(hidden_states.dtype)
diff --git a/fla/layers/rwkv6.py b/fla/layers/rwkv6.py
new file mode 100644
index 0000000000000000000000000000000000000000..f00e17a7f56d68e4881353487b4dddf7c386e146
--- /dev/null
+++ b/fla/layers/rwkv6.py
@@ -0,0 +1,291 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+# "Eagle and Finch: RWKV with Matrix-Valued States and Dynamic Recurrence"[https://arxiv.org/abs/2404.05892]
+
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Optional, Tuple
+
+import torch
+import torch.nn as nn
+from einops import rearrange
+
+from fla.modules import GroupNorm
+from fla.modules.activations import ACT2FN
+from fla.ops.rwkv6 import chunk_rwkv6, fused_recurrent_rwkv6
+
+if TYPE_CHECKING:
+ from fla.models.utils import Cache
+
+
+class RWKV6Attention(nn.Module):
+
+ def __init__(
+ self,
+ mode: str = 'chunk',
+ hidden_size: int = 1024,
+ expand_k: float = 0.5,
+ expand_v: float = 1.0,
+ num_heads: int = 4,
+ gate_fn: str = 'swish',
+ proj_low_rank_dim: int = 32,
+ gate_low_rank_dim: int = 64,
+ fuse_norm: bool = True,
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-5,
+ layer_idx: Optional[int] = None,
+ **kwargs
+ ) -> RWKV6Attention:
+ super().__init__()
+
+ self.mode = mode
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.num_heads = num_heads
+ self.proj_low_rank_dim = proj_low_rank_dim
+ self.gate_low_rank_dim = gate_low_rank_dim
+
+ self.key_dim = int(hidden_size * expand_k)
+ self.value_dim = int(hidden_size * expand_v)
+ self.layer_idx = layer_idx
+
+ assert mode in ['chunk', 'fused_recurrent'], f"Not supported mode `{mode}`."
+ assert self.key_dim % num_heads == 0, f"key dim must be divisible by num_heads of {num_heads}"
+ assert self.value_dim % num_heads == 0, f"value dim must be divisible by num_heads of {num_heads}"
+
+ self.head_qk_dim = self.key_dim // num_heads
+ self.head_v_dim = self.value_dim // num_heads
+
+ self.time_shift = nn.ZeroPad2d((0, 0, 1, -1))
+ self.x_proj = nn.Sequential(
+ LerpLinear(hidden_size, proj_low_rank_dim * 5),
+ nn.Tanh(),
+ nn.Linear(proj_low_rank_dim * 5, hidden_size, bias=False)
+ )
+ self.x_bias = nn.Parameter(torch.zeros(5, hidden_size))
+
+ self.r_proj = DDLerpLinear(hidden_size, self.key_dim)
+ self.w_proj = DDLerpLinear(hidden_size, self.key_dim, low_rank_dim=gate_low_rank_dim)
+ self.k_proj = DDLerpLinear(hidden_size, self.key_dim)
+ self.v_proj = DDLerpLinear(hidden_size, self.value_dim)
+ self.g_proj = DDLerpLinear(hidden_size, self.value_dim)
+ self.bonus = nn.Parameter(torch.zeros(num_heads, self.head_qk_dim))
+
+ # TODO: fuse GroupNorm and output gate
+ self.g_norm = GroupNorm(self.num_heads, self.value_dim, elementwise_affine=elementwise_affine, bias=True, eps=norm_eps)
+ self.o_proj = nn.Linear(self.value_dim, hidden_size, bias=False)
+ self.gate_fn = ACT2FN[gate_fn]
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ if isinstance(module, nn.Parameter):
+ nn.init.xavier_uniform_(module, gain=2 ** -2.5)
+ module._is_hf_initialized = True
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]:
+ if attention_mask is not None:
+ assert len(attention_mask.shape) == 2, (
+ "Expected attention_mask as a 0-1 matrix with shape [batch_size, seq_len] "
+ "for padding purposes (0 indicating padding). "
+ "Arbitrary attention masks of shape [batch_size, seq_len, seq_len] are not allowed."
+ )
+
+ batch_size, seq_len, hidden_size = hidden_states.shape
+ # launching the triton kernel for just one token will actually be slower
+ mode = 'fused_recurrent' if hidden_states.shape[1] == 1 else self.mode
+
+ last_state = None
+ if past_key_values is not None and len(past_key_values) > self.layer_idx:
+ last_state = past_key_values[self.layer_idx]
+
+ if attention_mask is not None:
+ hidden_states = hidden_states.mul_(attention_mask[:, -hidden_states.shape[-2]:, None])
+ if hidden_states.shape[1] == 1 and last_state is not None:
+ shifted = last_state['conv_state'].unsqueeze(1)
+ else:
+ shifted = self.time_shift(hidden_states)
+ if last_state is not None:
+ shifted[:, 0] = last_state['conv_state'][0]
+
+ delta = shifted - hidden_states
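+ # a single low-rank projection produces the token-shift mixing vectors for all five
+ # branches (r, w, k, v, g) at once; they are unbound below and fed to the DDLerp projections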
+ x = self.x_proj[0](hidden_states, delta).view(batch_size, seq_len, -1, self.proj_low_rank_dim)
+ x = torch.einsum('b t n r, h n r-> b t n h', self.x_proj[1](x), self.x_proj[2].weight.view(hidden_size, 5, -1))
+
+ r, w, k, v, g = x.add_(self.x_bias).unbind(-2)
+ r = self.r_proj(hidden_states, r, delta)
+ w = self.w_proj(hidden_states, w, delta)
+ k = self.k_proj(hidden_states, k, delta)
+ v = self.v_proj(hidden_states, v, delta)
+ g = self.g_proj(hidden_states, g, delta)
+
+ # dealing with left-padding
+ if attention_mask is not None:
+ v = v.mul_(attention_mask[:, -v.shape[-2]:, None])
+ r, w, k, v = map(lambda x: rearrange(x, 'b t (h d) -> b t h d', h=self.num_heads), (r, w, k, v))
+ w = -torch.exp(w)
+ u = self.bonus
+
+ recurrent_state = last_state['recurrent_state'] if last_state is not None else None
+ if mode == 'fused_recurrent':
+ o, recurrent_state = fused_recurrent_rwkv6(
+ r=r,
+ k=k,
+ v=v,
+ w=w,
+ u=u,
+ scale=1.,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ elif mode == 'chunk':
+ o, recurrent_state = chunk_rwkv6(
+ q=r,
+ k=k,
+ v=v,
+ g=w,
+ u=u,
+ scale=1.,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ else:
+ raise NotImplementedError(f"Not supported mode `{mode}`.")
+
+ if past_key_values is not None:
+ past_key_values.update(
+ recurrent_state=recurrent_state,
+ conv_state=hidden_states[:, -1],
+ layer_idx=self.layer_idx,
+ offset=r.shape[1]
+ )
+
+ o = self.g_norm(rearrange(o, '... h d -> ... (h d)')) * self.gate_fn(g)
+ o = self.o_proj(o)
+
+ return o, None, past_key_values
+
+
+class LoRA(nn.Module):
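+ # Small two-layer bottleneck (Linear -> Tanh -> Linear) used to keep the
+ # data-dependent projections below cheap when `low_rank_dim` is given.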
+
+ def __init__(
+ self,
+ input_dim: int,
+ output_dim: int,
+ low_rank_dim: int,
+ bias: Optional[bool] = True
+ ):
+ super().__init__()
+
+ self.input_dim = input_dim
+ self.output_dim = output_dim
+ self.low_rank_dim = low_rank_dim
+ self.bias = bias
+
+ self.lora = nn.Sequential(
+ nn.Linear(input_dim, low_rank_dim, bias=False),
+ nn.Tanh(),
+ nn.Linear(low_rank_dim, output_dim, bias=bias)
+ )
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}("
+ s += f"input_dim={self.input_dim}, low_rank_dim={self.low_rank_dim}, output_dim={self.output_dim}"
+ if not self.bias:
+ s += f", bias={self.bias}"
+ s += ")"
+ return s
+
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
+ return self.lora(x)
+
+
+class LerpLinear(nn.Module):
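+ # Token-shift projection: each token is linearly interpolated with its predecessor
+ # (`delta = shift(x) - x`) using a learned per-channel weight `mu` before the
+ # projection, i.e. linear(x + delta * mu).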
+
+ def __init__(
+ self,
+ input_dim: int,
+ output_dim: int,
+ low_rank_dim: Optional[int] = None
+ ):
+ super().__init__()
+
+ self.input_dim = input_dim
+ self.output_dim = output_dim
+ self.low_rank_dim = low_rank_dim
+
+ self.time_shift = nn.ZeroPad2d((0, 0, 1, -1))
+ if low_rank_dim is None:
+ self.linear = nn.Linear(input_dim, output_dim, bias=False)
+ else:
+ self.linear = LoRA(input_dim, output_dim, low_rank_dim)
+ self.mu = nn.Parameter(torch.zeros(input_dim))
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}({self.input_dim}, {self.output_dim}"
+ if self.low_rank_dim is not None:
+ s += f", low_rank_dim={self.low_rank_dim}"
+ s += ")"
+ return s
+
+ def forward(self, x: torch.Tensor, delta: Optional[torch.Tensor] = None) -> torch.Tensor:
+ if delta is None:
+ shifted = self.time_shift(x)
+ if len(shifted.shape) == 2:
+ shifted = shifted.unsqueeze(1)
+ delta = shifted - x
+ return self.linear(x + delta * self.mu)
+
+
+class DDLerpLinear(nn.Module):
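+ # Data-dependent variant of `LerpLinear`: the mixing weight `mu` is computed per token
+ # by the caller (see `RWKV6Attention.forward`) rather than being a learned constant.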
+
+ def __init__(
+ self,
+ input_dim: int,
+ output_dim: int,
+ low_rank_dim: Optional[int] = None
+ ):
+ super().__init__()
+
+ self.input_dim = input_dim
+ self.output_dim = output_dim
+ self.low_rank_dim = low_rank_dim
+
+ self.time_shift = nn.ZeroPad2d((0, 0, 1, -1))
+ if low_rank_dim is None:
+ self.linear = nn.Linear(input_dim, output_dim, bias=False)
+ else:
+ self.linear = LoRA(input_dim, output_dim, low_rank_dim)
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}({self.input_dim}, {self.output_dim}"
+ if self.low_rank_dim is not None:
+ s += f", low_rank_dim={self.low_rank_dim}"
+ s += ")"
+ return s
+
+ def forward(self, x: torch.Tensor, mu: torch.Tensor, delta: Optional[torch.Tensor] = None) -> torch.Tensor:
+ if delta is None:
+ shifted = self.time_shift(x)
+ if len(shifted.shape) == 2:
+ shifted = shifted.unsqueeze(1)
+ delta = shifted - x
+ return self.linear(x + delta * mu)
diff --git a/fla/layers/scan.py b/fla/layers/scan.py
new file mode 100644
index 0000000000000000000000000000000000000000..a7d167ac64493fc11cd1e70d5c1137a94b49bf32
--- /dev/null
+++ b/fla/layers/scan.py
@@ -0,0 +1,237 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from __future__ import annotations
+
+import warnings
+from typing import TYPE_CHECKING, Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from einops import rearrange
+
+from fla.modules import RMSNorm
+from fla.modules.activations import swish, sigmoid
+from fla.modules.layernorm import rms_norm_linear
+from fla.ops.scan import parallel_scan, naive_recurrent_scan
+
+if TYPE_CHECKING:
+ from fla.models.utils import Cache
+
+def build_alibi_tensor_scan(head_num, seq_len, window_len, state_size):
+ slopes = torch.tensor([2 ** (-8.0 * i / head_num) for i in range(head_num)])
+ alibi = torch.zeros((head_num, seq_len, window_len))
+ for i in range(seq_len):
+ for j in range(window_len):
+ if i < window_len:
+ alibi[:, i, j] = slopes * (j - window_len + 1) if i > (window_len - j - 2) else 0
+ else:
+ alibi[:, i, j] = alibi[:, window_len-1, j]
+ # Now concat a zeros tensor of size (head_num, seq_len, state_size) to the left of the above square tensor
+ alibi = torch.cat((torch.zeros(head_num, seq_len, state_size), alibi), dim=2)
+ return alibi # shape: (head_num, seq_len, state_size + window_size) or (H, T, S + W)
+
+def scores_mask(T, W, S):
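+    # Combined score mask of shape (T, S + W): all-ones over the S state slots, followed by
+    # a causal lower-right triangular mask over the local window of width W.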
+ # create lower right triangle mask (W, W)
+ mask = torch.tril(torch.ones(W, W)).flip(1)
+ # concat ones with size (T-W, W) in 0th dim
+ mask = torch.cat((mask, torch.ones(T-W, W)), dim=0)
+ # concat ones with size (T, S) in 1st dim
+ mask = torch.cat((torch.ones(T, S), mask), dim=1)
+ return mask # shape: (T, S + W)
+
+class SemiCompressedAttention(nn.Module):
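+    """
+    Semi-compressed attention (SCAN) layer: each query attends to the last `window_size`
+    tokens of its local window plus `state_size` compressed state slots, with per-slot
+    gates from `g_proj` (softmax or sigmoid, controlled by `gate_act`) and ALiBi biases
+    applied inside the window. The actual kernels live in `fla.ops.scan`.
+    """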
+
+ def __init__(
+ self,
+ mode: str = 'parallel',
+ hidden_size: int = 1024,
+ window_size: int = 512,
+ state_size: int = 64,
+ gate_act: str = 'softmax',
+ max_position_embeddings: Optional[int] = 2048,
+ expand_k: float = 1.,
+ expand_v: float = 1.,
+ num_heads: int = 4,
+ num_kv_heads: Optional[int] = None,
+ elementwise_affine: Optional[bool] = True,
+ norm_first: bool = True,
+ norm_eps: float = 1e-5,
+ gate_logit_normalizer: int = 8,
+ use_output_gate: bool = False,
+ use_norm: bool = True,
+ layer_idx: Optional[int] = None,
+ scale: Optional[float] = 1.,
+ **kwargs
+ ) -> SemiCompressedAttention:
+ super().__init__()
+
+ self.mode = mode
+ self.hidden_size = hidden_size
+ self.window_size = window_size
+ self.state_size = state_size
+ self.gate_act = gate_act
+ self.max_position_embeddings = max_position_embeddings
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.num_heads = num_heads
+ self.num_kv_heads = num_heads if num_kv_heads is None else num_kv_heads
+ self.num_kv_groups = self.num_heads // self.num_kv_heads
+ self.key_dim = int(hidden_size * expand_k)
+ self.value_dim = int(hidden_size * expand_v)
+ self.key_dim_per_group = self.key_dim // self.num_kv_groups
+ self.value_dim_per_group = self.value_dim // self.num_kv_groups
+ self.head_k_dim = self.key_dim // self.num_heads
+ self.head_v_dim = self.value_dim // self.num_heads
+
+ self.gate_logit_normalizer = gate_logit_normalizer
+
+ self.use_output_gate = use_output_gate
+ self.use_norm = use_norm
+ self.scale = scale
+
+ self.norm_first = norm_first
+
+ self.layer_idx = layer_idx
+
+ if layer_idx is None:
+ warnings.warn(
+ f"Instantiating {self.__class__.__name__} without passing `layer_idx` is not recommended and will "
+                "lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
+ "when creating this class."
+ )
+
+ if norm_first:
+ self.norm = RMSNorm(self.hidden_size, eps=norm_eps)
+
+ self.q_proj = nn.Linear(self.hidden_size, self.key_dim, bias=False)
+ self.k_proj = nn.Linear(self.hidden_size, self.key_dim_per_group, bias=False)
+ self.v_proj = nn.Linear(self.hidden_size, self.value_dim_per_group, bias=False)
+ self.s_proj = nn.Linear(self.hidden_size, self.key_dim_per_group, bias=False)
+ self.g_proj = nn.Linear(self.hidden_size, self.num_heads * self.state_size, bias=False)
+
+ self.norm = RMSNorm(self.hidden_size, elementwise_affine, eps=norm_eps)
+ self.o_proj = nn.Linear(self.value_dim, self.hidden_size, bias=False)
+
+ self.apply(self._initialize_weights)
+
+ self.register_buffer('alibi', build_alibi_tensor_scan(self.num_heads, self.max_position_embeddings, self.window_size, self.state_size))
+ self.register_buffer('mask', scores_mask(self.max_position_embeddings, self.window_size, self.state_size))
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]:
+ if attention_mask is not None:
+ assert len(attention_mask.shape) == 2, (
+ "Expected attention_mask as a 0-1 matrix with shape [batch_size, seq_len] "
+ "for padding purposes (0 indicating padding). "
+ "Arbitrary attention masks of shape [batch_size, seq_len, seq_len] are not allowed."
+ )
+
+        # fall back to the naive recurrent mode when a cache is present; the parallel Triton kernel is slower for single-token decoding
+ mode = 'naive' if past_key_values is not None else self.mode
+
+ if self.norm_first:
+ hidden_states = self.norm(hidden_states)
+
+ last_state = None
+ if past_key_values is not None and len(past_key_values) > self.layer_idx:
+ last_state = past_key_values[self.layer_idx]
+
+ q = self.q_proj(hidden_states)
+ k = self.k_proj(hidden_states)
+ v = self.v_proj(hidden_states)
+ s = self.s_proj(hidden_states)
+ g = self.g_proj(hidden_states)
+
+ if self.gate_act == 'softmax':
+ g = F.softmax(g, dim=-1)
+ elif self.gate_act == 'sigmoid':
+ g = sigmoid(g)
+ else:
+ raise NotImplementedError(f"Gate activation `{self.gate_act}` is not supported.")
+
+ # KV cache is updated before going into SCAN
+ if past_key_values is not None:
+ k, v = past_key_values.update(
+ attn_state=(k, v),
+ layer_idx=self.layer_idx,
+                offset=q.shape[1],
+ # We actually don't want to crop to window for the initial prompt, only for subsequent autoregressive tokens
+ cache_kwargs=dict(window_size=self.window_size) if q.shape[-2] == 1 else dict()
+ )['attn_state']
+
+ recurrent_state = last_state['recurrent_state'] if last_state is not None else None
+ if mode == 'parallel':
+ # Split heads (but merge with batch dimension because kernels receive (B T C) shape)
+ q = rearrange(q, 'b t (h c) -> (b h) t c', h=self.num_heads)
+ k = rearrange(k, 'b t (h c) -> (b h) t c', h=self.num_kv_heads)
+ v = rearrange(v, 'b t (h c) -> (b h) t c', h=self.num_kv_heads)
+ s = rearrange(s, 'b t (h c) -> (b h) t c', h=self.num_kv_heads)
+ g = rearrange(g, 'b t (h s) -> (b h) t s', h=self.num_kv_heads)
+ o, recurrent_state = parallel_scan(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ g=g,
+ window_size=self.window_size,
+ num_heads=self.num_heads,
+ alibi=self.alibi.to(q.device),
+ mask=self.mask.to(q.device),
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ scale=self.scale,
+ head_first=False
+ )
+ o = rearrange(o, '(b h) t c -> b t (h c)', h=self.num_heads)
+ elif mode == 'naive':
+ # TODO: Implement naive recurrent SCAN for inference
+ q = rearrange(q, 'b t (h c) -> b h t c', h=self.num_heads)
+ k = rearrange(k, 'b t (h c) -> b h t c', h=self.num_kv_heads)
+ v = rearrange(v, 'b t (h c) -> b h t c', h=self.num_kv_heads)
+ s = rearrange(s, 'b t (h c) -> b h t c', h=self.num_kv_heads)
+ g = rearrange(g, 'b t (h s) -> b h t s', h=self.num_kv_heads)
+ o, recurrent_state = naive_recurrent_scan(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ g=g,
+ window_size=self.window_size,
+ alibi=self.alibi.to(q.device),
+ mask=self.mask.to(q.device),
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ scale=self.scale,
+ head_first=False
+ )
+ o = rearrange(o, 'b h t c -> b t (h c)', h=self.num_heads)
+ else:
+ raise NotImplementedError(f"Not supported mode `{mode}`.")
+
+ # Update the recurrent state after SCAN
+ if past_key_values is not None:
+ past_key_values.update(
+ recurrent_state=recurrent_state,
+ layer_idx=self.layer_idx
+ )
+
+ o = rms_norm_linear(swish(o), self.norm.weight, self.norm.bias, self.o_proj.weight, self.o_proj.bias)
+ return o, None, past_key_values
diff --git a/fla/layers/simple_gla.py b/fla/layers/simple_gla.py
new file mode 100644
index 0000000000000000000000000000000000000000..d25ad738960121f8a51a757d357a219d998c54d7
--- /dev/null
+++ b/fla/layers/simple_gla.py
@@ -0,0 +1,252 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from einops import rearrange, repeat
+
+from fla.modules import FusedRMSNormSwishGate, RMSNorm, ShortConvolution
+from fla.modules.activations import ACT2FN
+from fla.ops.simple_gla import chunk_simple_gla, fused_recurrent_simple_gla
+
+if TYPE_CHECKING:
+ from fla.models.utils import Cache
+
+
+class SimpleGatedLinearAttention(nn.Module):
+ r"""
+    The layer implementation for [Gated Linear Attention Transformers with Hardware-Efficient Training](https://arxiv.org/abs/2312.06635). # noqa
+ This layer calls the simplified GLA kernel in which the gating is head-wise instead of elementwise.
+
+ Args:
+ mode (str, Optional):
+ Which GLA kernel to use.
+            Currently available: `chunk` and `fused_recurrent`.
+ Default: `chunk`.
+ hidden_size (int, Optional):
+ The hidden size of the input. Default: 1024.
+ expand_k (float, Optional):
+ The expansion ratio for the key dim. Default: 1.0.
+ expand_v (float, Optional):
+ The expansion ratio for the value dim. Default: 1.0.
+ num_heads (int, Optional):
+ The number of heads. Default: 4.
+ num_kv_heads (int, Optional):
+ The number of key/value heads, used for MQA. Default: None.
+ feature_map (str, Optional):
+ Feature map function applied to queries/keys. Default: None.
+ use_short_conv (bool, Optional):
+            Whether to use short convolutions. Default: `True`.
+ conv_size (int, Optional):
+ The kernel size of the short convolution, only used when `use_short_conv` is `True`. Default: 4.
+ conv_bias (bool, Optional):
+ Whether to use bias in the short convolution, only used when `use_short_conv` is `True`. Default: `False`.
+ gate_fn (str, Optional):
+ The activation function for the output gate. Default: `swish`.
+ elementwise_affine (bool, Optional):
+ If `True`, applies elementwise affine to LayerNorm with learnable parameters. Default: `True`.
+ norm_eps (float, Optional):
+ The epsilon value for the layernorm/rmsnorm layer. Default: 1e-5.
+ gate_logit_normalizer (int, Optional):
+            The normalizer for the gate logits, applied after `logsigmoid`. Default: 16.
+ fuse_norm (bool, Optional):
+ Whether to fuse the norm and the output gate for better memory footprint. Default: `True`.
+ layer_idx (int, Optional):
+ The index of the layer. Default: None.
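+
+    Example (a minimal usage sketch; the Triton kernels require a CUDA device):
+        >>> import torch
+        >>> layer = SimpleGatedLinearAttention(hidden_size=1024, num_heads=4).cuda()
+        >>> x = torch.randn(2, 128, 1024, device='cuda')
+        >>> o, _, _ = layer(x)  # `o` has shape [2, 128, 1024], the same as the input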
+ """
+
+ def __init__(
+ self,
+ mode: str = 'chunk',
+ hidden_size: int = 1024,
+ expand_k: float = 1.,
+ expand_v: float = 1.,
+ num_heads: int = 4,
+ num_kv_heads: Optional[int] = None,
+ feature_map: Optional[str] = None,
+ use_short_conv: bool = True,
+ conv_size: int = 4,
+ conv_bias: bool = False,
+ gate_fn: str = 'swish',
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-5,
+ gate_logit_normalizer: int = 16,
+ fuse_norm: bool = True,
+        layer_idx: Optional[int] = None,
+ ) -> SimpleGatedLinearAttention:
+ super().__init__()
+
+ self.mode = mode
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads if num_kv_heads is not None else num_heads
+ self.num_kv_groups = self.num_heads // self.num_kv_heads
+ self.feature_map_fn = ACT2FN[feature_map] if feature_map is not None else None
+
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.conv_bias = conv_bias
+
+ self.key_dim = int(hidden_size * expand_k)
+ self.value_dim = int(hidden_size * expand_v)
+ self.key_dim_per_group = self.key_dim // self.num_kv_groups
+ self.value_dim_per_group = self.value_dim // self.num_kv_groups
+ self.layer_idx = layer_idx
+
+        assert mode in ['chunk', 'fused_recurrent'], f"Not supported mode `{mode}`."
+ assert self.key_dim % num_heads == 0, f"key dim must be divisible by num_heads of {num_heads}"
+ assert self.value_dim % num_heads == 0, f"value dim must be divisible by num_heads of {num_heads}"
+
+ self.head_qk_dim = self.key_dim // num_heads
+ self.head_v_dim = self.value_dim // num_heads
+
+ self.q_proj = nn.Linear(hidden_size, self.key_dim, bias=False)
+ self.k_proj = nn.Linear(hidden_size, self.key_dim_per_group, bias=False)
+ self.v_proj = nn.Linear(hidden_size, self.value_dim_per_group, bias=False)
+ self.g_proj = nn.Linear(hidden_size, self.value_dim, bias=False)
+
+ if use_short_conv:
+ self.conv_size = conv_size
+ self.q_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu')
+ self.k_conv1d = ShortConvolution(self.key_dim_per_group, conv_size, activation='silu')
+ self.v_conv1d = ShortConvolution(self.value_dim_per_group, conv_size, activation='silu')
+
+ self.gk_proj = nn.Linear(hidden_size, self.num_heads)
+
+ if gate_fn == 'swish' and fuse_norm:
+ self.g_norm_swish_gate = FusedRMSNormSwishGate(self.head_v_dim, elementwise_affine, norm_eps)
+ self.fuse_norm_and_gate = True
+ else:
+ self.fuse_norm_and_gate = False
+ self.g_norm = RMSNorm(hidden_size=self.head_v_dim, elementwise_affine=elementwise_affine, eps=norm_eps)
+ self.gate_fn = ACT2FN[gate_fn]
+ self.o_proj = nn.Linear(self.value_dim, hidden_size, bias=False)
+
+ self.gate_logit_normalizer = gate_logit_normalizer
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]:
+ if attention_mask is not None:
+ assert len(attention_mask.shape) == 2, (
+ "Expected attention_mask as a 0-1 matrix with shape [batch_size, seq_len] "
+ "for padding purposes (0 indicating padding). "
+ "Arbitrary attention masks of shape [batch_size, seq_len, seq_len] are not allowed."
+ )
+
+ # launching the triton kernel for just one token will actually be slower
+ mode = 'fused_recurrent' if hidden_states.shape[1] == 1 else self.mode
+
+ last_state = None
+ if past_key_values is not None and len(past_key_values) > self.layer_idx:
+ last_state = past_key_values[self.layer_idx]
+
+ if self.use_short_conv:
+ conv_state_q, conv_state_k, conv_state_v = None, None, None
+ if last_state is not None:
+ conv_state_q, conv_state_k, conv_state_v = last_state['conv_state']
+ conv_mask = attention_mask[:, -hidden_states.shape[1]:] if attention_mask is not None else None
+ q, conv_state_q = self.q_conv1d(x=self.q_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_q,
+ output_final_state=use_cache)
+ k, conv_state_k = self.k_conv1d(x=self.k_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_k,
+ output_final_state=use_cache)
+ v, conv_state_v = self.v_conv1d(x=self.v_proj(hidden_states),
+ mask=conv_mask,
+ cache=conv_state_v,
+ output_final_state=use_cache)
+ else:
+ q = self.q_proj(hidden_states)
+ k = self.k_proj(hidden_states)
+ v = self.v_proj(hidden_states)
+ gk = self.gk_proj(hidden_states)
+
+ if self.feature_map_fn is not None:
+ q, k = map(self.feature_map_fn, (q, k))
+ # dealing with left-padding
+ if attention_mask is not None:
+ v = v.mul_(attention_mask[:, -v.shape[-2]:, None])
+ q = rearrange(q, '... (h d) -> ... h d', h=self.num_heads)
+ if self.num_kv_groups > 1:
+ k, v = (repeat(x, '... (h d) -> ... (h g) d', h=self.num_kv_heads, g=self.num_kv_groups) for x in (k, v))
+ else:
+ k, v = (rearrange(x, '... (h d) -> ... h d', h=self.num_kv_heads) for x in (k, v))
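+        # head-wise forget gate in log space: `logsigmoid` keeps it negative and the
+        # normalizer shrinks its magnitude so that decay rates start close to 1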
+ gk = F.logsigmoid(gk) / self.gate_logit_normalizer
+
+ recurrent_state = last_state['recurrent_state'] if last_state is not None else None
+ if mode == 'chunk':
+ o, recurrent_state = chunk_simple_gla(
+ q=q,
+ k=k,
+ v=v,
+ gk=gk,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ elif mode == 'fused_recurrent':
+ o, recurrent_state = fused_recurrent_simple_gla(
+ q=q,
+ k=k,
+ v=v,
+ gk=gk,
+ initial_state=recurrent_state,
+ output_final_state=use_cache,
+ head_first=False
+ )
+ else:
+ raise NotImplementedError(f"Not supported mode `{mode}`.")
+
+ if past_key_values is not None:
+ past_key_values.update(
+ recurrent_state=recurrent_state,
+ conv_state=(conv_state_q, conv_state_k, conv_state_v) if self.use_short_conv else None,
+ layer_idx=self.layer_idx,
+                offset=q.shape[1]
+ )
+
+ g = self.g_proj(hidden_states)
+ if self.fuse_norm_and_gate:
+ g = rearrange(g, 'b t (h d) -> b t h d', h=self.num_heads)
+ o = self.g_norm_swish_gate(o, g)
+ o = rearrange(o, 'b t h d -> b t (h d)')
+ else:
+ o = rearrange(self.g_norm(o), 'b t h d -> b t (h d)')
+ o = o * self.gate_fn(g)
+ o = self.o_proj(o)
+
+ return o, None, past_key_values
+
+ def state_size(self, **kwargs) -> int:
+ state_size = self.key_dim * self.head_v_dim
+ for module in self.children():
+ if isinstance(module, ShortConvolution):
+ state_size += module.state_size
+ return state_size
diff --git a/fla/models/__init__.py b/fla/models/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..dfb1a000adc48c8bf62d827a8f78ad648b36421a
--- /dev/null
+++ b/fla/models/__init__.py
@@ -0,0 +1,39 @@
+# -*- coding: utf-8 -*-
+
+from fla.models.abc import ABCConfig, ABCForCausalLM, ABCModel
+from fla.models.bitnet import BitNetConfig, BitNetForCausalLM, BitNetModel
+from fla.models.delta_net import (DeltaNetConfig, DeltaNetForCausalLM,
+ DeltaNetModel)
+from fla.models.gla import GLAConfig, GLAForCausalLM, GLAModel
+from fla.models.gsa import GSAConfig, GSAForCausalLM, GSAModel
+from fla.models.hgrn import HGRNConfig, HGRNForCausalLM, HGRNModel
+from fla.models.hgrn2 import HGRN2Config, HGRN2ForCausalLM, HGRN2Model
+from fla.models.linear_attn import (LinearAttentionConfig,
+ LinearAttentionForCausalLM,
+ LinearAttentionModel)
+from fla.models.mamba import MambaConfig, MambaForCausalLM, MambaModel
+from fla.models.mamba2 import Mamba2Config, Mamba2ForCausalLM, Mamba2Model
+from fla.models.retnet import RetNetConfig, RetNetForCausalLM, RetNetModel
+from fla.models.rwkv6 import RWKV6Config, RWKV6ForCausalLM, RWKV6Model
+from fla.models.samba import SambaConfig, SambaForCausalLM, SambaModel
+from fla.models.scan import SCANConfig, SCANForCausalLM, SCANModel
+from fla.models.transformer import (TransformerConfig, TransformerForCausalLM,
+ TransformerModel)
+
+__all__ = [
+ 'ABCConfig', 'ABCForCausalLM', 'ABCModel',
+ 'BitNetConfig', 'BitNetForCausalLM', 'BitNetModel',
+ 'DeltaNetConfig', 'DeltaNetForCausalLM', 'DeltaNetModel',
+ 'GLAConfig', 'GLAForCausalLM', 'GLAModel',
+ 'GSAConfig', 'GSAForCausalLM', 'GSAModel',
+ 'HGRNConfig', 'HGRNForCausalLM', 'HGRNModel',
+ 'HGRN2Config', 'HGRN2ForCausalLM', 'HGRN2Model',
+ 'LinearAttentionConfig', 'LinearAttentionForCausalLM', 'LinearAttentionModel',
+ 'MambaConfig', 'MambaForCausalLM', 'MambaModel',
+ 'Mamba2Config', 'Mamba2ForCausalLM', 'Mamba2Model',
+ 'RetNetConfig', 'RetNetForCausalLM', 'RetNetModel',
+ 'RWKV6Config', 'RWKV6ForCausalLM', 'RWKV6Model',
+ 'SambaConfig', 'SambaForCausalLM', 'SambaModel',
+ 'SCANConfig', 'SCANForCausalLM', 'SCANModel',
+ 'TransformerConfig', 'TransformerForCausalLM', 'TransformerModel'
+]
diff --git a/fla/models/abc/__init__.py b/fla/models/abc/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..f7021f22ff0f9781432bd3969473520851f4b553
--- /dev/null
+++ b/fla/models/abc/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.abc.configuration_abc import ABCConfig
+from fla.models.abc.modeling_abc import ABCForCausalLM, ABCModel
+
+AutoConfig.register(ABCConfig.model_type, ABCConfig)
+AutoModel.register(ABCConfig, ABCModel)
+AutoModelForCausalLM.register(ABCConfig, ABCForCausalLM)
+
+
+__all__ = ['ABCConfig', 'ABCForCausalLM', 'ABCModel']
diff --git a/fla/models/abc/configuration_abc.py b/fla/models/abc/configuration_abc.py
new file mode 100644
index 0000000000000000000000000000000000000000..bdb7f4e2276db932ad5cdd80adbcb14a49e41927
--- /dev/null
+++ b/fla/models/abc/configuration_abc.py
@@ -0,0 +1,84 @@
+# -*- coding: utf-8 -*-
+
+from typing import Dict, Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class ABCConfig(PretrainedConfig):
+
+ model_type = 'abc'
+ keys_to_ignore_at_inference = ['past_key_values']
+
+ def __init__(
+ self,
+ hidden_size: int = 2048,
+ gate_low_rank_dim: int = 16,
+ clamp_min: float = -32,
+ clamp_max: float = 32,
+ hidden_ratio: Optional[int] = 4,
+ intermediate_size: Optional[int] = None,
+ num_hidden_layers: int = 24,
+ num_heads: int = 4,
+ num_slots: Optional[int] = 64,
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+        expand_k: float = 0.5,
+        expand_v: float = 1,
+ hidden_act: str = "swish",
+ max_position_embeddings: int = 2048,
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-6,
+ attn: Optional[Dict] = None,
+ use_cache: bool = True,
+ pad_token_id: int = None,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ tie_word_embeddings: bool = False,
+ initializer_range: float = 0.02,
+ fuse_norm: bool = True,
+ fuse_cross_entropy: bool = True,
+ vocab_size: int = 32000,
+ **kwargs
+ ):
+ self.hidden_size = hidden_size
+ self.gate_low_rank_dim = gate_low_rank_dim
+ self.clamp_min = clamp_min
+ self.clamp_max = clamp_max
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.num_hidden_layers = num_hidden_layers
+ self.num_heads = num_heads
+ self.num_slots = num_slots
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+        self.expand_k = expand_k
+        self.expand_v = expand_v
+ self.hidden_act = hidden_act
+ self.max_position_embeddings = max_position_embeddings
+ self.elementwise_affine = elementwise_affine
+ self.norm_eps = norm_eps
+ self.attn = attn
+ self.use_cache = use_cache
+ self.initializer_range = initializer_range
+ self.fuse_norm = fuse_norm
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.vocab_size = vocab_size
+
+ if attn is not None:
+ if not isinstance(attn, Dict):
+ raise ValueError("attn must be a dictionary")
+ if 'layers' not in attn:
+ raise ValueError("Layer indices must be provided to initialize hybrid attention layers")
+ if 'num_heads' not in attn:
+ raise ValueError("Number of heads must be provided to initialize hybrid attention layers")
+ attn['num_kv_heads'] = attn.get('num_kv_heads', attn['num_heads'])
+ attn['window_size'] = attn.get('window_size', None)
+
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/abc/modeling_abc.py b/fla/models/abc/modeling_abc.py
new file mode 100644
index 0000000000000000000000000000000000000000..3db6a5b05275c87e8c2ec089be58b51d91c61a3d
--- /dev/null
+++ b/fla/models/abc/modeling_abc.py
@@ -0,0 +1,403 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+import torch.utils.checkpoint
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_outputs import (BaseModelOutputWithPast,
+ CausalLMOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import logging
+
+from fla.layers.abc import ABCAttention
+from fla.layers.attn import Attention
+from fla.models.abc.configuration_abc import ABCConfig
+from fla.models.utils import Cache
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.activations import swiglu_linear
+
+logger = logging.get_logger(__name__)
+
+
+class ABCMLP(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = 'swish'
+ ) -> ABCMLP:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ # the final number of params is `hidden_ratio * hidden_size^2`
+ # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio`
+ if hidden_ratio is None:
+ hidden_ratio = 4
+ if intermediate_size is None:
+ intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
+ intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256)
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False)
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[hidden_act]
+
+ def forward(self, x):
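+        # fused SwiGLU: a single projection produces both the gate and the value branch,
+        # which `swiglu_linear` recombines together with the down projection in one fused op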
+ y = self.gate_proj(x)
+ gate, y = y.chunk(2, -1)
+ return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias)
+
+
+class ABCBlock(nn.Module):
+ def __init__(self, config: ABCConfig, layer_idx: int):
+ super().__init__()
+ self.hidden_size = config.hidden_size
+
+ self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ if config.attn is not None and layer_idx in config.attn['layers']:
+ self.attn = Attention(
+ hidden_size=config.hidden_size,
+ num_heads=config.attn['num_heads'],
+ num_kv_heads=config.attn['num_kv_heads'],
+ window_size=config.attn['window_size'],
+ max_position_embeddings=config.max_position_embeddings,
+ layer_idx=layer_idx
+ )
+ else:
+ self.attn = ABCAttention(
+ hidden_size=config.hidden_size,
+ expand_k=config.expand_k,
+ expand_v=config.expand_v,
+ num_heads=config.num_heads,
+ num_slots=config.num_slots,
+ use_short_conv=config.use_short_conv,
+ conv_size=config.conv_size,
+ gate_fn=config.hidden_act,
+ elementwise_affine=config.elementwise_affine,
+ norm_eps=config.norm_eps,
+ clamp_min=config.clamp_min,
+ clamp_max=config.clamp_max,
+ fuse_norm=config.fuse_norm,
+ layer_idx=layer_idx
+ )
+ self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.mlp = ABCMLP(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ intermediate_size=config.intermediate_size,
+ hidden_act=config.hidden_act
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs,
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+
+ residual = hidden_states
+
+ hidden_states = self.attn_norm(hidden_states)
+ hidden_states, attentions, past_key_values = self.attn(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+ hidden_states, residual = self.mlp_norm(hidden_states, residual, True)
+ hidden_states = self.mlp(hidden_states)
+ hidden_states = residual + hidden_states
+
+ outputs = (hidden_states, attentions, past_key_values)
+
+ return outputs
+
+
+class ABCPreTrainedModel(PreTrainedModel):
+
+ config_class = ABCConfig
+ supports_gradient_checkpointing = True
+ _no_split_modules = ['ABCBlock']
+
+ def __init__(self, *inputs, **kwargs):
+ super().__init__(*inputs, **kwargs)
+
+ def _init_weights(
+ self,
+ module: nn.Module,
+ rescale_prenorm_residual: bool = True,
+ num_residuals_per_layer: int = 2,
+ ):
+ if isinstance(module, (nn.Linear, nn.Conv1d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ if rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["o_proj.weight", "down_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ with torch.no_grad():
+ p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers)
+
+
+class ABCModel(ABCPreTrainedModel):
+
+ def __init__(self, config: ABCConfig):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+ self.layers = nn.ModuleList([ABCBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)])
+ self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps)
+
+ self.gradient_checkpointing = False
+
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, value):
+ self.embeddings = value
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None, # noqa
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
+ if output_attentions:
+            warnings.warn("`ABCModel` does not support `output_attentions` now, so it is set to `False`.")
+ output_attentions = False
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ if input_ids is None and inputs_embeds is None:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+ hidden_states = inputs_embeds
+
+ if use_cache and not isinstance(past_key_values, Cache):
+ past_key_values = Cache.from_legacy_cache(past_key_values)
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ logger.warning_once("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
+ use_cache = False
+
+ all_hidden_states = () if output_hidden_states else None
+ all_attns = () if output_attentions else None
+ for layer in self.layers:
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if self.gradient_checkpointing and self.training:
+ hidden_states, attentions, past_key_values = self._gradient_checkpointing_func(
+ layer.__call__,
+ hidden_states,
+ attention_mask,
+ past_key_values,
+ use_cache,
+ output_attentions
+ )
+ else:
+ hidden_states, attentions, past_key_values = layer(
+ hidden_states,
+ attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+
+ if output_attentions:
+ all_attns += (attentions,)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if not return_dict:
+ return tuple(i for i in [hidden_states, past_key_values, all_hidden_states, all_attns] if i is not None)
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=past_key_values,
+ hidden_states=all_hidden_states,
+ attentions=all_attns
+ )
+
+
+class ABCForCausalLM(ABCPreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = ABCModel(config)
+ self.vocab_size = config.vocab_size
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.embeddings = value
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ def generate(self, *args, **kwargs):
+ try:
+ return super().generate(*args, **kwargs)
+ except AttributeError as exception:
+ if 'past_key_values' in str(exception):
+ raise AttributeError(
+ f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, "
+ f"which is not supported for {self.__class__.__name__}. "
+ f"Try another generation strategy instead. "
+ f"For the available generation strategies, check this doc: "
+ f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies"
+ )
+ else:
+ raise exception
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids: torch.LongTensor = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ **kwargs
+ ):
+        # only use the last token of `input_ids` if `past_key_values` is passed along.
+ if past_key_values is not None:
+ input_ids = input_ids[:, -1:]
+
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {'inputs_embeds': inputs_embeds}
+ else:
+ model_inputs = {'input_ids': input_ids}
+ model_inputs['past_key_values'] = past_key_values
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ inputs_embeds=inputs_embeds,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict
+ )
+
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
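+        # with the fused linear + cross-entropy path, logits are never materialized;
+        # the loss below is computed directly from the hidden states and the `lm_head` weights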
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
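+            # shift labels left by one so position t predicts token t + 1; the last slot is filled with `ignore_index`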
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
diff --git a/fla/models/bitnet/__init__.py b/fla/models/bitnet/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..bede22c64707be1ff17f402c0af6ed9da1ff1aee
--- /dev/null
+++ b/fla/models/bitnet/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.bitnet.configuration_bitnet import BitNetConfig
+from fla.models.bitnet.modeling_bitnet import BitNetForCausalLM, BitNetModel
+
+AutoConfig.register(BitNetConfig.model_type, BitNetConfig)
+AutoModel.register(BitNetConfig, BitNetModel)
+AutoModelForCausalLM.register(BitNetConfig, BitNetForCausalLM)
+
+
+__all__ = ['BitNetConfig', 'BitNetForCausalLM', 'BitNetModel']
diff --git a/fla/models/bitnet/configuration_bitnet.py b/fla/models/bitnet/configuration_bitnet.py
new file mode 100644
index 0000000000000000000000000000000000000000..b6c50f8aae9f4671f08a332d36b4db8cfefbf071
--- /dev/null
+++ b/fla/models/bitnet/configuration_bitnet.py
@@ -0,0 +1,68 @@
+# -*- coding: utf-8 -*-
+
+from typing import Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class BitNetConfig(PretrainedConfig):
+
+ model_type = 'bitnet'
+ keys_to_ignore_at_inference = ['past_key_values']
+
+ def __init__(
+ self,
+ vocab_size: int = 32000,
+ hidden_size: int = 2048,
+ num_hidden_layers: int = 24,
+ num_heads: int = 32,
+ num_kv_heads: int = None,
+ window_size: Optional[int] = None,
+ rope_theta: Optional[float] = 10000.,
+ max_position_embeddings: int = 2048,
+ hidden_ratio: Optional[int] = 4,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = "swish",
+ initializer_range: float = 0.02,
+ elementwise_affine: Optional[bool] = True,
+ norm_first: bool = False,
+ norm_eps: float = 1e-6,
+ use_cache: bool = True,
+ pad_token_id: int = None,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ tie_word_embeddings: bool = False,
+ attention_bias: bool = False,
+ fuse_norm: bool = True,
+ fuse_cross_entropy: bool = True,
+ **kwargs,
+ ):
+ self.vocab_size = vocab_size
+ self.hidden_size = hidden_size
+ self.num_hidden_layers = num_hidden_layers
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads
+ self.window_size = window_size
+ self.rope_theta = rope_theta
+ self.max_position_embeddings = max_position_embeddings
+
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.hidden_act = hidden_act
+
+ self.initializer_range = initializer_range
+ self.elementwise_affine = elementwise_affine
+ self.norm_first = norm_first
+ self.norm_eps = norm_eps
+ self.use_cache = use_cache
+ self.attention_bias = attention_bias
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.fuse_norm = fuse_norm
+
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/bitnet/modeling_bitnet.py b/fla/models/bitnet/modeling_bitnet.py
new file mode 100644
index 0000000000000000000000000000000000000000..5e55e0033639450e16a728d64ced18683c980463
--- /dev/null
+++ b/fla/models/bitnet/modeling_bitnet.py
@@ -0,0 +1,428 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+import torch.utils.checkpoint
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_outputs import (BaseModelOutputWithPast,
+ CausalLMOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import logging
+
+from fla.layers.bitattn import BitAttention
+from fla.models.bitnet.configuration_bitnet import BitNetConfig
+from fla.models.utils import Cache
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.activations import swiglu_bitlinear
+from fla.modules.fused_bitlinear import BitLinear, rms_norm_linear_quant
+
+logger = logging.get_logger(__name__)
+
+
+class BitNetMLP(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = 'swish',
+ norm_first: bool = True,
+ norm_eps: float = 1e-5
+ ) -> BitNetMLP:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ # the final number of params is `hidden_ratio * hidden_size^2`
+ # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio`
+ if hidden_ratio is None:
+ hidden_ratio = 4
+ if intermediate_size is None:
+ intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
+ intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256)
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.norm_first = norm_first
+
+ if norm_first:
+ self.norm = RMSNorm(hidden_size=hidden_size, eps=norm_eps)
+
+ self.gate_proj = BitLinear(self.hidden_size, self.intermediate_size * 2, bias=False)
+ self.down_proj = BitLinear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[hidden_act]
+
+ def forward(self, x):
+ if self.norm_first:
+ x = rms_norm_linear_quant(x, self.norm.weight, self.norm.bias, self.gate_proj.weight, self.gate_proj.bias)
+ else:
+ x = self.gate_proj(x)
+ gate, y = x.chunk(2, -1)
+ return swiglu_bitlinear(gate, y, self.down_proj.weight, self.down_proj.bias)
+
+
+class BitNetBlock(nn.Module):
+
+ def __init__(self, config: BitNetConfig, layer_idx: int):
+ super().__init__()
+
+ self.hidden_size = config.hidden_size
+
+ if not config.norm_first:
+ self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.attn = BitAttention(
+ hidden_size=config.hidden_size,
+ num_heads=config.num_heads,
+ num_kv_heads=config.num_kv_heads,
+ window_size=config.window_size,
+ rope_theta=config.rope_theta,
+ max_position_embeddings=config.max_position_embeddings,
+ norm_first=config.norm_first,
+ norm_eps=config.norm_eps,
+ layer_idx=layer_idx
+ )
+ if not config.norm_first:
+ self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.mlp = BitNetMLP(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ intermediate_size=config.intermediate_size,
+ hidden_act=config.hidden_act,
+ norm_first=config.norm_first,
+ norm_eps=config.norm_eps
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Tuple[torch.Tensor]] = None,
+ output_attentions: Optional[bool] = False,
+ use_cache: Optional[bool] = False,
+ **kwargs,
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+
+ residual = hidden_states
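+        # with `norm_first=True` the RMSNorm is fused into the attention/MLP projections,
+        # so the standalone block norms only exist (and are applied here) when `norm_first=False`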
+ if hasattr(self, 'attn_norm'):
+ hidden_states = self.attn_norm(hidden_states)
+ hidden_states, attentions, past_key_values = self.attn(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+ if hasattr(self, 'mlp_norm'):
+ hidden_states, residual = self.mlp_norm(hidden_states, residual, True)
+ else:
+ hidden_states = residual + hidden_states
+ residual = hidden_states
+ hidden_states = self.mlp(hidden_states)
+ hidden_states = residual + hidden_states
+
+ outputs = (hidden_states,)
+
+ if output_attentions:
+ outputs += (attentions,)
+
+ if use_cache:
+ outputs += (past_key_values,)
+
+ return outputs
+
+
+class BitNetPreTrainedModel(PreTrainedModel):
+
+ config_class = BitNetConfig
+ supports_gradient_checkpointing = True
+ _no_split_modules = ['BitNetBlock']
+
+ def __init__(self, *inputs, **kwargs):
+ super().__init__(*inputs, **kwargs)
+
+ def _init_weights(
+ self,
+ module: nn.Module,
+ rescale_prenorm_residual: bool = False,
+ num_residuals_per_layer: int = 2,
+ ):
+ if isinstance(module, (BitLinear, nn.Conv1d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ if rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["o_proj.weight", "down_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ with torch.no_grad():
+ p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers)
+
+
+class BitNetModel(BitNetPreTrainedModel):
+
+ def __init__(self, config: BitNetConfig):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+ self.layers = nn.ModuleList([BitNetBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)])
+ self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps)
+
+ self.gradient_checkpointing = False
+
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, value):
+ self.embeddings = value
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ if output_attentions:
+ warnings.warn(
+ "`BitNetModel` does not support output attention weights now, so `output_attentions` is set to `False`."
+ )
+ output_attentions = False
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ elif input_ids is None and inputs_embeds is None:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if use_cache and not isinstance(past_key_values, Cache):
+ past_key_values = Cache.from_legacy_cache(past_key_values)
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+
+ # embed positions
+ hidden_states = inputs_embeds
+
+ if self.gradient_checkpointing and self.training:
+ if use_cache:
+ logger.warning_once(
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
+ )
+ use_cache = False
+
+ all_hidden_states = () if output_hidden_states else None
+ all_attns = () if output_attentions else None
+ next_cache = None
+
+ for layer in self.layers:
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if self.gradient_checkpointing and self.training:
+ layer_outputs = self._gradient_checkpointing_func(
+ layer.__call__,
+ hidden_states,
+ attention_mask,
+ past_key_values,
+ output_attentions,
+ use_cache
+ )
+ else:
+ layer_outputs = layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ output_attentions=output_attentions,
+ use_cache=use_cache
+ )
+
+ hidden_states = layer_outputs[0]
+
+ if use_cache:
+ next_cache = layer_outputs[2 if output_attentions else 1]
+
+ if output_attentions:
+ all_attns += (layer_outputs[1],)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if not return_dict:
+ return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_attns] if v is not None)
+
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=next_cache,
+ hidden_states=all_hidden_states,
+ attentions=all_attns
+ )
+
+
+class BitNetForCausalLM(BitNetPreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = BitNetModel(config)
+ self.vocab_size = config.vocab_size
+ self.lm_head = BitLinear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.embeddings = value
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids: torch.LongTensor = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ use_cache: bool = True,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs
+ ):
+        # only use the last token of `input_ids` if `past_key_values` is passed along.
+ if past_key_values is not None:
+ input_ids = input_ids[:, -1:]
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {'inputs_embeds': inputs_embeds}
+ else:
+ # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+ # recompiles graphs as the stride of the inputs is a guard.
+ # Ref: https://github.com/huggingface/transformers/pull/29114
+ # TODO: use `next_tokens` directly instead.
+ model_inputs = {'input_ids': input_ids.contiguous()}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'past_key_values': past_key_values,
+ 'use_cache': use_cache,
+ 'attention_mask': attention_mask,
+ 'num_logits_to_keep': num_logits_to_keep,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ inputs_embeds=inputs_embeds,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict
+ )
+
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
diff --git a/fla/models/delta_net/__init__.py b/fla/models/delta_net/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..258908922fef01c223727a92958355d4ae5f78d6
--- /dev/null
+++ b/fla/models/delta_net/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.delta_net.configuration_delta_net import DeltaNetConfig
+from fla.models.delta_net.modeling_delta_net import (DeltaNetForCausalLM,
+ DeltaNetModel)
+
+AutoConfig.register(DeltaNetConfig.model_type, DeltaNetConfig)
+AutoModel.register(DeltaNetConfig, DeltaNetModel)
+AutoModelForCausalLM.register(DeltaNetConfig, DeltaNetForCausalLM)
+
+__all__ = ['DeltaNetConfig', 'DeltaNetForCausalLM', 'DeltaNetModel']
diff --git a/fla/models/delta_net/configuration_delta_net.py b/fla/models/delta_net/configuration_delta_net.py
new file mode 100644
index 0000000000000000000000000000000000000000..45ba7b498ad754b516bd6d2de248838d3d85552d
--- /dev/null
+++ b/fla/models/delta_net/configuration_delta_net.py
@@ -0,0 +1,87 @@
+# -*- coding: utf-8 -*-
+
+from typing import Dict, Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class DeltaNetConfig(PretrainedConfig):
+
+ model_type = 'delta_net'
+ keys_to_ignore_at_inference = ['past_key_values']
+
+ def __init__(
+ self,
+ attn_mode: str = "chunk",
+ hidden_size: int = 2048,
+ expand_k: int = 1,
+ expand_v: int = 1,
+ use_gate: bool = False,
+ use_short_conv: bool = True,
+ conv_size: int = 4,
+ use_beta: bool = True,
+ use_output_norm: bool = True,
+ num_heads: int = 16,
+ qk_norm: str = 'l2',
+ qk_activation: str = 'silu',
+ max_position_embeddings: int = 2048,
+ hidden_ratio: Optional[int] = 4,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = "swish",
+ num_hidden_layers: int = 24,
+ norm_first: bool = False,
+ norm_eps: float = 1e-6,
+ attn: Optional[Dict] = None,
+ use_cache: bool = True,
+ pad_token_id: int = None,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ tie_word_embeddings: bool = False,
+ initializer_range: float = 0.02,
+ fuse_cross_entropy: bool = True,
+ vocab_size: int = 32000,
+ **kwargs
+ ):
+ self.attn_mode = attn_mode
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.use_gate = use_gate
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.use_beta = use_beta
+ self.use_output_norm = use_output_norm
+ self.num_heads = num_heads
+ self.qk_norm = qk_norm
+ self.qk_activation = qk_activation
+ self.max_position_embeddings = max_position_embeddings
+
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.hidden_act = hidden_act
+ self.num_hidden_layers = num_hidden_layers
+ self.norm_first = norm_first
+ self.norm_eps = norm_eps
+ self.attn = attn
+ self.use_cache = use_cache
+ self.initializer_range = initializer_range
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.vocab_size = vocab_size
+
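+        # hybrid models interleave standard softmax-attention layers at the listed indices; e.g. attn={'layers': [2, 8], 'num_heads': 16} (values are illustrative)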
+ if attn is not None:
+ if not isinstance(attn, Dict):
+ raise ValueError("attn must be a dictionary")
+ if 'layers' not in attn:
+ raise ValueError("Layer indices must be provided to initialize hybrid attention layers")
+ if 'num_heads' not in attn:
+ raise ValueError("Number of heads must be provided to initialize hybrid attention layers")
+ attn['num_kv_heads'] = attn.get('num_kv_heads', attn['num_heads'])
+ attn['window_size'] = attn.get('window_size', None)
+
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/delta_net/modeling_delta_net.py b/fla/models/delta_net/modeling_delta_net.py
new file mode 100644
index 0000000000000000000000000000000000000000..6fe6f6bf87d98f74dddd8a600e61daebdcd624e8
--- /dev/null
+++ b/fla/models/delta_net/modeling_delta_net.py
@@ -0,0 +1,439 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+import torch.utils.checkpoint
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_outputs import (BaseModelOutputWithPast,
+ CausalLMOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import logging
+
+from fla.layers.attn import Attention
+from fla.layers.delta_net import DeltaNet
+from fla.models.delta_net.configuration_delta_net import DeltaNetConfig
+from fla.models.utils import Cache
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.activations import swiglu_linear
+from fla.modules.layernorm import rms_norm_linear
+
+logger = logging.get_logger(__name__)
+
+
+class DeltaNetMLP(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = 'swish',
+ norm_first: bool = True,
+ norm_eps: float = 1e-5
+    ) -> None:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ # the final number of params is `hidden_ratio * hidden_size^2`
+ # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio`
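+        # e.g. hidden_size=2048, hidden_ratio=4: int(2048 * 4 * 2 / 3) = 5461, rounded up to the next multiple of 256 -> 5632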
+ if hidden_ratio is None:
+ hidden_ratio = 4
+ if intermediate_size is None:
+ intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
+ intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256)
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.norm_first = norm_first
+
+ if norm_first:
+ self.norm = RMSNorm(hidden_size=hidden_size, eps=norm_eps)
+
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False)
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[hidden_act]
+
+ def forward(self, x):
+ if self.norm_first:
+ x = rms_norm_linear(x, self.norm.weight, self.norm.bias, self.gate_proj.weight, self.gate_proj.bias)
+ else:
+ x = self.gate_proj(x)
+ gate, y = x.chunk(2, -1)
+ return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias)
+
+
+class DeltaNetBlock(nn.Module):
+ def __init__(self, config: DeltaNetConfig, layer_idx: int):
+ super().__init__()
+ self.hidden_size = config.hidden_size
+
+ if not config.norm_first:
+ self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ if config.attn is not None and layer_idx in config.attn['layers']:
+ self.attn = Attention(
+ hidden_size=config.hidden_size,
+ num_heads=config.attn['num_heads'],
+ num_kv_heads=config.attn['num_kv_heads'],
+ window_size=config.attn['window_size'],
+ max_position_embeddings=config.max_position_embeddings,
+ layer_idx=layer_idx
+ )
+ else:
+ self.attn = DeltaNet(
+ mode=config.attn_mode,
+ hidden_size=config.hidden_size,
+ expand_k=config.expand_k,
+ expand_v=config.expand_v,
+ num_heads=config.num_heads,
+ use_gate=config.use_gate,
+ use_beta=config.use_beta,
+ use_short_conv=config.use_short_conv,
+ use_output_norm=config.use_output_norm,
+ conv_size=config.conv_size,
+ qk_norm=config.qk_norm,
+ qk_activation=config.qk_activation,
+ norm_first=config.norm_first,
+ norm_eps=config.norm_eps,
+ layer_idx=layer_idx
+ )
+ if not config.norm_first:
+ self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.mlp = DeltaNetMLP(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ intermediate_size=config.intermediate_size,
+ hidden_act=config.hidden_act,
+ norm_first=config.norm_first,
+ norm_eps=config.norm_eps
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+
+ residual = hidden_states
+ if hasattr(self, 'attn_norm'):
+ hidden_states = self.attn_norm(hidden_states)
+ hidden_states, attentions, past_key_values = self.attn(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+ if hasattr(self, 'mlp_norm'):
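+            # fused path (assuming the fla RMSNorm prenorm interface): returns the normalized states together with the updated residual in a single call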
+ hidden_states, residual = self.mlp_norm(hidden_states, residual, True)
+ else:
+ hidden_states = residual + hidden_states
+ residual = hidden_states
+ hidden_states = self.mlp(hidden_states)
+ hidden_states = residual + hidden_states
+
+ outputs = (hidden_states, attentions, past_key_values)
+
+ return outputs
+
+
+class DeltaNetPreTrainedModel(PreTrainedModel):
+
+ config_class = DeltaNetConfig
+ supports_gradient_checkpointing = True
+ _no_split_modules = ['DeltaNetBlock']
+
+ def __init__(self, *inputs, **kwargs):
+ super().__init__(*inputs, **kwargs)
+
+ def _init_weights(
+ self,
+ module: nn.Module,
+ rescale_prenorm_residual: bool = True,
+ num_residuals_per_layer: int = 2,
+ ):
+ if isinstance(module, (nn.Linear, nn.Conv1d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ if rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["o_proj.weight", "down_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ with torch.no_grad():
+ p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers)
+
+
+class DeltaNetModel(DeltaNetPreTrainedModel):
+
+ def __init__(self, config: DeltaNetConfig):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+ self.layers = nn.ModuleList([DeltaNetBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)])
+ self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps)
+
+ self.gradient_checkpointing = False
+
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, value):
+ self.embeddings = value
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None, # noqa
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
+ if output_attentions:
+            warnings.warn("`DeltaNetModel` does not support `output_attentions` for now; setting it to `False`.")
+ output_attentions = False
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ if input_ids is None and inputs_embeds is None:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+ hidden_states = inputs_embeds
+
+ if use_cache and not isinstance(past_key_values, Cache):
+ past_key_values = Cache.from_legacy_cache(past_key_values)
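+            # wrap any legacy (tuple-style) cache in the `Cache` container so each layer can read and update its own state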
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ logger.warning_once("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
+ use_cache = False
+
+ all_hidden_states = () if output_hidden_states else None
+ all_attns = () if output_attentions else None
+ for layer in self.layers:
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if self.gradient_checkpointing and self.training:
+ hidden_states, attentions, past_key_values = self._gradient_checkpointing_func(
+ layer.__call__,
+ hidden_states,
+ attention_mask,
+ past_key_values,
+ use_cache,
+ output_attentions
+ )
+ else:
+ hidden_states, attentions, past_key_values = layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+
+ if output_attentions:
+ all_attns += (attentions,)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ next_cache = past_key_values
+ if not return_dict:
+ return tuple(x for x in [hidden_states, next_cache, all_hidden_states, all_attns] if x is not None)
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=next_cache,
+ hidden_states=all_hidden_states,
+ attentions=all_attns
+ )
+
+
+class DeltaNetForCausalLM(DeltaNetPreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = DeltaNetModel(config)
+ self.vocab_size = config.vocab_size
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.embeddings = value
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ def generate(self, *args, **kwargs):
+ try:
+ return super().generate(*args, **kwargs)
+ except AttributeError as exception:
+ if 'past_key_values' in str(exception):
+ raise AttributeError(
+ f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, "
+ f"which is not supported for {self.__class__.__name__}. "
+ f"Try another generation strategy instead. "
+ f"For the available generation strategies, check this doc: "
+ f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies"
+ )
+ else:
+ raise exception
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids: torch.LongTensor = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ use_cache: bool = True,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs
+ ):
+        # only keep the last token of `input_ids` if `past_key_values` is passed along.
+ if past_key_values is not None:
+ input_ids = input_ids[:, -1:]
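+            # the cache already covers the prefix, so only the newest token needs to be fed through the model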
+
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {'inputs_embeds': inputs_embeds}
+ else:
+ # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+ # recompiles graphs as the stride of the inputs is a guard.
+ # Ref: https://github.com/huggingface/transformers/pull/29114
+ # TODO: use `next_tokens` directly instead.
+ model_inputs = {'input_ids': input_ids.contiguous()}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'past_key_values': past_key_values,
+ 'use_cache': use_cache,
+ 'attention_mask': attention_mask,
+ 'num_logits_to_keep': num_logits_to_keep,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ inputs_embeds=inputs_embeds,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict
+ )
+
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
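+        # with the LM head fused into the loss during training, full logits are never materialized, saving memory on large vocabularies;
+        # note that `[:, -num_logits_to_keep:]` keeps every position when `num_logits_to_keep == 0`, since `-0 == 0`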
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
diff --git a/fla/models/gla/__init__.py b/fla/models/gla/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..edccb515af8f04144308bfcbb72be8e91e714cd7
--- /dev/null
+++ b/fla/models/gla/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.gla.configuration_gla import GLAConfig
+from fla.models.gla.modeling_gla import GLAForCausalLM, GLAModel
+
+AutoConfig.register(GLAConfig.model_type, GLAConfig)
+AutoModel.register(GLAConfig, GLAModel)
+AutoModelForCausalLM.register(GLAConfig, GLAForCausalLM)
+
+
+__all__ = ['GLAConfig', 'GLAForCausalLM', 'GLAModel']
diff --git a/fla/models/gla/configuration_gla.py b/fla/models/gla/configuration_gla.py
new file mode 100644
index 0000000000000000000000000000000000000000..7991112b2c7bdf2fdad7211cf4c619641599ed87
--- /dev/null
+++ b/fla/models/gla/configuration_gla.py
@@ -0,0 +1,90 @@
+# -*- coding: utf-8 -*-
+
+from typing import Dict, Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class GLAConfig(PretrainedConfig):
+
+ model_type = 'gla'
+ keys_to_ignore_at_inference = ['past_key_values']
+
+ def __init__(
+ self,
+ hidden_size: int = 2048,
+        expand_k: float = 0.5,
+ expand_v: int = 1,
+ hidden_ratio: Optional[int] = 4,
+ intermediate_size: Optional[int] = None,
+ num_hidden_layers: int = 24,
+ num_heads: int = 4,
+ num_kv_heads: Optional[int] = None,
+ feature_map: Optional[str] = None,
+ attn_mode: str = "chunk",
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+ use_output_gate: bool = True,
+ clamp_min: Optional[float] = None,
+ hidden_act: str = "swish",
+ max_position_embeddings: int = 2048,
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-6,
+ use_gk: bool = True,
+ use_gv: bool = False,
+ attn: Optional[Dict] = None,
+ use_cache: bool = True,
+        pad_token_id: Optional[int] = None,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ tie_word_embeddings: bool = False,
+ initializer_range: float = 0.02,
+ fuse_norm: bool = True,
+ fuse_cross_entropy: bool = True,
+ vocab_size: int = 32000,
+ **kwargs
+ ):
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.num_hidden_layers = num_hidden_layers
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads
+ self.feature_map = feature_map
+ self.attn_mode = attn_mode
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.use_output_gate = use_output_gate
+ self.clamp_min = clamp_min
+ self.hidden_act = hidden_act
+ self.max_position_embeddings = max_position_embeddings
+ self.elementwise_affine = elementwise_affine
+ self.norm_eps = norm_eps
+ self.use_gk = use_gk
+ self.use_gv = use_gv
+ self.attn = attn
+ self.use_cache = use_cache
+ self.initializer_range = initializer_range
+ self.fuse_norm = fuse_norm
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.vocab_size = vocab_size
+
+ if attn is not None:
+ if not isinstance(attn, Dict):
+ raise ValueError("attn must be a dictionary")
+ if 'layers' not in attn:
+ raise ValueError("Layer indices must be provided to initialize hybrid attention layers")
+ if 'num_heads' not in attn:
+ raise ValueError("Number of heads must be provided to initialize hybrid attention layers")
+ attn['num_kv_heads'] = attn.get('num_kv_heads', attn['num_heads'])
+ attn['window_size'] = attn.get('window_size', None)
+
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/gla/modeling_gla.py b/fla/models/gla/modeling_gla.py
new file mode 100644
index 0000000000000000000000000000000000000000..bfaa29568a5d026443b4a23f87ba4ba698e0b5d0
--- /dev/null
+++ b/fla/models/gla/modeling_gla.py
@@ -0,0 +1,418 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+import torch.utils.checkpoint
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_outputs import (BaseModelOutputWithPast,
+ CausalLMOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import logging
+
+from fla.layers.attn import Attention
+from fla.layers.gla import GatedLinearAttention
+from fla.models.gla.configuration_gla import GLAConfig
+from fla.models.utils import Cache
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.activations import swiglu_linear
+
+logger = logging.get_logger(__name__)
+
+
+class GLAMLP(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = 'swish'
+    ) -> None:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ # the final number of params is `hidden_ratio * hidden_size^2`
+ # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio`
+ if hidden_ratio is None:
+ hidden_ratio = 4
+ if intermediate_size is None:
+ intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
+ intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256)
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False)
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[hidden_act]
+
+ def forward(self, x):
+ y = self.gate_proj(x)
+ gate, y = y.chunk(2, -1)
+ return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias)
+
+
+class GLABlock(nn.Module):
+ def __init__(self, config: GLAConfig, layer_idx: int):
+ super().__init__()
+ self.hidden_size = config.hidden_size
+
+ self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ if config.attn is not None and layer_idx in config.attn['layers']:
+ self.attn = Attention(
+ hidden_size=config.hidden_size,
+ num_heads=config.attn['num_heads'],
+ num_kv_heads=config.attn['num_kv_heads'],
+ window_size=config.attn['window_size'],
+ max_position_embeddings=config.max_position_embeddings,
+ layer_idx=layer_idx
+ )
+ else:
+ self.attn = GatedLinearAttention(
+ mode=config.attn_mode,
+ hidden_size=config.hidden_size,
+ expand_k=config.expand_k,
+ expand_v=config.expand_v,
+ num_heads=config.num_heads,
+ num_kv_heads=config.num_kv_heads,
+ feature_map=config.feature_map,
+ use_short_conv=config.use_short_conv,
+ conv_size=config.conv_size,
+ use_output_gate=config.use_output_gate,
+ gate_fn=config.hidden_act,
+ elementwise_affine=config.elementwise_affine,
+ norm_eps=config.norm_eps,
+ clamp_min=config.clamp_min,
+ fuse_norm=config.fuse_norm,
+ layer_idx=layer_idx
+ )
+ self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.mlp = GLAMLP(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ intermediate_size=config.intermediate_size,
+ hidden_act=config.hidden_act
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs,
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+ residual = hidden_states
+ hidden_states = self.attn_norm(hidden_states)
+ hidden_states, attentions, past_key_values = self.attn(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+ hidden_states, residual = self.mlp_norm(hidden_states, residual, True)
+ hidden_states = self.mlp(hidden_states)
+ hidden_states = residual + hidden_states
+
+ outputs = (hidden_states, attentions, past_key_values)
+
+ return outputs
+
+
+class GLAPreTrainedModel(PreTrainedModel):
+
+ config_class = GLAConfig
+ supports_gradient_checkpointing = True
+ _no_split_modules = ['GLABlock']
+
+ def __init__(self, *inputs, **kwargs):
+ super().__init__(*inputs, **kwargs)
+
+ def _init_weights(
+ self,
+ module: nn.Module,
+ rescale_prenorm_residual: bool = True,
+ num_residuals_per_layer: int = 2,
+ ):
+ if isinstance(module, (nn.Linear, nn.Conv1d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ if rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["o_proj.weight", "down_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ with torch.no_grad():
+ p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers)
+
+
+class GLAModel(GLAPreTrainedModel):
+
+ def __init__(self, config: GLAConfig):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+ self.layers = nn.ModuleList([GLABlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)])
+ self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps)
+
+ self.gradient_checkpointing = False
+
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, value):
+ self.embeddings = value
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None, # noqa
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
+ if output_attentions:
+            warnings.warn("`GLAModel` does not support `output_attentions` for now; setting it to `False`.")
+ output_attentions = False
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ if input_ids is None and inputs_embeds is None:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+ hidden_states = inputs_embeds
+
+ if use_cache and not isinstance(past_key_values, Cache):
+ past_key_values = Cache.from_legacy_cache(past_key_values)
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ logger.warning_once("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
+ use_cache = False
+
+ all_hidden_states = () if output_hidden_states else None
+ all_attns = () if output_attentions else None
+ for layer in self.layers:
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if self.gradient_checkpointing and self.training:
+ hidden_states, attentions, past_key_values = self._gradient_checkpointing_func(
+ layer.__call__,
+ hidden_states,
+ attention_mask,
+ past_key_values,
+ use_cache,
+ output_attentions
+ )
+ else:
+ hidden_states, attentions, past_key_values = layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+
+ if output_attentions:
+ all_attns += (attentions,)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if not return_dict:
+ return tuple(i for i in [hidden_states, past_key_values, all_hidden_states, all_attns] if i is not None)
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=past_key_values,
+ hidden_states=all_hidden_states,
+ attentions=all_attns
+ )
+
+
+class GLAForCausalLM(GLAPreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = GLAModel(config)
+ self.vocab_size = config.vocab_size
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.embeddings = value
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ def generate(self, *args, **kwargs):
+ try:
+ return super().generate(*args, **kwargs)
+ except AttributeError as exception:
+ if 'past_key_values' in str(exception):
+ raise AttributeError(
+ f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, "
+ f"which is not supported for {self.__class__.__name__}. "
+ f"Try another generation strategy instead. "
+ f"For the available generation strategies, check this doc: "
+ f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies"
+ )
+ else:
+ raise exception
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids: torch.LongTensor = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ use_cache: bool = True,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs
+ ):
+        # only keep the last token of `input_ids` if `past_key_values` is passed along.
+ if past_key_values is not None:
+ input_ids = input_ids[:, -1:]
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {'inputs_embeds': inputs_embeds}
+ else:
+ # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+ # recompiles graphs as the stride of the inputs is a guard.
+ # Ref: https://github.com/huggingface/transformers/pull/29114
+ # TODO: use `next_tokens` directly instead.
+ model_inputs = {'input_ids': input_ids.contiguous()}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'past_key_values': past_key_values,
+ 'use_cache': use_cache,
+ 'attention_mask': attention_mask,
+ 'num_logits_to_keep': num_logits_to_keep,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ inputs_embeds=inputs_embeds,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict
+ )
+
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
diff --git a/fla/models/gsa/__init__.py b/fla/models/gsa/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..a134f758e0bea0eb844a2db73957936078f889b6
--- /dev/null
+++ b/fla/models/gsa/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.gsa.configuration_gsa import GSAConfig
+from fla.models.gsa.modeling_gsa import GSAForCausalLM, GSAModel
+
+AutoConfig.register(GSAConfig.model_type, GSAConfig)
+AutoModel.register(GSAConfig, GSAModel)
+AutoModelForCausalLM.register(GSAConfig, GSAForCausalLM)
+
+
+__all__ = ['GSAConfig', 'GSAForCausalLM', 'GSAModel']
diff --git a/fla/models/gsa/configuration_gsa.py b/fla/models/gsa/configuration_gsa.py
new file mode 100644
index 0000000000000000000000000000000000000000..b2b37c8438cfdc05ea611950a955c431f40354ba
--- /dev/null
+++ b/fla/models/gsa/configuration_gsa.py
@@ -0,0 +1,94 @@
+# -*- coding: utf-8 -*-
+
+from typing import Dict, Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class GSAConfig(PretrainedConfig):
+
+ model_type = 'gsa'
+ keys_to_ignore_at_inference = ['past_key_values']
+
+ def __init__(
+ self,
+ hidden_size: int = 2048,
+ gate_logit_normalizer: Optional[int] = 8,
+ clamp_min: Optional[float] = None,
+ clamp_max: Optional[float] = None,
+ hidden_ratio: Optional[int] = 4,
+ intermediate_size: Optional[int] = None,
+ num_hidden_layers: int = 24,
+ num_heads: int = 4,
+ num_kv_heads: Optional[int] = None,
+ num_slots: Optional[int] = 64,
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+        expand_k: float = 1,
+        expand_v: float = 1,
+ feature_map: str = 'swish',
+ use_output_gate: bool = False,
+ use_norm: bool = True,
+ max_position_embeddings: int = 2048,
+ hidden_act: str = "swish",
+ elementwise_affine: Optional[bool] = True,
+ norm_first: bool = True,
+ norm_eps: float = 1e-6,
+ attn: Optional[Dict] = None,
+ use_cache: bool = True,
+        pad_token_id: Optional[int] = None,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ initializer_range: float = 0.02,
+ tie_word_embeddings: bool = False,
+ fuse_norm: bool = True,
+ fuse_cross_entropy: bool = True,
+ vocab_size: int = 32000,
+ **kwargs
+ ):
+ self.hidden_size = hidden_size
+ self.gate_logit_normalizer = gate_logit_normalizer
+ self.clamp_min = clamp_min
+ self.clamp_max = clamp_max
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.num_hidden_layers = num_hidden_layers
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads
+ self.num_slots = num_slots
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+        self.expand_k = expand_k
+        self.expand_v = expand_v
+ self.feature_map = feature_map
+ self.use_output_gate = use_output_gate
+ self.use_norm = use_norm
+ self.max_position_embeddings = max_position_embeddings
+ self.hidden_act = hidden_act
+ self.elementwise_affine = elementwise_affine
+ self.norm_first = norm_first
+ self.norm_eps = norm_eps
+ self.attn = attn
+ self.use_cache = use_cache
+ self.initializer_range = initializer_range
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.fuse_norm = fuse_norm
+ self.vocab_size = vocab_size
+
+ if attn is not None:
+ if not isinstance(attn, Dict):
+ raise ValueError("attn must be a dictionary")
+ if 'layers' not in attn:
+ raise ValueError("Layer indices must be provided to initialize hybrid attention layers")
+ if 'num_heads' not in attn:
+ raise ValueError("Number of heads must be provided to initialize hybrid attention layers")
+ attn['num_kv_heads'] = attn.get('num_kv_heads', attn['num_heads'])
+ attn['window_size'] = attn.get('window_size', None)
+
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/gsa/modeling_gsa.py b/fla/models/gsa/modeling_gsa.py
new file mode 100644
index 0000000000000000000000000000000000000000..4133e6ca0cfb3579ae368ca9a35ce8074f8a7fd2
--- /dev/null
+++ b/fla/models/gsa/modeling_gsa.py
@@ -0,0 +1,442 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+import torch.utils.checkpoint
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_outputs import (BaseModelOutputWithPast,
+ CausalLMOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import logging
+
+from fla.layers.attn import Attention
+from fla.layers.gsa import GatedSlotAttention
+from fla.models.gsa.configuration_gsa import GSAConfig
+from fla.models.utils import Cache
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.activations import swiglu_linear
+from fla.modules.layernorm import rms_norm_linear
+
+logger = logging.get_logger(__name__)
+
+
+class GSAMLP(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = 'swish',
+ norm_first: bool = True,
+ norm_eps: float = 1e-5
+    ) -> None:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ # the final number of params is `hidden_ratio * hidden_size^2`
+ # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio`
+ if hidden_ratio is None:
+ hidden_ratio = 4
+ if intermediate_size is None:
+ intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
+ intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256)
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.norm_first = norm_first
+
+ if norm_first:
+ self.norm = RMSNorm(hidden_size=hidden_size, eps=norm_eps)
+
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False)
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[hidden_act]
+
+ def forward(self, x):
+ if self.norm_first:
+ x = rms_norm_linear(x, self.norm.weight, self.norm.bias, self.gate_proj.weight, self.gate_proj.bias)
+ else:
+ x = self.gate_proj(x)
+ gate, y = x.chunk(2, -1)
+ return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias)
+
+
+class GSABlock(nn.Module):
+ def __init__(self, config: GSAConfig, layer_idx: int):
+ super().__init__()
+ self.hidden_size = config.hidden_size
+
+ if not config.norm_first:
+ self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ if config.attn is not None and layer_idx in config.attn['layers']:
+ self.attn = Attention(
+ hidden_size=config.hidden_size,
+ num_heads=config.attn['num_heads'],
+ num_kv_heads=config.attn['num_kv_heads'],
+ window_size=config.attn['window_size'],
+ max_position_embeddings=config.max_position_embeddings,
+ layer_idx=layer_idx
+ )
+ else:
+ self.attn = GatedSlotAttention(
+ hidden_size=config.hidden_size,
+ expand_k=config.expand_k,
+ expand_v=config.expand_v,
+ num_heads=config.num_heads,
+ num_kv_heads=config.num_kv_heads,
+ num_slots=config.num_slots,
+ use_short_conv=config.use_short_conv,
+ conv_size=config.conv_size,
+ feature_map=config.feature_map,
+ use_output_gate=config.use_output_gate,
+ use_norm=config.use_norm,
+ gate_fn=config.hidden_act,
+ gate_logit_normalizer=config.gate_logit_normalizer,
+ elementwise_affine=config.elementwise_affine,
+ norm_first=config.norm_first,
+ norm_eps=config.norm_eps,
+ fuse_norm=config.fuse_norm,
+ layer_idx=layer_idx
+ )
+ if not config.norm_first:
+ self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.mlp = GSAMLP(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ intermediate_size=config.intermediate_size,
+ hidden_act=config.hidden_act,
+ norm_first=config.norm_first,
+ norm_eps=config.norm_eps
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+
+ residual = hidden_states
+ if hasattr(self, 'attn_norm'):
+ hidden_states = self.attn_norm(hidden_states)
+ hidden_states, attentions, past_key_values = self.attn(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+ if hasattr(self, 'mlp_norm'):
+ hidden_states, residual = self.mlp_norm(hidden_states, residual, True)
+ else:
+ hidden_states = residual + hidden_states
+ residual = hidden_states
+ hidden_states = self.mlp(hidden_states)
+ hidden_states = residual + hidden_states
+
+ outputs = (hidden_states, attentions, past_key_values)
+
+ return outputs
+
+
+class GSAPreTrainedModel(PreTrainedModel):
+
+ config_class = GSAConfig
+ supports_gradient_checkpointing = True
+ _no_split_modules = ['GSABlock']
+
+ def __init__(self, *inputs, **kwargs):
+ super().__init__(*inputs, **kwargs)
+
+ def _init_weights(
+ self,
+ module: nn.Module,
+ rescale_prenorm_residual: bool = True,
+ num_residuals_per_layer: int = 2,
+ ):
+ if isinstance(module, (nn.Linear, nn.Conv1d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ if rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["o_proj.weight", "down_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ with torch.no_grad():
+ p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers)
+
+
+class GSAModel(GSAPreTrainedModel):
+
+ def __init__(self, config: GSAConfig):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+ self.layers = nn.ModuleList([GSABlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)])
+ self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps)
+
+ self.gradient_checkpointing = False
+
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, value):
+ self.embeddings = value
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None, # noqa
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
+ if output_attentions:
+            warnings.warn("`GSAModel` does not support `output_attentions` for now; setting it to `False`.")
+ output_attentions = False
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ if input_ids is None and inputs_embeds is None:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+ hidden_states = inputs_embeds
+
+ if use_cache and not isinstance(past_key_values, Cache):
+ past_key_values = Cache.from_legacy_cache(past_key_values)
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ logger.warning_once("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
+ use_cache = False
+
+ all_hidden_states = () if output_hidden_states else None
+ all_attns = () if output_attentions else None
+
+ for i, layer in enumerate(self.layers):
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if self.gradient_checkpointing and self.training:
+ hidden_states, attentions, past_key_values = self._gradient_checkpointing_func(
+ layer.__call__,
+ hidden_states,
+ attention_mask,
+ past_key_values,
+ use_cache,
+ output_attentions,
+ )
+ else:
+ hidden_states, attentions, past_key_values = layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+
+ if output_attentions:
+ all_attns += (attentions,)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if not return_dict:
+ return tuple(i for i in [hidden_states, past_key_values, all_hidden_states, all_attns] if i is not None)
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=past_key_values,
+ hidden_states=all_hidden_states,
+ attentions=all_attns
+ )
+
+
+class GSAForCausalLM(GSAPreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = GSAModel(config)
+ self.vocab_size = config.vocab_size
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.embeddings = value
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ def generate(self, *args, **kwargs):
+ try:
+ return super().generate(*args, **kwargs)
+ except AttributeError as exception:
+ if 'past_key_values' in str(exception):
+ raise AttributeError(
+ f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, "
+ f"which is not supported for {self.__class__.__name__}. "
+ f"Try another generation strategy instead. "
+ f"For the available generation strategies, check this doc: "
+ f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies"
+ )
+ else:
+ raise exception
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids: torch.LongTensor = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ use_cache: bool = True,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs
+ ):
+        # only keep the last token of `input_ids` if `past_key_values` is passed along.
+ if past_key_values is not None:
+ input_ids = input_ids[:, -1:]
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {'inputs_embeds': inputs_embeds}
+ else:
+ # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+ # recompiles graphs as the stride of the inputs is a guard.
+ # Ref: https://github.com/huggingface/transformers/pull/29114
+ # TODO: use `next_tokens` directly instead.
+ model_inputs = {'input_ids': input_ids.contiguous()}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'past_key_values': past_key_values,
+ 'use_cache': use_cache,
+ 'attention_mask': attention_mask,
+ 'num_logits_to_keep': num_logits_to_keep,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ inputs_embeds=inputs_embeds,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict
+ )
+
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
diff --git a/fla/models/hgrn/__init__.py b/fla/models/hgrn/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..3b29a3dd82da6d64bac6cc887e24295a03de5b23
--- /dev/null
+++ b/fla/models/hgrn/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.hgrn.configuration_hgrn import HGRNConfig
+from fla.models.hgrn.modeling_hgrn import HGRNForCausalLM, HGRNModel
+
+AutoConfig.register(HGRNConfig.model_type, HGRNConfig)
+AutoModel.register(HGRNConfig, HGRNModel)
+AutoModelForCausalLM.register(HGRNConfig, HGRNForCausalLM)
+
+
+__all__ = ['HGRNConfig', 'HGRNForCausalLM', 'HGRNModel']
diff --git a/fla/models/hgrn/configuration_hgrn.py b/fla/models/hgrn/configuration_hgrn.py
new file mode 100644
index 0000000000000000000000000000000000000000..39dd38db6f855029f070862f03ad6f47ef913bbe
--- /dev/null
+++ b/fla/models/hgrn/configuration_hgrn.py
@@ -0,0 +1,74 @@
+# -*- coding: utf-8 -*-
+
+from typing import Dict, Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class HGRNConfig(PretrainedConfig):
+
+ model_type = 'hgrn'
+ keys_to_ignore_at_inference = ['past_key_values']
+
+ def __init__(
+ self,
+ attn_mode: str = "chunk",
+ hidden_size: int = 2048,
+ num_hidden_layers: int = 24,
+ expand_ratio: Optional[int] = 1,
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+ use_lower_bound: bool = True,
+ max_position_embeddings: int = 2048,
+ hidden_ratio: Optional[int] = 4,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = "swish",
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-6,
+ attn: Optional[Dict] = None,
+ use_cache: bool = True,
+        pad_token_id: Optional[int] = None,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ tie_word_embeddings: bool = False,
+ initializer_range: float = 0.02,
+ fuse_cross_entropy: bool = True,
+ vocab_size: int = 32000,
+ **kwargs
+ ):
+ self.attn_mode = attn_mode
+ self.hidden_size = hidden_size
+ self.num_hidden_layers = num_hidden_layers
+ self.expand_ratio = expand_ratio
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.use_lower_bound = use_lower_bound
+ self.max_position_embeddings = max_position_embeddings
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.elementwise_affine = elementwise_affine
+ self.attn = attn
+ self.norm_eps = norm_eps
+ self.hidden_act = hidden_act
+ self.use_cache = use_cache
+ self.initializer_range = initializer_range
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.vocab_size = vocab_size
+
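+        # `attn` optionally turns selected layers into standard softmax attention for hybrid
+        # models, e.g. attn={'layers': [1, 13], 'num_heads': 8} (illustrative values only).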
+ if attn is not None:
+ if not isinstance(attn, Dict):
+ raise ValueError("attn must be a dictionary")
+ if 'layers' not in attn:
+ raise ValueError("Layer indices must be provided to initialize hybrid attention layers")
+ if 'num_heads' not in attn:
+ raise ValueError("Number of heads must be provided to initialize hybrid attention layers")
+ attn['num_kv_heads'] = attn.get('num_kv_heads', attn['num_heads'])
+ attn['window_size'] = attn.get('window_size', None)
+
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/hgrn/modeling_hgrn.py b/fla/models/hgrn/modeling_hgrn.py
new file mode 100644
index 0000000000000000000000000000000000000000..3091b40fd86ecb601d813f1b7a77affd40ba3c05
--- /dev/null
+++ b/fla/models/hgrn/modeling_hgrn.py
@@ -0,0 +1,421 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+import torch.utils.checkpoint
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_outputs import (BaseModelOutputWithPast,
+ CausalLMOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import logging
+
+from fla.layers.attn import Attention
+from fla.layers.hgrn import HGRNAttention
+from fla.models.hgrn.configuration_hgrn import HGRNConfig
+from fla.models.utils import Cache
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.activations import swiglu_linear
+
+logger = logging.get_logger(__name__)
+
+
+class HGRNMLP(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = 'swish'
+    ) -> None:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ # the final number of params is `hidden_ratio * hidden_size^2`
+ # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio`
+ if hidden_ratio is None:
+ hidden_ratio = 4
+ if intermediate_size is None:
+ intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
+ intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256)
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False)
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[hidden_act]
+
+ def forward(self, x):
+ y = self.gate_proj(x)
+ gate, y = y.chunk(2, -1)
+ return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias)
+
+
+class HGRNBlock(nn.Module):
+ def __init__(self, config: HGRNConfig, layer_idx: int):
+ super().__init__()
+ self.hidden_size = config.hidden_size
+
+ self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ if config.attn is not None and layer_idx in config.attn['layers']:
+ self.attn = Attention(
+ hidden_size=config.hidden_size,
+ num_heads=config.attn['num_heads'],
+ num_kv_heads=config.attn['num_kv_heads'],
+ window_size=config.attn['window_size'],
+ max_position_embeddings=config.max_position_embeddings,
+ layer_idx=layer_idx
+ )
+ else:
+ self.attn = HGRNAttention(
+ mode=config.attn_mode,
+ hidden_size=config.hidden_size,
+ expand_ratio=config.expand_ratio,
+ use_short_conv=config.use_short_conv,
+ conv_size=config.conv_size,
+ elementwise_affine=config.elementwise_affine,
+ norm_eps=config.norm_eps,
+ layer_idx=layer_idx
+ )
+ self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.mlp = HGRNMLP(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ intermediate_size=config.intermediate_size,
+ hidden_act=config.hidden_act
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+        lower_bound: Optional[torch.Tensor] = None,
+ **kwargs,
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+ residual = hidden_states
+ hidden_states = self.attn_norm(hidden_states)
+ hidden_states, attentions, past_key_values = self.attn(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ lower_bound=lower_bound
+ )
+ hidden_states, residual = self.mlp_norm(hidden_states, residual, True)
+ hidden_states = self.mlp(hidden_states)
+ hidden_states = residual + hidden_states
+
+ outputs = (hidden_states, attentions, past_key_values)
+
+ return outputs
+
+
+class HGRNPreTrainedModel(PreTrainedModel):
+
+ config_class = HGRNConfig
+ supports_gradient_checkpointing = True
+ _no_split_modules = ['HGRNBlock']
+
+ def __init__(self, *inputs, **kwargs):
+ super().__init__(*inputs, **kwargs)
+
+ def _init_weights(
+ self,
+ module: nn.Module,
+ rescale_prenorm_residual: bool = True,
+ num_residuals_per_layer: int = 2,
+ ):
+ if isinstance(module, (nn.Linear, nn.Conv1d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ if rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["o_proj.weight", "down_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ with torch.no_grad():
+ p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers)
+
+
+class HGRNModel(HGRNPreTrainedModel):
+
+ def __init__(self, config: HGRNConfig):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
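+        # learnable per-layer lower bounds for the forget gates (one vector per layer)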
+ if config.use_lower_bound:
+ self.lower_bounds = nn.Parameter(torch.zeros(config.num_hidden_layers, config.hidden_size))
+ self.layers = nn.ModuleList([HGRNBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)])
+ self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps)
+
+ self.gradient_checkpointing = False
+
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, value):
+ self.embeddings = value
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None, # noqa
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
+ if output_attentions:
+            warnings.warn("`HGRNModel` does not support `output_attentions` now, setting it to `False`.")
+ output_attentions = False
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
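+        # the cache is only useful for autoregressive decoding, so it is disabled during training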
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ if input_ids is None and inputs_embeds is None:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+ hidden_states = inputs_embeds
+
+ if use_cache and not isinstance(past_key_values, Cache):
+ past_key_values = Cache.from_legacy_cache(past_key_values)
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ logger.warning_once("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
+ use_cache = False
+
+ all_hidden_states = () if output_hidden_states else None
+ all_attns = () if output_attentions else None
+
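+        # softmax over layers followed by a cumulative sum makes the lower bounds
+        # increase monotonically with depth, with the first layer fixed at 0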
+ if self.config.use_lower_bound:
+ lower_bounds = self.lower_bounds.softmax(0)
+ lower_bounds = lower_bounds.cumsum(0) - lower_bounds[0]
+ for i, layer in enumerate(self.layers):
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ lower_bound = lower_bounds[i] if self.config.use_lower_bound else None
+ if self.gradient_checkpointing and self.training:
+ hidden_states, attentions, past_key_values = self._gradient_checkpointing_func(
+ layer.__call__,
+ hidden_states,
+ attention_mask,
+ past_key_values,
+ use_cache,
+ output_attentions,
+ lower_bound
+ )
+ else:
+ hidden_states, attentions, past_key_values = layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ lower_bound=lower_bound
+ )
+
+ if output_attentions:
+ all_attns += (attentions,)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if not return_dict:
+ return tuple(i for i in [hidden_states, past_key_values, all_hidden_states, all_attns] if i is not None)
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=past_key_values,
+ hidden_states=all_hidden_states,
+ attentions=all_attns
+ )
+
+
+class HGRNForCausalLM(HGRNPreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = HGRNModel(config)
+ self.vocab_size = config.vocab_size
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.embeddings = value
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ def generate(self, *args, **kwargs):
+ try:
+ return super().generate(*args, **kwargs)
+ except AttributeError as exception:
+ if 'past_key_values' in str(exception):
+ raise AttributeError(
+ f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, "
+ f"which is not supported for {self.__class__.__name__}. "
+ f"Try another generation strategy instead. "
+ f"For the available generation strategies, check this doc: "
+ f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies"
+ )
+ else:
+ raise exception
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids: torch.LongTensor = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ use_cache: bool = True,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs
+ ):
+ # only last token for `inputs_ids` if the `past_key_values` is passed along.
+ if past_key_values is not None:
+ input_ids = input_ids[:, -1:]
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {'inputs_embeds': inputs_embeds}
+ else:
+ # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+ # recompiles graphs as the stride of the inputs is a guard.
+ # Ref: https://github.com/huggingface/transformers/pull/29114
+ # TODO: use `next_tokens` directly instead.
+ model_inputs = {'input_ids': input_ids.contiguous()}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'past_key_values': past_key_values,
+ 'use_cache': use_cache,
+ 'attention_mask': attention_mask,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ inputs_embeds=inputs_embeds,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict
+ )
+
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
diff --git a/fla/models/hgrn2/__init__.py b/fla/models/hgrn2/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..306b8082220a57091f2e99cd689c011690db0439
--- /dev/null
+++ b/fla/models/hgrn2/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.hgrn2.configuration_hgrn2 import HGRN2Config
+from fla.models.hgrn2.modeling_hgrn2 import HGRN2ForCausalLM, HGRN2Model
+
+AutoConfig.register(HGRN2Config.model_type, HGRN2Config)
+AutoModel.register(HGRN2Config, HGRN2Model)
+AutoModelForCausalLM.register(HGRN2Config, HGRN2ForCausalLM)
+
+
+__all__ = ['HGRN2Config', 'HGRN2ForCausalLM', 'HGRN2Model']
diff --git a/fla/models/hgrn2/configuration_hgrn2.py b/fla/models/hgrn2/configuration_hgrn2.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7a5945b8586d66cc8f0f9e734cc5667fc4ddead
--- /dev/null
+++ b/fla/models/hgrn2/configuration_hgrn2.py
@@ -0,0 +1,76 @@
+# -*- coding: utf-8 -*-
+
+from typing import Dict, Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class HGRN2Config(PretrainedConfig):
+
+ model_type = 'hgrn2'
+ keys_to_ignore_at_inference = ['past_key_values']
+
+ def __init__(
+ self,
+ hidden_size: int = 2048,
+ num_hidden_layers: int = 24,
+ attn_mode: str = "chunk",
+ num_heads: Optional[int] = None,
+ expand_ratio: Optional[int] = 128,
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+ use_lower_bound: bool = True,
+ hidden_ratio: Optional[int] = 4,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = "swish",
+ max_position_embeddings: int = 2048,
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-6,
+ attn: Optional[Dict] = None,
+ use_cache: bool = True,
+ pad_token_id: int = None,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ tie_word_embeddings: bool = False,
+ initializer_range: float = 0.02,
+ fuse_cross_entropy: bool = True,
+ vocab_size: int = 32000,
+ **kwargs
+ ):
+ self.hidden_size = hidden_size
+ self.num_hidden_layers = num_hidden_layers
+ self.attn_mode = attn_mode
+ self.num_heads = num_heads
+ self.expand_ratio = expand_ratio
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.use_lower_bound = use_lower_bound
+ self.max_position_embeddings = max_position_embeddings
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.hidden_act = hidden_act
+ self.elementwise_affine = elementwise_affine
+ self.norm_eps = norm_eps
+ self.attn = attn
+ self.use_cache = use_cache
+ self.initializer_range = initializer_range
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.vocab_size = vocab_size
+
+ if attn is not None:
+ if not isinstance(attn, Dict):
+ raise ValueError("attn must be a dictionary")
+ if 'layers' not in attn:
+ raise ValueError("Layer indices must be provided to initialize hybrid attention layers")
+ if 'num_heads' not in attn:
+ raise ValueError("Number of heads must be provided to initialize hybrid attention layers")
+ attn['num_kv_heads'] = attn.get('num_kv_heads', attn['num_heads'])
+ attn['window_size'] = attn.get('window_size', None)
+
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/hgrn2/modeling_hgrn2.py b/fla/models/hgrn2/modeling_hgrn2.py
new file mode 100644
index 0000000000000000000000000000000000000000..f9fec7317941f9d9597fd7ee926ea49b802de6f7
--- /dev/null
+++ b/fla/models/hgrn2/modeling_hgrn2.py
@@ -0,0 +1,422 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+import torch.utils.checkpoint
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_outputs import (BaseModelOutputWithPast,
+ CausalLMOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import logging
+
+from fla.layers.attn import Attention
+from fla.layers.hgrn2 import HGRN2Attention
+from fla.models.hgrn2.configuration_hgrn2 import HGRN2Config
+from fla.models.utils import Cache
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.activations import swiglu_linear
+
+logger = logging.get_logger(__name__)
+
+
+class HGRN2MLP(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = 'swish'
+    ) -> None:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ # the final number of params is `hidden_ratio * hidden_size^2`
+ # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio`
+ if hidden_ratio is None:
+ hidden_ratio = 4
+ if intermediate_size is None:
+ intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
+ intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256)
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False)
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[hidden_act]
+
+ def forward(self, x):
+ y = self.gate_proj(x)
+ gate, y = y.chunk(2, -1)
+ return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias)
+
+
+class HGRN2Block(nn.Module):
+ def __init__(self, config: HGRN2Config, layer_idx: int):
+ super().__init__()
+ self.hidden_size = config.hidden_size
+
+ self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ if config.attn is not None and layer_idx in config.attn['layers']:
+ self.attn = Attention(
+ hidden_size=config.hidden_size,
+ num_heads=config.attn['num_heads'],
+ num_kv_heads=config.attn['num_kv_heads'],
+ window_size=config.attn['window_size'],
+ max_position_embeddings=config.max_position_embeddings,
+ layer_idx=layer_idx
+ )
+ else:
+ self.attn = HGRN2Attention(
+ mode=config.attn_mode,
+ hidden_size=config.hidden_size,
+ num_heads=config.num_heads,
+ expand_ratio=config.expand_ratio,
+ use_short_conv=config.use_short_conv,
+ conv_size=config.conv_size,
+ elementwise_affine=config.elementwise_affine,
+ norm_eps=config.norm_eps,
+ layer_idx=layer_idx
+ )
+ self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.mlp = HGRN2MLP(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ intermediate_size=config.intermediate_size,
+ hidden_act=config.hidden_act
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+        lower_bound: Optional[torch.Tensor] = None,
+ **kwargs,
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+ residual = hidden_states
+ hidden_states = self.attn_norm(hidden_states)
+ hidden_states, attentions, past_key_values = self.attn(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ lower_bound=lower_bound
+ )
+ hidden_states, residual = self.mlp_norm(hidden_states, residual, True)
+ hidden_states = self.mlp(hidden_states)
+ hidden_states = residual + hidden_states
+
+ outputs = (hidden_states, attentions, past_key_values)
+
+ return outputs
+
+
+class HGRN2PreTrainedModel(PreTrainedModel):
+
+ config_class = HGRN2Config
+ supports_gradient_checkpointing = True
+ _no_split_modules = ['HGRN2Block']
+
+ def __init__(self, *inputs, **kwargs):
+ super().__init__(*inputs, **kwargs)
+
+ def _init_weights(
+ self,
+ module: nn.Module,
+ rescale_prenorm_residual: bool = True,
+ num_residuals_per_layer: int = 2,
+ ):
+ if isinstance(module, (nn.Linear, nn.Conv1d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ if rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["o_proj.weight", "down_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ with torch.no_grad():
+ p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers)
+
+
+class HGRN2Model(HGRN2PreTrainedModel):
+
+ def __init__(self, config: HGRN2Config):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+ if config.use_lower_bound:
+ self.lower_bounds = nn.Parameter(torch.zeros(config.num_hidden_layers, config.hidden_size))
+ self.layers = nn.ModuleList([HGRN2Block(config, layer_idx) for layer_idx in range(config.num_hidden_layers)])
+ self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps)
+
+ self.gradient_checkpointing = False
+
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, value):
+ self.embeddings = value
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None, # noqa
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
+ if output_attentions:
+            warnings.warn("`HGRN2Model` does not support `output_attentions` now, setting it to `False`.")
+ output_attentions = False
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ if input_ids is None and inputs_embeds is None:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+ hidden_states = inputs_embeds
+
+ if use_cache and not isinstance(past_key_values, Cache):
+ past_key_values = Cache.from_legacy_cache(past_key_values)
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ logger.warning_once("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
+ use_cache = False
+
+ all_hidden_states = () if output_hidden_states else None
+ all_attns = () if output_attentions else None
+
+ if self.config.use_lower_bound:
+ lower_bounds = self.lower_bounds.softmax(0)
+ lower_bounds = lower_bounds.cumsum(0) - lower_bounds[0]
+ for i, layer in enumerate(self.layers):
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ lower_bound = lower_bounds[i] if self.config.use_lower_bound else None
+ if self.gradient_checkpointing and self.training:
+ hidden_states, attentions, past_key_values = self._gradient_checkpointing_func(
+ layer.__call__,
+ hidden_states,
+ attention_mask,
+ past_key_values,
+ use_cache,
+ output_attentions,
+ lower_bound
+ )
+ else:
+ hidden_states, attentions, past_key_values = layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ lower_bound=lower_bound
+ )
+
+ if output_attentions:
+ all_attns += (attentions,)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if not return_dict:
+ return tuple(i for i in [hidden_states, past_key_values, all_hidden_states, all_attns] if i is not None)
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=past_key_values,
+ hidden_states=all_hidden_states,
+ attentions=all_attns
+ )
+
+
+class HGRN2ForCausalLM(HGRN2PreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = HGRN2Model(config)
+ self.vocab_size = config.vocab_size
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.embeddings = value
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ def generate(self, *args, **kwargs):
+ try:
+ return super().generate(*args, **kwargs)
+ except AttributeError as exception:
+ if 'past_key_values' in str(exception):
+ raise AttributeError(
+ f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, "
+ f"which is not supported for {self.__class__.__name__}. "
+ f"Try another generation strategy instead. "
+ f"For the available generation strategies, check this doc: "
+ f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies"
+ )
+ else:
+ raise exception
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids: torch.LongTensor = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ use_cache: bool = True,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs
+ ):
+ # only last token for `inputs_ids` if the `past_key_values` is passed along.
+ if past_key_values is not None:
+ input_ids = input_ids[:, -1:]
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {'inputs_embeds': inputs_embeds}
+ else:
+ # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+ # recompiles graphs as the stride of the inputs is a guard.
+ # Ref: https://github.com/huggingface/transformers/pull/29114
+ # TODO: use `next_tokens` directly instead.
+ model_inputs = {'input_ids': input_ids.contiguous()}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'past_key_values': past_key_values,
+ 'use_cache': use_cache,
+ 'attention_mask': attention_mask,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ inputs_embeds=inputs_embeds,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict
+ )
+
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
diff --git a/fla/models/linear_attn/__init__.py b/fla/models/linear_attn/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..72d5d022de95afe9dc6cf76d3c2026a6a7f9e7a0
--- /dev/null
+++ b/fla/models/linear_attn/__init__.py
@@ -0,0 +1,14 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.linear_attn.configuration_linear_attn import \
+ LinearAttentionConfig
+from fla.models.linear_attn.modeling_linear_attn import (
+ LinearAttentionForCausalLM, LinearAttentionModel)
+
+AutoConfig.register(LinearAttentionConfig.model_type, LinearAttentionConfig)
+AutoModel.register(LinearAttentionConfig, LinearAttentionModel)
+AutoModelForCausalLM.register(LinearAttentionConfig, LinearAttentionForCausalLM)
+
+__all__ = ['LinearAttentionConfig', 'LinearAttentionForCausalLM', 'LinearAttentionModel']
diff --git a/fla/models/linear_attn/configuration_linear_attn.py b/fla/models/linear_attn/configuration_linear_attn.py
new file mode 100644
index 0000000000000000000000000000000000000000..d1bff79e2ecbd61fc07479b4dafbccc40cfa94d8
--- /dev/null
+++ b/fla/models/linear_attn/configuration_linear_attn.py
@@ -0,0 +1,83 @@
+# -*- coding: utf-8 -*-
+
+from typing import Dict, Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class LinearAttentionConfig(PretrainedConfig):
+
+ model_type = 'linear_attn'
+ keys_to_ignore_at_inference = ['past_key_values']
+
+ def __init__(
+ self,
+ attn_mode: str = "fused_chunk",
+ hidden_size: int = 2048,
+ expand_k: int = 1,
+ expand_v: int = 1,
+ hidden_ratio: Optional[int] = 4,
+ intermediate_size: Optional[int] = None,
+ num_hidden_layers: int = 24,
+ num_heads: int = 4,
+ num_kv_heads: Optional[int] = None,
+ feature_map: str = "elementwise_product",
+ tie_feature_map_qk: bool = False,
+ norm_q: bool = False,
+ norm_k: bool = False,
+ norm_feature_map: bool = False,
+ hidden_act: str = "swish",
+ max_position_embeddings: int = 2048,
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-6,
+ attn: Optional[Dict] = None,
+ use_cache: bool = True,
+ pad_token_id: int = None,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ tie_word_embeddings: bool = False,
+ initializer_range: float = 0.02,
+ fuse_cross_entropy: bool = True,
+ vocab_size: int = 32000,
+ **kwargs
+ ):
+ self.attn_mode = attn_mode
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.num_hidden_layers = num_hidden_layers
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads
+ self.feature_map = feature_map
+ self.tie_feature_map_qk = tie_feature_map_qk
+ self.norm_q = norm_q
+ self.norm_k = norm_k
+        self.norm_feature_map = norm_feature_map
+        self.hidden_act = hidden_act
+ self.max_position_embeddings = max_position_embeddings
+ self.elementwise_affine = elementwise_affine
+ self.norm_eps = norm_eps
+ self.attn = attn
+ self.use_cache = use_cache
+ self.initializer_range = initializer_range
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.vocab_size = vocab_size
+
+ if attn is not None:
+ if not isinstance(attn, Dict):
+ raise ValueError("attn must be a dictionary")
+ if 'layers' not in attn:
+ raise ValueError("Layer indices must be provided to initialize hybrid attention layers")
+ if 'num_heads' not in attn:
+ raise ValueError("Number of heads must be provided to initialize hybrid attention layers")
+ attn['num_kv_heads'] = attn.get('num_kv_heads', attn['num_heads'])
+ attn['window_size'] = attn.get('window_size', None)
+
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/linear_attn/modeling_linear_attn.py b/fla/models/linear_attn/modeling_linear_attn.py
new file mode 100644
index 0000000000000000000000000000000000000000..e1906bfd856a801c4b2073a1ba7094ff93304da5
--- /dev/null
+++ b/fla/models/linear_attn/modeling_linear_attn.py
@@ -0,0 +1,427 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+import torch.utils.checkpoint
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_outputs import (BaseModelOutputWithPast,
+ CausalLMOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import logging
+
+from fla.layers.attn import Attention
+from fla.layers.linear_attn import LinearAttention
+from fla.models.linear_attn.configuration_linear_attn import \
+ LinearAttentionConfig
+from fla.models.utils import Cache
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.activations import swiglu_linear
+
+logger = logging.get_logger(__name__)
+
+
+class LinearAttentionMLP(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = 'swish'
+    ) -> None:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ # the final number of params is `hidden_ratio * hidden_size^2`
+ # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio`
+ if hidden_ratio is None:
+ hidden_ratio = 4
+ if intermediate_size is None:
+ intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
+ intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256)
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False)
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[hidden_act]
+
+ def forward(self, x):
+ y = self.gate_proj(x)
+ gate, y = y.chunk(2, -1)
+ return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias)
+
+
+class LinearAttentionBlock(nn.Module):
+ def __init__(self, config: LinearAttentionConfig, layer_idx: int):
+ super().__init__()
+ self.hidden_size = config.hidden_size
+
+ self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ if config.attn is not None and layer_idx in config.attn['layers']:
+ self.attn = Attention(
+ hidden_size=config.hidden_size,
+ num_heads=config.attn['num_heads'],
+ num_kv_heads=config.attn['num_kv_heads'],
+ window_size=config.attn['window_size'],
+ max_position_embeddings=config.max_position_embeddings,
+ layer_idx=layer_idx
+ )
+ else:
+ self.attn = LinearAttention(
+ mode=config.attn_mode,
+ hidden_size=config.hidden_size,
+ expand_k=config.expand_k,
+ expand_v=config.expand_v,
+ num_heads=config.num_heads,
+ num_kv_heads=config.num_kv_heads,
+ feature_map=config.feature_map,
+ tie_feature_map_qk=config.tie_feature_map_qk,
+ norm_q=config.norm_q,
+ norm_k=config.norm_k,
+ do_feature_map_norm=config.norm_feature_map,
+ elementwise_affine=config.elementwise_affine,
+ norm_eps=config.norm_eps,
+ layer_idx=layer_idx
+ )
+ self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.mlp = LinearAttentionMLP(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ intermediate_size=config.intermediate_size,
+ hidden_act=config.hidden_act
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs,
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+ residual = hidden_states
+        # attention weights are currently not supported for linear attention
+        attn_weights = None
+
+ hidden_states = self.attn_norm(hidden_states)
+ hidden_states = self.attn(hidden_states)
+ hidden_states, residual = self.mlp_norm(hidden_states, residual, True)
+ hidden_states = self.mlp(hidden_states)
+ hidden_states = residual + hidden_states
+
+        # return a 3-tuple to match the interface of the other blocks;
+        # the incoming cache is passed through unchanged
+        outputs = (hidden_states, attn_weights, past_key_values)
+
+        return outputs
+
+
+class LinearAttentionPreTrainedModel(PreTrainedModel):
+
+ config_class = LinearAttentionConfig
+ supports_gradient_checkpointing = True
+ _no_split_modules = ['LinearAttentionBlock']
+
+ def __init__(self, *inputs, **kwargs):
+ super().__init__(*inputs, **kwargs)
+
+ def _init_weights(
+ self,
+ module: nn.Module,
+ rescale_prenorm_residual: bool = True,
+ num_residuals_per_layer: int = 2,
+ ):
+ if isinstance(module, (nn.Linear, nn.Conv1d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ if rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["o_proj.weight", "down_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ with torch.no_grad():
+ p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers)
+
+
+class LinearAttentionModel(LinearAttentionPreTrainedModel):
+
+ def __init__(self, config: LinearAttentionConfig):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+ self.layers = nn.ModuleList([LinearAttentionBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)])
+ self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps)
+
+ self.gradient_checkpointing = False
+
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, value):
+ self.embeddings = value
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None, # noqa
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
+ if output_attentions:
+ warnings.warn(
+ "`LinearAttentionModel` does not support output attention weights now, "
+ "so `output_attentions` is set to `False`."
+ )
+ output_attentions = False
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ if input_ids is None and inputs_embeds is None:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+ hidden_states = inputs_embeds
+
+ if use_cache and not isinstance(past_key_values, Cache):
+ past_key_values = Cache.from_legacy_cache(past_key_values)
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ logger.warning_once("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
+ use_cache = False
+
+ all_hidden_states = () if output_hidden_states else None
+ all_attns = () if output_attentions else None
+
+ for i, layer in enumerate(self.layers):
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if self.gradient_checkpointing and self.training:
+ hidden_states, attentions, past_key_values = self._gradient_checkpointing_func(
+ layer.__call__,
+ hidden_states,
+ attention_mask,
+ past_key_values,
+ use_cache,
+ output_attentions,
+ )
+ else:
+ hidden_states, attentions, past_key_values = layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+
+ if output_attentions:
+ all_attns += (attentions,)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if not return_dict:
+ return tuple(i for i in [hidden_states, past_key_values, all_hidden_states, all_attns] if i is not None)
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=past_key_values,
+ hidden_states=all_hidden_states,
+ attentions=all_attns
+ )
+
+
+class LinearAttentionForCausalLM(LinearAttentionPreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = LinearAttentionModel(config)
+ self.vocab_size = config.vocab_size
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.embeddings = value
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ def generate(self, *args, **kwargs):
+ try:
+ return super().generate(*args, **kwargs)
+ except AttributeError as exception:
+ if 'past_key_values' in str(exception):
+ raise AttributeError(
+ f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, "
+ f"which is not supported for {self.__class__.__name__}. "
+ f"Try another generation strategy instead. "
+ f"For the available generation strategies, check this doc: "
+ f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies"
+ )
+ else:
+ raise exception
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids: torch.LongTensor = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ use_cache: bool = True,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs
+ ):
+ # only last token for `inputs_ids` if the `past_key_values` is passed along.
+ if past_key_values is not None:
+ input_ids = input_ids[:, -1:]
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {'inputs_embeds': inputs_embeds}
+ else:
+ # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+ # recompiles graphs as the stride of the inputs is a guard.
+ # Ref: https://github.com/huggingface/transformers/pull/29114
+ # TODO: use `next_tokens` directly instead.
+ model_inputs = {'input_ids': input_ids.contiguous()}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'past_key_values': past_key_values,
+ 'use_cache': use_cache,
+ 'attention_mask': attention_mask,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ inputs_embeds=inputs_embeds,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict
+ )
+
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
diff --git a/fla/models/mamba/__init__.py b/fla/models/mamba/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..a0eff2ea26f3a11bcf2333002509686eca2289aa
--- /dev/null
+++ b/fla/models/mamba/__init__.py
@@ -0,0 +1,14 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.mamba.configuration_mamba import MambaConfig
+from fla.models.mamba.modeling_mamba import (MambaBlock, MambaForCausalLM,
+ MambaModel)
+
+AutoConfig.register(MambaConfig.model_type, MambaConfig, True)
+AutoModel.register(MambaConfig, MambaModel, True)
+AutoModelForCausalLM.register(MambaConfig, MambaForCausalLM, True)
+
+
+__all__ = ['MambaConfig', 'MambaForCausalLM', 'MambaModel', 'MambaBlock']
diff --git a/fla/models/mamba/configuration_mamba.py b/fla/models/mamba/configuration_mamba.py
new file mode 100644
index 0000000000000000000000000000000000000000..f25d6e3e134c5a549351fbfe256c674ecbbec29b
--- /dev/null
+++ b/fla/models/mamba/configuration_mamba.py
@@ -0,0 +1,166 @@
+# coding=utf-8
+# Copyright 2024 The HuggingFace Inc. team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""MAMBA configuration"""
+
+import math
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class MambaConfig(PretrainedConfig):
+ """
+ This is the configuration class to store the configuration of a [`MambaModel`]. It is used to instantiate a MAMBA
+ model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
+ defaults will yield a similar configuration to that of the MAMBA
+ [state-spaces/mamba-2.8b](https://huggingface.co/state-spaces/mamba-2.8b) architecture.
+
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+ documentation from [`PretrainedConfig`] for more information.
+
+
+ Args:
+ vocab_size (`int`, *optional*):
+ Vocabulary size of the Mamba model.
+ hidden_size (`int`, *optional*):
+ Dimensionality of the embeddings and hidden states. Default: 2048.
+ state_size (`int`, *optional*):
+ Shape of the state space latents. Default: 16.
+ num_hidden_layers (`int`, *optional*):
+ Number of hidden layers in the model. Default: 48.
+ layer_norm_epsilon (`float`, *optional*):
+ The epsilon to use in the layer normalization layers. Default: 1e-5.
+ pad_token_id (`int`, *optional*):
+ Padding token id. Default: 0.
+ bos_token_id (`int`, *optional*):
+ The id of the beginning of sentence token in the vocabulary. Default: 1.
+ eos_token_id (`int`, *optional*):
+ The id of the end of sentence token in the vocabulary. Default: 2.
+ expand (`int`, *optional*):
+ Expanding factor used to determine the intermediate size. Default: 2.
+ conv_kernel (`int`, *optional*):
+ Size of the convolution kernel. Default: 4.
+ use_bias (`bool`, *optional*):
+ Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block. Default: `False`.
+ use_conv_bias (`bool`, *optional*):
+ Whether or not to use bias in the convolution layer of the mixer block. Default: `True`.
+ hidden_act (`str`, *optional*):
+ The non-linear activation function (function or string) in the decoder. Default: `"silu"`.
+ initializer_range (`float`, *optional*):
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices. Default: 0.1.
+ residual_in_fp32 (`bool`, *optional*):
+ Whether or not residuals should be in `float32`.
+ If set to `False` residuals will keep the same `dtype` as the rest of the model. Default: `False`.
+ time_step_rank (`Union[int,str]`, *optional*):
+ Rank of the discretization projection matrix.
+ `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)`. Default: `"auto"`.
+ time_step_scale (`float`, *optional*):
+ Scale used to scale `dt_proj.bias`. Default: 1.0.
+ time_step_min (`float`, *optional*):
+ Minimum `time_step` used to bound `dt_proj.bias`. Default: 0.001.
+ time_step_max (`float`, *optional*):
+ Maximum `time_step` used to bound `dt_proj.bias`. Default: 0.1.
+ time_step_init_scheme (`str`, *optional*):
+ Init scheme used for `dt_proj.weight`. Should be one of `["random", "constant"]`. Default: `"random"`.
+ time_step_floor (`float`, *optional*):
+ Minimum clamping value of the `dt_proj.bias` layer initialization. Default: 0.0001.
+ rescale_prenorm_residual (`bool`, *optional*):
+ Whether or not to rescale `out_proj` weights when initializing. Default: `False`.
+ use_cache (`bool`, *optional*):
+ Whether or not the cache should be used. Default: `True`.
+ fuse_cross_entropy (`bool`, *optional*):
+ Whether to use the fused cross-entropy loss during training. Default: `True`.
+ fuse_norm (`bool`, *optional*):
+ Whether to use the fused RMSNorm implementation. Default: `True`.
+ tie_word_embeddings (`bool`, *optional*):
+ Whether to tie the input and output word embeddings. Default: `False`.
+
+
+ Example:
+
+ ```python
+ >>> from transformers import MambaConfig, MambaModel
+
+ >>> # Initializing a Mamba configuration
+ >>> configuration = MambaConfig()
+
+ >>> # Initializing a model (with random weights) from the configuration
+ >>> model = MambaModel(configuration)
+
+ >>> # Accessing the model configuration
+ >>> configuration = model.config
+ ```"""
+
+ model_type = "mamba"
+
+ def __init__(
+ self,
+ vocab_size: int = 32000,
+ hidden_size: int = 2048,
+ state_size: int = 16,
+ num_hidden_layers: int = 48,
+ layer_norm_epsilon=1e-5,
+ pad_token_id: int = 0,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ expand: int = 2,
+ conv_kernel: int = 4,
+ use_bias: bool = False,
+ use_conv_bias: bool = True,
+ hidden_act: str = "silu",
+ initializer_range: float = 0.1,
+ residual_in_fp32: bool = False,
+ time_step_rank: str = "auto",
+ time_step_scale: float = 1.0,
+ time_step_min: float = 0.001,
+ time_step_max: float = 0.1,
+ time_step_init_scheme: str = "random",
+ time_step_floor: float = 1e-4,
+ rescale_prenorm_residual: bool = False,
+ use_cache: bool = True,
+ fuse_norm: bool = True,
+ fuse_cross_entropy: bool = True,
+ tie_word_embeddings: bool = False,
+ **kwargs,
+ ):
+ self.vocab_size = vocab_size
+ self.hidden_size = hidden_size
+ self.state_size = state_size
+ self.num_hidden_layers = num_hidden_layers
+ self.layer_norm_epsilon = layer_norm_epsilon
+ self.conv_kernel = conv_kernel
+ self.expand = expand
+ self.intermediate_size = int(expand * self.hidden_size)
+ self.bos_token_id = bos_token_id
+ self.eos_token_id = eos_token_id
+ self.pad_token_id = pad_token_id
+ self.use_bias = use_bias
+ self.use_conv_bias = use_conv_bias
+ self.hidden_act = hidden_act
+ self.initializer_range = initializer_range
+ self.time_step_rank = math.ceil(self.hidden_size / 16) if time_step_rank == "auto" else time_step_rank
+ self.time_step_scale = time_step_scale
+ self.time_step_min = time_step_min
+ self.time_step_max = time_step_max
+ self.time_step_init_scheme = time_step_init_scheme
+ self.time_step_floor = time_step_floor
+ self.rescale_prenorm_residual = rescale_prenorm_residual
+ self.residual_in_fp32 = residual_in_fp32
+ self.use_cache = use_cache
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.fuse_norm = fuse_norm
+
+ super().__init__(
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ pad_token_id=pad_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs
+ )
diff --git a/fla/models/mamba/modeling_mamba.py b/fla/models/mamba/modeling_mamba.py
new file mode 100644
index 0000000000000000000000000000000000000000..3ffbd2cd9b5e6d93ceca620679c268bb67624276
--- /dev/null
+++ b/fla/models/mamba/modeling_mamba.py
@@ -0,0 +1,837 @@
+# coding=utf-8
+# Copyright 2024 state-spaces/mamba org and HuggingFace Inc. team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""PyTorch MAMBA model."""
+
+import math
+import warnings
+from dataclasses import dataclass
+from typing import Any, Dict, Optional, Tuple, Union
+
+import torch
+import torch.utils.checkpoint
+from torch import nn
+from transformers.activations import ACT2FN
+from transformers.configuration_utils import PretrainedConfig
+from transformers.generation import GenerationMixin
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import ModelOutput, logging
+
+from fla.models.mamba.configuration_mamba import MambaConfig
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+
+logger = logging.get_logger(__name__)
+
+
+with warnings.catch_warnings():
+ warnings.simplefilter('ignore')
+ try:
+ from mamba_ssm.ops.selective_scan_interface import (mamba_inner_fn,
+ selective_scan_fn)
+ from mamba_ssm.ops.triton.selective_state_update import \
+ selective_state_update
+ except ImportError:
+ selective_state_update, selective_scan_fn, mamba_inner_fn = None, None, None
+
+ try:
+ from causal_conv1d import causal_conv1d_fn, causal_conv1d_update
+ except ImportError:
+ causal_conv1d_update, causal_conv1d_fn = None, None
+ is_fast_path_available = all((
+ selective_state_update,
+ selective_scan_fn,
+ causal_conv1d_fn,
+ causal_conv1d_update,
+ mamba_inner_fn
+ ))
+
+
+class MambaCache:
+ """
+ Cache for mamba model which does not have attention mechanism and key value states.
+
+ Arguments:
+ config (`PretrainedConfig`):
+ The configuration file defining the shape-related attributes required to initialize the static cache.
+ batch_size (`int`):
+ The batch size with which the model will be used. Note that a new instance must be instantiated if a
+ smaller batch size is used.
+ dtype (`torch.dtype`, *optional*, defaults to `torch.float16`):
+ The default `dtype` to use when initializing the layer.
+ device (`torch.device` or `str`, *optional*):
+ The device on which the cache should be initialized. Should be the same as the layer.
+
+ Attributes:
+ dtype: (`torch.dtype`):
+ The default `dtype` used to initialize the cache.
+ intermediate_size: (`int`):
+ Model's intermediate_size taken from config.
+ ssm_state_size: (`int`):
+ Model's state_size taken from config.
+ conv_kernel_size: (`int`):
+ Model's convolution kernel size taken from config
+ conv_states: (`torch.Tensor`):
+ A tensor of shape `[layer_idx, batch_size, intermediate_size, conv_kernel_size]` that holds convolutional states.
+ ssm_states: (`torch.Tensor`):
+ A tensor of shape `[layer_idx, batch_size, intermediate_size, ssm_state_size]` that holds ssm states
+
+ Example:
+
+ ```python
+ >>> from transformers import AutoTokenizer, MambaForCausalLM, MambaCache
+
+ >>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")
+ >>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
+
+ >>> inputs = tokenizer(text="My name is Mamba", return_tensors="pt")
+
+ >>> # Prepare a cache class and pass it to model's forward
+ >>> past_key_values = MambaCache(config=model.config, batch_size=1, device=model.device, dtype=model.dtype)
+ >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True)
+ >>> outputs.past_key_values
+ MambaCache()
+ ```
+ """
+
+ # TODO (joao): remove `=None` in non-optional arguments in v4.46. Remove from `OBJECTS_TO_IGNORE` as well.
+ def __init__(
+ self,
+ config: PretrainedConfig,
+ batch_size: int = None,
+ dtype: torch.dtype = torch.float16,
+ device: Optional[Union[torch.device, str]] = None,
+ max_batch_size: Optional[int] = None,
+ ):
+ if max_batch_size is not None:
+ logger.warning_once(
+ f"The 'max_batch_size' argument of {self.__class__.__name__} is deprecated and will be removed in "
+ "v4.46. Use the more precisely named 'batch_size' argument instead."
+ )
+ self.dtype = dtype
+ self.batch_size = batch_size or max_batch_size
+ self.intermediate_size = config.intermediate_size
+ self.ssm_state_size = config.state_size
+ self.conv_kernel_size = config.conv_kernel
+
+ self.conv_states: torch.Tensor = torch.zeros(
+ config.num_hidden_layers,
+ self.batch_size,
+ self.intermediate_size,
+ self.conv_kernel_size,
+ device=device,
+ dtype=dtype,
+ )
+ self.ssm_states: torch.Tensor = torch.zeros(
+ config.num_hidden_layers,
+ self.batch_size,
+ self.intermediate_size,
+ self.ssm_state_size,
+ device=device,
+ dtype=dtype,
+ )
+
+ torch._dynamo.mark_static_address(self.conv_states)
+ torch._dynamo.mark_static_address(self.ssm_states)
+
+ def update_conv_state(
+ self, layer_idx: int, new_conv_state: torch.Tensor, cache_position: torch.LongTensor
+ ) -> torch.Tensor:
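+ # The conv state is a rolling window over the last `conv_kernel_size` inputs: the roll
+ # shifts the window left by one slot and the newest values are written at `cache_position`
+ # (clamped to the last slot during decoding), e.g. a cached window [x0, x1, x2, x3]
+ # becomes [x1, x2, x3, x4] once x4 arrives.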
+ conv_state = self.conv_states[layer_idx]
+ cache_position = cache_position.clamp(0, self.conv_kernel_size - 1)
+
+ conv_state = conv_state.roll(shifts=-1, dims=-1)
+ conv_state[:, :, cache_position] = new_conv_state.to(conv_state.device)
+ self.conv_states[layer_idx].zero_()
+ self.conv_states[layer_idx] += conv_state
+ return self.conv_states[layer_idx]
+
+ def update_ssm_state(self, layer_idx: int, new_ssm_state: torch.Tensor):
+ self.ssm_states[layer_idx] = new_ssm_state.to(self.ssm_states.device)
+ return self.ssm_states[layer_idx]
+
+ def reset(self):
+ self.conv_states.zero_()
+ self.ssm_states.zero_()
+
+
+class MambaMixer(nn.Module):
+ """
+ Compute ∆, A, B, C, and D the state space parameters and compute the `contextualized_states`.
+ A, D are input independent (see Mamba paper [1] Section 3.5.2 "Interpretation of A" for why A isn't selective)
+ ∆, B, C are input-dependent (this is a key difference between Mamba and the linear time invariant S4,
+ and is why Mamba is called **selective** state spaces)
+ """
+
+ def __init__(self, config: MambaConfig, layer_idx: int):
+ super().__init__()
+ self.config = config
+ self.hidden_size = config.hidden_size
+ self.ssm_state_size = config.state_size
+ self.conv_kernel_size = config.conv_kernel
+ self.intermediate_size = config.intermediate_size
+ self.time_step_rank = int(config.time_step_rank)
+ self.layer_idx = layer_idx
+ self.use_conv_bias = config.use_conv_bias
+ self.conv1d = nn.Conv1d(
+ in_channels=self.intermediate_size,
+ out_channels=self.intermediate_size,
+ bias=config.use_conv_bias,
+ kernel_size=config.conv_kernel,
+ groups=self.intermediate_size,
+ padding=config.conv_kernel - 1,
+ )
+
+ self.activation = config.hidden_act
+ self.act = ACT2FN[config.hidden_act]
+
+ # projection of the input hidden states
+ self.in_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=config.use_bias)
+ # selective projection used to make dt, B and C input dependent
+ self.x_proj = nn.Linear(self.intermediate_size, self.time_step_rank + self.ssm_state_size * 2, bias=False)
+ # time step projection (discretization)
+ self.dt_proj = nn.Linear(self.time_step_rank, self.intermediate_size, bias=True)
+
+ # S4D real initialization. These are not discretized!
+ # The core is to load them, compute the discrete states, then write the updated state. Keeps the memory bounded
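+ # Concretely, A[d, n] = n + 1 for every channel d; storing log(A) lets the forward pass
+ # recover a strictly negative (hence stable) continuous-time A via -exp(A_log).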
+ A = torch.arange(1, self.ssm_state_size + 1, dtype=torch.float32)[None, :]
+ A = A.expand(self.intermediate_size, -1).contiguous()
+
+ self.A_log = nn.Parameter(torch.log(A))
+ self.D = nn.Parameter(torch.ones(self.intermediate_size))
+ self.out_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=config.use_bias)
+ self.use_bias = config.use_bias
+
+ if not is_fast_path_available:
+ logger.warning_once(
+ "The fast path is not available because on of "
+ "`(selective_state_update, selective_scan_fn, causal_conv1d_fn, causal_conv1d_update, mamba_inner_fn)`"
+ " is None. Falling back to the naive implementation. "
+ "To install follow https://github.com/state-spaces/mamba/#installation and"
+ " https://github.com/Dao-AILab/causal-conv1d"
+ )
+
+ def cuda_kernels_forward(
+ self,
+ hidden_states: torch.Tensor,
+ cache_params: Optional[MambaCache] = None,
+ cache_position: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.LongTensor] = None,
+ ):
+ # 1. Gated MLP's linear projection
+ projected_states = self.in_proj(hidden_states).transpose(1, 2)
+
+ if self.training and cache_params is None: # Doesn't support outputting the states -> used for training
+ contextualized_states = mamba_inner_fn(
+ projected_states,
+ self.conv1d.weight,
+ self.conv1d.bias if self.use_conv_bias else None,
+ self.x_proj.weight,
+ self.dt_proj.weight,
+ self.out_proj.weight,
+ self.out_proj.bias.float() if self.use_bias else None,
+ -torch.exp(self.A_log.float()),
+ None, # input-dependent B
+ None, # input-dependent C
+ self.D.float(),
+ delta_bias=self.dt_proj.bias.float(),
+ delta_softplus=True,
+ )
+
+ else:
+ hidden_states, gate = projected_states.chunk(2, dim=1)
+
+ if attention_mask is not None:
+ hidden_states = hidden_states * attention_mask.unsqueeze(1)
+
+ # 2. Convolution sequence transformation
+ conv_weights = self.conv1d.weight.view(self.conv1d.weight.size(0), self.conv1d.weight.size(2))
+ if cache_params is not None and cache_position[0] > 0:
+ hidden_states = causal_conv1d_update(
+ hidden_states.squeeze(-1),
+ cache_params.conv_states[self.layer_idx],
+ conv_weights,
+ self.conv1d.bias,
+ self.activation,
+ )
+ hidden_states = hidden_states.unsqueeze(-1)
+ else:
+ if cache_params is not None:
+ conv_states = nn.functional.pad(
+ hidden_states, (self.conv_kernel_size - hidden_states.shape[-1], 0)
+ )
+ cache_params.update_conv_state(self.layer_idx, conv_states, cache_position)
+ hidden_states = causal_conv1d_fn(
+ hidden_states, conv_weights, self.conv1d.bias, activation=self.activation
+ )
+
+ if attention_mask is not None:
+ hidden_states = hidden_states * attention_mask.unsqueeze(1)
+
+ # 3. State Space Model sequence transformation
+ # 3.a. input varying initialization of time_step, B and C
+ ssm_parameters = self.x_proj(hidden_states.transpose(1, 2))
+ time_step, B, C = torch.split(
+ ssm_parameters, [self.time_step_rank, self.ssm_state_size, self.ssm_state_size], dim=-1
+ )
+ discrete_time_step = self.dt_proj.weight @ time_step.transpose(1, 2)
+
+ A = -torch.exp(self.A_log.float())
+ # 3.c perform the recurrence y ← SSM(A, B, C)(x)
+ time_proj_bias = self.dt_proj.bias.float() if hasattr(self.dt_proj, "bias") else None
+ if cache_params is not None and cache_position[0] > 0:
+ scan_outputs = selective_state_update(
+ cache_params.ssm_states[self.layer_idx],
+ hidden_states[..., 0],
+ discrete_time_step[..., 0],
+ A,
+ B[:, 0],
+ C[:, 0],
+ self.D,
+ gate[..., 0],
+ time_proj_bias,
+ dt_softplus=True,
+ ).unsqueeze(-1)
+ else:
+ scan_outputs, ssm_state = selective_scan_fn(
+ hidden_states,
+ discrete_time_step,
+ A,
+ B.transpose(1, 2),
+ C.transpose(1, 2),
+ self.D.float(),
+ gate,
+ time_proj_bias,
+ delta_softplus=True,
+ return_last_state=True,
+ )
+ if ssm_state is not None and cache_params is not None:
+ cache_params.update_ssm_state(self.layer_idx, ssm_state)
+
+ # 4. Final linear projection
+ contextualized_states = self.out_proj(scan_outputs.transpose(1, 2))
+ return contextualized_states
+
+ def slow_forward(
+ self,
+ input_states,
+ cache_params: Optional[MambaCache] = None,
+ cache_position: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.LongTensor] = None
+ ):
+ batch_size, seq_len, _ = input_states.shape
+ dtype = input_states.dtype
+ # 1. Gated MLP's linear projection
+ # [batch, 2 * intermediate_size, seq_len]
+ projected_states = self.in_proj(input_states).transpose(1, 2)
+ hidden_states, gate = projected_states.chunk(2, dim=1)
+
+ if attention_mask is not None:
+ hidden_states = hidden_states * attention_mask.unsqueeze(1)
+
+ # 2. Convolution sequence transformation
+ if cache_params is not None:
+ ssm_state = cache_params.ssm_states[self.layer_idx].clone()
+ ssm_state = ssm_state.to(hidden_states.device)
+ # use `cache_position.shape[0]` to check whether we are in prefill
+ # stage, it's equivalent to check `cache_position[0] == 0`, which
+ # breaks dynamo fullgraph constraints
+ if cache_position.shape[0] == self.conv_kernel_size:
+ conv_state = nn.functional.pad(
+ hidden_states,
+ (self.conv_kernel_size - hidden_states.shape[-1], 0)
+ )
+
+ cache_params.update_conv_state(self.layer_idx, conv_state, cache_position)
+ # [batch, intermediate_size, seq_len]
+ hidden_states = self.act(self.conv1d(hidden_states)[..., :seq_len])
+ else:
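+ # Single-token decoding: the cached window already holds the last `conv_kernel_size`
+ # inputs, so the depthwise convolution reduces to a per-channel dot product between
+ # the window and the kernel weights.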
+ conv_state = cache_params.update_conv_state(self.layer_idx, hidden_states, cache_position)
+ hidden_states = torch.sum(conv_state * self.conv1d.weight[:, 0, :], dim=-1)
+ if self.use_conv_bias:
+ hidden_states += self.conv1d.bias
+ # [batch, intermediate_size, 1] : decoding
+ hidden_states = self.act(hidden_states).to(dtype).unsqueeze(-1)
+ else:
+ ssm_state = torch.zeros(
+ (batch_size, self.intermediate_size, self.ssm_state_size),
+ device=hidden_states.device, dtype=dtype
+ )
+ # [batch, intermediate_size, seq_len]
+ hidden_states = self.act(self.conv1d(hidden_states)[..., :seq_len])
+
+ if attention_mask is not None:
+ hidden_states = hidden_states * attention_mask.unsqueeze(1)
+
+ # 3. State Space Model sequence transformation
+ # 3.a. Selection: [batch, seq_len, self.time_step_rank + self.ssm_state_size * 2]
+ ssm_parameters = self.x_proj(hidden_states.transpose(1, 2))
+ time_step, B, C = torch.split(
+ ssm_parameters, [self.time_step_rank, self.ssm_state_size, self.ssm_state_size], dim=-1
+ )
+ # [batch, seq_len, intermediate_size]
+ discrete_time_step = self.dt_proj(time_step)
+ # [batch, intermediate_size, seq_len]
+ discrete_time_step = nn.functional.softplus(discrete_time_step).transpose(1, 2)
+
+ # 3.b. Discretization: B and C to [batch, seq_len, intermediate_size, ssm_state_size] (SRAM)
+ # [intermediate_size, ssm_state_size]
+ A = -torch.exp(self.A_log.float())
+ # [batch, intermediate_size, seq_len, ssm_state_size]
+ discrete_A = torch.exp(A[None, :, None, :] * discrete_time_step[:, :, :, None])
+ # [batch, intermediate_size, seq_len, ssm_state_size]
+ discrete_B = discrete_time_step[:, :, :, None] * B[:, None, :, :].float()
+ deltaB_u = discrete_B * hidden_states[:, :, :, None].float()
+
+ # 3.c perform the recurrence y ← SSM(A, B, C)(x)
+ scan_outputs = []
+ for i in range(seq_len):
+ # [batch, intermediate_size, ssm_state]
+ ssm_state = discrete_A[:, :, i, :] * ssm_state + deltaB_u[:, :, i, :]
+ # [batch, intermediate_size, 1]
+ scan_output = torch.matmul(ssm_state.to(dtype), C[:, i, :].unsqueeze(-1))
+ scan_outputs.append(scan_output[:, :, 0])
+ # [batch, intermediate_size, seq_len]
+ scan_output = torch.stack(scan_outputs, dim=-1)
+ scan_output = scan_output + (hidden_states * self.D[None, :, None])
+ scan_output = (scan_output * self.act(gate))
+
+ if cache_params is not None:
+ cache_params.ssm_states[self.layer_idx].copy_(ssm_state)
+
+ # 4. Final linear projection
+ # [batch, seq_len, hidden_size]
+ contextualized_states = self.out_proj(scan_output.transpose(1, 2))
+ return contextualized_states
+
+ def forward(
+ self,
+ hidden_states,
+ cache_params: Optional[MambaCache] = None,
+ cache_position: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.LongTensor] = None,
+ ):
+ if is_fast_path_available and "cuda" in self.x_proj.weight.device.type:
+ return self.cuda_kernels_forward(hidden_states, cache_params, cache_position, attention_mask)
+ return self.slow_forward(hidden_states, cache_params, cache_position, attention_mask)
+
+
+class MambaBlock(nn.Module):
+ def __init__(self, config, layer_idx):
+ super().__init__()
+ self.config = config
+ self.layer_idx = layer_idx
+ self.residual_in_fp32 = config.residual_in_fp32
+ self.norm = RMSNorm(config.hidden_size, eps=config.layer_norm_epsilon)
+ self.mixer = MambaMixer(config, layer_idx=layer_idx)
+
+ def forward(
+ self,
+ hidden_states,
+ cache_params: Optional[MambaCache] = None,
+ cache_position: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.LongTensor] = None,
+ ):
+ residual = hidden_states
+ hidden_states = self.norm(hidden_states)
+ if self.residual_in_fp32:
+ residual = residual.to(torch.float32)
+
+ hidden_states = self.mixer(
+ hidden_states, cache_params=cache_params, cache_position=cache_position, attention_mask=attention_mask
+ )
+ hidden_states = residual + hidden_states
+ return hidden_states
+
+
+class MambaPreTrainedModel(PreTrainedModel):
+ """
+ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
+ models.
+ """
+
+ config_class = MambaConfig
+ base_model_prefix = "backbone"
+ _no_split_modules = ["MambaBlock", "MambaMixer"]
+ supports_gradient_checkpointing = True
+ _is_stateful = True
+
+ def _init_weights(self, module):
+ """Initialize the weights."""
+ if isinstance(module, MambaMixer):
+ module.A_log._no_weight_decay = True
+ module.D._no_weight_decay = True
+
+ dt_init_std = self.config.time_step_rank**-0.5 * self.config.time_step_scale
+ if self.config.time_step_init_scheme == "constant":
+ nn.init.constant_(module.dt_proj.weight, dt_init_std)
+ elif self.config.time_step_init_scheme == "random":
+ nn.init.uniform_(module.dt_proj.weight, -dt_init_std, dt_init_std)
+
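+ # dt is sampled log-uniformly in [time_step_min, time_step_max]; the inverse softplus
+ # below sets dt_proj.bias so that softplus(bias) reproduces dt at initialization.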
+ dt = torch.exp(
+ torch.rand(self.config.intermediate_size)
+ * (math.log(self.config.time_step_max) - math.log(self.config.time_step_min))
+ + math.log(self.config.time_step_min)
+ ).clamp(min=self.config.time_step_floor)
+ # Inverse of softplus: https://github.com/pytorch/pytorch/issues/72759
+ inv_dt = dt + torch.log(-torch.expm1(-dt))
+ with torch.no_grad():
+ module.dt_proj.bias.copy_(inv_dt)
+ module.dt_proj.bias._no_reinit = True
+
+ if isinstance(module, nn.Linear):
+ if module.bias is not None:
+ if not getattr(module.bias, "_no_reinit", False):
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, std=self.config.initializer_range)
+
+ if self.config.rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["out_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ nn.init.kaiming_uniform_(p, a=math.sqrt(5))
+ with torch.no_grad():
+ p /= math.sqrt(self.config.num_hidden_layers)
+
+
+@dataclass
+class MambaOutput(ModelOutput):
+ """
+ Class for the MAMBA model outputs.
+
+ Args:
+ last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
+ Sequence of hidden-states at the output of the last layer of the model.
+ cache_params (`MambaCache`):
+ The state of the model at the last time step. Can be used in a forward method with the next `input_ids` to
+ avoid providing the old `input_ids`.
+
+ Includes both the State space model state matrices after the selective scan, and the Convolutional states
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*,
+ returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
+
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
+ """
+
+ last_hidden_state: Optional[torch.FloatTensor] = None
+ cache_params: Optional[MambaCache] = None
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
+
+
+@dataclass
+class MambaCausalLMOutput(ModelOutput):
+ """
+ Base class for causal language model (or autoregressive) outputs.
+
+ Args:
+ loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
+ Language modeling loss (for next-token prediction).
+ logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
+ Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
+ cache_params (`MambaCache`):
+ The state of the model at the last time step. Can be used in a forward method with the next `input_ids` to
+ avoid providing the old `input_ids`.
+
+ Includes both the State space model state matrices after the selective scan, and the Convolutional states
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*,
+ returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
+
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
+ """
+
+ loss: Optional[torch.FloatTensor] = None
+ logits: Optional[torch.FloatTensor] = None
+ cache_params: Optional[MambaCache] = None
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
+
+
+class MambaModel(MambaPreTrainedModel):
+ def __init__(self, config):
+ super().__init__(config)
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
+ self.layers = nn.ModuleList([MambaBlock(config, layer_idx=idx) for idx in range(config.num_hidden_layers)])
+
+ self.gradient_checkpointing = False
+ self.norm_f = RMSNorm(config.hidden_size, eps=config.layer_norm_epsilon)
+ # Initialize weights and apply final processing
+ self._register_load_state_dict_pre_hook(self.load_hook)
+ self.post_init()
+
+ def load_hook(self, state_dict, prefix, *args):
+ for k in state_dict:
+ if "embedding." in k:
+ state_dict[k.replace("embedding.", "embeddings.")] = state_dict.pop(k)
+ break
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, new_embeddings):
+ self.embeddings = new_embeddings
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ inputs_embeds: Optional[torch.LongTensor] = None,
+ cache_params: Optional[MambaCache] = None,
+ use_cache: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ cache_position: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.LongTensor] = None,
+ ) -> Union[Tuple, MambaOutput]:
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ if (input_ids is None) ^ (inputs_embeds is not None): # ^ is python for xor
+ raise ValueError(
+ "You cannot specify both input_ids and inputs_embeds at the same time, and must specify either one"
+ )
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ use_cache = False
+
+ if use_cache:
+ if cache_params is None:
+ cache_params = MambaCache(
+ self.config, inputs_embeds.size(0), device=inputs_embeds.device, dtype=inputs_embeds.dtype
+ )
+ cache_position = torch.arange(0, self.config.conv_kernel, device=inputs_embeds.device)
+ elif cache_position is None:
+ # cases where we do a manual forward instead of using `model.generate`, which would
+ # initialize `cache_position` and make sure it is not None; throw an error here instead
+ # of trying to guess the current cache position
+ raise ValueError(
+ "You have to specify the `cache_position` manually when `use_cache=True` and `cache_params` is passed, "
+ "you don't have to pass a `cache_params` if you are in prefilling stage because in that case it will "
+ "be initialized for you automatically"
+ )
+ else:
+ cache_params = None
+
+ hidden_states = inputs_embeds
+ all_hidden_states = () if output_hidden_states else None
+ for mixer_block in self.layers:
+ if self.gradient_checkpointing and self.training:
+ hidden_states = self._gradient_checkpointing_func(
+ mixer_block.__call__, hidden_states, cache_params, cache_position, attention_mask
+ )
+ else:
+ hidden_states = mixer_block(
+ hidden_states,
+ cache_params=cache_params,
+ cache_position=cache_position,
+ attention_mask=attention_mask,
+ )
+
+ if output_hidden_states:
+ all_hidden_states = all_hidden_states + (hidden_states,)
+
+ hidden_states = self.norm_f(hidden_states)
+
+ if output_hidden_states:
+ all_hidden_states = all_hidden_states + (hidden_states,)
+
+ if not return_dict:
+ return tuple(v for v in [hidden_states, cache_params, all_hidden_states] if v is not None)
+
+ return MambaOutput(
+ last_hidden_state=hidden_states,
+ cache_params=cache_params if use_cache else None,
+ hidden_states=all_hidden_states,
+ )
+
+
+class MambaForCausalLM(MambaPreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.backbone = MambaModel(config)
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def get_input_embeddings(self):
+ return self.backbone.get_input_embeddings()
+
+ def set_input_embeddings(self, new_embeddings):
+ return self.backbone.set_input_embeddings(new_embeddings)
+
+ def _update_model_kwargs_for_generation(
+ self, outputs: ModelOutput,
+ model_kwargs: Dict[str, Any],
+ num_new_tokens: int = 1,
+ **kwargs
+ ) -> Dict[str, Any]:
+ model_kwargs["cache_params"] = outputs.get("cache_params", None)
+ if (
+ model_kwargs.get("use_cache", True)
+ and "cache_position" in model_kwargs
+ and model_kwargs["cache_position"] is not None
+ ):
+ model_kwargs["cache_position"] = model_kwargs["cache_position"][-1:] + num_new_tokens
+
+ if "attention_mask" in model_kwargs:
+ attention_mask = model_kwargs["attention_mask"]
+ model_kwargs["attention_mask"] = torch.cat(
+ [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
+ )
+
+ return model_kwargs
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids,
+ inputs_embeds=None,
+ use_cache=None,
+ cache_params: Optional[MambaCache] = None,
+ cache_position: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.LongTensor] = None,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs,
+ ):
+ if use_cache:
+ # `cache_position` should have been initialized in `generate`
+ if cache_position is None:
+ raise ValueError(
+ "`cache_position` should not be None as it should have been initialized in "
+ "`model.generate`, you are responsible for passing in a valid `cache_position` if "
+ "you are calling `prepare_inputs_for_generation` directly with `use_cache=True`"
+ )
+ if cache_position[0] > 0:
+ input_ids = input_ids[:, -1].unsqueeze(-1)
+
+ if attention_mask is not None:
+ attention_mask = None
+
+ else:
+ # we initialize the `cache_position` to full size of `conv_states` at prefill stage
+ # considering padding will be applied when input length is shorter, and truncation
+ # will be applied when it is longer, so it will be equivalent to always have it match
+ # the length of `cache_params.conv_states`, which is `config.conv_kernel`
+ cache_position = torch.arange(0, self.config.conv_kernel, device=input_ids.device)
+
+ if inputs_embeds is not None and cache_params is None:
+ model_inputs = {"inputs_embeds": inputs_embeds}
+ else:
+ model_inputs = {"input_ids": input_ids.contiguous()}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'cache_params': cache_params,
+ 'use_cache': use_cache,
+ 'cache_position': cache_position,
+ 'attention_mask': attention_mask,
+ 'num_logits_to_keep': num_logits_to_keep,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.LongTensor] = None,
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ cache_params: Optional[MambaCache] = None,
+ labels: Optional[torch.LongTensor] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ use_cache: Optional[bool] = None,
+ cache_position: Optional[torch.Tensor] = None,
+ num_logits_to_keep: Optional[int] = 0,
+ **kwargs, # for now we need this for generation
+ ) -> Union[Tuple, MambaCausalLMOutput]:
+ r"""
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+ Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
+ `labels = input_ids`. Indices are selected in `[-100, 0, ..., config.vocab_size]`. All labels set to `-100`
+ are ignored (masked); the loss is only computed for labels in `[0, ..., config.vocab_size]`.
+ """
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ mamba_outputs = self.backbone(
+ input_ids,
+ cache_params=cache_params,
+ inputs_embeds=inputs_embeds,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ use_cache=use_cache,
+ cache_position=cache_position,
+ attention_mask=attention_mask,
+ )
+ hidden_states = mamba_outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
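+ # Shift labels left by one so that position t predicts token t + 1; the vacated last
+ # position is filled with `ignore_index` and therefore excluded from the loss.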
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + mamba_outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return MambaCausalLMOutput(
+ loss=loss,
+ logits=logits,
+ cache_params=mamba_outputs.cache_params,
+ hidden_states=mamba_outputs.hidden_states,
+ )
diff --git a/fla/models/mamba2/__init__.py b/fla/models/mamba2/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..0b8ac62a700590e06d1e524979b2f21353aa5188
--- /dev/null
+++ b/fla/models/mamba2/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.mamba2.configuration_mamba2 import Mamba2Config
+from fla.models.mamba2.modeling_mamba2 import Mamba2ForCausalLM, Mamba2Model
+
+AutoConfig.register(Mamba2Config.model_type, Mamba2Config, True)
+AutoModel.register(Mamba2Config, Mamba2Model, True)
+AutoModelForCausalLM.register(Mamba2Config, Mamba2ForCausalLM, True)
+
+
+__all__ = ['Mamba2Config', 'Mamba2ForCausalLM', 'Mamba2Model']
diff --git a/fla/models/mamba2/configuration_mamba2.py b/fla/models/mamba2/configuration_mamba2.py
new file mode 100644
index 0000000000000000000000000000000000000000..264c95bfa2a8ba697c8e2d70ab65758b70ca8888
--- /dev/null
+++ b/fla/models/mamba2/configuration_mamba2.py
@@ -0,0 +1,168 @@
+# Copyright 2024 The HuggingFace Inc. team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""MAMBA2 configuration"""
+
+import math
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class Mamba2Config(PretrainedConfig):
+ """
+ This is the configuration class to store the configuration of a [`Mamba2Model`]. It is used to instantiate a MAMBA2
+ model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
+ defaults will yield a similar configuration to that of the MAMBA2
+ [state-spaces/mamba2-2.8b](https://huggingface.co/state-spaces/mamba2-2.8b) architecture.
+
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+ documentation from [`PretrainedConfig`] for more information.
+
+
+ Args:
+ num_heads (`int`, *optional*, defaults to 64):
+ Number of heads for the evolution matrices of mamba 2.
+ head_dim (`int`, *optional*, defaults to 64):
+ Dimension of each head.
+ vocab_size (`int`, *optional*, defaults to 32000):
+ Vocabulary size of the MAMBA2 model. Defines the number of different tokens that can be represented by the
+ `inputs_ids` passed when calling [`Mamba2Model`].
+ hidden_size (`int`, *optional*, defaults to 2048):
+ Dimensionality of the embeddings and hidden states.
+ state_size (`int`, *optional*, defaults to 128): shape of the state space latents.
+ num_hidden_layers (`int`, *optional*, defaults to 48):
+ Number of hidden layers in the model.
+ layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
+ The epsilon to use in the layer normalization layers.
+ pad_token_id (`int`, *optional*, defaults to 0):
+ Padding token id.
+ bos_token_id (`int`, *optional*, defaults to 1):
+ The id of the beginning of sentence token in the vocabulary.
+ eos_token_id (`int`, *optional*, defaults to 2):
+ The id of the end of sentence token in the vocabulary.
+ expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size.
+ conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel.
+ n_groups (`int`, *optional*, defaults to 1):
+ Number of groups for the evolution matrices of mamba 2.
+ use_bias (`bool`, *optional*, defaults to `False`):
+ Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block
+ use_conv_bias (`bool`, *optional*, defaults to `True`):
+ Whether or not to use bias in the convolution layer of the mixer block.
+ hidden_act (`str`, *optional*, defaults to `"silu"`):
+ The non-linear activation function (function or string) in the decoder.
+ initializer_range (`float`, *optional*, defaults to 0.1):
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+ residual_in_fp32 (`bool`, *optional*, defaults to `True`):
+ Whether or not residuals should be in `float32`.
+ If set to `False` residuals will keep the same `dtype` as the rest of the model
+ time_step_rank (`Union[int,str]`, *optional*, defaults to `"auto"`):
+ Rank of the discretization projection matrix.
+ `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)`
+ time_step_min (`float`, *optional*, defaults to 0.001):
+ Minimum `time_step` used to bound `dt_proj.bias`.
+ time_step_max (`float`, *optional*, defaults to 0.1):
+ Maximum `time_step` used to bound `dt_proj.bias`.
+ time_step_floor (`float`, *optional*, defaults to 0.0001):
+ Minimum clamping value of the `dt_proj.bias` layer initialization.
+ time_step_limit (`tuple`, *optional*, defaults to `(0.0, inf)`):
+ Accepted range of time step values.
+ rescale_prenorm_residual (`bool`, *optional*, defaults to `True`):
+ Whether or not to rescale `out_proj` weights when initializing.
+ use_cache (`bool`, *optional*, defaults to `True`):
+ Whether or not the cache should be used.
+ rms_norm (`bool`, *optional*, defaults to `True`):
+ Whether to use RMS norm or not.
+ chunk_size (`int`, *optional*, defaults to 256):
+ Size of the chunks that will comprise the sequence.
+ fuse_cross_entropy (`bool`, *optional*, defaults to `True`):
+ Whether to use the fused cross-entropy loss during training.
+ tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+ Whether to tie word embeddings or not.
+ """
+
+ model_type = "mamba2"
+
+ def __init__(
+ self,
+ num_heads: int = 64,
+ head_dim: int = 64,
+ vocab_size: int = 32000,
+ hidden_size: int = 2048,
+ state_size: int = 128,
+ num_hidden_layers: int = 48,
+ layer_norm_epsilon: float = 1e-5,
+ pad_token_id: int = 0,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ expand: int = 2,
+ conv_kernel: int = 4,
+ n_groups: int = 1,
+ use_bias: bool = False,
+ use_conv_bias: bool = True,
+ hidden_act: str = "silu",
+ initializer_range: float = 0.1,
+ residual_in_fp32: bool = True,
+ time_step_rank: str = "auto",
+ time_step_min: float = 0.001,
+ time_step_max: float = 0.1,
+ time_step_floor: float = 1e-4,
+ time_step_limit=(0.0, float("inf")),
+ rescale_prenorm_residual: bool = True,
+ use_cache: bool = True,
+ rms_norm: bool = True,
+ chunk_size: int = 256,
+ fuse_cross_entropy: bool = True,
+ tie_word_embeddings: bool = False,
+ **kwargs,
+ ):
+ self.vocab_size = vocab_size
+ self.hidden_size = hidden_size
+ self.state_size = state_size
+ self.num_hidden_layers = num_hidden_layers
+ self.layer_norm_epsilon = layer_norm_epsilon
+ self.conv_kernel = conv_kernel
+ self.expand = expand
+
+ self.bos_token_id = bos_token_id
+ self.eos_token_id = eos_token_id
+ self.pad_token_id = pad_token_id
+ self.use_bias = use_bias
+ self.use_conv_bias = use_conv_bias
+ self.hidden_act = hidden_act
+ self.initializer_range = initializer_range
+ self.time_step_rank = (
+ math.ceil(self.hidden_size / 16)
+ if time_step_rank == "auto"
+ else time_step_rank
+ )
+ self.time_step_min = time_step_min
+ self.time_step_max = time_step_max
+ self.time_step_floor = time_step_floor
+ self.rescale_prenorm_residual = rescale_prenorm_residual
+ self.residual_in_fp32 = residual_in_fp32
+ self.use_cache = use_cache
+ self.n_groups = n_groups
+ self.num_heads = num_heads
+ self.head_dim = head_dim
+ self.rms_norm = rms_norm
+ self.chunk_size = chunk_size
+ self.time_step_limit = time_step_limit
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.tie_word_embeddings = tie_word_embeddings
+
+ super().__init__(
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ pad_token_id=pad_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/mamba2/modeling_mamba2.py b/fla/models/mamba2/modeling_mamba2.py
new file mode 100644
index 0000000000000000000000000000000000000000..79431927a54a3d6826a4a7f9a51007dac54357eb
--- /dev/null
+++ b/fla/models/mamba2/modeling_mamba2.py
@@ -0,0 +1,1030 @@
+# Copyright 2024 state-spaces/mamba2 org and HuggingFace Inc. team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""PyTorch MAMBA2 model."""
+
+import math
+import warnings
+from dataclasses import dataclass
+from typing import Optional, Tuple, Union
+
+import torch
+import torch.utils.checkpoint
+from torch import nn
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import ModelOutput, logging
+
+from fla.models.mamba2.configuration_mamba2 import Mamba2Config
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.layernorm_gated import RMSNormGated
+
+logger = logging.get_logger(__name__)
+
+with warnings.catch_warnings():
+ warnings.simplefilter('ignore')
+ try:
+ from mamba_ssm.ops.triton.selective_state_update import \
+ selective_state_update
+ from mamba_ssm.ops.triton.ssd_combined import (
+ mamba_chunk_scan_combined, mamba_split_conv1d_scan_combined)
+ except ImportError:
+ (
+ selective_state_update,
+ mamba_chunk_scan_combined,
+ mamba_split_conv1d_scan_combined,
+ ) = (None, None, None)
+ try:
+ from causal_conv1d import causal_conv1d_fn, causal_conv1d_update
+ except ImportError:
+ causal_conv1d_update, causal_conv1d_fn = None, None
+ is_fast_path_available = all((
+ selective_state_update,
+ causal_conv1d_fn,
+ causal_conv1d_update
+ ))
+
+
+def pad_tensor_by_size(input_tensor: torch.Tensor, pad_size: int):
+ """
+ Pads the input tensor with `pad_size` zeros on the seq_len dim (dim=1).
+
+ Assumes the input tensor has either 3 or 4 dimensions.
+ """
+ pad_shape = (0, 0, 0, 0, 0, pad_size, 0, 0) if len(input_tensor.shape) == 4 else (0, 0, 0, pad_size, 0, 0)
+
+ return torch.nn.functional.pad(input_tensor, pad_shape, mode="constant", value=0)
+
+
+def reshape_into_chunks(input_tensor, pad_size, chunk_size):
+ """
+ Padding input_tensor with `pad_size` on the seq_len dim (dim=1) and
+ simultaneously splitting it into chunk sequences.
+
+ Assumes the input tensor has either 3 or 4 dimensions.
+ """
+ # [bsz, seq_len, ...] -> [bsz, seq_len multiple of chunk_size, ...]
+ input_tensor = pad_tensor_by_size(input_tensor, pad_size)
+
+ if len(input_tensor.shape) == 3:
+ # [bsz, seq_len multiple of chunk_size, num_heads] -> [bsz, -1, chunk_size, num_heads]
+ return input_tensor.reshape(input_tensor.shape[0], -1, chunk_size, input_tensor.shape[2])
+ else:
+ # [bsz, seq_len multiple of chunk_size, num_heads, head_dim or state_size] ->
+ # [bsz, -1, chunk_size, num_heads, head_dim or state_size]
+ return input_tensor.reshape(
+ input_tensor.shape[0], -1, chunk_size, input_tensor.shape[2], input_tensor.shape[3]
+ )
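+# Illustrative shapes (not part of the upstream code): with chunk_size=256, a
+# [bsz, 1000, num_heads] tensor gets pad_size=24, is padded to [bsz, 1024, num_heads]
+# and is reshaped to [bsz, 4, 256, num_heads] by `reshape_into_chunks`.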
+
+
+def segment_sum(input_tensor):
+ """
+ More stable segment sum calculation. Uses cumulative sums and masking instead of direct subtractions.
+ """
+ chunk_size = input_tensor.size(-1)
+ # 1. expand input tensor to have an additional dimension and repeat along that dimension
+ # [..., chunk_size] -> [..., chunk_size, chunk_size]
+ input_tensor = input_tensor[..., None].expand(*input_tensor.size(), chunk_size)
+ # 2. create a strictly lower triangular mask (diagonal excluded) to zero out elements on and above the diagonal
+ mask = torch.tril(torch.ones(chunk_size, chunk_size, device=input_tensor.device, dtype=torch.bool), diagonal=-1)
+ input_tensor = input_tensor.masked_fill(~mask, 0)
+ # 3. compute actual cumsum
+ tensor_segsum = torch.cumsum(input_tensor, dim=-2)
+
+ # 4. apply mask to keep only the lower triangular part of the cumulative sum result (incl diagonal this time)
+ mask = torch.tril(torch.ones(chunk_size, chunk_size, device=input_tensor.device, dtype=torch.bool), diagonal=0)
+ tensor_segsum = tensor_segsum.masked_fill(~mask, -torch.inf)
+ return tensor_segsum
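+# Illustrative example (not part of the upstream code): for a 1-D input [a, b, c],
+# segment_sum returns
+#     [[0,     -inf, -inf],
+#      [b,      0,   -inf],
+#      [b + c,  c,    0  ]]
+# i.e. entry (i, j) holds x[j + 1] + ... + x[i]; the -inf entries become zeros once
+# exponentiated in the chunked scan.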
+
+
+class Mamba2Cache:
+ """
+ Arguments:
+ config: Mamba2Config
+ batch_size: int
+ dtype: torch.dtype
+ device: torch.device
+
+ Attributes:
+ seqlen_offset: int
+ dtype: torch.dtype
+ conv_states: Dict[int, torch.Tensor] # layer_idx -> [batch_size, intermediate_size + 2 * n_groups * state_size, conv_kernel_size]
+ ssm_states: Dict[int, torch.Tensor] # layer_idx -> [batch_size, num_heads, head_dim, state_size]
+ """
+
+ def __init__(
+ self,
+ config: Mamba2Config,
+ batch_size: int,
+ dtype: torch.dtype = torch.float16,
+ device: Optional[str] = None,
+ ):
+ self.seqlen_offset = 0
+ self.dtype = dtype
+ self.conv_kernel_size = config.conv_kernel
+ self.intermediate_size = int(config.expand * config.hidden_size)
+
+ self.conv_states = {
+ i: torch.zeros(
+ batch_size,
+ self.intermediate_size + 2 * config.n_groups * config.state_size,
+ self.conv_kernel_size,
+ device=device,
+ dtype=dtype,
+ )
+ for i in range(config.num_hidden_layers)
+ }
+ self.ssm_states = {
+ i: torch.zeros(
+ batch_size,
+ config.num_heads,
+ config.head_dim,
+ config.state_size,
+ device=device,
+ dtype=dtype,
+ )
+ for i in range(config.num_hidden_layers)
+ }
+ self.activation = config.hidden_act
+ self.act = ACT2FN[config.hidden_act]
+
+ def update_conv_state(
+ self,
+ layer_idx: int,
+ new_conv_state: torch.Tensor,
+ cache_position: torch.LongTensor,
+ ) -> torch.Tensor:
+ conv_state = self.conv_states[layer_idx]
+ cache_position = cache_position.clamp(0, self.conv_kernel_size - 1)
+
+ conv_state = conv_state.roll(shifts=-1, dims=-1)
+ conv_state[:, :, cache_position] = new_conv_state.to(conv_state.device)
+ self.conv_states[layer_idx].zero_()
+ self.conv_states[layer_idx] += conv_state
+ return self.conv_states[layer_idx]
+
+ def reset(self):
+ for layer_idx in self.conv_states:
+ self.conv_states[layer_idx].zero_()
+ self.ssm_states[layer_idx].zero_()
+
+
+class Mamba2Mixer(nn.Module):
+ """
+ Compute ∆, A, B, C, and D the state space parameters and compute the `contextualized_states`.
+ A, D are input independent (see Mamba paper [1] Section 3.5.2 "Interpretation of A" for why A isn't selective)
+ ∆, B, C are input-dependent (this is a key difference between Mamba and the linear time invariant S4,
+ and is why Mamba is called **selective** state spaces)
+ """
+
+ def __init__(self, config: Mamba2Config, layer_idx: int):
+ super().__init__()
+ self.num_heads = config.num_heads
+ self.hidden_size = config.hidden_size
+ self.ssm_state_size = config.state_size
+ self.conv_kernel_size = config.conv_kernel
+ self.intermediate_size = int(config.expand * self.hidden_size)
+ self.time_step_rank = int(config.time_step_rank)
+ self.layer_idx = layer_idx
+ self.use_conv_bias = config.use_conv_bias
+ self.activation = config.hidden_act
+ self.act = ACT2FN[config.hidden_act]
+
+ self.layer_norm_epsilon = config.layer_norm_epsilon
+ self.rms_norm = config.rms_norm
+
+ self.n_groups = config.n_groups
+ self.head_dim = config.head_dim
+ self.chunk_size = config.chunk_size
+
+ self.time_step_limit = config.time_step_limit
+ self.time_step_min = config.time_step_min
+ self.time_step_max = config.time_step_max
+
+ self.conv_dim = self.intermediate_size + 2 * self.n_groups * self.ssm_state_size
+ self.conv1d = nn.Conv1d(
+ in_channels=self.conv_dim,
+ out_channels=self.conv_dim,
+ bias=config.use_conv_bias,
+ kernel_size=config.conv_kernel,
+ groups=self.conv_dim,
+ padding=config.conv_kernel - 1,
+ )
+
+ # projection of the input hidden states
+ projection_size = self.intermediate_size + self.conv_dim + self.num_heads
+ self.in_proj = nn.Linear(
+ self.hidden_size,
+ projection_size,
+ bias=config.use_bias,
+ )
+ # selective projection used to make dt, B and C input dependent
+
+ # time step projection (discretization)
+ # instantiate once and copy inv_dt in init_weights of PretrainedModel
+ self.dt_bias = nn.Parameter(torch.ones(self.num_heads))
+
+ # S4D real initialization. These are not discretized!
+ # The core is to load them, compute the discrete states, then write the updated state. Keeps the memory bounded
+ A = torch.arange(1, self.num_heads + 1)
+ self.A_log = nn.Parameter(torch.log(A))
+ self.A_log._no_weight_decay = True
+ self.norm = RMSNormGated(
+ self.intermediate_size, eps=self.layer_norm_epsilon, norm_before_gate=False
+ )
+ self.D = nn.Parameter(torch.ones(self.num_heads))
+ self.D._no_weight_decay = True
+
+ self.out_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=config.use_bias)
+ self.use_bias = config.use_bias
+
+ if not is_fast_path_available:
+ logger.warning_once(
+ "The fast path is not available because one of "
+ "`(selective_state_update, causal_conv1d_fn, causal_conv1d_update)` is None. "
+ "Falling back to the naive implementation. "
+ "To install follow https://github.com/state-spaces/mamba/#installation and"
+ "https://github.com/Dao-AILab/causal-conv1d"
+ )
+
+ def cuda_kernels_forward(
+ self,
+ hidden_states: torch.Tensor,
+ cache_params: Optional[Mamba2Cache] = None,
+ cache_position: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ ):
+ # set up dimensions for reshapes later
+ batch_size, seq_len, _ = hidden_states.shape
+ groups_time_state_size = self.n_groups * self.ssm_state_size
+ d_to_remove = 2 * self.intermediate_size + 2 * self.n_groups * self.ssm_state_size + self.num_heads
+
+ # getting projected states from cache if it exists
+ if cache_params is not None and cache_params.seqlen_offset > 0:
+ in_projected_states = self.in_proj(hidden_states.squeeze(1)) # (B 2D)
+ d_mlp = (in_projected_states.shape[-1] - d_to_remove) // 2
+ split_projection_dim = [d_mlp, d_mlp, self.intermediate_size, self.conv_dim, self.num_heads]
+ _, _, gate, hidden_states_B_C, dt = torch.split(in_projected_states, split_projection_dim, dim=-1)
+
+ hidden_states_B_C = causal_conv1d_update(
+ hidden_states_B_C,
+ cache_params.conv_states[self.layer_idx],
+ self.conv1d.weight.squeeze(1),
+ self.conv1d.bias,
+ self.activation,
+ )
+
+ hidden_states, B, C = torch.split(
+ hidden_states_B_C,
+ [
+ self.intermediate_size,
+ groups_time_state_size,
+ groups_time_state_size,
+ ],
+ dim=-1,
+ )
+ A = -torch.exp(self.A_log.float()) # (nheads,)
+
+ A = A[:, None, ...][:, :, None].expand(-1, self.head_dim, self.ssm_state_size).to(dtype=torch.float32)
+ dt = dt[:, :, None].expand(-1, -1, self.head_dim)
+ dt_bias = self.dt_bias[:, None, ...].expand(-1, self.head_dim)
+ D = self.D[:, None, ...].expand(-1, self.head_dim)
+ B = B.view(batch_size, self.n_groups, B.shape[1] // self.n_groups)
+ C = C.view(batch_size, self.n_groups, C.shape[1] // self.n_groups)
+ hidden_states_reshaped = hidden_states.view(batch_size, self.num_heads, self.head_dim)
+
+ hidden_states = selective_state_update(
+ cache_params.ssm_states[self.layer_idx],
+ hidden_states_reshaped,
+ dt,
+ A,
+ B,
+ C,
+ D,
+ z=None,
+ dt_bias=dt_bias,
+ dt_softplus=True,
+ )
+ hidden_states = hidden_states.view(batch_size, self.num_heads * self.head_dim)
+ hidden_states = self.norm(hidden_states, gate)
+ out = self.out_proj(hidden_states)[:, None, ...]
+ # if no cache is found, calling the kernel
+ else:
+ if attention_mask is not None and attention_mask.shape[1] > 1 and attention_mask.shape[0] > 1:
+ # tune out hidden states for pad tokens, see https://github.com/state-spaces/mamba/issues/66
+ dtype = hidden_states.dtype
+ hidden_states = (hidden_states * attention_mask[:, :, None]).to(dtype)
+ # 1. Gated MLP's linear projection
+ projected_states = self.in_proj(hidden_states)
+ A = -torch.exp(self.A_log.float()) # (num_heads) or (intermediate_size, state_size)
+ dt_limit_kwargs = {} if self.time_step_limit == (0.0, float("inf")) else {"dt_limit": self.time_step_limit}
+
+ if self.training and cache_params is None:
+ out, ssm_state = mamba_split_conv1d_scan_combined(
+ projected_states,
+ self.conv1d.weight.squeeze(1),
+ self.conv1d.bias,
+ self.dt_bias,
+ A,
+ D=self.D,
+ chunk_size=self.chunk_size,
+ seq_idx=None, # was seq_idx
+ activation=self.activation,
+ rmsnorm_weight=self.norm.weight,
+ rmsnorm_eps=self.norm.eps,
+ outproj_weight=self.out_proj.weight,
+ outproj_bias=self.out_proj.bias,
+ headdim=self.head_dim,
+ ngroups=self.n_groups,
+ norm_before_gate=False,
+ return_final_states=True,
+ **dt_limit_kwargs,
+ )
+
+ else:
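+ # prefill / non-fused path: split the projection, run the causal conv, then the chunked scan kernel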
+ gate, hidden_states_B_C, time_step = torch.split(
+ projected_states,
+ [self.intermediate_size, self.conv_dim, self.num_heads],
+ dim=-1,
+ )
+
+ # 1D Convolution
+ if causal_conv1d_fn is None or self.activation not in ["silu", "swish"]:
+ hidden_states_B_C = self.act(
+ self.conv1d(hidden_states_B_C.transpose(1, 2)).transpose(1, 2)[:, :seq_len]
+ ) # (B, L, self.d_inner + 2 * ngroups * d_state)
+ else:
+ hidden_states_B_C = causal_conv1d_fn(
+ x=hidden_states_B_C.transpose(1, 2),
+ weight=self.conv1d.weight.squeeze(1),
+ bias=self.conv1d.bias,
+ activation=self.activation,
+ ).transpose(1, 2)[:, :seq_len]
+ hidden_states, B, C = torch.split(
+ hidden_states_B_C,
+ [self.intermediate_size, groups_time_state_size, groups_time_state_size],
+ dim=-1,
+ )
+ if attention_mask is not None and attention_mask.shape[1] > 1 and attention_mask.shape[0] > 1:
+ # tune out hidden states for pad tokens, see https://github.com/state-spaces/mamba/issues/66
+ dtype = hidden_states.dtype
+ hidden_states = (hidden_states * attention_mask[:, :, None]).to(dtype)
+ scan_output, ssm_state = mamba_chunk_scan_combined(
+ hidden_states.view(batch_size, seq_len, -1, self.head_dim),
+ time_step,
+ A,
+ B.view(batch_size, seq_len, self.n_groups, -1),
+ C.view(batch_size, seq_len, self.n_groups, -1),
+ chunk_size=self.chunk_size,
+ D=self.D,
+ z=None,
+ seq_idx=None,
+ return_final_states=True,
+ dt_bias=self.dt_bias,
+ dt_softplus=True,
+ **dt_limit_kwargs,
+ )
+ if ssm_state is not None and cache_params is not None:
+ cache_params.ssm_states[self.layer_idx].copy_(ssm_state)
+ scan_output = scan_output.view(batch_size, seq_len, -1)
+ # Multiply "gate" branch and apply extra normalization layer
+ scan_output = self.norm(scan_output, gate)
+ out = self.out_proj(scan_output)
+ return out
+
+ # fmt: off
+ def torch_forward(
+ self,
+ input_states,
+ cache_params: Optional[Mamba2Cache] = None,
+ cache_position: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None
+ ):
+ batch_size, seq_len, _ = input_states.shape
+ dtype = input_states.dtype
+ # Gated MLP's linear projection
+ projected_states = self.in_proj(input_states.squeeze(1))
+ d_mlp = (projected_states.shape[-1] - 2 * self.intermediate_size -
+ 2 * self.n_groups * self.ssm_state_size - self.num_heads) // 2
+ _, _, gate, hidden_states, dt = projected_states.split(
+ [d_mlp, d_mlp, self.intermediate_size, self.conv_dim, self.num_heads], dim=-1
+ )
+
+ # Convolution sequence transformation
+ if cache_params is not None:
+ ssm_state = cache_params.ssm_states[self.layer_idx].clone()
+ ssm_state = ssm_state.to(hidden_states.device)
+ if cache_params.seqlen_offset > 0:
+ # [batch, intermediate_size, conv_kernel_size]
+ conv_state = cache_params.conv_states[self.layer_idx]
+ conv_state = torch.roll(conv_state, shifts=-1, dims=-1)
+ # handle batched generation - states are copied through
+ conv_state[:, :, -1] = hidden_states[:, 0, :] if hidden_states.ndim == 3 else hidden_states
+ cache_params.conv_states[self.layer_idx].copy_(conv_state)
+ hidden_states = torch.sum(conv_state.to(projected_states.device) * self.conv1d.weight[:, 0, :], dim=-1)
+ if self.use_conv_bias:
+ hidden_states += self.conv1d.bias
+ # [batch, 1, intermediate_size] : decoding
+ hidden_states = self.act(hidden_states).to(dtype)[:, None, ...]
+ else:
+ hidden_states = hidden_states.transpose(1, 2)
+ conv_state = nn.functional.pad(
+ hidden_states,
+ (self.conv_kernel_size - hidden_states.shape[-1], 0)
+ )
+ cache_params.conv_states[self.layer_idx].copy_(conv_state)
+ # [batch, intermediate_size, seq_len]
+ hidden_states = self.act(self.conv1d(hidden_states).transpose(1, 2))[:, :seq_len, :]
+ if attention_mask is not None and attention_mask.shape[1] > 1 and attention_mask.shape[0] > 1:
+ dtype = hidden_states.dtype
+ # tune out hidden states for pad tokens, see https://github.com/state-spaces/mamba/issues/66
+ hidden_states = (hidden_states * attention_mask[:, :, None]).to(dtype)
+ else:
+ ssm_state = torch.zeros(
+ (batch_size, self.num_heads, self.head_dim, self.ssm_state_size),
+ device=hidden_states.device, dtype=dtype
+ )
+ hidden_states = self.act(self.conv1d(hidden_states.transpose(1, 2))[..., :seq_len].transpose(1, 2))
+ hidden_states, B, C = torch.split(
+ hidden_states,
+ [self.intermediate_size, self.n_groups * self.ssm_state_size, self.n_groups * self.ssm_state_size],
+ dim=-1
+ )
+ A = -torch.exp(self.A_log.float()) # [num_heads]
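+ # single-step recurrence (decoding): discretize A and B with dt, update the cached SSM state for one token, then read it out with C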
+ if cache_params is not None and cache_params.seqlen_offset > 0:
+ # Note: there is no need to pad parameter matrices here, as there is just one new token
+ # for batched generation
+ dt = dt[:, None, ...] if dt.ndim == 2 else dt[:, 0, :][:, None, ...]
+ dt = dt.transpose(1, 2).expand(batch_size, dt.shape[-1], self.head_dim)
+ # [num_heads] -> [num_heads, head_dim]
+ dt_bias = self.dt_bias[..., None].expand(self.dt_bias.shape[0], self.head_dim)
+
+ dt = torch.nn.functional.softplus(dt + dt_bias.to(dt.dtype))
+ dt = torch.clamp(dt, self.time_step_min)
+ A = A[..., None, None].expand(self.num_heads, self.head_dim, self.ssm_state_size).to(dtype=torch.float32)
+ # [bsz, num_heads, head_dim, state_size]
+ dA = torch.exp(dt[..., None] * A)
+
+ # Discretize B
+ # [bsz, n_groups * state_size] -> [bsz, n_groups, 1, state_size] ->
+ # -> [bsz, n_groups, group to head repetition factor, state_size] -> [bsz, num_heads, state_size]
+ B = B.reshape(batch_size, self.n_groups, -1)[..., None, :]
+ B = B.expand(batch_size, self.n_groups, self.num_heads // self.n_groups, B.shape[-1]).contiguous()
+ B = B.reshape(batch_size, -1, B.shape[-1])
+ # [bsz, num_heads, head_dim, state_size]
+ dB = dt[..., None] * B[..., None, :]
+
+ # Discretize x into dB
+ # [bsz, intermediate_size] -> [bsz, num_heads, head_dim]
+ hidden_states = hidden_states.reshape(batch_size, -1, self.head_dim)
+ dBx = dB * hidden_states[..., None]
+
+ # State calculation
+ cache_params.ssm_states[self.layer_idx].copy_(
+ cache_params.ssm_states[self.layer_idx] * dA + dBx
+ )
+
+ # Subsequent output
+ # [bsz, n_groups * state_size] -> [bsz, num_heads, state_size]
+ C = C.reshape(batch_size, self.n_groups, -1)[..., None, :]
+ C = C.expand(batch_size, self.n_groups, self.num_heads // self.n_groups, C.shape[-1]).contiguous()
+ C = C.reshape(batch_size, -1, C.shape[-1])
+ # [bsz, num_heads, head_dim]
+
+ ssm_states = cache_params.ssm_states[self.layer_idx].to(C.dtype) # Shape: [b, h, d, n]
+ # Reshape ssm_states to merge the first two dimensions
+ # Shape: [b*h, d, n]
+ ssm_states_reshaped = ssm_states.view(batch_size * self.num_heads, self.head_dim, self.ssm_state_size)
+ C_reshaped = C.view(batch_size * self.num_heads, self.ssm_state_size, 1) # Shape: [b*h, n, 1]
+ y = torch.bmm(ssm_states_reshaped, C_reshaped)
+ y = y.view(batch_size, self.num_heads, self.head_dim)
+
+ # D skip connection
+ # [num_heads] -> [num_heads, head_dim]
+ D = self.D[..., None].expand(self.D.shape[0], self.head_dim)
+ y = (y + hidden_states * D).to(y.dtype)
+
+ # [bsz, num_heads, head_dim] -> [bsz, 1, intermediate_size]
+ y = y.reshape(batch_size, -1)[:, None, ...]
+ else:
+ # begin ssd naive implementation without einsums
+ dt = nn.functional.softplus(dt + self.dt_bias)
+ dt = torch.clamp(dt, self.time_step_min)
+ hidden_states = hidden_states.reshape(batch_size, seq_len, -1, self.head_dim).float()
+ B = B.reshape(batch_size, seq_len, -1, self.ssm_state_size).float()
+ C = C.reshape(batch_size, seq_len, -1, self.ssm_state_size).float()
+ B = B.repeat(1, 1, self.num_heads // self.n_groups, 1)
+ C = C.repeat(1, 1, self.num_heads // self.n_groups, 1)
+ pad_size = (self.chunk_size - seq_len % self.chunk_size) % self.chunk_size
+
+ D_residual = self.D[..., None] * pad_tensor_by_size(hidden_states, pad_size)
+
+ # Discretize x and A
+ hidden_states = hidden_states * dt[..., None]
+ A = A.to(hidden_states.dtype) * dt
+
+ # Rearrange into blocks/chunks
+ hidden_states, A, B, C = [reshape_into_chunks(t, pad_size, self.chunk_size) for t in (hidden_states, A, B, C)]
+
+ # [bsz, -1, chunk_size, num_heads] -> [bsz, num_heads, -1, chunk_size]
+ A = A.permute(0, 3, 1, 2)
+ A_cumsum = torch.cumsum(A, dim=-1)
+
+ # 1. Compute the output for each intra-chunk (diagonal blocks)
+ # This is the analog of a causal mask
+ L = torch.exp(segment_sum(A))
+
+ # Contraction of C and B to get G (attention-weights like)
+ # shape: (b, c, l, s, h, n)
+ G_intermediate = C[:, :, :, None, :, :] * B[:, :, None, :, :, :]
+ G = G_intermediate.sum(dim=-1) # shape: (b, c, l, s, h)
+
+ # Compute M, equivalent to applying attention mask to weights
+ M_intermediate = G[..., None] * L.permute(0, 2, 3, 4, 1)[..., None]
+ M = M_intermediate.sum(dim=-1)
+
+ # Compute Y_diag (apply to values)
+ Y_diag = (M[..., None] * hidden_states[:, :, None]).sum(dim=3)
+
+ # 2. Compute the state for each intra-chunk
+ # (right term of low-rank factorization of off-diagonal blocks; B terms)
+ decay_states = torch.exp((A_cumsum[:, :, :, -1:] - A_cumsum))
+ B_decay = B * decay_states.permute(0, -2, -1, 1)[..., None]
+ states = (B_decay[..., None, :] * hidden_states[..., None]).sum(dim=2)
+
+ # 3. Compute the inter-chunk SSM recurrence; produces correct SSM states at chunk boundaries
+ # (middle term of factorization of off-diag blocks; A terms)
+ if cache_params is not None and cache_params.seqlen_offset > 0:
+ previous_states = cache_params.ssm_states[self.layer_idx][:, None, ...]
+ else:
+ previous_states = torch.zeros_like(states[:, :1])
+ states = torch.cat([previous_states, states], dim=1)
+ decay_chunk = torch.exp(segment_sum(nn.functional.pad(A_cumsum[:, :, :, -1], (1, 0))))
+ decay_chunk = decay_chunk.transpose(1, 3)
+ new_states = (decay_chunk[..., None, None] * states[:, :, None, ...]).sum(dim=1)
+ states, ssm_state = new_states[:, :-1], new_states[:, -1]
+
+ # 4. Compute state -> output conversion per chunk
+ # (left term of low-rank factorization of off-diagonal blocks; C terms)
+ state_decay_out = torch.exp(A_cumsum)
+ C_times_states = (C[..., None, :] * states[:, :, None, ...])
+ state_decay_out_permuted = state_decay_out.permute(0, 2, 3, 1)
+ Y_off = (C_times_states.sum(-1) * state_decay_out_permuted[..., None])
+
+ # Add output of intra-chunk and inter-chunk terms (diagonal and off-diagonal blocks)
+ y = Y_diag + Y_off
+ # [bsz, -1, self.chunk_size, num_heads, head_dim] -> [bsz, (padded) seq_len, num_heads, head_dim]
+ y = y.reshape(batch_size, -1, self.num_heads, self.head_dim)
+
+ y = y + D_residual
+ # Cutting off padded chunks
+ if pad_size > 0:
+ y = y[:, :seq_len, :, :]
+ y = y.reshape(batch_size, seq_len, -1)
+ if ssm_state is not None and cache_params is not None:
+ cache_params.ssm_states[self.layer_idx].copy_(ssm_state)
+
+ scan_output = self.norm(y, gate)
+ # end ssd naive
+
+ # 4. Final linear projection
+ contextualized_states = self.out_proj(scan_output.to(dtype)) # [batch, seq_len, hidden_size]
+ return contextualized_states
+ # fmt: on
+
+ def forward(
+ self,
+ hidden_states,
+ cache_params: Optional[Mamba2Cache] = None,
+ cache_position: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ ):
+ if is_fast_path_available and "cuda" in self.in_proj.weight.device.type:
+ return self.cuda_kernels_forward(hidden_states, cache_params, cache_position, attention_mask)
+ dtype = hidden_states.dtype
+ if attention_mask is not None and attention_mask.shape[1] > 1 and attention_mask.shape[0] > 1:
+ # tune out hidden states for pad tokens, see https://github.com/state-spaces/mamba/issues/66
+ hidden_states = (hidden_states * attention_mask[:, :, None]).to(dtype)
+
+ return self.torch_forward(hidden_states, cache_params, cache_position, attention_mask)
+
+
+class Mamba2Block(nn.Module):
+ def __init__(self, config, layer_idx):
+ super().__init__()
+ self.config = config
+ self.layer_idx = layer_idx
+ self.residual_in_fp32 = config.residual_in_fp32
+ self.norm = RMSNorm(config.hidden_size, eps=config.layer_norm_epsilon)
+ self.mixer = Mamba2Mixer(config, layer_idx=layer_idx)
+
+ def forward(
+ self,
+ hidden_states,
+ cache_params: Optional[Mamba2Cache] = None,
+ cache_position: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ ):
+ residual = hidden_states
+ hidden_states = self.norm(hidden_states.to(dtype=self.norm.weight.dtype))
+ if self.residual_in_fp32:
+ residual = residual.to(torch.float32)
+
+ hidden_states = self.mixer(
+ hidden_states,
+ cache_params=cache_params,
+ cache_position=cache_position,
+ attention_mask=attention_mask,
+ )
+ hidden_states = residual + hidden_states
+ return hidden_states
+
+
+class Mamba2PreTrainedModel(PreTrainedModel, GenerationMixin):
+ """
+ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
+ models.
+ """
+
+ config_class = Mamba2Config
+ base_model_prefix = "backbone"
+ _no_split_modules = ["Mamba2Block"]
+ supports_gradient_checkpointing = True
+ _is_stateful = True
+
+ def _init_weights(self, module):
+ """Initialize the weights."""
+ if isinstance(module, Mamba2Mixer):
+ module.A_log._no_weight_decay = True
+ module.D._no_weight_decay = True
+
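+ # sample dt log-uniformly in [time_step_min, time_step_max] and store its inverse softplus in dt_bias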
+ dt = torch.exp(
+ torch.rand(self.config.num_heads)
+ * (math.log(self.config.time_step_max) - math.log(self.config.time_step_min))
+ + math.log(self.config.time_step_min)
+ ).clamp(min=self.config.time_step_floor)
+
+ # Inverse of softplus: https://github.com/pytorch/pytorch/issues/72759
+ inv_dt = dt + torch.log(-torch.expm1(-dt))
+ with torch.no_grad():
+ module.dt_bias.copy_(inv_dt)
+ module.dt_bias._no_reinit = True
+
+ if isinstance(module, nn.Linear):
+ if module.bias is not None:
+ if not getattr(module.bias, "_no_reinit", False):
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, std=self.config.initializer_range)
+
+ if self.config.rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["out_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ nn.init.kaiming_uniform_(p, a=math.sqrt(5))
+ with torch.no_grad():
+ p /= math.sqrt(self.config.num_hidden_layers)
+
+
+@dataclass
+# Copied from transformers.models.mamba.modeling_mamba.MambaOutput with MAMBA->MAMBA2,Mamba->Mamba2
+class Mamba2Output(ModelOutput):
+ """
+ Class for the MAMBA2 model outputs.
+
+ Args:
+ last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
+ Sequence of hidden-states at the output of the last layer of the model.
+ cache_params (`Mamba2Cache`):
+ The state of the model at the last time step. Can be used in a forward method with the next `input_ids` to
+ avoid providing the old `input_ids`.
+
+ Includes both the state-space model state matrices after the selective scan and the convolutional states.
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*,
+ returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
+
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
+ """
+
+ last_hidden_state: Optional[torch.FloatTensor] = None
+ cache_params: Optional[Mamba2Cache] = None
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
+
+
+@dataclass
+# Copied from transformers.models.mamba.modeling_mamba.MambaCausalLMOutput with Mamba->Mamba2
+class Mamba2CausalLMOutput(ModelOutput):
+ """
+ Base class for causal language model (or autoregressive) outputs.
+
+ Args:
+ loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
+ Language modeling loss (for next-token prediction).
+ logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
+ Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
+ cache_params (`Mamba2Cache`):
+ The state of the model at the last time step. Can be used in a forward method with the next `input_ids` to
+ avoid providing the old `input_ids`.
+
+ Includes both the state-space model state matrices after the selective scan and the convolutional states.
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*,
+ returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
+
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
+ """
+
+ loss: Optional[torch.FloatTensor] = None
+ logits: Optional[torch.FloatTensor] = None
+ cache_params: Optional[Mamba2Cache] = None
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
+
+
+class Mamba2Model(Mamba2PreTrainedModel):
+ def __init__(self, config):
+ super().__init__(config)
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
+ self.layers = nn.ModuleList([Mamba2Block(config, layer_idx=idx) for idx in range(config.num_hidden_layers)])
+
+ self.gradient_checkpointing = False
+ self.norm_f = RMSNorm(config.hidden_size, eps=config.layer_norm_epsilon)
+ # Initialize weights and apply final processing
+ self._register_load_state_dict_pre_hook(self.load_hook)
+ self.post_init()
+
+ def load_hook(self, state_dict, prefix, *args):
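+ # remap legacy `embedding.*` keys to `embeddings.*` when loading older checkpoints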
+ for k in state_dict:
+ if "embedding." in k:
+ state_dict[k.replace("embedding.", "embeddings.")] = state_dict.pop(k)
+ break
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, new_embeddings):
+ self.embeddings = new_embeddings
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ inputs_embeds: Optional[torch.LongTensor] = None,
+ cache_params: Optional[Mamba2Cache] = None,
+ use_cache: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ cache_position: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ **kwargs,
+ ) -> Union[Tuple, Mamba2Output]:
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ if (input_ids is None) ^ (inputs_embeds is not None): # ^ is python for xor
+ raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ use_cache = False
+
+ if use_cache:
+ if cache_params is None:
+ cache_params = Mamba2Cache(
+ self.config, inputs_embeds.size(0), device=inputs_embeds.device, dtype=inputs_embeds.dtype
+ )
+ cache_position = torch.arange(0, self.config.conv_kernel, device=inputs_embeds.device)
+ elif cache_position is None:
+ # cases where we do a manual forward instead of using `model.generate`, which would initialize
+ # `cache_position` and make sure it is not None; raise an error here instead of trying some
+ # hack to conjecture the current cache position
+ raise ValueError(
+ "You have to specify the `cache_position` manually when `use_cache=True` and `cache_params` is passed, "
+ "you don't have to pass a `cache_params` if you are in prefilling stage because in that case it will "
+ "be initialized for you automatically"
+ )
+ else:
+ cache_params = None
+
+ hidden_states = inputs_embeds
+ all_hidden_states = () if output_hidden_states else None
+ for mixer_block in self.layers:
+ if self.gradient_checkpointing and self.training:
+ hidden_states = self._gradient_checkpointing_func(
+ mixer_block.__call__,
+ hidden_states,
+ cache_params,
+ cache_position,
+ attention_mask,
+ )
+ else:
+ hidden_states = mixer_block(
+ hidden_states,
+ cache_params=cache_params,
+ cache_position=cache_position,
+ attention_mask=attention_mask,
+ )
+
+ if output_hidden_states:
+ all_hidden_states = all_hidden_states + (hidden_states,)
+
+ if use_cache:
+ cache_params.seqlen_offset += inputs_embeds.shape[1]
+
+ hidden_states = self.norm_f(hidden_states)
+
+ if output_hidden_states:
+ all_hidden_states = all_hidden_states + (hidden_states,)
+
+ if not return_dict:
+ return tuple(v for v in [hidden_states, cache_params, all_hidden_states] if v is not None)
+
+ return Mamba2Output(
+ last_hidden_state=hidden_states,
+ cache_params=cache_params if use_cache else None,
+ hidden_states=all_hidden_states,
+ )
+
+
+class Mamba2ForCausalLM(Mamba2PreTrainedModel):
+ _tied_weights_keys = []
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.backbone = Mamba2Model(config)
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def get_input_embeddings(self):
+ return self.backbone.get_input_embeddings()
+
+ def set_input_embeddings(self, new_embeddings):
+ return self.backbone.set_input_embeddings(new_embeddings)
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids,
+ inputs_embeds=None,
+ use_cache=None,
+ cache_params: Optional[Mamba2Cache] = None,
+ cache_position: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs,
+ ):
+ if inputs_embeds is not None:
+ past_len = inputs_embeds.shape[1] + input_ids.shape[1]
+ else:
+ past_len = input_ids.shape[1]
+ if use_cache:
+ # `cache_position` should have been initialized in `generate`
+ if cache_position is None:
+ raise ValueError(
+ "`cache_position` should not be None as it should have been initialized in "
+ "`model.generate`, you are responsible for passing in a valid `cache_position` if "
+ "you are calling `prepare_inputs_for_generation` directly with `use_cache=True`"
+ )
+ # how do we detect that we are in decoding without cache?
+ if cache_position[0] > 0:
+ input_ids = input_ids[:, -1][..., None]
+ attention_mask = attention_mask[:, -1][..., None]
+ else:
+ # we initialize the `cache_position` to full size of `conv_states` at prefill stage
+ # considering padding will be applied when input length is shorter, and truncation
+ # will be applied when it is longer, so it will be equivalent to always have it match
+ # the length of `cache_params.conv_states`, which is `config.conv_kernel`
+ cache_position = torch.arange(0, past_len, device=input_ids.device)
+ # if the cache is not used, we also do have to extend the attention mask here
+ # TODO there is likely a cleverer way to do this
+ extended_mask = torch.ones(
+ attention_mask.size(0), past_len - attention_mask.shape[1], device=attention_mask.device
+ )
+ attention_mask = torch.cat([attention_mask, extended_mask], dim=1)
+ cache_params = None
+
+ if attention_mask.shape[1] < past_len:
+ # we have to manually update the attention mask if
+ # we are decoding without a cache
+ # and don't have position_ids here
+ # TODO but we should be able to use cache_position though at a later time
+ extended_mask = torch.ones(
+ attention_mask.size(0), past_len - attention_mask.shape[1], device=attention_mask.device
+ )
+ attention_mask = torch.cat([attention_mask, extended_mask], dim=1)
+ if inputs_embeds is not None and cache_params is None:
+ model_inputs = {"inputs_embeds": inputs_embeds}
+ else:
+ model_inputs = {"input_ids": input_ids}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'attention_mask': attention_mask,
+ 'cache_params': cache_params,
+ 'use_cache': use_cache,
+ 'cache_position': cache_position,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ cache_params: Optional[Mamba2Cache] = None,
+ labels: Optional[torch.LongTensor] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ use_cache: Optional[bool] = None,
+ cache_position: Optional[torch.Tensor] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ num_logits_to_keep: Optional[int] = 0,
+ **kwargs, # for now we need this for generation
+ ) -> Union[Tuple, Mamba2CausalLMOutput]:
+ r"""
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+ Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
+ `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100`
+ are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]`
+ """
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.backbone(
+ input_ids,
+ cache_params=cache_params,
+ inputs_embeds=inputs_embeds,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ use_cache=use_cache,
+ cache_position=cache_position,
+ attention_mask=attention_mask,
+ )
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
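+ # skip materializing logits when the fused linear + cross-entropy loss is used during training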
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return Mamba2CausalLMOutput(
+ loss=loss,
+ logits=logits,
+ cache_params=outputs.cache_params,
+ hidden_states=outputs.hidden_states,
+ )
diff --git a/fla/models/retnet/__init__.py b/fla/models/retnet/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..ad7d9e9da930819a2a6728e3e189090651b82a2e
--- /dev/null
+++ b/fla/models/retnet/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.retnet.configuration_retnet import RetNetConfig
+from fla.models.retnet.modeling_retnet import RetNetForCausalLM, RetNetModel
+
+AutoConfig.register(RetNetConfig.model_type, RetNetConfig)
+AutoModel.register(RetNetConfig, RetNetModel)
+AutoModelForCausalLM.register(RetNetConfig, RetNetForCausalLM)
+
+
+__all__ = ['RetNetConfig', 'RetNetForCausalLM', 'RetNetModel']
diff --git a/fla/models/retnet/configuration_retnet.py b/fla/models/retnet/configuration_retnet.py
new file mode 100644
index 0000000000000000000000000000000000000000..535841629c557f226bf240a5e0bd6dc1493e317f
--- /dev/null
+++ b/fla/models/retnet/configuration_retnet.py
@@ -0,0 +1,87 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+from typing import Dict, Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class RetNetConfig(PretrainedConfig):
+
+ model_type = 'retnet'
+ keys_to_ignore_at_inference = ['past_key_values']
+
+ def __init__(
+ self,
+ attn_mode: str = "chunk",
+ hidden_size: int = 2048,
+ expand_k: int = 1,
+ expand_v: int = 2,
+ hidden_ratio: Optional[int] = 2,
+ intermediate_size: Optional[int] = None,
+ num_hidden_layers: int = 24,
+ num_heads: int = 8,
+ num_kv_heads: Optional[int] = None,
+ feature_map: Optional[str] = None,
+ hidden_act: str = "swish",
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+ use_output_gate: bool = True,
+ max_position_embeddings: int = 2048,
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-6,
+ attn: Optional[Dict] = None,
+ use_cache: bool = True,
+ pad_token_id: int = None,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ tie_word_embeddings: bool = False,
+ initializer_range: float = 0.02,
+ fuse_norm: bool = True,
+ fuse_cross_entropy: bool = True,
+ vocab_size: int = 32000,
+ **kwargs
+ ) -> RetNetConfig:
+ self.attn_mode = attn_mode
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.num_hidden_layers = num_hidden_layers
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads
+ self.feature_map = feature_map
+ self.hidden_act = hidden_act
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.use_output_gate = use_output_gate
+ self.max_position_embeddings = max_position_embeddings
+ self.elementwise_affine = elementwise_affine
+ self.norm_eps = norm_eps
+ self.attn = attn
+ self.use_cache = use_cache
+ self.initializer_range = initializer_range
+ self.fuse_norm = fuse_norm
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.vocab_size = vocab_size
+
+ if attn is not None:
+ if not isinstance(attn, Dict):
+ raise ValueError("attn must be a dictionary")
+ if 'layers' not in attn:
+ raise ValueError("Layer indices must be provided to initialize hybrid attention layers")
+ if 'num_heads' not in attn:
+ raise ValueError("Number of heads must be provided to initialize hybrid attention layers")
+ attn['num_kv_heads'] = attn.get('num_kv_heads', attn['num_heads'])
+ attn['window_size'] = attn.get('window_size', None)
+
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/retnet/modeling_retnet.py b/fla/models/retnet/modeling_retnet.py
new file mode 100644
index 0000000000000000000000000000000000000000..f70c737a1504fd5d3fb37d394ae93b39a3de6bab
--- /dev/null
+++ b/fla/models/retnet/modeling_retnet.py
@@ -0,0 +1,426 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+import torch.utils.checkpoint
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_outputs import (BaseModelOutputWithPast,
+ CausalLMOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import logging
+
+from fla.layers.attn import Attention
+from fla.layers.multiscale_retention import MultiScaleRetention
+from fla.models.retnet.configuration_retnet import RetNetConfig
+from fla.models.utils import Cache
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.activations import swiglu_linear
+
+logger = logging.get_logger(__name__)
+
+
+class RetNetMLP(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = 'swish'
+ ) -> RetNetMLP:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ # the final number of params is `hidden_ratio * hidden_size^2`
+ # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio`
+ if hidden_ratio is None:
+ hidden_ratio = 4
+ if intermediate_size is None:
+ intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
+ intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256)
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False)
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[hidden_act]
+
+ def forward(self, x):
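+ # a single projection yields both the gate and the value; swiglu_linear fuses the SwiGLU activation with the down projection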
+ y = self.gate_proj(x)
+ gate, y = y.chunk(2, -1)
+ return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias)
+
+
+class RetNetBlock(nn.Module):
+ def __init__(self, config: RetNetConfig, layer_idx: int):
+ super().__init__()
+ self.hidden_size = config.hidden_size
+
+ self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ if config.attn is not None and layer_idx in config.attn['layers']:
+ self.attn = Attention(
+ hidden_size=config.hidden_size,
+ num_heads=config.attn['num_heads'],
+ num_kv_heads=config.attn['num_kv_heads'],
+ window_size=config.attn['window_size'],
+ max_position_embeddings=config.max_position_embeddings,
+ layer_idx=layer_idx
+ )
+ else:
+ self.attn = MultiScaleRetention(
+ mode=config.attn_mode,
+ hidden_size=config.hidden_size,
+ expand_k=config.expand_k,
+ expand_v=config.expand_v,
+ num_heads=config.num_heads,
+ num_kv_heads=config.num_kv_heads,
+ feature_map=config.feature_map,
+ use_output_gate=config.use_output_gate,
+ gate_fn=config.hidden_act,
+ elementwise_affine=config.elementwise_affine,
+ norm_eps=config.norm_eps,
+ fuse_norm=config.fuse_norm,
+ layer_idx=layer_idx
+ )
+ self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.mlp = RetNetMLP(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ intermediate_size=config.intermediate_size,
+ hidden_act=config.hidden_act
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs,
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+
+ residual = hidden_states
+
+ hidden_states = self.attn_norm(hidden_states)
+ hidden_states, attentions, past_key_values = self.attn(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+ hidden_states, residual = self.mlp_norm(hidden_states, residual, True)
+ hidden_states = self.mlp(hidden_states)
+ hidden_states = residual + hidden_states
+
+ outputs = (hidden_states, attentions, past_key_values)
+
+ return outputs
+
+
+class RetNetPreTrainedModel(PreTrainedModel):
+
+ config_class = RetNetConfig
+ supports_gradient_checkpointing = True
+ _no_split_modules = ['RetNetBlock']
+
+ def __init__(self, *inputs, **kwargs):
+ super().__init__(*inputs, **kwargs)
+
+ def _init_weights(
+ self,
+ module: nn.Module,
+ rescale_prenorm_residual: bool = True,
+ num_residuals_per_layer: int = 2,
+ ):
+ if isinstance(module, (nn.Linear, nn.Conv1d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ if rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["o_proj.weight", "down_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ with torch.no_grad():
+ p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers)
+
+
+class RetNetModel(RetNetPreTrainedModel):
+
+ def __init__(self, config: RetNetConfig):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+ self.layers = nn.ModuleList(
+ [RetNetBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
+ )
+ self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps)
+
+ self.gradient_checkpointing = False
+
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, value):
+ self.embeddings = value
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None, # noqa
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
+ if output_attentions:
+ warnings.warn(
+ "`RetNetModel` does not support output attention weights now, so `output_attentions` is set to `False`."
+ )
+ output_attentions = False
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ if input_ids is None and inputs_embeds is None:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+ hidden_states = inputs_embeds
+
+ if use_cache and not isinstance(past_key_values, Cache):
+ past_key_values = Cache.from_legacy_cache(past_key_values)
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ logger.warning_once("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
+ use_cache = False
+
+ all_hidden_states = () if output_hidden_states else None
+ all_attns = () if output_attentions else None
+ for layer in self.layers:
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if self.gradient_checkpointing and self.training:
+ hidden_states, attentions, past_key_values = self._gradient_checkpointing_func(
+ layer.__call__,
+ hidden_states,
+ attention_mask,
+ past_key_values,
+ use_cache,
+ output_attentions
+ )
+ else:
+ hidden_states, attentions, past_key_values = layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+
+ if output_attentions:
+ all_attns += (attentions,)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if not return_dict:
+ return tuple(i for i in [hidden_states, past_key_values, all_hidden_states, all_attns] if i is not None)
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=past_key_values,
+ hidden_states=all_hidden_states,
+ attentions=all_attns
+ )
+
+
+class RetNetForCausalLM(RetNetPreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = RetNetModel(config)
+ self.vocab_size = config.vocab_size
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.embeddings = value
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ def generate(self, *args, **kwargs):
+ try:
+ return super().generate(*args, **kwargs)
+ except AttributeError as exception:
+ # Expected exception: "AttributeError: '(object name)' object has no attribute 'past_key_values'"
+ if 'past_key_values' in str(exception):
+ raise AttributeError(
+ f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, "
+ f"which is not supported for {self.__class__.__name__}. "
+ f"Try another generation strategy instead. "
+ f"For the available generation strategies, check this doc: "
+ f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies"
+ )
+ else:
+ raise exception
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids: torch.LongTensor = None,
+ past_key_values: Optional[torch.Tensor] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ use_cache: Optional[bool] = True,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs
+ ):
+ # only keep the last token of `input_ids` if `past_key_values` is passed along.
+ if past_key_values is not None:
+ input_ids = input_ids[:, -1:]
+
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {'inputs_embeds': inputs_embeds}
+ else:
+ # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+ # recompiles graphs as the stride of the inputs is a guard.
+ # Ref: https://github.com/huggingface/transformers/pull/29114
+ # TODO: use `next_tokens` directly instead.
+ model_inputs = {'input_ids': input_ids.contiguous()}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'past_key_values': past_key_values,
+ 'use_cache': use_cache,
+ 'attention_mask': attention_mask,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ inputs_embeds=inputs_embeds,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict
+ )
+
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
diff --git a/fla/models/rwkv6/__init__.py b/fla/models/rwkv6/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..942c6dc203bf6c867ffd5111e7f2ae1e7c060386
--- /dev/null
+++ b/fla/models/rwkv6/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.rwkv6.configuration_rwkv6 import RWKV6Config
+from fla.models.rwkv6.modeling_rwkv6 import RWKV6ForCausalLM, RWKV6Model
+
+AutoConfig.register(RWKV6Config.model_type, RWKV6Config)
+AutoModel.register(RWKV6Config, RWKV6Model)
+AutoModelForCausalLM.register(RWKV6Config, RWKV6ForCausalLM)
+
+
+__all__ = ['RWKV6Config', 'RWKV6ForCausalLM', 'RWKV6Model']
diff --git a/fla/models/rwkv6/configuration_rwkv6.py b/fla/models/rwkv6/configuration_rwkv6.py
new file mode 100644
index 0000000000000000000000000000000000000000..6e56614bf908dee1b34b3864629b58a5d10295c7
--- /dev/null
+++ b/fla/models/rwkv6/configuration_rwkv6.py
@@ -0,0 +1,80 @@
+# -*- coding: utf-8 -*-
+
+from typing import Dict, Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class RWKV6Config(PretrainedConfig):
+
+ model_type = 'rwkv6'
+ keys_to_ignore_at_inference = ['past_key_values']
+
+ def __init__(
+ self,
+ attn_mode: str = "chunk",
+ hidden_size: int = 2048,
+ expand_k: float = 0.5,
+ expand_v: int = 1,
+ hidden_ratio: Optional[float] = 3.5,
+ intermediate_size: Optional[int] = None,
+ num_hidden_layers: int = 24,
+ num_heads: int = 4,
+ proj_low_rank_dim: int = 32,
+ gate_low_rank_dim: int = 64,
+ hidden_act: str = "sqrelu",
+ max_position_embeddings: int = 2048,
+ norm_first: bool = True,
+ norm_bias: bool = True,
+ norm_eps: float = 1e-5,
+ attn: Optional[Dict] = None,
+ use_cache: bool = True,
+ pad_token_id: int = None,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ tie_word_embeddings: bool = False,
+ initializer_range: float = 0.02,
+ fuse_norm: bool = True,
+ fuse_cross_entropy: bool = True,
+ vocab_size: int = 32000,
+ **kwargs
+ ):
+ self.attn_mode = attn_mode
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.norm_first = norm_first
+ self.num_hidden_layers = num_hidden_layers
+ self.num_heads = num_heads
+ self.proj_low_rank_dim = proj_low_rank_dim
+ self.gate_low_rank_dim = gate_low_rank_dim
+ self.hidden_act = hidden_act
+ self.max_position_embeddings = max_position_embeddings
+ self.norm_bias = norm_bias
+ self.norm_eps = norm_eps
+ self.attn = attn
+ self.use_cache = use_cache
+ self.initializer_range = initializer_range
+ self.fuse_norm = fuse_norm
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.vocab_size = vocab_size
+
+ if attn is not None:
+ if not isinstance(attn, Dict):
+ raise ValueError("attn must be a dictionary")
+ if 'layers' not in attn:
+ raise ValueError("Layer indices must be provided to initialize hybrid attention layers")
+ if 'num_heads' not in attn:
+ raise ValueError("Number of heads must be provided to initialize hybrid attention layers")
+ attn['num_kv_heads'] = attn.get('num_kv_heads', attn['num_heads'])
+ attn['window_size'] = attn.get('window_size', None)
+
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/rwkv6/modeling_rwkv6.py b/fla/models/rwkv6/modeling_rwkv6.py
new file mode 100644
index 0000000000000000000000000000000000000000..a5b68d2a941d1ecdc3575f2356662b3f7c33ce75
--- /dev/null
+++ b/fla/models/rwkv6/modeling_rwkv6.py
@@ -0,0 +1,442 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+import warnings
+from typing import Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+import torch.utils.checkpoint
+from transformers.generation import GenerationMixin
+from transformers.modeling_outputs import (BaseModelOutputWithPast,
+ CausalLMOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import logging
+
+from fla.layers.attn import Attention
+from fla.layers.rwkv6 import LerpLinear, RWKV6Attention
+from fla.models.rwkv6.configuration_rwkv6 import RWKV6Config
+from fla.models.utils import Cache
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ LayerNorm)
+from fla.modules.activations import ACT2FN
+
+logger = logging.get_logger(__name__)
+
+
+class RWKV6FeedForward(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = 'sqrelu',
+ layer_idx: int = None
+ ) -> RWKV6FeedForward:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ if hidden_ratio is None:
+ hidden_ratio = 3.5
+ if intermediate_size is None:
+ intermediate_size = int(hidden_size * hidden_ratio)
+ intermediate_size = 32 * ((intermediate_size + 32 - 1) // 32)
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+
+ self.time_shift = nn.ZeroPad2d((0, 0, 1, -1))
+
+ self.key = LerpLinear(hidden_size, intermediate_size)
+ self.value = nn.Linear(intermediate_size, hidden_size, bias=False)
+ self.receptance = LerpLinear(hidden_size, hidden_size)
+ self.act_fn = ACT2FN[hidden_act]
+
+ self.layer_idx = layer_idx
+
+ def forward(
+ self,
+ x: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ state: Optional[Cache] = None
+ ) -> torch.Tensor:
+ if attention_mask is not None:
+ x = x.mul_(attention_mask[:, -x.shape[-2]:, None])
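+ # RWKV token shift: mix each position with the previous timestep; for single-token decoding the previous hidden state comes from the cached ffn_state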
+ if x.shape[1] == 1 and state is not None:
+ shifted = state[self.layer_idx]['ffn_state'].unsqueeze(1)
+ else:
+ shifted = self.time_shift(x)
+ if state is not None and state[self.layer_idx]['ffn_state'] is not None:
+ shifted[:, 0] = state[self.layer_idx]['ffn_state'][-1]
+ delta = shifted - x
+ key = self.act_fn(self.key(x, delta))
+ value = self.value(key)
+ receptance = self.receptance(x, delta)
+
+ if state is not None:
+ # no need to update the offset twice
+ state.update(ffn_state=x[:, -1], layer_idx=self.layer_idx, offset=0)
+ return receptance.sigmoid() * value, state
+
+
+class RWKV6Block(nn.Module):
+ def __init__(self, config: RWKV6Config, layer_idx: int):
+ super().__init__()
+ self.hidden_size = config.hidden_size
+
+ self.config = config
+ self.layer_idx = layer_idx
+
+ if config.norm_first and layer_idx == 0:
+ self.pre_norm = LayerNorm(hidden_size=config.hidden_size, bias=config.norm_bias, eps=config.norm_eps)
+ self.attn_norm = LayerNorm(hidden_size=config.hidden_size, bias=config.norm_bias, eps=config.norm_eps)
+ if config.attn is not None and layer_idx in config.attn['layers']:
+ self.attn = Attention(
+ hidden_size=config.hidden_size,
+ num_heads=config.attn['num_heads'],
+ num_kv_heads=config.attn['num_kv_heads'],
+ window_size=config.attn['window_size'],
+ max_position_embeddings=config.max_position_embeddings,
+ layer_idx=layer_idx
+ )
+ else:
+ self.attn = RWKV6Attention(
+ mode=config.attn_mode,
+ hidden_size=config.hidden_size,
+ expand_k=config.expand_k,
+ expand_v=config.expand_v,
+ num_heads=config.num_heads,
+ proj_low_rank_dim=config.proj_low_rank_dim,
+ gate_low_rank_dim=config.gate_low_rank_dim,
+ norm_eps=config.norm_eps,
+ fuse_norm=config.fuse_norm,
+ layer_idx=layer_idx
+ )
+ self.ffn_norm = LayerNorm(hidden_size=config.hidden_size, bias=config.norm_bias, eps=config.norm_eps)
+ self.ffn = RWKV6FeedForward(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ intermediate_size=config.intermediate_size,
+ hidden_act=config.hidden_act,
+ layer_idx=layer_idx
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs,
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
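+ # only the first block defines `pre_norm` (when norm_first=True); later blocks take the incoming hidden states as the residual directly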
+ residual = self.pre_norm(hidden_states) if hasattr(self, 'pre_norm') else hidden_states
+ hidden_states = self.attn_norm(residual)
+ hidden_states, attentions, past_key_values = self.attn(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+ hidden_states, residual = self.ffn_norm(hidden_states, residual, True)
+ hidden_states, past_key_values = self.ffn(hidden_states, attention_mask, past_key_values)
+ hidden_states = residual + hidden_states
+
+ outputs = (hidden_states, attentions, past_key_values)
+
+ return outputs
+
+
+class RWKV6PreTrainedModel(PreTrainedModel):
+
+ config_class = RWKV6Config
+ supports_gradient_checkpointing = True
+ _no_split_modules = ['RWKV6Block']
+
+ def __init__(self, *inputs, **kwargs):
+ super().__init__(*inputs, **kwargs)
+
+ def _init_weights(
+ self,
+ module: nn.Module,
+ rescale_prenorm_residual: bool = True,
+ num_residuals_per_layer: int = 2,
+ ):
+ if isinstance(module, (nn.Linear, nn.Conv1d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Parameter):
+ nn.init.normal_(module, mean=0.0, std=self.config.initializer_range)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ if rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["o_proj.weight", "down_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ with torch.no_grad():
+ p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers)
+
+
+class RWKV6Model(RWKV6PreTrainedModel):
+
+ def __init__(self, config: RWKV6Config):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+ self.layers = nn.ModuleList([RWKV6Block(config, layer_idx) for layer_idx in range(config.num_hidden_layers)])
+ self.norm = LayerNorm(config.hidden_size, bias=config.norm_bias, eps=config.norm_eps)
+
+ self.gradient_checkpointing = False
+
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, value):
+ self.embeddings = value
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None, # noqa
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
+ if output_attentions:
+            warnings.warn("`RWKV6Model` does not support `output_attentions` for now; setting it to `False`.")
+ output_attentions = False
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ if input_ids is None and inputs_embeds is None:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+ hidden_states = inputs_embeds
+
+ if use_cache and not isinstance(past_key_values, Cache):
+ past_key_values = Cache.from_legacy_cache(past_key_values)
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ logger.warning_once("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
+ use_cache = False
+
+ all_hidden_states = () if output_hidden_states else None
+ all_attns = () if output_attentions else None
+ for layer in self.layers:
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if self.gradient_checkpointing and self.training:
+ hidden_states, attentions, past_key_values = self._gradient_checkpointing_func(
+ layer.__call__,
+ hidden_states,
+ attention_mask,
+ past_key_values,
+ use_cache,
+ output_attentions
+ )
+ else:
+ hidden_states, attentions, past_key_values = layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+
+ if output_attentions:
+ all_attns += (attentions,)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if not return_dict:
+ return tuple(i for i in [hidden_states, past_key_values, all_hidden_states, all_attns] if i is not None)
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=past_key_values,
+ hidden_states=all_hidden_states,
+ attentions=all_attns
+ )
+
+
+class RWKV6ForCausalLM(RWKV6PreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = RWKV6Model(config)
+ self.vocab_size = config.vocab_size
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.embeddings = value
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ def generate(self, *args, **kwargs):
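+        # re-raise cache-related AttributeErrors with a clearer message: decoding strategies that
+        # manipulate `past_key_values` are not supported for this model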
+ try:
+ return super().generate(*args, **kwargs)
+ except AttributeError as exception:
+ if 'past_key_values' in str(exception):
+ raise AttributeError(
+ f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, "
+ f"which is not supported for {self.__class__.__name__}. "
+ f"Try another generation strategy instead. "
+ f"For the available generation strategies, check this doc: "
+ f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies"
+ )
+ else:
+ raise exception
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids: torch.LongTensor = None,
+ past_key_values: Optional[Cache] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ use_cache: bool = True,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs
+ ):
+ # only last token for `inputs_ids` if the `past_key_values` is passed along.
+ if past_key_values is not None:
+ input_ids = input_ids[:, -1:]
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {'inputs_embeds': inputs_embeds}
+ else:
+ # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+ # recompiles graphs as the stride of the inputs is a guard.
+ # Ref: https://github.com/huggingface/transformers/pull/29114
+ # TODO: use `next_tokens` directly instead.
+ model_inputs = {'input_ids': input_ids.contiguous()}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'past_key_values': past_key_values,
+ 'use_cache': use_cache,
+ 'attention_mask': attention_mask,
+ 'num_logits_to_keep': num_logits_to_keep,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ inputs_embeds=inputs_embeds,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict
+ )
+
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
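+        # `num_logits_to_keep == 0` keeps logits for every position, since `hidden_states[:, -0:]` slices the
+        # full sequence; a positive value restricts the head to the last `num_logits_to_keep` positions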
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
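+            # shift labels one step to the left so position t predicts token t+1; the final position is
+            # padded with `ignore_index` and thus excluded from the loss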
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
diff --git a/fla/models/samba/__init__.py b/fla/models/samba/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..244913e776944de23878781f4be7bd037fac89ab
--- /dev/null
+++ b/fla/models/samba/__init__.py
@@ -0,0 +1,14 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.samba.configuration_samba import SambaConfig
+from fla.models.samba.modeling_samba import (SambaBlock, SambaForCausalLM,
+ SambaModel)
+
+AutoConfig.register(SambaConfig.model_type, SambaConfig, True)
+AutoModel.register(SambaConfig, SambaModel, True)
+AutoModelForCausalLM.register(SambaConfig, SambaForCausalLM, True)
+
+
+__all__ = ['SambaConfig', 'SambaForCausalLM', 'SambaModel', 'SambaBlock']
diff --git a/fla/models/samba/configuration_samba.py b/fla/models/samba/configuration_samba.py
new file mode 100644
index 0000000000000000000000000000000000000000..fd7b008ef73f780c4487517c1883056d4d58b525
--- /dev/null
+++ b/fla/models/samba/configuration_samba.py
@@ -0,0 +1,87 @@
+# -*- coding: utf-8 -*-
+
+import math
+from typing import Dict, Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class SambaConfig(PretrainedConfig):
+
+ model_type = "samba"
+
+ def __init__(
+ self,
+ vocab_size: int = 32000,
+ hidden_size: int = 2304,
+ state_size: int = 16,
+ num_hidden_layers: int = 18,
+        norm_eps: float = 1e-5,
+ pad_token_id: int = 0,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ expand: int = 2,
+ conv_kernel: int = 4,
+ use_bias: bool = False,
+ use_conv_bias: bool = True,
+ hidden_act: str = "silu",
+        initializer_range: float = 0.02,
+ residual_in_fp32: bool = False,
+ time_step_rank: str = "auto",
+ time_step_scale: float = 1.0,
+ time_step_min: float = 0.001,
+ time_step_max: float = 0.1,
+ time_step_init_scheme: str = "random",
+ time_step_floor: float = 1e-4,
+ max_position_embeddings: int = 2048,
+ attn: Optional[Dict] = {
+ 'layers': (1, 3, 5, 7, 9, 11, 13, 15, 17),
+ 'num_heads': 18,
+ 'num_kv_heads': 18,
+ 'window_size': 2048
+ },
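+        # layers listed in `attn['layers']` swap the Mamba mixer for full attention, giving the hybrid Samba layout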
+ hidden_ratio: Optional[int] = 4,
+ rescale_prenorm_residual: bool = False,
+ use_cache: bool = True,
+ fuse_norm: bool = True,
+ fuse_cross_entropy: bool = True,
+ tie_word_embeddings: bool = False,
+ **kwargs,
+ ):
+ self.vocab_size = vocab_size
+ self.hidden_size = hidden_size
+ self.state_size = state_size
+ self.num_hidden_layers = num_hidden_layers
+ self.norm_eps = norm_eps
+ self.conv_kernel = conv_kernel
+ self.expand = expand
+ self.intermediate_size = int(expand * self.hidden_size)
+ self.bos_token_id = bos_token_id
+ self.eos_token_id = eos_token_id
+ self.pad_token_id = pad_token_id
+ self.use_bias = use_bias
+ self.use_conv_bias = use_conv_bias
+ self.hidden_act = hidden_act
+ self.initializer_range = initializer_range
+ self.time_step_rank = math.ceil(self.hidden_size / 16) if time_step_rank == "auto" else time_step_rank
+ self.time_step_scale = time_step_scale
+ self.time_step_min = time_step_min
+ self.time_step_max = time_step_max
+ self.time_step_init_scheme = time_step_init_scheme
+ self.time_step_floor = time_step_floor
+ self.max_position_embeddings = max_position_embeddings
+ self.attn = attn
+ self.hidden_ratio = hidden_ratio
+ self.rescale_prenorm_residual = rescale_prenorm_residual
+ self.residual_in_fp32 = residual_in_fp32
+ self.use_cache = use_cache
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.fuse_norm = fuse_norm
+
+ super().__init__(
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ pad_token_id=pad_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs
+ )
diff --git a/fla/models/samba/modeling_samba.py b/fla/models/samba/modeling_samba.py
new file mode 100644
index 0000000000000000000000000000000000000000..b23ea4fcfe5951bb1429c1c1122a1d3834eee139
--- /dev/null
+++ b/fla/models/samba/modeling_samba.py
@@ -0,0 +1,418 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+from dataclasses import dataclass
+from typing import Any, Dict, Optional, Tuple, Union
+
+import torch
+import torch.utils.checkpoint
+from torch import nn
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import ModelOutput, logging
+
+from fla.layers.attn import Attention
+from fla.models.mamba.modeling_mamba import MambaCache, MambaMixer
+from fla.models.samba.configuration_samba import SambaConfig
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.activations import swiglu_linear
+
+logger = logging.get_logger(__name__)
+
+
+class SambaMLP(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ hidden_act: str = 'swish'
+ ) -> SambaMLP:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+        # the MLP holds roughly `2 * hidden_ratio * hidden_size^2` parameters
+        # `intermediate_size` is set to `2/3 * hidden_size * hidden_ratio`, rounded up to the nearest multiple of 256
+ if hidden_ratio is None:
+ hidden_ratio = 4
+ self.hidden_ratio = hidden_ratio
+
+ self.intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
+ self.intermediate_size = 256 * ((self.intermediate_size + 256 - 1) // 256)
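+        # e.g. hidden_size=2304, hidden_ratio=4: 2/3 * 2304 * 4 = 6144, which is already a multiple of 256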
+
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False)
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[hidden_act]
+
+ def forward(self, x):
+ y = self.gate_proj(x)
+ gate, y = y.chunk(2, -1)
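+        # fused SwiGLU: applies `silu(gate) * y` and the down projection in one call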
+ return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias)
+
+
+class SambaBlock(nn.Module):
+ def __init__(self, config, layer_idx):
+ super().__init__()
+
+ self.config = config
+ self.hidden_size = config.hidden_size
+ self.layer_idx = layer_idx
+
+ self.mixer_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ if config.attn is not None and layer_idx in config.attn['layers']:
+ self.mixer = Attention(
+ hidden_size=config.hidden_size,
+ num_heads=config.attn['num_heads'],
+ num_kv_heads=config.attn['num_kv_heads'],
+ window_size=config.attn['window_size'],
+ max_position_embeddings=config.max_position_embeddings,
+ layer_idx=layer_idx
+ )
+ else:
+ self.mixer = MambaMixer(config, layer_idx=layer_idx)
+ self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.mlp = SambaMLP(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ hidden_act=config.hidden_act
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ cache_params: Optional[Tuple[torch.Tensor]] = None,
+ **kwargs,
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+
+ residual = hidden_states
+ hidden_states = self.mixer_norm(hidden_states)
+ if isinstance(self.mixer, MambaMixer):
+ hidden_states = self.mixer(hidden_states, cache_params=cache_params)
+ else:
+ hidden_states, _, cache_params = self.mixer(hidden_states=hidden_states, past_key_values=cache_params)
+ hidden_states, residual = self.mlp_norm(hidden_states, residual, True)
+ hidden_states = self.mlp(hidden_states)
+ hidden_states = residual + hidden_states
+ return hidden_states
+
+
+class SambaPreTrainedModel(PreTrainedModel):
+ """
+ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
+ models.
+ """
+
+ config_class = SambaConfig
+ base_model_prefix = "backbone"
+ _no_split_modules = ["SambaBlock"]
+ supports_gradient_checkpointing = True
+
+ def _init_weights(self, module):
+ """Initialize the weights."""
+ if isinstance(module, MambaMixer):
+ module.A_log._no_weight_decay = True
+ module.D._no_weight_decay = True
+
+ dt_init_std = self.config.time_step_rank**-0.5 * self.config.time_step_scale
+ if self.config.time_step_init_scheme == "constant":
+ nn.init.constant_(module.dt_proj.weight, dt_init_std)
+ elif self.config.time_step_init_scheme == "random":
+ nn.init.uniform_(module.dt_proj.weight, -dt_init_std, dt_init_std)
+
+ dt = torch.exp(
+ torch.rand(self.config.intermediate_size)
+ * (math.log(self.config.time_step_max) - math.log(self.config.time_step_min))
+ + math.log(self.config.time_step_min)
+ ).clamp(min=self.config.time_step_floor)
+            # Inverse of softplus: https://github.com/pytorch/pytorch/issues/72759
+ inv_dt = dt + torch.log(-torch.expm1(-dt))
+ with torch.no_grad():
+ module.dt_proj.bias.copy_(inv_dt)
+ module.dt_proj.bias._no_reinit = True
+
+ if isinstance(module, nn.Linear):
+ if module.bias is not None:
+ if not getattr(module.bias, "_no_reinit", False):
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, std=self.config.initializer_range)
+
+ if self.config.rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["out_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ nn.init.kaiming_uniform_(p, a=math.sqrt(5))
+ with torch.no_grad():
+                        p /= math.sqrt(self.config.num_hidden_layers)
+
+
+@dataclass
+class SambaOutput(ModelOutput):
+ """
+ Class for the Samba model outputs.
+
+ Args:
+ last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
+ Sequence of hidden-states at the output of the last layer of the model.
+ cache_params (`MambaCache`):
+ The state of the model at the last time step. Can be used in a forward method with the next `input_ids` to
+ avoid providing the old `input_ids`.
+
+            Includes both the State space model state matrices after the selective scan, and the Convolutional states.
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*,
+ returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
+
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
+ """
+
+ last_hidden_state: Optional[torch.FloatTensor] = None
+ cache_params: Optional[MambaCache] = None
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
+
+
+@dataclass
+class SambaCausalLMOutput(ModelOutput):
+ """
+ Base class for causal language model (or autoregressive) outputs.
+
+ Args:
+ loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
+ Language modeling loss (for next-token prediction).
+ logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
+ Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
+ cache_params (`MambaCache`):
+ The state of the model at the last time step. Can be used in a forward method with the next `input_ids` to
+ avoid providing the old `input_ids`.
+
+            Includes both the State space model state matrices after the selective scan, and the Convolutional states.
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*,
+ returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
+
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
+ """
+
+ loss: Optional[torch.FloatTensor] = None
+ logits: Optional[torch.FloatTensor] = None
+ cache_params: Optional[MambaCache] = None
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
+
+
+class SambaModel(SambaPreTrainedModel):
+ def __init__(self, config):
+ super().__init__(config)
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
+ self.layers = nn.ModuleList([SambaBlock(config, layer_idx=idx) for idx in range(config.num_hidden_layers)])
+
+ self.gradient_checkpointing = False
+ self.norm_f = RMSNorm(config.hidden_size, eps=config.norm_eps)
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, new_embeddings):
+ self.embeddings = new_embeddings
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ inputs_embeds: Optional[torch.LongTensor] = None,
+ cache_params: Optional[MambaCache] = None,
+ use_cache: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ **kwargs, # `attention_mask` is passed by the tokenizer and we don't want it
+ ) -> Union[Tuple, SambaOutput]:
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ if (input_ids is None) ^ (inputs_embeds is not None): # ^ is python for xor
+ raise ValueError(
+ "You cannot specify both input_ids and inputs_embeds at the same time, and must specify either one"
+ )
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ use_cache = False
+
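+        # lazily allocate a fresh Mamba cache sized for the current batch when none is provided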
+ if cache_params is None and use_cache:
+ cache_params = MambaCache(
+ self.config, inputs_embeds.size(0), device=inputs_embeds.device, dtype=inputs_embeds.dtype
+ )
+
+ hidden_states = inputs_embeds
+ all_hidden_states = () if output_hidden_states else None
+ for mixer_block in self.layers:
+ if self.gradient_checkpointing and self.training:
+ hidden_states = self._gradient_checkpointing_func(mixer_block.__call__, hidden_states, cache_params)
+ else:
+ hidden_states = mixer_block(hidden_states, cache_params=cache_params)
+
+ if output_hidden_states:
+ all_hidden_states = all_hidden_states + (hidden_states,)
+
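+        # advance the cache offset by the number of tokens processed in this forward pass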
+ if use_cache:
+ cache_params.seqlen_offset += inputs_embeds.shape[1]
+
+ hidden_states = self.norm_f(hidden_states)
+
+ if output_hidden_states:
+ all_hidden_states = all_hidden_states + (hidden_states,)
+
+ if not return_dict:
+ return tuple(v for v in [hidden_states, cache_params, all_hidden_states] if v is not None)
+
+ return SambaOutput(
+ last_hidden_state=hidden_states,
+ cache_params=cache_params if use_cache else None,
+ hidden_states=all_hidden_states,
+ )
+
+
+class SambaForCausalLM(SambaPreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.backbone = SambaModel(config)
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def get_input_embeddings(self):
+ return self.backbone.get_input_embeddings()
+
+ def set_input_embeddings(self, new_embeddings):
+ return self.backbone.set_input_embeddings(new_embeddings)
+
+ def _update_model_kwargs_for_generation(
+ self, outputs: ModelOutput, model_kwargs: Dict[str, Any], **kwargs
+ ) -> Dict[str, Any]:
+ model_kwargs["cache_params"] = outputs.get("cache_params", None)
+ return model_kwargs
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids,
+        cache_params: Optional[MambaCache] = None,
+ inputs_embeds=None,
+ attention_mask=None,
+ use_cache: Optional[bool] = True,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs
+ ):
+ # only last token for inputs_ids if the state is passed along.
+ if cache_params is not None:
+ input_ids = input_ids[:, -1].unsqueeze(-1)
+
+ if inputs_embeds is not None and cache_params is None:
+ model_inputs = {"inputs_embeds": inputs_embeds}
+ else:
+ model_inputs = {"input_ids": input_ids}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'cache_params': cache_params,
+ 'use_cache': use_cache,
+ 'attention_mask': attention_mask,
+ 'num_logits_to_keep': num_logits_to_keep,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None, # noqa
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ cache_params: Optional[MambaCache] = None,
+ labels: Optional[torch.LongTensor] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ use_cache: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0,
+ **kwargs, # for now we need this for generation
+ ) -> Union[Tuple, SambaCausalLMOutput]:
+ r"""
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+ Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
+            `labels = input_ids`. Indices are selected in `[-100, 0, ..., config.vocab_size]`. All labels set to `-100`
+            are ignored (masked); the loss is only computed for labels in `[0, ..., config.vocab_size]`.
+ """
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ samba_outputs = self.backbone(
+ input_ids,
+ cache_params=cache_params,
+ inputs_embeds=inputs_embeds,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ use_cache=use_cache,
+ )
+ hidden_states = samba_outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + samba_outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return SambaCausalLMOutput(
+ loss=loss,
+ logits=logits,
+ cache_params=samba_outputs.cache_params,
+ hidden_states=samba_outputs.hidden_states,
+ )
diff --git a/fla/models/scan/__init__.py b/fla/models/scan/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..55e172cf0ab129cc2568b85064a1df632d0d3cb3
--- /dev/null
+++ b/fla/models/scan/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.scan.configuration_scan import SCANConfig
+from fla.models.scan.modeling_scan import SCANForCausalLM, SCANModel
+
+AutoConfig.register(SCANConfig.model_type, SCANConfig)
+AutoModel.register(SCANConfig, SCANModel)
+AutoModelForCausalLM.register(SCANConfig, SCANForCausalLM)
+
+
+__all__ = ['SCANConfig', 'SCANForCausalLM', 'SCANModel']
diff --git a/fla/models/scan/configuration_scan.py b/fla/models/scan/configuration_scan.py
new file mode 100644
index 0000000000000000000000000000000000000000..a89b1adfd382d4f36fde1bd077a64cc5af158031
--- /dev/null
+++ b/fla/models/scan/configuration_scan.py
@@ -0,0 +1,92 @@
+# -*- coding: utf-8 -*-
+
+from typing import Dict, Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class SCANConfig(PretrainedConfig):
+
+ model_type = 'scan'
+ # keys_to_ignore_at_inference = ['past_key_values']
+
+ def __init__(
+ self,
+ hidden_size: int = 2048,
+ window_size: int = 512,
+ gate_logit_normalizer: Optional[int] = 8,
+ clamp_min: Optional[float] = None,
+ clamp_max: Optional[float] = None,
+ hidden_ratio: Optional[int] = 4,
+ intermediate_size: Optional[int] = None,
+ num_hidden_layers: int = 24,
+ num_heads: int = 4,
+ num_kv_heads: Optional[int] = None,
+ state_size: Optional[int] = 64,
+ expand_k: float = 1,
+ expand_v: float = 1,
+ gate_act: str = 'softmax',
+ use_output_gate: bool = False,
+ use_norm: bool = True,
+ hidden_act: str = "swish",
+ elementwise_affine: Optional[bool] = True,
+ max_position_embeddings: Optional[int] = 2048,
+ norm_first: bool = True,
+ norm_eps: float = 1e-6,
+ attn: Optional[Dict] = None,
+ use_cache: bool = True,
+ pad_token_id: int = None,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ initializer_range: float = 0.02,
+ tie_word_embeddings: bool = False,
+ fuse_norm: bool = True,
+ fuse_cross_entropy: bool = True,
+ vocab_size: int = 32000,
+ **kwargs
+ ):
+ self.hidden_size = hidden_size
+ self.window_size = window_size
+ self.gate_logit_normalizer = gate_logit_normalizer
+ self.clamp_min = clamp_min
+ self.clamp_max = clamp_max
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.num_hidden_layers = num_hidden_layers
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads
+ self.state_size = state_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.gate_act = gate_act
+ self.use_output_gate = use_output_gate
+ self.use_norm = use_norm
+ self.hidden_act = hidden_act
+ self.elementwise_affine = elementwise_affine
+ self.max_position_embeddings = max_position_embeddings
+ self.norm_first = norm_first
+ self.norm_eps = norm_eps
+ self.attn = attn
+ self.use_cache = use_cache
+ self.initializer_range = initializer_range
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.fuse_norm = fuse_norm
+ self.vocab_size = vocab_size
+
+ if attn is not None:
+ if not isinstance(attn, Dict):
+ raise ValueError("attn must be a dictionary")
+ if 'layers' not in attn:
+ raise ValueError("Layer indices must be provided to initialize hybrid attention layers")
+ if 'num_heads' not in attn:
+ raise ValueError("Number of heads must be provided to initialize hybrid attention layers")
+ attn['num_kv_heads'] = attn.get('num_kv_heads', attn['num_heads'])
+ attn['window_size'] = attn.get('window_size', None)
+
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/scan/modeling_scan.py b/fla/models/scan/modeling_scan.py
new file mode 100644
index 0000000000000000000000000000000000000000..3197aeca52b499be8c079447998f88620800a5d8
--- /dev/null
+++ b/fla/models/scan/modeling_scan.py
@@ -0,0 +1,442 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+import torch.utils.checkpoint
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_outputs import (BaseModelOutputWithPast,
+ CausalLMOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import logging
+
+from fla.layers.attn import Attention
+from fla.layers.scan import SemiCompressedAttention
+from fla.models.scan.configuration_scan import SCANConfig
+from fla.models.utils import Cache
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.activations import swiglu_linear
+from fla.modules.layernorm import rms_norm_linear
+
+logger = logging.get_logger(__name__)
+
+
+class SCANMLP(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = 'swish',
+ norm_first: bool = True,
+ norm_eps: float = 1e-5
+ ) -> SCANMLP:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+        # with the default sizing, the MLP holds roughly `2 * hidden_ratio * hidden_size^2` parameters
+        # when not given, `intermediate_size` is set to `2/3 * hidden_size * hidden_ratio`, rounded up to the nearest multiple of 256
+ if hidden_ratio is None:
+ hidden_ratio = 4
+ if intermediate_size is None:
+ intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
+ intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256)
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.norm_first = norm_first
+
+ if norm_first:
+ self.norm = RMSNorm(hidden_size=hidden_size, eps=norm_eps)
+
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False)
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[hidden_act]
+
+ def forward(self, x):
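+        # with `norm_first`, the RMSNorm and the gate/up projection are fused into a single `rms_norm_linear` call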
+ if self.norm_first:
+ x = rms_norm_linear(x, self.norm.weight, self.norm.bias, self.gate_proj.weight, self.gate_proj.bias)
+ else:
+ x = self.gate_proj(x)
+ gate, y = x.chunk(2, -1)
+ return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias)
+
+
+class SCANBlock(nn.Module):
+ def __init__(self, config: SCANConfig, layer_idx: int):
+ super().__init__()
+ self.hidden_size = config.hidden_size
+
+ if not config.norm_first:
+ self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ # if config.attn is not None and layer_idx in config.attn['layers']:
+ # self.attn = Attention(
+ # hidden_size=config.hidden_size,
+ # num_heads=config.attn['num_heads'],
+ # num_kv_heads=config.attn['num_kv_heads'],
+ # window_size=config.attn['window_size'],
+ # max_position_embeddings=config.max_position_embeddings,
+ # layer_idx=layer_idx
+ # )
+ # else: # No need for hybrid option because the SCAN module is inherently hybrid
+ self.attn = SemiCompressedAttention(
+ hidden_size=config.hidden_size,
+ window_size=config.window_size,
+ state_size=config.state_size,
+ gate_act=config.gate_act,
+ max_position_embeddings=config.max_position_embeddings,
+ expand_k=config.expand_k,
+ expand_v=config.expand_v,
+ num_heads=config.num_heads,
+ num_kv_heads=config.num_kv_heads,
+ use_output_gate=config.use_output_gate,
+ use_norm=config.use_norm,
+ gate_fn=config.hidden_act,
+ gate_logit_normalizer=config.gate_logit_normalizer,
+ elementwise_affine=config.elementwise_affine,
+ norm_first=config.norm_first,
+ norm_eps=config.norm_eps,
+ fuse_norm=config.fuse_norm,
+ layer_idx=layer_idx
+ )
+ if not config.norm_first:
+ self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.mlp = SCANMLP(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ intermediate_size=config.intermediate_size,
+ hidden_act=config.hidden_act,
+ norm_first=config.norm_first,
+ norm_eps=config.norm_eps
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False,
+ **kwargs
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+
+ residual = hidden_states
+ if hasattr(self, 'attn_norm'):
+ hidden_states = self.attn_norm(hidden_states)
+ hidden_states, attentions, past_key_values = self.attn(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+ if hasattr(self, 'mlp_norm'):
+ hidden_states, residual = self.mlp_norm(hidden_states, residual, True)
+ else:
+ hidden_states = residual + hidden_states
+ residual = hidden_states
+ hidden_states = self.mlp(hidden_states)
+ hidden_states = residual + hidden_states
+
+ outputs = (hidden_states, attentions, past_key_values)
+
+ return outputs
+
+
+class SCANPreTrainedModel(PreTrainedModel):
+
+ config_class = SCANConfig
+ supports_gradient_checkpointing = True
+ _no_split_modules = ['SCANBlock']
+
+ def __init__(self, *inputs, **kwargs):
+ super().__init__(*inputs, **kwargs)
+
+ def _init_weights(
+ self,
+ module: nn.Module,
+ rescale_prenorm_residual: bool = True,
+ num_residuals_per_layer: int = 2,
+ ):
+ if isinstance(module, (nn.Linear, nn.Conv1d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ if rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["o_proj.weight", "down_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ with torch.no_grad():
+ p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers)
+
+
+class SCANModel(SCANPreTrainedModel):
+
+ def __init__(self, config: SCANConfig):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+ self.layers = nn.ModuleList([SCANBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)])
+ self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps)
+
+ self.gradient_checkpointing = False
+
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, value):
+ self.embeddings = value
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None, # noqa
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
+ if output_attentions:
+            warnings.warn("`SCANModel` does not support `output_attentions` for now; setting it to `False`.")
+ output_attentions = False
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ if input_ids is None and inputs_embeds is None:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+ hidden_states = inputs_embeds
+
+ if use_cache and not isinstance(past_key_values, Cache):
+ past_key_values = Cache.from_legacy_cache(past_key_values)
+
+ if self.gradient_checkpointing and self.training and use_cache:
+ logger.warning_once("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
+ use_cache = False
+
+ all_hidden_states = () if output_hidden_states else None
+ all_attns = () if output_attentions else None
+
+ for i, layer in enumerate(self.layers):
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if self.gradient_checkpointing and self.training:
+ hidden_states, attentions, past_key_values = self._gradient_checkpointing_func(
+ layer.__call__,
+ hidden_states,
+ attention_mask,
+ past_key_values,
+ use_cache,
+ output_attentions,
+ )
+ else:
+ hidden_states, attentions, past_key_values = layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+
+ if output_attentions:
+ all_attns += (attentions,)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if not return_dict:
+ return tuple(i for i in [hidden_states, past_key_values, all_hidden_states, all_attns] if i is not None)
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=past_key_values,
+ hidden_states=all_hidden_states,
+ attentions=all_attns
+ )
+
+
+class SCANForCausalLM(SCANPreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = SCANModel(config)
+ self.vocab_size = config.vocab_size
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.embeddings = value
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ def generate(self, *args, **kwargs):
+ try:
+ return super().generate(*args, **kwargs)
+ except AttributeError as exception:
+ if 'past_key_values' in str(exception):
+ raise AttributeError(
+ f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, "
+ f"which is not supported for {self.__class__.__name__}. "
+ f"Try another generation strategy instead. "
+ f"For the available generation strategies, check this doc: "
+ f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies"
+ )
+ else:
+ raise exception
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids: torch.LongTensor = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ use_cache: bool = True,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs
+ ):
+ # only last token for `inputs_ids` if the `past_key_values` is passed along.
+ if past_key_values is not None:
+ input_ids = input_ids[:, -1:]
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {'inputs_embeds': inputs_embeds}
+ else:
+ # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+ # recompiles graphs as the stride of the inputs is a guard.
+ # Ref: https://github.com/huggingface/transformers/pull/29114
+ # TODO: use `next_tokens` directly instead.
+ model_inputs = {'input_ids': input_ids.contiguous()}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'past_key_values': past_key_values,
+ 'use_cache': use_cache,
+ 'attention_mask': attention_mask,
+ 'num_logits_to_keep': num_logits_to_keep,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ inputs_embeds=inputs_embeds,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict
+ )
+
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
diff --git a/fla/models/transformer/__init__.py b/fla/models/transformer/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..47df999fe1446258dc9930e8b0aa6941f1c93f58
--- /dev/null
+++ b/fla/models/transformer/__init__.py
@@ -0,0 +1,14 @@
+# -*- coding: utf-8 -*-
+
+from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
+
+from fla.models.transformer.configuration_transformer import TransformerConfig
+from fla.models.transformer.modeling_transformer import (
+ TransformerForCausalLM, TransformerModel)
+
+AutoConfig.register(TransformerConfig.model_type, TransformerConfig)
+AutoModel.register(TransformerConfig, TransformerModel)
+AutoModelForCausalLM.register(TransformerConfig, TransformerForCausalLM)
+
+
+__all__ = ['TransformerConfig', 'TransformerForCausalLM', 'TransformerModel']
diff --git a/fla/models/transformer/configuration_transformer.py b/fla/models/transformer/configuration_transformer.py
new file mode 100644
index 0000000000000000000000000000000000000000..35e27113cdcfb0e5ed9c0ec70ec08761f6ed4232
--- /dev/null
+++ b/fla/models/transformer/configuration_transformer.py
@@ -0,0 +1,68 @@
+# -*- coding: utf-8 -*-
+
+from typing import Optional
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class TransformerConfig(PretrainedConfig):
+
+ model_type = 'transformer'
+ keys_to_ignore_at_inference = ['past_key_values']
+
+ def __init__(
+ self,
+ vocab_size: int = 32000,
+ hidden_size: int = 2048,
+ num_hidden_layers: int = 24,
+ num_heads: int = 32,
+ num_kv_heads: int = None,
+ window_size: Optional[int] = None,
+ rope_theta: Optional[float] = 10000.,
+ max_position_embeddings: int = 2048,
+ hidden_ratio: Optional[int] = 4,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = "swish",
+ initializer_range: float = 0.02,
+ elementwise_affine: Optional[bool] = True,
+ norm_first: bool = False,
+ norm_eps: float = 1e-6,
+ use_cache: bool = True,
+ pad_token_id: int = None,
+ bos_token_id: int = 1,
+ eos_token_id: int = 2,
+ tie_word_embeddings: bool = False,
+ attention_bias: bool = False,
+ fuse_norm: bool = True,
+ fuse_cross_entropy: bool = True,
+ **kwargs,
+ ):
+ self.vocab_size = vocab_size
+ self.hidden_size = hidden_size
+ self.num_hidden_layers = num_hidden_layers
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads
+ self.window_size = window_size
+ self.rope_theta = rope_theta
+ self.max_position_embeddings = max_position_embeddings
+
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.hidden_act = hidden_act
+
+ self.initializer_range = initializer_range
+ self.elementwise_affine = elementwise_affine
+ self.norm_first = norm_first
+ self.norm_eps = norm_eps
+ self.use_cache = use_cache
+ self.attention_bias = attention_bias
+ self.fuse_cross_entropy = fuse_cross_entropy
+ self.fuse_norm = fuse_norm
+
+ super().__init__(
+ pad_token_id=pad_token_id,
+ bos_token_id=bos_token_id,
+ eos_token_id=eos_token_id,
+ tie_word_embeddings=tie_word_embeddings,
+ **kwargs,
+ )
diff --git a/fla/models/transformer/modeling_transformer.py b/fla/models/transformer/modeling_transformer.py
new file mode 100644
index 0000000000000000000000000000000000000000..d5c3aa943887648b7656365676888916b42331b3
--- /dev/null
+++ b/fla/models/transformer/modeling_transformer.py
@@ -0,0 +1,428 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+import warnings
+from typing import List, Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+import torch.utils.checkpoint
+from transformers.activations import ACT2FN
+from transformers.generation import GenerationMixin
+from transformers.modeling_outputs import (BaseModelOutputWithPast,
+ CausalLMOutputWithPast)
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import logging
+
+from fla.layers.attn import Attention
+from fla.models.transformer.configuration_transformer import TransformerConfig
+from fla.models.utils import Cache
+from fla.modules import (FusedCrossEntropyLoss, FusedLinearCrossEntropyLoss,
+ RMSNorm)
+from fla.modules.activations import swiglu_linear
+from fla.modules.layernorm import rms_norm_linear
+
+logger = logging.get_logger(__name__)
+
+
+class TransformerMLP(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ hidden_ratio: Optional[int] = None,
+ intermediate_size: Optional[int] = None,
+ hidden_act: str = 'swish',
+ norm_first: bool = True,
+ norm_eps: float = 1e-5
+ ) -> TransformerMLP:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+        # with the default sizing, the MLP holds roughly `2 * hidden_ratio * hidden_size^2` parameters
+        # when not given, `intermediate_size` is set to `2/3 * hidden_size * hidden_ratio`, rounded up to the nearest multiple of 256
+ if hidden_ratio is None:
+ hidden_ratio = 4
+ if intermediate_size is None:
+ intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
+ intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256)
+ self.hidden_ratio = hidden_ratio
+ self.intermediate_size = intermediate_size
+ self.norm_first = norm_first
+
+ if norm_first:
+ self.norm = RMSNorm(hidden_size=hidden_size, eps=norm_eps)
+
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False)
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+ self.act_fn = ACT2FN[hidden_act]
+
+ def forward(self, x):
+ if self.norm_first:
+ x = rms_norm_linear(x, self.norm.weight, self.norm.bias, self.gate_proj.weight, self.gate_proj.bias)
+ else:
+ x = self.gate_proj(x)
+ gate, y = x.chunk(2, -1)
+ return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias)
+
+
+class TransformerBlock(nn.Module):
+
+ def __init__(self, config: TransformerConfig, layer_idx: int):
+ super().__init__()
+
+ self.hidden_size = config.hidden_size
+
+ if not config.norm_first:
+ self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.attn = Attention(
+ hidden_size=config.hidden_size,
+ num_heads=config.num_heads,
+ num_kv_heads=config.num_kv_heads,
+ window_size=config.window_size,
+ rope_theta=config.rope_theta,
+ max_position_embeddings=config.max_position_embeddings,
+ norm_first=config.norm_first,
+ norm_eps=config.norm_eps,
+ layer_idx=layer_idx
+ )
+ if not config.norm_first:
+ self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps)
+ self.mlp = TransformerMLP(
+ hidden_size=config.hidden_size,
+ hidden_ratio=config.hidden_ratio,
+ intermediate_size=config.intermediate_size,
+ hidden_act=config.hidden_act,
+ norm_first=config.norm_first,
+ norm_eps=config.norm_eps
+ )
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Tuple[torch.Tensor]] = None,
+ output_attentions: Optional[bool] = False,
+ use_cache: Optional[bool] = False,
+ **kwargs,
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+
+ residual = hidden_states
+ if hasattr(self, 'attn_norm'):
+ hidden_states = self.attn_norm(hidden_states)
+ hidden_states, attentions, past_key_values = self.attn(
+ hidden_states=hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ use_cache=use_cache,
+ output_attentions=output_attentions
+ )
+ if hasattr(self, 'mlp_norm'):
+ hidden_states, residual = self.mlp_norm(hidden_states, residual, True)
+ else:
+ hidden_states = residual + hidden_states
+ residual = hidden_states
+ hidden_states = self.mlp(hidden_states)
+ hidden_states = residual + hidden_states
+
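+        # hidden states are always returned first; attention weights and the cache are appended only when requested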
+ outputs = (hidden_states,)
+
+ if output_attentions:
+ outputs += (attentions,)
+
+ if use_cache:
+ outputs += (past_key_values,)
+
+ return outputs
+
+
+class TransformerPreTrainedModel(PreTrainedModel):
+
+ config_class = TransformerConfig
+ supports_gradient_checkpointing = True
+ _no_split_modules = ['TransformerBlock']
+
+ def __init__(self, *inputs, **kwargs):
+ super().__init__(*inputs, **kwargs)
+
+ def _init_weights(
+ self,
+ module: nn.Module,
+ rescale_prenorm_residual: bool = False,
+ num_residuals_per_layer: int = 2,
+ ):
+ if isinstance(module, (nn.Linear, nn.Conv1d)):
+ # Slightly different from the TF version which uses truncated_normal for initialization
+ # cf https://github.com/pytorch/pytorch/pull/5617
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ elif isinstance(module, nn.Embedding):
+ nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ if rescale_prenorm_residual:
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+ #
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+ for name, p in module.named_parameters():
+ if name in ["o_proj.weight", "down_proj.weight"]:
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+ # We need to reinit p since this code could be called multiple times
+ # Having just p *= scale would repeatedly scale it down
+ with torch.no_grad():
+ p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers)
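+ # e.g. with num_residuals_per_layer=2 and 24 hidden layers the divisor is
+ # sqrt(48) ~= 6.93, so these residual output projections start roughly 7x smaller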
+
+
+class TransformerModel(TransformerPreTrainedModel):
+
+ def __init__(self, config: TransformerConfig):
+ super().__init__(config)
+ self.padding_idx = config.pad_token_id
+ self.vocab_size = config.vocab_size
+
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+ self.layers = nn.ModuleList([TransformerBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)])
+ self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps)
+
+ self.gradient_checkpointing = False
+
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.embeddings
+
+ def set_input_embeddings(self, value):
+ self.embeddings = value
+
+ def forward(
+ self,
+ input_ids: Optional[torch.LongTensor] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ if output_attentions:
+ warnings.warn(
+                "`TransformerModel` does not currently support outputting attention weights, so `output_attentions` is set to `False`."
+ )
+ output_attentions = False
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ # retrieve input_ids and inputs_embeds
+ if input_ids is not None and inputs_embeds is not None:
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+ elif input_ids is None and inputs_embeds is None:
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+ if use_cache and not isinstance(past_key_values, Cache):
+ past_key_values = Cache.from_legacy_cache(past_key_values)
+
+ if inputs_embeds is None:
+ inputs_embeds = self.embeddings(input_ids)
+
+ # embed positions
+ hidden_states = inputs_embeds
+
+ if self.gradient_checkpointing and self.training:
+ if use_cache:
+ logger.warning_once(
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
+ )
+ use_cache = False
+
+ all_hidden_states = () if output_hidden_states else None
+ all_attns = () if output_attentions else None
+ next_cache = None
+
+ for layer in self.layers:
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if self.gradient_checkpointing and self.training:
+ layer_outputs = self._gradient_checkpointing_func(
+ layer.__call__,
+ hidden_states,
+ attention_mask,
+ past_key_values,
+ output_attentions,
+ use_cache
+ )
+ else:
+ layer_outputs = layer(
+ hidden_states,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ output_attentions=output_attentions,
+ use_cache=use_cache
+ )
+
+ hidden_states = layer_outputs[0]
+
+ if use_cache:
+ next_cache = layer_outputs[2 if output_attentions else 1]
+
+ if output_attentions:
+ all_attns += (layer_outputs[1],)
+
+ hidden_states = self.norm(hidden_states)
+
+ # add hidden states from the last decoder layer
+ if output_hidden_states:
+ all_hidden_states += (hidden_states,)
+
+ if not return_dict:
+ return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_attns] if v is not None)
+
+ return BaseModelOutputWithPast(
+ last_hidden_state=hidden_states,
+ past_key_values=next_cache,
+ hidden_states=all_hidden_states,
+ attentions=all_attns
+ )
+
+
+class TransformerForCausalLM(TransformerPreTrainedModel, GenerationMixin):
+
+ _tied_weights_keys = ["lm_head.weight"]
+
+ def __init__(self, config):
+ super().__init__(config)
+ self.model = TransformerModel(config)
+ self.vocab_size = config.vocab_size
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+ # Initialize weights and apply final processing
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.model.embeddings
+
+ def set_input_embeddings(self, value):
+ self.model.embeddings = value
+
+ def get_output_embeddings(self):
+ return self.lm_head
+
+ def set_output_embeddings(self, new_embeddings):
+ self.lm_head = new_embeddings
+
+ def set_decoder(self, decoder):
+ self.model = decoder
+
+ def get_decoder(self):
+ return self.model
+
+ def prepare_inputs_for_generation(
+ self,
+ input_ids: torch.LongTensor = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ inputs_embeds: Optional[torch.Tensor] = None,
+ use_cache: bool = True,
+ num_logits_to_keep: Optional[int] = None,
+ **kwargs
+ ):
+ # only last token for `inputs_ids` if the `past_key_values` is passed along.
+ if past_key_values is not None:
+ input_ids = input_ids[:, -1:]
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {'inputs_embeds': inputs_embeds}
+ else:
+ # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+ # recompiles graphs as the stride of the inputs is a guard.
+ # Ref: https://github.com/huggingface/transformers/pull/29114
+ # TODO: use `next_tokens` directly instead.
+ model_inputs = {'input_ids': input_ids.contiguous()}
+
+ if num_logits_to_keep is not None:
+ model_inputs['num_logits_to_keep'] = num_logits_to_keep
+
+ model_inputs.update({
+ 'past_key_values': past_key_values,
+ 'use_cache': use_cache,
+ 'attention_mask': attention_mask,
+ 'num_logits_to_keep': num_logits_to_keep,
+ })
+ return model_inputs
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor = None,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+ inputs_embeds: Optional[torch.FloatTensor] = None,
+ labels: Optional[torch.LongTensor] = None,
+ use_cache: Optional[bool] = None,
+ output_attentions: Optional[bool] = None,
+ output_hidden_states: Optional[bool] = None,
+ return_dict: Optional[bool] = None,
+ num_logits_to_keep: Optional[int] = 0
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+ outputs = self.model(
+ input_ids=input_ids,
+ attention_mask=attention_mask,
+ past_key_values=past_key_values,
+ inputs_embeds=inputs_embeds,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict
+ )
+
+ hidden_states = outputs[0]
+ fuse_linear_and_cross_entropy = self.config.fuse_cross_entropy and self.training
+ logits = None if fuse_linear_and_cross_entropy else self.lm_head(hidden_states[:, -num_logits_to_keep:])
+
+ loss = None
+ if labels is not None:
+ if self.config.fuse_cross_entropy:
+ if fuse_linear_and_cross_entropy:
+ loss_fct = FusedLinearCrossEntropyLoss()
+ else:
+ loss_fct = FusedCrossEntropyLoss(inplace_backward=True)
+ else:
+ loss_fct = nn.CrossEntropyLoss()
+ # Enable model parallelism
+ labels = labels.to(hidden_states.device)
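+ # shift labels one position to the left and pad with `ignore_index`, so that
+ # logits at position t are scored against the target token at position t + 1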
+ labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1)
+ if fuse_linear_and_cross_entropy:
+ loss = loss_fct(hidden_states.view(-1, self.config.hidden_size),
+ labels.view(-1),
+ self.lm_head.weight,
+ self.lm_head.bias)
+ else:
+ loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return CausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
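+
+
+if __name__ == "__main__":
+    # Illustrative smoke test only: the config values below are arbitrary, and the
+    # attention/fused-loss kernels typically expect a CUDA device with Triton
+    # (and possibly flash-attn) available, so treat this as a sketch rather than a
+    # supported entry point.
+    config = TransformerConfig(hidden_size=64, num_hidden_layers=2, num_heads=4, vocab_size=128)
+    model = TransformerForCausalLM(config).eval()
+    input_ids = torch.randint(0, config.vocab_size, (1, 16))
+    with torch.no_grad():
+        out = model(input_ids, labels=input_ids)
+    print(out.loss, out.logits.shape)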
diff --git a/fla/models/utils.py b/fla/models/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..ed6fa9002feee9bc172ada761481a56bc4c98cc1
--- /dev/null
+++ b/fla/models/utils.py
@@ -0,0 +1,143 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+from typing import Any, Dict, List, Optional, Tuple
+
+import torch
+import transformers
+
+
+class Cache(transformers.cache_utils.Cache):
+ """
+ A cache used for storing hidden states produced by flash linear attention models.
+
+    It stores the state of each layer as a tensor of shape `[batch_size, key_dim, value_dim]`.
+ """
+
+ def __init__(
+ self,
+ seen_tokens: int = 0
+ ) -> Cache:
+
+ self.states: List[Dict[str, Any]] = []
+
+ self._seen_tokens = seen_tokens # Used in `generate` to keep tally of how many tokens the cache has seen
+
+ def __getitem__(self, layer_idx: int) -> Dict[str, Any]:
+ if layer_idx < len(self):
+ return self.states[layer_idx]
+ else:
+ raise KeyError(f"Cache only has {len(self)} layers, attempted to access layer with index {layer_idx}")
+
+ def __iter__(self):
+ for state in self.states:
+ yield state
+
+ def __len__(self):
+ return len(self.states)
+
+ def update(
+ self,
+ recurrent_state: torch.Tensor = None,
+ attn_state: Tuple[torch.Tensor, torch.Tensor] = None,
+ conv_state: Tuple[torch.Tensor] = None,
+ ffn_state: torch.Tensor = None,
+ layer_idx: int = 0,
+ offset: Optional[int] = 1,
+ cache_kwargs: Optional[Dict[str, Any]] = None,
+ ) -> Dict[str, Any]:
+ """
+ Updates the cache with the new `recurrent_state`/`attn_state`/`conv_state` for the layer `layer_idx`.
+
+ Args:
+ recurrent_state (`torch.Tensor`, `optional`):
+ The new recurrent state to cache.
+ attn_state (`Tuple[torch.Tensor, torch.Tensor]`, `optional`):
+ The new attention key/value states to cache.
+ conv_state (`Tuple[torch.Tensor]`, `optional`):
+ The new convolution state to cache.
+ layer_idx (`int`, defaults to 0):
+ The index of the layer to cache the states for.
+ offset (`int`, `optional`, defaults to 1):
+ The number of new tokens being processed.
+ cache_kwargs (`Dict[str, Any]`, `optional`):
+ Additional arguments for the cache subclass.
+
+ Return:
+ Dictionary of the updated state.
+ """
+
+ # Update the number of seen tokens
+ if layer_idx == 0:
+ self._seen_tokens += offset
+
+ if attn_state is not None:
+ if not isinstance(attn_state, tuple) or len(attn_state) != 2:
+ raise ValueError("`attn_state` must be a tuple of two tensors for key/value states")
+ input_size = attn_state[0].shape[-2]
+ window_size = cache_kwargs.get('window_size', None) if cache_kwargs is not None else None
+ if len(self.states) <= layer_idx:
+ if attn_state is not None:
+ if window_size is not None and input_size > window_size:
+ attn_state = (attn_state[0][..., -window_size:, :].contiguous(),
+ attn_state[1][..., -window_size:, :].contiguous())
+ state = dict(
+ recurrent_state=recurrent_state,
+ attn_state=attn_state,
+ conv_state=conv_state,
+ ffn_state=ffn_state
+ )
+ self.states.append(state)
+ else:
+ state = self.states[layer_idx]
+ if recurrent_state is not None:
+ state['recurrent_state'] = recurrent_state
+ if attn_state is not None:
+ key_state, value_state = state['attn_state']
+ if window_size is not None and key_state.shape[-2] == window_size:
+ # DO NOT allocate new memory if the cache is full
+ # roll the key/value states to the left by `input_size`
+ key_state = key_state.roll(-input_size, -2)
+ value_state = value_state.roll(-input_size, -2)
+ # replace the last `input_size` tokens with the new key/value states
+ key_state[..., -input_size:, :] = attn_state[0]
+ value_state[..., -input_size:, :] = attn_state[1]
+ attn_state = (key_state, value_state)
+ else:
+ attn_state = (torch.cat([key_state, attn_state[0]], -2),
+ torch.cat([value_state, attn_state[1]], -2),)
+ state['attn_state'] = attn_state
+ if conv_state is not None:
+ state['conv_state'] = conv_state
+ if ffn_state is not None:
+ state['ffn_state'] = ffn_state
+
+ return state
+
+ def get_seq_length(self, layer_idx: Optional[int] = 0) -> int:
+ """Returns the sequence length of the cached states. A layer index can be optionally passed."""
+ if len(self.states) <= layer_idx:
+ return 0
+ return self._seen_tokens
+
+ def get_max_length(self) -> Optional[int]:
+ """Returns the maximum sequence length of the cached states. Cache does not have a maximum length."""
+ return None
+
+ def to_legacy_cache(self) -> Tuple:
+ return tuple(self.states)
+
+ @classmethod
+ def from_legacy_cache(
+ cls,
+ past_key_values: Optional[Tuple] = None,
+ seen_tokens: int = 0
+ ) -> Cache:
+ """Converts a cache in the legacy cache format into an equivalent `Cache`."""
+
+ cache = cls(seen_tokens)
+ if past_key_values is not None:
+ for layer_idx in range(len(past_key_values)):
+ cache.states.append(past_key_values[layer_idx])
+ return cache
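+
+
+if __name__ == "__main__":
+    # Usage sketch (illustrative only, not part of the public API): a decoder layer
+    # pushes its sliding-window key/value states into the cache once per forward step.
+    # The shapes below are arbitrary assumptions: [batch_size, num_heads, seq_len, head_dim].
+    k, v = torch.randn(1, 2, 4, 8), torch.randn(1, 2, 4, 8)
+    cache = Cache()
+    state = cache.update(attn_state=(k, v), layer_idx=0, offset=4, cache_kwargs={'window_size': 16})
+    # the cache has now seen 4 tokens and holds the key/value states for layer 0
+    print(cache.get_seq_length(), state['attn_state'][0].shape)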
diff --git a/fla/modules/__init__.py b/fla/modules/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..12ed06c29b480e04228b3799c089930f5e79b94f
--- /dev/null
+++ b/fla/modules/__init__.py
@@ -0,0 +1,22 @@
+# -*- coding: utf-8 -*-
+
+from fla.modules.convolution import (ImplicitLongConvolution, LongConvolution,
+ ShortConvolution)
+from fla.modules.fused_cross_entropy import FusedCrossEntropyLoss
+from fla.modules.fused_kl_div import FusedKLDivLoss
+from fla.modules.fused_linear_cross_entropy import FusedLinearCrossEntropyLoss
+from fla.modules.fused_norm_gate import (FusedLayerNormSwishGate,
+ FusedLayerNormSwishGateLinear,
+ FusedRMSNormSwishGate,
+ FusedRMSNormSwishGateLinear)
+from fla.modules.layernorm import (GroupNorm, GroupNormLinear, LayerNorm,
+ LayerNormLinear, RMSNorm, RMSNormLinear)
+from fla.modules.rotary import RotaryEmbedding
+
+__all__ = [
+ 'ImplicitLongConvolution', 'LongConvolution', 'ShortConvolution',
+ 'FusedCrossEntropyLoss', 'FusedLinearCrossEntropyLoss', 'FusedKLDivLoss',
+ 'GroupNorm', 'GroupNormLinear', 'LayerNorm', 'LayerNormLinear', 'RMSNorm', 'RMSNormLinear',
+ 'FusedLayerNormSwishGate', 'FusedLayerNormSwishGateLinear', 'FusedRMSNormSwishGate', 'FusedRMSNormSwishGateLinear',
+ 'RotaryEmbedding'
+]
diff --git a/fla/modules/activations.py b/fla/modules/activations.py
new file mode 100644
index 0000000000000000000000000000000000000000..c4dd608b453ac39d28d64b729fb24ad4e31ca0ef
--- /dev/null
+++ b/fla/modules/activations.py
@@ -0,0 +1,434 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2023-2024, Tri Dao, Yu Zhang, Songlin Yang.
+
+import torch
+import torch.nn.functional as F
+import triton
+import triton.language as tl
+
+import fla.modules.fused_bitlinear as fused_bitlinear
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+sigmoid_fwd_codestring = """
+template T sigmoid_fwd(T x) {
+ return 1.0f / (1.0f + ::exp(-float(x)));
+}
+"""
+sigmoid_bwd_codestring = """
+template T sigmoid_bwd(T x, T g) {
+ float x_sigmoid = 1.0f / (1.0f + ::exp(-float(x)));
+ return float(g) * x_sigmoid * (1.0f - x_sigmoid);
+}
+"""
+
+sigmoid_fwd = torch.cuda.jiterator._create_jit_fn(sigmoid_fwd_codestring)
+sigmoid_bwd = torch.cuda.jiterator._create_jit_fn(sigmoid_bwd_codestring)
+
+
+class SigmoidFunction(torch.autograd.Function):
+
+ @staticmethod
+ def forward(ctx, x):
+ ctx.save_for_backward(x)
+ return sigmoid_fwd(x)
+
+ @staticmethod
+ def backward(ctx, dout):
+ x, = ctx.saved_tensors
+ return sigmoid_bwd(x, dout)
+
+
+sigmoid = SigmoidFunction.apply
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32)
+ ],
+ key=['D']
+)
+@triton.jit
+def logsigmoid_fwd_kernel(
+ x,
+ y,
+ temperature,
+ T: tl.constexpr,
+ D: tl.constexpr,
+ B: tl.constexpr
+):
+ i = tl.program_id(0)
+ o_i = i * B + tl.arange(0, B)
+ m_i = o_i < T
+
+ b_x = tl.load(x + o_i, mask=m_i, other=0.).to(tl.float32)
+ b_m = tl.minimum(0., b_x)
+ b_z = 1. + tl.exp(-tl.abs(b_x))
+ b_y = (b_m - tl.log(b_z)) / temperature
+ tl.store(y + o_i, b_y.to(y.dtype.element_ty), mask=m_i)
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32)
+ ],
+ key=['D']
+)
+@triton.jit
+def logsigmoid_bwd_kernel(
+ x,
+ dx,
+ dy,
+ temperature,
+ T: tl.constexpr,
+ D: tl.constexpr,
+ B: tl.constexpr
+):
+ i = tl.program_id(0)
+ o_i = i * B + tl.arange(0, B)
+ m_i = o_i < T
+
+ b_x = tl.load(x + o_i, mask=m_i, other=0.).to(tl.float32)
+ b_dy = tl.load(dy + o_i, mask=m_i, other=0.).to(tl.float32)
+ b_dx = b_dy * (1. - tl.sigmoid(b_x)) / temperature
+ tl.store(dx + o_i, b_dx.to(dx.dtype.element_ty), mask=m_i)
+
+
+def logsigmoid_fwd(x: torch.Tensor, temperature: float = 1.) -> torch.Tensor:
+ T, D = x.numel(), x.shape[-1]
+ B = triton.next_power_of_2(triton.cdiv(T, torch.cuda.get_device_properties(x.device).multi_processor_count))
+ y = torch.empty_like(x)
+ logsigmoid_fwd_kernel[(triton.cdiv(T, B),)](
+ x=x,
+ y=y,
+ temperature=temperature,
+ T=T,
+ D=D,
+ B=B
+ )
+ return y
+
+
+def logsigmoid_bwd(x: torch.Tensor, dy: torch.Tensor, temperature: float = 1.) -> torch.Tensor:
+ T, D = x.numel(), x.shape[-1]
+ B = triton.next_power_of_2(triton.cdiv(T, torch.cuda.get_device_properties(x.device).multi_processor_count))
+ dx = torch.empty_like(x)
+ logsigmoid_bwd_kernel[(triton.cdiv(T, B),)](
+ x=x,
+ dx=dx,
+ dy=dy,
+ temperature=temperature,
+ T=T,
+ D=D,
+ B=B
+ )
+ return dx
+
+
+class LogSigmoidFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(ctx, x, temperature):
+ ctx.save_for_backward(x,)
+ ctx.temperature = temperature
+ return logsigmoid_fwd(x, temperature)
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, dy):
+ x, = ctx.saved_tensors
+ return logsigmoid_bwd(x, dy, ctx.temperature), None
+
+
+def logsigmoid(x: torch.Tensor, temperature: float = 1.) -> torch.Tensor:
+ return LogSigmoidFunction.apply(x, temperature)
+
+
+swish_fwd_codestring = """
+template T swish_fwd(T x) {
+ float x_sigmoid = 1.0f / (1.0f + ::exp(-float(x)));
+ return float(x) * x_sigmoid;
+}
+"""
+swish_bwd_codestring = """
+template T swish_bwd(T x, T g) {
+ float x_sigmoid = 1.0f / (1.0f + ::exp(-float(x)));
+ return float(g) * x_sigmoid * (1.0f - float(x) * x_sigmoid + float(x));
+}
+"""
+
+swish_fwd = torch.cuda.jiterator._create_jit_fn(swish_fwd_codestring)
+swish_bwd = torch.cuda.jiterator._create_jit_fn(swish_bwd_codestring)
+
+
+class SwishFunction(torch.autograd.Function):
+
+ @staticmethod
+ def forward(ctx, x):
+ ctx.save_for_backward(x)
+ return swish_fwd(x)
+
+ @staticmethod
+ def backward(ctx, dout):
+ x, = ctx.saved_tensors
+ return swish_bwd(x, dout)
+
+
+swish = SwishFunction.apply
+
+# 1/sqrt(2*pi)-> 0.3989423
+# 1/sqrt(2) -> 0.70710678
+# sqrt(2/pi) -> 0.79788456
+
+
+# this function is tanh approximation of gelu
+# actual gelu is:
+# x * 0.5 * (1.0 + torch.erf(x * 0.70710678))
+@torch.jit.script
+def bias_gelu(y, bias):
+ x = bias + y
+ return (x * 0.5 * (1.0 + torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x)))).to(dtype=y.dtype)
+
+
+# gradient of tanh approximation of gelu
+# gradient of actual gelu is:
+# 0.5 * (1. + torch.erf(x * 0.70710678)) + 0.3989423 * x * torch.exp(-0.5 * x * x)
+@torch.jit.script
+def bias_gelu_bwd(g, y, bias):
+ """Assume that y has shape (B, D) and bias has shape (D)"""
+ x = bias + y
+ tanh_out = torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x))
+ # sqrt(2/pi) * 3 * 0.044715 -> 0.1070322243
+ ff = 0.5 * x * ((1 - tanh_out * tanh_out) * (0.79788456 + 0.1070322243 * x * x)) + 0.5 * (
+ 1 + tanh_out
+ )
+ grad_y = ff * g
+ return grad_y.to(dtype=y.dtype), grad_y.sum(dim=(0), dtype=bias.dtype)
+
+
+class GeLUFunction(torch.autograd.Function):
+
+ @staticmethod
+ # bias is an optional argument
+ def forward(ctx, input, bias):
+ ctx.save_for_backward(input, bias)
+ return bias_gelu(input, bias)
+
+ @staticmethod
+ def backward(ctx, grad_output):
+ input, bias = ctx.saved_tensors
+ # `bias_gelu_bwd` returns (grad_input, grad_bias), one gradient per forward input
+ grad_input, grad_bias = bias_gelu_bwd(grad_output, input, bias)
+ return grad_input, grad_bias
+
+
+bias_gelu_impl = GeLUFunction.apply
+
+
+# this function is tanh approximation of gelu
+# actual gelu is:
+# x * 0.5 * (1.0 + torch.erf(x * 0.70710678))
+@torch.jit.script
+def gelu_fwd(x):
+ return (x * 0.5 * (1.0 + torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x)))).to(dtype=x.dtype)
+
+
+# gradient of tanh approximation of gelu
+# gradient of actual gelu is:
+# 0.5 * (1. + torch.erf(x * 0.70710678)) + 0.3989423 * x * torch.exp(-0.5 * x * x)
+@torch.jit.script
+def gelu_bwd(g, x):
+ tanh_out = torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x))
+ # sqrt(2/pi) * 3 * 0.044715 -> 0.1070322243
+ ff = 0.5 * x * ((1 - tanh_out * tanh_out) * (0.79788456 + 0.1070322243 * x * x)) + 0.5 * (
+ 1 + tanh_out
+ )
+ return (ff * g).to(dtype=x.dtype)
+
+
+class FastGeLUFunction(torch.autograd.Function):
+ @staticmethod
+ # bias is an optional argument
+ def forward(ctx, input):
+ ctx.save_for_backward(input)
+ return gelu_fwd(input)
+
+ @staticmethod
+ def backward(ctx, grad_output):
+ (input,) = ctx.saved_tensors
+ tmp = gelu_bwd(grad_output, input)
+ return tmp
+
+
+fast_gelu_impl = FastGeLUFunction.apply
+
+
+@torch.jit.script
+def relu_bwd(g, x):
+ return torch.where(x >= 0, g, 0.0).to(dtype=x.dtype)
+
+
+@torch.jit.script
+def sqrelu_fwd(x):
+ r = F.relu(x)
+ return (r * r).to(dtype=x.dtype)
+
+
+@torch.jit.script
+def sqrelu_bwd(g, x):
+ return (2.0 * g * F.relu(x)).to(dtype=x.dtype)
+
+
+class SquaredReLUFunction(torch.autograd.Function):
+
+ @staticmethod
+ def forward(ctx, input):
+ ctx.save_for_backward(input)
+ return sqrelu_fwd(input)
+
+ @staticmethod
+ def backward(ctx, grad_output):
+ input, = ctx.saved_tensors
+ return sqrelu_bwd(grad_output, input)
+
+
+sqrelu = SquaredReLUFunction.apply
+
+
+swiglu_fwd_codestring = """
+template T swiglu_fwd(T x, T y) {
+ return float(x) * float(y) / (1.0f + ::exp(-float(x)));
+}
+"""
+swiglu_bwd_codestring = """
+template T swiglu_bwd(T x, T y, T g, T& dx, T& dy) {
+ float x_sigmoid = 1.0f / (1.0f + ::exp(-float(x)));
+ dx = x_sigmoid * (1 + float(x) * (1.0f - x_sigmoid)) * float(g) * float(y);
+ dy = float(x) * x_sigmoid * float(g);
+}
+"""
+
+swiglu_bwd_with_output_codestring = """
+template T swiglu_bwd_with_output(T x, T y, T g, T& dx, T& dy, T& z) {
+ float x_sigmoid = 1.0f / (1.0f + ::exp(-float(x)));
+ float x_swish = float(x) * x_sigmoid;
+ dx = x_sigmoid * (1 + float(x) * (1.0f - x_sigmoid)) * float(g) * float(y);
+ dy = x_swish * float(g);
+ z = x_swish * float(y);
+}
+"""
+
+swiglu_fwd = torch.cuda.jiterator._create_jit_fn(swiglu_fwd_codestring)
+swiglu_bwd = torch.cuda.jiterator._create_multi_output_jit_fn(swiglu_bwd_codestring, num_outputs=2)
+swiglu_bwd_with_output = torch.cuda.jiterator._create_multi_output_jit_fn(swiglu_bwd_with_output_codestring, num_outputs=3)
+
+
+class SwiGLUFunction(torch.autograd.Function):
+ r"""
+ Swish-Gated Linear Unit (SwiGLU) function.
+
+ .. math::
+ \text{SwiGLU}(x, y) = swish(x) * y = \frac{x}{1 + \exp(-x)} * y
+ """
+
+ @staticmethod
+ def forward(ctx, x, y):
+ ctx.save_for_backward(x, y)
+ return swiglu_fwd(x, y)
+
+ @staticmethod
+ def backward(ctx, dout):
+ x, y = ctx.saved_tensors
+ return swiglu_bwd(x, y, dout)
+
+
+class SwiGLULinearFunction(torch.autograd.Function):
+ r"""
+ Swish-Gated Linear Unit (SwiGLU) function followed by a linear transformation.
+
+ .. math::
+ \text{SwiGLULinear}(x, y, W, b) = (swish(x) * y) W + b
+
+    This simple wrapper discards the intermediate result of SwiGLU(x, y) to save memory; it is recomputed in the backward pass.
+ """
+
+ @staticmethod
+ @autocast_custom_fwd
+ def forward(ctx, x, y, weight, bias):
+ z = swiglu_fwd(x, y)
+ out = F.linear(z, weight, bias)
+ # We don't store z, will be recomputed in the backward pass to save memory
+ ctx.save_for_backward(x, y, weight)
+ ctx.linear_bias_is_none = bias is None
+ return out
+
+ @staticmethod
+ @autocast_custom_bwd
+ def backward(ctx, dout, *args):
+ x, y, weight = ctx.saved_tensors
+ dout = dout.reshape(-1, dout.shape[-1])
+ dz = F.linear(dout, weight.t()).view_as(x)
+ dx, dy, z = swiglu_bwd_with_output(x, y, dz)
+ dlinear_weight = torch.einsum("bo,bi->oi", dout, z.reshape(-1, z.shape[-1]))
+ dlinear_bias = None if ctx.linear_bias_is_none else dout.sum(0)
+ return dx, dy, dlinear_weight, dlinear_bias
+
+
+class SwiGLUBitLinearFunction(torch.autograd.Function):
+ r"""
+    Swish-Gated Linear Unit (SwiGLU) function followed by a quantized BitLinear transformation.
+
+ .. math::
+ \text{SwiGLULinear}(x, y, W, b) = (swish(x) * y) W + b
+
+    This simple wrapper discards the intermediate result of SwiGLU(x, y) to save memory; it is recomputed in the backward pass.
+ """
+
+ @staticmethod
+ @autocast_custom_fwd
+ def forward(ctx, x, y, weight, bias):
+ z = swiglu_fwd(x, y)
+ out = fused_bitlinear.bit_linear(z, weight, bias)
+ # We don't store z, will be recomputed in the backward pass to save memory
+ ctx.save_for_backward(x, y, weight)
+ ctx.linear_bias_is_none = bias is None
+ return out
+
+ @staticmethod
+ @autocast_custom_bwd
+ def backward(ctx, dout, *args):
+ x, y, weight = ctx.saved_tensors
+ dout = dout.reshape(-1, dout.shape[-1])
+ dz = fused_bitlinear.bit_linear(dout, weight.t()).view_as(x)
+ dx, dy, z = swiglu_bwd_with_output(x, y, dz)
+ dlinear_weight = torch.einsum("bo,bi->oi", dout, z.reshape(-1, z.shape[-1]))
+ dlinear_bias = None if ctx.linear_bias_is_none else dout.sum(0)
+ return dx, dy, dlinear_weight, dlinear_bias
+
+
+swiglu = SwiGLUFunction.apply
+
+swiglu_linear = SwiGLULinearFunction.apply
+
+swiglu_bitlinear = SwiGLUBitLinearFunction.apply
+
+ACT2FN = {
+ 'relu': F.relu,
+ 'sigmoid': sigmoid,
+ 'logsigmoid': logsigmoid,
+ 'silu': swish,
+ 'swish': swish,
+ 'sqrelu': sqrelu,
+ 'gelu': fast_gelu_impl,
+ 'bias_gelu': bias_gelu_impl,
+}
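+
+
+if __name__ == "__main__":
+    # Quick numerical sanity check (illustrative only): the fused SwiGLU should match
+    # silu(x) * y. A CUDA device is required because the element-wise ops above are
+    # compiled with torch.cuda.jiterator.
+    x = torch.randn(4, 8, device='cuda')
+    y = torch.randn(4, 8, device='cuda')
+    print(torch.allclose(swiglu(x, y), F.silu(x) * y, atol=1e-5))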
diff --git a/fla/modules/convolution.py b/fla/modules/convolution.py
new file mode 100644
index 0000000000000000000000000000000000000000..5a7d0228d49a14b00c886a264e6201c661d58964
--- /dev/null
+++ b/fla/modules/convolution.py
@@ -0,0 +1,353 @@
+# -*- coding: utf-8 -*-
+
+# from https://github.com/HazyResearch/zoology/blob/main/zoology/mixers/convolution.py
+
+import math
+import warnings
+from typing import Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from einops import rearrange
+
+from fla.modules.activations import ACT2FN
+from fla.utils import checkpoint
+
+try:
+ from causal_conv1d import causal_conv1d_fn, causal_conv1d_update
+except ImportError:
+ causal_conv1d_fn = None
+ causal_conv1d_update = None
+
+
+def fft_conv(u, k, dropout_mask, gelu=True, k_rev=None):
+ seqlen = u.shape[-1]
+ fft_size = 2 * seqlen
+ k_f = torch.fft.rfft(k, n=fft_size) / fft_size
+ if k_rev is not None:
+ k_rev_f = torch.fft.rfft(k_rev, n=fft_size) / fft_size
+ k_f = k_f + k_rev_f.conj()
+ u_f = torch.fft.rfft(u.to(dtype=k.dtype), n=fft_size)
+
+ if len(u.shape) > 3:
+ k_f = k_f.unsqueeze(1)
+ y = torch.fft.irfft(u_f * k_f, n=fft_size, norm="forward")[..., :seqlen]
+
+ out = y + u
+ if gelu:
+ out = F.gelu(out)
+ if dropout_mask is not None:
+ return (out * rearrange(dropout_mask, "b H -> b H 1")).to(dtype=u.dtype)
+ else:
+ return out.to(dtype=u.dtype)
+
+
+@checkpoint
+def proj_then_conv1d(
+ x: torch.Tensor,
+ proj_weight: torch.Tensor,
+ conv1d_weight: torch.Tensor,
+ conv1d_bias: Optional[torch.Tensor] = None,
+ cache: Optional[torch.Tensor] = None
+) -> torch.Tensor:
+ # We do matmul and transpose BLH -> HBL at the same time
+ x = rearrange(proj_weight @ rearrange(x, "b t d -> d (b t)"), "d (b t) -> b d t", t=x.shape[-2])
+
+ if causal_conv1d_fn is None:
+ raise ImportError("`causal_conv1d_fn` is not available. Please install `causal-conv1d` first.")
+ if cache is None:
+ x = causal_conv1d_fn(
+ x=x,
+ weight=rearrange(conv1d_weight, "d 1 w -> d w"),
+ bias=conv1d_bias,
+ activation="silu",
+ ).transpose(1, 2)
+ else:
+ assert x.shape[-1] == 1, "Only decoding one token at a time is supported for now"
+ x = x.squeeze(-1)
+ x = causal_conv1d_update(
+ x=x,
+ weight=rearrange(conv1d_weight, "d 1 w -> d w"),
+ bias=conv1d_bias,
+ cache=cache,
+ activation="silu",
+ )
+ return x
+
+
+class ShortConvolution(nn.Conv1d):
+ """
+    Simple wrapper around `nn.Conv1d` that accepts inputs with the channel dimension last.
+ """
+
+ def __init__(
+ self,
+ hidden_size: int,
+ kernel_size: int,
+ bias: bool = False,
+ activation: Optional[str] = 'silu',
+ use_fast_conv1d: Optional[bool] = True
+ ):
+ super().__init__(
+ in_channels=hidden_size,
+ out_channels=hidden_size,
+ kernel_size=kernel_size,
+ groups=hidden_size,
+ bias=bias,
+ padding=kernel_size - 1
+ )
+
+ self.hidden_size = hidden_size
+ self.activation = None
+ if activation is not None:
+ assert activation in ['silu', 'swish'], f"Activation `{activation}` not supported yet."
+ self.activation = activation
+
+ if causal_conv1d_fn is None:
+ if use_fast_conv1d:
+ raise RuntimeError(
+ "Please either install `causal-conv1d>=1.4.0` to enable fast causal short convolution CUDA kernel "
+ "or set `use_fast_conv1d` to False"
+ )
+ else:
+ warnings.warn(
+                "The naive PyTorch version is very slow in practice, "
+ "please run `pip install causal-conv1d>=1.4.0` to install fast causal short convolution CUDA kernel",
+ category=ImportWarning
+ )
+ self.use_fast_conv1d = use_fast_conv1d
+
+ def extra_repr(self):
+ s = ('{in_channels}, {out_channels}, kernel_size={kernel_size}'
+ ', stride={stride}')
+ if self.padding != (0,) * len(self.padding):
+ s += ', padding={padding}'
+ if self.dilation != (1,) * len(self.dilation):
+ s += ', dilation={dilation}'
+ if self.output_padding != (0,) * len(self.output_padding):
+ s += ', output_padding={output_padding}'
+ if self.groups != 1:
+ s += ', groups={groups}'
+ if self.bias is None:
+ s += ', bias=False'
+ if self.padding_mode != 'zeros':
+ s += ', padding_mode={padding_mode}'
+ if self.activation is not None:
+ s += ', activation={activation}'
+ if not self.use_fast_conv1d:
+ s += ', use_fast_conv1d={use_fast_conv1d}'
+ return s.format(**self.__dict__)
+
+ def forward(
+ self,
+ x: torch.Tensor,
+ mask: Optional[torch.Tensor] = None,
+ cache: Optional[torch.Tensor] = None,
+ output_final_state: bool = False
+ ) -> Tuple[torch.Tensor, torch.Tensor]:
+ """
+ Args:
+ x (`torch.Tensor`):
+ Tensor of shape `[batch_size, seq_len, hidden_size]`
+ mask (`Optional[torch.Tensor]`):
+ Attention mask dealing with padded positions.
+ cache (`Optional[torch.Tensor]`):
+ Previous cache tensor of shape `[batch_size, hidden_size, kernel_size]`.
+ If provided, the cache is updated **inplace**.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[batch_size, hidden_size, kernel_size]`. Default: `False`.
+ Returns:
+ Tensor of shape `[batch_size, seq_len, hidden_size]`.
+ """
+
+ batch_size, _, hidden_size = x.shape
+ if mask is not None:
+ x = x.mul_(mask.unsqueeze(-1))
+ if output_final_state and cache is None:
+ cache = x.new_zeros(batch_size, hidden_size, self.kernel_size[0])
+ if cache is not None and x.shape[1] == 1:
+ return self.step(x, cache)
+ x = rearrange(x, "b t d -> b d t")
+ # Update state (B D W)
+ if cache is not None:
+ cache.copy_(F.pad(x, (self.kernel_size[0] - x.shape[-1], 0)))
+ if self.use_fast_conv1d:
+ x = causal_conv1d_fn(
+ x=x,
+ weight=rearrange(self.weight, "d 1 w -> d w"),
+ bias=self.bias,
+ activation=self.activation,
+ )
+ else:
+ x = self._conv_forward(x, self.weight, self.bias)[..., :x.shape[-1]]
+ if self.activation is not None:
+ x = ACT2FN[self.activation](x)
+ return rearrange(x, "b d t -> b t d"), cache
+
+ def step(
+ self,
+ x: torch.Tensor,
+ cache: torch.Tensor
+ ):
+ assert x.shape[1] == 1, "Only decoding one token at a time is supported for now"
+
+ x = x.squeeze(1)
+ if self.use_fast_conv1d:
+ x = causal_conv1d_update(
+ x=x,
+ conv_state=cache,
+ weight=rearrange(self.weight, "d 1 w -> d w"),
+ bias=self.bias,
+ activation=self.activation,
+ )
+ else:
+ dtype = x.dtype
+ cache.copy_(torch.roll(cache, shifts=-1, dims=-1))
+ cache[:, :, -1] = x
+ x = torch.sum(cache * rearrange(self.weight, "d 1 w -> d w"), dim=-1)
+ if self.bias is not None:
+ x = x + self.bias
+ if self.activation is not None:
+ x = ACT2FN[self.activation](x).to(dtype=dtype)
+ return x.unsqueeze(1), cache
+
+ @property
+ def state_size(self) -> int:
+ # nn.Conv1d stores `kernel_size` as a tuple, so take its first element to return an int
+ return self.hidden_size * self.kernel_size[0]
+
+
+class LongConvolution(nn.Module):
+ """
+ LongConvolution applies a convolution operation on the input tensor using a fixed
+ filter of length max_len.
+ The filter is learned during training and is applied using FFT convolution.
+ Args:
+ hidden_size (int): The number of expected features in the input and output.
+ max_len (int): The maximum sequence length.
+ Returns:
+ y: [batch_size, seq_len, hidden_size] tensor
+ """
+
+ def __init__(
+ self,
+ hidden_size: int,
+ max_len: int,
+ **kwargs,
+ ):
+ """
+ Initializes the LongConvolution module.
+ Args:
+ hidden_size (int): The number of expected features in the input and output.
+ max_len (int): The maximum sequence length.
+ """
+ super().__init__()
+ self.hidden_size = hidden_size
+ self.filter = nn.Parameter(torch.randn(self.hidden_size, max_len), requires_grad=True)
+
+ def forward(self, x: torch.Tensor, *args, **kwargs):
+ """
+ Applies the LongConvolution operation on the input tensor.
+ Args:
+ x: [batch_size, seq_len, hidden_size] tensor
+ Returns:
+ y: [batch_size, seq_len, hidden_size] tensor
+ """
+ x = x.transpose(1, 2)
+ y = fft_conv(x, self.filter, dropout_mask=None, gelu=False)
+ y = y.transpose(1, 2)
+ return y.to(dtype=x.dtype)
+
+
+class PositionalEmbedding(nn.Module):
+ def __init__(self, emb_dim: int, seq_len: int, **kwargs):
+ """Complex exponential positional embeddings for implicit long convolution filters."""
+ super().__init__()
+
+ self.seq_len = seq_len
+ # The time embedding fed to the filters is normalized so that t_f = 1
+ t = torch.linspace(0, 1, self.seq_len)[None, :, None] # 1, L, 1
+
+ if emb_dim > 1:
+ bands = (emb_dim - 1) // 2
+ # To compute the right embeddings we use the "proper" linspace
+ t_rescaled = torch.linspace(0, seq_len - 1, seq_len)[None, :, None]
+ w = 2 * math.pi * t_rescaled / seq_len # 1, L, 1
+
+ f = torch.linspace(1e-4, bands - 1, bands)[None, None]
+ z = torch.exp(-1j * f * w)
+ z = torch.cat([t, z.real, z.imag], dim=-1)
+ self.z = nn.Parameter(z, requires_grad=False)
+
+ def forward(self, L):
+ return self.z[:, :L]
+
+
+class ImplicitLongConvolution(nn.Module):
+ """
+ Long convolution with implicit filter parameterized by an MLP.
+
+ Args:
+ hidden_size (int):
+ The number of expected features in the input and output.
+ max_len (int):
+ The maximum sequence length.
+ d_emb (Optional[int]):
+ The dimension of the positional embeddings. Must be odd and greater or equal to 3 (time, sine and cosine).
+ Defaults to 3.
+ d_hidden (Optional[int]):
+ The number of features in the hidden layer of the MLP. Defaults to 16.
+
+ Attributes:
+ pos_emb (`PositionalEmbedding`): The positional embedding layer.
+ mlp (`nn.Sequential`): The MLP that parameterizes the implicit filter.
+
+ """
+
+ def __init__(
+ self,
+ hidden_size: int,
+ max_len: int,
+ d_emb: int = 3,
+ d_hidden: int = 16,
+ **kwargs,
+ ):
+ """
+        Long convolution with an implicit filter parameterized by an MLP.
+        """
+ super().__init__()
+ self.hidden_size = hidden_size
+ self.d_emb = d_emb
+
+ assert (
+ d_emb % 2 != 0 and d_emb >= 3
+ ), "d_emb must be odd and greater or equal to 3 (time, sine and cosine)"
+ self.pos_emb = PositionalEmbedding(d_emb, max_len)
+
+ # final linear layer
+ self.mlp = nn.Sequential(
+ nn.Linear(d_emb, d_hidden),
+ torch.nn.ReLU(),
+ nn.Linear(d_hidden, hidden_size),
+ )
+
+ def filter(self, seq_len: int, *args, **kwargs):
+ k = self.mlp(self.pos_emb(seq_len))
+
+ return k.transpose(1, 2)
+
+ def forward(self, x: torch.Tensor, *args, **kwargs):
+ """
+ Args:
+ x: [batch_size, seq_len, hidden_size] tensor
+ Returns:
+ y: [batch_size, seq_len, hidden_size] tensor
+ """
+ x = x.transpose(1, 2)
+ k = self.filter(x.shape[-1])
+ y = fft_conv(x, k, dropout_mask=None, gelu=False)
+
+ y = y.transpose(1, 2)
+ return y.to(dtype=x.dtype)
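+
+
+if __name__ == "__main__":
+    # Illustrative smoke test (assumes the `fla` package and its Triton dependency are
+    # installed; the FFT path itself is plain PyTorch, so a GPU is not strictly needed).
+    x = torch.randn(2, 64, 32)  # [batch_size, seq_len, hidden_size]
+    conv = LongConvolution(hidden_size=32, max_len=64)
+    print(conv(x).shape)  # expected: torch.Size([2, 64, 32])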
diff --git a/fla/modules/feature_map.py b/fla/modules/feature_map.py
new file mode 100644
index 0000000000000000000000000000000000000000..6af81e74d3975f67b8df23c1dfa60cd01b5a4950
--- /dev/null
+++ b/fla/modules/feature_map.py
@@ -0,0 +1,300 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import math
+from typing import Optional
+
+import torch
+import torch.nn.functional as F
+from torch import nn
+
+from fla.modules.activations import fast_gelu_impl, sigmoid, sqrelu, swish
+from fla.modules.layernorm import layer_norm
+from fla.utils import checkpoint
+
+
+@checkpoint
+def flatten_diag_outer_product(x, y):
+ z = torch.einsum("...i,...j->...ij", x, y)
+ N = z.size(-1)
+ indices = torch.triu_indices(N, N)
+ return z[..., indices[0], indices[1]]
+
+
+@checkpoint
+def flatten_diag_outer_product_off1(x, y):
+ z = torch.einsum("...i,...j->...ij", x, y)
+ N = z.size(-1)
+ indices = torch.triu_indices(N, N, 1)
+ indices2 = torch.arange(0, N)
+ return z[..., indices[0], indices[1]], z[..., indices2, indices2]
+
+
+def is_power_of_2(n):
+ return (n & (n - 1) == 0) and n != 0
+
+
+class HedgehogFeatureMap(nn.Module):
+
+ r"""
+ Hedgehog feature map as introduced in
+ `The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry `_
+ """
+
+ def __init__(
+ self,
+ head_dim: int
+ ) -> HedgehogFeatureMap:
+ super().__init__()
+ # Trainable map
+ self.layer = nn.Linear(head_dim, head_dim)
+ self.init_weights_()
+
+ def init_weights_(self):
+ """Initialize trainable map as identity"""
+ with torch.no_grad():
+ identity = torch.eye(*self.layer.weight.shape[-2:], dtype=torch.float)
+ self.layer.weight.copy_(identity.to(self.layer.weight))
+ nn.init.zeros_(self.layer.bias)
+
+ def forward(self, x: torch.Tensor):
+ x = self.layer(x) # shape b, h, l, d
+ return torch.cat([2*x, -2*x], dim=-1).softmax(-1)
+
+
+class T2RFeatureMap(nn.Module):
+
+ r"""
+ Simple linear mapping feature map as in
+ `Finetuning Pretrained Transformers into RNNs `_
+ """
+
+ def __init__(
+ self,
+ head_dim: int,
+ dot_dim: int = None,
+ bias: Optional[bool] = False
+ ) -> T2RFeatureMap:
+ super().__init__()
+ # Trainable map
+ if dot_dim is None:
+ dot_dim = head_dim
+
+ self.head_dim = head_dim
+ self.dot_dim = dot_dim
+ self.bias = bias
+
+ self.layer = nn.Linear(head_dim, dot_dim, bias=bias)
+
+ def __repr__(self) -> str:
+ return f"{self.__class__.__name__}(head_dim={self.head_dim}, dot_dim={self.dot_dim}, bias={self.bias})"
+
+ def forward(self, x: torch.Tensor):
+ return self.layer(x).relu()
+
+
+class DPFPFeatureMap(nn.Module):
+
+ r"""
+ Deterministic Parameter-Free Projection (DPFP) feature map in
+    `Linear Transformers Are Secretly Fast Weight Programmers <https://arxiv.org/abs/2102.11174>`_
+ """
+
+ def __init__(
+ self,
+ head_dim: int,
+ nu: int = 4
+ ) -> DPFPFeatureMap:
+ super().__init__()
+ self.nu = nu
+
+ def forward(self, x: torch.Tensor):
+ x = torch.cat([x.relu(), -x.relu()], dim=-1)
+ x_rolled = torch.cat([x.roll(shifts=j, dims=-1) for j in range(1, self.nu+1)], dim=-1)
+ x_repeat = torch.cat([x] * self.nu, dim=-1)
+ return x_repeat * x_rolled
+
+
+class HadamardFeatureMap(nn.Module):
+ def __init__(
+ self,
+ head_dim: int
+ ) -> HadamardFeatureMap:
+ super().__init__()
+ # Trainable map
+ self.layer1 = nn.Linear(head_dim, head_dim)
+ self.layer2 = nn.Linear(head_dim, head_dim)
+
+ def forward(self, x: torch.Tensor):
+ return self.layer1(x) * self.layer2(x)
+
+
+class LearnableOuterProductFeatureMap(nn.Module):
+ def __init__(
+ self,
+ head_dim: int,
+ feature_dim: int
+ ) -> LearnableOuterProductFeatureMap:
+ super().__init__()
+ # Trainable map
+ self.layer1 = nn.Linear(head_dim, feature_dim, bias=False)
+ self.layer2 = nn.Linear(head_dim, feature_dim, bias=False)
+ self.normalizer = feature_dim ** -0.5
+
+ def forward(self, x: torch.Tensor):
+ return flatten_diag_outer_product(self.layer1(x), self.layer2(x))
+
+
+class LearnablePolySketchNonNegativeFeatureMap(nn.Module):
+
+ def __init__(
+ self,
+ head_dim: int,
+ sketch_size: Optional[int] = None,
+ degree: Optional[int] = 2
+ ) -> LearnablePolySketchNonNegativeFeatureMap:
+ super().__init__()
+
+ assert is_power_of_2(degree) and degree >= 2, f"The degree {degree} must be a power of 2"
+
+ self.head_dim = head_dim
+ self.sketch_size = sketch_size if sketch_size is not None else head_dim
+ self.degree = degree
+
+ self.gamma = nn.Parameter(torch.ones(head_dim))
+ self.beta = nn.Parameter(torch.zeros(head_dim))
+ # NOTE: the sketch layers defined here are quite different from those in the original paper;
+ # currently we simply use linear layers without any non-linear activations
+ self.sketches1 = nn.ModuleList([
+ nn.Linear(head_dim, sketch_size, bias=False),
+ *[nn.Linear(sketch_size, sketch_size, bias=False) for _ in range(int(math.log2(self.degree)) - 2)]
+ ])
+ self.sketches2 = nn.ModuleList([
+ nn.Linear(head_dim, sketch_size, bias=False),
+ *[nn.Linear(sketch_size, sketch_size, bias=False) for _ in range(int(math.log2(self.degree)) - 2)]
+ ])
+
+ def forward(self, x: torch.Tensor):
+ # Section 2.1
+ x = layer_norm(x, self.gamma, self.beta)
+ # first map the input to sketch size with learnable parameters
+ x = self.sketches1[0](x) * self.sketches2[0](x) * self.head_dim ** -0.5
+ for i in range(1, int(math.log2(self.degree)) - 1):
+ x = self.sketches1[i](x) * self.sketches2[i](x) * self.head_dim ** -0.5
+ # do sketch mapping for log2(p) - 1 times in total
+ # do p=2 mapping to ensure non-negativity
+ return flatten_diag_outer_product(x, x)
+
+
+class TaylorFeatureMap(nn.Module):
+ def __init__(
+ self,
+ head_dim: int
+ ) -> TaylorFeatureMap:
+ super().__init__()
+ self.head_dim = head_dim
+ self.r2 = math.sqrt(2)
+ self.rd = math.sqrt(self.head_dim)
+ self.rrd = math.sqrt(self.rd)
+
+ def forward(self, x: torch.Tensor):
+ x2_1, x2_2 = flatten_diag_outer_product_off1(x, x)
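+ # with these scalings, phi(q) . phi(k) = 1 + q.k / sqrt(d) + (q.k)^2 / (2d),
+ # i.e. the second-order Taylor expansion of exp(q.k / sqrt(d))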
+ return torch.cat([torch.ones_like(x[..., 0:1]), x / self.rrd, x2_2 / (self.rd * self.r2), x2_1 / self.rd], dim=-1)
+
+
+class RebasedFeatureMap(nn.Module):
+
+ def __init__(
+ self,
+ head_dim: int,
+ use_gamma: Optional[bool] = True,
+ use_beta: Optional[bool] = True,
+ normalize: Optional[bool] = True
+ ) -> RebasedFeatureMap:
+ super().__init__()
+
+ self.head_dim = head_dim
+ self.use_gamma = use_gamma
+ self.use_beta = use_beta
+ self.normalize = normalize
+
+ self.gamma = None
+ self.beta = None
+ if use_gamma:
+ self.gamma = nn.Parameter(torch.ones(head_dim))
+ if use_beta:
+ self.beta = nn.Parameter(torch.zeros(head_dim))
+
+ def forward(self, x: torch.Tensor, flatten: Optional[bool] = True):
+ if self.use_beta and self.use_gamma and self.normalize:
+ x = layer_norm(x, self.gamma, self.beta)
+ elif self.normalize:
+ x = F.layer_norm(x, (self.head_dim,), self.gamma, self.beta)
+ elif self.use_gamma and self.use_beta:
+ x = torch.addcmul(self.beta, x, self.gamma)
+ elif self.use_gamma:
+ x = x.mul(self.gamma)
+ else:
+ raise RuntimeError(f"Unsupported combination of `use_gamma`, `use_beta` and `normalize`, "
+ f"which is currently set to (`{self.use_gamma}`, `{self.use_beta}`, `{self.normalize}`)")
+ if not flatten:
+ return x
+ x2_1, x2_2 = flatten_diag_outer_product_off1(x, x)
+ # rebased use learnable parameters to approximate any quadratic function
+ return torch.cat([x2_2 * self.head_dim ** -0.5, x2_1 * (2 / self.head_dim) ** 0.5], dim=-1)
+
+
+class ReLUFeatureMap(nn.Module):
+
+ def __init__(
+ self,
+ ) -> ReLUFeatureMap:
+ super().__init__()
+
+ def forward(self, x: torch.Tensor):
+ return F.relu(x)
+
+
+class SquaredReLUFeatureMap(nn.Module):
+
+ def __init__(
+ self,
+ ) -> SquaredReLUFeatureMap:
+ super().__init__()
+
+ def forward(self, x: torch.Tensor):
+ return sqrelu(x)
+
+
+class GELUFeatureMap(nn.Module):
+
+ def __init__(
+ self,
+ ) -> GELUFeatureMap:
+ super().__init__()
+
+ def forward(self, x: torch.Tensor):
+ return fast_gelu_impl(x)
+
+
+class SwishFeatureMap(nn.Module):
+
+ def __init__(
+ self,
+ ) -> SwishFeatureMap:
+ super().__init__()
+
+ def forward(self, x: torch.Tensor):
+ return swish(x)
+
+
+class SigmoidFeatureMap(nn.Module):
+
+ def __init__(
+ self,
+ ) -> SigmoidFeatureMap:
+ super().__init__()
+
+ def forward(self, x: torch.Tensor):
+ return sigmoid(x)
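+
+
+if __name__ == "__main__":
+    # Illustrative check: the Hedgehog map doubles the feature dimension and returns a
+    # non-negative vector that sums to one along the last axis (it is a softmax).
+    phi = HedgehogFeatureMap(head_dim=16)
+    x = torch.randn(2, 4, 8, 16)  # [batch_size, num_heads, seq_len, head_dim]
+    y = phi(x)
+    print(y.shape, torch.allclose(y.sum(-1), torch.ones_like(y.sum(-1))))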
diff --git a/fla/modules/fused_bitlinear.py b/fla/modules/fused_bitlinear.py
new file mode 100644
index 0000000000000000000000000000000000000000..341abec4e0a1ad926103e5c91f91bede0379d215
--- /dev/null
+++ b/fla/modules/fused_bitlinear.py
@@ -0,0 +1,625 @@
+# -*- coding: utf-8 -*-
+
+# Implementations of BitLinear layer with fused LayerNorm and quantized Linear layer.
+# [The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits](https://arxiv.org/abs/2402.17764)
+# [Scalable MatMul-free Language Modeling](https://arxiv.org/abs/2406.02528)
+
+# Code adapted from https://github.com/ridgerchu/matmulfreellm/
+
+from __future__ import annotations
+
+import math
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import triton
+import triton.language as tl
+
+from fla.modules.layernorm import RMSNorm
+from fla.utils import contiguous
+
+
+def activation_quant(x):
+ """
+ Per-token quantization to 8 bits. No grouping is needed for quantization.
+
+ Args:
+ x: An activation tensor with shape [n, d].
+
+ Returns:
+ A quantized activation tensor with shape [n, d].
+ """
+ # Compute the scale factor
+ scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp_(min=1e-5)
+ # Quantize and then de-quantize the tensor
+ y = (x * scale).round().clamp_(-128, 127) / scale
+ return y
+
+
+def weight_quant(w):
+ """
+ Per-tensor quantization to 1.58 bits. No grouping is needed for quantization.
+
+ Args:
+ w: A weight tensor with shape [d, k].
+
+ Returns:
+ A quantized weight tensor with shape [d, k].
+ """
+ # Compute the scale factor
+ scale = 1.0 / w.abs().mean().clamp_(min=1e-5)
+ # Quantize and then de-quantize the tensor
+ u = (w * scale).round().clamp_(-1, 1) / scale
+ return u
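+
+# Worked example of the two quantizers above (values are approximate):
+# for w = [[-0.4, 0.2], [0.9, -0.1]], mean(|w|) = 0.4 gives scale = 2.5, so
+# weight_quant rounds w * 2.5 to the ternary set {-1, 0, 1} and rescales back,
+# yielding roughly [[-0.4, 0.0], [0.4, 0.0]]; activation_quant likewise rounds
+# each token's activations to 256 levels (8 bits) scaled by its own max value.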
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32),
+ ],
+ key=["N", "HAS_RESIDUAL", "STORE_RESIDUAL_OUT", "IS_RMS_NORM", "HAS_BIAS"],
+)
+# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None})
+# @triton.heuristics({"HAS_RESIDUAL": lambda args: args["RESIDUAL"] is not None})
+@triton.jit
+def _layer_norm_fwd_quant_kernel(
+ X, # pointer to the input
+ Y, # pointer to the output
+ W, # pointer to the weights
+ B, # pointer to the biases
+ RESIDUAL, # pointer to the residual
+ RESIDUAL_OUT, # pointer to the residual
+ Mean, # pointer to the mean
+ Rstd, # pointer to the 1/std
+ stride_x_row, # how much to increase the pointer when moving by 1 row
+ stride_y_row,
+ stride_res_row,
+ stride_res_out_row,
+ N, # number of columns in X
+ eps, # epsilon to avoid division by zero
+ IS_RMS_NORM: tl.constexpr,
+ BLOCK_N: tl.constexpr,
+ HAS_RESIDUAL: tl.constexpr,
+ STORE_RESIDUAL_OUT: tl.constexpr,
+ HAS_WEIGHT: tl.constexpr,
+ HAS_BIAS: tl.constexpr
+):
+ # Map the program id to the row of X and Y it should compute.
+ row = tl.program_id(0)
+ X += row * stride_x_row
+ Y += row * stride_y_row
+ if HAS_RESIDUAL:
+ RESIDUAL += row * stride_res_row
+ if STORE_RESIDUAL_OUT:
+ RESIDUAL_OUT += row * stride_res_out_row
+ # Compute mean and variance
+ cols = tl.arange(0, BLOCK_N)
+ x = tl.load(X + cols, mask=cols < N, other=0.0).to(tl.float32)
+ if HAS_RESIDUAL:
+ residual = tl.load(RESIDUAL + cols, mask=cols < N, other=0.0).to(tl.float32)
+ x += residual
+ if STORE_RESIDUAL_OUT:
+ tl.store(RESIDUAL_OUT + cols, x, mask=cols < N)
+ if not IS_RMS_NORM:
+ mean = tl.sum(x, axis=0) / N
+ tl.store(Mean + row, mean)
+ xbar = tl.where(cols < N, x - mean, 0.0)
+ var = tl.sum(xbar * xbar, axis=0) / N
+ else:
+ xbar = tl.where(cols < N, x, 0.0)
+ var = tl.sum(xbar * xbar, axis=0) / N
+ rstd = 1 / tl.sqrt(var + eps)
+ tl.store(Rstd + row, rstd)
+ # Normalize and apply linear transformation
+ mask = cols < N
+ if HAS_WEIGHT:
+ w = tl.load(W + cols, mask=mask).to(tl.float32)
+ if HAS_BIAS:
+ b = tl.load(B + cols, mask=mask).to(tl.float32)
+ x_hat = (x - mean) * rstd if not IS_RMS_NORM else x * rstd
+
+ y = x_hat * w if HAS_WEIGHT else x_hat
+ if HAS_BIAS:
+ y = y + b
+
+    # Apply quantization to the output
+ scale = 127.0 / tl.maximum(tl.max(tl.abs(y), 0), 1e-5)
+ # Quantize and then de-quantize the tensor
+ y = tl.math.round(y * scale)
+ y = tl.maximum(tl.minimum(y, 127), -128) / scale
+
+ # Write output
+ tl.store(Y + cols, y, mask=mask)
+
+
+def _layer_norm_fwd_quant(
+ x, weight, bias, eps, residual=None, out_dtype=None, residual_dtype=None, is_rms_norm=False
+):
+ if residual is not None:
+ residual_dtype = residual.dtype
+ M, N = x.shape
+ # allocate output
+ y = torch.empty_like(x, dtype=x.dtype if out_dtype is None else out_dtype)
+ if residual is not None or (residual_dtype is not None and residual_dtype != x.dtype):
+ residual_out = torch.empty(M, N, device=x.device, dtype=residual_dtype)
+ else:
+ residual_out = None
+ mean = torch.empty((M,), dtype=torch.float32, device="cuda") if not is_rms_norm else None
+ rstd = torch.empty((M,), dtype=torch.float32, device="cuda")
+ # Less than 64KB per feature: enqueue fused kernel
+ MAX_FUSED_SIZE = 65536 // x.element_size()
+ BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))
+ if N > BLOCK_N:
+ raise RuntimeError("This layer norm doesn't support feature dim >= 64KB.")
+ # heuristics for number of warps
+ with torch.cuda.device(x.device.index):
+ _layer_norm_fwd_quant_kernel[(M,)](
+ x,
+ y,
+ weight,
+ bias,
+ residual,
+ residual_out,
+ mean,
+ rstd,
+ x.stride(0),
+ y.stride(0),
+ residual.stride(0) if residual is not None else 0,
+ residual_out.stride(0) if residual_out is not None else 0,
+ N,
+ eps,
+ is_rms_norm,
+ BLOCK_N,
+ residual is not None,
+ residual_out is not None,
+ weight is not None,
+ bias is not None,
+ )
+ # residual_out is None if residual is None and residual_dtype == input_dtype
+ return y, mean, rstd, residual_out if residual_out is not None else x
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32),
+ ],
+ key=["N", "HAS_DRESIDUAL", "STORE_DRESIDUAL", "IS_RMS_NORM", "HAS_BIAS"],
+)
+# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None})
+# @triton.heuristics({"HAS_DRESIDUAL": lambda args: args["DRESIDUAL"] is not None})
+# @triton.heuristics({"STORE_DRESIDUAL": lambda args: args["DRESIDUAL_IN"] is not None})
+@triton.heuristics({"RECOMPUTE_OUTPUT": lambda args: args["Y"] is not None})
+@triton.jit
+def _layer_norm_bwd_kernel(
+ X, # pointer to the input
+ W, # pointer to the weights
+ B, # pointer to the biases
+ Y, # pointer to the output to be recomputed
+ DY, # pointer to the output gradient
+ DX, # pointer to the input gradient
+ DW, # pointer to the partial sum of weights gradient
+ DB, # pointer to the partial sum of biases gradient
+ DRESIDUAL,
+ DRESIDUAL_IN,
+ Mean, # pointer to the mean
+ Rstd, # pointer to the 1/std
+ stride_x_row, # how much to increase the pointer when moving by 1 row
+ stride_y_row,
+ stride_dy_row,
+ stride_dx_row,
+ stride_dres_row,
+ stride_dres_in_row,
+ M, # number of rows in X
+ N, # number of columns in X
+ eps, # epsilon to avoid division by zero
+ rows_per_program,
+ IS_RMS_NORM: tl.constexpr,
+ BLOCK_N: tl.constexpr,
+ HAS_DRESIDUAL: tl.constexpr,
+ STORE_DRESIDUAL: tl.constexpr,
+ HAS_WEIGHT: tl.constexpr,
+ HAS_BIAS: tl.constexpr,
+ RECOMPUTE_OUTPUT: tl.constexpr,
+):
+ # Map the program id to the elements of X, DX, and DY it should compute.
+ row_block_id = tl.program_id(0)
+ row_start = row_block_id * rows_per_program
+ cols = tl.arange(0, BLOCK_N)
+ mask = cols < N
+ X += row_start * stride_x_row
+ if HAS_DRESIDUAL:
+ DRESIDUAL += row_start * stride_dres_row
+ if STORE_DRESIDUAL:
+ DRESIDUAL_IN += row_start * stride_dres_in_row
+ DY += row_start * stride_dy_row
+ DX += row_start * stride_dx_row
+ if RECOMPUTE_OUTPUT:
+ Y += row_start * stride_y_row
+ if HAS_WEIGHT:
+ w = tl.load(W + cols, mask=mask).to(tl.float32)
+ dw = tl.zeros((BLOCK_N,), dtype=tl.float32)
+ if RECOMPUTE_OUTPUT and HAS_BIAS:
+ b = tl.load(B + cols, mask=mask, other=0.0).to(tl.float32)
+ if HAS_BIAS:
+ db = tl.zeros((BLOCK_N,), dtype=tl.float32)
+ row_end = min((row_block_id + 1) * rows_per_program, M)
+ for row in range(row_start, row_end):
+ # Load data to SRAM
+ x = tl.load(X + cols, mask=mask, other=0).to(tl.float32)
+ dy = tl.load(DY + cols, mask=mask, other=0).to(tl.float32)
+ if not IS_RMS_NORM:
+ mean = tl.load(Mean + row)
+ rstd = tl.load(Rstd + row)
+ # Compute dx
+ xhat = (x - mean) * rstd if not IS_RMS_NORM else x * rstd
+ xhat = tl.where(mask, xhat, 0.0)
+ if RECOMPUTE_OUTPUT:
+ y = xhat * w if HAS_WEIGHT else xhat
+ if HAS_BIAS:
+ y = y + b
+
+            # Apply quantization to the output
+ scale = 127.0 / tl.maximum(tl.max(tl.abs(y), 0), 1e-5)
+ # Quantize and then de-quantize the tensor
+ y = tl.math.round(y * scale)
+ y = tl.maximum(tl.minimum(y, 127), -128) / scale
+
+ tl.store(Y + cols, y, mask=mask)
+ wdy = dy
+ if HAS_WEIGHT:
+ wdy = dy * w
+ dw += dy * xhat
+ if HAS_BIAS:
+ db += dy
+ if not IS_RMS_NORM:
+ c1 = tl.sum(xhat * wdy, axis=0) / N
+ c2 = tl.sum(wdy, axis=0) / N
+ dx = (wdy - (xhat * c1 + c2)) * rstd
+ else:
+ c1 = tl.sum(xhat * wdy, axis=0) / N
+ dx = (wdy - xhat * c1) * rstd
+ if HAS_DRESIDUAL:
+ dres = tl.load(DRESIDUAL + cols, mask=mask, other=0).to(tl.float32)
+ dx += dres
+ # Write dx
+ if STORE_DRESIDUAL:
+ tl.store(DRESIDUAL_IN + cols, dx, mask=mask)
+ tl.store(DX + cols, dx, mask=mask)
+
+ X += stride_x_row
+ if HAS_DRESIDUAL:
+ DRESIDUAL += stride_dres_row
+ if STORE_DRESIDUAL:
+ DRESIDUAL_IN += stride_dres_in_row
+ if RECOMPUTE_OUTPUT:
+ Y += stride_y_row
+ DY += stride_dy_row
+ DX += stride_dx_row
+ if HAS_WEIGHT:
+ tl.store(DW + row_block_id * N + cols, dw, mask=mask)
+ if HAS_BIAS:
+ tl.store(DB + row_block_id * N + cols, db, mask=mask)
+
+
+def _layer_norm_bwd(
+ dy,
+ x,
+ weight,
+ bias,
+ eps,
+ mean,
+ rstd,
+ dresidual=None,
+ has_residual=False,
+ is_rms_norm=False,
+ x_dtype=None,
+ recompute_output=False,
+):
+ M, N = x.shape
+ # allocate output
+ dx = torch.empty_like(x) if x_dtype is None else torch.empty(M, N, dtype=x_dtype, device=x.device)
+ dresidual_in = torch.empty_like(x) if has_residual and dx.dtype != x.dtype else None
+ y = torch.empty(M, N, dtype=dy.dtype, device=dy.device) if recompute_output else None
+
+ # Less than 64KB per feature: enqueue fused kernel
+ MAX_FUSED_SIZE = 65536 // x.element_size()
+ BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))
+ if N > BLOCK_N:
+ raise RuntimeError("This layer norm doesn't support feature dim >= 64KB.")
+ sm_count = torch.cuda.get_device_properties(x.device).multi_processor_count
+ _dw = torch.empty((sm_count, N), dtype=torch.float32, device=weight.device) if weight is not None else None
+ _db = torch.empty((sm_count, N), dtype=torch.float32, device=bias.device) if bias is not None else None
+ rows_per_program = math.ceil(M / sm_count)
+ grid = (sm_count,)
+ with torch.cuda.device(x.device.index):
+ _layer_norm_bwd_kernel[grid](
+ x,
+ weight,
+ bias,
+ y,
+ dy,
+ dx,
+ _dw,
+ _db,
+ dresidual,
+ dresidual_in,
+ mean,
+ rstd,
+ x.stride(0),
+ 0 if not recompute_output else y.stride(0),
+ dy.stride(0),
+ dx.stride(0),
+ dresidual.stride(0) if dresidual is not None else 0,
+ dresidual_in.stride(0) if dresidual_in is not None else 0,
+ M,
+ N,
+ eps,
+ rows_per_program,
+ is_rms_norm,
+ BLOCK_N,
+ dresidual is not None,
+ dresidual_in is not None,
+ weight is not None,
+ bias is not None,
+ )
+ dw = _dw.sum(0).to(weight.dtype) if weight is not None else None
+ db = _db.sum(0).to(bias.dtype) if bias is not None else None
+ # Don't need to compute dresidual_in separately in this case
+ if has_residual and dx.dtype == x.dtype:
+ dresidual_in = dx
+ return (dx, dw, db, dresidual_in) if not recompute_output else (dx, dw, db, dresidual_in, y)
+
+
+class LayerNormLinearQuantFn(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ x,
+ norm_weight,
+ norm_bias,
+ linear_weight,
+ linear_bias,
+ residual=None,
+ eps=1e-6,
+ prenorm=False,
+ residual_in_fp32=False,
+ is_rms_norm=False,
+ ):
+ x_shape_og = x.shape
+ # reshape input data into 2D tensor
+ x = x.reshape(-1, x.shape[-1])
+ if residual is not None:
+ assert residual.shape == x_shape_og
+ residual = residual.reshape(-1, residual.shape[-1])
+ residual_dtype = residual.dtype if residual is not None else (torch.float32 if residual_in_fp32 else None)
+ y, mean, rstd, residual_out = _layer_norm_fwd_quant(
+ x,
+ norm_weight,
+ norm_bias,
+ eps,
+ residual,
+ out_dtype=None if not torch.is_autocast_enabled() else torch.get_autocast_gpu_dtype(),
+ residual_dtype=residual_dtype,
+ is_rms_norm=is_rms_norm,
+ )
+ y = y.reshape(x_shape_og)
+ dtype = torch.get_autocast_gpu_dtype() if torch.is_autocast_enabled() else y.dtype
+ linear_weight = weight_quant(linear_weight).to(dtype)
+ linear_bias = linear_bias.to(dtype) if linear_bias is not None else None
+ out = F.linear(y.to(linear_weight.dtype), linear_weight, linear_bias)
+ # We don't store y, will be recomputed in the backward pass to save memory
+ ctx.save_for_backward(residual_out, norm_weight, norm_bias, linear_weight, mean, rstd)
+ ctx.x_shape_og = x_shape_og
+ ctx.eps = eps
+ ctx.is_rms_norm = is_rms_norm
+ ctx.has_residual = residual is not None
+ ctx.prenorm = prenorm
+ ctx.x_dtype = x.dtype
+ ctx.linear_bias_is_none = linear_bias is None
+ return out if not prenorm else (out, residual_out.reshape(x_shape_og))
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, dout, *args):
+ x, norm_weight, norm_bias, linear_weight, mean, rstd = ctx.saved_tensors
+ dout = dout.reshape(-1, dout.shape[-1])
+ dy = F.linear(dout, linear_weight.t())
+ dlinear_bias = None if ctx.linear_bias_is_none else dout.sum(0)
+ assert dy.shape == x.shape
+ if ctx.prenorm:
+ dresidual = args[0]
+ dresidual = dresidual.reshape(-1, dresidual.shape[-1])
+ assert dresidual.shape == x.shape
+ else:
+ dresidual = None
+ dx, dnorm_weight, dnorm_bias, dresidual_in, y = _layer_norm_bwd(
+ dy,
+ x,
+ norm_weight,
+ norm_bias,
+ ctx.eps,
+ mean,
+ rstd,
+ dresidual,
+ ctx.has_residual,
+ ctx.is_rms_norm,
+ x_dtype=ctx.x_dtype,
+ recompute_output=True
+ )
+ dlinear_weight = torch.einsum("bo,bi->oi", dout, y)
+ return (
+ dx.reshape(ctx.x_shape_og),
+ dnorm_weight,
+ dnorm_bias,
+ dlinear_weight,
+ dlinear_bias,
+ dresidual_in.reshape(ctx.x_shape_og) if ctx.has_residual else None,
+ None,
+ None,
+ None,
+ None,
+ )
+
+
+def layer_norm_linear_quant_fn(
+ x,
+ norm_weight,
+ norm_bias,
+ linear_weight,
+ linear_bias,
+ residual=None,
+ eps=1e-6,
+ prenorm=False,
+ residual_in_fp32=False,
+ is_rms_norm=False,
+):
+ return LayerNormLinearQuantFn.apply(
+ x,
+ norm_weight,
+ norm_bias,
+ linear_weight,
+ linear_bias,
+ residual,
+ eps,
+ prenorm,
+ residual_in_fp32,
+ is_rms_norm,
+ )
+
+
+def rms_norm_linear_quant(
+ x: torch.Tensor,
+ norm_weight: torch.Tensor,
+ norm_bias: torch.Tensor,
+ linear_weight: torch.Tensor,
+ linear_bias: torch.Tensor,
+ residual: torch.Tensor = None,
+ eps: float = 1e-5,
+ prenorm: bool = False,
+ residual_in_fp32: bool = False
+):
+ return layer_norm_linear_quant_fn(
+ x=x,
+ norm_weight=norm_weight,
+ norm_bias=norm_bias,
+ linear_weight=linear_weight,
+ linear_bias=linear_bias,
+ residual=residual,
+ eps=eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32,
+ is_rms_norm=True
+ )
+
+
+def bit_linear(x, weight, bias=None, norm_weight=None, norm_bias=None, eps=1e-8):
+ """
+ A functional version of BitLinear that applies quantization to activations and weights.
+
+ Args:
+ x: Input tensor with shape [n, d].
+ weight: Weight tensor with shape [out_features, in_features].
+ bias: Bias tensor with shape [out_features] (optional).
+ norm_weight: Weight tensor for RMS normalization with shape [in_features].
+ norm_bias: Bias tensor for RMS normalization with shape [in_features].
+ eps: A small constant for numerical stability in normalization.
+
+ Returns:
+ Output tensor with shape [n, out_features].
+ """
+ return layer_norm_linear_quant_fn(
+ x,
+ norm_weight,
+ norm_bias,
+ weight,
+ bias,
+ is_rms_norm=True
+ )
+
+
+class BitLinear(nn.Linear):
+ """
+ A custom linear layer that applies quantization on both activations and weights.
+ This is primarily for training; kernel optimization is needed for efficiency in deployment.
+ """
+
+ def __init__(self, in_features, out_features, bias=False):
+ """
+ Initializes the BitLinear layer.
+
+ Args:
+ in_features: Size of each input sample.
+ out_features: Size of each output sample.
+ bias: If set to False, the layer will not learn an additive bias. Default: False.
+ """
+ # Initialize the superclass nn.Linear with the given parameters
+ super(BitLinear, self).__init__(in_features, out_features, bias=bias)
+
+ self.norm = RMSNorm(in_features, eps=1e-8)
+
+ def forward(self, x):
+ """
+ Overrides the forward pass to include quantization.
+
+ Args:
+ x: An input tensor with shape [n, d].
+
+ Returns:
+ An output tensor with shape [n, d].
+ """
+ # Weight tensor
+ w = self.weight
+
+ # Apply RMS normalization to the input
+ x_norm = self.norm(x)
+
+ # Apply quantization to both activations and weights
+ # Uses Straight-Through Estimator (STE) trick with .detach() for gradient flow
+ x_quant = x_norm + (activation_quant(x_norm) - x_norm).detach()
+ w_quant = w + (weight_quant(w) - w).detach()
+ # Perform linear operation with quantized values
+ y = F.linear(x_quant, w_quant)
+
+ return y
+
+
+class FusedBitLinear(BitLinear):
+ """
+ A custom linear layer that applies quantization on both activations and weights.
+ This is primarily for training; kernel optimization is needed for efficiency in deployment.
+ """
+
+ def __init__(self, in_features, out_features, bias=False):
+ """
+ Initializes the BitLinear layer.
+
+ Args:
+ in_features: Size of each input sample.
+ out_features: Size of each output sample.
+ bias: If set to False, the layer will not learn an additive bias. Default: False.
+ """
+ # Initialize the superclass nn.Linear with the given parameters
+ super(FusedBitLinear, self).__init__(in_features, out_features, bias=bias)
+
+ def forward(self, x):
+ return layer_norm_linear_quant_fn(
+ x,
+ self.norm.weight,
+ self.norm.bias,
+ self.weight,
+ self.bias,
+ is_rms_norm=True
+ )
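+
+
+# A minimal usage sketch, assuming a CUDA device (the fused path relies on Triton kernels);
+# the shapes below are arbitrary and purely illustrative:
+#
+#   layer = FusedBitLinear(512, 1024, bias=False).cuda()
+#   x = torch.randn(2, 16, 512, device="cuda")
+#   y = layer(x)  # fused RMSNorm + activation/weight fake-quantization + linear
+#   y.sum().backward()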
diff --git a/fla/modules/fused_cross_entropy.py b/fla/modules/fused_cross_entropy.py
new file mode 100644
index 0000000000000000000000000000000000000000..b87c1f6ecd20f1f8dd633cb2663f7bdb66e5e79c
--- /dev/null
+++ b/fla/modules/fused_cross_entropy.py
@@ -0,0 +1,423 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2023, Tri Dao.
+
+from typing import Any, Tuple
+
+import torch
+import torch.nn as nn
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous
+
+# `all_gather_into_tensor` and `reduce_scatter_tensor` are new placeholders for
+# `_all_gather_base` and `_reduce_scatter_base`. They require the most recent
+# version of PyTorch. The following 2 lines are for backward compatibility with
+# older PyTorch.
+if "all_gather_into_tensor" not in dir(torch.distributed):
+ torch.distributed.all_gather_into_tensor = torch.distributed._all_gather_base
+
+
+@triton.heuristics({
+ "HAS_SMOOTHING": lambda args: args["label_smoothing"] > 0.0,
+})
+@triton.jit
+def cross_entropy_fwd_kernel(
+ loss_ptr, # data ptrs
+ lse_ptr,
+ z_loss_ptr,
+ logits_ptr,
+ labels_ptr,
+ label_smoothing,
+ logit_scale,
+ lse_square_scale,
+ ignore_index,
+ total_classes,
+ class_start_idx, # Useful for tensor parallel when each rank only has a subset of classes
+ n_cols, # shapes
+ n_rows,
+ logits_row_stride, # strides
+ BLOCK_SIZE: tl.constexpr,
+ HAS_SMOOTHING: tl.constexpr,
+ # if SPLIT (e.g. tensor parallel), don't include the LSE in the loss since it's not the final LSE
+ SPLIT: tl.constexpr,
+):
+ row_idx = tl.program_id(0)
+ col_block_idx = tl.program_id(1)
+ logits_ptr = logits_ptr + row_idx * logits_row_stride.to(tl.int64)
+ col_offsets = col_block_idx * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
+ label_idx = tl.load(labels_ptr + row_idx)
+ logits = tl.load(logits_ptr + col_offsets, mask=col_offsets < n_cols, other=-float("inf"))
+ logits = logits.to(tl.float32) * logit_scale
+ max_logits = tl.max(logits, 0)
+ if HAS_SMOOTHING:
+ sum_logits = tl.sum(tl.where(col_offsets < n_cols, logits, 0.0), 0)
+ lse = tl.log(tl.sum(tl.exp(logits - max_logits), 0)) + max_logits
+ tl.store(lse_ptr + col_block_idx * n_rows + row_idx, lse)
+ if label_idx == ignore_index:
+ loss = 0.0
+ z_loss = 0.0
+ else:
+ label_idx -= class_start_idx
+ if label_idx >= col_block_idx * BLOCK_SIZE and label_idx < min(
+ n_cols, (col_block_idx + 1) * BLOCK_SIZE
+ ):
+ logits_label = tl.load(logits_ptr + label_idx) * logit_scale
+ if HAS_SMOOTHING:
+ loss = (
+ (lse if not SPLIT else 0.0)
+ - label_smoothing * sum_logits / total_classes
+ - (1 - label_smoothing) * logits_label
+ )
+ else:
+ loss = (lse if not SPLIT else 0.0) - logits_label
+ else:
+ # If label is out of bounds, we set the CE loss to 0.0. But we still want the label_smoothing loss
+ if HAS_SMOOTHING:
+ loss = label_smoothing * ((lse if not SPLIT else 0.0) - sum_logits / total_classes)
+ else:
+ loss = 0.0
+ if not SPLIT:
+ z_loss = lse_square_scale * lse * lse
+ loss += z_loss
+ else:
+ z_loss = 0.0
+ tl.store(loss_ptr + col_block_idx * n_rows + row_idx, loss)
+ if not SPLIT:
+ tl.store(z_loss_ptr + col_block_idx * n_rows + row_idx, z_loss)
+
+
+@triton.heuristics({
+ "HAS_SMOOTHING": lambda args: args["label_smoothing"] > 0.0,
+})
+@triton.jit
+def cross_entropy_bwd_kernel(
+ dlogits_ptr, # data ptrs
+ dloss_ptr,
+ logits_ptr,
+ lse_ptr,
+ labels_ptr,
+ label_smoothing,
+ logit_scale,
+ lse_square_scale,
+ ignore_index,
+ total_classes,
+ class_start_idx, # Useful for tensor parallel when each rank only has a subset of classes
+ n_cols, # shapes
+ logits_row_stride, # strides
+ dlogits_row_stride,
+ dloss_row_stride,
+ BLOCK_SIZE: tl.constexpr,
+ HAS_SMOOTHING: tl.constexpr,
+):
+ row_idx = tl.program_id(0)
+ col_block_idx = tl.program_id(1)
+ logits_ptr = logits_ptr + row_idx * logits_row_stride.to(tl.int64)
+ dlogits_ptr = dlogits_ptr + row_idx * dlogits_row_stride.to(tl.int64)
+ col_offsets = col_block_idx * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
+ label_idx = tl.load(labels_ptr + row_idx)
+ if label_idx != ignore_index:
+ dloss = tl.load(dloss_ptr + row_idx * dloss_row_stride)
+ else:
+ dloss = 0.0
+ logits = tl.load(logits_ptr + col_offsets, mask=col_offsets < n_cols, other=-float("inf")).to(
+ tl.float32
+ ) * logit_scale
+ lse = tl.load(lse_ptr + row_idx)
+ probs = tl.exp(logits - lse)
+ probs += 2.0 * lse_square_scale * lse * probs
+ label_idx -= class_start_idx
+ if HAS_SMOOTHING:
+ smooth_negative = label_smoothing / total_classes
+ probs = tl.where(col_offsets == label_idx, probs - (1 - label_smoothing), probs) - smooth_negative
+ else:
+ probs = tl.where(col_offsets == label_idx, probs - 1.0, probs)
+ tl.store(dlogits_ptr + col_offsets, (dloss * logit_scale) * probs, mask=col_offsets < n_cols)
+
+
+def fused_cross_entropy_forward(
+ logits: torch.Tensor,
+ target: torch.Tensor,
+ label_smoothing: float = 0.0,
+ logit_scale: float = 1.0,
+ lse_square_scale: float = 0.0,
+ ignore_index: int = -100,
+ process_group=None,
+):
+ n_rows, n_cols = logits.shape
+ assert target.shape == (n_rows,)
+ world_size = 1 if process_group is None else torch.distributed.get_world_size(process_group)
+ total_classes = world_size * n_cols
+ rank = 0 if process_group is None else torch.distributed.get_rank(process_group)
+ class_start_idx = rank * n_cols
+
+ if logits.stride(-1) != 1:
+ logits = logits.contiguous()
+ # Set these similar to https://github.com/openai/triton/blob/main/python/tutorials/02-fused-softmax.py
+ MAX_BLOCK_SIZE = 64 * 1024
+ BLOCK_SIZE = min(triton.next_power_of_2(n_cols), MAX_BLOCK_SIZE)
+ num_warps = (
+ 4
+ if BLOCK_SIZE < 2048
+ else (8 if BLOCK_SIZE < 8192 else (16 if BLOCK_SIZE < 128 * 1024 else 32))
+ )
+ # We may split the lse computation across multiple blocks, then do a reduction
+ # lse(local_lse) to get the final LSE. This is faster for large n_cols (e.g., > 64k)
+ # where having just one thread block processing more than 64k elements is slow.
+ split = world_size > 1 or n_cols > MAX_BLOCK_SIZE
+ n_splits = (n_cols + BLOCK_SIZE - 1) // BLOCK_SIZE
+ loss_shape = (n_splits, n_rows) if n_splits > 1 else (n_rows,)
+ losses = torch.empty(*loss_shape, dtype=torch.float, device=logits.device)
+ lse = torch.empty(*loss_shape, dtype=torch.float, device=logits.device)
+ z_losses = torch.empty(*loss_shape, dtype=torch.float, device=logits.device)
+ # Need this, otherwise Triton tries to launch from cuda:0 and we get
+ # ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)
+ with torch.cuda.device(logits.device.index):
+ cross_entropy_fwd_kernel[(n_rows, n_splits)](
+ losses, # data ptrs
+ lse,
+ z_losses,
+ logits,
+ target,
+ label_smoothing,
+ logit_scale,
+ lse_square_scale,
+ ignore_index,
+ total_classes,
+ class_start_idx,
+ n_cols, # shapes
+ n_rows,
+ logits.stride(0), # strides
+ BLOCK_SIZE=BLOCK_SIZE, # constants
+ num_warps=num_warps,
+ SPLIT=split
+ )
+
+ if split:
+ # If there's no label_smoothing and the target is in this partition's vocab, losses contains
+ # -predicted_logit, and 0 otherwise.
+ # If label_smoothing=0.1 and the target is in this partition's vocab, losses contains
+ # -0.9 * predicted_logit - 0.1 * sum_logit / total_classes;
+ # for targets not in this partition's vocab, losses contains
+ # -0.1 * sum_logit / total_classes.
+ if n_splits > 1:
+ lse = torch.logsumexp(lse, dim=0)
+ losses = losses.sum(dim=0)
+ if world_size > 1:
+ lse_allgather = torch.empty(world_size, n_rows, dtype=lse.dtype, device=lse.device)
+ torch.distributed.all_gather_into_tensor(lse_allgather, lse, group=process_group)
+ handle_losses = torch.distributed.all_reduce(
+ losses, op=torch.distributed.ReduceOp.SUM, group=process_group, async_op=True
+ )
+ lse = torch.logsumexp(lse_allgather, dim=0)
+ handle_losses.wait()
+ # After the allreduce, if there's no label_smoothing, the total losses are - predicted_logit,
+ # we just have to add the (global) lse.
+ # If there's label_smoothing=0.1, the total losses are
+ # -0.9 * predicted_logit - 0.1 * sum logit / total_classes.
+ # Again, we just have to add the (global) lse.
+ losses += lse
+ if lse_square_scale != 0.0:
+ z_losses = lse_square_scale * lse.square()
+ z_losses.masked_fill_(target == ignore_index, 0.0)
+ losses += z_losses
+ else:
+ z_losses = torch.zeros_like(losses)
+ losses.masked_fill_(target == ignore_index, 0.0)
+
+ return losses, z_losses, lse, total_classes, class_start_idx
+
+
+class CrossEntropyLossFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ logits,
+ target,
+ label_smoothing=0.0,
+ logit_scale=1.0,
+ lse_square_scale=0.0,
+ ignore_index=-100,
+ inplace_backward=False,
+ process_group=None,
+ ):
+ losses, z_losses, lse, total_classes, class_start_idx = fused_cross_entropy_forward(
+ logits,
+ target,
+ label_smoothing,
+ logit_scale,
+ lse_square_scale,
+ ignore_index,
+ process_group,
+ )
+ ctx.save_for_backward(logits, lse, target)
+ ctx.mark_non_differentiable(z_losses)
+ ctx.label_smoothing = label_smoothing
+ ctx.logit_scale = logit_scale
+ ctx.lse_square_scale = lse_square_scale
+ ctx.ignore_index = ignore_index
+ ctx.total_classes = total_classes
+ ctx.class_start_idx = class_start_idx
+ ctx.inplace_backward = inplace_backward
+
+ return losses, z_losses
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, grad_losses, grad_z_losses):
+ del grad_z_losses # z_losses are only for logging.
+
+ logits, lse, target = ctx.saved_tensors
+ dlogits = logits if ctx.inplace_backward else torch.empty_like(logits)
+ n_rows, n_cols = logits.shape
+ BLOCK_SIZE = min(triton.next_power_of_2(n_cols), 4 * 1024)
+ num_warps = 4 if BLOCK_SIZE < 2048 else (8 if BLOCK_SIZE < 8192 else 16)
+ def grid(META): return (n_rows, triton.cdiv(n_cols, META["BLOCK_SIZE"])) # noqa
+ # Need this, otherwise Triton tries to launch from cuda:0 and we get
+ # ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)
+ with torch.cuda.device(logits.device.index):
+ cross_entropy_bwd_kernel[grid](
+ dlogits, # data ptrs
+ grad_losses,
+ logits,
+ lse,
+ target,
+ ctx.label_smoothing,
+ ctx.logit_scale,
+ ctx.lse_square_scale,
+ ctx.ignore_index,
+ ctx.total_classes,
+ ctx.class_start_idx,
+ n_cols, # shapes
+ logits.stride(0), # strides
+ dlogits.stride(0),
+ grad_losses.stride(0),
+ BLOCK_SIZE=BLOCK_SIZE, # constants
+ num_warps=num_warps,
+ )
+ return dlogits, None, None, None, None, None, None, None, None
+
+
+def cross_entropy_loss(
+ logits: torch.Tensor,
+ target: torch.Tensor,
+ label_smoothing: float = 0.0,
+ logit_scale: float = 1.0,
+ lse_square_scale: float = 0.0,
+ ignore_index=-100,
+ inplace_backward: bool = False,
+ process_group=None,
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ """
+ Arguments:
+ logits: [batch, vocab_size]
+ target: [batch,]
+ label_smoothing: float
+ logit_scale: float.
+ Multiply logits by this scale before calculating the loss.
+ lse_square_scale: float.
+ If > 0, we add lse_square_scale * lse(logits) ^ 2 to the loss.
+ This is also referred to as "z-loss".
+ ignore_index: int.
+ If target == ignore_index, the loss is set to 0.0.
+ inplace_backward: bool.
+ If True, we do the backward pass in-place by modifying the logits.
+ This saves memory.
+ process_group:
+ if not None, we're doing Tensor Parallel: each process is responsible for
+ one part of the vocab. The loss will be aggregated across processes.
+ Returns:
+ losses: [batch,], float
+ z_losses: [batch,], float
+ """
+ return CrossEntropyLossFunction.apply(
+ logits,
+ target,
+ label_smoothing,
+ logit_scale,
+ lse_square_scale,
+ ignore_index,
+ inplace_backward,
+ process_group,
+ )
+
+
+class FusedCrossEntropyLoss(nn.Module):
+ def __init__(
+ self,
+ ignore_index: int = -100,
+ reduction: str = "mean",
+ label_smoothing: float = 0.0,
+ logit_scale: float = 1.0,
+ lse_square_scale: float = 0.0,
+ inplace_backward: bool = False,
+ process_group: Any = None,
+ return_z_loss: bool = False,
+ ):
+ """
+ Arguments:
+ ignore_index: int. If target == ignore_index, the loss is set to 0.0.
+ label_smoothing: float
+ lse_square_scale: float. If > 0, we add lse_square_scale * lse(logits) ^ 2 to the loss.
+ This is also referred to as "z-loss".
+ inplace_backward: bool. If True, we do the backward pass in-place by modifying the logits.
+ This saves memory.
+ process_group: if not None, we're doing Tensor Parallel: each process is responsible for
+ one part of the vocab. The loss will be aggregated across processes.
+ return_z_loss: bool. If True, we return the component of the loss contributed by
+ the lse_square_scale value. This value is only for logging and does not support
+ backprop.
+ """
+ super().__init__()
+ if reduction not in ["mean", "none", "sum"]:
+ raise NotImplementedError("Only support reduction = 'mean' or 'none' or 'sum'")
+ self.ignore_index = ignore_index
+ self.reduction = reduction
+ self.label_smoothing = label_smoothing
+ self.logit_scale = logit_scale
+ self.lse_square_scale = lse_square_scale
+ self.inplace_backward = inplace_backward
+ self.process_group = process_group
+ self.return_z_loss = return_z_loss
+
+ def forward(self, input, target):
+ """
+ Arguments:
+ input: (batch, vocab_size)
+ target: (batch,)
+ Returns:
+ losses: (batch,) if reduction is 'none', else (1,), dtype float
+ z_loss: (batch,) if reduction is 'none', else (1,), dtype float (if self.return_z_loss)
+ """
+ assert input.is_cuda and target.is_cuda, "Only support CUDA tensors"
+ loss, z_loss = cross_entropy_loss(
+ input,
+ target,
+ label_smoothing=self.label_smoothing,
+ logit_scale=self.logit_scale,
+ lse_square_scale=self.lse_square_scale,
+ ignore_index=self.ignore_index,
+ inplace_backward=self.inplace_backward,
+ process_group=self.process_group,
+ )
+ if self.reduction == "mean":
+ loss = loss.sum() / (target != self.ignore_index).sum()
+ elif self.reduction == "sum":
+ loss = loss.sum()
+ else:
+ loss = loss
+
+ if not self.return_z_loss:
+ return loss
+
+ if self.reduction == "mean":
+ z_loss = z_loss.sum() / (target != self.ignore_index).sum()
+ elif self.reduction == "sum":
+ z_loss = z_loss.sum()
+ else:
+ z_loss = z_loss
+
+ return loss, z_loss
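+
+
+# A minimal usage sketch, assuming a CUDA device (the kernels only accept CUDA tensors);
+# the shapes and hyperparameters below are arbitrary:
+#
+#   logits = torch.randn(8, 32000, device="cuda", requires_grad=True)
+#   target = torch.randint(0, 32000, (8,), device="cuda")
+#   criterion = FusedCrossEntropyLoss(label_smoothing=0.1, reduction="mean")
+#   loss = criterion(logits, target)
+#   loss.backward()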
diff --git a/fla/modules/fused_kl_div.py b/fla/modules/fused_kl_div.py
new file mode 100644
index 0000000000000000000000000000000000000000..a69c9d6ab2cfa8a6ce2eb2d0794ddff206caef37
--- /dev/null
+++ b/fla/modules/fused_kl_div.py
@@ -0,0 +1,321 @@
+# -*- coding: utf-8 -*-
+
+from typing import Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous
+
+# The hard limit of TRITON_MAX_TENSOR_NUMEL is 1048576
+# https://github.com/triton-lang/triton/blob/ba42a5c68fd0505f8c42f4202d53be0f8d9a5fe0/python/triton/language/core.py#L19
+# However, setting limit as 65536 as in LayerNorm tutorial is faster because of less register spilling
+# The optimal maximum block size depends on your hardware, your kernel, and your dtype
+MAX_FUSED_SIZE = 65536 // 2
+
+
+@triton.jit
+def kl_div_kernel(
+ logits,
+ target_logits,
+ loss,
+ s_logits,
+ s_loss,
+ reduction: tl.constexpr,
+ N: tl.constexpr,
+ V: tl.constexpr,
+ BV: tl.constexpr
+):
+ # https://github.com/triton-lang/triton/issues/1058
+ # If N*V is too large, i_n * stride will overflow out of int32, so we convert to int64
+ i_n = tl.program_id(0).to(tl.int64)
+
+ logits += i_n * s_logits
+ target_logits += i_n * s_logits
+
+ # m is the running max (following the notation of the online softmax paper)
+ sm, tm = float('-inf'), float('-inf')
+ # d is the running sum of exponentials (same notation)
+ sd, td = 0.0, 0.0
+
+ NV = tl.cdiv(V, BV)
+ for iv in range(0, NV):
+ o_x = iv * BV + tl.arange(0, BV)
+ # for student
+ b_sl = tl.load(logits + o_x, mask=o_x < V, other=float('-inf'))
+ b_sm = tl.max(b_sl)
+ m_new = tl.maximum(sm, b_sm)
+ sd = sd * tl.exp(sm - m_new) + tl.sum(tl.exp(b_sl - m_new))
+ sm = m_new
+ # for teacher
+ b_tl = tl.load(target_logits + o_x, mask=o_x < V, other=float('-inf'))
+ b_tm = tl.max(b_tl)
+ m_new = tl.maximum(tm, b_tm)
+ td = td * tl.exp(tm - m_new) + tl.sum(tl.exp(b_tl - m_new))
+ tm = m_new
+
+ b_loss = 0.
+ # KL(y_true || y) = exp(y_true) * (log(y_true) - log(y))
+ for iv in range(0, NV):
+ o_x = iv * BV + tl.arange(0, BV)
+ b_sl = tl.load(logits + o_x, mask=o_x < V, other=float('-inf'))
+ b_tl = tl.load(target_logits + o_x, mask=o_x < V, other=float('-inf'))
+ b_sp_log = b_sl - sm - tl.log(sd)
+ b_tp_log = b_tl - tm - tl.log(td)
+ b_sp = tl.exp(b_sp_log)
+ b_tp = tl.exp(b_tp_log)
+ b_kl = tl.where(o_x < V, b_tp * (b_tp_log - b_sp_log), 0)
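+ # gradient of KL(teacher || student) w.r.t. the student logits: softmax(student) - softmax(teacher)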
+ b_dl = -b_tp + b_sp
+ b_loss += tl.sum(b_kl)
+ if reduction == 'batchmean':
+ b_dl = b_dl / N
+ tl.store(logits + o_x, b_dl, mask=o_x < V)
+
+ # Normalize the loss by the number of elements if reduction is 'batchmean'
+ if reduction == 'batchmean':
+ b_loss = b_loss / N
+
+ tl.store(loss + i_n * s_loss, b_loss)
+
+
+@triton.jit
+def elementwise_mul_kernel(
+ x,
+ g,
+ N: tl.constexpr,
+ B: tl.constexpr
+):
+ """
+ This function multiplies each element of the tensor pointed to by x with the value pointed to by g.
+ The multiplication is performed in-place on the tensor pointed to by x.
+
+ Parameters:
+ x:
+ Pointer to the input tensor.
+ g:
+ Pointer to the gradient output value.
+ N (int):
+ The number of columns in the input tensor.
+ B (int):
+ The block size for Triton operations.
+ """
+
+ # Get the program ID and convert it to int64 to avoid overflow
+ i_x = tl.program_id(0).to(tl.int64)
+ o_x = i_x * B + tl.arange(0, B)
+
+ # Load the gradient output value
+ b_g = tl.load(g)
+ b_x = tl.load(x + o_x, mask=o_x < N)
+ tl.store(x + o_x, b_x * b_g, mask=o_x < N)
+
+
+def fused_kl_div_forward(
+ x: torch.Tensor,
+ target_x: torch.Tensor,
+ weight: torch.Tensor,
+ target_weight: torch.Tensor,
+ reduction: str = 'batchmean'
+):
+ device = x.device
+
+ # ideally, we would like to achieve the same memory consumption as [N, H],
+ # so the expected chunk size should be:
+ # NC = ceil(V / H)
+ # C = ceil(N / NC)
+ # for ex: N = 4096*4, V = 32000, H = 4096 ==> NC = 8, C = ceil(N / NC) = 2048
+ N, H, V = *x.shape, weight.shape[0]
+ BV = min(MAX_FUSED_SIZE, triton.next_power_of_2(V))
+ # TODO: in real cases, we may need to limit the number of chunks NC to
+ # ensure the precision of the accumulated gradients
+ NC = min(8, triton.cdiv(V, H))
+ C = triton.next_power_of_2(triton.cdiv(N, NC))
+ NC = triton.cdiv(N, C)
+
+ dx = torch.zeros_like(x, device=device)
+ dw = torch.zeros_like(weight, device=device) if weight is not None else None
+ # we use fp32 for loss accumulator
+ loss = torch.zeros(N, dtype=torch.float32, device=device)
+
+ for ic in range(NC):
+ start, end = ic * C, min((ic + 1) * C, N)
+ # [C, H]
+ c_sx = x[start:end]
+ c_tx = target_x[start:end]
+ # when doing matmul, use the original precision
+ # [C, V]
+ c_sl = F.linear(c_sx, weight)
+ c_tl = F.linear(c_tx, target_weight)
+
+ # unreduced loss
+ c_loss = loss[start:end]
+
+ # Here we calculate the gradient of c_sx in place so we can save memory.
+ kl_div_kernel[(c_sx.shape[0],)](
+ logits=c_sl,
+ target_logits=c_tl,
+ loss=c_loss,
+ s_logits=c_sl.stride(-2),
+ s_loss=c_loss.stride(-1),
+ reduction=reduction,
+ N=N,
+ V=V,
+ BV=BV,
+ num_warps=32
+ )
+
+ # the gradient of the logits is computed in-place by the above triton kernel and has shape [C, V];
+ # projecting it back through the unembedding weight below gives dx[start:end] with shape [C, H]
+ # [C, H]
+
+ dx[start:end] = torch.mm(c_sl, weight)
+
+ if weight is not None:
+ torch.addmm(input=dw, mat1=c_sl.t(), mat2=c_sx, out=dw)
+
+ loss = loss.sum()
+ return loss, dx, dw
+
+
+def fused_kl_div_backward(
+ do: torch.Tensor,
+ dx: torch.Tensor,
+ dw: torch.Tensor
+):
+ # If the KL-div loss is the last layer, do is 1.0. Skip the mul to save time
+ if torch.ne(do, torch.tensor(1.0, device=do.device)):
+ # We use a Triton kernel instead of a PyTorch operation because modifying inputs in-place
+ # for gradient storage and backward multiple times causes anomalies with PyTorch but not with Triton.
+ N, H = dx.shape
+ B = min(MAX_FUSED_SIZE, triton.next_power_of_2(H))
+
+ elementwise_mul_kernel[(triton.cdiv(N * H, B),)](
+ x=dx,
+ g=do,
+ N=N*H,
+ B=B,
+ num_warps=32,
+ )
+
+ # handle dw
+ if dw is not None:
+ V, H = dw.shape
+ elementwise_mul_kernel[(triton.cdiv(V * H, B),)](
+ x=dw,
+ g=do,
+ N=V*H,
+ B=B,
+ num_warps=32,
+ )
+
+ return dx, dw
+
+
+class FusedKLDivLossFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ x: torch.Tensor,
+ target_x: torch.Tensor,
+ weight: torch.Tensor,
+ target_weight: torch.Tensor,
+ reduction: str
+ ):
+ loss, dx, dw = fused_kl_div_forward(
+ x=x,
+ target_x=target_x,
+ weight=weight,
+ target_weight=target_weight,
+ reduction=reduction
+ )
+ ctx.save_for_backward(dx, dw)
+ return loss
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, do):
+ dx, dw = ctx.saved_tensors
+ dx, dw = fused_kl_div_backward(do, dx, dw)
+ return dx, None, dw, None, None
+
+
+def fused_kl_div_loss(
+ x: torch.Tensor,
+ target_x: torch.Tensor,
+ weight: torch.Tensor,
+ target_weight: torch.Tensor,
+ reduction: str = 'batchmean'
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ """
+ Args:
+ x (torch.Tensor): [batch_size * seq_len, hidden_size]
+ target_x (torch.Tensor): [batch_size * seq_len, hidden_size]
+ weight (torch.Tensor): [vocab_size, hidden_size]
+ where `vocab_size` is the number of classes.
+ target_weight (torch.Tensor): [vocab_size, hidden_size]
+ where `vocab_size` is the number of classes.
+ reduction:
+ Specifies the reduction to apply to the output: 'batchmean'. Default: 'batchmean'.
+ Returns:
+ loss
+ """
+ return FusedKLDivLossFunction.apply(
+ x,
+ target_x,
+ weight,
+ target_weight,
+ reduction
+ )
+
+
+class FusedKLDivLoss(nn.Module):
+
+ def __init__(
+ self,
+ reduction: str = 'batchmean'
+ ):
+ """
+ Args:
+ reduction:
+ Specifies the reduction to apply to the output: 'batchmean'. Default: 'batchmean'.
+ """
+ super().__init__()
+
+ assert reduction in ['batchmean'], f"reduction: {reduction} is not supported"
+
+ self.reduction = reduction
+
+ def forward(
+ self,
+ x: torch.Tensor,
+ target_x: torch.Tensor,
+ weight: torch.Tensor,
+ target_weight: torch.Tensor
+ ):
+ """
+ Args:
+ x (torch.Tensor): [batch_size * seq_len, hidden_size]
+ target_x (torch.Tensor): [batch_size * seq_len, hidden_size]
+ weight (torch.Tensor): [vocab_size, hidden_size]
+ where `vocab_size` is the number of classes.
+ target_weight (torch.Tensor): [vocab_size, hidden_size]
+ where `vocab_size` is the number of classes.
+ Returns:
+ loss
+ """
+ loss = fused_kl_div_loss(
+ x=x,
+ target_x=target_x,
+ weight=weight,
+ target_weight=target_weight,
+ reduction=self.reduction
+ )
+ return loss
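+
+
+# A minimal usage sketch, assuming a CUDA device (the Triton kernels require CUDA tensors);
+# the student/teacher hidden states and unembedding weights below have arbitrary shapes:
+#
+#   x = torch.randn(64, 512, device="cuda", requires_grad=True)
+#   target_x = torch.randn(64, 512, device="cuda")
+#   weight = torch.randn(32000, 512, device="cuda", requires_grad=True)
+#   target_weight = torch.randn(32000, 512, device="cuda")
+#   loss = FusedKLDivLoss(reduction="batchmean")(x, target_x, weight, target_weight)
+#   loss.backward()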
diff --git a/fla/modules/fused_linear_cross_entropy.py b/fla/modules/fused_linear_cross_entropy.py
new file mode 100644
index 0000000000000000000000000000000000000000..e1e3ac8a4218fcf946ecabc7bfba0978ee1ca40d
--- /dev/null
+++ b/fla/modules/fused_linear_cross_entropy.py
@@ -0,0 +1,509 @@
+# -*- coding: utf-8 -*-
+
+# Code adapted from
+# https://github.com/linkedin/Liger-Kernel/blob/main/src/liger_kernel/ops/fused_linear_cross_entropy.py
+
+from typing import Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import triton
+import triton.language as tl
+
+from fla.ops.utils import logsumexp_fwd
+from fla.utils import contiguous
+
+# The hard limit of TRITON_MAX_TENSOR_NUMEL is 1048576
+# https://github.com/triton-lang/triton/blob/ba42a5c68fd0505f8c42f4202d53be0f8d9a5fe0/python/triton/language/core.py#L19
+# However, setting limit as 65536 as in LayerNorm tutorial is faster because of less register spilling
+# The optimal maximum block size depends on your hardware, your kernel, and your dtype
+MAX_FUSED_SIZE = 65536 // 2
+
+
+@triton.jit
+def cross_entropy_kernel(
+ logits,
+ lse,
+ target,
+ loss,
+ total,
+ ignore_index,
+ label_smoothing: tl.constexpr,
+ logit_scale: tl.constexpr,
+ reduction: tl.constexpr,
+ V: tl.constexpr,
+ BV: tl.constexpr
+):
+ """
+ This kernel computes both cross entropy loss and the gradient of the input.
+ We only consider hard label + mean reduction for now.
+ Please refer to https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html for the math.
+
+ Args:
+ logits:
+ Pointer to logits tensor.
+ lse:
+ Pointer to logsumexp tensor.
+ target: Pointer to target tensor.
+ loss:
+ Pointer to tensor to store the loss.
+ V (int):
+ The number of columns in the input tensor.
+ total (int):
+ The number of non-ignored classes.
+ ignore_index (int):
+ The index to ignore in the target.
+ label_smoothing (float):
+ The amount of smoothing when computing the loss, where 0.0 means no smoothing.
+ reduction (str):
+ The string for the reduction to apply
+ BV (int):
+ The block size for vocab.
+ """
+
+ # https://github.com/triton-lang/triton/issues/1058
+ # If B*T*V is too large, i_n * stride will overflow out of int32, so we convert to int64
+ i_n = tl.program_id(0).to(tl.int64)
+ NV = tl.cdiv(V, BV)
+
+ # 1. Load target first because if the target is ignore_index, we can return right away
+ b_y = tl.load(target + i_n)
+
+ # 2. locate the start index
+ logits += i_n * V
+
+ if b_y == ignore_index:
+ # set all x as 0
+ for i in range(0, V, BV):
+ o_v = i + tl.arange(0, BV)
+ tl.store(logits + o_v, 0.0, mask=o_v < V)
+ return
+
+ # Online softmax: 2 loads + 1 store (compared with 3 loads + 1 store for the safe softmax)
+ # Refer to Algorithm 3 in the paper: https://arxiv.org/pdf/1805.02867
+
+ # 3. [Online softmax] first pass: compute logsumexp
+ # we already did this in another kernel (`logsumexp_fwd`), so here we just load the result
+ b_l = tl.load(logits + b_y) * logit_scale
+ b_lse = tl.load(lse + i_n)
+
+ # 4. Calculate the loss
+ # loss = lse - logits_l
+ b_loss = b_lse - b_l
+
+ # Label smoothing is a general case of normal cross entropy
+ # See the full derivation at https://github.com/linkedin/Liger-Kernel/pull/198#issue-2503665310
+ b_z = 0.0
+ eps = label_smoothing / V
+
+ # We need tl.debug_barrier() as mentioned in
+ # https://github.com/triton-lang/triton/blob/ba42a5c68fd0505f8c42f4202d53be0f8d9a5fe0/python/triton/ops/cross_entropy.py#L34
+ tl.debug_barrier()
+
+ # 5. [Online Softmax] Second pass: compute gradients
+ # For 'mean' reduction, gradients are normalized by number of non-ignored elements
+ # dx_y = (softmax(x_y) - 1) / N
+ # dx_i = softmax(x_i) / N, i != y
+ # For label smoothing:
+ # dx_i = (softmax(x_y) - label_smoothing / V) / N, i != y
+ # dx_y = (softmax(x_y) - label_smoothing / V - (1 - label_smoothing)) / N
+ # = dx_i - (1 - label_smoothing) / N
+ for iv in range(0, NV):
+ o_v = iv * BV + tl.arange(0, BV)
+ b_logits = tl.load(logits + o_v, mask=o_v < V, other=float('-inf')) * logit_scale
+ if label_smoothing > 0:
+ # scale X beforehand to avoid overflow
+ b_z += tl.sum(tl.where(o_v < V, -eps * b_logits, 0.0))
+ b_p = (tl.exp(b_logits - b_lse) - eps) * logit_scale
+ if reduction == "mean":
+ b_p = b_p / total
+ tl.store(logits + o_v, b_p, mask=o_v < V)
+
+ tl.debug_barrier()
+
+ # Original loss = H(q, p), with label smoothing regularization = H(q', p) and (label_smoothing / V) = eps
+ # H(q', p) = (1 - label_smoothing) * H(q, p) + label_smoothing * H(u, p)
+ # = (1 - label_smoothing) * H(q, p) + eps * sum(logsoftmax(x_i))
+ # By using m (global max of xi) and d (sum of e^(xi-m)), we can simplify as:
+ # = (1 - label_smoothing) * H(q, p) + (-sum(x_i * eps) + label_smoothing * (m + logd))
+ # Refer to H(q', p) in section 7 of the paper:
+ # https://arxiv.org/pdf/1512.00567
+ # pytorch:
+ # https://github.com/pytorch/pytorch/blob/2981534f54d49fa3a9755c9b0855e7929c2527f0/aten/src/ATen/native/LossNLL.cpp#L516
+ # See full derivation at https://github.com/linkedin/Liger-Kernel/pull/198#issuecomment-2333753087
+ if label_smoothing > 0:
+ b_loss = b_loss * (1 - label_smoothing) + (b_z + label_smoothing * b_lse)
+
+ # 6. Specially handle the i==y case where `dx_y = (softmax(x_y) - (1 - label_smoothing)) / N`
+ b_l = tl.load(logits + b_y)
+
+ # Normalize the loss by the number of non-ignored elements if reduction is "mean"
+ if reduction == 'mean':
+ b_loss = b_loss / total
+ b_l += (label_smoothing - 1) / total * logit_scale
+ else:
+ b_l += (label_smoothing - 1) * logit_scale
+
+ tl.store(loss + i_n, b_loss)
+ tl.store(logits + b_y, b_l)
+
+
+@triton.jit
+def elementwise_mul_kernel(
+ x,
+ g,
+ N: tl.constexpr,
+ B: tl.constexpr
+):
+ """
+ This function multiplies each element of the tensor pointed to by x with the value pointed to by g.
+ The multiplication is performed in-place on the tensor pointed to by x.
+
+ Parameters:
+ x:
+ Pointer to the input tensor.
+ g:
+ Pointer to the gradient output value.
+ N (int):
+ The number of columns in the input tensor.
+ B (int):
+ The block size for Triton operations.
+ """
+
+ # Get the program ID and convert it to int64 to avoid overflow
+ i_x = tl.program_id(0).to(tl.int64)
+ o_x = i_x * B + tl.arange(0, B)
+
+ # Load the gradient output value
+ b_g = tl.load(g)
+ b_x = tl.load(x + o_x, mask=o_x < N)
+ tl.store(x + o_x, b_x * b_g, mask=o_x < N)
+
+
+def fused_linear_cross_entropy_forward(
+ x: torch.Tensor,
+ target: torch.LongTensor,
+ weight: torch.Tensor,
+ bias: torch.Tensor = None,
+ ignore_index: int = -100,
+ label_smoothing: float = 0.0,
+ logit_scale: float = 1.0,
+ num_chunks: int = 8,
+ reduction: str = "mean"
+):
+ device = x.device
+
+ # inputs have shape: [N, H]
+ # materialized activations will have shape: [N, V]
+ # the increase in memory = [N, V]
+ # reduction can be achieved by partitioning the number of tokens N into smaller chunks.
+
+ # ideally, we would like to achieve the same memory consumption as [N, H],
+ # so the expected chunk size should be:
+ # NC = ceil(V / H)
+ # C = ceil(N / NC)
+ # for ex: N = 4096*4, V = 32000, H = 4096 ==> NC = 8, C = ceil(N / NC) = 2048
+ N, H, V = *x.shape, weight.shape[0]
+ BV = min(MAX_FUSED_SIZE, triton.next_power_of_2(V))
+ # TODO: in real cases, we may need to limit the number of chunks NC to
+ # ensure the precision of the accumulated gradients
+ NC = min(num_chunks, triton.cdiv(V, H))
+ C = triton.next_power_of_2(triton.cdiv(N, NC))
+ NC = triton.cdiv(N, C)
+
+ dx = torch.zeros_like(x, device=device)
+ dw = torch.zeros_like(weight, device=device) if weight is not None else None
+ db = torch.zeros_like(bias, device=device) if bias is not None else None
+ # we use fp32 for loss accumulator
+ loss = torch.zeros(N, dtype=torch.float32, device=device)
+
+ total = target.ne(ignore_index).sum().item()
+
+ for ic in range(NC):
+ start, end = ic * C, min((ic + 1) * C, N)
+ # [C, H]
+ c_x = x[start:end]
+ # when doing matmul, use the original precision
+ # [C, V]
+ c_logits = F.linear(c_x, weight, bias)
+ c_target = target[start:end]
+ # [C]
+ # keep lse in fp32 to maintain precision
+ c_lse = logsumexp_fwd(c_logits, scale=logit_scale, dtype=torch.float)
+
+ # unreduced loss
+ c_loss = loss[start:end]
+
+ # Here we calculate the gradient of c_logits in place so we can save memory.
+ cross_entropy_kernel[(c_logits.shape[0],)](
+ logits=c_logits,
+ lse=c_lse,
+ target=c_target,
+ loss=c_loss,
+ total=total,
+ ignore_index=ignore_index,
+ label_smoothing=label_smoothing,
+ logit_scale=logit_scale,
+ reduction=reduction,
+ V=V,
+ BV=BV,
+ num_warps=32
+ )
+
+ # gradient of logits is computed in-place by the above triton kernel and is of shape: C x V
+ # thus dx should be of shape: C x H
+ dx[start:end] = torch.mm(c_logits, weight)
+
+ # keep dw in fp32 to maintain precision
+ if weight is not None:
+ dw += c_logits.t() @ c_x
+
+ if bias is not None:
+ torch.add(input=db, other=c_logits.sum(0), out=db)
+
+ loss = loss.sum()
+ if dw is not None:
+ dw = dw.to(weight)
+ if db is not None:
+ db = db.to(bias)
+ return loss, dx, dw, db
+
+
+def fused_linear_cross_entropy_backward(
+ do: torch.Tensor,
+ dx: torch.Tensor,
+ dw: torch.Tensor,
+ db: torch.Tensor
+):
+ # If cross entropy is the last layer, do is 1.0. Skip the mul to save time
+ if torch.ne(do, torch.tensor(1.0, device=do.device)):
+ # We use a Triton kernel instead of a PyTorch operation because modifying inputs in-place
+ # for gradient storage and backward multiple times causes anomalies with PyTorch but not with Triton.
+ N, H = dx.shape
+ B = min(MAX_FUSED_SIZE, triton.next_power_of_2(H))
+
+ elementwise_mul_kernel[(triton.cdiv(N * H, B),)](
+ x=dx,
+ g=do,
+ N=N*H,
+ B=B,
+ num_warps=32,
+ )
+
+ # handle dw
+ if dw is not None:
+ V, H = dw.shape
+ elementwise_mul_kernel[(triton.cdiv(V * H, B),)](
+ x=dw,
+ g=do,
+ N=V*H,
+ B=B,
+ num_warps=32,
+ )
+
+ if db is not None:
+ V = db.shape[0]
+ elementwise_mul_kernel[(triton.cdiv(V, B),)](
+ x=db,
+ g=do,
+ N=V,
+ B=B,
+ num_warps=32,
+ )
+ return dx, dw, db
+
+
+class FusedLinearCrossEntropyFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ x: torch.Tensor,
+ target: torch.LongTensor,
+ weight: torch.Tensor,
+ bias: torch.Tensor = None,
+ ignore_index: int = -100,
+ label_smoothing: float = 0.0,
+ logit_scale: float = 1.0,
+ num_chunks: int = 8,
+ reduction: str = "mean"
+ ):
+ """
+ Fusing the last linear layer with cross-entropy loss
+ Reference: https://github.com/mgmalek/efficient_cross_entropy
+
+ Handle the forward and backward pass of the final linear layer via cross-entropy loss by avoiding
+ the materialization of the large logits tensor. Since Cross Entropy Loss is the last layer, we can
+ compute the gradient at the forward pass. By doing so, we don't have to store the x and target
+ for the backward pass.
+
+ x (torch.Tensor): [batch_size * seq_len, hidden_size]
+ target (torch.LongTensor): [batch_size * seq_len]
+ where each value is in [0, vocab_size).
+ weight (torch.Tensor): [vocab_size, hidden_size]
+ where `vocab_size` is the number of classes.
+ bias (Optional[torch.Tensor]): [vocab_size]
+ where `vocab_size` is the number of classes.
+ ignore_index:
+ the index to ignore in the target.
+ label_smoothing:
+ the amount of smoothing when computing the loss, where 0.0 means no smoothing.
+ logit_scale: float
+ A scaling factor applied to the logits. Default: 1.0
+ num_chunks: int
+ The number of chunks to split the input tensor into for processing.
+ This can help optimize memory usage and computation speed.
+ Default: 8
+ reduction:
+ Specifies the reduction to apply to the output: 'mean' | 'sum'.
+ 'mean': the weighted mean of the output is taken,
+ 'sum': the output will be summed.
+ Default: 'mean'.
+ """
+ loss, dx, dw, db = fused_linear_cross_entropy_forward(
+ x,
+ target,
+ weight,
+ bias,
+ ignore_index,
+ label_smoothing,
+ logit_scale,
+ num_chunks,
+ reduction
+ )
+ # store the precomputed (detached) gradients for the backward pass
+ ctx.save_for_backward(
+ dx.detach(),
+ dw.detach() if weight is not None else None,
+ db.detach() if bias is not None else None,
+ )
+ return loss
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, do):
+ dx, dw, db = ctx.saved_tensors
+ dx, dw, db = fused_linear_cross_entropy_backward(do, dx, dw, db)
+ return dx, None, dw, db, None, None, None, None, None
+
+
+def fused_linear_cross_entropy_loss(
+ x: torch.Tensor,
+ target: torch.LongTensor,
+ weight: torch.Tensor,
+ bias: torch.Tensor = None,
+ ignore_index: int = -100,
+ label_smoothing: float = 0.0,
+ logit_scale: float = 1.0,
+ num_chunks: int = 8,
+ reduction: str = "mean"
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ """
+ Args:
+ x (torch.Tensor): [batch_size * seq_len, hidden_size]
+ target (torch.LongTensor): [batch_size * seq_len]
+ where each value is in [0, vocab_size).
+ weight (torch.Tensor): [vocab_size, hidden_size]
+ where `vocab_size` is the number of classes.
+ bias (Optional[torch.Tensor]): [vocab_size]
+ where `vocab_size` is the number of classes.
+ ignore_index: int.
+ If target == ignore_index, the loss is set to 0.0.
+ label_smoothing: float
+ logit_scale: float
+ A scaling factor applied to the logits. Default: 1.0
+ num_chunks: int
+ The number of chunks to split the input tensor into for processing.
+ This can help optimize memory usage and computation speed.
+ Default: 8
+ reduction:
+ Specifies the reduction to apply to the output: 'mean' | 'sum'.
+ 'mean': the weighted mean of the output is taken,
+ 'sum': the output will be summed.
+ Default: 'mean'.
+ Returns:
+ losses: [batch,], float
+ """
+ return FusedLinearCrossEntropyFunction.apply(
+ x,
+ target,
+ weight,
+ bias,
+ ignore_index,
+ label_smoothing,
+ logit_scale,
+ num_chunks,
+ reduction
+ )
+
+
+class FusedLinearCrossEntropyLoss(nn.Module):
+
+ def __init__(
+ self,
+ ignore_index: int = -100,
+ label_smoothing: float = 0.0,
+ logit_scale: float = 1.0,
+ num_chunks: int = 8,
+ reduction: str = "mean"
+ ):
+ """
+ Args:
+ ignore_index: int.
+ If target == ignore_index, the loss is set to 0.0.
+ label_smoothing: float
+ logit_scale: float
+ A scaling factor applied to the logits. Default: 1.0
+ num_chunks: int
+ The number of chunks to split the input tensor into for processing.
+ This can help optimize memory usage and computation speed.
+ Default: 8
+ reduction:
+ Specifies the reduction to apply to the output: 'mean' | 'sum'.
+ 'mean': the weighted mean of the output is taken,
+ 'sum': the output will be summed.
+ Default: 'mean'.
+ """
+ super().__init__()
+
+ assert reduction in ["none", "mean", "sum"], f"reduction: {reduction} is not supported"
+
+ self.ignore_index = ignore_index
+ self.label_smoothing = label_smoothing
+ self.logit_scale = logit_scale
+ self.num_chunks = num_chunks
+ self.reduction = reduction
+
+ def forward(
+ self,
+ x: torch.Tensor,
+ target: torch.LongTensor,
+ weight: torch.Tensor,
+ bias: Optional[torch.Tensor] = None
+ ):
+ """
+ Args:
+ x (torch.Tensor): [batch_size * seq_len, hidden_size]
+ target (torch.LongTensor): [batch_size * seq_len]
+ where each value is in [0, V).
+ weight (torch.Tensor): [vocab_size, hidden_size]
+ where `vocab_size` is the number of classes.
+ bias (Optional[torch.Tensor]): [vocab_size]
+ where `vocab_size` is the number of classes.
+ Returns:
+ loss
+ """
+ loss = fused_linear_cross_entropy_loss(
+ x,
+ target,
+ weight=weight,
+ bias=bias,
+ ignore_index=self.ignore_index,
+ label_smoothing=self.label_smoothing,
+ logit_scale=self.logit_scale,
+ num_chunks=self.num_chunks,
+ reduction=self.reduction
+ )
+ return loss
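+
+
+# A minimal usage sketch, assuming a CUDA device and arbitrary shapes; the unembedding
+# weight is passed in explicitly so the full [N, V] logits are never materialized at once:
+#
+#   x = torch.randn(64, 512, device="cuda", requires_grad=True)
+#   target = torch.randint(0, 32000, (64,), device="cuda")
+#   weight = torch.randn(32000, 512, device="cuda", requires_grad=True)
+#   criterion = FusedLinearCrossEntropyLoss(num_chunks=8, reduction="mean")
+#   loss = criterion(x, target, weight)
+#   loss.backward()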
diff --git a/fla/modules/fused_norm_gate.py b/fla/modules/fused_norm_gate.py
new file mode 100644
index 0000000000000000000000000000000000000000..739b5ae46ca4e15d263fabbebfa70dcd6424ed7d
--- /dev/null
+++ b/fla/modules/fused_norm_gate.py
@@ -0,0 +1,889 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2023, Tri Dao.
+# https://github.com/state-spaces/mamba/blob/fb7b5310fa865dbd62aa059b1e26f2b431363e2a/mamba_ssm/ops/triton/layernorm.py
+# Implement residual + layer_norm / rms_norm.
+
+# Based on the Triton LayerNorm tutorial: https://triton-lang.org/main/getting-started/tutorials/05-layer-norm.html
+# For the backward pass, we keep weight_grad and bias_grad in registers and accumulate.
+# This is faster for dimensions up to 8k, but after that it's much slower due to register spilling.
+# The models we train have hidden dim up to 8k anyway (e.g. Llama 70B), so this is fine.
+
+from __future__ import annotations
+
+import math
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous
+
+
+def layer_norm_ref(x, weight, bias, residual=None, eps=1e-6, prenorm=False, upcast=False):
+ dtype = x.dtype
+ if upcast:
+ weight = weight.float()
+ bias = bias.float() if bias is not None else None
+ if upcast:
+ x = x.float()
+ residual = residual.float() if residual is not None else residual
+ if residual is not None:
+ x = (x + residual).to(x.dtype)
+ out = F.layer_norm(x.to(weight.dtype), x.shape[-1:], weight=weight, bias=bias, eps=eps).to(
+ dtype
+ )
+ return out if not prenorm else (out, x)
+
+
+def rms_norm_ref(x, weight, bias, residual=None, eps=1e-6, prenorm=False, upcast=False):
+ dtype = x.dtype
+ if upcast:
+ weight = weight.float()
+ bias = bias.float() if bias is not None else None
+ if upcast:
+ x = x.float()
+ residual = residual.float() if residual is not None else residual
+ if residual is not None:
+ x = (x + residual).to(x.dtype)
+ rstd = 1 / torch.sqrt((x.square()).mean(dim=-1, keepdim=True) + eps)
+ out = (x * rstd * weight) + bias if bias is not None else (x * rstd * weight)
+ out = out.to(dtype)
+ return out if not prenorm else (out, x)
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32),
+ ],
+ key=["N", "HAS_RESIDUAL", "STORE_RESIDUAL_OUT", "IS_RMS_NORM", "HAS_BIAS"],
+)
+# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None})
+# @triton.heuristics({"HAS_RESIDUAL": lambda args: args["RESIDUAL"] is not None})
+@triton.jit
+def _layer_norm_fwd_1pass_kernel(
+ X, # pointer to the input
+ O, # pointer to the gate
+ Y, # pointer to the output
+ W, # pointer to the weights
+ B, # pointer to the biases
+ RESIDUAL, # pointer to the residual
+ RESIDUAL_OUT, # pointer to the residual
+ Mean, # pointer to the mean
+ Rstd, # pointer to the 1/std
+ stride_x_row, # how much to increase the pointer when moving by 1 row
+ stride_y_row,
+ stride_res_row,
+ stride_res_out_row,
+ N, # number of columns in X
+ eps, # epsilon to avoid division by zero
+ IS_RMS_NORM: tl.constexpr,
+ BLOCK_N: tl.constexpr,
+ HAS_RESIDUAL: tl.constexpr,
+ STORE_RESIDUAL_OUT: tl.constexpr,
+ HAS_WEIGHT: tl.constexpr,
+ HAS_BIAS: tl.constexpr
+):
+ # Map the program id to the row of X and Y it should compute.
+ row = tl.program_id(0)
+ X += row * stride_x_row
+ Y += row * stride_y_row
+ O += row * stride_x_row
+ if HAS_RESIDUAL:
+ RESIDUAL += row * stride_res_row
+ if STORE_RESIDUAL_OUT:
+ RESIDUAL_OUT += row * stride_res_out_row
+ # Compute mean and variance
+ cols = tl.arange(0, BLOCK_N)
+ x = tl.load(X + cols, mask=cols < N, other=0.0).to(tl.float32)
+ if HAS_RESIDUAL:
+ residual = tl.load(RESIDUAL + cols, mask=cols < N, other=0.0).to(tl.float32)
+ x += residual
+ if STORE_RESIDUAL_OUT:
+ tl.store(RESIDUAL_OUT + cols, x, mask=cols < N)
+ if not IS_RMS_NORM:
+ mean = tl.sum(x, axis=0) / N
+ tl.store(Mean + row, mean)
+ xbar = tl.where(cols < N, x - mean, 0.0)
+ var = tl.sum(xbar * xbar, axis=0) / N
+ else:
+ xbar = tl.where(cols < N, x, 0.0)
+ var = tl.sum(xbar * xbar, axis=0) / N
+ rstd = 1 / tl.sqrt(var + eps)
+ tl.store(Rstd + row, rstd)
+ # Normalize and apply linear transformation
+ mask = cols < N
+ if HAS_WEIGHT:
+ w = tl.load(W + cols, mask=mask).to(tl.float32)
+ if HAS_BIAS:
+ b = tl.load(B + cols, mask=mask).to(tl.float32)
+ x_hat = (x - mean) * rstd if not IS_RMS_NORM else x * rstd
+ y = x_hat * w if HAS_WEIGHT else x_hat
+ if HAS_BIAS:
+ y = y + b
+
+ # Swish output gate
+ o = tl.load(O + cols, mask=cols < N, other=0.0).to(tl.float32)
+ y = y * o * tl.sigmoid(o)
+
+ # Write output
+ tl.store(Y + cols, y, mask=mask)
+
+
+def _layer_norm_fwd(
+ x, o, weight, bias, eps, residual=None, out_dtype=None, residual_dtype=None, is_rms_norm=False
+):
+ if residual is not None:
+ residual_dtype = residual.dtype
+ M, N = x.shape
+ assert x.stride(-1) == 1
+ if residual is not None:
+ assert residual.stride(-1) == 1
+ assert residual.shape == (M, N)
+ if weight is not None:
+ assert weight.shape == (N,)
+ assert weight.stride(-1) == 1
+ if bias is not None:
+ assert bias.stride(-1) == 1
+ assert bias.shape == (N,)
+ # allocate output
+ y = torch.empty_like(x, dtype=x.dtype if out_dtype is None else out_dtype)
+ assert y.stride(-1) == 1
+ if residual is not None or (residual_dtype is not None and residual_dtype != x.dtype):
+ residual_out = torch.empty(M, N, device=x.device, dtype=residual_dtype)
+ assert residual_out.stride(-1) == 1
+ else:
+ residual_out = None
+ mean = torch.empty((M,), dtype=torch.float32, device="cuda") if not is_rms_norm else None
+ rstd = torch.empty((M,), dtype=torch.float32, device="cuda")
+ # Less than 64KB per feature: enqueue fused kernel
+ MAX_FUSED_SIZE = 65536 // x.element_size()
+ BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))
+ if N > BLOCK_N:
+ raise RuntimeError(
+ "This layer norm doesn't support feature dim >= 64KB.")
+ # heuristics for number of warps
+ with torch.cuda.device(x.device.index):
+ _layer_norm_fwd_1pass_kernel[(M,)](
+ x,
+ o,
+ y,
+ weight,
+ bias,
+ residual,
+ residual_out,
+ mean,
+ rstd,
+ x.stride(0),
+ y.stride(0),
+ residual.stride(0) if residual is not None else 0,
+ residual_out.stride(0) if residual_out is not None else 0,
+ N,
+ eps,
+ is_rms_norm,
+ BLOCK_N,
+ residual is not None,
+ residual_out is not None,
+ weight is not None,
+ bias is not None,
+ )
+ # residual_out is None if residual is None and residual_dtype == input_dtype
+ return y, mean, rstd, residual_out if residual_out is not None else x
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32),
+ ],
+ key=["N", "HAS_DRESIDUAL", "STORE_DRESIDUAL", "IS_RMS_NORM", "HAS_BIAS"],
+)
+# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None})
+# @triton.heuristics({"HAS_DRESIDUAL": lambda args: args["DRESIDUAL"] is not None})
+# @triton.heuristics({"STORE_DRESIDUAL": lambda args: args["DRESIDUAL_IN"] is not None})
+@triton.heuristics({"RECOMPUTE_OUTPUT": lambda args: args["Y"] is not None})
+@triton.jit
+def _layer_norm_bwd_kernel(
+ X, # pointer to the input
+ O, # pointer to the gate
+ W, # pointer to the weights
+ B, # pointer to the biases
+ Y, # pointer to the output to be recomputed
+ DY, # pointer to the output gradient
+ DX, # pointer to the input gradient
+ DO, # pointer to the gate gradient
+ DW, # pointer to the partial sum of weights gradient
+ DB, # pointer to the partial sum of biases gradient
+ DRESIDUAL,
+ DRESIDUAL_IN,
+ Mean, # pointer to the mean
+ Rstd, # pointer to the 1/std
+ stride_x_row, # how much to increase the pointer when moving by 1 row
+ stride_y_row,
+ stride_dy_row,
+ stride_dx_row,
+ stride_dres_row,
+ stride_dres_in_row,
+ M, # number of rows in X
+ N, # number of columns in X
+ eps, # epsilon to avoid division by zero
+ rows_per_program,
+ IS_RMS_NORM: tl.constexpr,
+ BLOCK_N: tl.constexpr,
+ HAS_DRESIDUAL: tl.constexpr,
+ STORE_DRESIDUAL: tl.constexpr,
+ HAS_WEIGHT: tl.constexpr,
+ HAS_BIAS: tl.constexpr,
+ RECOMPUTE_OUTPUT: tl.constexpr,
+):
+ # Map the program id to the elements of X, DX, and DY it should compute.
+ row_block_id = tl.program_id(0)
+ row_start = row_block_id * rows_per_program
+ cols = tl.arange(0, BLOCK_N)
+ mask = cols < N
+ X += row_start * stride_x_row
+ O += row_start * stride_x_row
+ if HAS_DRESIDUAL:
+ DRESIDUAL += row_start * stride_dres_row
+ if STORE_DRESIDUAL:
+ DRESIDUAL_IN += row_start * stride_dres_in_row
+ DY += row_start * stride_dy_row
+ DX += row_start * stride_dx_row
+ DO += row_start * stride_dx_row
+ if RECOMPUTE_OUTPUT:
+ Y += row_start * stride_y_row
+ if HAS_WEIGHT:
+ w = tl.load(W + cols, mask=mask).to(tl.float32)
+ dw = tl.zeros((BLOCK_N,), dtype=tl.float32)
+ if RECOMPUTE_OUTPUT and HAS_BIAS:
+ b = tl.load(B + cols, mask=mask, other=0.0).to(tl.float32)
+ if HAS_BIAS:
+ db = tl.zeros((BLOCK_N,), dtype=tl.float32)
+ row_end = min((row_block_id + 1) * rows_per_program, M)
+ for row in range(row_start, row_end):
+ # Load data to SRAM
+ x = tl.load(X + cols, mask=mask, other=0).to(tl.float32)
+ o = tl.load(O + cols, mask=mask, other=0).to(tl.float32)
+ dy = tl.load(DY + cols, mask=mask, other=0).to(tl.float32)
+
+ if not IS_RMS_NORM:
+ mean = tl.load(Mean + row)
+ rstd = tl.load(Rstd + row)
+ # Compute dx
+ xhat = (x - mean) * rstd if not IS_RMS_NORM else x * rstd
+ xhat = tl.where(mask, xhat, 0.0)
+
+ y = xhat * w if HAS_WEIGHT else xhat
+ if HAS_BIAS:
+ y = y + b
+ if RECOMPUTE_OUTPUT:
+ tl.store(Y + cols, y, mask=mask)
+
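+ # Backprop through the swish gate: the forward pass computed out = y * o * sigmoid(o), so
+ # d(out)/d(o) = y * (sigmoid(o) + o * sigmoid(o) * (1 - sigmoid(o))) and d(out)/d(y) = o * sigmoid(o);
+ # below, `do` receives the gate gradient and `dy` is rescaled into the gradient w.r.t. the normalized output.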
+ sigmoid_o = tl.sigmoid(o)
+ do = dy * y * (sigmoid_o + o * sigmoid_o * (1 - sigmoid_o))
+ dy = dy * o * sigmoid_o
+ wdy = dy
+ if HAS_WEIGHT:
+ wdy = dy * w
+ dw += dy * xhat
+ if HAS_BIAS:
+ db += dy
+ if not IS_RMS_NORM:
+ c1 = tl.sum(xhat * wdy, axis=0) / N
+ c2 = tl.sum(wdy, axis=0) / N
+ dx = (wdy - (xhat * c1 + c2)) * rstd
+ else:
+ c1 = tl.sum(xhat * wdy, axis=0) / N
+ dx = (wdy - xhat * c1) * rstd
+ if HAS_DRESIDUAL:
+ dres = tl.load(DRESIDUAL + cols, mask=mask, other=0).to(tl.float32)
+ dx += dres
+ # Write dx
+ if STORE_DRESIDUAL:
+ tl.store(DRESIDUAL_IN + cols, dx, mask=mask)
+ tl.store(DX + cols, dx, mask=mask)
+ tl.store(DO + cols, do, mask=mask)
+
+ X += stride_x_row
+ O += stride_x_row
+ if HAS_DRESIDUAL:
+ DRESIDUAL += stride_dres_row
+ if STORE_DRESIDUAL:
+ DRESIDUAL_IN += stride_dres_in_row
+ if RECOMPUTE_OUTPUT:
+ Y += stride_y_row
+ DY += stride_dy_row
+ DX += stride_dx_row
+ DO += stride_dx_row
+ if HAS_WEIGHT:
+ tl.store(DW + row_block_id * N + cols, dw, mask=mask)
+ if HAS_BIAS:
+ tl.store(DB + row_block_id * N + cols, db, mask=mask)
+
+
+def _layer_norm_bwd(
+ dy,
+ x,
+ o,
+ weight,
+ bias,
+ eps,
+ mean,
+ rstd,
+ dresidual=None,
+ has_residual=False,
+ is_rms_norm=False,
+ x_dtype=None,
+ recompute_output=False,
+):
+ M, N = x.shape
+ assert x.stride(-1) == 1
+ assert dy.stride(-1) == 1
+ assert dy.shape == (M, N)
+ if dresidual is not None:
+ assert dresidual.stride(-1) == 1
+ assert dresidual.shape == (M, N)
+ if weight is not None:
+ assert weight.shape == (N,)
+ assert weight.stride(-1) == 1
+ if bias is not None:
+ assert bias.stride(-1) == 1
+ assert bias.shape == (N,)
+ # allocate output
+ dx = (
+ torch.empty_like(x)
+ if x_dtype is None
+ else torch.empty(M, N, dtype=x_dtype, device=x.device)
+ )
+ do = (
+ torch.empty_like(o)
+ if x_dtype is None
+ else torch.empty(M, N, dtype=x_dtype, device=x.device)
+ )
+ dresidual_in = torch.empty_like(x) if has_residual and dx.dtype != x.dtype else None
+ y = torch.empty(M, N, dtype=dy.dtype, device=dy.device) if recompute_output else None
+
+ # Less than 64KB per feature: enqueue fused kernel
+ MAX_FUSED_SIZE = 65536 // x.element_size()
+ BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))
+ if N > BLOCK_N:
+ raise RuntimeError("This layer norm doesn't support feature dim >= 64KB.")
+ sm_count = torch.cuda.get_device_properties(x.device).multi_processor_count
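+ # Each program accumulates partial weight/bias gradients for its chunk of rows
+ # into a (sm_count, N) buffer, which is reduced with .sum(0) after the launch.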
+ _dw = (
+ torch.empty((sm_count, N), dtype=torch.float32, device=weight.device)
+ if weight is not None
+ else None
+ )
+ _db = (
+ torch.empty((sm_count, N), dtype=torch.float32, device=bias.device)
+ if bias is not None
+ else None
+ )
+ rows_per_program = math.ceil(M / sm_count)
+ grid = (sm_count,)
+ with torch.cuda.device(x.device.index):
+ _layer_norm_bwd_kernel[grid](
+ x,
+ o,
+ weight,
+ bias,
+ y,
+ dy,
+ dx,
+ do,
+ _dw,
+ _db,
+ dresidual,
+ dresidual_in,
+ mean,
+ rstd,
+ x.stride(0),
+ 0 if not recompute_output else y.stride(0),
+ dy.stride(0),
+ dx.stride(0),
+ dresidual.stride(0) if dresidual is not None else 0,
+ dresidual_in.stride(0) if dresidual_in is not None else 0,
+ M,
+ N,
+ eps,
+ rows_per_program,
+ is_rms_norm,
+ BLOCK_N,
+ dresidual is not None,
+ dresidual_in is not None,
+ weight is not None,
+ bias is not None,
+ )
+ dw = _dw.sum(0).to(weight.dtype) if weight is not None else None
+ db = _db.sum(0).to(bias.dtype) if bias is not None else None
+ # Don't need to compute dresidual_in separately in this case
+ if has_residual and dx.dtype == x.dtype:
+ dresidual_in = dx
+ return (dx, do, dw, db, dresidual_in) if not recompute_output else (dx, do, dw, db, dresidual_in, y)
+
+
+class LayerNormSwishGateFn(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ x,
+ o,
+ weight,
+ bias,
+ residual=None,
+ eps=1e-6,
+ prenorm=False,
+ residual_in_fp32=False,
+ is_rms_norm=False,
+ ):
+ x_shape_og = x.shape
+ o_shape_og = o.shape
+ # reshape input data into 2D tensor
+ x = x.reshape(-1, x.shape[-1])
+ o = o.reshape(-1, o.shape[-1])
+ if residual is not None:
+ assert residual.shape == x_shape_og
+ residual = residual.reshape(-1, residual.shape[-1])
+ residual_dtype = (
+ residual.dtype
+ if residual is not None
+ else (torch.float32 if residual_in_fp32 else None)
+ )
+ y, mean, rstd, residual_out = _layer_norm_fwd(
+ x, o, weight, bias, eps, residual, residual_dtype=residual_dtype, is_rms_norm=is_rms_norm
+ )
+ ctx.save_for_backward(residual_out, o, weight, bias, mean, rstd)
+ ctx.x_shape_og = x_shape_og
+ ctx.o_shape_og = o_shape_og
+ ctx.eps = eps
+ ctx.is_rms_norm = is_rms_norm
+ ctx.has_residual = residual is not None
+ ctx.prenorm = prenorm
+ ctx.x_dtype = x.dtype
+ y = y.reshape(x_shape_og)
+ return y if not prenorm else (y, residual_out.reshape(x_shape_og))
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, dy, *args):
+ x, o, weight, bias, mean, rstd = ctx.saved_tensors
+ dy = dy.reshape(-1, dy.shape[-1])
+ assert dy.shape == x.shape
+ if ctx.prenorm:
+ dresidual = args[0]
+ dresidual = dresidual.reshape(-1, dresidual.shape[-1])
+ assert dresidual.shape == x.shape
+ else:
+ dresidual = None
+ dx, do, dw, db, dresidual_in = _layer_norm_bwd(
+ dy,
+ x,
+ o,
+ weight,
+ bias,
+ ctx.eps,
+ mean,
+ rstd,
+ dresidual,
+ ctx.has_residual,
+ ctx.is_rms_norm,
+ x_dtype=ctx.x_dtype,
+ )
+ return (
+ dx.reshape(ctx.x_shape_og),
+ do.reshape(ctx.o_shape_og),
+ dw,
+ db,
+ dresidual_in.reshape(ctx.x_shape_og) if ctx.has_residual else None,
+ None,
+ None,
+ None,
+ None,
+ )
+
+
+class LayerNormSwishGateLinearFn(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ x,
+ o,
+ norm_weight,
+ norm_bias,
+ linear_weight,
+ linear_bias,
+ residual=None,
+ eps=1e-6,
+ prenorm=False,
+ residual_in_fp32=False,
+ is_rms_norm=False,
+ ):
+ x_shape_og = x.shape
+ o_shape_og = o.shape
+ # reshape input data into 2D tensor
+ x = x.reshape(-1, x.shape[-1])
+ o = o.reshape(-1, o.shape[-1])
+ if residual is not None:
+ assert residual.shape == x_shape_og
+ residual = residual.reshape(-1, residual.shape[-1])
+ residual_dtype = (
+ residual.dtype
+ if residual is not None
+ else (torch.float32 if residual_in_fp32 else None)
+ )
+ y, mean, rstd, residual_out = _layer_norm_fwd(
+ x,
+ o,
+ norm_weight,
+ norm_bias,
+ eps,
+ residual,
+ residual_dtype=residual_dtype,
+ is_rms_norm=is_rms_norm
+ )
+ y = y.reshape(x_shape_og)
+ dtype = torch.get_autocast_gpu_dtype() if torch.is_autocast_enabled() else y.dtype
+ linear_weight = linear_weight.to(dtype)
+ linear_bias = linear_bias.to(dtype) if linear_bias is not None else None
+ out = F.linear(y.to(linear_weight.dtype), linear_weight, linear_bias)
+ # We don't store y; it will be recomputed in the backward pass to save memory
+ ctx.save_for_backward(residual_out, o, norm_weight, norm_bias, linear_weight, mean, rstd)
+ ctx.x_shape_og = x_shape_og
+ ctx.o_shape_og = o_shape_og
+ ctx.eps = eps
+ ctx.is_rms_norm = is_rms_norm
+ ctx.has_residual = residual is not None
+ ctx.prenorm = prenorm
+ ctx.x_dtype = x.dtype
+ ctx.linear_bias_is_none = linear_bias is None
+ return out if not prenorm else (out, residual_out.reshape(x_shape_og))
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, dout, *args):
+ x, o, norm_weight, norm_bias, linear_weight, mean, rstd = ctx.saved_tensors
+ dout = dout.reshape(-1, dout.shape[-1])
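+ # Backprop through the output projection first: dy = dout @ linear_weight;
+ # the norm/gate gradients then come from the fused backward below.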
+ dy = F.linear(dout, linear_weight.t())
+ dlinear_bias = None if ctx.linear_bias_is_none else dout.sum(0)
+ assert dy.shape == x.shape
+ if ctx.prenorm:
+ dresidual = args[0]
+ dresidual = dresidual.reshape(-1, dresidual.shape[-1])
+ assert dresidual.shape == x.shape
+ else:
+ dresidual = None
+ dx, do, dnorm_weight, dnorm_bias, dresidual_in, y = _layer_norm_bwd(
+ dy,
+ x,
+ o,
+ norm_weight,
+ norm_bias,
+ ctx.eps,
+ mean,
+ rstd,
+ dresidual=dresidual,
+ has_residual=ctx.has_residual,
+ is_rms_norm=ctx.is_rms_norm,
+ x_dtype=ctx.x_dtype,
+ recompute_output=True,
+ )
+ dlinear_weight = torch.einsum("bo,bi->oi", dout, y)
+ return (
+ dx.reshape(ctx.x_shape_og),
+ do.reshape(ctx.o_shape_og),
+ dnorm_weight,
+ dnorm_bias,
+ dlinear_weight,
+ dlinear_bias,
+ dresidual_in.reshape(ctx.x_shape_og) if ctx.has_residual else None,
+ None,
+ None,
+ None,
+ None,
+ )
+
+
+def layer_norm_swish_gate_fn(
+ x,
+ o,
+ weight,
+ bias,
+ residual=None,
+ prenorm=False,
+ residual_in_fp32=False,
+ eps=1e-6
+):
+ return LayerNormSwishGateFn.apply(
+ x,
+ o,
+ weight,
+ bias,
+ residual,
+ eps,
+ prenorm,
+ residual_in_fp32,
+ False
+ )
+
+
+def rms_norm_swish_gate_fn(
+ x,
+ o,
+ weight,
+ bias,
+ residual=None,
+ prenorm=False,
+ residual_in_fp32=False,
+ eps=1e-6
+):
+ return LayerNormSwishGateFn.apply(
+ x,
+ o,
+ weight,
+ bias,
+ residual,
+ eps,
+ prenorm,
+ residual_in_fp32,
+ True
+ )
+
+
+def layer_norm_swish_gate_linear_fn(
+ x,
+ o,
+ norm_weight,
+ norm_bias,
+ linear_weight,
+ linear_bias,
+ residual=None,
+ prenorm=False,
+ residual_in_fp32=False,
+ eps=1e-6
+):
+ return LayerNormSwishGateLinearFn.apply(
+ x,
+ o,
+ norm_weight,
+ norm_bias,
+ linear_weight,
+ linear_bias,
+ residual,
+ eps,
+ prenorm,
+ residual_in_fp32,
+ False
+ )
+
+
+def rms_norm_swish_gate_linear_fn(
+ x,
+ o,
+ norm_weight,
+ norm_bias,
+ linear_weight,
+ linear_bias,
+ residual=None,
+ prenorm=False,
+ residual_in_fp32=False,
+ eps=1e-6
+):
+ return LayerNormSwishGateLinearFn.apply(
+ x,
+ o,
+ norm_weight,
+ norm_bias,
+ linear_weight,
+ linear_bias,
+ residual,
+ eps,
+ prenorm,
+ residual_in_fp32,
+ True
+ )
+
+
+class FusedLayerNormSwishGate(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size,
+ elementwise_affine: bool = True,
+ eps=1e-5
+ ) -> FusedLayerNormSwishGate:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ self.elementwise_affine = elementwise_affine
+ self.eps = eps
+
+ self.register_parameter("weight", None)
+ self.register_parameter("bias", None)
+ if elementwise_affine:
+ self.weight = nn.Parameter(torch.ones(hidden_size))
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}({self.hidden_size}"
+ if not self.elementwise_affine:
+ s += f", elementwise_affine={self.elementwise_affine}"
+ s += f", eps={self.eps}"
+ s += ")"
+ return s
+
+ def forward(self, x, o, residual=None, prenorm=False, residual_in_fp32=False):
+ return layer_norm_swish_gate_fn(
+ x,
+ o,
+ self.weight,
+ self.bias,
+ residual=residual,
+ eps=self.eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32
+ )
+
+
+class FusedRMSNormSwishGate(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size,
+ elementwise_affine: bool = True,
+ eps=1e-5
+ ) -> FusedRMSNormSwishGate:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ self.elementwise_affine = elementwise_affine
+ self.eps = eps
+
+ self.register_parameter("weight", None)
+ self.register_parameter("bias", None)
+ if elementwise_affine:
+ self.weight = nn.Parameter(torch.ones(hidden_size))
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}({self.hidden_size}"
+ if not self.elementwise_affine:
+ s += f", elementwise_affine={self.elementwise_affine}"
+ s += f", eps={self.eps}"
+ s += ")"
+ return s
+
+ def forward(self, x, o, residual=None, prenorm=False, residual_in_fp32=False):
+ return rms_norm_swish_gate_fn(
+ x,
+ o,
+ self.weight,
+ self.bias,
+ residual=residual,
+ eps=self.eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32
+ )
+
+
+class FusedLayerNormSwishGateLinear(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size,
+ elementwise_affine: bool = True,
+ eps=1e-5
+ ) -> FusedLayerNormSwishGateLinear:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ self.elementwise_affine = elementwise_affine
+ self.eps = eps
+
+ self.register_parameter("weight", None)
+ self.register_parameter("bias", None)
+ if elementwise_affine:
+ self.weight = nn.Parameter(torch.ones(hidden_size))
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}({self.hidden_size}"
+ if not self.elementwise_affine:
+ s += f", elementwise_affine={self.elementwise_affine}"
+ s += f", eps={self.eps}"
+ s += ")"
+ return s
+
+ def forward(self, x, o, weight, bias, residual=None, prenorm=False, residual_in_fp32=False):
+ return layer_norm_swish_gate_linear_fn(
+ x,
+ o,
+ self.weight,
+ self.bias,
+ weight,
+ bias,
+ residual=residual,
+ eps=self.eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32
+ )
+
+
+class FusedRMSNormSwishGateLinear(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size,
+ elementwise_affine: bool = True,
+ eps=1e-5
+ ) -> FusedRMSNormSwishGateLinear:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ self.elementwise_affine = elementwise_affine
+ self.eps = eps
+
+ self.register_parameter("weight", None)
+ self.register_parameter("bias", None)
+ if elementwise_affine:
+ self.weight = nn.Parameter(torch.ones(hidden_size))
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}({self.hidden_size}"
+ if not self.elementwise_affine:
+ s += f", elementwise_affine={self.elementwise_affine}"
+ s += f", eps={self.eps}"
+ s += ")"
+ return s
+
+ def forward(self, x, o, weight, bias, residual=None, prenorm=False, residual_in_fp32=False):
+ return rms_norm_swish_gate_linear_fn(
+ x,
+ o,
+ self.weight,
+ self.bias,
+ weight,
+ bias,
+ residual=residual,
+ eps=self.eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32
+ )
diff --git a/fla/modules/l2norm.py b/fla/modules/l2norm.py
new file mode 100644
index 0000000000000000000000000000000000000000..4206125bb2d6985cd84999420e63b48a84b29ead
--- /dev/null
+++ b/fla/modules/l2norm.py
@@ -0,0 +1,201 @@
+# -*- coding: utf-8 -*-
+
+import torch
+import triton
+import triton.language as tl
+
+
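+# Forward kernel: row-wise L2 normalization, y = x / sqrt(sum(x^2) + eps).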
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32),
+ ],
+ key=["N"],
+)
+# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None})
+# @triton.heuristics({"HAS_RESIDUAL": lambda args: args["RESIDUAL"] is not None})
+@triton.jit
+def _l2_norm_fwd_1pass_kernel(
+ X, # pointer to the input
+ Y, # pointer to the output
+ stride_x_row, # how much to increase the pointer when moving by 1 row
+ N, # number of columns in X
+ eps, # epsilon to avoid division by zero
+ BLOCK_N: tl.constexpr,
+):
+ # Map the program id to the row of X and Y it should compute.
+ row = tl.program_id(0)
+ X += row * stride_x_row
+ Y += row * stride_x_row
+ # Compute the squared L2 norm of the row
+ cols = tl.arange(0, BLOCK_N)
+ x = tl.load(X + cols, mask=cols < N, other=0.0).to(tl.float32)
+ xbar = tl.where(cols < N, x, 0.0)
+ var = tl.sum(xbar * xbar, axis=0)
+ rstd = 1 / tl.sqrt(var + eps)
+ # tl.store(Rstd + row, rstd)
+ # Normalize
+ mask = cols < N
+ y = x * rstd
+ # Write output
+ tl.store(Y + cols, y, mask=mask)
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32),
+ ],
+ key=["N"],
+)
+# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None})
+# @triton.heuristics({"HAS_DRESIDUAL": lambda args: args["DRESIDUAL"] is not None})
+# @triton.heuristics({"STORE_DRESIDUAL": lambda args: args["DRESIDUAL_IN"] is not None})
+# @triton.heuristics({"RECOMPUTE_OUTPUT": lambda args: args["Y"] is not None})
+@triton.jit
+def _l2_norm_bwd_kernel(
+ X, # pointer to the input
+ # Y, # pointer to the output to be recomputed
+ DY, # pointer to the output gradient
+ DX, # pointer to the input gradient
+ stride_x_row, # how much to increase the pointer when moving by 1 row
+ N, # number of columns in X
+ eps, # epsilon to avoid division by zero
+ BLOCK_N: tl.constexpr,
+):
+ # Map the program id to the row of X, DY and DX it should compute.
+ row = tl.program_id(0)
+ X += row * stride_x_row
+ DX += row * stride_x_row
+ DY += row * stride_x_row
+
+ # Y += row * stride_y_row
+ cols = tl.arange(0, BLOCK_N)
+ x = tl.load(X + cols, mask=cols < N, other=0.0).to(tl.float32)
+ x = tl.where(cols < N, x, 0.0)
+ var = tl.sum(x * x)
+ rstd = 1 / tl.sqrt(var + eps)
+ # tl.store(Rstd + row, rstd)
+ # Load dy and compute the input gradient
+ mask = cols < N
+ # y = x * rstd
+ dy = tl.load(DY + cols, mask=cols < N, other=0.0).to(tl.float32)
+ dy = tl.where(cols < N, dy, 0.0)
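+ # With xhat = x * rstd, the line below is equivalent to
+ # dx = rstd * (dy - xhat * sum(dy * xhat)),
+ # i.e. dy with its component along xhat removed, scaled by rstd.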
+ # dx = dy * rstd - tl.sum(dy * x) * (1 / (var+eps)) * rstd * x
+ dx = dy * rstd - tl.sum(dy * x) * (1 / (var+eps)) * rstd * x
+ tl.store(DX + cols, dx, mask=mask)
+
+
+def _l2_norm_fwd(
+ x, eps=1e-6
+):
+ x_shape_og = x.shape
+ x = x.reshape(-1, x.shape[-1])
+ if x.stride(-1) != 1:
+ x = x.contiguous()
+ M, N = x.shape
+ assert x.stride(-1) == 1
+ # allocate output
+ y = torch.empty_like(x)
+ assert y.stride(-1) == 1
+ N = x.shape[-1]
+ M = x.shape[0]
+ # rstd = torch.empty((M,), dtype=torch.float32, device="cuda")
+ # Less than 64KB per feature: enqueue fused kernel
+ MAX_FUSED_SIZE = 65536 // x.element_size()
+ BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))
+ if N > BLOCK_N:
+ raise RuntimeError(
+ "This layer norm doesn't support feature dim >= 64KB.")
+ # heuristics for number of warps
+ with torch.cuda.device(x.device.index):
+ _l2_norm_fwd_1pass_kernel[(M,)](
+ x,
+ y,
+ x.stride(0),
+ N,
+ eps,
+ # is_rms_norm,
+ BLOCK_N,
+ # residual is not None,
+ # residual_out is not None,
+ # bias is not None,
+ )
+ return y.reshape(x_shape_og)
+
+
+def _l2_norm_bwd(
+ x, dy, eps=1e-5,
+):
+ x_shape_og = x.shape
+ x = x.reshape(-1, x.shape[-1])
+ dy = dy.reshape(-1, dy.shape[-1])
+ if dy.stride(-1) != 1:
+ dy = dy.contiguous()
+ assert dy.shape == x.shape
+ # allocate output
+ dx = torch.empty_like(x)
+ N = x.shape[-1]
+ M = x.shape[0]
+ assert x.stride(-1) == 1
+ assert dy.stride(-1) == 1
+ # rstd = torch.empty((M,), dtype=torch.float32, device="cuda")
+ # Less than 64KB per feature: enqueue fused kernel
+ MAX_FUSED_SIZE = 65536 // x.element_size()
+ BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))
+ if N > BLOCK_N:
+ raise RuntimeError(
+ "This layer norm doesn't support feature dim >= 64KB.")
+ # heuristics for number of warps
+ with torch.cuda.device(x.device.index):
+ _l2_norm_bwd_kernel[(M,)](
+ x,
+ dy,
+ dx,
+ x.stride(0),
+ N,
+ eps,
+ BLOCK_N,
+ )
+ return dx.reshape(x_shape_og)
+
+
+class L2NormFunction(torch.autograd.Function):
+
+ @staticmethod
+ def forward(
+ ctx,
+ x,
+ eps=1e-6,
+ ):
+ # reshape input data into 2D tensor
+ y = _l2_norm_fwd(x, eps)
+ ctx.eps = eps
+ ctx.x_dtype = x.dtype
+ ctx.save_for_backward(x)
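+ # Only x is saved; rstd is cheap to recompute from x in the backward kernel.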
+ return y
+
+ @staticmethod
+ def backward(ctx, dy, *args):
+ x, = ctx.saved_tensors
+ dx = _l2_norm_bwd(
+ x,
+ dy,
+ ctx.eps,
+ )
+ return (
+ dx,
+ None
+ )
+
+
+l2_norm = L2NormFunction.apply
diff --git a/fla/modules/layernorm.py b/fla/modules/layernorm.py
new file mode 100644
index 0000000000000000000000000000000000000000..de226f27e71a4ba35605792bd181fb3c21712b9f
--- /dev/null
+++ b/fla/modules/layernorm.py
@@ -0,0 +1,1009 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2023, Tri Dao.
+# https://github.com/state-spaces/mamba/blob/fb7b5310fa865dbd62aa059b1e26f2b431363e2a/mamba_ssm/ops/triton/layernorm.py
+# Implement residual + layer_norm / rms_norm.
+
+# Based on the Triton LayerNorm tutorial: https://triton-lang.org/main/getting-started/tutorials/05-layer-norm.html
+# For the backward pass, we keep weight_grad and bias_grad in registers and accumulate.
+# This is faster for dimensions up to 8k, but after that it's much slower due to register spilling.
+# The models we train have hidden dim up to 8k anyway (e.g. Llama 70B), so this is fine.
+
+from __future__ import annotations
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous
+
+
+def layer_norm_ref(
+ x: torch.Tensor,
+ weight: torch.Tensor,
+ bias: torch.Tensor,
+ residual: torch.Tensor = None,
+ eps: float = 1e-5,
+ prenorm: bool = False,
+ upcast: bool = False
+):
+ dtype = x.dtype
+ if upcast:
+ weight = weight.float()
+ bias = bias.float() if bias is not None else None
+ if upcast:
+ x = x.float()
+ residual = residual.float() if residual is not None else residual
+ if residual is not None:
+ x = (x + residual).to(x.dtype)
+ out = F.layer_norm(x.to(weight.dtype), x.shape[-1:], weight=weight, bias=bias, eps=eps).to(
+ dtype
+ )
+ return out if not prenorm else (out, x)
+
+
+def rms_norm_ref(
+ x: torch.Tensor,
+ weight: torch.Tensor,
+ bias: torch.Tensor,
+ residual: torch.Tensor = None,
+ eps: float = 1e-5,
+ prenorm: bool = False,
+ upcast: bool = False
+):
+ dtype = x.dtype
+ if upcast:
+ weight = weight.float()
+ bias = bias.float() if bias is not None else None
+ if upcast:
+ x = x.float()
+ residual = residual.float() if residual is not None else residual
+ if residual is not None:
+ x = (x + residual).to(x.dtype)
+ rstd = 1 / torch.sqrt((x.square()).mean(dim=-1, keepdim=True) + eps)
+ out = (x * rstd * weight) + bias if bias is not None else (x * rstd * weight)
+ out = out.to(dtype)
+ return out if not prenorm else (out, x)
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32),
+ ],
+ key=["N", "HAS_RESIDUAL", "STORE_RESIDUAL_OUT", "IS_RMS_NORM", "HAS_BIAS"],
+)
+# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None})
+# @triton.heuristics({"HAS_RESIDUAL": lambda args: args["RESIDUAL"] is not None})
+@triton.jit
+def _layer_norm_fwd_1pass_kernel(
+ X, # pointer to the input
+ Y, # pointer to the output
+ W, # pointer to the weights
+ B, # pointer to the biases
+ RESIDUAL, # pointer to the residual
+ RESIDUAL_OUT, # pointer to the residual
+ Mean, # pointer to the mean
+ Rstd, # pointer to the 1/std
+ stride_x_row, # how much to increase the pointer when moving by 1 row
+ stride_y_row,
+ stride_res_row,
+ stride_res_out_row,
+ N, # number of columns in X
+ G, # number of groups
+ eps, # epsilon to avoid division by zero
+ IS_RMS_NORM: tl.constexpr,
+ BLOCK_N: tl.constexpr,
+ HAS_RESIDUAL: tl.constexpr,
+ STORE_RESIDUAL_OUT: tl.constexpr,
+ HAS_WEIGHT: tl.constexpr,
+ HAS_BIAS: tl.constexpr
+):
+ # Map the program id to the row of X and Y it should compute.
+ row = tl.program_id(0)
+ group = row % G
+ X += row * stride_x_row
+ Y += row * stride_y_row
+ if HAS_RESIDUAL:
+ RESIDUAL += row * stride_res_row
+ if STORE_RESIDUAL_OUT:
+ RESIDUAL_OUT += row * stride_res_out_row
+ # Compute mean and variance
+ cols = tl.arange(0, BLOCK_N)
+ x = tl.load(X + cols, mask=cols < N, other=0.0).to(tl.float32)
+ if HAS_RESIDUAL:
+ residual = tl.load(RESIDUAL + cols, mask=cols < N, other=0.0).to(tl.float32)
+ x += residual
+ if STORE_RESIDUAL_OUT:
+ tl.store(RESIDUAL_OUT + cols, x, mask=cols < N)
+ if not IS_RMS_NORM:
+ mean = tl.sum(x, axis=0) / N
+ tl.store(Mean + row, mean)
+ xbar = tl.where(cols < N, x - mean, 0.0)
+ var = tl.sum(xbar * xbar, axis=0) / N
+ else:
+ xbar = tl.where(cols < N, x, 0.0)
+ var = tl.sum(xbar * xbar, axis=0) / N
+ rstd = 1 / tl.sqrt(var + eps)
+ tl.store(Rstd + row, rstd)
+ # Normalize and apply linear transformation
+ mask = cols < N
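+ # Weights/biases are laid out as (G * N,); the wrappers make x contiguous, so
+ # stride_x_row == N and `group * stride_x_row` selects this group's slice.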
+ if HAS_WEIGHT:
+ w = tl.load(W + group * stride_x_row + cols, mask=mask).to(tl.float32)
+ if HAS_BIAS:
+ b = tl.load(B + group * stride_x_row + cols, mask=mask).to(tl.float32)
+ x_hat = (x - mean) * rstd if not IS_RMS_NORM else x * rstd
+
+ y = x_hat * w if HAS_WEIGHT else x_hat
+ if HAS_BIAS:
+ y = y + b
+ # Write output
+ tl.store(Y + cols, y, mask=mask)
+
+
+def _layer_norm_fwd(
+ x,
+ weight,
+ bias,
+ eps,
+ residual=None,
+ out_dtype=None,
+ residual_dtype=None,
+ is_rms_norm=False,
+ num_groups=1
+):
+ if residual is not None:
+ residual_dtype = residual.dtype
+ M, N, G = *x.shape, num_groups
+ if residual is not None:
+ assert residual.shape == (M, N)
+ if weight is not None:
+ assert weight.shape == (G * N,)
+ if bias is not None:
+ assert bias.shape == (G * N,)
+ # allocate output
+ y = torch.empty_like(x, dtype=x.dtype if out_dtype is None else out_dtype)
+ if residual is not None or (residual_dtype is not None and residual_dtype != x.dtype):
+ residual_out = torch.empty(M, N, device=x.device, dtype=residual_dtype)
+ else:
+ residual_out = None
+ mean = torch.empty((M,), dtype=torch.float32, device="cuda") if not is_rms_norm else None
+ rstd = torch.empty((M,), dtype=torch.float32, device="cuda")
+ # Less than 64KB per feature: enqueue fused kernel
+ MAX_FUSED_SIZE = 65536 // x.element_size()
+ BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))
+ if N > BLOCK_N:
+ raise RuntimeError("This layer norm doesn't support feature dim >= 64KB.")
+ # heuristics for number of warps
+ with torch.cuda.device(x.device.index):
+ _layer_norm_fwd_1pass_kernel[(M,)](
+ x,
+ y,
+ weight,
+ bias,
+ residual,
+ residual_out,
+ mean,
+ rstd,
+ x.stride(0),
+ y.stride(0),
+ residual.stride(0) if residual is not None else 0,
+ residual_out.stride(0) if residual_out is not None else 0,
+ N,
+ G,
+ eps,
+ is_rms_norm,
+ BLOCK_N,
+ residual is not None,
+ residual_out is not None,
+ weight is not None,
+ bias is not None,
+ )
+ # residual_out is None when no residual was added and no dtype conversion was needed; fall back to returning x in that case
+ return y, mean, rstd, residual_out if residual_out is not None else x
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32),
+ ],
+ key=["N", "HAS_DRESIDUAL", "STORE_DRESIDUAL", "IS_RMS_NORM", "HAS_BIAS"],
+)
+# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None})
+# @triton.heuristics({"HAS_DRESIDUAL": lambda args: args["DRESIDUAL"] is not None})
+# @triton.heuristics({"STORE_DRESIDUAL": lambda args: args["DRESIDUAL_IN"] is not None})
+@triton.heuristics({"RECOMPUTE_OUTPUT": lambda args: args["Y"] is not None})
+@triton.jit
+def _layer_norm_bwd_kernel(
+ X, # pointer to the input
+ W, # pointer to the weights
+ B, # pointer to the biases
+ Y, # pointer to the output to be recomputed
+ DY, # pointer to the output gradient
+ DX, # pointer to the input gradient
+ DW, # pointer to the partial sum of weights gradient
+ DB, # pointer to the partial sum of biases gradient
+ DRESIDUAL,
+ DRESIDUAL_IN,
+ Mean, # pointer to the mean
+ Rstd, # pointer to the 1/std
+ stride_x_row, # how much to increase the pointer when moving by 1 row
+ stride_y_row,
+ stride_dy_row,
+ stride_dx_row,
+ stride_dres_row,
+ stride_dres_in_row,
+ M, # number of rows in X
+ N, # number of columns in X
+ G, # number of groups
+ rows_per_program,
+ programs_per_group,
+ IS_RMS_NORM: tl.constexpr,
+ BLOCK_N: tl.constexpr,
+ HAS_DRESIDUAL: tl.constexpr,
+ STORE_DRESIDUAL: tl.constexpr,
+ HAS_WEIGHT: tl.constexpr,
+ HAS_BIAS: tl.constexpr,
+ RECOMPUTE_OUTPUT: tl.constexpr,
+):
+ row_block_id = tl.program_id(0)
+ group_id, program_id_in_group = row_block_id // programs_per_group, row_block_id % programs_per_group
+
+ row_start = group_id + program_id_in_group * G * rows_per_program
+ row_end = min(row_start + G * rows_per_program, M)
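+ # Rows are assigned to groups round-robin (row % G == group), so each program
+ # steps through its rows with a stride of G and only ever touches a single
+ # group's weight/bias slice.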
+
+ cols = tl.arange(0, BLOCK_N)
+ mask = cols < N
+
+ if HAS_WEIGHT:
+ w = tl.load(W + group_id * stride_x_row + cols, mask=mask).to(tl.float32)
+ dw = tl.zeros((BLOCK_N,), dtype=tl.float32)
+ if RECOMPUTE_OUTPUT and HAS_BIAS:
+ b = tl.load(B + group_id * stride_x_row + cols, mask=mask, other=0.0).to(tl.float32)
+ if HAS_BIAS:
+ db = tl.zeros((BLOCK_N,), dtype=tl.float32)
+
+ for row in range(row_start, row_end, G):
+ # Load data to SRAM
+ x = tl.load(X + row * stride_x_row + cols, mask=mask, other=0).to(tl.float32)
+ dy = tl.load(DY + row * stride_dy_row + cols, mask=mask, other=0).to(tl.float32)
+ if not IS_RMS_NORM:
+ mean = tl.load(Mean + row)
+ rstd = tl.load(Rstd + row)
+ # Compute dx
+ xhat = (x - mean) * rstd if not IS_RMS_NORM else x * rstd
+ xhat = tl.where(mask, xhat, 0.0)
+ if RECOMPUTE_OUTPUT:
+ y = xhat * w if HAS_WEIGHT else xhat
+ if HAS_BIAS:
+ y = y + b
+ tl.store(Y + row * stride_y_row + cols, y, mask=mask)
+ wdy = dy
+ if HAS_WEIGHT:
+ wdy = dy * w
+ dw += dy * xhat
+ if HAS_BIAS:
+ db += dy
+ if not IS_RMS_NORM:
+ c1 = tl.sum(xhat * wdy, axis=0) / N
+ c2 = tl.sum(wdy, axis=0) / N
+ dx = (wdy - (xhat * c1 + c2)) * rstd
+ else:
+ c1 = tl.sum(xhat * wdy, axis=0) / N
+ dx = (wdy - xhat * c1) * rstd
+ if HAS_DRESIDUAL:
+ dres = tl.load(DRESIDUAL + row * stride_dres_row + cols, mask=mask, other=0).to(tl.float32)
+ dx += dres
+ # Write dx
+ if STORE_DRESIDUAL:
+ tl.store(DRESIDUAL_IN + row * stride_dres_in_row + cols, dx, mask=mask)
+ tl.store(DX + row * stride_dx_row + cols, dx, mask=mask)
+
+ if HAS_WEIGHT:
+ tl.store(DW + row_block_id * N + cols, dw, mask=mask)
+ if HAS_BIAS:
+ tl.store(DB + row_block_id * N + cols, db, mask=mask)
+
+
+def _layer_norm_bwd(
+ dy,
+ x,
+ weight,
+ bias,
+ eps,
+ mean,
+ rstd,
+ dresidual=None,
+ has_residual=False,
+ is_rms_norm=False,
+ x_dtype=None,
+ recompute_output=False,
+ num_groups=1
+):
+ M, N, G = *x.shape, num_groups
+ assert dy.shape == (M, N)
+ if dresidual is not None:
+ assert dresidual.shape == (M, N)
+ if weight is not None:
+ assert weight.shape == (G * N,)
+ if bias is not None:
+ assert bias.shape == (G * N,)
+ # allocate output
+ dx = torch.empty_like(x) if x_dtype is None else torch.empty(M, N, dtype=x_dtype, device=x.device)
+ dresidual_in = torch.empty_like(x) if has_residual and dx.dtype != x.dtype else None
+ y = torch.empty(M, N, dtype=dy.dtype, device=dy.device) if recompute_output else None
+
+ # Less than 64KB per feature: enqueue fused kernel
+ MAX_FUSED_SIZE = 65536 // x.element_size()
+ BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))
+ if N > BLOCK_N:
+ raise RuntimeError("This layer norm doesn't support feature dim >= 64KB.")
+ # each program handles one group only
+ S = triton.cdiv(torch.cuda.get_device_properties(x.device).multi_processor_count, G) * G
+ dw = torch.empty((S, N), dtype=torch.float32, device=weight.device) if weight is not None else None
+ db = torch.empty((S, N), dtype=torch.float32, device=bias.device) if bias is not None else None
+ rows_per_program = triton.cdiv(M, S)
+ programs_per_group = S // G
+ grid = (S,)
+ with torch.cuda.device(x.device.index):
+ _layer_norm_bwd_kernel[grid](
+ x,
+ weight,
+ bias,
+ y,
+ dy,
+ dx,
+ dw,
+ db,
+ dresidual,
+ dresidual_in,
+ mean,
+ rstd,
+ x.stride(0),
+ 0 if not recompute_output else y.stride(0),
+ dy.stride(0),
+ dx.stride(0),
+ dresidual.stride(0) if dresidual is not None else 0,
+ dresidual_in.stride(0) if dresidual_in is not None else 0,
+ M,
+ N,
+ G,
+ rows_per_program,
+ programs_per_group,
+ is_rms_norm,
+ BLOCK_N,
+ dresidual is not None,
+ dresidual_in is not None,
+ weight is not None,
+ bias is not None,
+ )
+ dw = dw.view(G, -1, N).sum(1).to(weight).view_as(weight) if weight is not None else None
+ db = db.view(G, -1, N).sum(1).to(bias).view_as(bias) if bias is not None else None
+ # Don't need to compute dresidual_in separately in this case
+ if has_residual and dx.dtype == x.dtype:
+ dresidual_in = dx
+ return (dx, dw, db, dresidual_in) if not recompute_output else (dx, dw, db, dresidual_in, y)
+
+
+class LayerNormFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ x,
+ weight,
+ bias,
+ residual=None,
+ eps=1e-5,
+ prenorm=False,
+ residual_in_fp32=False,
+ is_rms_norm=False,
+ num_groups=1
+ ):
+ x_shape_og = x.shape
+
+ if x.shape[-1] % num_groups != 0:
+ raise ValueError('num_channels must be divisible by num_groups')
+ # reshape input data into 2D tensor
+ x = x.reshape(-1, (x.shape[-1] // num_groups))
+ if residual is not None:
+ assert residual.shape == x_shape_og
+ residual = residual.reshape_as(x)
+ residual_dtype = (
+ residual.dtype
+ if residual is not None
+ else (torch.float32 if residual_in_fp32 else None)
+ )
+ y, mean, rstd, residual_out = _layer_norm_fwd(
+ x, weight, bias, eps, residual,
+ residual_dtype=residual_dtype,
+ is_rms_norm=is_rms_norm,
+ num_groups=num_groups
+ )
+ ctx.save_for_backward(residual_out, weight, bias, mean, rstd)
+ ctx.x_shape_og = x_shape_og
+ ctx.eps = eps
+ ctx.is_rms_norm = is_rms_norm
+ ctx.num_groups = num_groups
+ ctx.has_residual = residual is not None
+ ctx.prenorm = prenorm
+ ctx.x_dtype = x.dtype
+ y = y.reshape(x_shape_og)
+ return y if not prenorm else (y, residual_out.reshape(x_shape_og))
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, dy, *args):
+ x, weight, bias, mean, rstd = ctx.saved_tensors
+ dy = dy.reshape(-1, (dy.shape[-1] // ctx.num_groups))
+ assert dy.shape == x.shape
+ if ctx.prenorm:
+ dresidual = args[0]
+ dresidual = dresidual.reshape(-1, x.shape[-1])
+ assert dresidual.shape == x.shape
+ else:
+ dresidual = None
+ dx, dw, db, dresidual_in = _layer_norm_bwd(
+ dy,
+ x,
+ weight,
+ bias,
+ ctx.eps,
+ mean,
+ rstd,
+ dresidual,
+ ctx.has_residual,
+ ctx.is_rms_norm,
+ x_dtype=ctx.x_dtype,
+ num_groups=ctx.num_groups
+ )
+ return (
+ dx.reshape(ctx.x_shape_og),
+ dw,
+ db,
+ dresidual_in.reshape(ctx.x_shape_og) if ctx.has_residual else None,
+ None,
+ None,
+ None,
+ None,
+ None
+ )
+
+
+def layer_norm(
+ x: torch.Tensor,
+ weight: torch.Tensor,
+ bias: torch.Tensor,
+ residual: torch.Tensor = None,
+ eps: float = 1e-5,
+ prenorm: bool = False,
+ residual_in_fp32: bool = False,
+ is_rms_norm: bool = False
+):
+ return LayerNormFunction.apply(
+ x,
+ weight,
+ bias,
+ residual,
+ eps,
+ prenorm,
+ residual_in_fp32,
+ is_rms_norm
+ )
+
+
+def group_norm(
+ x: torch.Tensor,
+ weight: torch.Tensor,
+ bias: torch.Tensor,
+ residual: torch.Tensor = None,
+ eps: float = 1e-5,
+ prenorm: bool = False,
+ residual_in_fp32: bool = False,
+ is_rms_norm: bool = False,
+ num_groups: int = 1
+):
+ return LayerNormFunction.apply(
+ x,
+ weight,
+ bias,
+ residual,
+ eps,
+ prenorm,
+ residual_in_fp32,
+ is_rms_norm,
+ num_groups
+ )
+
+
+def rms_norm(
+ x: torch.Tensor,
+ weight: torch.Tensor,
+ bias: torch.Tensor,
+ residual: torch.Tensor = None,
+ eps: float = 1e-5,
+ prenorm: bool = False,
+ residual_in_fp32: bool = False
+):
+ return LayerNormFunction.apply(
+ x,
+ weight,
+ bias,
+ residual,
+ eps,
+ prenorm,
+ residual_in_fp32,
+ True
+ )
+
+
+class LayerNorm(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ elementwise_affine: bool = True,
+ bias: bool = False,
+ eps: float = 1e-5
+ ) -> LayerNorm:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ self.elementwise_affine = elementwise_affine
+ self.eps = eps
+
+ self.register_parameter("weight", None)
+ self.register_parameter("bias", None)
+ if elementwise_affine:
+ self.weight = nn.Parameter(torch.ones(hidden_size))
+ if bias:
+ self.bias = nn.Parameter(torch.zeros(hidden_size))
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}({self.hidden_size}"
+ if not self.elementwise_affine:
+ s += f", elementwise_affine={self.elementwise_affine}"
+ s += f", eps={self.eps}"
+ s += ")"
+ return s
+
+ def forward(self, x, residual=None, prenorm=False, residual_in_fp32=False):
+ return layer_norm(
+ x,
+ self.weight,
+ self.bias,
+ residual=residual,
+ eps=self.eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32
+ )
+
+
+class GroupNorm(nn.Module):
+
+ def __init__(
+ self,
+ num_groups: int,
+ hidden_size: int,
+ elementwise_affine: bool = True,
+ bias: bool = False,
+ eps: float = 1e-5
+ ) -> GroupNorm:
+ super().__init__()
+
+ if hidden_size % num_groups != 0:
+ raise ValueError('num_channels must be divisible by num_groups')
+
+ self.num_groups = num_groups
+ self.hidden_size = hidden_size
+ self.elementwise_affine = elementwise_affine
+ self.eps = eps
+
+ self.register_parameter("weight", None)
+ self.register_parameter("bias", None)
+ if elementwise_affine:
+ self.weight = nn.Parameter(torch.ones(hidden_size))
+ if bias:
+ self.bias = nn.Parameter(torch.zeros(hidden_size))
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}({self.num_groups}, {self.hidden_size}"
+ if not self.elementwise_affine:
+ s += f", elementwise_affine={self.elementwise_affine}"
+ s += f", eps={self.eps}"
+ s += ")"
+ return s
+
+ def forward(self, x, residual=None, prenorm=False, residual_in_fp32=False):
+ return group_norm(
+ x,
+ self.weight,
+ self.bias,
+ residual=residual,
+ eps=self.eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32,
+ num_groups=self.num_groups
+ )
+
+
+class RMSNorm(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size: int,
+ elementwise_affine: bool = True,
+ bias: bool = False,
+ eps: float = 1e-5
+ ) -> RMSNorm:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ self.elementwise_affine = elementwise_affine
+ self.eps = eps
+
+ self.register_parameter("weight", None)
+ self.register_parameter("bias", None)
+ if elementwise_affine:
+ self.weight = nn.Parameter(torch.ones(hidden_size))
+ if bias:
+ self.bias = nn.Parameter(torch.zeros(hidden_size))
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}({self.hidden_size}"
+ if not self.elementwise_affine:
+ s += f", elementwise_affine={self.elementwise_affine}"
+ s += f", eps={self.eps}"
+ s += ")"
+ return s
+
+ def forward(self, x, residual=None, prenorm=False, residual_in_fp32=False):
+ return rms_norm(
+ x,
+ self.weight,
+ self.bias,
+ residual=residual,
+ eps=self.eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32,
+ )
+
+
+class LayerNormLinearFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ x,
+ norm_weight,
+ norm_bias,
+ linear_weight,
+ linear_bias,
+ residual=None,
+ eps=1e-5,
+ prenorm=False,
+ residual_in_fp32=False,
+ is_rms_norm=False,
+ num_groups=1
+ ):
+ x_shape_og = x.shape
+
+ if x.shape[-1] % num_groups != 0:
+ raise ValueError('num_channels must be divisible by num_groups')
+ # reshape input data into 2D tensor
+ x = x.reshape(-1, (x.shape[-1] // num_groups))
+ if residual is not None:
+ assert residual.shape == x_shape_og
+ residual = residual.reshape_as(x)
+ residual_dtype = (
+ residual.dtype
+ if residual is not None
+ else (torch.float32 if residual_in_fp32 else None)
+ )
+ y, mean, rstd, residual_out = _layer_norm_fwd(
+ x,
+ norm_weight,
+ norm_bias,
+ eps,
+ residual,
+ out_dtype=None if not torch.is_autocast_enabled() else torch.get_autocast_gpu_dtype(),
+ residual_dtype=residual_dtype,
+ is_rms_norm=is_rms_norm,
+ num_groups=num_groups
+ )
+ y = y.reshape(x_shape_og)
+ dtype = torch.get_autocast_gpu_dtype() if torch.is_autocast_enabled() else y.dtype
+ linear_weight = linear_weight.to(dtype)
+ linear_bias = linear_bias.to(dtype) if linear_bias is not None else None
+ out = F.linear(y.to(linear_weight.dtype), linear_weight, linear_bias)
+ # We don't store y; it will be recomputed in the backward pass to save memory
+ ctx.save_for_backward(residual_out, norm_weight, norm_bias, linear_weight, mean, rstd)
+ ctx.x_shape_og = x_shape_og
+ ctx.eps = eps
+ ctx.is_rms_norm = is_rms_norm
+ ctx.num_groups = num_groups
+ ctx.has_residual = residual is not None
+ ctx.prenorm = prenorm
+ ctx.x_dtype = x.dtype
+ ctx.linear_bias_is_none = linear_bias is None
+ return out if not prenorm else (out, residual_out.reshape(x_shape_og))
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, dout, *args):
+ x, norm_weight, norm_bias, linear_weight, mean, rstd = ctx.saved_tensors
+ dout = dout.reshape(-1, dout.shape[-1])
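+ # Backprop through the output projection first: dy = dout @ linear_weight;
+ # y is recomputed in the fused backward to form the linear weight gradient.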
+ dy = F.linear(dout, linear_weight.t())
+ dy = dy.reshape(-1, (dy.shape[-1] // ctx.num_groups))
+ dlinear_bias = None if ctx.linear_bias_is_none else dout.sum(0)
+ assert dy.shape == x.shape
+ if ctx.prenorm:
+ dresidual = args[0]
+ dresidual = dresidual.reshape(-1, x.shape[-1])
+ assert dresidual.shape == x.shape
+ else:
+ dresidual = None
+ dx, dnorm_weight, dnorm_bias, dresidual_in, y = _layer_norm_bwd(
+ dy,
+ x,
+ norm_weight,
+ norm_bias,
+ ctx.eps,
+ mean,
+ rstd,
+ dresidual,
+ ctx.has_residual,
+ ctx.is_rms_norm,
+ x_dtype=ctx.x_dtype,
+ recompute_output=True,
+ num_groups=ctx.num_groups
+ )
+ dlinear_weight = torch.einsum("bo,bi->oi", dout, y.view(-1, linear_weight.shape[-1]))
+ return (
+ dx.reshape(ctx.x_shape_og),
+ dnorm_weight,
+ dnorm_bias,
+ dlinear_weight,
+ dlinear_bias,
+ dresidual_in.reshape(ctx.x_shape_og) if ctx.has_residual else None,
+ None,
+ None,
+ None,
+ None,
+ None
+ )
+
+
+class LayerNormLinear(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size,
+ elementwise_affine: bool = True,
+ bias: bool = False,
+ eps: float = 1e-5
+ ) -> LayerNormLinear:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ self.elementwise_affine = elementwise_affine
+ self.eps = eps
+
+ self.register_parameter("weight", None)
+ self.register_parameter("bias", None)
+ if elementwise_affine:
+ self.weight = nn.Parameter(torch.ones(hidden_size))
+ if bias:
+ self.bias = nn.Parameter(torch.zeros(hidden_size))
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}({self.hidden_size}"
+ if not self.elementwise_affine:
+ s += f", elementwise_affine={self.elementwise_affine}"
+ s += f", eps={self.eps}"
+ s += ")"
+ return s
+
+ def forward(self, x, weight, bias, residual=None, prenorm=False, residual_in_fp32=False):
+ return layer_norm_linear(
+ x=x,
+ norm_weight=self.weight,
+ norm_bias=self.bias,
+ linear_weight=weight,
+ linear_bias=bias,
+ residual=residual,
+ eps=self.eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32,
+ is_rms_norm=False
+ )
+
+
+class GroupNormLinear(nn.Module):
+
+ def __init__(
+ self,
+ num_groups: int,
+ hidden_size: int,
+ elementwise_affine: bool = True,
+ bias: bool = False,
+ eps: float = 1e-5
+ ) -> GroupNormLinear:
+ super().__init__()
+
+ if hidden_size % num_groups != 0:
+ raise ValueError('num_channels must be divisible by num_groups')
+
+ self.num_groups = num_groups
+ self.hidden_size = hidden_size
+ self.elementwise_affine = elementwise_affine
+ self.eps = eps
+
+ self.register_parameter("weight", None)
+ self.register_parameter("bias", None)
+ if elementwise_affine:
+ self.weight = nn.Parameter(torch.ones(hidden_size))
+ if bias:
+ self.bias = nn.Parameter(torch.zeros(hidden_size))
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}({self.num_groups}, {self.hidden_size}"
+ if not self.elementwise_affine:
+ s += f", elementwise_affine={self.elementwise_affine}"
+ s += f", eps={self.eps}"
+ s += ")"
+ return s
+
+ def forward(self, x, weight, bias, residual=None, prenorm=False, residual_in_fp32=False):
+ return layer_norm_linear(
+ x=x,
+ norm_weight=self.weight,
+ norm_bias=self.bias,
+ linear_weight=weight,
+ linear_bias=bias,
+ residual=residual,
+ eps=self.eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32,
+ is_rms_norm=False,
+ num_groups=self.num_groups
+ )
+
+
+class RMSNormLinear(nn.Module):
+
+ def __init__(
+ self,
+ hidden_size,
+ elementwise_affine: bool = True,
+ bias: bool = False,
+ eps: float = 1e-5
+ ) -> RMSNormLinear:
+ super().__init__()
+
+ self.hidden_size = hidden_size
+ self.elementwise_affine = elementwise_affine
+ self.eps = eps
+
+ self.register_parameter("weight", None)
+ self.register_parameter("bias", None)
+ if elementwise_affine:
+ self.weight = nn.Parameter(torch.ones(hidden_size))
+ if bias:
+ self.bias = nn.Parameter(torch.zeros(hidden_size))
+
+ def __repr__(self) -> str:
+ s = f"{self.__class__.__name__}({self.hidden_size}"
+ if not self.elementwise_affine:
+ s += f", elementwise_affine={self.elementwise_affine}"
+ s += f", eps={self.eps}"
+ s += ")"
+ return s
+
+ def forward(self, x, weight, bias, residual=None, prenorm=False, residual_in_fp32=False):
+ return layer_norm_linear(
+ x=x,
+ norm_weight=self.weight,
+ norm_bias=self.bias,
+ linear_weight=weight,
+ linear_bias=bias,
+ residual=residual,
+ eps=self.eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32,
+ is_rms_norm=True
+ )
+
+
+def layer_norm_linear(
+ x: torch.Tensor,
+ norm_weight: torch.Tensor,
+ norm_bias: torch.Tensor,
+ linear_weight: torch.Tensor,
+ linear_bias: torch.Tensor,
+ residual: torch.Tensor = None,
+ eps: float = 1e-5,
+ prenorm: bool = False,
+ residual_in_fp32: bool = False,
+ is_rms_norm: bool = False,
+ num_groups: int = 1
+):
+ return LayerNormLinearFunction.apply(
+ x,
+ norm_weight,
+ norm_bias,
+ linear_weight,
+ linear_bias,
+ residual,
+ eps,
+ prenorm,
+ residual_in_fp32,
+ is_rms_norm,
+ num_groups
+ )
+
+
+def rms_norm_linear(
+ x: torch.Tensor,
+ norm_weight: torch.Tensor,
+ norm_bias: torch.Tensor,
+ linear_weight: torch.Tensor,
+ linear_bias: torch.Tensor,
+ residual: torch.Tensor = None,
+ eps: float = 1e-5,
+ prenorm: bool = False,
+ residual_in_fp32: bool = False
+):
+ return layer_norm_linear(
+ x=x,
+ norm_weight=norm_weight,
+ norm_bias=norm_bias,
+ linear_weight=linear_weight,
+ linear_bias=linear_bias,
+ residual=residual,
+ eps=eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32,
+ is_rms_norm=True
+ )
+
+
+def group_norm_linear(
+ x: torch.Tensor,
+ norm_weight: torch.Tensor,
+ norm_bias: torch.Tensor,
+ linear_weight: torch.Tensor,
+ linear_bias: torch.Tensor,
+ residual: torch.Tensor = None,
+ eps: float = 1e-5,
+ prenorm: bool = False,
+ residual_in_fp32: bool = False,
+ is_rms_norm: bool = False,
+ num_groups: int = 1
+):
+ return layer_norm_linear(
+ x=x,
+ norm_weight=norm_weight,
+ norm_bias=norm_bias,
+ linear_weight=linear_weight,
+ linear_bias=linear_bias,
+ residual=residual,
+ eps=eps,
+ prenorm=prenorm,
+ residual_in_fp32=residual_in_fp32,
+ is_rms_norm=is_rms_norm,
+ num_groups=num_groups
+ )
diff --git a/fla/modules/layernorm_gated.py b/fla/modules/layernorm_gated.py
new file mode 100644
index 0000000000000000000000000000000000000000..5faf02f4d84169901f800152509d494868aac49c
--- /dev/null
+++ b/fla/modules/layernorm_gated.py
@@ -0,0 +1,447 @@
+# Copyright (c) 2024, Tri Dao.
+# Based on the Triton LayerNorm tutorial: https://triton-lang.org/main/getting-started/tutorials/05-layer-norm.html
+# For the backward pass, we keep weight_grad and bias_grad in registers and accumulate.
+# This backward pass is faster for dimensions up to 8k, but after that it's much slower due to register spilling.
+# The models we train have hidden dim up to 8k anyway (e.g. Llama 70B), so this is fine.
+
+import math
+
+import torch
+import torch.nn.functional as F
+import triton
+import triton.language as tl
+from einops import rearrange
+
+
+def rms_norm_ref(x, weight, bias, z=None, eps=1e-6, group_size=None, norm_before_gate=True, upcast=True):
+ dtype = x.dtype
+ weight = weight.float()
+ bias = bias.float() if bias is not None else None
+ if upcast:
+ x = x.float()
+ z = z.float() if z is not None else z
+ if z is not None and not norm_before_gate:
+ x = x * F.silu(z)
+ if group_size is None:
+ rstd = 1 / torch.sqrt((x.square()).mean(dim=-1, keepdim=True) + eps)
+ out = (x * rstd * weight) + bias if bias is not None else (x * rstd * weight)
+ else:
+ x_group = rearrange(x, "... (g d) -> ... g d", d=group_size)
+ rstd = 1 / torch.sqrt((x_group.square()).mean(dim=-1, keepdim=True) + eps)
+ out = rearrange(x_group * rstd, "... g d -> ... (g d)") * weight
+ if bias is not None:
+ out = out + bias
+ if z is not None and norm_before_gate:
+ out *= F.silu(z)
+ return out.to(dtype)
+
+
+@triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None})
+@triton.heuristics({"HAS_Z": lambda args: args["Z"] is not None})
+@triton.jit
+def _layer_norm_fwd_1pass_kernel(
+ X, # pointer to the input
+ Y, # pointer to the output
+ W, # pointer to the weights
+ B, # pointer to the biases
+ Z, # pointer to the other branch
+ Mean, # pointer to the mean
+ Rstd, # pointer to the 1/std
+ stride_x_row, # how much to increase the pointer when moving by 1 row
+ stride_y_row,
+ stride_z_row,
+ M, # number of rows in X
+ N, # number of columns in X
+ eps, # epsilon to avoid division by zero
+ BLOCK_N: tl.constexpr,
+ HAS_BIAS: tl.constexpr,
+ HAS_Z: tl.constexpr,
+ NORM_BEFORE_GATE: tl.constexpr,
+ IS_RMS_NORM: tl.constexpr,
+):
+ # Map the program id to the row of X and Y it should compute.
+ row = tl.program_id(0)
+ group = tl.program_id(1)
+ X += row * stride_x_row + group * N
+ Y += row * stride_y_row + group * N
+ if HAS_Z:
+ Z += row * stride_z_row + group * N
+ if not IS_RMS_NORM:
+ Mean += group * M
+ Rstd += group * M
+ W += group * N
+ if HAS_BIAS:
+ B += group * N
+ # Compute mean and variance
+ cols = tl.arange(0, BLOCK_N)
+ x = tl.load(X + cols, mask=cols < N, other=0.).to(tl.float32)
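+ # When the gate is applied before the norm, fold silu(z) into x before the
+ # statistics are computed.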
+ if HAS_Z and not NORM_BEFORE_GATE:
+ z = tl.load(Z + cols, mask=cols < N).to(tl.float32)
+ x *= z * tl.sigmoid(z)
+ if not IS_RMS_NORM:
+ mean = tl.sum(x, axis=0) / N
+ tl.store(Mean + row, mean)
+ xbar = tl.where(cols < N, x - mean, 0.)
+ var = tl.sum(xbar * xbar, axis=0) / N
+ else:
+ xbar = tl.where(cols < N, x, 0.)
+ var = tl.sum(xbar * xbar, axis=0) / N
+ rstd = 1 / tl.sqrt(var + eps)
+ tl.store(Rstd + row, rstd)
+ # Normalize and apply linear transformation
+ mask = cols < N
+ w = tl.load(W + cols, mask=mask).to(tl.float32)
+ if HAS_BIAS:
+ b = tl.load(B + cols, mask=mask).to(tl.float32)
+ x_hat = (x - mean) * rstd if not IS_RMS_NORM else x * rstd
+ y = x_hat * w + b if HAS_BIAS else x_hat * w
+ if HAS_Z and NORM_BEFORE_GATE:
+ z = tl.load(Z + cols, mask=mask).to(tl.float32)
+ y *= z * tl.sigmoid(z)
+ # Write output
+ tl.store(Y + cols, y, mask=mask)
+
+
+def _layer_norm_fwd(x, weight, bias, eps, z=None, out=None, group_size=None, norm_before_gate=True, is_rms_norm=False):
+ M, N = x.shape
+ if group_size is None:
+ group_size = N
+ assert N % group_size == 0
+ ngroups = N // group_size
+ assert x.stride(-1) == 1
+ if z is not None:
+ assert z.stride(-1) == 1
+ assert z.shape == (M, N)
+ assert weight.shape == (N,)
+ assert weight.stride(-1) == 1
+ if bias is not None:
+ assert bias.stride(-1) == 1
+ assert bias.shape == (N,)
+ # allocate output
+ if out is not None:
+ assert out.shape == x.shape
+ else:
+ out = torch.empty_like(x)
+ assert out.stride(-1) == 1
+ mean = torch.empty((ngroups * M, ), dtype=torch.float32, device=x.device) if not is_rms_norm else None
+ rstd = torch.empty((ngroups * M, ), dtype=torch.float32, device=x.device)
+ # Less than 64KB per feature: enqueue fused kernel
+ MAX_FUSED_SIZE = 65536 // x.element_size()
+ BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(group_size))
+ if group_size > BLOCK_N:
+ raise RuntimeError("This layer norm doesn't support feature dim >= 64KB.")
+ # heuristics for number of warps
+ num_warps = min(max(BLOCK_N // 256, 1), 8)
+ grid = (M, ngroups)
+ with torch.cuda.device(x.device.index):
+ _layer_norm_fwd_1pass_kernel[grid](x, out, weight, bias, z, mean, rstd,
+ x.stride(0), out.stride(0), z.stride(0) if z is not None else 0,
+ M, group_size, eps,
+ BLOCK_N=BLOCK_N,
+ NORM_BEFORE_GATE=norm_before_gate,
+ IS_RMS_NORM=is_rms_norm,
+ num_warps=num_warps)
+ return out, mean, rstd
+
+
+@triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None})
+@triton.heuristics({"HAS_Z": lambda args: args["Z"] is not None})
+@triton.heuristics({"RECOMPUTE_OUTPUT": lambda args: args["Y"] is not None})
+@triton.jit
+def _layer_norm_bwd_kernel(
+ X, # pointer to the input
+ W, # pointer to the weights
+ B, # pointer to the biases
+ Z, # pointer to the other branch
+ Y, # pointer to the output to be recomputed
+ DY, # pointer to the output gradient
+ DX, # pointer to the input gradient
+ DW, # pointer to the partial sum of weights gradient
+ DB, # pointer to the partial sum of biases gradient
+ DZ, # pointer to the other branch
+ Mean, # pointer to the mean
+ Rstd, # pointer to the 1/std
+ stride_x_row, # how much to increase the pointer when moving by 1 row
+ stride_z_row,
+ stride_y_row,
+ stride_dy_row,
+ stride_dx_row,
+ stride_dz_row,
+ stride_dw_row,
+ stride_db_row,
+ M, # number of rows in X
+ N, # number of columns in X
+ eps, # epsilon to avoid division by zero
+ rows_per_program,
+ NORM_BEFORE_GATE: tl.constexpr,
+ IS_RMS_NORM: tl.constexpr,
+ HAS_BIAS: tl.constexpr,
+ HAS_Z: tl.constexpr,
+ RECOMPUTE_OUTPUT: tl.constexpr,
+ BLOCK_N: tl.constexpr,
+):
+ # Map the program id to the elements of X, DX, and DY it should compute.
+ row_block_id = tl.program_id(0)
+ group = tl.program_id(1)
+ row_start = row_block_id * rows_per_program
+ cols = tl.arange(0, BLOCK_N)
+ mask = cols < N
+ X += row_start * stride_x_row + group * N
+ if HAS_Z:
+ Z += row_start * stride_z_row + group * N
+ DZ += row_start * stride_dz_row + group * N
+ DY += row_start * stride_dy_row + group * N
+ DX += row_start * stride_dx_row + group * N
+ if RECOMPUTE_OUTPUT:
+ Y += row_start * stride_y_row + group * N
+ if not IS_RMS_NORM:
+ Mean += group * M
+ Rstd += group * M
+ W += group * N
+ w = tl.load(W + cols, mask=mask).to(tl.float32)
+ if (RECOMPUTE_OUTPUT or HAS_Z) and HAS_BIAS:
+ B += group * N
+ b = tl.load(B + cols, mask=mask, other=0.).to(tl.float32)
+ dw = tl.zeros((BLOCK_N,), dtype=tl.float32)
+ if HAS_BIAS:
+ db = tl.zeros((BLOCK_N,), dtype=tl.float32)
+ row_end = min((row_block_id + 1) * rows_per_program, M)
+ for row in range(row_start, row_end):
+ # Load data to SRAM
+ x = tl.load(X + cols, mask=mask, other=0).to(tl.float32)
+ dy = tl.load(DY + cols, mask=mask, other=0).to(tl.float32)
+ if not IS_RMS_NORM:
+ mean = tl.load(Mean + row)
+ if HAS_Z and not NORM_BEFORE_GATE:
+ z = tl.load(Z + cols, mask=mask, other=0.).to(tl.float32)
+ x_og = x
+ x = x_og * z * tl.sigmoid(z)
+ rstd = tl.load(Rstd + row)
+ # Compute dx
+ xhat = (x - mean) * rstd if not IS_RMS_NORM else x * rstd
+ xhat = tl.where(mask, xhat, 0.)
+ if HAS_Z and NORM_BEFORE_GATE:
+ z = tl.load(Z + cols, mask=mask, other=0.).to(tl.float32)
+ z_sigmoid = tl.sigmoid(z)
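+ # out = y * silu(z); silu'(z) = sigmoid(z) * (1 + z * (1 - sigmoid(z))),
+ # so dz = dy * y * silu'(z), and dy is rescaled by silu(z) below.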
+ y = xhat * w + b if HAS_BIAS else xhat * w
+ if RECOMPUTE_OUTPUT:
+ tl.store(Y + cols, y * z * z_sigmoid, mask=mask)
+ dz = dy * y * z_sigmoid * (1 + z * (1 - z_sigmoid))
+ tl.store(DZ + cols, dz, mask=mask)
+ dy *= z * z_sigmoid
+ else:
+ if RECOMPUTE_OUTPUT:
+ y = xhat * w + b if HAS_BIAS else xhat * w
+ tl.store(Y + cols, y, mask=mask)
+ wdy = w * dy
+ c1 = tl.sum(xhat * wdy, axis=0) / N
+ if not IS_RMS_NORM:
+ c2 = tl.sum(wdy, axis=0) / N
+ dx = (wdy - (xhat * c1 + c2)) * rstd
+ else:
+ dx = (wdy - xhat * c1) * rstd
+ dw += dy * xhat
+ if HAS_BIAS:
+ db += dy
+ if HAS_Z and not NORM_BEFORE_GATE:
+ z_sigmoid = tl.sigmoid(z)
+ dz = dx * x_og * z_sigmoid * (1 + z * (1 - z_sigmoid))
+ tl.store(DZ + cols, dz, mask=mask)
+ dx *= z * z_sigmoid
+ # Write dx
+ tl.store(DX + cols, dx, mask=mask)
+
+ X += stride_x_row
+ if HAS_Z:
+ Z += stride_z_row
+ DZ += stride_dz_row
+ if RECOMPUTE_OUTPUT:
+ Y += stride_y_row
+ DY += stride_dy_row
+ DX += stride_dx_row
+ tl.store(DW + row_block_id * stride_dw_row + group * N + cols, dw, mask=mask)
+ if HAS_BIAS:
+ tl.store(DB + row_block_id * stride_db_row + group * N + cols, db, mask=mask)
+
+
+def _layer_norm_bwd(dy, x, weight, bias, eps, mean, rstd, z=None, group_size=None,
+ norm_before_gate=True, is_rms_norm=False, recompute_output=False, dz=None, out=None):
+ M, N = x.shape
+ if group_size is None:
+ group_size = N
+ assert N % group_size == 0
+ ngroups = N // group_size
+ assert x.stride(-1) == 1
+ assert dy.stride(-1) == 1
+ assert dy.shape == (M, N)
+ if z is not None:
+ assert z.stride(-1) == 1
+ assert z.shape == (M, N)
+ assert weight.shape == (N,)
+ assert weight.stride(-1) == 1
+ if bias is not None:
+ assert bias.stride(-1) == 1
+ assert bias.shape == (N,)
+ # allocate output
+ dx = torch.empty_like(x)
+ if dz is not None:
+ assert z is not None
+ assert dz.shape == z.shape
+ assert dz.stride(-1) == 1
+ else:
+ dz = torch.empty_like(z) if z is not None else None
+ if recompute_output:
+ if out is None:
+ out = torch.empty_like(x)
+ assert out.shape == x.shape
+
+ # Less than 64KB per feature: enqueue fused kernel
+ MAX_FUSED_SIZE = 65536 // x.element_size()
+ BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(group_size))
+ if group_size > BLOCK_N:
+ raise RuntimeError("This layer norm doesn't support feature dim >= 64KB.")
+ # heuristics for number of warps
+ num_warps = min(max(BLOCK_N // 256, 1), 8)
+ sm_count = torch.cuda.get_device_properties(x.device).multi_processor_count
+ # If group size is small (e.g., 64), we're only using 1 warp. So having just 108 programs
+ # would limit the occupancy.
+ nrow_groups = math.ceil(sm_count * math.ceil(4 / num_warps) / ngroups)
+ _dw = torch.empty((nrow_groups, N), dtype=torch.float32, device=weight.device)
+ _db = torch.empty((nrow_groups, N), dtype=torch.float32, device=bias.device) if bias is not None else None
+ rows_per_program = math.ceil(M / nrow_groups)
+ grid = (nrow_groups, ngroups)
+ with torch.cuda.device(x.device.index):
+ _layer_norm_bwd_kernel[grid](x, weight, bias, z, out if recompute_output else None,
+ dy, dx, _dw, _db, dz, mean, rstd,
+ x.stride(0),
+ z.stride(0) if z is not None else 0,
+ 0 if not recompute_output else out.stride(0),
+ dy.stride(0), dx.stride(0),
+ dz.stride(0) if dz is not None else 0,
+ _dw.stride(0),
+ _db.stride(0) if _db is not None else 0,
+ M, group_size, eps,
+ rows_per_program,
+ BLOCK_N=BLOCK_N,
+ NORM_BEFORE_GATE=norm_before_gate,
+ IS_RMS_NORM=is_rms_norm,
+ num_warps=num_warps)
+ dw = _dw.sum(0).to(weight.dtype)
+ db = _db.sum(0).to(bias.dtype) if bias is not None else None
+ return (dx, dw, db, dz) if not recompute_output else (dx, dw, db, dz, out)
+
+
+class LayerNormFn(torch.autograd.Function):
+
+ @staticmethod
+ def forward(ctx, x, weight, bias, z=None, eps=1e-6, group_size=None, norm_before_gate=True,
+ is_rms_norm=False):
+ """If z is not None, we do norm(x) * silu(z) if norm_before_gate, else norm(x * silu(z))
+ """
+
+ x_shape_og = x.shape
+ # reshape input data into 2D tensor
+ x = x.reshape(-1, x.shape[-1])
+ if x.stride(-1) != 1:
+ x = x.contiguous()
+ if z is not None:
+ assert z.shape == x_shape_og
+ z = z.reshape(-1, z.shape[-1])
+ if z.stride(-1) != 1:
+ z = z.contiguous()
+ weight = weight.contiguous()
+ if bias is not None:
+ bias = bias.contiguous()
+ y, mean, rstd = _layer_norm_fwd(x, weight, bias, eps, z=z, group_size=group_size,
+ norm_before_gate=norm_before_gate, is_rms_norm=is_rms_norm)
+ ctx.save_for_backward(x, weight, bias, mean, rstd, z)
+ ctx.x_shape_og = x_shape_og
+ ctx.eps = eps
+ ctx.group_size = group_size
+ ctx.norm_before_gate = norm_before_gate
+ ctx.is_rms_norm = is_rms_norm
+ return y.reshape(x_shape_og)
+
+ @staticmethod
+ def backward(ctx, dy):
+ x, weight, bias, mean, rstd, z = ctx.saved_tensors
+ dy = dy.reshape(-1, dy.shape[-1])
+ if dy.stride(-1) != 1:
+ dy = dy.contiguous()
+ assert dy.shape == x.shape
+ dx, dw, db, dz = _layer_norm_bwd(
+ dy,
+ x,
+ weight,
+ bias,
+ ctx.eps,
+ mean,
+ rstd,
+ z,
+ ctx.group_size,
+ ctx.norm_before_gate,
+ ctx.is_rms_norm
+ )
+ dx = dx.reshape(ctx.x_shape_og)
+ dz = dz.reshape(ctx.x_shape_og) if dz is not None else None
+ return dx, dw, db, dz, None, None, None, None
+
+
+def layernorm_fn(x, weight, bias, z=None, eps=1e-6, group_size=None, norm_before_gate=True, is_rms_norm=False):
+ return LayerNormFn.apply(x, weight, bias, z, eps, group_size, norm_before_gate, is_rms_norm)
+
+
+def rmsnorm_fn(x, weight, bias, z=None, eps=1e-6, group_size=None, norm_before_gate=True):
+ return LayerNormFn.apply(x, weight, bias, z, eps, group_size, norm_before_gate, True)
+
+
+class LayerNormGated(torch.nn.Module):
+
+ def __init__(self, hidden_size, eps=1e-5, group_size=None, norm_before_gate=True, device=None, dtype=None):
+ """If group_size is not None, we do GroupNorm with each group having group_size elements.
+ group_size=None is equivalent to group_size=hidden_size (i.e. there's only 1 group).
+ """
+
+ factory_kwargs = {"device": device, "dtype": dtype}
+ super().__init__()
+ self.eps = eps
+ self.weight = torch.nn.Parameter(torch.empty(hidden_size, **factory_kwargs))
+ self.bias = torch.nn.Parameter(torch.empty(hidden_size, **factory_kwargs))
+ self.group_size = group_size
+ self.norm_before_gate = norm_before_gate
+ self.reset_parameters()
+
+ def reset_parameters(self):
+ torch.nn.init.ones_(self.weight)
+ torch.nn.init.zeros_(self.bias)
+
+ def forward(self, x, z=None):
+ """If z is not None, we do norm(x) * silu(z) if norm_before_gate, else norm(x * silu(z))
+ """
+ return layernorm_fn(x, self.weight, self.bias, z=z, group_size=self.group_size, eps=self.eps,
+ norm_before_gate=self.norm_before_gate)
+
+
+class RMSNormGated(torch.nn.Module):
+
+ def __init__(self, hidden_size, eps=1e-5, group_size=None, norm_before_gate=False, device=None, dtype=None):
+ """If group_size is not None, we do GroupNorm with each group having group_size elements.
+ group_size=None is equivalent to group_size=hidden_size (i.e. there's only 1 group).
+ """
+ factory_kwargs = {"device": device, "dtype": dtype}
+ super().__init__()
+ self.eps = eps
+ self.weight = torch.nn.Parameter(torch.empty(hidden_size, **factory_kwargs))
+ self.register_parameter("bias", None)
+ self.group_size = group_size
+ self.norm_before_gate = norm_before_gate
+ self.reset_parameters()
+
+ def reset_parameters(self):
+ torch.nn.init.ones_(self.weight)
+
+ def forward(self, x, z=None):
+ """If z is not None, we do norm(x) * silu(z) if norm_before_gate, else norm(x * silu(z))
+ """
+ return rmsnorm_fn(x, self.weight, self.bias, z=z, eps=self.eps, group_size=self.group_size,
+ norm_before_gate=self.norm_before_gate)
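+
+
+# A minimal usage sketch (illustrative only; the shapes, dtype and device below are
+# assumptions, not part of the module's contract):
+#
+#     norm = RMSNormGated(hidden_size=512, eps=1e-5, norm_before_gate=False).cuda()
+#     x = torch.randn(2, 128, 512, device='cuda', dtype=torch.bfloat16)
+#     z = torch.randn_like(x)   # gate input, passed through SiLU inside the fused kernel
+#     y = norm(x, z)            # computes norm(x * silu(z)) since norm_before_gate=False
+#     assert y.shape == x.shape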
diff --git a/fla/modules/rotary.py b/fla/modules/rotary.py
new file mode 100644
index 0000000000000000000000000000000000000000..ed7ed9c99749975c6a13cb1ab8a788578f8d5864
--- /dev/null
+++ b/fla/modules/rotary.py
@@ -0,0 +1,304 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2023, Tri Dao.
+
+from typing import Optional, Tuple, Union
+
+import torch
+from einops import rearrange, repeat
+
+from fla.ops.rotary import apply_rotary
+
+
+def rotate_half(x, interleaved=False):
+ if not interleaved:
+ x1, x2 = x.chunk(2, dim=-1)
+ return torch.cat((-x2, x1), dim=-1)
+ else:
+ x1, x2 = x[..., ::2], x[..., 1::2]
+ return rearrange(torch.stack((-x2, x1), dim=-1), "... d two -> ... (d two)", two=2)
+
+
+def rotary_embedding_torch(x, cos, sin, interleaved=False):
+ """
+ x: (batch_size, seqlen, nheads, headdim)
+ cos, sin: (seqlen, rotary_dim / 2) or (batch_size, seqlen, rotary_dim / 2)
+ """
+ ro_dim = cos.shape[-1] * 2
+ assert ro_dim <= x.shape[-1]
+ cos = repeat(
+ cos, "... d -> ... 1 (2 d)" if not interleaved else "... d -> ... 1 (d 2)")
+ sin = repeat(
+ sin, "... d -> ... 1 (2 d)" if not interleaved else "... d -> ... 1 (d 2)")
+ return torch.cat(
+ [x[..., :ro_dim] * cos +
+ rotate_half(x[..., :ro_dim], interleaved) * sin, x[..., ro_dim:]],
+ dim=-1,
+ )
+
+
+class RotaryEmbeddingFunction(torch.autograd.Function):
+
+ @staticmethod
+ def forward(
+ ctx,
+ x,
+ cos,
+ sin,
+ interleaved=False,
+ inplace=False,
+ seqlen_offsets: Union[int, torch.Tensor] = 0,
+ cu_seqlens: Optional[torch.Tensor] = None,
+ max_seqlen: Optional[int] = None,
+ ):
+ out = apply_rotary(
+ x,
+ cos,
+ sin,
+ seqlen_offsets=seqlen_offsets,
+ cu_seqlens=cu_seqlens,
+ max_seqlen=max_seqlen,
+ interleaved=interleaved,
+ inplace=inplace,
+ )
+ if isinstance(seqlen_offsets, int):
+ # Can't save int with save_for_backward
+ ctx.save_for_backward(cos, sin, cu_seqlens)
+ ctx.seqlen_offsets = seqlen_offsets
+ else:
+ ctx.save_for_backward(cos, sin, cu_seqlens, seqlen_offsets)
+ ctx.seqlen_offsets = None
+ ctx.interleaved = interleaved
+ ctx.inplace = inplace
+ ctx.max_seqlen = max_seqlen
+ return out if not inplace else x
+
+ @staticmethod
+ def backward(ctx, do):
+ seqlen_offsets = ctx.seqlen_offsets
+ if seqlen_offsets is None:
+ cos, sin, cu_seqlens, seqlen_offsets = ctx.saved_tensors
+ else:
+ cos, sin, cu_seqlens = ctx.saved_tensors
+ # TD [2023-09-02]: For some reason Triton (2.0.0.post1) errors with
+ # "[CUDA]: invalid device context", and cloning makes it work. Idk why. Triton 2.1.0 works.
+ if not ctx.interleaved and not ctx.inplace:
+ do = do.clone()
+ dx = apply_rotary(
+ do,
+ cos,
+ sin,
+ seqlen_offsets=seqlen_offsets,
+ cu_seqlens=cu_seqlens,
+ max_seqlen=ctx.max_seqlen,
+ interleaved=ctx.interleaved,
+ inplace=ctx.inplace,
+ conjugate=True,
+ )
+ return dx, None, None, None, None, None, None, None
+
+
+def rotary_embedding(
+ x,
+ cos,
+ sin,
+ interleaved=False,
+ inplace=False,
+ seqlen_offsets: Union[int, torch.Tensor] = 0,
+ cu_seqlens: Optional[torch.Tensor] = None,
+ max_seqlen: Optional[int] = None,
+):
+ """
+ Arguments:
+ x: (batch_size, seqlen, nheads, headdim) if cu_seqlens is None
+ else (total_seqlen, nheads, headdim)
+ cos, sin: (seqlen_rotary, rotary_dim / 2)
+ interleaved: if True, rotate pairs of even and odd dimensions (GPT-J style) instead
+ of 1st half and 2nd half (GPT-NeoX style).
+ inplace: if True, apply rotary embedding in-place.
+ seqlen_offsets: (batch_size,) or int. Each sequence in x is shifted by this amount.
+ Most commonly used in inference when we have KV cache.
+ cu_seqlens: (batch + 1,) or None
+ max_seqlen: int
+ Return:
+ out: (batch_size, seqlen, nheads, headdim) if cu_seqlens is None
+ else (total_seqlen, nheads, headdim)
+ rotary_dim must be <= headdim
+ Apply rotary embedding to the first rotary_dim of x.
+ """
+ return RotaryEmbeddingFunction.apply(
+ x, cos, sin, interleaved, inplace, seqlen_offsets, cu_seqlens, max_seqlen
+ )
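+
+
+# A minimal sketch of the functional interface (illustrative; the cos/sin caches are
+# assumed to be precomputed with shape (seqlen, rotary_dim / 2)):
+#
+#     q = torch.randn(2, 1024, 8, 64, device='cuda', dtype=torch.bfloat16)
+#     inv_freq = 1.0 / (10000.0 ** (torch.arange(0, 32, device='cuda').float() / 32))
+#     freqs = torch.outer(torch.arange(1024, device='cuda').float(), inv_freq)
+#     q_rot = rotary_embedding(q, freqs.cos(), freqs.sin())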
+
+
+class RotaryEmbedding(torch.nn.Module):
+ """
+ The rotary position embeddings from RoFormer_ (Su et. al).
+ A crucial insight from the method is that the query and keys are
+ transformed by rotation matrices which depend on the relative positions.
+
+    Other implementations are available in the Rotary Transformer repo_ and in
+    GPT-NeoX_; GPT-NeoX was an inspiration for this implementation.
+
+ .. _RoFormer: https://arxiv.org/abs/2104.09864
+ .. _repo: https://github.com/ZhuiyiTechnology/roformer
+ .. _GPT-NeoX: https://github.com/EleutherAI/gpt-neox
+
+ If scale_base is not None, this implements XPos (Sun et al., https://arxiv.org/abs/2212.10554).
+ A recommended value for scale_base is 512: https://github.com/HazyResearch/flash-attention/issues/96
+ Reference: https://github.com/sunyt32/torchscale/blob/main/torchscale/component/xpos_relative_position.py
+ """
+
+ def __init__(
+ self,
+ dim: int,
+ base=10000.0,
+ interleaved=False,
+ scale_base=None,
+ pos_idx_in_fp32=True,
+ device=None,
+ ):
+ """
+ interleaved: if True, rotate pairs of even and odd dimensions (GPT-J style) instead
+ of 1st half and 2nd half (GPT-NeoX style).
+ pos_idx_in_fp32: if True, the position indices [0.0, ..., seqlen - 1] are in fp32,
+ otherwise they might be in lower precision.
+                         This option was added because previously (before 2023-07-02) the position
+                         indices were constructed in the dtype of self.inv_freq. In most cases this
+                         is fp32, but if the model is trained in pure bf16 (not mixed precision),
+                         then self.inv_freq is bf16 and the position indices end up in bf16 too.
+ Because of the limited precision of bf16 (e.g. 1995.0 is rounded to 2000.0), the
+ embeddings for some positions will coincide.
+ To maintain compatibility with models previously trained in pure bf16,
+ we add this option.
+ """
+ super().__init__()
+ self.dim = dim
+ self.base = float(base)
+ self.pos_idx_in_fp32 = pos_idx_in_fp32
+ # Generate and save the inverse frequency buffer (non trainable)
+ inv_freq = self._compute_inv_freq(device)
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
+ self.interleaved = interleaved
+ self.scale_base = scale_base
+ scale = (
+ (torch.arange(0, dim, 2, device=device,
+ dtype=torch.float32) + 0.4 * dim) / (1.4 * dim)
+ if scale_base is not None
+ else None
+ )
+ self.register_buffer("scale", scale, persistent=False)
+
+ self._seq_len_cached = 0
+ self._cos_cached = None
+ self._sin_cached = None
+ self._cos_k_cached = None
+ self._sin_k_cached = None
+
+ def _compute_inv_freq(self, device=None):
+ return 1.0 / (
+ self.base
+ ** (torch.arange(0, self.dim, 2, device=device, dtype=torch.float32) / self.dim)
+ )
+
+ def _update_cos_sin_cache(self, seqlen, device=None, dtype=None):
+ # Reset the tables if the sequence length has changed,
+ # if we're on a new device (possibly due to tracing for instance),
+ # or if we're switching from inference mode to training
+ if (
+ seqlen > self._seq_len_cached
+ or self._cos_cached is None
+ or self._cos_cached.device != device
+ or self._cos_cached.dtype != dtype
+ or (self.training and self._cos_cached.is_inference())
+ ):
+ self._seq_len_cached = seqlen
+ # We want fp32 here, not self.inv_freq.dtype, since the model could be loaded in bf16
+ # And the output of arange can be quite large, so bf16 would lose a lot of precision.
+ # However, for compatibility reason, we add an option to use the dtype of self.inv_freq.
+ if self.pos_idx_in_fp32:
+ t = torch.arange(seqlen, device=device, dtype=torch.float32)
+ # We want fp32 here as well since inv_freq will be multiplied with t, and the output
+ # will be large. Having it in bf16 will lose a lot of precision and cause the
+ # cos & sin output to change significantly.
+ # We want to recompute self.inv_freq if it was not loaded in fp32
+ if self.inv_freq.dtype != torch.float32:
+ inv_freq = self._compute_inv_freq(device=device)
+ else:
+ inv_freq = self.inv_freq
+ else:
+ t = torch.arange(seqlen, device=device, dtype=self.inv_freq.dtype)
+ inv_freq = self.inv_freq
+ # Don't do einsum, it converts fp32 to fp16 under AMP
+ # freqs = torch.einsum("i,j->ij", t, self.inv_freq)
+ freqs = torch.outer(t, inv_freq)
+ if self.scale is None:
+ self._cos_cached = torch.cos(freqs).to(dtype)
+ self._sin_cached = torch.sin(freqs).to(dtype)
+ else:
+ power = (
+ torch.arange(seqlen, dtype=self.scale.dtype, device=self.scale.device)
+ - seqlen // 2
+ ) / self.scale_base
+ scale = self.scale.to(device=power.device) ** rearrange(power, "s -> s 1")
+ # We want the multiplication by scale to happen in fp32
+ self._cos_cached = (torch.cos(freqs) * scale).to(dtype)
+ self._sin_cached = (torch.sin(freqs) * scale).to(dtype)
+ self._cos_k_cached = (torch.cos(freqs) / scale).to(dtype)
+ self._sin_k_cached = (torch.sin(freqs) / scale).to(dtype)
+
+ def forward(
+ self,
+ q: torch.Tensor,
+ k: torch.Tensor,
+ seqlen_offset: Union[int, torch.Tensor] = 0,
+ max_seqlen: Optional[int] = None,
+ ) -> Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]:
+ """
+ q: (batch, seqlen, nheads, headdim)
+ k: (batch, seqlen, nheads, headdim)
+ seqlen_offset:
+ (batch_size,) or int. Each sequence in x is shifted by this amount.
+ Most commonly used in inference when we have KV cache.
+ If it's a tensor of shape (batch_size,), then to update the cos / sin cache, one
+ should pass in max_seqlen, which will update the cos / sin cache up to that length.
+ max_seqlen: int
+ """
+ seqlen = q.shape[1]
+ if max_seqlen is not None:
+ self._update_cos_sin_cache(max_seqlen, device=q.device, dtype=q.dtype)
+ elif isinstance(seqlen_offset, int):
+ self._update_cos_sin_cache(seqlen + seqlen_offset, device=q.device, dtype=q.dtype)
+ if self.scale is None:
+ q = rotary_embedding(
+ q,
+ self._cos_cached,
+ self._sin_cached,
+ interleaved=self.interleaved,
+ seqlen_offsets=seqlen_offset
+ )
+ k = rotary_embedding(
+ k,
+ self._cos_cached,
+ self._sin_cached,
+ interleaved=self.interleaved,
+ seqlen_offsets=seqlen_offset
+ )
+
+ else:
+ q = rotary_embedding(
+ q,
+ self._cos_cached,
+ self._sin_cached,
+ interleaved=self.interleaved,
+ seqlen_offsets=seqlen_offset
+ )
+ k = rotary_embedding(
+ k,
+ self._cos_k_cached,
+ self._sin_k_cached,
+ interleaved=self.interleaved,
+ seqlen_offsets=seqlen_offset
+ )
+
+ return q, k
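+
+
+# A minimal sketch of driving the module inside an attention layer (shapes are
+# illustrative assumptions):
+#
+#     rotary = RotaryEmbedding(dim=64).cuda()
+#     q = torch.randn(2, 1024, 8, 64, device='cuda', dtype=torch.bfloat16)
+#     k = torch.randn(2, 1024, 8, 64, device='cuda', dtype=torch.bfloat16)
+#     # during decoding, pass the current KV-cache length as seqlen_offset
+#     q, k = rotary(q, k, seqlen_offset=0)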
diff --git a/fla/ops/__init__.py b/fla/ops/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..4f8681d2a16bb0c9b86fc0f3cb268c4bb69ce5b8
--- /dev/null
+++ b/fla/ops/__init__.py
@@ -0,0 +1,18 @@
+# -*- coding: utf-8 -*-
+
+from .based import fused_chunk_based, parallel_based
+from .gla import chunk_gla, fused_chunk_gla, fused_recurrent_gla
+from .retention import (chunk_retention, fused_chunk_retention,
+ fused_recurrent_retention, parallel_retention)
+
+__all__ = [
+ 'fused_chunk_based',
+ 'parallel_based',
+ 'chunk_gla',
+ 'fused_chunk_gla',
+ 'fused_recurrent_gla',
+ 'chunk_retention',
+ 'fused_chunk_retention',
+ 'fused_recurrent_retention',
+ 'parallel_retention'
+]
diff --git a/fla/ops/abc/__init__.py b/fla/ops/abc/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..fdac8d900fc51485a55716443ee1f00424b522b9
--- /dev/null
+++ b/fla/ops/abc/__init__.py
@@ -0,0 +1,7 @@
+# -*- coding: utf-8 -*-
+
+from .chunk import chunk_abc
+
+__all__ = [
+ 'chunk_abc'
+]
diff --git a/fla/ops/abc/chunk.py b/fla/ops/abc/chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..da419716a776d4e301a929b8dcbc44ec2474a6cd
--- /dev/null
+++ b/fla/ops/abc/chunk.py
@@ -0,0 +1,1220 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.utils import (logcumsumexp_fwd_kernel, softmax_bwd_kernel,
+ softmax_fwd_kernel)
+from fla.utils import contiguous
+
+
+@triton.jit
+def chunk_abc_fwd_kernel_h(
+ k,
+ v,
+ z,
+ h,
+ h0,
+ ht,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ s_h_h,
+ s_h_t,
+ s_h_d,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NT: tl.constexpr,
+ NORMK: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr,
+ STORE_FINAL_STATE: tl.constexpr
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h = tl.make_block_ptr(h0 + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_h += tl.load(p_h, boundary_check=(0, 1)).to(tl.float32)
+ if NORMK:
+ p_z0 = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), (i_k * BK,), (BK,), (0,))
+ else:
+ p_z0 = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), (i_v * BV,), (BV,), (0,))
+ b_zp = tl.load(p_z0).to(tl.float32)
+ for i_t in range(NT):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1))
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ if NORMK:
+ p_zc = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,))
+ # [BK,]
+ b_zc = tl.load(p_zc, boundary_check=(0,))
+ b_r, b_zp = tl.exp(b_zp - b_zc), b_zc
+ # [BK, BV]
+ b_h = b_h * b_r[:, None]
+ b_k = tl.exp(b_k - b_zc[:, None]).to(b_k.dtype)
+ else:
+ p_zc = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + BT - 1) * V + i_v * BV,), (BV,), (0,))
+ # [BV,]
+ b_zc = tl.load(p_zc, boundary_check=(0,))
+ b_r, b_zp = tl.exp(b_zp - b_zc), b_zc
+ # [BK, BV]
+ b_h = b_h * b_r[None, :]
+ b_v = tl.exp(b_v - b_zc[None, :]).to(b_v.dtype)
+ # [BK, BV]
+ b_h += tl.dot(b_k, b_v, allow_tf32=False)
+
+ if STORE_FINAL_STATE:
+ p_h = tl.make_block_ptr(ht + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def chunk_abc_fwd_kernel_intra_K(
+ v,
+ z,
+ o,
+ A,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ T: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BV: tl.constexpr,
+ NC: tl.constexpr
+):
+ i_v, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_t, i_i = i_c // NC, i_c % NC
+
+ p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_zn = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + i_i * BC) * V + i_v * BV,), (BV,), (0,))
+ # [BV,]
+ b_zn = tl.load(p_zn, boundary_check=(0,))
+ # [BC, BV]
+ b_o = tl.zeros([BC, BV], dtype=tl.float32)
+ for i_j in range(0, i_i):
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0))
+ # [BC, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BC, BC]
+ b_A = tl.load(p_A, boundary_check=(0, 1))
+ b_o += tl.dot(b_A, tl.exp(b_v - b_zn[None, :]).to(b_v.dtype), allow_tf32=False)
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_o *= tl.exp(b_zn[None, :] - b_z)
+
+ o_i = tl.arange(0, BC)
+ o_A = i_bh * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_i * BC
+ m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T
+ for j in range(0, BC):
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T * V,), (1,), ((i_t * BT + i_i * BC + j) * V + i_v * BV,), (BV,), (0,))
+ # [BC,]
+ b_A = tl.load(A + o_A + j, mask=m_A, other=0)
+ # [BV,]
+ b_v = tl.load(p_v, boundary_check=(0,)).to(tl.float32)
+ # [BC, BV]
+ # avoid 0 * inf = inf
+ m_i = o_i[:, None] >= j
+ b_o += tl.where(m_i, b_A[:, None] * tl.exp(b_v[None, :] - b_z), 0)
+ p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def chunk_abc_fwd_kernel_K(
+ q,
+ k,
+ z,
+ h,
+ o,
+ A,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ s_h_h,
+ s_h_t,
+ s_h_d,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr
+):
+ i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_p = tl.maximum(i_t * BT - 1, 0)
+
+ o_i = tl.arange(0, BT)
+ m_s = o_i[:, None] >= o_i[None, :]
+
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ b_A = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BK, BV]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ # [BT, BV]
+ b_o += tl.dot(b_q, b_h, allow_tf32=False)
+ # [BT, BT]
+ b_A += tl.dot(b_q, b_k, allow_tf32=False)
+ p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ # [BT, BV]
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ # [BT, BV]
+ p_zp = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), (i_p * V + i_v * BV,), (BV,), (0,))
+ b_zp = tl.load(p_zp, boundary_check=(0,))
+ b_o = b_o * tl.exp(b_zp[None, :] - b_z)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ # [BT, BT]
+ b_A = tl.where(m_s, b_A, 0.)
+ if i_v == 0:
+ tl.store(p_A, b_A.to(p_A.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def chunk_abc_fwd_kernel_intra_V(
+ q,
+ k,
+ z,
+ A,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BK: tl.constexpr,
+ NC: tl.constexpr
+):
+ i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_t, i_i, i_j = i_c // (NC * NC), (i_c % (NC * NC)) // NC, (i_c % (NC * NC)) % NC
+ n_bh = tl.num_programs(2)
+
+ if i_i > i_j:
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1))
+ p_z = tl.make_block_ptr(z + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_A = tl.make_block_ptr(A + (i_k*n_bh+i_bh)*T*BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0))
+ p_zn = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_i * BC) * K + i_k * BK,), (BK,), (0,))
+ # [BK,]
+ b_zn = tl.load(p_zn, boundary_check=(0,))
+ # [BC, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_q = (b_q * tl.exp(b_zn[None, :] - b_z) * scale).to(b_q.dtype)
+ # [BK, BC]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_k = tl.exp(b_k - b_zn[:, None]).to(b_k.dtype)
+ # [BC, BC]
+ b_A = tl.dot(b_q, b_k, allow_tf32=False)
+ tl.store(p_A, b_A.to(A.dtype.element_ty), boundary_check=(0, 1))
+ elif i_i == i_j:
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_j * BC) * K + i_k * BK,), (BK,), (0,))
+ p_z = tl.make_block_ptr(z + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ # [BC, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+
+ o_i = tl.arange(0, BC)
+ o_A = (i_bh + i_k * n_bh) * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_j * BC
+ m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T
+ for j in range(0, BC):
+ # [BK,]
+ b_k = tl.load(p_k, boundary_check=(0,)).to(tl.float32)
+ # [BC,]
+ b_A = tl.sum(b_q * tl.exp(b_k[None, :] - b_z) * scale, 1)
+ b_A = tl.where(o_i >= j, b_A, 0.)
+ tl.store(A + o_A + j, b_A.to(b_q.dtype), mask=m_A)
+
+ p_k = tl.advance(p_k, (K,))
+
+
+@triton.jit
+def chunk_abc_fwd_kernel_V(
+ q,
+ v,
+ z,
+ h,
+ o,
+ A,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ s_h_h,
+ s_h_t,
+ s_h_d,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr
+):
+ i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_p = tl.maximum(i_t * BT - 1, 0)
+
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_z = tl.make_block_ptr(z + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ p_zp = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), (i_p * K + i_k * BK,), (BK,), (0,))
+
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BT, BK]
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ # [BT, BK]
+ b_zp = tl.load(p_zp, boundary_check=(0,))
+ b_q = (b_q * tl.exp(b_zp[None, :] - b_z)).to(b_q.dtype)
+ # [BK, BV]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+        # works but dkw (don't know why), owing to divine benevolence
+ # [BT, BV]
+ if i_k >= 0:
+ b_o += tl.dot(b_q, b_h, allow_tf32=False)
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, BT]
+ b_A = tl.load(p_A, boundary_check=(0, 1))
+ b_o += tl.dot(b_A, b_v, allow_tf32=False)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def chunk_abc_bwd_kernel_dh(
+ q,
+ z,
+ do,
+ dh,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ s_h_h,
+ s_h_t,
+ s_h_d,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NT: tl.constexpr,
+ NORMK: tl.constexpr
+):
+ i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ b_zp = tl.full([BK if NORMK else BV], float('inf'), dtype=tl.float32)
+ for i_t in range(NT - 1, -1, -1):
+ i_p = tl.maximum(i_t * BT - 1, 0)
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ # [BK, BT]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BT, BV]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+
+ tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1))
+ if NORMK:
+ p_z = tl.make_block_ptr(z + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_zc = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), (i_p * K + i_k * BK,), (BK,), (0,))
+ # [BK,]
+ b_zc = tl.load(p_zc, boundary_check=(0,))
+ b_r, b_zp = tl.exp(b_zc - b_zp), b_zc
+ # [BK, BT]
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_q = (b_q * tl.exp(b_zc[:, None] - b_z)).to(b_q.dtype)
+ # [BK, BV]
+ b_dh = b_dh * b_r[:, None]
+ else:
+ p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_zc = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), (i_p * V + i_v * BV,), (BV,), (0,))
+ # [BV,]
+ b_zc = tl.load(p_zc, boundary_check=(0,))
+ b_r, b_zp = tl.exp(b_zc - b_zp), b_zc
+ # [BT, BV]
+            b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_do = (b_do * tl.exp(b_zc[None, :] - b_z)).to(b_do.dtype)
+ # [BK, BV]
+ b_dh = b_dh * b_r[None, :]
+ # [BK, BV]
+ b_dh += tl.dot(b_q, b_do, allow_tf32=False)
+
+
+@triton.jit
+def chunk_abc_bwd_kernel_V(
+ k,
+ v,
+ z,
+ h,
+ A,
+ do,
+ dh,
+ dq,
+ dk,
+ dv,
+ dA,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ s_h_h,
+ s_h_t,
+ s_h_d,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr
+):
+ i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_p = tl.maximum(i_t * BT - 1, 0)
+ n_bh = tl.num_programs(2)
+
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_zc = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,))
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (BT, T), (1, BT), (0, i_t * BT), (BT, BT), (0, 1))
+
+ # [BK,]
+ b_zc = tl.load(p_zc, boundary_check=(0,))
+ # [BT, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_k = tl.exp(b_k - b_zc[None, :]).to(b_k.dtype)
+ # [BT, BT]
+ b_A = tl.load(p_A, boundary_check=(0, 1))
+
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dA = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_v in range(tl.cdiv(V, BV)):
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * V * K, (V, K), (s_h_d, s_h_t), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_k*n_bh+i_bh) * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BV, BK]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ # [BT, BV]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [BK, BV]
+ b_dh = tl.load(p_dh, boundary_check=(0, 1))
+
+ # [BT, BV]
+ b_dv = tl.dot(b_k, b_dh, allow_tf32=False)
+ if i_k == 0:
+ b_dv += tl.dot(b_A, b_do, allow_tf32=False)
+ b_do = (b_do * scale).to(b_do.dtype)
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+ # [BT, BT]
+ b_dA += tl.dot(b_do, tl.trans(b_v), allow_tf32=False)
+ # [BT, BK]
+ b_dq += tl.dot(b_do, b_h, allow_tf32=False)
+ # [BT, BK]
+ b_dk += tl.dot(b_v, tl.trans(b_dh), allow_tf32=False)
+ p_z = tl.make_block_ptr(z + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_zp = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), (i_p * K + i_k * BK,), (BK,), (0,))
+ # [BK,]
+ b_zp = tl.load(p_zp, boundary_check=(0,))
+ # [BT, BK]
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_z = tl.exp(b_zp[None, :] - b_z)
+ # [BT, BK]
+ b_dq = b_dq * b_z
+ b_dk = b_dk * b_k
+
+ p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT,), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+
+ o_i = tl.arange(0, BT)
+ m_s = o_i[:, None] >= o_i[None, :]
+ # [BT, BT]
+ b_dA = tl.where(m_s, b_dA, 0.).to(b_k.dtype)
+ if i_k == 0:
+ tl.store(p_dA, b_dA.to(p_dA.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def chunk_abc_bwd_kernel_intra_V(
+ q,
+ k,
+ z,
+ dA,
+ dq,
+ dk,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BK: tl.constexpr,
+ NC: tl.constexpr
+):
+ i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_t, i_i = i_c // NC, i_c % NC
+
+ p_z = tl.make_block_ptr(z + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_zn = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_i * BC) * K + i_k * BK,), (BK,), (0,))
+ # [BK,]
+ b_zn = tl.load(p_zn, boundary_check=(0,))
+ # [BC, BK]
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_zq = tl.exp(b_zn[None, :] - b_z)
+ b_dq = tl.zeros([BC, BK], dtype=tl.float32)
+ for i_j in range(0, i_i):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0))
+ # [BC, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_kz = tl.exp(b_k - b_zn[None, :]).to(b_k.dtype)
+ # [BC, BC]
+ b_dA = tl.load(p_dA, boundary_check=(0, 1))
+ # [BC, BK]
+ b_dq += tl.dot(b_dA, b_kz, allow_tf32=False)
+ b_dq *= b_zq
+
+ o_i = tl.arange(0, BC)
+ o_dA = i_bh * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_i * BC
+ m_dA = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T
+ for j in range(0, BC):
+ p_kj = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i*BC+j) * K + i_k * BK,), (BK,), (0,))
+ # [BC,]
+ b_dA = tl.load(dA + o_dA + j, mask=m_dA, other=0)
+ # [BK,]
+ b_kj = tl.load(p_kj, boundary_check=(0,)).to(tl.float32)
+ # [BC, BK]
+ m_i = o_i[:, None] >= j
+ # [BC, BK]
+ b_dq += tl.where(m_i, b_dA[:, None] * tl.exp(b_kj[None, :] - b_z), 0.)
+ p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+
+ tl.debug_barrier()
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_zn = tl.make_block_ptr(z + i_bh * s_k_h, (T*K,), (s_k_d,), ((i_t * BT + i_i * BC + BC - 1) * K + i_k * BK,), (BK,), (0,))
+ # [BK,]
+ b_zn = tl.load(p_zn, boundary_check=(0,))
+ # [BC, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_kz = tl.exp(b_k - b_zn[None, :])
+ b_dk = tl.zeros([BC, BK], dtype=tl.float32)
+ for i_j in range(i_i + 1, NC):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_z = tl.make_block_ptr(z + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_j * BC, i_i * BC), (BC, BC), (1, 0))
+ # [BC, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_qz = (b_q * tl.exp(b_zn[None, :] - b_z)).to(b_q.dtype)
+ # [BC, BC]
+ b_dA = tl.load(p_dA, boundary_check=(0, 1))
+ # [BC, BK]
+ b_dk += tl.dot(tl.trans(b_dA), b_qz, allow_tf32=False)
+ b_dk *= b_kz
+
+ o_dA = i_bh * T * BT + (i_t * BT + i_i * BC) * BT + i_i * BC + tl.arange(0, BC)
+ for j in range(0, BC):
+ p_qj = tl.make_block_ptr(q + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i * BC + j) * K + i_k * BK,), (BK,), (0,))
+ p_zj = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i * BC + j) * K + i_k * BK,), (BK,), (0,))
+ # [BC,]
+ b_dA = tl.load(dA + o_dA + j * BT, mask=(i_t * BT + i_i * BC + j < T), other=0)
+ # [BK,]
+ b_qj = tl.load(p_qj, boundary_check=(0,)).to(tl.float32)
+ b_zj = tl.load(p_zj, boundary_check=(0,)).to(tl.float32)
+ # [BC, BK]
+ m_i = o_i[:, None] <= j
+ b_dk += tl.where(m_i, b_dA[:, None] * b_qj[None, :] * tl.exp(b_k - b_zj[None, :]), 0.)
+ p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def chunk_abc_bwd_kernel_intra_K(
+ v,
+ z,
+ do,
+ dA,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ scale,
+ T: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BV: tl.constexpr,
+ NC: tl.constexpr
+):
+ i_v, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_t, i_i, i_j = i_c // (NC * NC), (i_c % (NC * NC)) // NC, (i_c % (NC * NC)) % NC
+ n_bh = tl.num_programs(2)
+
+ if i_i > i_j:
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i_t * BT + i_j * BC), (BV, BC), (0, 1))
+ p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_zn = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + i_i * BC) * V + i_v * BV,), (BV,), (0,))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_dA = tl.make_block_ptr(dA+(i_bh+i_v*n_bh)*T*BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0))
+ # [BV,]
+ b_zn = tl.load(p_zn, boundary_check=(0,))
+ # [BC, BV]
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_do = (b_do * tl.exp(b_zn[None, :] - b_z) * scale).to(b_do.dtype)
+ # [BV, BC]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_v = tl.exp(b_v - b_zn[:, None]).to(b_v.dtype)
+ # [BC, BC]
+ b_dA = tl.dot(b_do, b_v, allow_tf32=False)
+ tl.store(p_dA, b_dA.to(dA.dtype.element_ty), boundary_check=(0, 1))
+ elif i_i == i_j:
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + i_j * BC) * V + i_v * BV,), (BV,), (0,))
+ p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ # [BC, BV]
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1)) * scale
+
+ o_i = tl.arange(0, BC)
+ o_A = (i_bh + i_v * n_bh) * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_j * BC
+ m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T
+ for j in range(0, BC):
+ # [BV,]
+ b_v = tl.load(p_v, boundary_check=(0,)).to(tl.float32)
+ # [BC,]
+ b_dA = tl.sum(b_do * tl.exp(b_v[None, :] - b_z), 1)
+ b_dA = tl.where(o_i >= j, b_dA, 0)
+ tl.store(dA + o_A + j, b_dA.to(b_do.dtype), mask=m_A)
+
+ p_v = tl.advance(p_v, (V,))
+
+
+@triton.jit
+def chunk_abc_bwd_kernel_K(
+ q,
+ k,
+ v,
+ z,
+ h,
+ A,
+ do,
+ dh,
+ dq,
+ dk,
+ dv,
+ dA,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ s_h_h,
+ s_h_t,
+ s_h_d,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr
+):
+ i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_p = tl.maximum(i_t * BT - 1, 0)
+ n_bh = tl.num_programs(2)
+
+ o_i = tl.arange(0, BT)
+ m_s = o_i[:, None] >= o_i[None, :]
+
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_A = tl.make_block_ptr(A + (i_k*n_bh+i_bh) * T * BT, (T, BT, ), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, BT]
+ b_A = tl.dot((b_q * scale).to(b_q.dtype), tl.trans(b_k), allow_tf32=False)
+ b_A = tl.where(m_s, b_A, 0.)
+ tl.store(p_A, b_A.to(p_A.dtype.element_ty), boundary_check=(0, 1))
+
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+ for i_v in range(tl.cdiv(V, BV)):
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_zp = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), (i_p * V + i_v * BV,), (BV,), (0,))
+ p_zc = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + BT - 1) * V + i_v * BV,), (BV,), (0,))
+ p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K*V, (V, K), (s_h_d, s_h_t), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_k*n_bh+i_bh) * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+
+ # [BV,]
+ b_zp = tl.load(p_zp, boundary_check=(0,))
+ b_zc = tl.load(p_zc, boundary_check=(0,))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_v = tl.exp(b_v - b_zc[None, :]).to(b_v.dtype)
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_z = tl.exp(b_zp[None, :] - b_z)
+ # [BV, BK]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ # [BT, BV]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_do = (b_do * b_z * scale).to(b_do.dtype)
+ # [BK, BV]
+ b_dh = tl.load(p_dh, boundary_check=(0, 1))
+
+ # [BT, BK]
+ b_dq += tl.dot(b_do, b_h, allow_tf32=False)
+ b_dk += tl.dot(b_v, tl.trans(b_dh), allow_tf32=False)
+ # [BT, BV]
+ b_dv = b_v * tl.dot(b_k, b_dh, allow_tf32=False)
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+ p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT, ), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ # [BT, BT]
+ b_dA = tl.load(p_dA, boundary_check=(0, 1))
+ # [BT, BK]
+ b_dq += tl.dot(b_dA, b_k, allow_tf32=False)
+ b_dk += tl.dot(tl.trans(b_dA).to(b_k.dtype), b_q, allow_tf32=False)
+
+ p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def chunk_abc_bwd_kernel_intra_KV(
+ v,
+ z,
+ A,
+ do,
+ dv,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ T: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BV: tl.constexpr,
+ NC: tl.constexpr
+):
+ i_v, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_t, i_i = i_c // NC, i_c % NC
+
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_zn = tl.make_block_ptr(z + i_bh * s_v_h, (T*V,), (s_v_d,), ((i_t * BT + i_i * BC + BC - 1) * V + i_v * BV,), (BV,), (0,))
+ # [BV,]
+ b_zn = tl.load(p_zn, boundary_check=(0,))
+ # [BC, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_dv = tl.zeros([BC, BV], dtype=tl.float32)
+ for i_j in range(i_i + 1, NC):
+ p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0))
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (BT, T), (1, BT), (i_i * BC, i_t * BT + i_j * BC), (BC, BC), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0))
+ # [BC, BV]
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_do = (b_do * tl.exp(b_zn[None, :] - b_z)).to(b_do.dtype)
+ # [BC, BC]
+ b_A = tl.load(p_A, boundary_check=(0, 1))
+ b_dv += tl.dot(b_A, b_do, allow_tf32=False)
+ b_dv *= tl.exp(b_v - b_zn[None, :])
+
+ o_i = tl.arange(0, BC)
+ for j in range(0, BC):
+ p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (1,), ((i_t * BT + i_i * BC + j) * V + i_v * BV,), (BV,), (0,))
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (T * BT,), (1,), ((i_t * BT + i_i * BC + j) * BT + i_i * BC,), (BC,), (0,))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T * V,), (1,), ((i_t * BT + i_i * BC + j) * V + i_v * BV,), (BV,), (0,))
+ # [BC,]
+ b_A = tl.load(p_A, boundary_check=(0,))
+ # [BV,]
+ b_z = tl.load(p_z, boundary_check=(0,))
+ b_do = tl.load(p_do, boundary_check=(0,))
+ # [BC, BV]
+ m_i = o_i[:, None] <= j
+ b_dv += tl.where(m_i, tl.exp(b_v - b_z[None, :]) * b_A[:, None] * b_do[None, :], 0.)
+ p_dv = tl.make_block_ptr(dv + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def chunk_abc_bwd_kernel_rcum_inter(
+ s,
+ z,
+ ss,
+ doo,
+ s_s_h,
+ s_s_t,
+ s_s_d,
+ T: tl.constexpr,
+ S: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ NT: tl.constexpr
+):
+ i_m, i_bh = tl.program_id(0), tl.program_id(1)
+
+ b_sp = tl.zeros([BS,], dtype=tl.float32)
+ b_zp = tl.full([BS,], float('inf'), dtype=tl.float32)
+ for i_t in range(NT - 1, -1, -1):
+ p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_m * BS), (BT, BS), (1, 0))
+ p_z = tl.make_block_ptr(z + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_m * BS), (BT, BS), (1, 0))
+ p_zc = tl.make_block_ptr(z + i_bh * s_s_h, (T * S,), (s_s_d,), ((i_t * BT) * S + i_m * BS,), (BS,), (0,))
+ p_ss = tl.make_block_ptr(ss + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_m * BS), (BT, BS), (1, 0))
+ p_doo = tl.make_block_ptr(doo + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_m * BS), (BT, BS), (1, 0))
+ # [BS,]
+ b_zc = tl.load(p_zc, boundary_check=(0,))
+ # [BT, BS]
+ b_s = tl.load(p_s, boundary_check=(0, 1))
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_ss = tl.load(p_ss, boundary_check=(0, 1))
+
+ b_doo = tl.exp(b_s - b_zp[None, :]) * b_sp[None, :]
+ tl.store(p_doo, b_doo.to(p_doo.dtype.element_ty), boundary_check=(0, 1))
+ # [BS,]
+ b_sp = b_sp * tl.exp(b_zc - b_zp) + tl.sum(b_ss * tl.exp(b_zc[None, :] - b_z), 0)
+ b_zp = b_zc
+
+
+@triton.jit
+def chunk_abc_bwd_kernel_rcum_intra(
+ s,
+ z,
+ ss,
+ doo,
+ s_s_h,
+ s_s_t,
+ s_s_d,
+ T: tl.constexpr,
+ S: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BS: tl.constexpr,
+ NC: tl.constexpr
+):
+ i_s, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_t, i_i = i_c // NC, i_c % NC
+
+ o_i = tl.arange(0, BC)
+ m_o = tl.full([BC, BC], 1., dtype=tl.float32)
+
+ p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT + i_i * BC, i_s * BS), (BC, BS), (1, 0))
+ p_zn = tl.make_block_ptr(z + i_bh * s_s_h, (T*S,), (s_s_d,), ((i_t * BT + i_i * BC + BC - 1) * S + i_s * BS,), (BS,), (0,))
+ p_doo = tl.make_block_ptr(doo + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT + i_i * BC, i_s * BS), (BC, BS), (1, 0))
+ # [BC, BS]
+ b_s = tl.load(p_s, boundary_check=(0, 1))
+ # [BS,]
+ b_zn = tl.load(p_zn, boundary_check=(0,))
+
+ b_doo = tl.zeros([BC, BS], dtype=tl.float32)
+ for i_j in range(i_i + 1, NC):
+ p_z = tl.make_block_ptr(z + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT + i_j * BC, i_s * BS), (BC, BS), (1, 0))
+ p_ss = tl.make_block_ptr(ss + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT + i_j * BC, i_s * BS), (BC, BS), (1, 0))
+ # [BC, BS]
+ b_z = tl.load(p_z, boundary_check=(0, 1))
+ b_ss = tl.load(p_ss, boundary_check=(0, 1))
+ # [BC, BS]
+ b_doo += b_ss * tl.exp(b_zn[None, :] - b_z)
+ b_doo = tl.exp(b_s - b_zn[None, :]) * tl.dot(m_o.to(b_s.dtype), b_doo.to(b_s.dtype), allow_tf32=False)
+
+ for j in range(0, BC):
+ p_z = tl.make_block_ptr(z + i_bh * s_s_h, (T * S,), (1,), ((i_t * BT + i_i * BC + j) * S + i_s * BS,), (BS,), (0,))
+ p_ss = tl.make_block_ptr(ss + i_bh * s_s_h, (T * S,), (1,), ((i_t * BT + i_i * BC + j) * S + i_s * BS,), (BS,), (0,))
+ # [BS,]
+ b_z = tl.load(p_z, boundary_check=(0,))
+ b_ss = tl.load(p_ss, boundary_check=(0,))
+ # [BC, BS]
+ m_i = o_i[:, None] <= j
+ b_doo += tl.where(m_i, tl.exp(b_s - b_z[None, :]) * b_ss[None, :], 0.)
+ b_doo += tl.load(p_doo, boundary_check=(0, 1))
+ tl.store(p_doo, b_doo.to(p_doo.dtype.element_ty), boundary_check=(0, 1))
+
+
+class ChunkABCFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(ctx, q, k, v, s, initial_state, output_final_state):
+ B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1]
+ BT, BC = 64, 16
+ BK = min(64, triton.next_power_of_2(K))
+ BV = min(64, triton.next_power_of_2(V))
+ BM = min(64, triton.next_power_of_2(M))
+ NT, NC = triton.cdiv(T, BT), triton.cdiv(BT, BC)
+ NV, NM = triton.cdiv(V, BV), triton.cdiv(M, BM)
+ num_warps = 4 if BK == 64 else 2
+ num_stages = 1
+
+ def fwd_pre(s, B, H, T, S):
+            # keep cumulative normalizer in fp32
+ z = torch.empty_like(s, dtype=torch.float)
+ grid = (B * H,)
+ logcumsumexp_fwd_kernel[grid](
+ s, z,
+ s.stride(1), s.stride(2), s.stride(3),
+ T=T, S=S
+ )
+ return z
+
+ def fwd_inner(q, k, v, z, B, H, T, K, V, BT, BK, BV, NT, normk=False, h0=None, ht=None):
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ h = q.new_empty(B, H, NT * K, V)
+ grid = (NV, NK, B * H)
+ chunk_abc_fwd_kernel_h[grid](
+ k, v, z, h, h0, ht,
+ k.stride(1), k.stride(2), k.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ h.stride(1), h.stride(2), h.stride(3),
+ T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT,
+ NORMK=normk,
+ USE_INITIAL_STATE=h0 is not None,
+ STORE_FINAL_STATE=ht is not None,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ return h
+
+ final_state = None
+ if output_final_state:
+ final_state = (q.new_empty(B, H, K, M, dtype=torch.float),
+ q.new_empty(B, H, M, V, dtype=torch.float))
+
+ z = fwd_pre(s, B, H, T, M)
+ scale = K ** -0.5
+ hk = fwd_inner(
+ q=q, k=k, v=s, z=z,
+ B=B, H=H, T=T, K=K, V=M, BT=BT, BK=BK, BV=BM, NT=NT,
+ normk=False,
+ h0=initial_state[0] if initial_state is not None else None,
+ ht=final_state[0] if final_state is not None else None
+ )
+ ok1 = torch.empty_like(s)
+ Ak = q.new_empty(B, H, T, BT)
+ grid = (NM, NT, B * H)
+ chunk_abc_fwd_kernel_K[grid](
+ q, k, z, hk, ok1, Ak,
+ k.stride(1), k.stride(2), k.stride(3),
+ s.stride(1), s.stride(2), s.stride(3),
+ hk.stride(1), hk.stride(2), hk.stride(3),
+ scale=scale,
+ T=T, K=K, V=M, BT=BT, BK=BK, BV=BM,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ ok0 = torch.empty_like(s)
+ grid = (NM, NT * NC, B * H)
+ chunk_abc_fwd_kernel_intra_K[grid](
+ s, z, ok0, Ak,
+ s.stride(1), s.stride(2), s.stride(3),
+ T=T, V=M, BT=BT, BC=BC, BV=BM, NC=NC,
+ num_warps=2,
+ num_stages=num_stages
+ )
+ ok = ok0.add_(ok1)
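+        # ok now holds the slot logits: the inter-chunk part from chunk_abc_fwd_kernel_K
+        # (via the chunked state hk) plus the intra-chunk part from chunk_abc_fwd_kernel_intra_K;
+        # they are normalized by the softmax below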
+
+ scale = 1.
+ # equivalent to:
+ # p = ok.softmax(-1, torch.float)
+ # p is kept in fp32 for safe softmax backward
+ p = torch.empty_like(ok, dtype=torch.float)
+ grid = (NT, B * H)
+ softmax_fwd_kernel[grid](
+ ok, p,
+ s.stride(1), s.stride(2), s.stride(3),
+ T=T, S=M, BT=BT
+ )
+ qv = p.to(q.dtype)
+
+ scale = 1.
+ hv = fwd_inner(
+ q=qv, k=s, v=v, z=z,
+ B=B, H=H, T=T, K=M, V=V, BT=BT, BK=BM, BV=BV, NT=NT,
+ normk=True,
+ h0=initial_state[1] if initial_state is not None else None,
+ ht=final_state[1] if final_state is not None else None
+ )
+ Av = q.new_zeros(NM, B, H, T, BT)
+ grid = (NM, NT * NC * NC, B * H)
+ chunk_abc_fwd_kernel_intra_V[grid](
+ qv, s, z, Av,
+ s.stride(1), s.stride(2), s.stride(3),
+ scale=scale,
+ T=T, K=M, BT=BT, BC=BC, BK=BM, NC=NC,
+ num_warps=2,
+ num_stages=num_stages
+ )
+ Av = Av.sum(0)
+ ov = torch.empty_like(v)
+ grid = (NV, NT, B * H)
+ chunk_abc_fwd_kernel_V[grid](
+ qv, v, z, hv, ov, Av,
+ s.stride(1), s.stride(2), s.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ hv.stride(1), hv.stride(2), hv.stride(3),
+ scale=scale,
+ T=T, K=M, V=V, BT=BT, BK=BM, BV=BV,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ ctx.save_for_backward(q, k, v, s, z, ok, p, hk, hv, Av)
+ ctx.BT = BT
+ return ov, final_state
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, dov, dht=None):
+ q, k, v, s, z, ok, p, hk, hv, Av = ctx.saved_tensors
+ B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1]
+ BT, BC = ctx.BT, 16
+ BK = min(64, triton.next_power_of_2(K))
+ BV = min(64, triton.next_power_of_2(V))
+ BM = min(64, triton.next_power_of_2(M))
+ NT, NC = triton.cdiv(T, BT), triton.cdiv(BT, BC)
+ NK, NM = triton.cdiv(K, BK), triton.cdiv(M, BM)
+ num_warps = 4 if BK == 64 else 2
+ num_stages = 1
+
+ def bwd_inner(q, z, do, B, H, T, K, V, BT, BK, BV, NT, scale, normk=False):
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ dh = q.new_empty(B, H, NT * K, V)
+ grid = (NK, NV, B * H)
+ chunk_abc_bwd_kernel_dh[grid](
+ q, z, do, dh,
+ q.stride(1), q.stride(2), q.stride(3),
+ do.stride(1), do.stride(2), do.stride(3),
+ dh.stride(1), dh.stride(2), dh.stride(3),
+ scale=scale,
+ T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT,
+ NORMK=normk,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ return dh
+
+ def bwd_post(s, z, ss, B, H, T, S, BT, BC, BS, NT, NC, NS):
+ doo = torch.empty_like(s)
+ grid = (NS, B * H)
+ chunk_abc_bwd_kernel_rcum_inter[grid](
+ s, z, ss, doo,
+ s.stride(1), s.stride(2), s.stride(3),
+ T=T, S=S, BT=BT, BS=BS, NT=NT,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ grid = (NS, NT * NC, B * H)
+ chunk_abc_bwd_kernel_rcum_intra[grid](
+ s, z, ss, doo,
+ s.stride(1), s.stride(2), s.stride(3),
+ T=T, S=S, BT=BT, BC=BC, BS=BS, NC=NC,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ return doo
+
+ scale = 1.
+ qv = p.to(q.dtype)
+ dhv = bwd_inner(
+ qv, z, dov,
+ B=B, H=H, T=T, K=M, V=V, BT=BT, BK=BM, BV=BV, NT=NT,
+ scale=scale,
+ normk=True
+ )
+ dp1 = torch.empty_like(p)
+ dsv1 = torch.empty_like(s, dtype=torch.float)
+ dv = v.new_empty(NM, *v.shape)
+ dAv = q.new_zeros(B, H, T, BT)
+ grid = (NM, NT, B * H)
+ chunk_abc_bwd_kernel_V[grid](
+ s, v, z, hv, Av, dov, dhv, dp1, dsv1, dv, dAv,
+ s.stride(1), s.stride(2), s.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ hv.stride(1), hv.stride(2), hv.stride(3),
+ scale=scale,
+ T=T, K=M, V=V, BT=BT, BK=BM, BV=BV,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ dv = dv.sum(0)
+ dp0 = torch.empty_like(p)
+ dsv0 = s.new_zeros(s.shape, dtype=torch.float)
+ grid = (NM, NT * NC, B * H)
+ chunk_abc_bwd_kernel_intra_V[grid](
+ qv, s, z, dAv, dp0, dsv0,
+ s.stride(1), s.stride(2), s.stride(3),
+ T=T, K=M, BT=BT, BC=BC, BK=BM, NC=NC,
+ num_warps=2,
+ num_stages=num_stages
+ )
+ dp = dp1.add_(dp0)
+ dsv = dsv1.add_(dsv0)
+
+ # softmax gradient, equivalent to:
+ # dok = p * (dp - (p * dp).sum(-1, True))
+ dok = torch.empty_like(ok)
+ grid = (NT, B * H)
+ softmax_bwd_kernel[grid](
+ p, dp, dok,
+ s.stride(1), s.stride(2), s.stride(3),
+ T=T, S=M, BT=BT
+ )
+
+ scale = K ** -0.5
+ dhk = bwd_inner(
+ q, z, dok,
+ B=B, H=H, T=T, K=K, V=M, BT=BT, BK=BK, BV=BM, NT=NT,
+ scale=scale,
+ normk=False
+ )
+ dAk = q.new_zeros(NM, B, H, T, BT)
+ grid = (NM, NT * NC * NC, B * H)
+ chunk_abc_bwd_kernel_intra_K[grid](
+ s, z, dok, dAk,
+ s.stride(1), s.stride(2), s.stride(3),
+ scale=scale,
+ T=T, V=M, BT=BT, BC=BC, BV=BM, NC=NC,
+ num_warps=2,
+ num_stages=num_stages
+ )
+ dAk = dAk.sum(0)
+
+ Ak = q.new_zeros(NK, B, H, T, BT)
+ dq = torch.empty_like(q)
+ dk = torch.empty_like(k)
+ dsk1 = s.new_empty(NK, *s.shape, dtype=torch.float)
+ grid = (NK, NT, B * H)
+ chunk_abc_bwd_kernel_K[grid](
+ q, k, s, z, hk, Ak, dok, dhk, dq, dk, dsk1, dAk,
+ q.stride(1), q.stride(2), q.stride(3),
+ s.stride(1), s.stride(2), s.stride(3),
+ hk.stride(1), hk.stride(2), hk.stride(3),
+ scale=scale,
+ T=T, K=K, V=M, BT=BT, BK=BK, BV=BM,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ Ak = Ak.sum(0)
+ dsk1 = dsk1.sum(0)
+ dsk0 = torch.empty_like(s, dtype=torch.float)
+ grid = (NM, NT * NC, B * H)
+ chunk_abc_bwd_kernel_intra_KV[grid](
+ s, z, Ak, dok, dsk0,
+ s.stride(1), s.stride(2), s.stride(3),
+ T=T, V=M, BT=BT, BC=BC, BV=BM, NC=NC,
+ num_warps=2,
+ num_stages=num_stages
+ )
+ ds = dsv.add_(dsk1.add_(dsk0))
+ ds -= bwd_post(s, z, ok * dok + p * dp, B, H, T, M, BT, BC, BM, NT, NC, NM)
+ ds = ds.to(s.dtype)
+ return dq, dk, dv, ds, None, None
+
+
+def chunk_abc(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ initial_state: Optional[Tuple[torch.Tensor]] = None,
+ output_final_state: bool = False,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`
+ s (torch.Tensor):
+ slot representations of shape `[B, H, T, M]` if `head_first=True` else `[B, T, H, M]`
+ initial_state (Optional[Tuple[torch.Tensor, torch.Tensor]]):
+ Initial states of shape `[B, H, K, M]` and `[B, H, M, V]`. Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[B, H, K, M]` and `[B, H, M, V]`. Default: `False`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+        final_state (Tuple[torch.Tensor, torch.Tensor]):
+ Final state of shape `[B, H, K, M]` and `[B, H, M, V]` if `output_final_state=True` else `None`.
+ """
+ if not head_first:
+ q, k, v, s = map(lambda x: x.transpose(1, 2), (q, k, v, s))
+ o, final_state = ChunkABCFunction.apply(q, k, v, s, initial_state, output_final_state)
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o, final_state
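+
+
+# A minimal usage sketch (illustrative only; the shapes and dtype are assumptions):
+#
+#     B, H, T, K, V, M = 2, 4, 1024, 64, 64, 32
+#     q = torch.randn(B, H, T, K, device='cuda', dtype=torch.bfloat16)
+#     k = torch.randn(B, H, T, K, device='cuda', dtype=torch.bfloat16)
+#     v = torch.randn(B, H, T, V, device='cuda', dtype=torch.bfloat16)
+#     s = torch.randn(B, H, T, M, device='cuda', dtype=torch.bfloat16)  # slot scores
+#     o, (hk, hv) = chunk_abc(q, k, v, s, output_final_state=True)
+#     assert o.shape == v.shape and hk.shape == (B, H, K, M) and hv.shape == (B, H, M, V)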
diff --git a/fla/ops/abc/naive.py b/fla/ops/abc/naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..a7f25c40db73bcf33d1599761be0008cc5be7c59
--- /dev/null
+++ b/fla/ops/abc/naive.py
@@ -0,0 +1,96 @@
+# -*- coding: utf-8 -*-
+
+from typing import Optional
+
+import torch
+from einops import repeat
+
+
+def naive_recurrent_abc(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: Optional[torch.Tensor] = None,
+ scale: Optional[int] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: Optional[bool] = False
+) -> torch.Tensor:
+ dtype = q.dtype
+
+ NG = q.shape[1]//k.shape[1]
+ # [batch_size, n_heads, seq_len, n_slots]
+ if g is None:
+ z = s.float().logcumsumexp(2)
+ g = torch.cat((z[:, :, :1], z[:, :, :-1]), 2) - z
+ s = torch.exp(s - z)
+ q, k, v, s, g = map(lambda x: x.float(), (q, k, v, s, g))
+ k, v, s, g = map(lambda x: repeat(x, 'b h t d -> b (h g) t d', g=NG), (k, v, s, g))
+ if initial_state is not None:
+ initial_state = tuple(map(lambda x: repeat(x, 'b h k v -> b (h g) k v', g=NG), initial_state))
+
+ B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1]
+
+ hk = torch.zeros(B, H, K, M, dtype=torch.float, device=q.device)
+ ok = torch.zeros_like(s)
+
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+
+ final_state = None
+ if initial_state is not None:
+ hk += initial_state[0]
+
+ for i in range(T):
+ q_i = q[:, :, i] * scale
+ k_i = k[:, :, i]
+ v_i = s[:, :, i]
+ g_i = g[:, :, i].exp()
+ hk = hk * g_i[..., None, :] + k_i[..., None] * v_i[..., None, :]
+ ok[:, :, i] = (q_i[..., None] * hk).sum(-2)
+
+ qv = ok.softmax(-1)
+ hv = torch.zeros(B, H, M, V, dtype=torch.float, device=q.device)
+ ov = torch.zeros_like(v)
+ if initial_state is not None:
+ hv += initial_state[1]
+
+ for i in range(T):
+ q_i = qv[:, :, i]
+ k_i = s[:, :, i]
+ v_i = v[:, :, i]
+ g_i = g[:, :, i].exp()
+ hv = hv * g_i[..., :, None] + k_i[..., None] * v_i[..., None, :]
+ ov[:, :, i] = (q_i[..., None] * hv).sum(-2)
+
+ if output_final_state:
+ final_state = (hk.view(B, -1, NG, K, M)[:, :, 0], hv.view(B, -1, NG, M, V)[:, :, 0])
+ return ov.to(dtype), final_state
+
+
+def naive_cumsum_abc(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor
+) -> Tuple[torch.Tensor, None]:
+ """
+ A simple implementation of vanilla ABC that is more aligned with the descriptions in the paper.
+ This is just for demonstration purposes, with no numerical stability guarantees.
+ """
+
+ dtype = q.dtype
+ q, k, v, s = map(lambda x: x.float(), (q, k, v, s))
+
+ scale = q.shape[-1] ** -0.5
+ # [batch_size, n_heads, seq_len, n_slots]
+ s = (s - s.max(2, True)[0]).exp()
+ z = s.cumsum(2)
+ # [batch_size, n_heads, seq_len, n_slots, d_head]
+ K = (s.unsqueeze(-1) * k.unsqueeze(-2)).cumsum(2) / z.unsqueeze(-1)
+ V = (s.unsqueeze(-1) * v.unsqueeze(-2)).cumsum(2) / z.unsqueeze(-1)
+ # [batch_size, n_heads, seq_len, n_slots]
+ p = torch.einsum('...d,...md->...m', q * scale, K).softmax(-1)
+ # [batch_size, n_heads, seq_len, d_head]
+ o = torch.einsum('...m,...md->...d', p, V)
+ return o.to(dtype), None
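+
+
+# The two references above implement the same vanilla ABC computation, once as
+# a step-by-step recurrence and once via cumulative sums, so on small inputs
+# they are expected to agree up to numerical error (illustrative sketch on CPU
+# with float32 inputs and hypothetical shapes):
+#
+# >>> B, H, T, K, V, M = 2, 2, 32, 16, 16, 8
+# >>> q, k = torch.randn(B, H, T, K), torch.randn(B, H, T, K)
+# >>> v, s = torch.randn(B, H, T, V), torch.randn(B, H, T, M)
+# >>> o_rec, _ = naive_recurrent_abc(q, k, v, s)
+# >>> o_cum, _ = naive_cumsum_abc(q, k, v, s)
+# >>> (o_rec - o_cum).abs().max()  # expected to be small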
diff --git a/fla/ops/based/__init__.py b/fla/ops/based/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..f20b31ba0ea4c7d345761fbd6ab5f6ced5136236
--- /dev/null
+++ b/fla/ops/based/__init__.py
@@ -0,0 +1,9 @@
+# -*- coding: utf-8 -*-
+
+from .fused_chunk import fused_chunk_based
+from .parallel import parallel_based
+
+__all__ = [
+ 'fused_chunk_based',
+ 'parallel_based'
+]
diff --git a/fla/ops/based/fused_chunk.py b/fla/ops/based/fused_chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..3e3c5df52859073ea2952aa86488a28c037600bf
--- /dev/null
+++ b/fla/ops/based/fused_chunk.py
@@ -0,0 +1,390 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.jit
+def fused_chunk_based_fwd_kernel(
+ q, # query [B, H, L, K]
+ k, # key [B, H, L, K]
+ v, # value [B, H, L, V]
+ o, # output [B, H, L, V]
+ z, # normalizer [B, H, L, 1]
+ s_k_h, # stride size: L * K
+ s_k_t, # stride size: K
+ s_k_d, # stride size: 1
+ s_v_h, # stride size: L * V
+ s_v_t, # stride size: V
+ s_v_d, # stride size: 1
+ scale, # K ** -0.5
+ B: tl.constexpr, # batch size
+ H: tl.constexpr, # H
+ T: tl.constexpr, # T
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+):
+ # indices
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ o_i = tl.arange(0, BT)
+
+ # [BT, BT]
+ m_s = o_i[:, None] >= o_i[None, :]
+
+ # [BV], zero-order taylor expansion
+ b_h_0o = tl.zeros([BV], dtype=tl.float32)
+ # [BK, BV], first-order taylor expansion
+ b_h_1o = tl.zeros([BK, BV], dtype=tl.float32)
+ # [BK, BK, BV] second-order taylor expansion
+ b_h_2o = tl.zeros([BK*BK, BV], dtype=tl.float32)
+
+ # make block pointers
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (0, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, 0), (BK, BT), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (0, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + (i_bh + i_k*B*H) * s_v_h, (T, V), (s_v_t, s_v_d), (0, i_v * BV), (BT, BV), (1, 0))
+
+ p_z = z + (i_bh + i_k * B * H) * T + tl.arange(0, BT)
+ k_2o = tl.zeros([1, BK * BK], dtype=tl.float32)
+ k_1o = tl.zeros([1, BK], dtype=tl.float32)
+ k_0o = 0
+
+ for i in range(0, tl.cdiv(T, BT)):
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BK*BK, BT]
+ b_k_2o = b_k[:, None, :] * b_k[None, :, :]
+ b_k_2o = tl.reshape(b_k_2o, [BK * BK, BT]).to(b_k.dtype)
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, BK]
+ b_q = (tl.load(p_q, boundary_check=(0, 1)) * scale).to(b_k.dtype)
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ b_z = tl.zeros([BT], dtype=tl.float32)
+
+ # interchunk
+ # zero-order
+ b_o += b_h_0o
+ b_z += k_0o
+ # first-order
+ b_o += tl.dot(b_q, b_h_1o.to(b_q.dtype), allow_tf32=False)
+ b_z += tl.sum(b_q * k_1o, axis=1)
+ # second-order
+ b_q_2o = b_q[:, :, None] * b_q[:, None, :]
+ b_q_2o = tl.reshape(b_q_2o, [BT, BK * BK]).to(b_k.dtype)
+ b_o += tl.dot(b_q_2o, b_h_2o.to(b_q_2o.dtype), allow_tf32=False) * 0.5
+ b_z += tl.sum(b_q_2o * k_2o, axis=1) * 0.5
+
+ # update running statistics
+ k_1o += tl.sum(b_k, axis=1)[None, :]
+ k_2o += tl.sum(b_k_2o, axis=1)[None, :]
+ k_0o += BT
+
+ # intrachunk
+ # [BT, BT]
+ b_s = tl.dot(b_q, b_k, allow_tf32=False)
+ b_s = 1 + b_s + 0.5 * b_s * b_s
+ b_s = tl.where(m_s, b_s, 0)
+ b_z += tl.sum(b_s, axis=1)
+ b_o += tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False)
+ # [BT, BV]
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_z, b_z.to(p_z.dtype.element_ty), mask=(i * BT + tl.arange(0, BT)) < T)
+
+ # update hidden state
+ # [BK, BV]
+ b_h_2o = b_h_2o + tl.dot(b_k_2o.to(b_v.dtype), b_v, allow_tf32=False)
+ b_h_1o = b_h_1o + tl.dot(b_k, b_v, allow_tf32=False)
+ b_h_0o = b_h_0o + tl.sum(b_v, axis=0)
+
+ p_q = tl.advance(p_q, (BT, 0))
+ p_k = tl.advance(p_k, (0, BT))
+ p_v = tl.advance(p_v, (BT, 0))
+ p_o = tl.advance(p_o, (BT, 0))
+ p_z += BT
+
+
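+# Both kernels in this file rely on the second-order Taylor expansion of the
+# exponential kernel used by Based:
+#     exp(q . k) ~= 1 + q . k + 0.5 * (q . k) ** 2
+# so the running inter-chunk state is kept in three pieces: a zero-order term
+# (the sum of v), a first-order term (the sum of k ⊗ v) and a flattened
+# second-order term (the sum of (k ⊗ k) ⊗ v), accumulated as `b_h_0o`,
+# `b_h_1o` and `b_h_2o` in the forward kernel above.
+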
+# Similar to Algorithm1 of https://arxiv.org/abs/2006.16236
+@triton.jit
+def fused_chunk_based_bwd_kernel(
+ # NV: number of splits in the V dimension. NK: number of splits in the K dimension
+ q, # query [B, H, L, K]
+ k, # key [B, H, L, K]
+ v, # value [B, H, L, V]
+ do, # gradient of output [B, H, L, V]
+ dz, # gradient of normalizer [B, H, L]
+ dq, # gradient of query [NV, B, H, L, K]
+ dk, # gradient of key [NV, B, H, L, K]
+ dv, # gradient of value [NK, B, H, L, V]
+ s_k_h, # stride size: L * K
+ s_k_t, # stride size: K
+ s_k_d, # stride size: 1
+ s_v_h, # stride size: L * V
+ s_v_t, # stride size: V
+ s_v_d, # stride size: 1
+ scale, # K ** -0.5
+ B: tl.constexpr, # B
+ H: tl.constexpr, # H
+ T: tl.constexpr, # T
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ o_i = tl.arange(0, BT)
+ m_s = o_i[:, None] >= o_i[None, :]
+
+ # [BV], zero-order taylor expansion
+ # b_h_0o = tl.zeros([BV], dtype=tl.float32)
+ # [BV, BK], first-order taylor expansion
+ b_h_1o = tl.zeros([BV, BK], dtype=tl.float32)
+ # [BV, BK*BK], second-order taylor expansion
+ b_h_2o = tl.zeros([BV, BK*BK], dtype=tl.float32)
+
+ k_1o = tl.zeros([1, BK], dtype=tl.float32)
+ k_2o = tl.zeros([1, BK * BK], dtype=tl.float32)
+
+ for i in range(0, tl.cdiv(T, BT)):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i * BT, i_k * BK), (BT, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i * BT), (BV, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dq = tl.make_block_ptr(dq + (i_bh + i_v*B*H) * s_k_h, (T, K), (s_k_t, s_k_d), (i*BT, i_k*BK), (BT, BK), (1, 0))
+ p_dz = dz + (i_bh) * T + tl.arange(0, BT) + i * BT
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+
+ # load tensors
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype)
+ b_dz = tl.load(p_dz, mask=(tl.arange(0, BT) + i * BT) < T)
+ # [BV, BT]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+
+ # inter-chunk
+ b_dq += tl.dot(b_do, (b_h_1o).to(b_do.dtype), allow_tf32=False)
+ if i_v == 0:
+ b_dq += b_dz[:, None] * k_1o
+ b_dq_2o = tl.dot(b_do, (b_h_2o).to(b_do.dtype), allow_tf32=False) * 0.5
+ if i_v == 0:
+ b_dq_2o += (b_dz[:, None] * k_2o) * 0.5
+ b_dq_2o = tl.reshape(b_dq_2o, [BT, BK, BK])
+ b_dq += tl.sum(b_dq_2o * b_q[:, :, None], axis=1)
+ b_dq += tl.sum(b_dq_2o * b_q[:, None, :], axis=2)
+ b_dq *= scale
+
+ # intra-chunk
+ # [BT, BT]
+ b_ds = tl.dot(b_do, b_v, allow_tf32=False)
+ if i_v == 0:
+ b_ds += b_dz[:, None]
+ b_ds = tl.where(m_s, b_ds, 0) * scale
+ b_s = tl.dot(b_q, tl.trans(b_k), allow_tf32=False)
+ b_s = tl.where(m_s, b_s, 0)
+ b_dq += tl.dot((b_ds * (1 + b_s)).to(b_q.dtype), b_k, allow_tf32=False)
+
+ # store
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+
+ # update hidden state
+ # [BT, BK*BK]
+ b_k_2o = b_k[:, :, None] * b_k[:, None, :]
+ b_k_2o = tl.reshape(b_k_2o, [BT, BK * BK]).to(b_k.dtype)
+ # [BV, BK*BK]
+ b_h_2o = b_h_2o + tl.dot(b_v, b_k_2o.to(b_v.dtype), allow_tf32=False)
+ # [BV, BK]
+ b_h_1o = b_h_1o + tl.dot(b_v, b_k, allow_tf32=False)
+
+ if i_v == 0:
+ # update running statistics
+ k_1o += tl.sum(b_k, axis=0)[None, :]
+ k_2o += tl.sum(b_k_2o, axis=0)[None, :]
+
+ tl.debug_barrier()
+ b_h_1o = None
+ b_h_2o = None
+
+ # [BK, BV], first-order taylor expansion
+ b_dh_1o = tl.zeros([BK, BV], dtype=tl.float32)
+ # [BK, BK, BV] second-order taylor expansion
+ b_dh_2o = tl.zeros([BK*BK, BV], dtype=tl.float32)
+ b_dh_0o = tl.zeros([BV], dtype=tl.float32)
+ m_s = tl.arange(0, BT)[:, None] <= tl.arange(0, BT)[None, :]
+
+ dq_1o = tl.zeros([1, BK], dtype=tl.float32)
+ dq_2o = tl.zeros([BK * BK, 1], dtype=tl.float32)
+
+ for i in range(tl.cdiv(T, BT) * BT - BT, -BT, -BT):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i, i_k * BK), (BT, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i, i_v * BV), (BT, BV), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (i_bh+i_v*B*H) * s_k_h, (T, K), (s_k_t, s_k_d), (i, i_k*BK), (BT, BK), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_bh+i_k*B*H) * s_v_h, (T, V), (s_v_t, s_v_d), (i, i_v*BV), (BT, BV), (1, 0))
+ p_dz = dz + (i_bh) * T + tl.arange(0, BT) + i
+
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dv = tl.zeros([BT, BV], dtype=tl.float32)
+
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype)
+ b_dz = tl.load(p_dz, mask=(tl.arange(0, BT)+i) < T)
+ b_q = (b_q * scale).to(b_k.dtype)
+
+ # intra chunk
+ b_ds = tl.dot(b_v, tl.trans(b_do), allow_tf32=False)
+ if i_v == 0:
+ b_ds += b_dz[None, :]
+ b_ds = tl.where(m_s, b_ds, 0)
+ b_s = tl.dot(b_k, b_q, allow_tf32=False)
+ b_s2 = 1 + b_s + 0.5 * b_s * b_s
+ b_s = tl.where(m_s, b_s, 0)
+ b_s2 = tl.where(m_s, b_s2, 0)
+ b_ds *= (1+b_s)
+
+ b_dk += tl.dot(b_ds.to(b_k.dtype), tl.trans(b_q), allow_tf32=False)
+ b_dv += tl.dot(b_s2.to(b_do.dtype), b_do, allow_tf32=False)
+
+ # inter chunk
+ b_k_2o = b_k[:, :, None] * b_k[:, None, :]
+ b_k_2o = tl.reshape(b_k_2o, [BT, BK * BK]).to(b_k.dtype)
+
+ b_dv += tl.dot(b_k, b_dh_1o.to(b_k.dtype), allow_tf32=False)
+ b_dv += tl.dot(b_k_2o, b_dh_2o.to(b_k.dtype), allow_tf32=False)
+ b_dv += b_dh_0o
+
+ b_dk += tl.dot(b_v, tl.trans(b_dh_1o).to(b_k.dtype), allow_tf32=False)
+
+ if i_v == 0:
+ b_dk += dq_1o
+
+ b_dk_2o = tl.dot(b_dh_2o.to(b_k.dtype), tl.trans(b_v), allow_tf32=False)
+ if i_v == 0:
+ b_dk_2o += dq_2o
+ b_dk_2o = tl.reshape(b_dk_2o, [BK, BK, BT])
+ b_k_fp32 = tl.trans(b_k.to(tl.float32))
+ b_dk2 = tl.sum(b_dk_2o * b_k_fp32[:, None, :], axis=0)
+ b_dk2 += tl.sum(b_dk_2o * b_k_fp32[None, :, :], axis=1)
+ b_dk += tl.trans(b_dk2)
+
+ # hidden state update
+ b_dh_0o += tl.sum(b_do, axis=0)
+ b_dh_1o = b_dh_1o + tl.dot(b_q, b_do, allow_tf32=False)
+ b_q_2o = b_q[None, :, :] * b_q[:, None, :]
+ b_q_2o = tl.reshape(b_q_2o, [BK * BK, BT]).to(b_k.dtype)
+ b_dh_2o = b_dh_2o + tl.dot(b_q_2o, b_do, allow_tf32=False) * 0.5
+
+ if i_v == 0:
+ dq_1o += (tl.sum(b_dz[None, :] * b_q, axis=1))[None, :]
+ dq_2o += (tl.sum(b_dz[None, :] * b_q_2o, axis=1) * 0.5)[:, None]
+
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+
+
+class FusedChunkBasedFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(ctx, q, k, v, scale=1):
+ B, H, T, K, V = *k.shape, v.shape[-1]
+
+ BT = 16
+ BK, BV = min(K, 16), min(V, 32)
+ BK, BV = max(BK, 16), max(BV, 16)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+
+ num_warps = 4
+
+ # the norm of o might explode, so we need to use float32 here
+ o = q.new_empty(NK, B, H, T, V, dtype=torch.float32)
+ z = q.new_empty(NK, B, H, T, dtype=torch.float32)
+
+ grid = (NV, NK, B * H)
+ fused_chunk_based_fwd_kernel[grid](
+ q, k, v, o, z,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ scale,
+ B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV,
+ num_warps=num_warps,
+ )
+ o = o.sum(0)
+ z = z.sum(0)
+ ctx.save_for_backward(q, k, v)
+ ctx.scale = scale
+ return o.to(q.dtype), z.to(z.dtype)
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dz):
+ q, k, v = ctx.saved_tensors
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ scale = ctx.scale
+
+ BT = 16
+ BK, BV = min(K, 16), min(V, 32)
+ BK, BV = max(BK, 16), max(BV, 16)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_stages = 1
+ num_warps = 4
+
+ dq = q.new_empty(NV, B, H, T, K)
+ dk = q.new_empty(NV, B, H, T, K)
+ dv = q.new_empty(NK, B, H, T, V)
+ grid = (NV, NK, B * H)
+
+ fused_chunk_based_bwd_kernel[grid](
+ q, k, v, do, dz, dq, dk, dv,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ scale,
+ B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dv = dv.sum(0)
+ return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), None
+
+
+def fused_chunk_based(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: Optional[float] = None,
+ use_norm: bool = True,
+ head_first: bool = True
+):
+ assert q.shape[-1] <= 16, 'only support feature dimension up to 16.'
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+ if not head_first:
+ q, k, v = map(lambda x: x.transpose(1, 2), (q, k, v))
+ o, z = FusedChunkBasedFunction.apply(q, k, v, scale)
+ if use_norm:
+ o = o / (z[..., None] + 1e-6)
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o.to(q.dtype)
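+
+
+# Usage sketch (illustrative only): the kernels require a CUDA device and a
+# feature dimension of at most 16; the shapes below are hypothetical.
+#
+# >>> B, H, T, K, V = 2, 4, 256, 16, 64
+# >>> q = torch.randn(B, H, T, K, device='cuda', dtype=torch.bfloat16)
+# >>> k = torch.randn(B, H, T, K, device='cuda', dtype=torch.bfloat16)
+# >>> v = torch.randn(B, H, T, V, device='cuda', dtype=torch.bfloat16)
+# >>> o = fused_chunk_based(q, k, v)  # [B, H, T, V], normalized output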
diff --git a/fla/ops/based/naive.py b/fla/ops/based/naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..4de614137ed28567ebb1df39c0892f498b91fb5a
--- /dev/null
+++ b/fla/ops/based/naive.py
@@ -0,0 +1,72 @@
+# -*- coding: utf-8 -*-
+
+from typing import Optional
+
+import torch
+from einops import rearrange
+
+
+def naive_parallel_based(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: Optional[float] = None,
+ use_norm: bool = True
+):
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+ q = q * scale
+ attn = q @ k.transpose(-2, -1)
+ attn = 1 + attn + 1/2 * (attn ** 2)
+ attn.masked_fill_(~torch.tril(torch.ones(
+ q.shape[-2], q.shape[-2], dtype=torch.bool, device=q.device)), 0)
+ o = attn @ v
+ if use_norm:
+ z = attn.sum(-1)
+ return o / (z[..., None] + 1e-6)
+ else:
+ return o
+
+
+def naive_chunk_based(q, k, v, chunk_size=256):
+ q = q * (q.shape[-1] ** -0.5)
+ # compute normalizer.
+ k_cumsum = torch.cumsum(k, dim=-2)
+ kk_cumsum = torch.cumsum(k.unsqueeze(-1) * k.unsqueeze(-2), dim=-3)
+ # first-order term
+ z = (q * k_cumsum).sum(-1)
+ # second-order term
+ z += (q.unsqueeze(-1) * q.unsqueeze(-2) * kk_cumsum).sum((-1, -2)) * 0.5
+ # zeroth-order term
+ z += (torch.arange(0, q.shape[-2]).to(z.device) * 1.0 + 1.0)[None, None, :]
+
+ # compute o
+ # constant term
+ _o = v.cumsum(-2)
+
+ q = rearrange(q, 'b h (n c) d -> b h n c d', c=chunk_size)
+
+ k = rearrange(k, 'b h (n c) d -> b h n c d', c=chunk_size)
+ v = rearrange(v, 'b h (n c) d -> b h n c d', c=chunk_size)
+
+ intra_chunk_attn = q @ k.transpose(-2, -1)
+ intra_chunk_attn = intra_chunk_attn + 1/2 * (intra_chunk_attn ** 2)
+ intra_chunk_attn.masked_fill_(~torch.tril(torch.ones(chunk_size, chunk_size, dtype=torch.bool, device=q.device)), 0)
+ o = intra_chunk_attn @ v
+
+ # quadratic term
+ kv = torch.einsum('b h n c x, b h n c y, b h n c z -> b h n x y z', k, k, v)
+ kv = kv.cumsum(2)
+ kv = torch.cat([torch.zeros_like(kv[:, :, :1]), kv[:, :, :-1]], dim=2)
+
+ o += 0.5 * torch.einsum('b h n x y z, b h n c x, b h n c y -> b h n c z', kv, q, q)
+
+ # linear term
+ kv = torch.einsum('b h n c x, b h n c y -> b h n x y', k, v)
+ kv = kv.cumsum(2)
+ kv = torch.cat([torch.zeros_like(kv[:, :, :1]), kv[:, :, :-1]], dim=2)
+ o += torch.einsum('b h n x y, b h n c x -> b h n c y', kv, q)
+
+ o = rearrange(o, 'b h n c d -> b h (n c) d')
+ o = o + _o
+ return o / (z[..., None] + 1e-6)
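+
+
+# The quadratic-attention reference and the chunked reference above compute the
+# same Based output (both use the 1 + x + 0.5 * x**2 feature map and the same
+# normalizer), so they are expected to agree up to numerical error whenever the
+# sequence length is divisible by `chunk_size` (illustrative sketch, CPU):
+#
+# >>> B, H, T, D = 2, 2, 128, 16
+# >>> q, k, v = (torch.randn(B, H, T, D) for _ in range(3))
+# >>> o_full = naive_parallel_based(q, k, v)
+# >>> o_chunk = naive_chunk_based(q, k, v, chunk_size=32)
+# >>> (o_full - o_chunk).abs().max()  # expected to be small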
diff --git a/fla/ops/based/parallel.py b/fla/ops/based/parallel.py
new file mode 100644
index 0000000000000000000000000000000000000000..70330eae283756155b1f1aa875550a5a9aa0d591
--- /dev/null
+++ b/fla/ops/based/parallel.py
@@ -0,0 +1,409 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+# Based: An Educational and Effective Sequence Mixer
+# https://hazyresearch.stanford.edu/blog/2023-12-11-zoology2-based
+
+
+@triton.jit
+def parallel_based_fwd_kernel(
+ q, # query [B, H, L, K]
+ k, # key [B, H, L, K]
+ v, # value [B, H, L, V]
+ o, # output [B, H, L, V]
+ z, # normalizer [B, H, L]
+ s_k_h, # stride size: L * K
+ s_k_t, # stride size: K
+ s_k_d, # stride size: 1
+ s_v_h, # stride size: L * V
+ s_v_t, # stride size: V
+ s_v_d, # stride size: 1
+ scale, # K ** -0.5
+ B: tl.constexpr, # batch size
+ H: tl.constexpr, # H
+ T: tl.constexpr, # T
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ BTL: tl.constexpr, # BLOCK SIZE along the sequence dimension for Q
+ BTS: tl.constexpr, # BLOCK SIZE along the sequence dimension for K/V
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+):
+ # i_c: chunk index. used for sequence parallelism
+ i_kv, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ NV = tl.cdiv(V, BV)
+ i_k = i_kv // (NV)
+ i_v = i_kv % (NV)
+
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_c * BTL, i_k * BK), (BTL, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, 0), (BK, BTS), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (0, i_v * BV), (BTS, BV), (1, 0))
+
+ # [BTL, BK] block Q, kept in SRAM throughout the whole kernel
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ b_o = tl.zeros([BTL, BV], dtype=tl.float32)
+ b_z = tl.zeros([BTL], dtype=tl.float32)
+
+ # Q block and K block have no overlap
+ # no need for mask, thereby saving flops
+ for _ in range(0, i_c * BTL, BTS):
+ # [BK, BTS]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+
+ # [BTS, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BTL, BTS]
+ b_s = tl.dot(b_q, (b_k), allow_tf32=False)
+ b_s = 1 + b_s + 0.5 * b_s * b_s
+ b_z += tl.sum(b_s, axis=1)
+
+ # [BTL, BV]
+ b_o = b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)
+ p_k = tl.advance(p_k, (0, BTS))
+ p_v = tl.advance(p_v, (BTS, 0))
+
+ # rescale inter-chunk output
+ tl.debug_barrier()
+ o_q = tl.arange(0, BTL)
+ # sync threads, easy for compiler to optimize
+ # tl.debug_barrier()
+
+ o_k = tl.arange(0, BTS)
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_c * BTL), (BK, BTS), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_c * BTL, i_v * BV), (BTS, BV), (1, 0))
+ # Q block and K block have overlap. masks required
+ for _ in range(i_c * BTL, (i_c + 1) * BTL, BTS):
+ # [BK, BTS]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BTS, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BTL, BTS]
+ m_s = o_q[:, None] >= o_k[None, :]
+ b_s = tl.dot(b_q, b_k, allow_tf32=False)
+ b_s = 1 + b_s + 0.5 * b_s * b_s
+ b_s = tl.where(m_s, b_s, 0)
+ b_z += tl.sum(b_s, axis=1)
+ # [BTL, BV]
+ b_o += tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False)
+
+ p_k = tl.advance(p_k, (0, BTS))
+ p_v = tl.advance(p_v, (BTS, 0))
+ o_k += BTS
+
+ p_o = tl.make_block_ptr(o + (i_bh + B * H * i_k) * s_v_h, (T, V), (s_v_t, s_v_d), (i_c*BTL, i_v*BV), (BTL, BV), (1, 0))
+ p_z = z + (i_bh + B * H * i_k) * T + i_c * BTL + tl.arange(0, BTL)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_z, b_z.to(p_z.dtype.element_ty), mask=((i_c * BTL + tl.arange(0, BTL)) < T))
+
+
+@triton.jit
+def _parallel_based_bwd_dq(
+ i_bh,
+ i_c,
+ i_k,
+ i_v,
+ i_h,
+ q,
+ k,
+ v,
+ do,
+ dz,
+ dq,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t, s_v_d, B, H, T, scale,
+ BTL: tl.constexpr,
+ BTS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+):
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d),
+ (i_c * BTL, i_v * BV), (BTL, BV), (1, 0))
+ p_q = tl.make_block_ptr(q + (i_bh) * s_k_h, (T, K),
+ (s_k_t, s_k_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0))
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype)
+ b_q = (b_q * scale).to(b_q.dtype)
+ b_dq = tl.zeros([BTL, BK], dtype=tl.float32)
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (0, i_k * BK), (BTS, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, 0), (BV, BTS), (0, 1))
+ p_dz = dz + i_bh * T + i_c * BTL + tl.arange(0, BTL)
+ b_dz = tl.load(p_dz, mask=(i_c * BTL + tl.arange(0, BTL)) < T)
+
+ for _ in range(0, i_c * BTL, BTS):
+ # [BTS, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BV, BTS]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BTL, BTS]
+ b_ds = tl.dot(b_do, b_v, allow_tf32=False)
+ if i_v == 0:
+ b_ds += b_dz[:, None]
+ else:
+ b_ds = b_ds
+ b_s = tl.dot(b_q, tl.trans(b_k), allow_tf32=False)
+ # [BTL, BK]
+ b_dq += tl.dot((b_ds * (1 + b_s)).to(b_v.dtype), b_k, allow_tf32=False)
+ p_k = tl.advance(p_k, (BTS, 0))
+ p_v = tl.advance(p_v, (0, BTS))
+
+ b_dq *= scale
+ o_q = tl.arange(0, BTL)
+ o_k = tl.arange(0, BTS)
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_c * BTL, i_k * BK), (BTS, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i_c * BTL), (BV, BTS), (0, 1))
+ # Q block and K block have overlap. masks required
+ for _ in range(i_c * BTL, (i_c + 1) * BTL, BTS):
+ # [BTS, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BV, BTS]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BTL, BTS]
+ m_s = o_q[:, None] >= o_k[None, :]
+ b_ds = tl.dot(b_do, b_v, allow_tf32=False)
+ if i_v == 0:
+ b_ds += b_dz[:, None]
+ else:
+ b_ds = b_ds
+ b_ds = tl.where(m_s, b_ds, 0) * scale
+ b_s = tl.dot(b_q, tl.trans(b_k), allow_tf32=False)
+ b_s = tl.where(m_s, b_s, 0)
+ # [BTL, BK]
+ b_dq += tl.dot((b_ds + b_ds * b_s).to(b_k.dtype),
+ b_k, allow_tf32=False)
+ p_k = tl.advance(p_k, (BTS, 0))
+ p_v = tl.advance(p_v, (0, BTS))
+ o_k += BTS
+ p_dq = tl.make_block_ptr(dq + (i_bh + B * H * i_v) * s_k_h, (T, K),
+ (s_k_t, s_k_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0))
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ return
+
+
+@triton.jit
+def _parallel_based_bwd_dkv(
+ i_bh, i_c, i_k, i_v, i_h,
+ q, k, v, do, dz, dk, dv, s_k_h, s_k_t, s_k_d, s_v_h,
+ s_v_t, s_v_d, B, H, T, scale,
+ BTL: tl.constexpr, BTS: tl.constexpr, BK: tl.constexpr, BV: tl.constexpr,
+ K: tl.constexpr, V: tl.constexpr,
+):
+ # compute dk dv
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_c * BTL, i_k * BK), (BTL, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_c * BTL, i_v * BV), (BTL, BV), (1, 0))
+ b_k, b_v = tl.load(p_k, boundary_check=(0, 1)), tl.load(
+ p_v, boundary_check=(0, 1))
+ b_dk, b_dv = tl.zeros([BTL, BK], dtype=tl.float32), tl.zeros(
+ [BTL, BV], dtype=tl.float32)
+
+ for i in range((tl.cdiv(T, BTS) * BTS)-BTS, (i_c + 1) * BTL - BTS, -BTS):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i), (BK, BTS), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i), (BV, BTS), (0, 1))
+ p_dz = dz + i_bh * T + i + tl.arange(0, BTS)
+ b_q = tl.load(p_q, boundary_check=(0, 1)) # [BK, BTS]
+ b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype) # [BV, BTS]
+ b_dz = tl.load(p_dz, mask=(i + tl.arange(0, BTS)) < T)
+ b_s = tl.dot(b_k.to(b_q.dtype), b_q, allow_tf32=False) * \
+ scale # [BTL, BTS]
+ b_s2 = 1 + b_s + 0.5 * b_s * b_s
+ b_dv += tl.dot(b_s2.to(b_q.dtype), tl.trans(b_do), allow_tf32=False)
+ b_ds = tl.dot(b_v, b_do, allow_tf32=False) * scale
+ if i_v == 0:
+ b_ds += b_dz[None, :] * scale
+ else:
+ b_ds = b_ds
+ b_dk += tl.dot((b_ds + b_ds * b_s).to(b_q.dtype), tl.trans(b_q), allow_tf32=False)
+
+ tl.debug_barrier()
+ o_q, o_k = tl.arange(0, BTS), tl.arange(0, BTL)
+ for i in range(i_c*BTL, (i_c+1)*BTL, BTS):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i), (BK, BTS), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i), (BV, BTS), (0, 1))
+ p_dz = dz + i_bh * T + i + tl.arange(0, BTS)
+ b_q = tl.load(p_q, boundary_check=(0, 1)) # [BK, BTS]
+ b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype)
+ b_dz = tl.load(p_dz, mask=(i + tl.arange(0, BTS)) < T)
+ # [BK, BQ]
+ m_s = o_k[:, None] <= o_q[None, :]
+ b_s = tl.dot(b_k, b_q, allow_tf32=False) * scale
+ b_s2 = 1 + b_s + 0.5 * b_s * b_s
+ b_s = tl.where(m_s, b_s, 0)
+ b_s2 = tl.where(m_s, b_s2, 0)
+
+ b_ds = tl.dot(b_v, b_do, allow_tf32=False)
+ if i_v == 0:
+ b_ds += b_dz[None, :]
+ else:
+ b_ds = b_ds
+ b_ds = tl.where(m_s, b_ds, 0) * scale
+ # [BK, BD]
+ b_dv += tl.dot(b_s2.to(b_q.dtype), tl.trans(b_do), allow_tf32=False)
+ b_dk += tl.dot((b_ds + b_ds * b_s).to(b_q.dtype),
+ tl.trans(b_q), allow_tf32=False)
+ o_q += BTS
+
+ p_dk = tl.make_block_ptr(dk + (i_bh + B * H * i_v) * s_k_h, (T, K),
+ (s_k_t, s_k_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_bh + B * H * i_k) * s_v_h, (T, V),
+ (s_v_t, s_v_d), (i_c*BTL, i_v*BV), (BTL, BV), (1, 0))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+ return
+
+
+@triton.jit
+def parallel_based_bwd_kernel(
+ q,
+ k,
+ v,
+ do,
+ dz,
+ dq,
+ dk,
+ dv,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BTL: tl.constexpr,
+ BTS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+):
+ i_kv, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ NV = tl.cdiv(V, BV)
+ i_k = i_kv // (NV)
+ i_v = i_kv % (NV)
+ i_h = i_bh % H
+ _parallel_based_bwd_dq(
+ i_bh, i_c, i_k, i_v, i_h,
+ q, k, v, do, dz, dq, s_k_h, s_k_t, s_k_d, s_v_h,
+ s_v_t, s_v_d, B, H, T, scale, BTL=BTL, BTS=BTS, BK=BK, BV=BV, K=K, V=V
+ )
+ tl.debug_barrier()
+ _parallel_based_bwd_dkv(
+ i_bh, i_c, i_k, i_v, i_h,
+ q, k, v, do, dz, dk, dv, s_k_h, s_k_t, s_k_d, s_v_h,
+ s_v_t, s_v_d, B, H, T, scale, BTL, BTS, BK, BV, K, V
+ )
+
+
+class ParallelBasedFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(ctx, q, k, v, scale):
+ BTL, BTS = 128, 32
+ assert BTL % BTS == 0
+ # assert q.shape[-1] % 16 == 0
+ BK = min(128, triton.next_power_of_2(k.shape[-1]))
+ BV = min(128, triton.next_power_of_2(v.shape[-1]))
+ BK, BV = max(BK, 16), max(BV, 16)
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ num_stages = 2
+ num_warps = 4
+ NK = triton.cdiv(K, BK)
+ NV = triton.cdiv(V, BV)
+ grid = (NK * NV, triton.cdiv(T, BTL), B * H)
+
+ assert NK == 1, "will encounter some synchronization issue if not."
+
+ o = torch.empty(NK, B, H, T, V, device=q.device)
+ z = torch.empty(NK, B, H, T, device=q.device)
+ parallel_based_fwd_kernel[grid](
+ q, k, v, o, z,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ scale,
+ B=B, H=H, T=T, K=K, V=V,
+ BTL=BTL, BTS=BTS, BK=BK, BV=BV,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ ctx.save_for_backward(q, k, v)
+ ctx.scale = scale
+ return o.sum(0).to(q.dtype), z.sum(0).to(q.dtype)
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dz):
+ q, k, v = ctx.saved_tensors
+ scale = ctx.scale
+ BTL, BTS = 64, 32
+ assert BTL % BTS == 0
+ BK = min(128, triton.next_power_of_2(k.shape[-1]))
+ BV = min(128, triton.next_power_of_2(v.shape[-1]))
+ BK, BV = max(BK, 16), max(BV, 16)
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ num_stages = 2
+ num_warps = 4
+ NK = triton.cdiv(K, BK)
+ NV = triton.cdiv(V, BV)
+ grid = (NK * NV, triton.cdiv(T, BTL), B * H)
+
+ assert NK == 1, "will encounter some synchronization issue if not"
+
+ dq = torch.empty(NV, B, H, T, K, dtype=q.dtype, device=q.device)
+ dk = torch.empty(NV, B, H, T, K, dtype=q.dtype, device=q.device)
+ dv = torch.empty(NK, B, H, T, V, dtype=q.dtype, device=q.device)
+
+ parallel_based_bwd_kernel[grid](
+ q, k, v, do, dz, dq, dk, dv,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ scale,
+ B=B, H=H, T=T, K=K, V=V,
+ BTL=BTL, BTS=BTS, BK=BK, BV=BV,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+
+ return dq.sum(0).to(q.dtype), dk.sum(0).to(k.dtype), dv.sum(0).to(v.dtype), None
+
+
+triton_parallel_based = ParallelBasedFunction.apply
+
+
+def parallel_based(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: Optional[float] = None,
+ use_norm: bool = True,
+ head_first: bool = True
+):
+ assert q.shape[-1] <= 128, "only support feature dim up to 128"
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+ if not head_first:
+ q, k, v = map(lambda x: x.transpose(1, 2), (q, k, v))
+ o, z = triton_parallel_based(q, k, v, scale)
+ if use_norm:
+ o = o / (z[..., None] + 1e-6)
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o.to(q.dtype)
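+
+
+# Usage sketch (illustrative only): requires a CUDA device; the feature
+# dimension can go up to 128 here. The shapes below are hypothetical.
+#
+# >>> B, H, T, K, V = 2, 4, 512, 128, 128
+# >>> q = torch.randn(B, H, T, K, device='cuda', dtype=torch.bfloat16)
+# >>> k = torch.randn(B, H, T, K, device='cuda', dtype=torch.bfloat16)
+# >>> v = torch.randn(B, H, T, V, device='cuda', dtype=torch.bfloat16)
+# >>> o = parallel_based(q, k, v)  # [B, H, T, V], normalized output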
diff --git a/fla/ops/common/__init__.py b/fla/ops/common/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..40a96afc6ff09d58a702b76e3f7dd412fe975e26
--- /dev/null
+++ b/fla/ops/common/__init__.py
@@ -0,0 +1 @@
+# -*- coding: utf-8 -*-
diff --git a/fla/ops/common/chunk_h.py b/fla/ops/common/chunk_h.py
new file mode 100644
index 0000000000000000000000000000000000000000..580cd01c460e114cd54f5e19524fe02a34e763da
--- /dev/null
+++ b/fla/ops/common/chunk_h.py
@@ -0,0 +1,395 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+
+@triton.heuristics({
+ 'USE_INITIAL_STATE': lambda args: args['h0'] is not None,
+ 'STORE_FINAL_STATE': lambda args: args['ht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.autotune(
+ configs=[
+ triton.Config({'BK': BK, 'BV': BV}, num_warps=num_warps)
+ for BK in [32, 64, 128]
+ for BV in [32, 64, 128]
+ for num_warps in [1, 2, 4, 8]
+ ],
+ key=['BT', 'USE_G', 'USE_GK', 'USE_GV'],
+)
+@triton.jit
+def chunk_fwd_kernel_h(
+ k,
+ v,
+ h,
+ g,
+ gk,
+ gv,
+ h0,
+ ht,
+ offsets,
+ c_offsets,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_G: tl.constexpr,
+ USE_GK: tl.constexpr,
+ USE_GV: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr,
+ STORE_FINAL_STATE: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_v, i_nh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_n, i_h = i_nh // H, i_nh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ boh = tl.load(c_offsets + i_n).to(tl.int32)
+ else:
+ bos, eos = i_n * T, i_n * T + T
+ NT = tl.cdiv(T, BT)
+ boh = i_n * NT
+
+ # [BK, BV]
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h0 = tl.make_block_ptr(h0 + i_nh * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_h = tl.load(p_h0, boundary_check=(0, 1)).to(tl.float32)
+
+ for i_t in range(NT):
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_nh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_v = tl.make_block_ptr(v + i_nh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + (i_nh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k + (bos*H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_v = tl.make_block_ptr(v + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + ((boh + i_t) * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1))
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ last_idx = min((i_t + 1) * BT, T) - 1
+
+ # scalar decay
+ if USE_G:
+ if HEAD_FIRST:
+ b_g_last = tl.load(g + i_nh * T + last_idx)
+ p_g = g + i_nh * T + i_t * BT + tl.arange(0, BT)
+ p_g = tl.max_contiguous(tl.multiple_of(p_g, BT), BT)
+ else:
+ b_g_last = tl.load(g + bos * H + last_idx * H + i_h)
+ p_g = g + bos*H + (i_t * BT + tl.arange(0, BT)) * H + i_h
+ b_h *= tl.exp(b_g_last)
+ b_g = tl.load(p_g, mask=(i_t * BT + tl.arange(0, BT) < T), other=0.)
+ b_v = (b_v * tl.exp(b_g_last - b_g)[:, None]).to(b_v.dtype)
+
+ # vector decay, h = Diag(gk) @ h
+ if USE_GK:
+ if HEAD_FIRST:
+ p_gk = tl.make_block_ptr(gk + i_nh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_gk_last = gk + i_nh * T*K + last_idx * K + i_k * BK + tl.arange(0, BK)
+ else:
+ p_gk = tl.make_block_ptr(gk + (bos*H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_gk_last = gk + (bos + last_idx) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_gk_last = tl.max_contiguous(tl.multiple_of(p_gk_last, BK), BK)
+ b_gk_last = tl.load(p_gk_last, mask=(i_k * BK + tl.arange(0, BK) < K), other=0.)
+ b_h *= tl.exp(b_gk_last)[:, None]
+
+ b_gk = tl.load(p_gk, boundary_check=(0, 1))
+ b_k = (b_k * tl.exp(b_gk_last[:, None] - b_gk)).to(b_k.dtype)
+
+ # vector decay, h = h @ Diag(gv)
+ if USE_GV:
+ if HEAD_FIRST:
+ p_gv = tl.make_block_ptr(gv + i_nh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_gv_last = gv + i_nh * T*V + last_idx * V + i_v * BV + tl.arange(0, BV)
+ else:
+ p_gv = tl.make_block_ptr(gv + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_gv_last = gv + (bos + last_idx) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_gv_last = tl.max_contiguous(tl.multiple_of(p_gv_last, BV), BV)
+ b_gv_last = tl.load(p_gv_last, mask=(i_v * BV + tl.arange(0, BV) < V), other=0.)
+ b_h *= tl.exp(b_gv_last)[None, :]
+
+ b_gv = tl.load(p_gv, boundary_check=(0, 1))
+ b_v = (b_v * tl.exp(b_gv_last[None, :] - b_gv)).to(b_v.dtype)
+
+ b_h += tl.dot(b_k, b_v)
+
+ if STORE_FINAL_STATE:
+ p_ht = tl.make_block_ptr(ht + i_nh * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.heuristics({
+ 'STORE_INITIAL_STATE_GRADIENT': lambda args: args['dh0'] is not None,
+ 'USE_FINAL_STATE_GRADIENT': lambda args: args['dht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.autotune(
+ configs=[
+ triton.Config({'BK': BK, 'BV': BV}, num_warps=num_warps)
+ for BK in [32, 64, 128]
+ for BV in [32, 64, 128]
+ for num_warps in [1, 2, 4, 8]
+ ],
+ key=['BT', 'USE_G', 'USE_GK', 'USE_GV'],
+)
+@triton.jit
+def chunk_bwd_kernel_dh(
+ q,
+ g,
+ gk,
+ gv,
+ do,
+ dh,
+ dht,
+ dh0,
+ offsets,
+ c_offsets,
+ scale,
+ T: tl.constexpr,
+ HQ: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NG: tl.constexpr,
+ USE_G: tl.constexpr,
+ USE_GK: tl.constexpr,
+ USE_GV: tl.constexpr,
+ STORE_INITIAL_STATE_GRADIENT: tl.constexpr,
+ USE_FINAL_STATE_GRADIENT: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_v, i_nh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_bg = i_nh // NG
+ i_n, i_hq = i_nh // HQ, i_nh % HQ
+ i_h = i_hq // NG
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ boh = tl.load(c_offsets + i_n).to(tl.int32)
+ else:
+ bos, eos = i_n * T, i_n * T + T
+ NT = tl.cdiv(T, BT)
+ boh = i_n * NT
+
+ # [BK, BV]
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_FINAL_STATE_GRADIENT:
+ p_dht = tl.make_block_ptr(dht + i_nh * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_dh += tl.load(p_dht, boundary_check=(0, 1)).to(tl.float32)
+
+ for i_t in range(NT - 1, -1, -1):
+ if HEAD_FIRST:
+ p_dh = tl.make_block_ptr(dh + (i_nh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_dh = tl.make_block_ptr(dh + ((boh+i_t) * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1))
+ last_idx = min(i_t * BT + BT, T) - 1
+ # [BK, BT]
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_nh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + i_nh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos*HQ + i_hq) * K, (K, T), (1, HQ*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + (bos*HQ + i_hq) * V, (T, V), (HQ*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BT, BV]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+
+ if USE_G:
+ if HEAD_FIRST:
+ p_g = g + i_bg * T + i_t * BT + tl.arange(0, BT)
+ p_g = tl.max_contiguous(tl.multiple_of(p_g, BT), BT)
+ b_g_last = tl.load(g + i_bg * T + last_idx)
+ else:
+ p_g = g + (bos + i_t * BT + tl.arange(0, BT)) * H + i_h
+ b_g_last = tl.load(g + (bos + last_idx) * H + i_h)
+ b_g = tl.load(p_g, mask=(i_t * BT + tl.arange(0, BT) < T), other=0.)
+ b_q = (b_q * tl.exp(b_g)[None, :]).to(b_q.dtype)
+
+ b_dh *= tl.exp(b_g_last)
+
+ if USE_GK:
+ if HEAD_FIRST:
+ p_gk = tl.make_block_ptr(gk + i_bg * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_gk_last = gk + (i_bg * T + last_idx) * K + i_k * BK + tl.arange(0, BK)
+
+ else:
+ p_gk = tl.make_block_ptr(gk + (bos*H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_gk_last = gk + (bos + last_idx) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_gk_last = tl.max_contiguous(tl.multiple_of(p_gk_last, BK), BK)
+
+ b_gk = tl.load(p_gk, boundary_check=(0, 1))
+ b_q = (b_q * tl.exp(b_gk)).to(b_q.dtype)
+ b_gk_last = tl.load(p_gk_last, mask=(i_k * BK + tl.arange(0, BK) < K), other=0.)
+ b_dh *= tl.exp(b_gk_last)[:, None]
+
+ if USE_GV:
+ if HEAD_FIRST:
+ p_gv = tl.make_block_ptr(gv + i_bg * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_gv_last = gv + (i_bg * T + last_idx) * V + i_v * BV + tl.arange(0, BV)
+ else:
+ p_gv = tl.make_block_ptr(gv + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_gv_last = gv + (bos + last_idx) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_gv_last = tl.max_contiguous(tl.multiple_of(p_gv_last, BV), BV)
+
+ b_gv = tl.load(p_gv, boundary_check=(0, 1))
+ b_do = (b_do * tl.exp(b_gv)).to(b_do.dtype)
+
+ b_gv_last = tl.load(p_gv_last, mask=(i_v * BV + tl.arange(0, BV) < V), other=0.)
+ b_dh *= tl.exp(b_gv_last)[None, :]
+
+ b_dh += tl.dot(b_q, b_do)
+
+ if STORE_INITIAL_STATE_GRADIENT:
+ p_dh0 = tl.make_block_ptr(dh0 + i_nh * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_dh0, b_dh.to(p_dh0.dtype.element_ty), boundary_check=(0, 1))
+
+
+def chunk_fwd_h(
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ gk: torch.Tensor,
+ gv: torch.Tensor,
+ h0: torch.Tensor,
+ output_final_state: bool,
+ states_in_fp32: bool = False,
+ offsets: Optional[torch.Tensor] = None,
+ c_offsets: Optional[torch.Tensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ BT = chunk_size
+ # N: the actual number of sequences in the batch with either equal or variable lengths
+ if offsets is None:
+ N, NT, c_offsets = B, triton.cdiv(T, BT), None
+ else:
+ N = len(offsets) - 1
+ if c_offsets is None:
+ c_offsets = torch.cat([offsets.new_tensor([0]), triton.cdiv(offsets[1:] - offsets[:-1], BT)]).cumsum(-1)
+ NT = c_offsets[-1]
+
+ if head_first:
+ h = k.new_empty(B, H, NT, K, V, dtype=k.dtype if not states_in_fp32 else torch.float32)
+ else:
+ h = k.new_empty(B, NT, H, K, V, dtype=k.dtype if not states_in_fp32 else torch.float32)
+ ht = k.new_empty(N, H, K, V, dtype=torch.float32) if output_final_state else None
+
+ def grid(meta): return (triton.cdiv(K, meta['BK']), triton.cdiv(V, meta['BV']), N * H)
+ chunk_fwd_kernel_h[grid](
+ k=k,
+ v=v,
+ h=h,
+ g=g,
+ gk=gk,
+ gv=gv,
+ h0=h0,
+ ht=ht,
+ offsets=offsets,
+ c_offsets=c_offsets,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ USE_G=g is not None,
+ USE_GK=gk is not None,
+ USE_GV=gv is not None,
+ HEAD_FIRST=head_first
+ )
+ return h, ht
+
+
+def chunk_bwd_dh(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ gk: torch.Tensor,
+ gv: torch.Tensor,
+ do: torch.Tensor,
+ h0: torch.Tensor,
+ dht: torch.Tensor,
+ scale: float,
+ states_in_fp32: bool = False,
+ offsets: Optional[torch.Tensor] = None,
+ c_offsets: Optional[torch.Tensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ HQ = q.shape[1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ HQ = q.shape[2]
+ BT = chunk_size
+ # N: the actual number of sequences in the batch with either equal or variable lengths
+ if offsets is None:
+ N, NT, c_offsets = B, triton.cdiv(T, BT), None
+ else:
+ N = len(offsets) - 1
+ if c_offsets is None:
+ c_offsets = torch.cat([offsets.new_tensor([0]), triton.cdiv(offsets[1:] - offsets[:-1], BT)]).cumsum(-1)
+ NT = c_offsets[-1]
+ # number of groups in GQA
+ NG = HQ // H
+
+ if head_first:
+ dh = k.new_empty(B, HQ, NT, K, V, dtype=k.dtype if not states_in_fp32 else torch.float32)
+ else:
+ dh = k.new_empty(B, NT, HQ, K, V, dtype=k.dtype if not states_in_fp32 else torch.float32)
+ dh0 = torch.empty_like(h0, dtype=torch.float32) if h0 is not None else None
+
+ def grid(meta): return (triton.cdiv(K, meta['BK']), triton.cdiv(V, meta['BV']), N * HQ)
+ chunk_bwd_kernel_dh[grid](
+ q=q,
+ g=g,
+ gk=gk,
+ gv=gv,
+ do=do,
+ dh=dh,
+ dht=dht,
+ dh0=dh0,
+ offsets=offsets,
+ c_offsets=c_offsets,
+ scale=scale,
+ T=T,
+ HQ=HQ,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ NG=NG,
+ USE_G=g is not None,
+ USE_GK=gk is not None,
+ USE_GV=gv is not None,
+ HEAD_FIRST=head_first,
+ )
+ return dh, dh0
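+
+
+# For variable-length inputs, `offsets` holds the cumulative sequence
+# boundaries and `c_offsets` the cumulative number of chunks per sequence,
+# derived exactly as in the code above. For example, with two sequences of
+# lengths 100 and 256 and `chunk_size=64`:
+#
+# >>> offsets = torch.tensor([0, 100, 356])
+# >>> c_offsets = torch.cat(
+# ...     [offsets.new_tensor([0]), triton.cdiv(offsets[1:] - offsets[:-1], 64)]
+# ... ).cumsum(-1)  # tensor([0, 2, 6]): 2 chunks + 4 chunks, so NT = 6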
diff --git a/fla/ops/common/fused_recurrent.py b/fla/ops/common/fused_recurrent.py
new file mode 100644
index 0000000000000000000000000000000000000000..e418f8a10c5e6b6702c248b6b0b2a15d584b2c21
--- /dev/null
+++ b/fla/ops/common/fused_recurrent.py
@@ -0,0 +1,577 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.utils import chunk_global_cumsum
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ ],
+ key=["BK", "BV", "USE_GK", "USE_GV", "USE_G"],
+)
+@triton.heuristics({
+ 'USE_INITIAL_STATE': lambda args: args['h0'] is not None,
+ 'STORE_FINAL_STATE': lambda args: args['ht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.jit
+def fused_recurrent_fwd_kernel(
+ q, # query [B, H, T, K]/[B, T, H, K]
+ k, # key [B, H, T, K]/[B, T, H, K]
+ v, # value [B, H, T, V]/[B, T, H, V]
+ g, # log gate [B, H, T]/[B, T, H] or None
+ gk, # log gate [B, H, T, K]/[B, T, H, K] or None
+ gv, # log gate [B, H, T, V]/[B, T, H, V] or None
+ o, # output [NK, B, H, T, V]/[NK, B, T, H, V]
+ h0, # initial hidden state [B, H, K, V]
+ ht, # final hidden state [B, H, K, V]
+ offsets,
+ scale,
+ B: tl.constexpr,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ REVERSE: tl.constexpr, # whether to reverse the recurrence
+ USE_G: tl.constexpr, # whether to use g
+ USE_GK: tl.constexpr, # whether to use gk
+ USE_GV: tl.constexpr, # whether to use gv
+ USE_INITIAL_STATE: tl.constexpr, # whether to use initial state
+ STORE_FINAL_STATE: tl.constexpr, # whether to store final state
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ # indices
+ i_v, i_k, i_nh = tl.program_id(0).to(tl.int64), tl.program_id(1).to(tl.int64), tl.program_id(2).to(tl.int64)
+ i_n, i_h = i_nh // H, i_nh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int64), tl.load(offsets + i_n + 1).to(tl.int64)
+ all = T
+ T = eos - bos
+ else:
+ bos, eos = i_n * T, i_n * T + T
+ all = B * T
+
+ if HEAD_FIRST:
+ p_q = q + i_nh * T*K + ((T-1) * K if REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ p_k = k + i_nh * T*K + ((T-1) * K if REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ p_v = v + i_nh * T*V + ((T-1) * V if REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ p_o = o + (i_k * B*H + i_nh) * T*V + ((T-1) * V if REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ if USE_G:
+ p_g = g + i_nh * T + ((T-1) if REVERSE else 0)
+ if USE_GK:
+ p_gk = gk + i_nh * T*K + ((T-1) * K if REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ if USE_GV:
+ p_gv = gv + i_nh * T*V + ((T-1) * V if REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ else:
+ p_q = q + (bos + ((T-1) if REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_k = k + (bos + ((T-1) if REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_v = v + (bos + ((T-1) if REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_o = o + ((i_k * all + bos) + ((T-1) if REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ if USE_G:
+ p_g = g + (bos + ((T-1) if REVERSE else 0)) * H + i_h
+ if USE_GK:
+ p_gk = gk + (bos + ((T-1) if REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ if USE_GV:
+ p_gv = gv + (bos + ((T-1) if REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+
+ mask_k = (i_k * BK + tl.arange(0, BK)) < K
+ mask_v = (i_v * BV + tl.arange(0, BV)) < V
+ mask_h = mask_k[None, :] & mask_v[:, None]
+ b_h = tl.zeros([BV, BK], dtype=tl.float32)
+
+ if USE_INITIAL_STATE:
+ p_h0 = h0 + i_nh * K*V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ b_h += tl.load(p_h0, mask=mask_h, other=0).to(tl.float32)
+
+ for _ in range(0, T):
+ b_q = tl.load(p_q, mask=mask_k, other=0).to(tl.float32) * scale
+ b_k = tl.load(p_k, mask=mask_k, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_v, other=0).to(tl.float32)
+ if USE_GK:
+ b_gk = tl.load(p_gk, mask=mask_k, other=0).to(tl.float32)
+ b_h = b_h * tl.exp(b_gk[None, :])
+ if USE_GV:
+ b_gv = tl.load(p_gv, mask=mask_v, other=0).to(tl.float32)
+ b_h = b_h * tl.exp(b_gv[:, None])
+ if USE_G:
+ b_g = tl.load(p_g).to(tl.float32)
+ b_h = b_h * tl.exp(b_g)
+ b_h += b_k[None, :] * b_v[:, None]
+ b_o = b_h * b_q[None, :]
+ b_o = tl.sum(b_o, axis=1)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), mask=mask_v)
+ p_q += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * K
+ p_k += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * K
+ p_v += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * V
+ p_o += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * V
+ if USE_GK:
+ p_gk += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * K
+ if USE_GV:
+ p_gv += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * V
+ if USE_G:
+ p_g += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H)
+
+ if STORE_FINAL_STATE:
+ p_ht = ht + i_nh * K*V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), mask=mask_h)
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ ],
+ key=["BK", "BV", "USE_GK", "USE_GV", "USE_G"],
+)
+@triton.heuristics({
+ 'USE_INITIAL_STATE': lambda args: args['h0'] is not None,
+ 'STORE_INITIAL_STATE_GRADIENT': lambda args: args['dh0'] is not None,
+ 'USE_FINAL_STATE_GRADIENT': lambda args: args['dht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+# Similar to Algorithm1 of https://arxiv.org/abs/2006.16236
+@triton.jit
+def fused_recurrent_bwd_kernel(
+ q, # query [B, H, T, K]/[B, T, H, K]
+ k, # key [B, H, T, K]/[B, T, H, K]
+ v, # value [B, H, T, V]/[B, T, H, V]
+ g, # log gate [B, H, T]/[B, T, H] or None
+ gk, # log gate [B, H, T, K]/[B, T, H, K] or None
+ gv, # log gate [B, H, T, V]/[B, T, H, V] or None
+ h0, # initial hidden state [B, H, K, V]
+ do, # gradient wrt output [B, H, T, V]/[B, T, H, V]
+ dq, # gradient wrt query [NV, B, H, T, K]/[NV, B, T, H, K]
+ dk, # gradient wrt key [NV, B, H, T, K]/[NV, B, T, H, K]
+ dv, # gradient wrt value [NK, B, H, T, V]/[NK, B, T, H, V]
+ dht, # gradient wrt final hidden state [B, H, K, V]
+ dh0, # gradient wrt initial hidden state [B, H, K, V]
+ offsets,
+ scale,
+ B: tl.constexpr,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ REVERSE: tl.constexpr, # whether to do autoregressive modeling in the reverse direction
+ USE_G: tl.constexpr, # whether to use g
+ USE_GK: tl.constexpr, # whether to use gk
+ USE_GV: tl.constexpr, # whether to use gv
+ USE_INITIAL_STATE: tl.constexpr, # whether to use initial state
+ STORE_INITIAL_STATE_GRADIENT: tl.constexpr, # whether to store gradient wrt initial state
+ USE_FINAL_STATE_GRADIENT: tl.constexpr, # whether to compute gradient wrt final state
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_k, i_nh = tl.program_id(0).to(tl.int64), tl.program_id(1).to(tl.int64), tl.program_id(2).to(tl.int64)
+ i_n, i_h = i_nh // H, i_nh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int64), tl.load(offsets + i_n + 1).to(tl.int64)
+ all = T
+ T = eos - bos
+ else:
+ bos, eos = i_n * T, i_n * T + T
+ all = B * T
+
+ if HEAD_FIRST:
+ p_k = k + i_nh * T*K + ((T-1) * K if REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ p_v = v + i_nh * T*V + ((T-1) * V if REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ p_do = do + i_nh * T*V + ((T-1) * V if REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ p_dq = dq + (i_v * B*H + i_nh) * T*K + ((T-1) * K if REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ if USE_G:
+ p_g = g + i_nh * T + ((T-1) if REVERSE else 0)
+ if USE_GK:
+ p_gk = gk + i_nh * T*K + ((T-1) * K if REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ if USE_GV:
+ p_gv = gv + i_nh * T*V + ((T-1) * V if REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ else:
+ p_k = k + (bos + ((T-1) if REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_v = v + (bos + ((T-1) if REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_do = do + (bos + ((T-1) if REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_dq = dq + ((i_v * all + bos) + ((T-1) if REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ if USE_G:
+ p_g = g + (bos + ((T-1) if REVERSE else 0)) * H + i_h
+ if USE_GK:
+ p_gk = gk + (bos + ((T-1) if REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ if USE_GV:
+ p_gv = gv + (bos + ((T-1) if REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+
+ mask_k = i_k * BK + tl.arange(0, BK) < K
+ mask_v = i_v * BV + tl.arange(0, BV) < V
+ mask_h = mask_k[:, None] & mask_v[None, :]
+
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h0 = h0 + i_nh * K*V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ b_h += tl.load(p_h0, mask=mask_h, other=0).to(tl.float32)
+
+ for _ in range(0, T):
+ b_k = tl.load(p_k, mask=mask_k, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_v, other=0).to(tl.float32)
+ b_do = tl.load(p_do, mask=mask_v, other=0).to(tl.float32)
+ if USE_G:
+ b_g = tl.load(p_g).to(tl.float32)
+ b_h = b_h * tl.exp(b_g)
+ if USE_GK:
+ b_gk = tl.load(p_gk, mask=mask_k, other=0).to(tl.float32)
+ b_h = b_h * tl.exp(b_gk[:, None])
+ if USE_GV:
+ b_gv = tl.load(p_gv, mask=mask_v, other=0).to(tl.float32)
+ b_h = b_h * tl.exp(b_gv[None, :])
+ b_h += b_k[:, None] * b_v[None, :]
+ b_dq = b_h * b_do[None, :]
+ b_dq = tl.sum(b_dq, axis=1) * scale
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), mask=mask_k)
+
+ p_k += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * K
+ p_v += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * V
+ p_do += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * V
+ p_dq += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * K
+ if USE_G:
+ p_g += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H)
+ if USE_GK:
+ p_gk += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * K
+ if USE_GV:
+ p_gv += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * V
+
+ # sync threads
+ tl.debug_barrier()
+
+ if HEAD_FIRST:
+ p_q = q + i_nh * T*K + ((T - 1) * K if not REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ p_k = k + i_nh * T*K + ((T - 1) * K if not REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ p_v = v + i_nh * T*V + ((T - 1) * V if not REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ p_do = do + i_nh * T*V + ((T - 1) * V if not REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ p_dk = dk + (i_v * B*H + i_nh) * T*K + ((T - 1) * K if not REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ p_dv = dv + (i_k * B*H + i_nh) * T*V + ((T - 1) * V if not REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ if USE_G:
+ p_g = g + i_nh * T + ((T - 1) if not REVERSE else 0)
+ if USE_GK:
+ p_gk = gk + i_nh * T*K + ((T - 1) * K if not REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ if USE_GV:
+ p_gv = gv + i_nh * T*V + ((T - 1) * V if not REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ else:
+ p_q = q + (bos + ((T - 1) if not REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_k = k + (bos + ((T - 1) if not REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_v = v + (bos + ((T - 1) if not REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_do = do + (bos + ((T - 1) if not REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_dk = dk + ((i_v * all + bos) + ((T - 1) if not REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_dv = dv + ((i_k * all + bos) + ((T - 1) if not REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ if USE_G:
+ p_g = g + (bos + ((T - 1) if not REVERSE else 0)) * H + i_h
+ if USE_GK:
+ p_gk = gk + (bos + ((T - 1) if not REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ if USE_GV:
+ p_gv = gv + (bos + ((T - 1) if not REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_FINAL_STATE_GRADIENT:
+ p_dht = dht + i_nh * K*V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ b_dh += tl.load(p_dht, mask=mask_h, other=0).to(tl.float32)
+
+ for _ in range(T):
+ b_q = tl.load(p_q, mask=mask_k, other=0).to(tl.float32) * scale
+ b_k = tl.load(p_k, mask=mask_k, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_v, other=0).to(tl.float32)
+ b_do = tl.load(p_do, mask=mask_v, other=0).to(tl.float32)
+ b_dh += b_q[:, None] * b_do[None, :]
+ b_dk = tl.sum(b_dh * b_v[None, :], axis=1)
+ b_dv = tl.sum(b_dh * b_k[:, None], axis=0)
+ if USE_G:
+ b_g = tl.load(p_g).to(tl.float32)
+ b_dh *= tl.exp(b_g)
+ if USE_GK:
+ b_gk = tl.load(p_gk, mask=mask_k, other=0).to(tl.float32)
+ b_dh *= tl.exp(b_gk)[:, None]
+ if USE_GV:
+ b_gv = tl.load(p_gv, mask=mask_v, other=0).to(tl.float32)
+ b_dh *= tl.exp(b_gv)[None, :]
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), mask=mask_k)
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), mask=mask_v)
+
+ p_q += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * K
+ p_k += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * K
+ p_v += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * V
+ p_do += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * V
+ p_dk += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * K
+ p_dv += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * V
+ if USE_G:
+ p_g += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H)
+ if USE_GK:
+ p_gk += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * K
+ if USE_GV:
+ p_gv += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * V
+
+ if STORE_INITIAL_STATE_GRADIENT:
+ p_dh0 = dh0 + i_nh * K*V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ tl.store(p_dh0, b_dh.to(p_dh0.dtype.element_ty), mask=mask_h)
+
+
+def fused_recurrent_fwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: Optional[torch.Tensor] = None,
+ gk: Optional[torch.Tensor] = None,
+ gv: Optional[torch.Tensor] = None,
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+):
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ N = B if offsets is None else len(offsets) - 1
+ BK, BV = min(K, 64), min(V, 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+
+ h0 = initial_state
+ if output_final_state:
+ ht = q.new_empty(N, H, K, V, dtype=torch.float32)
+ else:
+ ht = None
+ o = q.new_empty(NK, *v.shape, dtype=torch.float32)
+
+ grid = (NV, NK, N * H)
+ fused_recurrent_fwd_kernel[grid](
+ q,
+ k,
+ v,
+ g,
+ gk,
+ gv,
+ o,
+ h0,
+ ht,
+ offsets,
+ scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BK=BK,
+ BV=BV,
+ USE_G=g is not None,
+ USE_GK=gk is not None,
+ USE_GV=gv is not None,
+ REVERSE=reverse,
+ HEAD_FIRST=head_first
+ )
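+    # reduce the partial outputs produced by the NK key-dimension splits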
+ o = o.sum(0)
+ return o, ht
+
+
+def fused_recurrent_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: Optional[torch.Tensor] = None,
+ gk: Optional[torch.Tensor] = None,
+ gv: Optional[torch.Tensor] = None,
+ o: Optional[torch.Tensor] = None,
+ do: Optional[torch.Tensor] = None,
+ dht: Optional[torch.Tensor] = None,
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+):
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ N = B if offsets is None else len(offsets) - 1
+
+ BK, BV = min(K, 64), min(V, 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+
+ dq = q.new_empty(NV, *q.shape, dtype=torch.float32)
+ dk = q.new_empty(NV, *k.shape, dtype=torch.float32)
+ dv = q.new_empty(NK, *v.shape, dtype=torch.float32)
+ h0 = initial_state
+ dh0 = torch.empty_like(initial_state) if initial_state is not None else None
+
+ grid = (NV, NK, N * H)
+ fused_recurrent_bwd_kernel[grid](
+ q,
+ k,
+ v,
+ g,
+ gk,
+ gv,
+ h0,
+ do,
+ dq,
+ dk,
+ dv,
+ dht,
+ dh0,
+ offsets,
+ scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BK=BK,
+ BV=BV,
+ USE_G=g is not None,
+ USE_GK=gk is not None,
+ USE_GV=gv is not None,
+ REVERSE=reverse,
+ HEAD_FIRST=head_first
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dv = dv.sum(0)
+ dg, dgk, dgv = None, None, None
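+    # gate gradients are accumulated via a global cumulative sum running in the opposite temporal direction (note `reverse=not reverse`)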
+ if g is not None:
+ dg = chunk_global_cumsum(
+ (dq * q.float() - dk * k.float()).sum(-1),
+ reverse=not reverse,
+ offsets=offsets,
+ head_first=head_first
+ )
+ if gk is not None:
+ dgk = chunk_global_cumsum(
+ dq * q.float() - dk * k.float(),
+ reverse=not reverse,
+ offsets=offsets,
+ head_first=head_first
+ )
+ if gv is not None:
+ dgv = chunk_global_cumsum(
+ do.float() * o.float() - dv * v.float(),
+ reverse=not reverse,
+ offsets=offsets,
+ head_first=head_first
+ )
+
+ return dq, dk, dv, dg, dgk, dgv, dh0
+
+
+class FusedRecurrentFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(
+ ctx,
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: Optional[torch.Tensor] = None,
+ gk: Optional[torch.Tensor] = None,
+ gv: Optional[torch.Tensor] = None,
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+ ):
+ o, ht = fused_recurrent_fwd(
+ q=q,
+ k=k,
+ v=v,
+ g=g,
+ gk=gk,
+ gv=gv,
+ scale=scale,
+ initial_state=initial_state,
+ output_final_state=output_final_state,
+ reverse=reverse,
+ offsets=offsets,
+ head_first=head_first
+ )
+ ctx.save_for_backward(q, k, v, g, gk, gv, initial_state, o)
+ ctx.scale = scale
+ ctx.reverse = reverse
+ ctx.offsets = offsets
+ ctx.head_first = head_first
+ return o.to(q.dtype), ht
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dht):
+ q, k, v, g, gk, gv, initial_state, o = ctx.saved_tensors
+
+        # gradients of the gates together with a final-state gradient are not supported yet.
+ if dht is not None:
+ if g is not None:
+ assert g.requires_grad is False, "Cannot load final state gradient and use gates at the same time"
+ if gk is not None:
+ assert gk.requires_grad is False, "Cannot load final state gradient and use gates at the same time"
+ if gv is not None:
+ assert gv.requires_grad is False, "Cannot load final state gradient and use gates at the same time"
+ dq, dk, dv, dg, dgk, dgv, dh0 = fused_recurrent_bwd(
+ q=q,
+ k=k,
+ v=v,
+ g=g,
+ gk=gk,
+ gv=gv,
+ o=o,
+ do=do,
+ dht=dht,
+ scale=ctx.scale,
+ initial_state=initial_state,
+ reverse=ctx.reverse,
+ offsets=ctx.offsets,
+ head_first=ctx.head_first
+ )
+ return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), dg, dgk, dgv, None, dh0, None, None, None, None
+
+
+def fused_recurrent(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: Optional[torch.Tensor] = None,
+ gk: Optional[torch.Tensor] = None,
+ gv: Optional[torch.Tensor] = None,
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+):
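+    # thin wrapper: fill in the default 1/sqrt(K) scale and dispatch to the fused recurrent autograd function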
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ return FusedRecurrentFunction.apply(
+ q,
+ k,
+ v,
+ g,
+ gk,
+ gv,
+ scale,
+ initial_state,
+ output_final_state,
+ reverse,
+ offsets,
+ head_first
+ )
diff --git a/fla/ops/delta_rule/README.md b/fla/ops/delta_rule/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..1ab2d485a9552d70238c1f68288c72c62f9e0ef2
--- /dev/null
+++ b/fla/ops/delta_rule/README.md
@@ -0,0 +1,4 @@
+# Delta Rule
+
+The implementation of the delta rule described in https://arxiv.org/abs/2102.11174.
+
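+A minimal usage sketch (mirroring the `chunk_delta_rule` docstring example; a CUDA device and bfloat16 inputs are assumed):
+
+```python
+import torch
+import torch.nn.functional as F
+from fla.ops.delta_rule import chunk_delta_rule
+
+B, T, H, K, V = 4, 2048, 4, 512, 512
+q = torch.randn(B, T, H, K, dtype=torch.bfloat16, device='cuda')
+# following the docstring example, keys are L2-normalized along the feature dimension
+k = F.normalize(torch.randn(B, T, H, K, dtype=torch.bfloat16, device='cuda'), p=2, dim=-1)
+v = torch.randn(B, T, H, V, dtype=torch.bfloat16, device='cuda')
+beta = torch.rand(B, T, H, dtype=torch.bfloat16, device='cuda').sigmoid()
+o, ht = chunk_delta_rule(q, k, v, beta, output_final_state=True, head_first=False)
+```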
diff --git a/fla/ops/delta_rule/__init__.py b/fla/ops/delta_rule/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e0acb6a7d0e4eec9a8dc697615604783b8858d13
--- /dev/null
+++ b/fla/ops/delta_rule/__init__.py
@@ -0,0 +1,11 @@
+# -*- coding: utf-8 -*-
+
+from .chunk import chunk_delta_rule
+from .fused_chunk import fused_chunk_delta_rule
+from .fused_recurrent import fused_recurrent_delta_rule
+
+__all__ = [
+ 'fused_chunk_delta_rule',
+ 'fused_recurrent_delta_rule',
+ 'chunk_delta_rule'
+]
diff --git a/fla/ops/delta_rule/chunk.py b/fla/ops/delta_rule/chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..458bb55ec1cfd8a29ae93fbed124b9806f396e09
--- /dev/null
+++ b/fla/ops/delta_rule/chunk.py
@@ -0,0 +1,1116 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.delta_rule.wy_fast import (bwd_prepare_wy_repr,
+ fwd_prepare_wy_repr, fwd_recompute_w_u)
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ ],
+ key=['BT', 'BK', 'BV'],
+)
+@triton.heuristics({
+ 'USE_INITIAL_STATE': lambda args: args['h0'] is not None,
+ 'STORE_FINAL_STATE': lambda args: args['ht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.jit
+def chunk_delta_rule_fwd_kernel_h(
+ k,
+ v,
+ d,
+ v_new,
+ h,
+ h0,
+ ht,
+ offsets,
+ c_offsets,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NT: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr,
+ STORE_FINAL_STATE: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_v, i_nh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_n, i_h = i_nh // H, i_nh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ boh = tl.load(c_offsets + i_n).to(tl.int32)
+ else:
+ bos, eos = i_n * T, i_n * T + T
+ NT = tl.cdiv(T, BT)
+ boh = i_n * NT
+
+ # [BK, BV]
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h0 = tl.make_block_ptr(h0 + i_nh * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_h = tl.load(p_h0, boundary_check=(0, 1)).to(tl.float32)
+
+ for i_t in range(NT):
+ if HEAD_FIRST:
+ p_h = tl.make_block_ptr(h + (i_nh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_h = tl.make_block_ptr(h + ((boh + i_t) * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1))
+
+ b_hc = tl.zeros([BK, BV], dtype=tl.float32)
+        # keeping all of DK in SRAM imposes a severe memory burden; we alleviate it by iterating over sub-chunks of size BC
+ for i_c in range(tl.cdiv(min(BT, T - i_t * BT), BC)):
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_nh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT + i_c * BC), (BK, BC), (0, 1))
+ p_d = tl.make_block_ptr(d + i_nh * T*K, (T, K), (K, 1), (i_t * BT + i_c * BC, i_k * BK), (BC, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_nh * T*V, (T, V), (V, 1), (i_t * BT + i_c * BC, i_v * BV), (BC, BV), (1, 0))
+ p_v_new = tl.make_block_ptr(v_new+i_nh*T*V, (T, V), (V, 1), (i_t * BT + i_c * BC, i_v * BV), (BC, BV), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k+(bos*H+i_h)*K, (K, T), (1, H*K), (i_k * BK, i_t * BT + i_c * BC), (BK, BC), (0, 1))
+ p_d = tl.make_block_ptr(d+(bos*H+i_h)*K, (T, K), (H*K, 1), (i_t * BT + i_c * BC, i_k * BK), (BC, BK), (1, 0))
+ p_v = tl.make_block_ptr(v+(bos*H+i_h)*V, (T, V), (H*V, 1), (i_t * BT + i_c * BC, i_v * BV), (BC, BV), (1, 0))
+ p_v_new = tl.make_block_ptr(v_new+(bos*H+i_h)*V, (T, V), (H*V, 1), (i_t*BT+i_c*BC, i_v * BV), (BC, BV), (1, 0))
+ # [BK, BC]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BC, BK]
+ b_d = tl.load(p_d, boundary_check=(0, 1))
+ # [BC, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
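+            # subtract the running state's contribution to obtain the updated values, i.e. v_new = v - d @ h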
+ b_v -= tl.dot(b_d, b_h.to(b_k.dtype))
+ # [BK, BV]
+ tl.store(p_v_new, b_v.to(p_v_new.dtype.element_ty), boundary_check=(0, 1))
+ b_hc += tl.dot(b_k, b_v.to(b_k.dtype), allow_tf32=False)
+ b_h += b_hc
+
+ if STORE_FINAL_STATE:
+ p_ht = tl.make_block_ptr(ht + i_nh * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ ],
+ key=['BT', 'BK', 'BV'],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_delta_rule_fwd_kernel_o(
+ q,
+ k,
+ v,
+ h,
+ o,
+ offsets,
+ indices,
+ scale,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_tg = i_t
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ else:
+ NT = tl.cdiv(T, BT)
+ i_tg = i_b * NT + i_t
+ bos, eos = i_b * T, i_b * T + T
+
+ o_i = tl.arange(0, BT)
+ m_s = o_i[:, None] >= o_i[None, :]
+
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ b_s = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_h = tl.make_block_ptr(h + (i_bh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_h = tl.make_block_ptr(h + (i_tg * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BK, BV]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ b_o += tl.dot(b_q, b_h, allow_tf32=False)
+ b_s += tl.dot(b_q, b_k, allow_tf32=False)
+
+ b_s = tl.where(m_s, b_s, 0)
+ if HEAD_FIRST:
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ else:
+ p_v = tl.make_block_ptr(v + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_o = (b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False))
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ ],
+ key=['BT', 'BK', 'BV'],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_delta_rule_fwd_kernel_prepare_dv(
+ q,
+ k,
+ do,
+ dv,
+ offsets,
+ indices,
+ scale,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_t, i_bh = tl.program_id(0), tl.program_id(1)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ b_A = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos * H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_k.dtype)
+ b_A += tl.dot(b_k, b_q, allow_tf32=False)
+
+ b_A = tl.where(tl.arange(0, BT)[:, None] <= tl.arange(0, BT)[None, :], b_A, 0).to(do.dtype.element_ty)
+ for i_v in range(tl.cdiv(V, BV)):
+ if HEAD_FIRST:
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ else:
+ p_do = tl.make_block_ptr(do + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_dv = tl.dot(b_A, b_do, allow_tf32=False)
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4)
+ ],
+ key=['BT', 'BK', 'BV'],
+)
+@triton.heuristics({
+ 'USE_FINAL_STATE_GRADIENT': lambda args: args['dht'] is not None,
+ 'USE_INITIAL_STATE': lambda args: args['dh0'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.jit
+def chunk_delta_rule_bwd_kernel_dhu(
+ q,
+ k,
+ d,
+ dht,
+ dh0,
+ do,
+ dh,
+ dv,
+ dv2,
+ offsets,
+ c_offsets,
+ scale,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_FINAL_STATE_GRADIENT: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_v, i_nh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_n, i_h = i_nh // H, i_nh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ boh = tl.load(c_offsets + i_n).to(tl.int32)
+ else:
+ bos, eos = i_n * T, i_n * T + T
+ NT = tl.cdiv(T, BT)
+ boh = i_n * NT
+
+ # [BK, BV]
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_FINAL_STATE_GRADIENT:
+ p_dht = tl.make_block_ptr(dht + i_nh * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_dh += tl.load(p_dht, boundary_check=(0, 1))
+
+ for i_t in range(NT - 1, -1, -1):
+ if HEAD_FIRST:
+ p_dh = tl.make_block_ptr(dh + (i_nh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_dh = tl.make_block_ptr(dh + ((boh+i_t) * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1))
+ b_dh_tmp = tl.zeros([BK, BV], dtype=tl.float32)
+ for i_c in range(tl.cdiv(BT, BC) - 1, -1, -1):
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_nh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT + i_c * BC), (BK, BC), (0, 1))
+ p_k = tl.make_block_ptr(k + i_nh * T*K, (T, K), (K, 1), (i_t * BT + i_c * BC, i_k * BK), (BC, BK), (1, 0))
+ p_d = tl.make_block_ptr(d + i_nh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT + i_c * BC), (BK, BC), (0, 1))
+ p_dv = tl.make_block_ptr(dv + i_nh * T*V, (T, V), (V, 1), (i_t * BT + i_c * BC, i_v * BV), (BC, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_nh * T*V, (T, V), (V, 1), (i_t * BT + i_c * BC, i_v * BV), (BC, BV), (1, 0))
+ p_dv2 = tl.make_block_ptr(dv2 + i_nh * T*V, (T, V), (V, 1), (i_t * BT + i_c * BC, i_v * BV), (BC, BV), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q+(bos*H+i_h)*K, (K, T), (1, H*K), (i_k * BK, i_t * BT + i_c * BC), (BK, BC), (0, 1))
+ p_k = tl.make_block_ptr(k+(bos*H+i_h)*K, (T, K), (H*K, 1), (i_t * BT + i_c * BC, i_k * BK), (BC, BK), (1, 0))
+ p_d = tl.make_block_ptr(d+(bos*H+i_h)*K, (K, T), (1, H*K), (i_k * BK, i_t * BT + i_c * BC), (BK, BC), (0, 1))
+ p_dv = tl.make_block_ptr(dv+(bos*H+i_h)*V, (T, V), (H*V, 1), (i_t*BT + i_c * BC, i_v * BV), (BC, BV), (1, 0))
+ p_do = tl.make_block_ptr(do+(bos*H+i_h)*V, (T, V), (H*V, 1), (i_t*BT + i_c * BC, i_v * BV), (BC, BV), (1, 0))
+ p_dv2 = tl.make_block_ptr(dv2+(bos*H+i_h)*V, (T, V), (H*V, 1), (i_t*BT + i_c * BC, i_v * BV), (BC, BV), (1, 0))
+ # [BK, BT]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BT, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_d = tl.load(p_d, boundary_check=(0, 1))
+ # [BT, V]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+
+ b_dv = tl.load(p_dv, boundary_check=(0, 1))
+ b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False)
+
+ tl.store(p_dv2, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+ # [BK, BV]
+ b_dh_tmp += tl.dot(b_q, b_do.to(b_q.dtype), allow_tf32=False)
+ b_dh_tmp -= tl.dot(b_d, b_dv.to(b_q.dtype), allow_tf32=False)
+ b_dh += b_dh_tmp
+
+ if USE_INITIAL_STATE:
+ p_dh0 = tl.make_block_ptr(dh0 + i_nh * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_dh0, b_dh.to(p_dh0.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4)
+ ],
+ key=['BT', 'BK', 'BV'],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_delta_rule_bwd_kernel_dqkw(
+ q,
+ k,
+ v,
+ h,
+ do,
+ dh,
+ dq,
+ dk,
+ dv,
+ dw,
+ offsets,
+ indices,
+ scale,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NT: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_tg = i_t
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ else:
+ NT = tl.cdiv(T, BT)
+ i_tg = i_b * NT + i_t
+ bos, eos = i_b * T, i_b * T + T
+
+ o_i = tl.arange(0, BT)
+
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos * H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dw = tl.zeros([BT, BK], dtype=tl.float32)
+ b_ds = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_v in range(tl.cdiv(V, BV)):
+ if HEAD_FIRST:
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + i_bh * NT*K*V + i_t * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ p_dh = tl.make_block_ptr(dh + i_bh * NT*K*V + i_t * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ else:
+ p_v = tl.make_block_ptr(v + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + (i_tg * H + i_h) * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ p_dh = tl.make_block_ptr(dh + (i_tg * H + i_h) * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [BV, BK]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ # [BK, BV]
+ b_dh = tl.load(p_dh, boundary_check=(0, 1))
+ # [BT, BT]
+ b_ds += tl.dot(b_do, tl.trans(b_v), allow_tf32=False)
+ # [BT, BK]
+ b_dq += tl.dot(b_do, b_h, allow_tf32=False)
+ b_dk += tl.dot(b_v, b_dh, allow_tf32=False)
+
+ b_dv = tl.load(p_dv, boundary_check=(0, 1))
+ b_dw += tl.dot(b_dv.to(b_v.dtype), b_h.to(b_v.dtype), allow_tf32=False)
+
+ # [BK, BT]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_ds = tl.where(o_i[:, None] >= o_i[None, :], b_ds, 0).to(b_q.dtype)
+ b_dq += tl.dot(b_ds, b_k, allow_tf32=False)
+ b_dq *= scale
+ b_dk += tl.trans(tl.dot(b_q, b_ds, allow_tf32=False))
+
+ if HEAD_FIRST:
+ p_dq = tl.make_block_ptr(dq + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dw = tl.make_block_ptr(dw + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_dq = tl.make_block_ptr(dq + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dw = tl.make_block_ptr(dw + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dw, -b_dw.to(p_dw.dtype.element_ty), boundary_check=(0, 1))
+
+
+def chunk_delta_rule_fwd_prepare_dv(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ do: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> torch.Tensor:
+ if head_first:
+ B, H, T, K, V = *k.shape, do.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, do.shape[-1]
+ BT = min(chunk_size, max(triton.next_power_of_2(T), 16))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(triton.next_power_of_2(K), 64)
+ BV = min(triton.next_power_of_2(V), 64)
+
+ dv = torch.empty_like(do)
+ chunk_delta_rule_fwd_kernel_prepare_dv[(NT, B * H)](
+ q,
+ k,
+ do,
+ dv,
+ offsets,
+ indices,
+ scale,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ return dv
+
+
+def chunk_delta_rule_fwd_h(
+ k: torch.Tensor,
+ w: torch.Tensor,
+ u: torch.Tensor,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ c_offsets: Optional[torch.Tensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, u.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, u.shape[-1]
+ BT = min(chunk_size, max(triton.next_power_of_2(T), 16))
+ # N: the actual number of sequences in the batch with either equal or variable lengths
+ if offsets is None:
+ N, NT, c_offsets = B, triton.cdiv(T, BT), None
+ else:
+ N = len(offsets) - 1
+ if c_offsets is None:
+ c_offsets = torch.cat([offsets.new_tensor([0]), triton.cdiv(offsets[1:] - offsets[:-1], BT)]).cumsum(-1)
+ NT = c_offsets[-1]
+ BK = triton.next_power_of_2(K)
+ assert BK <= 256, "current kernel does not support head dimension larger than 256."
+    # H100 can use larger block sizes
+ if torch.cuda.get_device_capability()[0] >= 9:
+ BV = 64
+ BC = 64
+ # A100
+ elif torch.cuda.get_device_capability() == (8, 0):
+ BV = 32
+ BC = 64
+ else:
+ BV = 32
+ BC = 64 if K <= 128 else 32
+ BC = min(BT, BC)
+ NK = triton.cdiv(K, BK)
+ NV = triton.cdiv(V, BV)
+ assert NK == 1, 'NK > 1 is not supported because it involves time-consuming synchronization'
+
+ if head_first:
+ h = k.new_empty(B, H, NT, K, V)
+ else:
+ h = k.new_empty(B, NT, H, K, V)
+ final_state = k.new_empty(N, H, K, V, dtype=torch.float32) if output_final_state else None
+
+ v_new = torch.empty_like(u)
+ grid = (NK, NV, N * H)
+ chunk_delta_rule_fwd_kernel_h[grid](
+ k=k,
+ v=u,
+ d=w,
+ v_new=v_new,
+ h=h,
+ h0=initial_state,
+ ht=final_state,
+ offsets=offsets,
+ c_offsets=c_offsets,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BC=BC,
+ BK=BK,
+ BV=BV,
+ NT=NT,
+ HEAD_FIRST=head_first
+ )
+ return h, v_new, final_state
+
+
+def chunk_delta_rule_bwd_dhu(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ w: torch.Tensor,
+ h0: torch.Tensor,
+ dht: Optional[torch.Tensor],
+ do: torch.Tensor,
+ dv: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ c_offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *q.shape, do.shape[-1]
+ else:
+ B, T, H, K, V = *q.shape, do.shape[-1]
+ BT = min(chunk_size, max(triton.next_power_of_2(T), 16))
+ # N: the actual number of sequences in the batch with either equal or variable lengths
+ if offsets is None:
+ N, NT, c_offsets = B, triton.cdiv(T, BT), None
+ else:
+ N = len(offsets) - 1
+ if c_offsets is None:
+ c_offsets = torch.cat([offsets.new_tensor([0]), triton.cdiv(offsets[1:] - offsets[:-1], BT)]).cumsum(-1)
+ NT = c_offsets[-1]
+
+ BK = triton.next_power_of_2(K)
+    assert BK <= 256, "current kernel does not support head dimension larger than 256."
+ # H100
+ if torch.cuda.get_device_capability()[0] >= 9:
+ BV = 64
+ BC = 64
+ # A100
+ elif torch.cuda.get_device_capability() == (8, 0):
+ BV = 32
+ BC = 64 if K <= 128 else 32
+ else:
+ BV = 32
+ BC = 64 if K <= 128 else 32
+ BC = min(BT, BC)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ assert NK == 1, 'NK > 1 is not supported because it involves time-consuming synchronization'
+
+ if head_first:
+ dh = q.new_empty(B, H, NT, K, V)
+ else:
+ dh = q.new_empty(B, NT, H, K, V)
+ dh0 = torch.empty_like(h0, dtype=torch.float32) if h0 is not None else None
+ dv2 = torch.empty_like(dv)
+
+ grid = (NK, NV, N * H)
+ chunk_delta_rule_bwd_kernel_dhu[grid](
+ q=q,
+ k=k,
+ d=w,
+ dht=dht,
+ dh0=dh0,
+ do=do,
+ dh=dh,
+ dv=dv,
+ dv2=dv2,
+ offsets=offsets,
+ c_offsets=c_offsets,
+ scale=scale,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BC=BC,
+ BK=BK,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ return dh, dh0, dv2
+
+
+def chunk_delta_rule_fwd_o(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v_new: torch.Tensor,
+ h: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> torch.Tensor:
+ if head_first:
+ B, H, T, K, V = *q.shape, v_new.shape[-1]
+ else:
+ B, T, H, K, V = *q.shape, v_new.shape[-1]
+ BT = min(chunk_size, max(triton.next_power_of_2(T), 16))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(triton.next_power_of_2(K), 64)
+ BV = min(triton.next_power_of_2(V), 64)
+ NV = triton.cdiv(V, BV)
+
+ o = torch.empty_like(v_new)
+ grid = (NV, NT, B * H)
+ chunk_delta_rule_fwd_kernel_o[grid](
+ q=q,
+ k=k,
+ v=v_new,
+ h=h,
+ o=o,
+ offsets=offsets,
+ indices=indices,
+ scale=scale,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ return o
+
+
+def chunk_delta_rule_bwd_dqkw(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v_new: torch.Tensor,
+ w: torch.Tensor,
+ h: torch.Tensor,
+ du: torch.Tensor,
+ do: torch.Tensor,
+ dh: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *q.shape, v_new.shape[-1]
+ else:
+ B, T, H, K, V = *q.shape, v_new.shape[-1]
+ BT = min(chunk_size, max(triton.next_power_of_2(T), 16))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(triton.next_power_of_2(K), 64)
+ BV = min(triton.next_power_of_2(V), 64)
+ NK = triton.cdiv(K, BK)
+
+ dq = torch.empty_like(q)
+ dk = torch.empty_like(k)
+ dw = torch.empty_like(w)
+ grid = (NK, NT, B * H)
+ chunk_delta_rule_bwd_kernel_dqkw[grid](
+ q=q,
+ k=k,
+ v=v_new,
+ h=h,
+ do=do,
+ dh=dh,
+ dq=dq,
+ dk=dk,
+ dv=du,
+ dw=dw,
+ offsets=offsets,
+ indices=indices,
+ scale=scale,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ NT=NT,
+ HEAD_FIRST=head_first
+ )
+ return dq, dk, dw
+
+
+def chunk_delta_rule_fwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ scale: float,
+ initial_state: torch.Tensor,
+ output_final_state: bool,
+ checkpoint_level: int = 1,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+):
+ T = q.shape[2] if head_first else q.shape[1]
+ BT = min(chunk_size, max(triton.next_power_of_2(T), 16))
+    # obtain the WY representation; `u` serves as the new value tensor.
+ w, u, A = fwd_prepare_wy_repr(
+ k=k,
+ v=v,
+ beta=beta,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=BT
+ )
+
+ h, v_new, final_state = chunk_delta_rule_fwd_h(
+ k=k,
+ w=w,
+ u=u,
+ initial_state=initial_state,
+ output_final_state=output_final_state,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=BT
+ )
+
+ # obtain output
+ o = chunk_delta_rule_fwd_o(
+ q=q,
+ k=k,
+ v_new=v_new,
+ h=h,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=BT
+ )
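+    # with checkpoint_level == 1, the hidden states and v_new are dropped here and recomputed during the backward pass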
+ if checkpoint_level == 1:
+ h, v_new = None, None
+ return o, A, h, v_new, final_state
+
+
+def chunk_delta_rule_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ A: torch.Tensor,
+ h: torch.Tensor,
+ v_new: torch.Tensor,
+ scale: float,
+ initial_state: torch.Tensor,
+ do: torch.Tensor,
+ dht: torch.Tensor,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+):
+ T = q.shape[2] if head_first else q.shape[1]
+ BT = min(chunk_size, max(triton.next_power_of_2(T), 16))
+ w, u = fwd_recompute_w_u(
+ k=k,
+ v=v,
+ beta=beta,
+ A=A,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=BT
+ )
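+    # recompute the hidden states and v_new if they were not saved in the forward pass (checkpoint_level == 1)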
+ if h is None:
+ h, v_new, _ = chunk_delta_rule_fwd_h(
+ k=k,
+ w=w,
+ u=u,
+ initial_state=initial_state,
+ output_final_state=False,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ dv = chunk_delta_rule_fwd_prepare_dv(
+ q=q,
+ k=k,
+ do=do,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ dh, dh0, dv = chunk_delta_rule_bwd_dhu(
+ q=q,
+ k=k,
+ w=w,
+ h0=initial_state,
+ dht=dht,
+ do=do,
+ dv=dv,
+ scale=scale,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ dq, dk, dw = chunk_delta_rule_bwd_dqkw(
+ q=q,
+ k=k,
+ v_new=v_new,
+ w=w,
+ h=h,
+ du=dv,
+ do=do,
+ dh=dh,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ dk2, dv, db = bwd_prepare_wy_repr(
+ k=k,
+ v=v,
+ beta=beta,
+ A=A,
+ dw=dw,
+ du=dv,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ dk.add_(dk2)
+ return dq, dk, dv, db, dh0
+
+
+class ChunkDeltaRuleFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(
+ ctx,
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ scale: float,
+ initial_state: torch.Tensor,
+ output_final_state: bool,
+ checkpoint_level: int = 1,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+ ):
+ T = q.shape[2] if head_first else q.shape[1]
+ chunk_size = min(64, max(triton.next_power_of_2(T), 16))
+
+ # 2-d indices denoting the offsets of chunks in each sequence
+ # for example, if the passed `offsets` is [0, 100, 356] and `chunk_size` is 64,
+ # then there are 2 and 4 chunks in the 1st and 2nd sequences respectively, and `indices` will be
+ # [[0, 0], [0, 1], [1, 0], [1, 1], [1, 2], [1, 3]]
+ indices = None
+ if offsets is not None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], chunk_size).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+
+ o, A, h, v_new, final_state = chunk_delta_rule_fwd(
+ q=q,
+ k=k,
+ v=v,
+ beta=beta,
+ scale=scale,
+ initial_state=initial_state,
+ output_final_state=output_final_state,
+ checkpoint_level=checkpoint_level,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ ctx.save_for_backward(q, k, v, beta, A, h, v_new, initial_state)
+ ctx.chunk_size = chunk_size
+ ctx.scale = scale
+ ctx.offsets = offsets
+ ctx.indices = indices
+ ctx.head_first = head_first
+ return o.to(q.dtype), final_state
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(
+ ctx,
+ do: torch.Tensor,
+ dht: torch.Tensor
+ ):
+ q, k, v, beta, A, h, v_new, initial_state = ctx.saved_tensors
+ dq, dk, dv, db, dh0 = chunk_delta_rule_bwd(
+ q=q,
+ k=k,
+ v=v,
+ beta=beta,
+ A=A,
+ h=h,
+ v_new=v_new,
+ scale=ctx.scale,
+ initial_state=initial_state,
+ do=do,
+ dht=dht,
+ offsets=ctx.offsets,
+ indices=ctx.indices,
+ head_first=ctx.head_first,
+ chunk_size=ctx.chunk_size
+ )
+ return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), db.to(beta.dtype), None, dh0, None, None, None, None, None
+
+
+def chunk_delta_rule(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ scale: float = None,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ checkpoint_level: int = 1,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+):
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ beta (torch.Tensor):
+ betas of shape `[B, H, T]` if `head_first=True` else `[B, T, H]`.
+        scale (Optional[float]):
+            Scale factor for attention scores.
+            If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[N, H, K, V]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[N, H, K, V]`. Default: `False`.
+ checkpoint_level (Optional[int]):
+            Checkpointing level; higher values save more memory at the cost of more recomputation during the backward pass.
+ Default: `1`:
+ - Level `0`: no memory saved, no recomputation.
+ - Level `1`: recompute the forward hidden states during backward.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format, which is not supported for variable-length inputs.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (torch.Tensor):
+ Final state of shape `[N, H, K, V]` if `output_final_state=True` else `None`.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+ >>> from fla.ops.delta_rule import chunk_delta_rule
+ # inputs with equal lengths
+ >>> B, T, H, K, V = 4, 2048, 4, 512, 512
+ >>> q = torch.randn(B, T, H, K, dtype=torch.bfloat16, device='cuda')
+ >>> k = F.normalize(torch.randn(B, T, H, K, dtype=torch.bfloat16, device='cuda'), p=2, dim=-1)
+ >>> v = torch.randn(B, T, H, V, dtype=torch.bfloat16, device='cuda')
+ >>> beta = torch.rand(B, T, H, dtype=torch.bfloat16, device='cuda').sigmoid()
+ >>> h0 = torch.randn(B, H, K, V, dtype=torch.bfloat16, device='cuda')
+ >>> o, ht = chunk_delta_rule(q, k, v, beta,
+ initial_state=h0,
+ output_final_state=True,
+ head_first=False)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+ >>> q, k, v, beta = map(lambda x: rearrange(x, 'b t ... -> 1 (b t) ...'), (q, k, v, beta))
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, ht_var = chunk_delta_rule(q, k, v, beta,
+ initial_state=h0,
+ output_final_state=True,
+ offsets=offsets,
+ head_first=False)
+ """
+ assert q.dtype == k.dtype == v.dtype
+ assert q.dtype != torch.float32, "ChunkDeltaRuleFunction does not support float32. Please use bfloat16."
+    assert len(beta.shape) == 3, "beta must be of shape [B, H, T] if head_first=True else [B, T, H]."
+
+ if offsets is not None:
+ if q.shape[0] != 1:
+ raise ValueError(f"The batch size is expected to be 1 rather than {q.shape[0]} when using `offsets`."
+ f"Please flatten variable-length inputs before processing.")
+ if head_first:
+ raise RuntimeError("Sequences with variable lengths are not supported for head-first mode")
+ if initial_state is not None and initial_state.shape[0] != len(offsets) - 1:
+ raise ValueError(f"The number of initial states is expected to be equal to the number of input sequences, "
+ f"i.e., {len(offsets) - 1} rather than {initial_state.shape[0]}.")
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ o, final_state = ChunkDeltaRuleFunction.apply(
+ q,
+ k,
+ v,
+ beta,
+ scale,
+ initial_state,
+ output_final_state,
+ checkpoint_level,
+ offsets,
+ head_first
+ )
+ return o, final_state
diff --git a/fla/ops/delta_rule/fused_chunk.py b/fla/ops/delta_rule/fused_chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..c74b013b4abeffb0fe558600e5eb7fb2e23d86ce
--- /dev/null
+++ b/fla/ops/delta_rule/fused_chunk.py
@@ -0,0 +1,409 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.delta_rule.wy_fast import (bwd_prepare_wy_repr,
+ fwd_prepare_wy_repr, fwd_recompute_w)
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4)
+ ],
+ key=["BT", "BK"],
+)
+@triton.jit
+def fused_chunk_delta_rule_fwd_kernel(
+ # B: batch_size, H: n_heads, T: seq_len, D: d_head
+ q, # query [B, H, L, D_head_K]
+ k, # key [B, H, L, D_head_K]
+ v, # value [B, H, L, D_head_V]
+ v_new,
+ d, # decay [B, H, L, D_head_K]
+ o, # output [B, H, L, D_head_V]
+ initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V]
+ final_state, # final state of the chunk [B, H, D_head_K, D_head_V]
+ s_k_h, # stride size: L * D_head_K
+ s_k_t, # stride size: D_head_K
+ s_k_d, # stride size: 1
+ s_v_h, # stride size: L * D_head_V
+ s_v_t, # stride size: D_head_V
+ s_v_d, # stride size: 1
+ B, # batch size
+ H, # n_heads
+ T, # seq_len
+ scale, # D_head_K ** -0.5
+ BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ DK: tl.constexpr, # D_head_K
+ DV: tl.constexpr, # D_head_V
+ USE_INITIAL_STATE: tl.constexpr,
+ STORE_FINAL_STATE: tl.constexpr,
+ CHECK: tl.constexpr
+):
+ # indices
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ o_i = tl.arange(0, BT)
+
+ # [BT, BT]
+ m_s = o_i[:, None] >= o_i[None, :]
+ # [BK, BV]
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+
+ # make block pointers
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, DK), (s_k_t, s_k_d), (0, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (DK, T), (s_k_d, s_k_t), (i_k * BK, 0), (BK, BT), (0, 1))
+ p_d = tl.make_block_ptr(d + i_bh * s_k_h, (T, DK), (s_k_t, s_k_d), (0, i_k * BK), (BT, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, DV), (s_v_t, s_v_d), (0, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + (i_bh+i_k*B*H) * s_v_h, (T, DV), (s_v_t, s_v_d), (0, i_v * BV), (BT, BV), (1, 0))
+ p_v_new = tl.make_block_ptr(v_new + i_bh * s_v_h, (T, DV), (s_v_t, s_v_d), (0, i_v * BV), (BT, BV), (1, 0))
+
+ if USE_INITIAL_STATE:
+ p_h = tl.make_block_ptr(initial_state + i_bh * DK * DV, (DK, DV), (DV, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_h = tl.load(p_h, boundary_check=(0, 1)).to(tl.float32)
+
+ for i in range(0, tl.cdiv(T, BT)):
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_d = tl.load(p_d, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_k.dtype)
+
+ # [BT, BT]
+ b_s = tl.dot(b_q, b_k, allow_tf32=False)
+ b_s = tl.where(m_s, b_s, 0)
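+        # delta-rule correction: remove the state's contribution from the values, i.e. v_new = v - d @ h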
+ # [BT, BV]
+ b_v_prime = tl.dot(b_d, b_h.to(b_q.dtype), allow_tf32=False)
+ b_v = b_v - b_v_prime
+ tl.store(p_v_new, b_v.to(p_v.dtype.element_ty), boundary_check=(0, 1))
+
+ b_o = tl.dot(b_s.to(b_q.dtype), b_v.to(b_q.dtype), allow_tf32=False)
+ if CHECK and i == 0:
+ b_o += tl.dot(b_q, b_h.to(b_q.dtype), allow_tf32=False)
+ b_h = b_h + tl.dot(b_k, b_v.to(b_k.dtype), allow_tf32=False)
+ else:
+ b_o += tl.dot(b_q, b_h.to(b_q.dtype), allow_tf32=False)
+ b_h = b_h + tl.dot(b_k, b_v.to(b_k.dtype), allow_tf32=False)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+ p_q = tl.advance(p_q, (BT, 0))
+ p_k = tl.advance(p_k, (0, BT))
+ p_v = tl.advance(p_v, (BT, 0))
+ p_v_new = tl.advance(p_v_new, (BT, 0))
+ p_o = tl.advance(p_o, (BT, 0))
+ p_d = tl.advance(p_d, (BT, 0))
+
+ if STORE_FINAL_STATE:
+ p_final = tl.make_block_ptr(final_state + i_bh * DK * DV, (DK, DV), (DV, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_final, b_h.to(p_final.dtype.element_ty), boundary_check=(0, 1))
+
+
+# Similar to Algorithm 1 of https://arxiv.org/abs/2006.16236
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8)
+ ],
+ key=["BT", "BK", "BV"],
+)
+@triton.jit
+def fused_chunk_delta_rule_bwd_kernel(
+ # B: batch_size, H: n_heads, T: seq_len, D: d_head
+    # NV: number of splits in the V dimension. NK: number of splits in the K dimension
+ q, # query [B, H, L, D_head_K]
+    k, # key [B, H, L, D_head_K]
+ v, # value [B, H, L, D_head_V]
+ d, # decay [B, H, L, D_head_K]
+ dht, # gradient of final state [B, H, D_head_K, D_head_V]
+ dh0, # gradient of initial state [B, H, D_head_K, D_head_V]
+ do, # gradient of output [B, H, L, D_head_V]
+ dq, # gradient of query [NV, B, H, L, D_head_K]
+ dk, # gradient of key [NV, B, H, L, D_head_K]
+ dv, # gradient of value [NK, B, H, L, D_head_V]
+ dd, # gradient of decay [NV, B, H, L, D_head_K]
+ initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V]
+ s_k_h, # stride size: L * D_head_K
+ s_k_t, # stride size: D_head_K
+ s_k_d, # stride size: 1
+ s_v_h, # stride size: L * D_head_V
+ s_v_t, # stride size: D_head_V
+ s_v_d, # stride size: 1
+ B, # batch_size
+ H, # n_heads
+ T, # seq_len
+ scale, # D_head_K ** -0.5 by default
+ BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ DK: tl.constexpr, # D_head_K
+ DV: tl.constexpr, # D_head_V
+ USE_INITIAL_STATE: tl.constexpr, # whether to use initial state
+ USE_DHT: tl.constexpr, # whether to use final state gradient
+ USE_DHO: tl.constexpr, # whether to use initial state gradient
+ CHECK: tl.constexpr
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ o_i = tl.arange(0, BT)
+
+ # first reverse
+ # [BK, BV]
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_DHT:
+ p_dht = tl.make_block_ptr(dht + i_bh * DK * DV, (DK, DV), (DV, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_dh += tl.load(p_dht, boundary_check=(0, 1)).to(tl.float32)
+ m_s = o_i[:, None] <= o_i[None, :]
+
+ for i in range(tl.cdiv(T, BT) - 1, -1, -1):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (DK, T), (s_k_d, s_k_t), (i_k * BK, i * BT), (BK, BT), (0, 1))
+ p_d = tl.make_block_ptr(d + i_bh * s_k_h, (DK, T), (s_k_d, s_k_t), (i_k * BK, i * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, DK), (s_k_t, s_k_d), (i * BT, i_k * BK), (BT, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, DV), (s_v_t, s_v_d), (i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, DV), (s_v_t, s_v_d), (i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (i_bh+i_v*B*H) * s_k_h, (T, DK), (s_k_t, s_k_d), (i*BT, i_k*BK), (BT, BK), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_bh+i_k*B*H) * s_v_h, (T, DV), (s_v_t, s_v_d), (i*BT, i_v*BV), (BT, BV), (1, 0))
+ # [DK, BT]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BT, DK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, DV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+
+ # [BT, BT]
+ b_ds = tl.dot(b_v, tl.trans(b_do), allow_tf32=False)
+ b_ds = tl.where(m_s, b_ds, 0).to(b_q.dtype)
+ # [BT, BT]
+ b_s = tl.dot(b_k, b_q, allow_tf32=False)
+ b_s = tl.where(m_s, b_s, 0).to(b_q.dtype)
+ # [BT, DK]
+ b_dk = tl.dot(b_ds, tl.trans(b_q), allow_tf32=False)
+ # [BT, DV]
+ b_dv = tl.dot(b_s, b_do, allow_tf32=False)
+ b_d = tl.load(p_d, boundary_check=(0, 1))
+ b_dk += tl.dot(b_v, tl.trans(b_dh).to(b_v.dtype), allow_tf32=False)
+ b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False)
+ b_dh += tl.dot(b_q, b_do, allow_tf32=False)
+ b_dh -= tl.dot(b_d, b_dv.to(b_d.dtype), allow_tf32=False)
+
+ tl.store(p_dk, (b_dk).to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+
+ if USE_DHO:
+ p_dh0 = tl.make_block_ptr(dh0 + i_bh * DK * DV, (DK, DV), (DV, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_dh0, b_dh.to(p_dh0.dtype.element_ty), boundary_check=(0, 1))
+
+ # sync threads
+ b_h = None
+ tl.debug_barrier()
+ m_s = o_i[:, None] >= o_i[None, :]
+ b_h = tl.zeros([BV, BK], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h = tl.make_block_ptr(initial_state + i_bh * DK * DV, (DV, DK), (1, DV), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ b_h += tl.load(p_h, boundary_check=(0, 1)).to(tl.float32)
+ NT = tl.cdiv(T, BT)
+ for i in range(0, NT):
+ p_dv = tl.make_block_ptr(dv + i_bh * s_v_h, (T, DV), (s_v_t, s_v_d), (i * BT, i_v * BV), (BT, BV), (1, 0))
+ b_dv = tl.load(p_dv, boundary_check=(0, 1))
+ b_dd = tl.dot(b_dv.to(k.dtype.element_ty), b_h.to(k.dtype.element_ty), allow_tf32=False)
+ p_dd = tl.make_block_ptr(dd + (i_bh + i_v*B*H) * s_k_h, (T, DK), (s_k_t, s_k_d),
+ (i * BT, i_k * BK), (BT, BK), (1, 0))
+ tl.store(p_dd, -b_dd.to(p_dd.dtype.element_ty), boundary_check=(0, 1))
+
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, DK), (s_k_t, s_k_d), (i * BT, i_k * BK), (BT, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (DV, T), (s_v_d, s_v_t), (i_v * BV, i * BT), (BV, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, DV), (s_v_t, s_v_d), (i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dq = tl.make_block_ptr(dq + (i_bh + i_v*B*H) * s_k_h, (T, DK), (s_k_t, s_k_d), (i*BT, i_k*BK), (BT, BK), (1, 0))
+ # [BT, DK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [DV, BT]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, DV]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+
+ # [BT, BT]
+ b_ds = tl.dot(b_do, b_v, allow_tf32=False)
+ b_ds = tl.where(m_s, b_ds, 0)
+ # [BT, DK]
+ b_dq = tl.dot(b_ds.to(b_k.dtype), b_k, allow_tf32=False)
+ # [DV, DK]
+ if CHECK and i == 0:
+ b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False)
+ b_h = b_h + tl.dot(b_v, b_k, allow_tf32=False)
+ else:
+ b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False)
+ b_h = b_h + tl.dot(b_v, b_k, allow_tf32=False)
+ b_dq *= scale
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+
+
+def fused_chunk_delta_rule_fwd(q, k, v, d, BT, scale, initial_state, output_final_state):
+ batch_size, n_heads, seq_len, d_head_qk = q.shape
+ d_head_v = v.shape[-1]
+ BK, BV = triton.next_power_of_2(d_head_qk), min(triton.next_power_of_2(d_head_v), 32)
+ NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV)
+ assert NK == 1, 'NK should be 1'
+ o = q.new_empty(batch_size, n_heads, seq_len, d_head_v)
+ if output_final_state:
+ final_state = q.new_empty(batch_size, n_heads, d_head_qk, d_head_v, dtype=torch.float32, requires_grad=False)
+ else:
+ final_state = None
+ CHECK = True
+ # if version.parse(triton.__version__) < version.parse('2.2.0'):
+ # import warnings
+ # warnings.warn(
+ # "Triton<2.2.0 detected for running this kernel, "
+ # "which is known to have some weird compiler issues (refer to https://github.com/openai/triton/issues/2852) "
+ # "that lead to significant precision loss. "
+    #         "We've added some initial condition checks to resolve this, sadly at the cost of some speed. "
+ # "For optimal performance, it is recommended to install Triton>=2.2.0 (if possible)."
+ # )
+ # CHECK = True
+ grid = (NV, NK, batch_size * n_heads)
+ v_new = torch.empty_like(v)
+ fused_chunk_delta_rule_fwd_kernel[grid](
+ q, k, v, v_new, d, o, initial_state, final_state,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ batch_size, n_heads, seq_len, scale,
+ BT=BT, DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ STORE_FINAL_STATE=output_final_state,
+ CHECK=CHECK,
+ )
+ return o, v_new, CHECK, final_state
+
+
+def fused_chunk_delta_rule_bwd(q, k, v, d, dht, dh0, do, BT, CHECK, initial_state, scale):
+ batch_size, n_heads, seq_len, d_head_qk = q.shape
+ d_head_v = v.shape[-1]
+ BK, BV = triton.next_power_of_2(d_head_qk), min(triton.next_power_of_2(d_head_v), 32)
+ NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV)
+ assert NK == 1
+ dq = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk)
+ dk = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk)
+ dd = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk)
+ dv = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v)
+ grid = (NV, NK, batch_size * n_heads)
+ fused_chunk_delta_rule_bwd_kernel[grid](
+ q, k, v, d, dht, dh0, do, dq, dk, dv, dd, initial_state,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ batch_size, n_heads, seq_len, scale,
+ BT=BT, DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ USE_DHT=dht is not None,
+ USE_DHO=dh0 is not None,
+ CHECK=CHECK
+ # num_warps=num_warps,
+ # num_stages=num_stages
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dv = dv.sum(0)
+ dd = dd.sum(0)
+ return dq, dk, dv, dd
+
+
+class FusedChunkDeltaRuleFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(ctx, q, k, v, beta, BT, scale, initial_state, output_final_state):
+        # obtain the WY representation; `u` serves as the new value tensor.
+ w, u, A = fwd_prepare_wy_repr(k, v, beta, BT)
+ # ### forward_h
+ final_state = None
+ if output_final_state:
+ final_state = q.new_empty(q.shape[0], q.shape[1], q.shape[-1], v.shape[-1],
+ dtype=torch.float32, requires_grad=False)
+ o, v_new, CHECK, final_state = fused_chunk_delta_rule_fwd(q, k, u, w, BT, scale, initial_state, output_final_state)
+ ctx.save_for_backward(q, k, v, beta, A, v_new, initial_state)
+ ctx.CHECK = CHECK
+ ctx.BT = BT
+ ctx.scale = scale
+ return o.to(q.dtype), final_state
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dht):
+ q, k, v, beta, A, v_new, initial_state = ctx.saved_tensors
+ BT = ctx.BT
+ scale = ctx.scale
+ w = fwd_recompute_w(k, beta, A, BT)
+ if initial_state is not None and initial_state.requires_grad:
+ dh0 = torch.empty_like(initial_state, dtype=torch.float32)
+ else:
+ dh0 = None
+ dq, dk, dv, dw = fused_chunk_delta_rule_bwd(q, k, v_new, w, dht, dh0, do, BT, ctx.CHECK, initial_state, scale)
+ dk2, dv, dbeta = bwd_prepare_wy_repr(k, v, beta, A, dw, dv, BT)
+ dk.add_(dk2)
+ return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), dbeta.to(beta.dtype), None, None, dh0, None, None, None
+
+
+def fused_chunk_delta_rule(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ scale: float = None,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ head_first: bool = True
+):
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ beta (torch.Tensor):
+            betas of shape `[B, H, T]` if `head_first=True` else `[B, T, H]`.
+        scale (Optional[float]):
+            Scale factor for the attention scores.
+            If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[B, H, K, V]`. Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[B, H, K, V]`. Default: `False`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (torch.Tensor):
+ Final state of shape `[B, H, K, V]` if `output_final_state=True` else `None`.
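+
+    Examples::
+        >>> import torch
+        >>> import torch.nn.functional as F
+        >>> from fla.ops.delta_rule import fused_chunk_delta_rule
+        # a minimal usage sketch: the shapes below are illustrative, and the import path
+        # assumes `fused_chunk_delta_rule` is re-exported from `fla.ops.delta_rule`
+        >>> B, H, T, K, V = 2, 4, 2048, 128, 128
+        >>> q = torch.randn(B, H, T, K, dtype=torch.bfloat16, device='cuda')
+        >>> k = F.normalize(torch.randn(B, H, T, K, dtype=torch.bfloat16, device='cuda'), p=2, dim=-1)
+        >>> v = torch.randn(B, H, T, V, dtype=torch.bfloat16, device='cuda')
+        >>> beta = torch.rand(B, H, T, dtype=torch.bfloat16, device='cuda').sigmoid()
+        >>> o, ht = fused_chunk_delta_rule(q, k, v, beta, output_final_state=True)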
+ """
+ BT = 32 if q.shape[-1] <= 128 else 16
+ assert q.dtype == k.dtype == v.dtype
+ assert q.dtype != torch.float32, "ChunkDeltaRuleFunction does not support float32. Please use bfloat16."
+    assert len(beta.shape) == 3, "beta must be of shape (batch_size, num_heads, seq_len)."
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ else:
+ assert scale > 0, "scale must be positive."
+ if not head_first:
+ q, k, v, beta = map(lambda x: x.transpose(1, 2), (q, k, v, beta))
+ o, final_state = FusedChunkDeltaRuleFunction.apply(q, k, v, beta, BT, scale, initial_state, output_final_state)
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o, final_state
diff --git a/fla/ops/delta_rule/fused_recurrent.py b/fla/ops/delta_rule/fused_recurrent.py
new file mode 100644
index 0000000000000000000000000000000000000000..5f5511487881f0fe150202624d48be43e995d64e
--- /dev/null
+++ b/fla/ops/delta_rule/fused_recurrent.py
@@ -0,0 +1,578 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous
+
+
+@triton.heuristics({
+ 'USE_INITIAL_STATE': lambda args: args['h0'] is not None,
+ 'STORE_FINAL_STATE': lambda args: args['ht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.jit
+def fused_recurrent_delta_rule_fwd_kernel(
+ q,
+ k,
+ v,
+ u,
+ beta,
+ o,
+ h0,
+ ht,
+ offsets,
+ scale,
+ B: tl.constexpr,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr, # whether to use initial state
+ STORE_FINAL_STATE: tl.constexpr, # whether to store final state
+ IS_BETA_HEADWISE: tl.constexpr, # whether beta is headwise vector or scalar,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_k, i_nh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_n, i_h = i_nh // H, i_nh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int64), tl.load(offsets + i_n + 1).to(tl.int64)
+ all = T
+ T = eos - bos
+ else:
+ bos, eos = i_n * T, i_n * T + T
+ all = B * T
+
+ if HEAD_FIRST:
+ p_q = q + i_nh * T*K + i_k * BK + tl.arange(0, BK)
+ p_k = k + i_nh * T*K + i_k * BK + tl.arange(0, BK)
+ p_v = v + i_nh * T*V + i_v * BV + tl.arange(0, BV)
+ p_u = u + i_nh * T*V + i_v * BV + tl.arange(0, BV)
+ if IS_BETA_HEADWISE:
+ p_beta = beta + i_nh * T*V + i_v * BV + tl.arange(0, BV)
+ else:
+ p_beta = beta + i_nh * T
+ p_o = o + (i_k * B*H + i_nh) * T*V + i_v * BV + tl.arange(0, BV)
+ else:
+ p_q = q + (bos * H + i_h) * K + i_k * BK + tl.arange(0, BK)
+ p_k = k + (bos * H + i_h) * K + i_k * BK + tl.arange(0, BK)
+ p_v = v + (bos * H + i_h) * V + i_v * BV + tl.arange(0, BV)
+ p_u = u + (bos * H + i_h) * V + i_v * BV + tl.arange(0, BV)
+ if IS_BETA_HEADWISE:
+ p_beta = beta + (bos * H + i_h) * V + i_v * BV + tl.arange(0, BV)
+ else:
+ p_beta = beta + bos * H + i_h
+ p_o = o + ((i_k * all + bos) * H + i_h) * V + i_v * BV + tl.arange(0, BV)
+
+ mask_k = (i_k * BK + tl.arange(0, BK)) < K
+ mask_v = (i_v * BV + tl.arange(0, BV)) < V
+ mask_h = mask_k[None, :] & mask_v[:, None]
+
+ b_h = tl.zeros([BV, BK], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h0 = h0 + i_nh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ b_h += tl.load(p_h0, mask=mask_h, other=0).to(tl.float32)
+
+ for _ in range(0, T):
+ b_k = tl.load(p_k, mask=mask_k, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_v, other=0).to(tl.float32)
+ b_q = tl.load(p_q, mask=mask_k, other=0).to(tl.float32) * scale
+ b_v_minus = tl.sum(b_h * b_k[None, :], axis=1)
+ b_v -= b_v_minus
+ if IS_BETA_HEADWISE:
+ b_beta = tl.load(p_beta, mask=mask_v, other=0).to(tl.float32)
+ else:
+ b_beta = tl.load(p_beta).to(tl.float32)
+ tl.store(p_u, b_v.to(p_v.dtype.element_ty), mask=mask_v)
+ b_v *= b_beta
+ b_h += b_k[None, :] * b_v[:, None]
+ b_o = b_h * b_q[None, :]
+ b_o = tl.sum(b_o, axis=1)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), mask=mask_v)
+
+ p_q += K if HEAD_FIRST else H*K
+ p_k += K if HEAD_FIRST else H*K
+ p_o += V if HEAD_FIRST else H*V
+ p_v += V if HEAD_FIRST else H*V
+ p_u += V if HEAD_FIRST else H*V
+ p_beta += (1 if HEAD_FIRST else H) * (V if IS_BETA_HEADWISE else 1)
+
+ if STORE_FINAL_STATE:
+ p_ht = ht + i_nh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), mask=mask_h)
+
+
+@triton.heuristics({
+ 'USE_INITIAL_STATE': lambda args: args['h0'] is not None,
+ 'USE_FINAL_STATE_GRADIENT': lambda args: args['dht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.jit
+def fused_recurrent_delta_rule_bwd_kernel(
+ q,
+ k,
+ v,
+ beta,
+ h0,
+ dh0,
+ dht,
+ do,
+ dq,
+ dk,
+ dv,
+ db,
+ offsets,
+ scale,
+ B: tl.constexpr,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NK: tl.constexpr,
+ IS_BETA_HEADWISE: tl.constexpr, # whether beta is headwise vector or scalar
+ USE_INITIAL_STATE: tl.constexpr, # whether to use dh0
+ USE_FINAL_STATE_GRADIENT: tl.constexpr, # whether to use dht
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_k, i_nh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_n, i_h = i_nh // H, i_nh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int64), tl.load(offsets + i_n + 1).to(tl.int64)
+ all = T
+ T = eos - bos
+ else:
+ bos, eos = i_n * T, i_n * T + T
+ all = B * T
+
+ mask_k = i_k * BK + tl.arange(0, BK) < K
+ mask_v = i_v * BV + tl.arange(0, BV) < V
+
+ if HEAD_FIRST:
+ p_q = q + i_nh * T*K + i_k * BK + tl.arange(0, BK) + (T - 1) * K
+ p_k = k + i_nh * T*K + i_k * BK + tl.arange(0, BK) + (T - 1) * K
+ p_v = v + i_nh * T*V + i_v * BV + tl.arange(0, BV) + (T - 1) * V
+ p_do = do + i_nh * T*V + i_v * BV + tl.arange(0, BV) + (T - 1) * V
+ p_dk = dk + (i_v * B*H + i_nh) * T*K + i_k * BK + tl.arange(0, BK) + (T - 1) * K
+ p_dv = dv + (i_k * B*H + i_nh) * T*V + i_v * BV + tl.arange(0, BV) + (T - 1) * V
+ if IS_BETA_HEADWISE:
+ p_beta = beta + i_nh * T*V + i_v * BV + tl.arange(0, BV) + (T - 1) * V
+ p_dbeta = db + (i_v * NK*B*H + i_k * B*H + i_nh) * T*V + tl.arange(0, BV) + (T - 1) * V
+ else:
+ p_beta = beta + i_nh * T + T - 1
+ p_dbeta = db + (i_v * B*H + i_nh) * T + T - 1
+ else:
+ p_q = q + (bos * H + i_h) * K + i_k * BK + tl.arange(0, BK) + (T - 1) * H*K
+ p_k = k + (bos * H + i_h) * K + i_k * BK + tl.arange(0, BK) + (T - 1) * H*K
+ p_v = v + (bos * H + i_h) * V + i_v * BV + tl.arange(0, BV) + (T - 1) * H*V
+ p_do = do + (bos * H + i_h) * V + i_v * BV + tl.arange(0, BV) + (T - 1) * H*V
+ p_dk = dk + ((i_v * all + bos) * H + i_h) * K + i_k * BK + tl.arange(0, BK) + (T - 1) * H*K
+ p_dv = dv + ((i_k * all + bos) * H + i_h) * V + i_v * BV + tl.arange(0, BV) + (T - 1) * H*V
+ if IS_BETA_HEADWISE:
+ p_beta = beta + (bos + T - 1) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_dbeta = db + ((i_v * NK + i_k) * all + bos + T - 1) * H*V + i_h * V + tl.arange(0, BV)
+ else:
+ p_beta = beta + (bos + T - 1) * H + i_h
+ p_dbeta = db + (i_v * all + bos + T - 1) * H + i_h
+
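+    # first pass (backward in time): accumulate the gradient of the running state `b_dh`
+    # and write the per-step gradients dk, dv and dbeta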
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_FINAL_STATE_GRADIENT:
+ p_ht = dht + i_nh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ b_dh += tl.load(p_ht, mask=mask_k[:, None] & mask_v[None, :], other=0).to(tl.float32)
+
+ for _ in range(T):
+ b_q = tl.load(p_q, mask=mask_k, other=0).to(tl.float32) * scale
+ b_k = tl.load(p_k, mask=mask_k, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_v, other=0).to(tl.float32)
+ b_do = tl.load(p_do, mask=mask_v, other=0).to(tl.float32)
+ if IS_BETA_HEADWISE:
+ b_beta = tl.load(p_beta, mask=mask_v, other=0).to(tl.float32)
+ else:
+ b_beta = tl.load(p_beta).to(tl.float32)
+ b_dh += b_q[:, None] * b_do[None, :]
+ b_dk = tl.sum(b_dh * (b_v * b_beta)[None, :], axis=1)
+ b_dv = tl.sum(b_dh * b_k[:, None], axis=0)
+
+ b_db = b_dv * b_v if IS_BETA_HEADWISE else tl.sum(b_dv * b_v)
+ b_dv = b_dv * b_beta
+
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), mask=mask_k)
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), mask=mask_v)
+ if IS_BETA_HEADWISE:
+ tl.store(p_dbeta, b_db.to(p_dbeta.dtype.element_ty), mask=mask_v)
+ else:
+ tl.store(p_dbeta, b_db.to(p_dbeta.dtype.element_ty))
+
+ b_dh -= b_k[:, None] * b_dv[None, :]
+
+ p_q -= K if HEAD_FIRST else H*K
+ p_k -= K if HEAD_FIRST else H*K
+ p_v -= V if HEAD_FIRST else H*V
+ p_do -= V if HEAD_FIRST else H*V
+ p_dk -= K if HEAD_FIRST else H*K
+ p_dv -= V if HEAD_FIRST else H*V
+ p_dbeta -= (1 if HEAD_FIRST else H) * (V if IS_BETA_HEADWISE else 1)
+ p_beta -= (1 if HEAD_FIRST else H) * (V if IS_BETA_HEADWISE else 1)
+
+ if USE_INITIAL_STATE:
+ p_dh0 = dh0 + i_nh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ tl.store(p_dh0, b_dh.to(p_dh0.dtype.element_ty), mask=mask_k[:, None] & mask_v[None, :])
+
+ tl.debug_barrier()
+
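+    # second pass (forward in time): recompute the running state `b_h` to apply the
+    # state-dependent correction to dk and to accumulate dq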
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+
+ if HEAD_FIRST:
+ p_q = q + i_nh * T*K + i_k * BK + tl.arange(0, BK)
+ p_k = k + i_nh * T*K + i_k * BK + tl.arange(0, BK)
+ p_v = v + i_nh * T*V + i_v * BV + tl.arange(0, BV)
+ if IS_BETA_HEADWISE:
+ p_beta = beta + i_nh * T*V + i_v * BV + tl.arange(0, BV)
+ else:
+ p_beta = beta + i_nh * T
+ p_do = do + i_nh * T*V + i_v * BV + tl.arange(0, BV)
+ p_dq = dq + (i_v * B*H + i_nh) * T*K + i_k * BK + tl.arange(0, BK)
+ p_dk = dk + (i_v * B*H + i_nh) * T*K + i_k * BK + tl.arange(0, BK)
+ p_dv = dv + (i_k * B*H + i_nh) * T*V + i_v * BV + tl.arange(0, BV)
+ else:
+ p_q = q + (bos * H + i_h) * K + i_k * BK + tl.arange(0, BK)
+ p_k = k + (bos * H + i_h) * K + i_k * BK + tl.arange(0, BK)
+ p_v = v + (bos * H + i_h) * V + i_v * BV + tl.arange(0, BV)
+ if IS_BETA_HEADWISE:
+ p_beta = beta + (bos * H + i_h) * V + i_v * BV + tl.arange(0, BV)
+ else:
+ p_beta = beta + bos * H + i_h
+ p_do = do + (bos * H + i_h) * V + i_v * BV + tl.arange(0, BV)
+ p_dq = dq + ((i_v * all + bos) * H + i_h) * K + i_k * BK + tl.arange(0, BK)
+ p_dk = dk + ((i_v * all + bos) * H + i_h) * K + i_k * BK + tl.arange(0, BK)
+ p_dv = dv + ((i_k * all + bos) * H + i_h) * V + i_v * BV + tl.arange(0, BV)
+
+ if USE_INITIAL_STATE:
+ mask_h = mask_k[:, None] & mask_v[None, :]
+ p_h0 = h0 + i_nh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ b_h += tl.load(p_h0, mask=mask_h, other=0).to(tl.float32)
+
+ for _ in range(0, T):
+ b_dk = tl.load(p_dk, mask=mask_k, other=0).to(tl.float32)
+ b_dv = tl.load(p_dv, mask=mask_v, other=0).to(tl.float32)
+ b_dk -= tl.sum(b_dv[None, :] * b_h, axis=1)
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), mask=mask_k)
+
+ b_k = tl.load(p_k, mask=mask_k, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_v, other=0).to(tl.float32)
+ b_do = tl.load(p_do, mask=mask_v, other=0).to(tl.float32)
+ if IS_BETA_HEADWISE:
+ b_beta = tl.load(p_beta, mask=mask_v, other=0).to(tl.float32)
+ else:
+ b_beta = tl.load(p_beta).to(tl.float32)
+ b_v *= b_beta
+
+ b_h += b_k[:, None] * b_v[None, :]
+ b_dq = b_h * b_do[None, :]
+ d_q = tl.sum(b_dq, axis=1) * scale
+ tl.store(p_dq, d_q.to(p_dq.dtype.element_ty), mask=mask_k)
+
+ p_k += K if HEAD_FIRST else H*K
+ p_v += V if HEAD_FIRST else H*V
+ p_do += V if HEAD_FIRST else H*V
+ p_dq += K if HEAD_FIRST else H*K
+ p_dk += K if HEAD_FIRST else H*K
+ p_dv += V if HEAD_FIRST else H*V
+ p_beta += (1 if HEAD_FIRST else H) * (V if IS_BETA_HEADWISE else 1)
+
+
+def fused_recurrent_delta_rule_fwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ scale: float,
+ initial_state: torch.Tensor,
+ output_final_state: bool,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ N = B if offsets is None else len(offsets) - 1
+ BK, BV = triton.next_power_of_2(K), min(triton.next_power_of_2(V), 8)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ assert NK == 1, "NK > 1 is not supported yet"
+ num_stages = 1
+ num_warps = 1
+
+ o = q.new_empty(NK, *v.shape)
+ if output_final_state:
+ final_state = q.new_empty(N, H, K, V, dtype=torch.float32)
+ else:
+ final_state = None
+
+ grid = (NV, NK, N * H)
+ u = torch.empty_like(v)
+ fused_recurrent_delta_rule_fwd_kernel[grid](
+ q,
+ k,
+ v,
+ u,
+ beta,
+ o,
+ initial_state,
+ final_state,
+ offsets,
+ scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BK=BK,
+ BV=BV,
+ IS_BETA_HEADWISE=beta.ndim == v.ndim,
+ HEAD_FIRST=head_first,
+ num_warps=num_warps,
+ num_stages=num_stages,
+ )
+ o = o.squeeze(0)
+ return o, u, final_state
+
+
+def fused_recurrent_delta_rule_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ dht: torch.Tensor,
+ do: torch.Tensor,
+ scale: float,
+ initial_state: torch.Tensor,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ N = B if offsets is None else len(offsets) - 1
+ BK, BV = triton.next_power_of_2(K), min(triton.next_power_of_2(V), 32)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ assert NK == 1, "NK > 1 is not supported yet"
+ num_stages = 1
+ num_warps = 2
+
+ beta_vector = beta.ndim == v.ndim
+
+ dq = q.new_empty(NV, *q.shape)
+ dk = q.new_empty(NV, *k.shape)
+ dv = q.new_empty(NK, *v.shape)
+ if beta_vector:
+ db = q.new_empty(NV, NK, B, H, T, V) if head_first else q.new_empty(NV, NK, B, T, H, V)
+ else:
+ db = q.new_empty(NV, B, H, T) if head_first else q.new_empty(NV, B, T, H)
+ grid = (NV, NK, N * H)
+
+ if initial_state is not None and initial_state.requires_grad:
+ dh0 = torch.empty_like(initial_state, dtype=torch.float32)
+ else:
+ dh0 = None
+
+ fused_recurrent_delta_rule_bwd_kernel[grid](
+ q,
+ k,
+ v,
+ beta,
+ initial_state,
+ dh0,
+ dht,
+ do,
+ dq,
+ dk,
+ dv,
+ db,
+ offsets,
+ scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BK=BK,
+ BV=BV,
+ NK=NK,
+ IS_BETA_HEADWISE=beta_vector,
+ HEAD_FIRST=head_first,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dv = dv.sum(0)
+ db = db.sum((0, 1)) if beta_vector else db.sum(0)
+
+ return dq, dk, dv, db, dh0
+
+
+class FusedRecurrentFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ scale: float,
+ initial_state: torch.Tensor,
+ output_final_state: bool,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+ ):
+ o, u, final_state = fused_recurrent_delta_rule_fwd(
+ q=q,
+ k=k,
+ v=v,
+ beta=beta,
+ scale=scale,
+ initial_state=initial_state,
+ output_final_state=output_final_state,
+ offsets=offsets,
+ head_first=head_first
+ )
+
+ ctx.save_for_backward(q, k, u, beta, initial_state)
+ ctx.scale = scale
+ ctx.offsets = offsets
+ ctx.head_first = head_first
+ return o, final_state
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, do, dht):
+        # note: the third saved tensor is `u` (the transformed values) produced in the forward pass
+        q, k, v, beta, initial_state = ctx.saved_tensors
+ dq, dk, dv, db, dh0 = fused_recurrent_delta_rule_bwd(
+ q=q,
+ k=k,
+ v=v,
+ beta=beta,
+ dht=dht,
+ do=do,
+ scale=ctx.scale,
+ initial_state=initial_state,
+ offsets=ctx.offsets,
+ head_first=ctx.head_first
+ )
+ return dq.to(q), dk.to(k), dv.to(v), db.to(beta), None, dh0, None, None, None
+
+
+def fused_recurrent_delta_rule(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor = None,
+ scale: float = None,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ beta (torch.Tensor):
+            betas of shape `[B, H, T]` if `head_first=True` else `[B, T, H]`.
+        scale (Optional[float]):
+            Scale factor for the attention scores.
+            If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[N, H, K, V]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[N, H, K, V]`. Default: `False`.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format, which is not supported for variable-length inputs.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (torch.Tensor):
+ Final state of shape `[N, H, K, V]` if `output_final_state=True` else `None`.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+ >>> from fla.ops.delta_rule import fused_recurrent_delta_rule
+ # inputs with equal lengths
+ >>> B, T, H, K, V = 4, 2048, 4, 512, 512
+ >>> q = torch.randn(B, T, H, K, device='cuda')
+ >>> k = F.normalize(torch.randn(B, T, H, K, device='cuda'), p=2, dim=-1)
+ >>> v = torch.randn(B, T, H, V, device='cuda')
+ >>> beta = torch.rand(B, T, H, device='cuda').sigmoid()
+ >>> h0 = torch.randn(B, H, K, V, device='cuda')
+ >>> o, ht = fused_recurrent_delta_rule(q, k, v, beta,
+ initial_state=h0,
+ output_final_state=True,
+ head_first=False)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+ >>> q, k, v, beta = map(lambda x: rearrange(x, 'b t ... -> 1 (b t) ...'), (q, k, v, beta))
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, ht_var = fused_recurrent_delta_rule(q, k, v, beta,
+ initial_state=h0,
+ output_final_state=True,
+ offsets=offsets,
+ head_first=False)
+ >>> assert o.allclose(o_var.view(o.shape))
+ >>> assert ht.allclose(ht_var)
+ """
+ if offsets is not None:
+ if q.shape[0] != 1:
+            raise ValueError(f"The batch size is expected to be 1 rather than {q.shape[0]} when using `offsets`. "
+                             f"Please flatten variable-length inputs before processing.")
+ if head_first:
+ raise RuntimeError("Sequences with variable lengths are not supported for head-first mode")
+ if initial_state is not None and initial_state.shape[0] != len(offsets) - 1:
+ raise ValueError(f"The number of initial states is expected to be equal to the number of input sequences, "
+ f"i.e., {len(offsets) - 1} rather than {initial_state.shape[0]}.")
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ else:
+ assert scale > 0, "scale must be positive"
+ if beta is None:
+ beta = torch.ones_like(q[..., 0])
+ o, final_state = FusedRecurrentFunction.apply(
+ q,
+ k,
+ v,
+ beta,
+ scale,
+ initial_state,
+ output_final_state,
+ offsets,
+ head_first
+ )
+ return o, final_state
diff --git a/fla/ops/delta_rule/naive.py b/fla/ops/delta_rule/naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..bdd73cf29a345f24c49de38b9c4b7986c21573ab
--- /dev/null
+++ b/fla/ops/delta_rule/naive.py
@@ -0,0 +1,164 @@
+# -*- coding: utf-8 -*-
+
+import torch
+from einops import rearrange
+
+
+def delta_rule_recurrence(q, k, v, beta, initial_state=None, output_final_state=True):
+ orig_dtype = q.dtype
+ b, h, l, d_k = q.shape
+ q, k, v, beta = map(lambda x: x.float(), [q, k, v, beta])
+ d_v = v.shape[-1]
+ o = torch.zeros_like(v)
+ S = torch.zeros(b, h, d_k, d_v).to(v)
+ q = q * (d_k ** -0.5)
+
+ if beta.ndim < v.ndim:
+ beta = beta[..., None]
+
+ if initial_state is not None:
+ S += initial_state
+
+ for i in range(l):
+ _k = k[:, :, i]
+ _q = q[:, :, i]
+ _v = v[:, :, i].clone()
+ beta_i = beta[:, :, i]
+ _v = _v - (S.clone() * _k[..., None]).sum(-2)
+ _v = _v * beta_i
+ S = S.clone() + _k.unsqueeze(-1) * _v.unsqueeze(-2)
+ o[:, :, i] = torch.einsum('bhd,bhdm->bhm', _q, S)
+ S = None if output_final_state is False else S
+ return o.to(orig_dtype), S
+
+
+def delta_rule_chunkwise(q, k, v, beta, chunk_size=32):
+ b, h, l, d_k = q.shape
+ d_v = v.shape[-1]
+ q = q * (d_k ** -0.5)
+ v = v * beta[..., None]
+ k_beta = k * beta[..., None]
+
+ assert l % chunk_size == 0
+
+ # compute (I - tri(diag(beta) KK^T))^{-1}
+ mask = torch.triu(torch.ones(chunk_size, chunk_size, dtype=torch.bool, device=q.device), diagonal=0)
+ q, k, v, k_beta = map(lambda x: rearrange(x, 'b h (n c) d -> b h n c d', c=chunk_size), [q, k, v, k_beta])
+ attn = -(k_beta @ k.transpose(-1, -2)).masked_fill(mask, 0)
+ for i in range(1, chunk_size):
+ attn[..., i, :i] = attn[..., i, :i] + (attn[..., i, :, None].clone() * attn[..., :, :i].clone()).sum(-2)
+ attn = attn + torch.eye(chunk_size, dtype=torch.float, device=q.device)
+
+ u = attn @ v
+ w = attn @ k_beta
+ S = k.new_zeros(b, h, d_k, d_v)
+ o = torch.zeros_like(v)
+ mask = torch.triu(torch.ones(chunk_size, chunk_size, dtype=torch.bool, device=q.device), diagonal=1)
+ for i in range(0, l // chunk_size):
+ q_i, k_i = q[:, :, i], k[:, :, i]
+ attn = (q_i @ k_i.transpose(-1, -2)).masked_fill_(mask, 0)
+ u_i = u[:, :, i] - w[:, :, i] @ S
+ o_inter = q_i @ S
+ o[:, :, i] = o_inter + attn @ u_i
+ S = S + k_i.transpose(-1, -2) @ u_i
+
+ return rearrange(o, 'b h n c d -> b h (n c) d'), S
+
+
+def delta_rule_parallel(q, k, v, beta, BM=128, BN=32):
+ b, h, l, d_k = q.shape
+ # d_v = v.shape[-1]
+ q = q * (d_k ** -0.5)
+ v = v * beta[..., None]
+ k_beta = k * beta[..., None]
+ # compute (I - tri(diag(beta) KK^T))^{-1}
+ q, k, v, k_beta = map(lambda x: rearrange(x, 'b h (n c) d -> b h n c d', c=BN), [q, k, v, k_beta])
+ mask = torch.triu(torch.ones(BN, BN, dtype=torch.bool, device=q.device), diagonal=0)
+ T = -(k_beta @ k.transpose(-1, -2)).masked_fill(mask, 0)
+ for i in range(1, BN):
+ T[..., i, :i] = T[..., i, :i].clone() + (T[..., i, :, None].clone() * T[..., :, :i].clone()).sum(-2)
+ T = T + torch.eye(BN, dtype=torch.float, device=q.device)
+
+ mask2 = torch.triu(torch.ones(BN, BN, dtype=torch.bool, device=q.device), diagonal=1)
+ A_local = (q @ k.transpose(-1, -2)).masked_fill(mask2, 0) @ T
+ o_intra = A_local @ v
+
+ # apply cumprod transition matrices on k to the last position within the chunk
+ k = k - ((k @ k.transpose(-1, -2)).masked_fill(mask, 0) @ T).transpose(-1, -2) @ k_beta
+ # apply cumprod transition matrices on q to the first position within the chunk
+ q = q - A_local @ k_beta
+
+ A = torch.zeros(b, h, l, l, device=q.device)
+
+ q, k, v, k_beta, o_intra = map(lambda x: rearrange(x, 'b h n c d -> b h (n c) d'), [q, k, v, k_beta, o_intra])
+ o = torch.empty_like(v)
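+    # process outputs in blocks of BM rows; for each block, scan the preceding BN-sized
+    # key/value blocks from right to left, folding their transition matrices into q_i
+    # before attending to earlier blocks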
+ for i in range(0, l, BM):
+ q_i = q[:, :, i:i+BM]
+ o_i = o_intra[:, :, i:i+BM]
+ # intra block
+ for j in range(i + BM - 2 * BN, i-BN, -BN):
+ k_j = k[:, :, j:j+BN]
+ A_ij = q_i @ k_j.transpose(-1, -2)
+ mask = torch.arange(i, i+BM) >= (j + BN)
+ A_ij = A_ij.masked_fill_(~mask[:, None].to(A_ij.device), 0)
+ A[:, :, i:i+BM, j:j+BN] = A_ij
+ q_i = q_i - A_ij @ k_beta[:, :, j:j+BN]
+ o_i += A_ij @ v[:, :, j:j+BN]
+ # inter block
+ for j in range(i - BN, -BN, -BN):
+ k_j = k[:, :, j:j+BN]
+ A_ij = q_i @ k_j.transpose(-1, -2)
+ A[:, :, i:i+BM, j:j+BN] = A_ij
+ q_i = q_i - A_ij @ k_beta[:, :, j:j+BN]
+ o_i += A_ij @ v[:, :, j:j+BN]
+ o[:, :, i:i+BM] = o_i
+
+ for i in range(0, l//BN):
+ A[:, :, i*BN:i*BN+BN, i*BN:i*BN+BN] = A_local[:, :, i]
+
+ return o, A
+
+
+if __name__ == '__main__':
+ B = 2
+ H = 4
+ L = 512
+ DK = 128
+ DV = 128
+ q = (torch.randn(B, H, L, DK)).cuda().requires_grad_(True)
+ k = (torch.randn(B, H, L, DK)).cuda()
+ k = torch.nn.functional.normalize(k, dim=-1, p=2).requires_grad_(True)
+ v = (torch.randn(B, H, L, DV)).cuda().requires_grad_(True)
+ beta = torch.randn(B, H, L).cuda().sigmoid().requires_grad_(True)
+
+ o, _ = delta_rule_recurrence(q, k, v, beta)
+ do = torch.randn(B, H, L, DV).cuda()
+ o.backward(do, retain_graph=True)
+ q_grad, q.grad = q.grad, None
+ k_grad, k.grad = k.grad, None
+ v_grad, v.grad = v.grad, None
+ beta_grad, beta.grad = beta.grad, None
+
+ o2, _ = delta_rule_chunkwise(q, k, v, beta)
+ o2.backward(do)
+ assert torch.allclose(o, o2, atol=1e-4), breakpoint()
+ assert torch.allclose(q.grad, q_grad, atol=1e-4), breakpoint()
+ assert torch.allclose(k.grad, k_grad, atol=1e-4), breakpoint()
+ assert torch.allclose(v.grad, v_grad, atol=1e-4), breakpoint()
+ assert torch.allclose(beta.grad, beta_grad, atol=1e-4), breakpoint()
+
+ q_grad, q.grad = q.grad, None
+ k_grad, k.grad = k.grad, None
+ v_grad, v.grad = v.grad, None
+ beta_grad, beta.grad = beta.grad, None
+
+ o3, _ = delta_rule_parallel(q, k, v, beta)
+ o3.backward(do)
+ assert torch.allclose(o, o3, atol=1e-4), breakpoint()
+ assert torch.allclose(q.grad, q_grad, atol=1e-4), breakpoint()
+ assert torch.allclose(k.grad, k_grad, atol=1e-4), breakpoint()
+ assert torch.allclose(v.grad, v_grad, atol=1e-4), breakpoint()
+ assert torch.allclose(beta.grad, beta_grad, atol=1e-4), breakpoint()
+
+ print("All passed!")
diff --git a/fla/ops/delta_rule/parallel.py b/fla/ops/delta_rule/parallel.py
new file mode 100644
index 0000000000000000000000000000000000000000..5613dcaffcf0342f81849eeff5279bf993e64b4a
--- /dev/null
+++ b/fla/ops/delta_rule/parallel.py
@@ -0,0 +1,400 @@
+
+
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Tuple
+
+import torch
+import triton
+import triton.language as tl
+from einops import rearrange
+
+from fla.ops.delta_rule.wy_fast import fwd_prepare_T
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ ],
+ key=["BT", "K", "V"],
+)
+@triton.jit
+def chunk_transform_qk_fwd_kernel(
+ q,
+ k,
+ v,
+ beta,
+ o,
+ A,
+ q_new,
+ k_new,
+ A_local,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ BT: tl.constexpr,
+ OUTPUT_ATTENTIONS: tl.constexpr,
+ # SAVE_ATTENTION: tl.constexpr
+):
+ i_t, i_bh = tl.program_id(0), tl.program_id(1)
+
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, 0), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, 0), (BT, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, 0), (BT, BV), (1, 0))
+ b_q = (tl.load(p_q, boundary_check=(0, 1)) * scale).to(p_q.dtype.element_ty)
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+
+ p_T = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ b_T = tl.load(p_T, boundary_check=(0, 1))
+
+ o_i = tl.arange(0, BT)
+ m_t = o_i[:, None] >= o_i[None, :]
+ b_qk = tl.where(m_t, tl.dot(b_q, tl.trans(b_k), allow_tf32=False), 0).to(b_q.dtype)
+ m_t = o_i[:, None] > o_i[None, :]
+ b_kk = tl.where(m_t, tl.dot(b_k, tl.trans(b_k), allow_tf32=False), 0).to(b_k.dtype)
+
+ p_beta = tl.make_block_ptr(beta + i_bh * T, (T, ), (1, ), (i_t * BT, ), (BT, ), (0, ))
+ b_beta = tl.load(p_beta, boundary_check=(0, ))
+ b_k_beta = (b_k * b_beta[:, None]).to(b_k.dtype)
+
+ b_qkT = tl.dot(b_qk, b_T, allow_tf32=False).to(b_k.dtype)
+
+ if OUTPUT_ATTENTIONS:
+ p_a = tl.make_block_ptr(A_local + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ tl.store(p_a, b_qkT.to(p_a.dtype.element_ty), boundary_check=(0, 1))
+
+ b_kkT = tl.dot(b_kk, b_T, allow_tf32=False).to(b_k.dtype)
+ p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, 0), (BT, BV), (1, 0))
+ tl.store(p_o, tl.dot(b_qkT, b_v).to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+ p_q_new = tl.make_block_ptr(q_new + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, 0), (BT, BK), (1, 0))
+ tl.store(p_q_new, (b_q - tl.dot(b_qkT, b_k_beta, allow_tf32=False)).to(p_q_new.dtype.element_ty), boundary_check=(0, 1))
+
+ p_k_new = tl.make_block_ptr(k_new + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, 0), (BT, BK), (1, 0))
+ tl.store(p_k_new, (b_k - tl.dot(tl.trans(b_kkT), b_k_beta, allow_tf32=False)
+ ).to(p_k_new.dtype.element_ty), boundary_check=(0, 1))
+
+
+def chunk_transform_qk_fwd_fn(q, k, v, beta, A, scale, BT, output_attentions):
+ B, H, T, K = k.shape
+ q_new = torch.empty_like(q)
+ k_new = torch.empty_like(k)
+ o = torch.empty_like(v)
+ grid = (triton.cdiv(T, BT), B*H)
+ V = v.shape[-1]
+ A_local = torch.empty_like(A) if output_attentions else None
+ chunk_transform_qk_fwd_kernel[grid](
+ q, k, v, beta, o, A, q_new, k_new, A_local,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ scale=scale,
+ T=T,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=triton.next_power_of_2(K),
+ BV=triton.next_power_of_2(V),
+ OUTPUT_ATTENTIONS=output_attentions
+ )
+ return q_new, k_new, o, A_local
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ ],
+ key=["BT"],
+)
+@triton.jit
+def save_intra_chunk_attn(
+ A,
+ A_local,
+ T: tl.constexpr,
+ BT: tl.constexpr,
+):
+ i_t, i_bh = tl.program_id(0), tl.program_id(1)
+ p_A = tl.make_block_ptr(A + i_bh * T * T, (T, T), (T, 1), (i_t * BT, i_t * BT), (BT, BT), (1, 0))
+ p_A_local = tl.make_block_ptr(A_local + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ b_A_local = tl.load(p_A_local, boundary_check=(0, 1))
+ tl.store(p_A, b_A_local.to(p_A.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.heuristics({
+ 'OUTPUT_ATTENTIONS': lambda args: args['attn'] is not None
+})
+@triton.jit
+def parallel_delta_rule_fwd_kernel(
+ q,
+ k,
+ k2, # original k
+ v,
+ beta,
+ o,
+ o_new,
+ attn,
+ s_k_h,
+ s_k_t,
+ s_v_h,
+ s_v_t,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ OUTPUT_ATTENTIONS: tl.constexpr
+):
+ i_t, i_bh = tl.program_id(0), tl.program_id(1)
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, 1), (i_t * BT, 0), (BT, BK), (1, 0))
+
+ # the Q block is kept in the shared memory throughout the whole kernel
+ # [BT, BK]
+ b_q = tl.zeros([BT, BK], dtype=tl.float32)
+ b_q += tl.load(p_q, boundary_check=(0, 1))
+
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, 1), (i_t * BT, 0), (BT, BV), (1, 0))
+ b_o += tl.load(p_o, boundary_check=(0, 1))
+
+    # As opposed to FlashAttention, this kernel requires scanning the KV blocks from right to left.
+    # In this first loop the Q block and the K block overlap, so causal masks are required.
+ for offset in range((i_t + 1) * BT - 2 * BS, i_t * BT - BS, -BS):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (1, s_k_t), (0, offset), (BK, BS), (0, 1))
+ p_k2 = tl.make_block_ptr(k2 + i_bh * s_k_h, (T, K), (s_k_t, 1), (offset, 0), (BS, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, 1), (offset, 0), (BS, BV), (1, 0))
+ p_beta = tl.make_block_ptr(beta + i_bh * T, (T, ), (1, ), (offset, ), (BS, ), (0,))
+ # [BK, BS]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BS, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BS]
+ b_beta = tl.load(p_beta, boundary_check=(0,))
+ # [BT, BS]
+ m_s = tl.arange(0, BT) >= (offset - i_t*BT + BS)
+ b_s = tl.dot(b_q.to(b_k.dtype), b_k, allow_tf32=False)
+ b_s = tl.where(m_s[:, None], b_s, 0)
+
+ b_o += tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)
+ b_k2 = (tl.load(p_k2, boundary_check=(0, 1)) * b_beta[:, None]).to(b_v.dtype)
+ b_q -= tl.dot(b_s.to(b_v.dtype), b_k2, allow_tf32=False)
+
+ if OUTPUT_ATTENTIONS:
+ p_a = tl.make_block_ptr(attn + i_bh * T * T, (T, T), (T, 1), (i_t * BT, offset), (BT, BS), (1, 0))
+ tl.store(p_a, b_s.to(p_a.dtype.element_ty), boundary_check=(0, 1))
+
+ # Q block and K block have no overlap
+ # no need for mask, thereby saving flops
+ for offset in range(i_t * BT - BS, -BS, -BS):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (1, s_k_t), (0, offset), (BK, BS), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, 1), (offset, 0), (BS, BV), (1, 0))
+ p_beta = tl.make_block_ptr(beta + i_bh * T, (T, ), (1, ), (offset, ), (BS, ), (0,))
+ p_k2 = tl.make_block_ptr(k2 + i_bh * s_k_h, (T, K), (s_k_t, 1), (offset, 0), (BS, BK), (1, 0))
+
+ # [BK, BS]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BS, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BS]
+ b_beta = tl.load(p_beta, boundary_check=(0,))
+ # [BT, BS]
+ b_s = (tl.dot(b_q.to(b_k.dtype), b_k, allow_tf32=False))
+ # [BT, BV]
+ b_o += tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)
+ b_k2 = (tl.load(p_k2, boundary_check=(0, 1)) * b_beta[:, None]).to(b_v.dtype)
+ b_q -= tl.dot(b_s.to(b_v.dtype), b_k2, allow_tf32=False).to(b_q.dtype)
+
+ if OUTPUT_ATTENTIONS:
+ p_a = tl.make_block_ptr(attn + i_bh * T * T, (T, T), (T, 1), (i_t * BT, offset), (BT, BS), (1, 0))
+ tl.store(p_a, b_s.to(p_a.dtype.element_ty), boundary_check=(0, 1))
+
+ p_o_new = tl.make_block_ptr(o_new + i_bh * s_v_h, (T, V), (s_v_t, 1), (i_t*BT, 0), (BT, BV), (1, 0))
+    tl.store(p_o_new, b_o.to(p_o_new.dtype.element_ty), boundary_check=(0, 1))
+
+
+class ParallelDeltaRuleFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(ctx, q, k, v, beta, scale, output_attentions):
+ B, H, T, K, V = *k.shape, v.shape[-1]
+        assert q.shape[-1] <= 128, 'The maximum supported head dimension is 128.'
+ BT, BS = 128, 32
+ BK = triton.next_power_of_2(k.shape[-1])
+ BV = triton.next_power_of_2(v.shape[-1])
+ assert BT % BS == 0
+
+        # `fwd_prepare_T` expects (k, beta, offsets, indices, head_first, chunk_size); inputs here are already head-first
+        A = fwd_prepare_T(k, beta, offsets=None, indices=None, head_first=True, chunk_size=BS)
+ attn = q.new_zeros(B, H, T, T) if output_attentions else None
+ q_new, k_new, o, A_local = chunk_transform_qk_fwd_fn(q, k, v, beta, A, scale, BS, output_attentions)
+
+ num_stages = 3 if K <= 64 else 2
+ num_warps = 4
+ grid = (triton.cdiv(T, BT), B * H)
+ o_new = torch.empty_like(o)
+
+ parallel_delta_rule_fwd_kernel[grid](
+ q=q_new,
+ k=k_new,
+ k2=k,
+ v=v,
+ beta=beta,
+ o=o,
+ o_new=o_new,
+ attn=attn,
+ s_k_h=k.stride(1),
+ s_k_t=k.stride(2),
+ s_v_h=v.stride(1),
+ s_v_t=v.stride(2),
+ T=T,
+ K=K,
+ V=V,
+ BT=BT,
+ BS=BS,
+ BK=BK,
+ BV=BV,
+ num_stages=num_stages,
+ num_warps=num_warps
+ )
+
+ if output_attentions:
+ grid = (triton.cdiv(T, BS), B * H)
+ save_intra_chunk_attn[grid](
+ A=attn, A_local=A_local, T=T, BT=BS
+ )
+ return o_new.to(q.dtype), attn
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, d_attn=None):
+ raise NotImplementedError('Backward pass is not implemented. Stay tuned!')
+
+
+def parallel_delta_rule(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ scale: float = None,
+ output_attentions: bool = False,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ beta (torch.Tensor):
+ betas of shape `[B, H, T]` if `head_first=True` else `[B, T, H]`.
+        scale (Optional[float]):
+ Scale factor for attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ output_attentions (bool):
+ Whether to output the materialized attention scores of shape [B, H, T, T]. Default: `False`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ attn (torch.Tensor):
+ Attention scores of shape `[B, H, T, T]` if `output_attentions=True` else `None`.
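+
+    Examples::
+        >>> import torch
+        >>> import torch.nn.functional as F
+        >>> from fla.ops.delta_rule import parallel_delta_rule
+        # a minimal usage sketch: shapes are illustrative (the head dimension must be <= 128),
+        # and the import path assumes `parallel_delta_rule` is re-exported from `fla.ops.delta_rule`
+        >>> B, H, T, K, V = 2, 4, 512, 64, 64
+        >>> q = torch.randn(B, H, T, K, dtype=torch.bfloat16, device='cuda')
+        >>> k = F.normalize(torch.randn(B, H, T, K, dtype=torch.bfloat16, device='cuda'), p=2, dim=-1)
+        >>> v = torch.randn(B, H, T, V, dtype=torch.bfloat16, device='cuda')
+        >>> beta = torch.rand(B, H, T, dtype=torch.bfloat16, device='cuda').sigmoid()
+        >>> o, attn = parallel_delta_rule(q, k, v, beta, output_attentions=True)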
+ """
+    if scale is None:
+        scale = k.shape[-1] ** -0.5
+    else:
+        assert scale > 0, "scale must be positive."
+    if not head_first:
+ q, k, v, beta = map(lambda x: x.transpose(1, 2), (q, k, v, beta))
+ o, attn = ParallelDeltaRuleFunction.apply(q, k, v, beta, scale, output_attentions)
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o, attn
+
+
+def naive_delta_rule_parallel(q, k, v, beta, BM=128, BN=32):
+ b, h, l, d_k = q.shape
+ q = q * (d_k ** -0.5)
+ v = v * beta[..., None]
+ k_beta = k * beta[..., None]
+ # compute (I - tri(diag(beta) KK^T))^{-1}
+ q, k, v, k_beta = map(lambda x: rearrange(x, 'b h (n c) d -> b h n c d', c=BN), [q, k, v, k_beta])
+ mask = torch.triu(torch.ones(BN, BN, dtype=torch.bool, device=q.device), diagonal=0)
+ T = -(k_beta @ k.transpose(-1, -2)).masked_fill(mask, 0)
+ for i in range(1, BN):
+ T[..., i, :i] = T[..., i, :i].clone() + (T[..., i, :, None].clone() * T[..., :, :i].clone()).sum(-2)
+ T = T + torch.eye(BN, dtype=q.dtype, device=q.device)
+
+ mask2 = torch.triu(torch.ones(BN, BN, dtype=torch.bool, device=q.device), diagonal=1)
+ A_local = (q @ k.transpose(-1, -2)).masked_fill(mask2, 0) @ T
+ o_intra = A_local @ v
+
+ # apply cumprod transition matrices on k to the last position within the chunk
+ k = k - ((k @ k.transpose(-1, -2)).masked_fill(mask, 0) @ T).transpose(-1, -2) @ k_beta
+ # apply cumprod transition matrices on q to the first position within the chunk
+ q = q - A_local @ k_beta
+
+ A = torch.zeros(b, h, l, l, device=q.device)
+
+ q, k, v, k_beta, o_intra = map(lambda x: rearrange(x, 'b h n c d -> b h (n c) d'), [q, k, v, k_beta, o_intra])
+ o = torch.empty_like(v)
+ for i in range(0, l, BM):
+ q_i = q[:, :, i:i+BM]
+ o_i = o_intra[:, :, i:i+BM]
+ # intra block
+ for j in range(i + BM - 2 * BN, i-BN, -BN):
+ k_j = k[:, :, j:j+BN]
+ A_ij = q_i @ k_j.transpose(-1, -2)
+ mask = torch.arange(i, i+BM) >= (j + BN)
+ A_ij = A_ij.masked_fill_(~mask[:, None].to(A_ij.device), 0)
+ A[:, :, i:i+BM, j:j+BN] = A_ij
+ q_i = q_i - A_ij @ k_beta[:, :, j:j+BN]
+ o_i += A_ij @ v[:, :, j:j+BN]
+ # inter block
+ for j in range(i - BN, -BN, -BN):
+ k_j = k[:, :, j:j+BN]
+ A_ij = q_i @ k_j.transpose(-1, -2)
+ A[:, :, i:i+BM, j:j+BN] = A_ij
+ q_i = q_i - A_ij @ k_beta[:, :, j:j+BN]
+ o_i += A_ij @ v[:, :, j:j+BN]
+ o[:, :, i:i+BM] = o_i
+
+ for i in range(0, l//BN):
+ A[:, :, i*BN:i*BN+BN, i*BN:i*BN+BN] = A_local[:, :, i]
+
+ return o, A
+
+
+if __name__ == "__main__":
+ B, H, T, K, V = 2, 4, 512, 64, 64
+ torch.set_default_dtype(torch.bfloat16)
+
+    q = torch.randn(B, H, T, K).cuda()
+    k = torch.nn.functional.normalize(torch.randn(B, H, T, K).cuda(), p=2, dim=-1)
+    v = torch.randn(B, H, T, V).cuda()
+ beta = torch.ones(B, H, T).cuda()
+
+ output_attentions = True
+ ref_o, ref_attn = naive_delta_rule_parallel(q.clone(), k.clone(), v.clone(), beta.clone())
+ o, attn = parallel_delta_rule(q.clone(), k.clone(), v.clone(), beta.clone(), K**-0.5, output_attentions)
+ print((ref_o-o).abs().max())
+ print((ref_attn-attn).abs().max())
diff --git a/fla/ops/delta_rule/wy_fast.py b/fla/ops/delta_rule/wy_fast.py
new file mode 100644
index 0000000000000000000000000000000000000000..ea8962bd0c97fd0e6800272af2edad8ca0dcc4b4
--- /dev/null
+++ b/fla/ops/delta_rule/wy_fast.py
@@ -0,0 +1,805 @@
+# -*- coding: utf-8 -*-
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+from einops import rearrange
+
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+# Inspired by "THE WY REPRESENTATION FOR PRODUCTS OF HOUSEHOLDER MATRICES" https://epubs.siam.org/doi/pdf/10.1137/0908009
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16)
+ ],
+ key=["BK"]
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def fwd_prepare_wy_repr_kernel_chunk32(
+ k,
+ beta,
+ A,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BC: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr,
+):
+ i_t, i_bh = tl.program_id(0), tl.program_id(1)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ if HEAD_FIRST:
+ p_beta = tl.make_block_ptr(beta + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ else:
+ p_beta = tl.make_block_ptr(beta + bos * H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ b_beta = tl.load(p_beta, boundary_check=(0,))
+
+ b_A = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_kb = (b_k * b_beta[:, None]).to(b_k.dtype)
+ b_A += tl.dot(b_kb, tl.trans(b_k), allow_tf32=False)
+
+ b_A = -tl.where(tl.arange(0, BT)[:, None] > tl.arange(0, BT)[None, :], b_A, 0)
+ for i in range(1, BT):
+ mask = tl.arange(0, BT) == i
+ b_a = tl.sum(tl.where(mask[:, None], b_A, 0), 0)
+ b_a = b_a + tl.sum(b_a[:, None] * b_A, 0) * (tl.arange(0, BT) < i)
+ b_A = tl.where(mask[:, None], b_a, b_A)
+ b_A += tl.arange(0, BT)[:, None] == tl.arange(0, BT)[None, :]
+
+ if HEAD_FIRST:
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ else:
+ p_A = tl.make_block_ptr(A + (bos*H + i_h) * BT, (T, BT), (H*BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ tl.store(p_A, (b_A).to(p_A.dtype.element_ty), boundary_check=(0, 1))
+ b_A = b_A.to(k.dtype.element_ty)
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16)
+ ],
+ key=["BK"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def fwd_prepare_wy_repr_kernel_chunk64(
+ k,
+ beta,
+ A,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BC: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_t, i_bh = tl.program_id(0), tl.program_id(1)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ b_A = tl.zeros([BC, BC], dtype=tl.float32)
+ b_A2 = tl.zeros([BC, BC], dtype=tl.float32)
+ b_A3 = tl.zeros([BC, BC], dtype=tl.float32)
+
+ if HEAD_FIRST:
+ p_beta = tl.make_block_ptr(beta + i_bh * T, (T,), (1,), (i_t * BT,), (BC,), (0,))
+ else:
+ p_beta = tl.make_block_ptr(beta + bos * H + i_h, (T,), (H,), (i_t * BT,), (BC,), (0,))
+ b_beta = tl.load(p_beta, boundary_check=(0,))
+
+ if HEAD_FIRST:
+ p_beta2 = tl.make_block_ptr(beta + i_bh * T, (T,), (1,), (i_t * BT + BC,), (BC,), (0,))
+ else:
+ p_beta2 = tl.make_block_ptr(beta + bos * H + i_h, (T,), (H,), (i_t * BT + BC,), (BC,), (0,))
+ b_beta2 = tl.load(p_beta2, boundary_check=(0,))
+
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BC, BK), (1, 0))
+ p_k2 = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT + BC, i_k * BK), (BC, BK), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BC, BK), (1, 0))
+ p_k2 = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT + BC, i_k * BK), (BC, BK), (1, 0))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_kb = (b_k * b_beta[:, None]).to(b_k.dtype)
+ b_k2 = tl.load(p_k2, boundary_check=(0, 1))
+ b_kb2 = (b_k2 * b_beta2[:, None]).to(b_k2.dtype)
+ b_A += tl.dot(b_kb, tl.trans(b_k), allow_tf32=False)
+ b_A2 += tl.dot(b_kb2, tl.trans(b_k2), allow_tf32=False)
+ b_A3 += tl.dot(b_kb2, tl.trans(b_k), allow_tf32=False)
+
+ b_A = -tl.where(tl.arange(0, BC)[:, None] > tl.arange(0, BC)[None, :], b_A, 0)
+ b_A2 = -tl.where(tl.arange(0, BC)[:, None] > tl.arange(0, BC)[None, :], b_A2, 0)
+ for i in range(1, BC):
+ mask = tl.arange(0, BC) == i
+ b_a = tl.sum(tl.where(mask[:, None], b_A, 0), 0)
+ b_a2 = tl.sum(tl.where(mask[:, None], b_A2, 0), 0)
+ b_a = b_a + tl.sum(b_a[:, None] * b_A, 0) * (tl.arange(0, BC) < i)
+ b_a2 = b_a2 + tl.sum(b_a2[:, None] * b_A2, 0) * (tl.arange(0, BC) < i)
+ b_A = tl.where(mask[:, None], b_a, b_A)
+ b_A2 = tl.where(mask[:, None], b_a2, b_A2)
+
+ # blockwise computation of lower triangular matrix's inverse
+ # i.e., [A11, 0; A21, A22]^-1 = [A11^-1, 0; -A22^-1 A21 A11^-1, A22^-1]
+ b_A += tl.arange(0, BC)[:, None] == tl.arange(0, BC)[None, :]
+ b_A2 += tl.arange(0, BC)[:, None] == tl.arange(0, BC)[None, :]
+ b_A3 = -tl.dot(tl.dot(b_A2, b_A3, allow_tf32=False), b_A, allow_tf32=False)
+
+ if HEAD_FIRST:
+ p_A1 = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BC, BC), (1, 0))
+ p_A2 = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + BC, BC), (BC, BC), (1, 0))
+ p_A3 = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + BC, 0), (BC, BC), (1, 0))
+ p_A4 = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, BC), (BC, BC), (1, 0))
+ else:
+ p_A1 = tl.make_block_ptr(A + (bos*H + i_h) * BT, (T, BT), (H*BT, 1), (i_t * BT, 0), (BC, BC), (1, 0))
+ p_A2 = tl.make_block_ptr(A + (bos*H + i_h) * BT, (T, BT), (H*BT, 1), (i_t * BT + BC, BC), (BC, BC), (1, 0))
+ p_A3 = tl.make_block_ptr(A + (bos*H + i_h) * BT, (T, BT), (H*BT, 1), (i_t * BT + BC, 0), (BC, BC), (1, 0))
+ p_A4 = tl.make_block_ptr(A + (bos*H + i_h) * BT, (T, BT), (H*BT, 1), (i_t * BT, BC), (BC, BC), (1, 0))
+ tl.store(p_A1, b_A.to(p_A1.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_A2, b_A2.to(p_A2.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_A3, b_A3.to(p_A3.dtype.element_ty), boundary_check=(0, 1))
+ # causal mask
+ tl.store(p_A4, tl.zeros([BC, BC], dtype=tl.float32).to(p_A4.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8)
+ ],
+ key=["BT", "BK", "BV"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def fwd_recompute_w_u_kernel(
+ k,
+ v,
+ beta,
+ w,
+ u,
+ A,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_t, i_bh = tl.program_id(0), tl.program_id(1)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ if HEAD_FIRST:
+ p_beta = tl.make_block_ptr(beta + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ else:
+ p_beta = tl.make_block_ptr(beta + bos * H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ p_A = tl.make_block_ptr(A + (bos*H + i_h) * BT, (T, BT), (H*BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ b_beta = tl.load(p_beta, boundary_check=(0,))
+ b_A = tl.load(p_A, boundary_check=(0, 1))
+
+ for i_v in range(tl.cdiv(V, BV)):
+ if HEAD_FIRST:
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_u = tl.make_block_ptr(u + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ else:
+ p_v = tl.make_block_ptr(v + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_u = tl.make_block_ptr(u + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_vb = (b_v * b_beta[:, None]).to(b_v.dtype)
+ b_u = tl.dot(b_A.to(b_vb.dtype), b_vb, allow_tf32=False)
+ tl.store(p_u, (b_u).to(p_u.dtype.element_ty), boundary_check=(0, 1))
+
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_w = tl.make_block_ptr(w + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_w = tl.make_block_ptr(w + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_kb = (b_k * b_beta[:, None]).to(b_k.dtype)
+ b_w = tl.dot(b_A.to(b_kb.dtype), b_kb, allow_tf32=False)
+ tl.store(p_w, b_w.to(p_w.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8)
+ ],
+ key=["BT", "BK"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def fwd_recompute_w_kernel(
+ k,
+ beta,
+ w,
+ A,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_t, i_bh = tl.program_id(0), tl.program_id(1)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ if HEAD_FIRST:
+ p_beta = tl.make_block_ptr(beta + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ else:
+ p_beta = tl.make_block_ptr(beta + bos * H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ p_A = tl.make_block_ptr(A + (bos*H + i_h) * BT, (T, BT), (H*BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ b_beta = tl.load(p_beta, boundary_check=(0,))
+ b_A = tl.load(p_A, boundary_check=(0, 1)).to(k.dtype.element_ty)
+
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_w = tl.make_block_ptr(w + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_w = tl.make_block_ptr(w + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_kb = (b_k * b_beta[:, None]).to(b_k.dtype)
+ b_w = tl.dot(b_A, b_kb, allow_tf32=False)
+
+ tl.store(p_w, b_w.to(p_w.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16)
+ ],
+ key=["BT", "BK", "BV"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def bwd_prepare_wy_repr_kernel(
+ k,
+ v,
+ beta,
+ A,
+ dw,
+ du,
+ dk,
+ dv,
+ dbeta,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+    i_t, i_bh = tl.program_id(0), tl.program_id(1)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ if HEAD_FIRST:
+ p_beta = tl.make_block_ptr(beta + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (BT, T), (1, BT), (0, i_t * BT), (BT, BT), (0, 1))
+ else:
+ p_beta = tl.make_block_ptr(beta + bos * H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ p_A = tl.make_block_ptr(A + (bos*H + i_h) * BT, (BT, T), (1, H*BT), (0, i_t * BT), (BT, BT), (0, 1))
+
+ b_beta = tl.load(p_beta, boundary_check=(0,))
+ b_A = tl.load(p_A, boundary_check=(0, 1))
+
+ b_dbeta = tl.zeros([BT], dtype=tl.float32)
+ b_dA = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_v in range(tl.cdiv(V, BV)):
+ if HEAD_FIRST:
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_du = tl.make_block_ptr(du + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ else:
+ p_v = tl.make_block_ptr(v + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_du = tl.make_block_ptr(du + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_v_beta = (b_v * b_beta[:, None]).to(b_v.dtype)
+ b_du = tl.load(p_du, boundary_check=(0, 1))
+ b_dA += tl.dot(b_du, tl.trans(b_v_beta), allow_tf32=False)
+ b_dv_beta = tl.dot(b_A, b_du, allow_tf32=False)
+ b_dv = b_dv_beta * b_beta[:, None]
+ b_dbeta += tl.sum(b_dv_beta * b_v, 1)
+
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dw = tl.make_block_ptr(dw + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dw = tl.make_block_ptr(dw + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_k_beta = (b_k * b_beta[:, None]).to(b_k.dtype)
+ b_dw = tl.load(p_dw, boundary_check=(0, 1))
+ b_dA += tl.dot(b_dw, tl.trans(b_k_beta), allow_tf32=False)
+ b_dk_beta = tl.dot(b_A, b_dw, allow_tf32=False)
+ b_dk = b_dk_beta * b_beta[:, None]
+ b_dbeta += tl.sum(b_dk_beta * b_k, 1)
+
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+
+ b_dA = tl.where(tl.arange(0, BT)[:, None] > tl.arange(0, BT)[None, :], b_dA, 0)
+ b_dA = tl.dot(b_dA.to(b_A.dtype), b_A)
+ b_dA = tl.dot(b_A, b_dA.to(b_A.dtype))
+ b_dA = tl.where(tl.arange(0, BT)[:, None] > tl.arange(0, BT)[None, :], -b_dA, 0).to(k.dtype.element_ty)
+
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_dk = tl.load(p_dk, boundary_check=(0, 1))
+ b_k_beta = (b_k * b_beta[:, None]).to(b_k.dtype)
+
+ b_dk_beta = tl.dot(b_dA, b_k, allow_tf32=False)
+ b_dbeta += tl.sum(b_dk_beta * b_k, 1)
+ b_dk += tl.dot(tl.trans(b_dA), b_k_beta, allow_tf32=False)
+ b_dk += b_dk_beta * b_beta[:, None]
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+
+ if HEAD_FIRST:
+ p_dbeta = tl.make_block_ptr(dbeta + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ else:
+ p_dbeta = tl.make_block_ptr(dbeta + bos * H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ tl.store(p_dbeta, b_dbeta.to(p_dbeta.dtype.element_ty), boundary_check=(0,))
+
+
+def fwd_prepare_wy_repr(
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ offsets: Optional[torch.LongTensor],
+ indices: Optional[torch.LongTensor],
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K = k.shape
+ else:
+ B, T, H, K = k.shape
+ BT = min(chunk_size, max(triton.next_power_of_2(T), 16))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BC = min(BT, 32)
+ BK = min(triton.next_power_of_2(K), 64)
+
+ A = torch.empty(B, *((H, T) if head_first else (T, H)), BT, device=k.device, dtype=k.dtype)
+ fwd_fn = fwd_prepare_wy_repr_kernel_chunk64 if BT == 64 else fwd_prepare_wy_repr_kernel_chunk32
+ fwd_fn[(NT, B * H)](
+ k=k,
+ beta=beta,
+ A=A,
+ offsets=offsets,
+ indices=indices,
+ T=T,
+ H=H,
+ K=K,
+ BT=BT,
+ BK=BK,
+ BC=BC,
+ HEAD_FIRST=head_first
+ )
+ w, u = fwd_recompute_w_u(
+ k=k,
+ v=v,
+ beta=beta,
+ A=A,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ return w, u, A
+
+
+def fwd_prepare_T(
+ k: torch.Tensor,
+ beta: torch.Tensor,
+ offsets: Optional[torch.LongTensor],
+ indices: Optional[torch.LongTensor],
+ head_first: bool,
+ chunk_size: int
+) -> torch.Tensor:
+ if head_first:
+ B, H, T, K = k.shape
+ else:
+ B, T, H, K = k.shape
+ BT = min(chunk_size, max(triton.next_power_of_2(T), 16))
+ assert BT in [16, 32, 64]
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BC = min(BT, 32)
+
+ BK = min(triton.next_power_of_2(K), 64)
+ A = torch.empty(B, *((H, T) if head_first else (T, H)), BT, device=k.device, dtype=k.dtype)
+ fwd_fn = fwd_prepare_wy_repr_kernel_chunk64 if BT == 64 else fwd_prepare_wy_repr_kernel_chunk32
+ fwd_fn[(NT, B * H)](
+ k=k,
+ beta=beta,
+ A=A,
+ offsets=offsets,
+ indices=indices,
+ T=T,
+ H=H,
+ K=K,
+ BT=BT,
+ BK=BK,
+ BC=BC,
+ HEAD_FIRST=head_first
+ )
+ return A
+
+
+def fwd_recompute_w_u(
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ A: torch.Tensor,
+ offsets: Optional[torch.LongTensor],
+ indices: Optional[torch.LongTensor],
+ head_first: bool,
+ chunk_size: int
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ BT = min(chunk_size, max(triton.next_power_of_2(T), 16))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(triton.next_power_of_2(K), 64)
+ BV = min(triton.next_power_of_2(V), 64)
+
+ u = torch.empty_like(v)
+ w = torch.empty_like(k)
+ fwd_recompute_w_u_kernel[(NT, B*H)](
+ k,
+ v,
+ beta,
+ w,
+ u,
+ A,
+ offsets=offsets,
+ indices=indices,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ return w, u
+
+
+def fwd_recompute_w(
+ k: torch.Tensor,
+ beta: torch.Tensor,
+ A: torch.Tensor,
+ offsets: Optional[torch.LongTensor],
+ indices: Optional[torch.LongTensor],
+ head_first: bool,
+ chunk_size: int
+) -> torch.Tensor:
+ if head_first:
+ B, H, T, K = k.shape
+ else:
+ B, T, H, K = k.shape
+ BT = min(chunk_size, max(triton.next_power_of_2(T), 16))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(triton.next_power_of_2(K), 64)
+
+ w = torch.empty_like(k)
+ fwd_recompute_w_kernel[(NT, B*H)](
+ k,
+ beta,
+ w,
+ A,
+ T=T,
+ H=H,
+ K=K,
+ BT=BT,
+ BK=BK,
+ HEAD_FIRST=head_first
+ )
+ return w
+
+
+def bwd_prepare_wy_repr(
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ A: torch.Tensor,
+ dw: torch.Tensor,
+ du: torch.Tensor,
+ offsets: Optional[torch.LongTensor],
+ indices: Optional[torch.LongTensor],
+ head_first: bool,
+ chunk_size: int
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ BT = min(chunk_size, max(triton.next_power_of_2(T), 16))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(triton.next_power_of_2(K), 64)
+ BV = min(triton.next_power_of_2(V), 64)
+
+ dk = torch.empty_like(k)
+ dv = torch.empty_like(v)
+ dbeta = torch.empty_like(beta)
+ bwd_prepare_wy_repr_kernel[(NT, B * H)](
+ k,
+ v,
+ beta,
+ A,
+ dw,
+ du,
+ dk,
+ dv,
+ dbeta,
+ offsets=offsets,
+ indices=indices,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ return dk, dv, dbeta
+
+
+class WYRepresentationPreparation(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(
+ ctx,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ beta: torch.Tensor,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+ ):
+ assert chunk_size in [16, 32, 64]
+ # 2-d indices denoting the offsets of chunks in each sequence
+ # for example, if the passed `offsets` is [0, 100, 356] and `chunk_size` is 64,
+ # then there are 2 and 4 chunks in the 1st and 2nd sequences respectively, and `indices` will be
+ # [[0, 0], [0, 1], [1, 0], [1, 1], [1, 2], [1, 3]]
+ indices = None
+ if offsets is not None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], chunk_size).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+
+ w, u, A = fwd_prepare_wy_repr(
+ k=k,
+ v=v,
+ beta=beta,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ ctx.offsets = offsets
+ ctx.indices = indices
+ ctx.head_first = head_first
+ ctx.chunk_size = chunk_size
+ ctx.save_for_backward(k, v, beta, A)
+ return w, u
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(
+ ctx,
+ dw: torch.Tensor,
+ du: torch.Tensor
+ ):
+ k, v, beta, A = ctx.saved_tensors
+ dk, dv, dbeta = bwd_prepare_wy_repr(
+ k=k,
+ v=v,
+ beta=beta,
+ A=A,
+ dw=dw,
+ du=du,
+ offsets=ctx.offsets,
+ indices=ctx.indices,
+ head_first=ctx.head_first,
+ chunk_size=ctx.chunk_size
+ )
+ return dk, dv, dbeta, None, None, None
+
+
+prepare_wy_repr = WYRepresentationPreparation.apply
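+
+
+# A minimal illustrative sketch (not used by the kernels above): how per-sequence `offsets`
+# are converted into the 2-d chunk `indices` consumed by the variable-length code path,
+# mirroring the construction in the autograd `forward` above. The helper name is hypothetical.
+def _offsets_to_chunk_indices(offsets: torch.Tensor, chunk_size: int) -> torch.Tensor:
+    # number of chunks per sequence, e.g. offsets=[0, 100, 356], chunk_size=64 -> [2, 4]
+    counts = triton.cdiv(offsets[1:] - offsets[:-1], chunk_size).tolist()
+    # within-sequence chunk ids: [0, 1, 0, 1, 2, 3]
+    indices = torch.cat([torch.arange(n) for n in counts])
+    # pair each chunk with its sequence id: [[0, 0], [0, 1], [1, 0], [1, 1], [1, 2], [1, 3]]
+    return torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)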
+
+
+def naive(k, v, beta, chunk_size):
+ l_org = k.shape[2]
+ l_new = triton.next_power_of_2(l_org)
+ # pad k, v, beta
+ k = torch.cat([k, torch.zeros_like(k)[:, :, :l_new-l_org, :]], dim=2)
+ v = torch.cat([v, torch.zeros_like(v)[:, :, :l_new-l_org, :]], dim=2)
+ beta = torch.cat([beta, torch.zeros_like(beta)[:, :, :l_new-l_org]], dim=2)
+
+ k, v = map(lambda x: rearrange(x, 'b h (n c) d -> b h n c d', c=chunk_size), (k, v))
+ # k = torch.nn.functional.normalize(k, dim=-1, p=2)
+ beta = rearrange(beta, 'b h (n c) -> b h n c', c=chunk_size)
+ mask = torch.triu(torch.ones(chunk_size, chunk_size, dtype=torch.bool, device=k.device), diagonal=0)
+ k_beta = k * beta[..., None]
+ v = v * beta[..., None]
+ attn = (k @ k.transpose(-1, -2)).masked_fill_(mask, 0)
+ attn = attn * beta[..., None]
+ x = attn @ v
+
+ o = torch.zeros_like(k)
+ o2 = torch.zeros_like(v)
+
+ o[..., 0, :] = k_beta[..., 0, :].clone()
+ o2[..., 0, :] = x[..., 0, :].clone()
+ for i in range(1, chunk_size):
+ o_i = (o[..., :i, :]).clone()
+ o[..., i, :] = -(attn[..., i, :i, None] * o_i).sum(3) + k_beta[..., i, :]
+ o2_i = (o2[..., :i, :]).clone()
+ o2[..., i, :] = -(attn[..., i, :i, None] * o2_i).sum(3) + x[..., i, :]
+ return map(lambda x: rearrange(x, 'b h n c d -> b h (n c) d')[:, :, :l_org], (o, v-o2))
+
+
+if __name__ == "__main__":
+ torch.set_default_dtype(torch.bfloat16)
+ seq_len = 1024
+ b = 4
+ h = 4
+ k = torch.nn.functional.normalize(torch.randn(b, h, seq_len, 128), dim=-1, p=2)
+ v = torch.randn(b, h, seq_len, 128)
+ beta = torch.rand(b, h, seq_len).sigmoid()
+ # beta = torch.ones(b, h, seq_len)
+ require_grad = True
+
+ k, v, beta = map(lambda x: x.cuda().requires_grad_(require_grad), (k, v, beta))
+ do = torch.rand_like(k)
+ do2 = torch.rand_like(v)
+
+ o1, o2 = naive(k.clone(), v.clone(), beta.clone(), 64)
+ if require_grad:
+ o1.backward(do, retain_graph=True)
+ o2.backward(do2, retain_graph=True)
+ k_grad2, v_grad2, beta_grad2 = k.grad, v.grad, beta.grad
+ k.grad = v.grad = beta.grad = None
+    o3, o4 = prepare_wy_repr(k.clone(), v.clone(), beta.clone(), None, True, 64)  # offsets=None, head_first=True, chunk_size=64
+ print((o1-o3).abs().max())
+ print((o2-o4).abs().max())
+
+ if require_grad:
+ o3.backward(do, retain_graph=True)
+ o4.backward(do2, retain_graph=True)
+ k_grad, v_grad, beta_grad = k.grad, v.grad, beta.grad
+ print((k_grad2-k_grad).abs().max())
+ print((v_grad2-v_grad).abs().max())
+ print((beta_grad2-beta_grad).abs().max())
diff --git a/fla/ops/generalized_delta_rule/__init__.py b/fla/ops/generalized_delta_rule/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/fla/ops/generalized_delta_rule/iplr/__init__.py b/fla/ops/generalized_delta_rule/iplr/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/fla/ops/generalized_delta_rule/iplr/fused_recurrent.py b/fla/ops/generalized_delta_rule/iplr/fused_recurrent.py
new file mode 100644
index 0000000000000000000000000000000000000000..1cbffdbcf586e286cf01bb7988530fa64718ef67
--- /dev/null
+++ b/fla/ops/generalized_delta_rule/iplr/fused_recurrent.py
@@ -0,0 +1,349 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous
+
+
+@triton.jit
+def fused_recurrent_fwd_kernel(
+    # B: batch_size, H: n_heads, T: seq_len (denoted L in the shape comments below)
+    # K: head dim of queries/keys/alphas/betas, V: head dim of values
+    q, # query [B, H, L, K]
+    k, # key [B, H, L, K]
+    v, # value [B, H, L, V]
+    alpha, # alpha [B, H, L, K]
+    beta, # beta [B, H, L, K]
+ o, # output [B, H, L, V]
+ ha, # tmp variable [B, H, L, V] for storing intermediate results of (h * alpha[None, :]).sum(0)
+ h0,
+ ht, # final hidden state [B, H, K, V]
+ s_k_h, # stride size: L * K
+ s_v_h, # stride size: L * V
+ scale, # K ** -0.5
+ B, # batch size
+ H, # n_heads
+ T, # seq_len
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ USE_INITIAL_STATE: tl.constexpr, # whether to use initial state
+ STORE_FINAL_STATE: tl.constexpr, # whether to store final state
+):
+
+ # indices
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV)
+ p_alpha = alpha + i_bh * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_beta = beta + i_bh * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_o = o + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV)
+ p_ha = ha + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV)
+
+ mask_bk = (i_k * BK + tl.arange(0, BK)) < K
+ mask_bv = (i_v * BV + tl.arange(0, BV)) < V
+ mask_kv = mask_bk[None, :] & mask_bv[:, None]
+
+ h = tl.zeros([BV, BK], dtype=tl.float32)
+
+ if USE_INITIAL_STATE:
+ p_h0 = h0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ h += tl.load(p_h0, mask=mask_kv, other=0).to(tl.float32)
+
+ for _ in range(0, T):
+ b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32)
+ b_q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale
+ b_alpha = tl.load(p_alpha, mask=mask_bk, other=0).to(tl.float32)
+ b_beta = tl.load(p_beta, mask=mask_bk, other=0).to(tl.float32)
+ # to store
+ tmp = tl.sum(h * b_alpha[None, :], axis=1)
+ h += (tmp[:, None] * b_beta[None, :] + b_k[None, :] * b_v[:, None])
+ _o = h * b_q[None, :]
+ _o = tl.sum(_o, axis=1)
+ tl.store(p_o, _o.to(p_o.dtype.element_ty), mask=mask_bv)
+ tl.store(p_ha, tmp.to(p_ha.dtype.element_ty), mask=mask_bv)
+ p_q += K
+ p_k += K
+ p_o += V
+ p_v += V
+ p_ha += V
+ p_alpha += K
+ p_beta += K
+
+ if STORE_FINAL_STATE:
+ p_ht = ht + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ tl.store(p_ht, h.to(p_ht.dtype.element_ty), mask=mask_kv)
+
+
+# Similar to Algorithm 1 of https://arxiv.org/abs/2006.16236
+@triton.jit
+def fused_recurrent_bwd_kernel(
+    # B: batch_size, H: n_heads, T: seq_len (denoted L in the shape comments below)
+    # K: head dim of queries/keys, V: head dim of values
+    # NV: number of splits along the V dimension. NK: number of splits along the K dimension
+    q, # query [B, H, L, K]
+    k, # key [B, H, L, K]
+ v, # value [B, H, L, V]
+ alpha, # alpha [B, H, L, K]
+ beta, # beta [B, H, L, K]
+ ha, # ha [B, H, L, V]
+ dht, # gradient of final state [B, H, K, V]
+ dh0, # gradient of initial state [B, H, K, V]
+ do, # gradient of output [B, H, L, V]
+ dq, # gradient of query [NV, B, H, L, K]
+ dk, # gradient of key [NV, B, H, L, K]
+ dv, # gradient of value [NK, B, H, L, V]
+ dalpha, # gradient of alpha [NV, B, H, L, K]
+ dbeta, # gradient of beta [NV, B, H, L, K]
+ dha, # gradient of ha [NK, B, H, L, V]
+ h0, # initial state [B, H, K, V]
+ s_k_h, # stride size: L * K
+ s_v_h, # stride size: L * V
+    NK, # number of splits along the K dimension
+ scale, # K ** -0.5
+ B, # batch_size
+ H, # n_heads
+ T, # seq_len
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ USE_INITIAL_STATE: tl.constexpr, # whether to use initial state h0
+ USE_DH0: tl.constexpr, # whether to use dh0
+ USE_DHT: tl.constexpr, # whether to use dht
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ mask_bk = i_k * BK + tl.arange(0, BK) < K
+ mask_bv = i_v * BV + tl.arange(0, BV) < V
+
+ p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (T - 1) * K
+ p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (T - 1) * K
+ p_do = do + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + (T - 1) * V
+ p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + (T - 1) * V
+ p_ha = ha + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + (T - 1) * V
+ p_alpha = alpha + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (T - 1) * K
+ p_beta = beta + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (T - 1) * K
+
+ p_dk = dk + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + (T - 1) * K
+ p_dbeta = dbeta + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + (T - 1) * K
+ p_dv = dv + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV) + (T - 1) * V
+ p_dha = dha + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV) + (T - 1) * V
+ d_h = tl.zeros([BK, BV], dtype=tl.float32)
+
+ if USE_DHT:
+ p_ht = dht + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ d_h += tl.load(p_ht, mask=mask_bk[:, None] & mask_bv[None, :], other=0).to(tl.float32)
+
+ for _ in range(T):
+ b_q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale
+ b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32)
+ b_do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32)
+ b_beta = tl.load(p_beta, mask=mask_bk, other=0).to(tl.float32)
+ b_alpha = tl.load(p_alpha, mask=mask_bk, other=0).to(tl.float32)
+ b_ha = tl.load(p_ha, mask=mask_bv, other=0).to(tl.float32)
+
+ d_h += b_q[:, None] * b_do[None, :]
+ d_k = tl.sum(d_h * b_v[None, :], axis=1)
+ d_v = tl.sum(d_h * b_k[:, None], axis=0)
+ tl.store(p_dk, d_k.to(p_dk.dtype.element_ty), mask=mask_bk)
+ tl.store(p_dv, d_v.to(p_dv.dtype.element_ty), mask=mask_bv)
+
+ b_dha = tl.sum(d_h * b_beta[:, None], axis=0)
+ tl.store(p_dha, b_dha.to(p_dha.dtype.element_ty), mask=mask_bv)
+ b_dbeta = tl.sum(d_h * b_ha[None, :], axis=1)
+ tl.store(p_dbeta, b_dbeta.to(p_dbeta.dtype.element_ty), mask=mask_bk)
+
+ d_h += b_dha[None, :] * b_alpha[:, None]
+ p_do -= V
+ p_q -= K
+ p_k -= K
+ p_v -= V
+ p_dk -= K
+ p_dv -= V
+ p_beta -= K
+ p_dbeta -= K
+ p_alpha -= K
+ p_dha -= V
+ p_ha -= V
+
+ if USE_DH0:
+ p_dh0 = dh0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ tl.store(p_dh0, d_h.to(p_dh0.dtype.element_ty), mask=mask_bk[:, None] & mask_bv[None, :])
+
+ tl.debug_barrier()
+
+ h = tl.zeros([BK, BV], dtype=tl.float32)
+ p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_beta = beta + i_bh * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV)
+ p_ha = ha + i_bh * s_v_h + i_v * BV + tl.arange(0, BV)
+ p_do = do + i_bh * s_v_h + i_v * BV + tl.arange(0, BV)
+ p_dq = dq + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_dv = dv + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV)
+ p_dha = dha + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV)
+ p_alpha = alpha + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_dalpha = dalpha + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK)
+
+ if USE_INITIAL_STATE:
+ mask_kv = mask_bk[:, None] & mask_bv[None, :]
+ p_h0 = h0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ h += tl.load(p_h0, mask=mask_kv, other=0).to(tl.float32)
+
+ for i in range(0, T):
+ d_ha = tl.load(p_dha, mask=mask_bv, other=0).to(tl.float32)
+ d_alpha = tl.sum(d_ha[None, :] * h, axis=1)
+ tl.store(p_dalpha, d_alpha.to(p_dalpha.dtype.element_ty), mask=mask_bk)
+ b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32)
+ b_do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32)
+ b_beta = tl.load(p_beta, mask=mask_bk, other=0).to(tl.float32)
+ b_ha = tl.load(p_ha, mask=mask_bv, other=0).to(tl.float32)
+ h += b_k[:, None] * b_v[None, :] + b_beta[:, None] * b_ha[None, :]
+ _d_q = h * b_do[None, :]
+ d_q = tl.sum(_d_q, axis=1) * scale
+ tl.store(p_dq, d_q.to(p_dq.dtype.element_ty), mask=mask_bk)
+
+ p_k += K
+ p_do += V
+ p_v += V
+ p_dk += K
+ p_dalpha += K
+ p_dha += V
+ p_ha += V
+ p_dq += K
+ p_beta += K
+
+
+class FusedRecurrentFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(ctx, q, k, v, alpha, beta, scale=None, initial_state=None, output_final_state=False):
+ B, H, T, K, V = *q.shape, v.shape[-1]
+ BK, BV = triton.next_power_of_2(K), min(triton.next_power_of_2(V), 8)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_stages = 1
+ num_warps = 1
+ assert NK == 1, "NK > 1 is not supported yet"
+ o = q.new_empty(NK, B, H, T, V)
+
+ if output_final_state:
+ final_state = q.new_empty(B, H, K, V, dtype=torch.float32)
+ else:
+ final_state = None
+
+ ha = torch.empty_like(v, dtype=torch.float32)
+
+ grid = (NV, NK, B * H)
+ fused_recurrent_fwd_kernel[grid](
+ q, k, v, alpha, beta, o, ha, initial_state, final_state,
+ q.stride(1),
+ v.stride(1),
+ scale,
+ B=B, H=H, T=T, K=K, V=V,
+ BK=BK, BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ STORE_FINAL_STATE=final_state is not None,
+ num_warps=num_warps,
+ num_stages=num_stages,
+ )
+ o = o.squeeze(0)
+ ctx.save_for_backward(q, k, v, alpha, beta, ha, initial_state)
+ ctx.scale = scale
+ return o, final_state
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, do, dht):
+ q, k, v, alpha, beta, ha, initial_state = ctx.saved_tensors
+ B, H, T, K, V = *q.shape, v.shape[-1]
+ scale = ctx.scale
+ BK, BV = triton.next_power_of_2(K), min(triton.next_power_of_2(V), 32)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ assert NK == 1, "NK > 1 is not supported yet"
+ num_stages = 1
+ num_warps = 2
+
+ dq = q.new_empty(NV, B, H, T, K)
+ dk = k.new_empty(NV, B, H, T, K)
+ dalpha = alpha.new_empty(NV, B, H, T, K)
+ dbeta = beta.new_empty(NV, B, H, T, K)
+ dv = v.new_empty(NK, B, H, T, V)
+ dha = ha.new_empty(NK, B, H, T, V)
+
+ grid = (NV, NK, B * H)
+
+ if initial_state is not None and initial_state.requires_grad:
+ dh0 = torch.empty_like(initial_state, dtype=torch.float32)
+ else:
+ dh0 = None
+
+ fused_recurrent_bwd_kernel[grid](
+ q, k, v, alpha, beta, ha, dht, dh0, do, dq, dk, dv, dalpha, dbeta, dha, initial_state,
+ q.stride(1),
+ v.stride(1),
+ NK, scale,
+ B=B, H=H, T=T, K=K, V=V,
+ BK=BK, BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ USE_DH0=dh0 is not None,
+ USE_DHT=dht is not None,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dv = dv.sum(0)
+ dalpha = dalpha.sum(0)
+ dbeta = dbeta.sum(0)
+ return dq.to(q), dk.to(k), dv.to(v), dalpha.to(alpha), dbeta.to(beta), None, dh0, None
+
+
+def fused_recurrent_iplr(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ alpha: torch.Tensor,
+ beta: torch.Tensor,
+    scale: Optional[float] = None,
+    initial_state: Optional[torch.Tensor] = None,
+    output_final_state: bool = False
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+    This function computes the recurrence S_t = S_{t-1} @ (I + alpha_t beta_t^T) + v_t k_t^T in a token-by-token (recurrent) manner.
+    Since the transition matrix is identity-plus-low-rank, we call this the Identity-Plus-Low-Rank (IPLR) recurrence.
+
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]`
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]`
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]`
+ alpha (torch.Tensor):
+ alphas of shape `[B, H, T, K]`
+ beta (torch.Tensor):
+ betas of shape `[B, H, T, K]`
+        scale (Optional[float]):
+            Scale factor for the attention scores.
+            If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[B, H, K, V]`. Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[B, H, K, V]`. Default: `False`.
+ """
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+ else:
+ assert scale > 0, "scale must be positive"
+ o, final_state = FusedRecurrentFunction.apply(q, k, v, alpha, beta, scale, initial_state, output_final_state)
+ return o, final_state
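+
+
+if __name__ == '__main__':
+    # A minimal usage sketch (illustrative only; assumes a CUDA device is available).
+    # Shapes follow the docstring of `fused_recurrent_iplr` above; all names below are local to this demo.
+    B, H, T, K, V = 2, 4, 128, 64, 64
+    q = torch.randn(B, H, T, K, device='cuda')
+    k = torch.randn(B, H, T, K, device='cuda')
+    v = torch.randn(B, H, T, V, device='cuda')
+    # alpha/beta parameterize the low-rank part of the transition (I + alpha_t beta_t^T);
+    # negative alphas mirror the convention used in the naive reference test.
+    alpha = -torch.randn(B, H, T, K, device='cuda').softmax(-1)
+    beta = torch.randn(B, H, T, K, device='cuda').softmax(-1)
+    o, ht = fused_recurrent_iplr(q, k, v, alpha, beta, output_final_state=True)
+    print(o.shape, ht.shape)  # (B, H, T, V) and (B, H, K, V)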
diff --git a/fla/ops/generalized_delta_rule/iplr/naive.py b/fla/ops/generalized_delta_rule/iplr/naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..1123d4e2a719c49d4bc07384dcf51bad23094464
--- /dev/null
+++ b/fla/ops/generalized_delta_rule/iplr/naive.py
@@ -0,0 +1,100 @@
+# -*- coding: utf-8 -*-
+
+import torch
+from einops import rearrange
+
+
+# S_t = S_{t-1} @ (I + alpha_t beta_t^T) + v_t k_t^T
+# q, k, alpha, beta [B, H, L, D_K]
+# v [B, H, L, D_V]
+def iplr_recurrence(q, k, v, alpha, beta, initial_state=None, output_final_state=True):
+ orig_dtype = q.dtype
+ b, h, l, d_k = q.shape
+    q, k, v, alpha, beta = map(lambda x: x.float(), [q, k, v, alpha, beta])
+ d_v = v.shape[-1]
+ o = torch.zeros_like(v)
+ S = torch.zeros(b, h, d_k, d_v).to(v)
+ q = q * (d_k ** -0.5)
+
+ if initial_state is not None:
+ S += initial_state
+
+ for i in range(l):
+ _k = k[:, :, i]
+ _q = q[:, :, i]
+ _v = v[:, :, i]
+ _alpha = alpha[:, :, i]
+ _beta = beta[:, :, i]
+ _kv = _k[..., None] * _v[..., None, :] + (S.clone() * _alpha[..., None]).sum(-2, keepdim=True) * _beta[..., None]
+ S = S + _kv
+ o[:, :, i] = torch.einsum('bhd,bhdm->bhm', _q, S)
+ S = None if output_final_state is False else S
+ return o.to(orig_dtype), S
+
+
+def iplr_chunkwise(q, k, v, alpha, beta, initial_state=None, output_final_state=True, chunk_size=32):
+ b, h, l, d_k = q.shape
+ d_v = v.shape[-1]
+ q = q * (d_k ** -0.5)
+    assert l % chunk_size == 0
+
+ # note that diagonal is masked.
+ mask = torch.triu(torch.ones(chunk_size, chunk_size, dtype=torch.bool, device=q.device), diagonal=0)
+ q, k, v, alpha, beta = map(lambda x: rearrange(x, 'b h (n c) d -> b h n c d', c=chunk_size), [q, k, v, alpha, beta])
+
+ v2 = (alpha @ k.transpose(-1, -2)).masked_fill_(mask, 0) @ v
+ attn = (alpha @ beta.transpose(-1, -2)).masked_fill(mask, 0)
+ for i in range(1, chunk_size):
+ attn[..., i, :i] = attn[..., i, :i] + (attn[..., i, :, None].clone() * attn[..., :, :i].clone()).sum(-2)
+
+ attn = attn + torch.eye(chunk_size, dtype=torch.float, device=q.device)
+ u = attn @ v2
+ w = attn @ alpha
+    S = k.new_zeros(b, h, d_k, d_v)
+    if initial_state is not None:
+        S = S + initial_state
+ o = torch.zeros_like(v)
+ mask = torch.triu(torch.ones(chunk_size, chunk_size, dtype=torch.bool, device=q.device), diagonal=1)
+ for i in range(0, l // chunk_size):
+ q_i, k_i, v_i, u_i, w_i, beta_i = q[:, :, i], k[:, :, i], v[:, :, i], u[:, :, i], w[:, :, i], beta[:, :, i]
+ o_1 = (q_i @ k_i.transpose(-1, -2)).masked_fill_(mask, 0) @ v_i
+ v2_i = u_i + w_i @ S
+ o_2 = (q_i @ beta_i.transpose(-1, -2)).masked_fill_(mask, 0) @ (v2_i)
+ o_3 = q_i @ S
+ o[:, :, i] = o_1 + o_2 + o_3
+ S = S + k_i.transpose(-1, -2) @ v_i + beta_i.transpose(-1, -2) @ v2_i
+ S = None if output_final_state is False else S
+ return rearrange(o, 'b h n c d -> b h (n c) d'), S
+
+
+if __name__ == '__main__':
+ B = 2
+ H = 4
+ L = 128
+ DK = 128
+ DV = 128
+ q = (torch.randn(B, H, L, DK)).cuda().requires_grad_(True)
+ k = (torch.randn(B, H, L, DK)).cuda().requires_grad_(True)
+ v = (torch.randn(B, H, L, DV)).cuda().requires_grad_(True)
+ alpha = torch.randn(B, H, L, DK).cuda().softmax(-1).requires_grad_(True)
+ beta = torch.randn(B, H, L, DK).cuda().softmax(-1).requires_grad_(True)
+
+ o, s = iplr_recurrence(q, k, v, -alpha, beta)
+ do = torch.randn_like(o).cuda()
+ o.backward(do, retain_graph=True)
+ q_grad, q.grad = q.grad, None
+ k_grad, k.grad = k.grad, None
+ v_grad, v.grad = v.grad, None
+ beta_grad, beta.grad = beta.grad, None
+
+ o2, s2 = iplr_chunkwise(q, k, v, -alpha, beta)
+ o2.backward(do)
+ assert torch.allclose(o, o2, atol=1e-4), breakpoint()
+ assert torch.allclose(s, s2, atol=1e-4), breakpoint()
+ assert torch.allclose(q.grad, q_grad, atol=1e-4), breakpoint()
+ assert torch.allclose(k.grad, k_grad, atol=1e-4), breakpoint()
+ assert torch.allclose(v.grad, v_grad, atol=1e-4), breakpoint()
+ assert torch.allclose(beta.grad, beta_grad, atol=1e-4), breakpoint()
+ print("All passed!")
diff --git a/fla/ops/gla/__init__.py b/fla/ops/gla/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..367c85442a26fe56516716622433f8b6f87afd2c
--- /dev/null
+++ b/fla/ops/gla/__init__.py
@@ -0,0 +1,11 @@
+# -*- coding: utf-8 -*-
+
+from .chunk import chunk_gla
+from .fused_chunk import fused_chunk_gla
+from .fused_recurrent import fused_recurrent_gla
+
+__all__ = [
+ 'chunk_gla',
+ 'fused_chunk_gla',
+ 'fused_recurrent_gla'
+]
diff --git a/fla/ops/gla/chunk.py b/fla/ops/gla/chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..c3aecc20582492188e44ff8d13aa205f629c5de4
--- /dev/null
+++ b/fla/ops/gla/chunk.py
@@ -0,0 +1,1514 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.common.chunk_h import chunk_bwd_dh, chunk_fwd_h
+from fla.ops.utils import chunk_local_cumsum
+from fla.utils import contiguous
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BC", "BK"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gla_fwd_A_kernel_intra_sub_inter(
+ q,
+ k,
+ g,
+ A,
+ offsets,
+ indices,
+ scale,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BK: tl.constexpr,
+ NC: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_t, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ i_i, i_j = i_c // NC, i_c % NC
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ if i_t * BT + i_i * BC >= T:
+ return
+ if i_i <= i_j:
+ return
+
+ b_A = tl.zeros([BC, BC], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ o_k = i_k * BK + tl.arange(0, BK)
+ m_k = o_k < K
+
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_g = tl.make_block_ptr(g + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1))
+ p_gk = tl.make_block_ptr(g + i_bh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + (i_bh * T + i_t * BT + i_i * BC) * K + o_k, BK), BK)
+ else:
+ p_q = tl.make_block_ptr(q + (bos*H+i_h)*K, (T, K), (H*K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_g = tl.make_block_ptr(g + (bos*H+i_h)*K, (T, K), (H*K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + (bos*H+i_h)*K, (K, T), (1, H*K), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1))
+ p_gk = tl.make_block_ptr(g + (bos*H+i_h)*K, (K, T), (1, H*K), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + (bos + i_t * BT + i_i * BC) * H*K + i_h * K + o_k, BK), BK)
+
+ # [BK,]
+ b_gn = tl.load(p_gn, mask=m_k, other=0)
+ # [BC, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+ b_qg = b_q * tl.exp(b_g - b_gn[None, :]) * scale
+ # [BK, BC]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_gk = tl.load(p_gk, boundary_check=(0, 1))
+ b_kg = b_k * tl.exp(b_gn[:, None] - b_gk)
+ # [BC, BC] using tf32 to improve precision here.
+ b_A += tl.dot(b_qg, b_kg)
+
+ if HEAD_FIRST:
+ p_A = tl.make_block_ptr(A + i_bh*T*BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0))
+ else:
+ p_A = tl.make_block_ptr(A + (bos*H + i_h)*BT, (T, BT), (H*BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0))
+ tl.store(p_A, b_A.to(A.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BK", "BT"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gla_fwd_A_kernel_intra_sub_intra(
+ q,
+ k,
+ g,
+ A,
+ offsets,
+ indices,
+ scale,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BK: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_t, i_i, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ i_j = i_i
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ if i_t * BT + i_i * BC >= T:
+ return
+
+ o_i = tl.arange(0, BC)
+ o_k = tl.arange(0, BK)
+ m_k = o_k < K
+ m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T
+ if HEAD_FIRST:
+ o_A = i_bh * T*BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_j * BC
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_i * BC, 0), (BC, BK), (1, 0))
+ p_g = tl.make_block_ptr(g + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_i * BC, 0), (BC, BK), (1, 0))
+ p_k = tl.max_contiguous(tl.multiple_of(k + (i_bh * T + i_t * BT + i_j * BC) * K + o_k, BK), BK)
+ p_gk = tl.max_contiguous(tl.multiple_of(g + (i_bh * T + i_t * BT + i_j * BC) * K + o_k, BK), BK)
+ else:
+ o_A = (bos + i_t * BT + i_i * BC + tl.arange(0, BC)) * H*BT + i_h * BT + i_j * BC
+ p_q = tl.make_block_ptr(q + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT + i_i * BC, 0), (BC, BK), (1, 0))
+ p_g = tl.make_block_ptr(g + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT + i_i * BC, 0), (BC, BK), (1, 0))
+ p_k = tl.max_contiguous(tl.multiple_of(k + (bos + i_t * BT + i_j * BC) * H*K + i_h * K + o_k, BK), BK)
+ p_gk = tl.max_contiguous(tl.multiple_of(g + (bos + i_t * BT + i_j * BC) * H*K + i_h * K + o_k, BK), BK)
+
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+ for j in range(0, min(BC, T - i_t * BT - i_i * BC)):
+ b_k = tl.load(p_k, mask=m_k, other=0).to(tl.float32)
+ b_gk = tl.load(p_gk, mask=m_k, other=0).to(tl.float32)
+ b_A = tl.sum(b_q * b_k[None, :] * tl.exp(b_g - b_gk[None, :]), 1)
+ b_A = tl.where(o_i >= j, b_A * scale, 0.)
+
+ tl.store(A + o_A + j, b_A, mask=m_A)
+ p_k += K if HEAD_FIRST else H*K
+ p_gk += K if HEAD_FIRST else H*K
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BC", "BK"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gla_fwd_A_kernel_intra_sub_intra_split(
+ q,
+ k,
+ g,
+ A,
+ offsets,
+ indices,
+ scale,
+ B: tl.constexpr,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BK: tl.constexpr,
+ NC: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_tc, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ i_t, i_i = i_tc // NC, i_tc % NC
+ i_j = i_i
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ all = T
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+ all = B * T
+
+ if i_t * BT + i_i * BC >= T:
+ return
+
+ o_i = tl.arange(0, BC)
+ o_k = i_k * BK + tl.arange(0, BK)
+ m_k = o_k < K
+ m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T
+
+ if HEAD_FIRST:
+ o_A = (i_k * B*H + i_bh) * T * BC + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BC
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_g = tl.make_block_ptr(g + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_k = tl.max_contiguous(tl.multiple_of(k + (i_bh * T + i_t * BT + i_j * BC) * K + o_k, BK), BK)
+ p_gk = tl.max_contiguous(tl.multiple_of(g + (i_bh * T + i_t * BT + i_j * BC) * K + o_k, BK), BK)
+ else:
+ o_A = (i_k * all + bos + i_t * BT + i_i * BC + tl.arange(0, BC)) * H*BC + i_h * BC
+ p_q = tl.make_block_ptr(q + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_g = tl.make_block_ptr(g + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_k = tl.max_contiguous(tl.multiple_of(k + (bos + i_t * BT + i_j * BC) * H*K + i_h * K + o_k, BK), BK)
+ p_gk = tl.max_contiguous(tl.multiple_of(g + (bos + i_t * BT + i_j * BC) * H*K + i_h * K + o_k, BK), BK)
+
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+ for j in range(0, min(BC, T - i_t * BT - i_i * BC)):
+ b_A = tl.zeros([BC], dtype=tl.float32)
+ b_k = tl.load(p_k, mask=m_k, other=0).to(tl.float32)
+ b_gk = tl.load(p_gk, mask=m_k, other=0).to(tl.float32)
+ b_A += tl.sum(b_q * b_k[None, :] * tl.exp(b_g - b_gk[None, :]), 1)
+ b_A = tl.where(o_i >= j, b_A * scale, 0.)
+ tl.store(A + o_A + j, b_A, mask=m_A)
+ p_k += K if HEAD_FIRST else H*K
+ p_gk += K if HEAD_FIRST else H*K
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BC"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gla_fwd_A_kernel_intra_sub_intra_merge(
+ A,
+ A2,
+ offsets,
+ indices,
+ B: tl.constexpr,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ NK: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_t, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ all = T
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+ all = B * T
+
+ if i_t * BT + i_c * BC >= T:
+ return
+
+ b_A = tl.zeros([BC, BC], dtype=tl.float32)
+ for i_k in range(0, NK):
+ if HEAD_FIRST:
+ p_A = tl.make_block_ptr(A + (i_k*B*H+i_bh)*T*BC, (T, BC), (BC, 1), (i_t*BT + i_c*BC, 0), (BC, BC), (1, 0))
+ else:
+ p_A = tl.make_block_ptr(A + (i_k*all+bos)*H*BC+i_h*BC, (T, BC), (H*BC, 1), (i_t*BT + i_c*BC, 0), (BC, BC), (1, 0))
+ b_A += tl.load(p_A, boundary_check=(0, 1))
+ if HEAD_FIRST:
+ p_A2 = tl.make_block_ptr(A2 + i_bh*T*BT, (T, BT), (BT, 1), (i_t * BT + i_c * BC, i_c * BC), (BC, BC), (1, 0))
+ else:
+ p_A2 = tl.make_block_ptr(A2 + (bos*H+i_h)*BT, (T, BT), (H*BT, 1), (i_t * BT + i_c * BC, i_c * BC), (BC, BC), (1, 0))
+ tl.store(p_A2, b_A.to(A2.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BK", "BV", "BT"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gla_fwd_kernel_o(
+ q,
+ v,
+ g,
+ h,
+ o,
+ A,
+ offsets,
+ indices,
+ scale,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_tg = i_t
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ else:
+ NT = tl.cdiv(T, BT)
+ i_tg = i_b * NT + i_t
+ bos, eos = i_b * T, i_b * T + T
+
+ m_s = tl.arange(0, BT)[:, None] >= tl.arange(0, BT)[None, :]
+
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_g = tl.make_block_ptr(g + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_h = tl.make_block_ptr(h + (i_bh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_g = tl.make_block_ptr(g + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_h = tl.make_block_ptr(h + (i_tg * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BT, BK]
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+ # [BT, BK]
+ b_qg = (b_q * tl.exp(b_g)).to(b_q.dtype)
+ # [BK, BV]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+        # works, but we don't know why (dkw); owing to divine benevolence
+ # [BT, BV]
+ if i_k >= 0:
+ b_o += tl.dot(b_qg, b_h.to(b_qg.dtype))
+ if HEAD_FIRST:
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_A = tl.make_block_ptr(A + i_bh * T*BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ else:
+ p_v = tl.make_block_ptr(v + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_A = tl.make_block_ptr(A + (bos * H + i_h) * BT, (T, BT), (H*BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, BT]
+ b_A = tl.load(p_A, boundary_check=(0, 1))
+ b_A = tl.where(m_s, b_A, 0.).to(b_v.dtype)
+ b_o += tl.dot(b_A, b_v, allow_tf32=False)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BK", "NC", "BT"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gla_bwd_kernel_intra(
+ q,
+ k,
+ g,
+ dA,
+ dq,
+ dk,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BK: tl.constexpr,
+ NC: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ i_t, i_i = i_c // NC, i_c % NC
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ else:
+ bos, eos = i_b * T, i_b * T + T
+ T = eos - bos
+ if i_t * BT + i_i * BC >= T:
+ return
+
+ o_k = i_k * BK + tl.arange(0, BK)
+ m_k = o_k < K
+
+ if HEAD_FIRST:
+ p_g = tl.make_block_ptr(g + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ else:
+ p_g = tl.make_block_ptr(g + (bos*H + i_h) * K, (T, K), (H*K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ # [BC, BK]
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+ b_dq = tl.zeros([BC, BK], dtype=tl.float32)
+ if i_i > 0:
+ if HEAD_FIRST:
+ p_gn = tl.max_contiguous(tl.multiple_of(g + (i_bh * T + i_t * BT + i_i * BC) * K + o_k, BK), BK)
+ else:
+ p_gn = tl.max_contiguous(tl.multiple_of(g + (bos + i_t * BT + i_i * BC) * H*K + i_h*K + o_k, BK), BK)
+ # [BK,]
+ b_gn = tl.load(p_gn, mask=m_k, other=0)
+ for i_j in range(0, i_i):
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_gk = tl.make_block_ptr(g + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_dA = tl.make_block_ptr(dA + i_bh * T*BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k+(bos*H+i_h)*K, (T, K), (H*K, 1), (i_t*BT+i_j*BC, i_k * BK), (BC, BK), (1, 0))
+ p_gk = tl.make_block_ptr(g+(bos*H+i_h)*K, (T, K), (H*K, 1), (i_t*BT+i_j*BC, i_k * BK), (BC, BK), (1, 0))
+ p_dA = tl.make_block_ptr(dA+(bos*H+i_h)*BT, (T, BT), (H*BT, 1), (i_t*BT+i_i*BC, i_j * BC), (BC, BC), (1, 0))
+ # [BC, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_gk = tl.load(p_gk, boundary_check=(0, 1))
+ b_kg = (b_k * tl.exp(b_gn[None, :] - b_gk))
+ # [BC, BC]
+ b_dA = tl.load(p_dA, boundary_check=(0, 1))
+ # [BC, BK]
+ b_dq += tl.dot(b_dA, b_kg)
+ b_dq *= tl.exp(b_g - b_gn[None, :])
+
+ o_i = tl.arange(0, BC)
+ m_dA = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T
+ if HEAD_FIRST:
+ o_dA = i_bh * T*BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_i * BC
+ p_kj = tl.max_contiguous(tl.multiple_of(k + (i_bh * T + i_t * BT + i_i * BC) * K + o_k, BK), BK)
+ p_gkj = tl.max_contiguous(tl.multiple_of(g + (i_bh * T + i_t * BT + i_i * BC) * K + o_k, BK), BK)
+ p_dq = tl.make_block_ptr(dq + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ else:
+ o_dA = bos*H*BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * H*BT + i_h * BT + i_i * BC
+ p_kj = tl.max_contiguous(tl.multiple_of(k + (bos + i_t * BT + i_i * BC) * H*K + i_h * K + o_k, BK), BK)
+ p_gkj = tl.max_contiguous(tl.multiple_of(g + (bos + i_t * BT + i_i * BC) * H*K + i_h * K + o_k, BK), BK)
+ p_dq = tl.make_block_ptr(dq + (bos*H + i_h) * K, (T, K), (H*K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+
+ for j in range(0, min(BC, T - i_t * BT - i_i * BC)):
+ # [BC,]
+ b_dA = tl.load(dA + o_dA + j, mask=m_dA, other=0)
+ # [BK,]
+ b_kj = tl.load(p_kj, mask=m_k, other=0).to(tl.float32)
+ b_gkj = tl.load(p_gkj, mask=m_k, other=0).to(tl.float32)
+ # [BC, BK]
+ m_i = o_i[:, None] >= j
+ # [BC, BK]
+ # (SY 09/17) important to not use bf16 here to have a good precision.
+ b_dq += tl.where(m_i, b_dA[:, None] * b_kj[None, :] * tl.exp(b_g - b_gkj[None, :]), 0.)
+ p_kj += K if HEAD_FIRST else H*K
+ p_gkj += K if HEAD_FIRST else H*K
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+
+ tl.debug_barrier()
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_gk = tl.make_block_ptr(g + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k + (bos*H + i_h) * K, (T, K), (H*K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_gk = tl.make_block_ptr(g + (bos*H + i_h) * K, (T, K), (H*K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+
+ # [BC, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_gk = tl.load(p_gk, boundary_check=(0, 1))
+ b_dk = tl.zeros([BC, BK], dtype=tl.float32)
+
+ NC = min(NC, tl.cdiv(T - i_t * BT, BC))
+ if i_i < NC - 1:
+ if HEAD_FIRST:
+ p_gn = tl.max_contiguous(tl.multiple_of(g + i_bh*T*K + (i_t * BT + i_i * BC + BC - 1)*K + o_k, BK), BK)
+ else:
+ p_gn = tl.max_contiguous(tl.multiple_of(g + bos*H*K + (i_t * BT + i_i * BC + BC - 1)*H*K + i_h*K + o_k, BK), BK)
+ # [BK,]
+ b_gn = tl.load(p_gn, mask=m_k, other=0)
+ for i_j in range(i_i + 1, NC):
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_g = tl.make_block_ptr(g + i_bh * T*K, (T, K), (K, 1), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (BT, T), (1, BT), (i_i*BC, i_t*BT + i_j*BC), (BC, BC), (0, 1))
+ else:
+ p_q = tl.make_block_ptr(q + (bos*H+i_h)*K, (T, K), (H*K, 1), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_g = tl.make_block_ptr(g + (bos*H+i_h)*K, (T, K), (H*K, 1), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_dA = tl.make_block_ptr(dA + (bos*H+i_h)*BT, (BT, T), (1, H*BT), (i_i*BC, i_t*BT + i_j*BC), (BC, BC), (0, 1))
+ # [BC, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+ b_qg = (b_q * tl.exp(b_g - b_gn[None, :]))
+ # [BC, BC]
+ b_dA = tl.load(p_dA, boundary_check=(0, 1))
+ # [BC, BK]
+ # (SY 09/17) important to not use bf16 here to have a good precision.
+ b_dk += tl.dot(b_dA, b_qg)
+ b_dk *= tl.exp(b_gn[None, :] - b_gk)
+ if HEAD_FIRST:
+ o_dA = i_bh * T * BT + (i_t * BT + i_i * BC) * BT + i_i * BC + tl.arange(0, BC)
+ p_qj = tl.max_contiguous(tl.multiple_of(q + (i_bh * T + i_t * BT + i_i * BC) * K + o_k, BK), BK)
+ p_gqj = tl.max_contiguous(tl.multiple_of(g + (i_bh * T + i_t * BT + i_i * BC) * K + o_k, BK), BK)
+ p_dk = tl.make_block_ptr(dk + i_bh*T*K, (T, K), (K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ else:
+ o_dA = bos*H*BT + (i_t * BT + i_i * BC) * H*BT + i_h * BT + i_i * BC + tl.arange(0, BC)
+ p_qj = tl.max_contiguous(tl.multiple_of(q + (bos + i_t * BT + i_i * BC) * H*K + i_h * K + o_k, BK), BK)
+ p_gqj = tl.max_contiguous(tl.multiple_of(g + (bos + i_t * BT + i_i * BC) * H*K + i_h * K + o_k, BK), BK)
+ p_dk = tl.make_block_ptr(dk + (bos*H+i_h)*K, (T, K), (H*K, 1), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ for j in range(0, min(BC, T - i_t * BT - i_i * BC)):
+ # [BC,]
+ b_dA = tl.load(dA + o_dA + j * (1 if HEAD_FIRST else H) * BT)
+ # [BK,]
+ b_qj = tl.load(p_qj, mask=m_k, other=0).to(tl.float32)
+ b_gqj = tl.load(p_gqj, mask=m_k, other=0).to(tl.float32)
+ # [BC, BK]
+ m_i = o_i[:, None] <= j
+ b_dk += tl.where(m_i, b_dA[:, None] * b_qj[None, :] * tl.exp(b_gqj[None, :] - b_gk), 0.)
+ p_qj += K if HEAD_FIRST else H*K
+ p_gqj += K if HEAD_FIRST else H*K
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BV", "BT"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gla_bwd_kernel_dA(
+ v,
+ do,
+ dA,
+ offsets,
+ indices,
+ scale,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BV: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_t, i_bh = tl.program_id(0), tl.program_id(1)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ else:
+ bos, eos = i_b * T, i_b * T + T
+ T = eos - bos
+
+ b_dA = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_v in range(tl.cdiv(V, BV)):
+ if HEAD_FIRST:
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (V, T), (1, V), (i_v * BV, i_t * BT), (BV, BT), (0, 1))
+ else:
+ p_do = tl.make_block_ptr(do + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_v = tl.make_block_ptr(v + (bos*H + i_h) * V, (V, T), (1, H*V), (i_v * BV, i_t * BT), (BV, BT), (0, 1))
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_dA += tl.dot(b_do, b_v)
+ if HEAD_FIRST:
+ p_dA = tl.make_block_ptr(dA + i_bh * T*BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ else:
+ p_dA = tl.make_block_ptr(dA + (bos * H + i_h) * BT, (T, BT), (H*BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ m_s = tl.arange(0, BT)[:, None] >= tl.arange(0, BT)[None, :]
+ b_dA = tl.where(m_s, b_dA * scale, 0.)
+ tl.store(p_dA, b_dA.to(p_dA.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BK", "BV", "BT"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gla_bwd_kernel_dv(
+ k,
+ g,
+ A,
+ do,
+ dh,
+ dv,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_tg = i_t
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ else:
+ NT = tl.cdiv(T, BT)
+ i_tg = i_b * NT + i_t
+ bos, eos = i_b * T, i_b * T + T
+
+ if HEAD_FIRST:
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (BT, T), (1, BT), (0, i_t * BT), (BT, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ else:
+ p_A = tl.make_block_ptr(A + (bos * H + i_h) * BT, (BT, T), (1, H*BT), (0, i_t * BT), (BT, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+
+ b_A = tl.load(p_A, boundary_check=(0, 1))
+ b_A = tl.where(tl.arange(0, BT)[:, None] <= tl.arange(0, BT)[None, :], b_A, 0.)
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # (SY 09/17) important to disallow tf32 here to maintain a good precision.
+ b_dv = tl.dot(b_A, b_do.to(b_A.dtype), allow_tf32=False)
+
+ for i_k in range(tl.cdiv(K, BK)):
+ o_k = i_k * BK + tl.arange(0, BK)
+ m_k = o_k < K
+
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_gk = tl.make_block_ptr(g + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + i_bh * T*K + min(i_t * BT + BT, T) * K - K + o_k, BK), BK)
+ p_dh = tl.make_block_ptr(dh + (i_bh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_gk = tl.make_block_ptr(g + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + (bos + min(i_t * BT + BT, T) - 1)*H*K + i_h * K + o_k, BK), BK)
+ p_dh = tl.make_block_ptr(dh + (i_tg * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_gk = tl.load(p_gk, boundary_check=(0, 1))
+ b_gn = tl.exp(tl.load(p_gn, mask=m_k, other=0)[None, :] - b_gk)
+ b_k = (b_k * b_gn).to(b_k.dtype)
+ b_dh = tl.load(p_dh, boundary_check=(0, 1))
+ # [BT, BV]
+ # (SY 09/17) it is ok to have bf16 interchunk gradient contribution here
+ b_dv += tl.dot(b_k, b_dh.to(b_k.dtype))
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BK", "BV", "BT"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gla_bwd_kernel_inter(
+ q,
+ k,
+ v,
+ h,
+ g,
+ do,
+ dh,
+ dq,
+ dk,
+ dq2,
+ dk2,
+ dg,
+ offsets,
+ indices,
+ scale,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_tg = i_t
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ else:
+ NT = tl.cdiv(T, BT)
+ i_tg = i_b * NT + i_t
+ bos, eos = i_b * T, i_b * T + T
+ o_k = i_k * BK + tl.arange(0, BK)
+ m_k = o_k < K
+
+ if HEAD_FIRST:
+ p_gk = tl.make_block_ptr(g + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + i_bh * T*K + (min(T, i_t * BT + BT)-1) * K + o_k, BK), BK)
+ else:
+ p_gk = tl.make_block_ptr(g + (bos*H+i_h)*K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + (bos + min(T, i_t * BT + BT)-1) * H*K + i_h * K + o_k, BK), BK)
+ b_gn = tl.load(p_gn, mask=m_k, other=0)
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dgk = tl.zeros([BK,], dtype=tl.float32)
+
+ for i_v in range(tl.cdiv(V, BV)):
+ if HEAD_FIRST:
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + i_bh * NT*K*V + i_t * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ p_dh = tl.make_block_ptr(dh + i_bh * NT*K*V + i_t * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ else:
+ p_v = tl.make_block_ptr(v + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + (i_tg * H + i_h) * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ p_dh = tl.make_block_ptr(dh + (i_tg * H + i_h) * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [BV, BK]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ b_dh = tl.load(p_dh, boundary_check=(0, 1))
+ # [BK]
+ b_dgk += tl.sum(b_h * b_dh, axis=0)
+ # [BT, BK]
+ b_dq += tl.dot(b_do, b_h.to(b_do.dtype))
+ b_dk += tl.dot(b_v, b_dh.to(b_v.dtype))
+ b_dgk *= tl.exp(b_gn)
+ b_dq *= scale
+ b_gk = tl.load(p_gk, boundary_check=(0, 1))
+ b_dq = b_dq * tl.exp(b_gk)
+ b_dk = b_dk * tl.exp(b_gn[None, :] - b_gk)
+
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dq = tl.make_block_ptr(dq + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos*H+i_h)*K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + (bos*H+i_h)*K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dq = tl.make_block_ptr(dq + (bos*H+i_h)*K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (bos*H+i_h)*K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_dgk += tl.sum(b_dk * b_k, axis=0)
+ b_dq += tl.load(p_dq, boundary_check=(0, 1))
+ b_dk += tl.load(p_dk, boundary_check=(0, 1))
+ b_dg = b_q * b_dq - b_k * b_dk
+ # tl.debug_barrier()
+ b_dg = b_dg - tl.cumsum(b_dg, axis=0) + tl.sum(b_dg, axis=0)[None, :] + b_dgk[None, :]
+ # Buggy due to strange triton compiler issue.
+ # m_s = tl.where(tl.arange(0, BT)[:, None] <= tl.arange(0, BT)[None, :], 1., 0.)
+ # b_dg = tl.dot(m_s, b_dg, allow_tf32=False) + b_dgk[None, :]
+ if HEAD_FIRST:
+ p_dq = tl.make_block_ptr(dq2 + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk2 + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dg = tl.make_block_ptr(dg + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_dq = tl.make_block_ptr(dq2 + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk2 + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dg = tl.make_block_ptr(dg + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), boundary_check=(0, 1))
+
+
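+# Computes the intra-chunk score matrix A for the forward pass.
+# Sub-chunks of size BC are handled by two kernels: one for interactions between
+# different sub-chunks and one for positions within the same sub-chunk.
+# When K is large (> 256), the latter is additionally split along K and merged
+# afterwards to keep the per-block SRAM footprint bounded.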
+def chunk_gla_fwd_intra_gk(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ g: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+):
+ if head_first:
+ B, H, T, K = k.shape
+ else:
+ B, T, H, K = k.shape
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BC = min(16, triton.next_power_of_2(T))
+ BK = min(64, triton.next_power_of_2(K))
+ NC = triton.cdiv(BT, BC)
+
+ A = q.new_empty(B, *((H, T) if head_first else (T, H)), BT, dtype=torch.float32)
+ grid = (NT, NC * NC, B * H)
+ chunk_gla_fwd_A_kernel_intra_sub_inter[grid](
+ q,
+ k,
+ g,
+ A,
+ offsets,
+ indices,
+ scale,
+ T=T,
+ H=H,
+ K=K,
+ BT=BT,
+ BC=BC,
+ BK=BK,
+ NC=NC,
+ HEAD_FIRST=head_first
+ )
+
+ grid = (NT, NC, B * H)
+    # load an entire [BC, K] block into SRAM at once
+ if K <= 256:
+ BK = triton.next_power_of_2(K)
+ chunk_gla_fwd_A_kernel_intra_sub_intra[grid](
+ q,
+ k,
+ g,
+ A,
+ offsets,
+ indices,
+ scale,
+ T=T,
+ H=H,
+ K=K,
+ BT=BT,
+ BC=BC,
+ BK=BK,
+ HEAD_FIRST=head_first
+ )
+ # split then merge
+ else:
+ BK = min(128, triton.next_power_of_2(K))
+ NK = triton.cdiv(K, BK)
+ A_intra = q.new_empty(NK, B, *((H, T) if head_first else (T, H)), BC, dtype=torch.float32)
+
+ grid = (NK, NT * NC, B * H)
+ chunk_gla_fwd_A_kernel_intra_sub_intra_split[grid](
+ q,
+ k,
+ g,
+ A_intra,
+ offsets,
+ indices,
+ scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ BT=BT,
+ BC=BC,
+ BK=BK,
+ NC=NC,
+ HEAD_FIRST=head_first
+ )
+
+ grid = (NT, NC, B * H)
+ chunk_gla_fwd_A_kernel_intra_sub_intra_merge[grid](
+ A_intra,
+ A,
+ offsets,
+ indices,
+ B=B,
+ T=T,
+ H=H,
+ BT=BT,
+ BC=BC,
+ NK=NK,
+ HEAD_FIRST=head_first
+ )
+ return A
+
+
+def chunk_gla_fwd_o_gk(
+ q: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ A: torch.Tensor,
+ h: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+):
+ if head_first:
+ B, H, T, K, V = *q.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *q.shape, v.shape[-1]
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(32, triton.next_power_of_2(K))
+ BV = min(32, triton.next_power_of_2(V))
+ NV = triton.cdiv(V, BV)
+
+ grid = (NV, NT, B * H)
+ o = torch.empty_like(v)
+ chunk_gla_fwd_kernel_o[grid](
+ q,
+ v,
+ g,
+ h,
+ o,
+ A,
+ offsets,
+ indices,
+ scale,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ return o
+
+
+def chunk_gla_bwd_dA(
+ v: torch.Tensor,
+ do: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+):
+ if head_first:
+ B, H, T, V = v.shape
+ else:
+ B, T, H, V = v.shape
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BV = min(64, triton.next_power_of_2(V))
+
+ dA = v.new_empty(B, *((H, T) if head_first else (T, H)), BT, dtype=torch.float32)
+ grid = (NT, B * H)
+ chunk_gla_bwd_kernel_dA[grid](
+ v,
+ do,
+ dA,
+ offsets,
+ indices,
+ scale,
+ T=T,
+ H=H,
+ V=V,
+ BT=BT,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ return dA
+
+
+def chunk_gla_bwd_dv(
+ k: torch.Tensor,
+ g: torch.Tensor,
+ A: torch.Tensor,
+ do: torch.Tensor,
+ dh: torch.Tensor,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+):
+ if head_first:
+ B, H, T, K, V = *k.shape, do.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, do.shape[-1]
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(64, triton.next_power_of_2(K))
+ BV = min(32, triton.next_power_of_2(V))
+
+ dv = torch.empty_like(do)
+ grid = (triton.cdiv(V, BV), NT, B * H)
+ chunk_gla_bwd_kernel_dv[grid](
+ k,
+ g,
+ A,
+ do,
+ dh,
+ dv,
+ offsets,
+ indices,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ return dv
+
+
+def chunk_gla_bwd_dqk_intra(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ g: torch.Tensor,
+ dA: torch.Tensor,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+):
+ if head_first:
+ B, H, T, K = q.shape
+ else:
+ B, T, H, K = q.shape
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BC = min(16, triton.next_power_of_2(T))
+ BK = min(64, triton.next_power_of_2(K))
+ NK = triton.cdiv(K, BK)
+ NC = triton.cdiv(BT, BC)
+
+ dq = torch.empty_like(q, dtype=torch.float32)
+ dk = torch.empty_like(k, dtype=torch.float32)
+ grid = (NK, NT * NC, B * H)
+ chunk_gla_bwd_kernel_intra[grid](
+ q,
+ k,
+ g,
+ dA,
+ dq,
+ dk,
+ offsets,
+ indices,
+ T=T,
+ H=H,
+ K=K,
+ BT=BT,
+ BC=BC,
+ BK=BK,
+ NC=NC,
+ HEAD_FIRST=head_first
+ )
+ return dq, dk
+
+
+def chunk_gla_bwd_dqkg(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ h: torch.Tensor,
+ g: torch.Tensor,
+ do: torch.Tensor,
+ dh: torch.Tensor,
+ dq: torch.Tensor,
+ dk: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+):
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+            indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(64, triton.next_power_of_2(K))
+ BV = min(64, triton.next_power_of_2(V))
+ NK = triton.cdiv(K, BK)
+
+ dg = torch.empty_like(g)
+ grid = (NK, NT, B * H)
+ # work around triton compiler bugs.
+ dq2 = torch.empty_like(dq)
+ dk2 = torch.empty_like(dk)
+ chunk_gla_bwd_kernel_inter[grid](
+ q,
+ k,
+ v,
+ h,
+ g,
+ do,
+ dh,
+ dq,
+ dk,
+ dq2,
+ dk2,
+ dg,
+ offsets,
+ indices,
+ scale,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ return dq2, dk2, dg
+
+
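+# Forward pipeline: (1) compute the local (per-chunk) cumulative sum of the log
+# decays, (2) materialize the chunk-level states h via chunk_fwd_h, (3) build the
+# intra-chunk matrix A, and (4) combine both into the output o.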
+def chunk_gla_fwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ g_cumsum: Optional[torch.Tensor],
+ scale: float,
+ initial_state: torch.Tensor,
+ output_final_state: bool,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
+ T = q.shape[2] if head_first else q.shape[1]
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if g_cumsum is None:
+ g_cumsum = chunk_local_cumsum(g, BT, offsets=offsets, head_first=head_first)
+
+ h, ht = chunk_fwd_h(
+ k=k,
+ v=v,
+ g=None,
+ gk=g_cumsum,
+ gv=None,
+ h0=initial_state,
+ output_final_state=output_final_state,
+ states_in_fp32=False,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=BT
+ )
+
+    # the intra-chunk A is kept in fp32;
+    # this computation has only a marginal effect on the overall throughput
+ A = chunk_gla_fwd_intra_gk(
+ q=q,
+ k=k,
+ g=g_cumsum,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ o = chunk_gla_fwd_o_gk(
+ q=q,
+ v=v,
+ g=g_cumsum,
+ A=A,
+ h=h,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ return g_cumsum, A, h, ht, o
+
+
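+# Backward pipeline: recompute g_cumsum/h if needed, propagate dh backwards
+# through the chunks, then derive dv, dA, the intra-chunk dq/dk and finally the
+# inter-chunk dq/dk together with the gate gradient dg.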
+def chunk_gla_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ g_cumsum: Optional[torch.Tensor],
+ scale: float,
+ initial_state: torch.Tensor,
+ h: torch.Tensor,
+ A: torch.Tensor,
+ do: torch.Tensor,
+ dht: torch.Tensor,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+):
+ T = q.shape[2] if head_first else q.shape[1]
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if g_cumsum is None:
+ g_cumsum = chunk_local_cumsum(g, BT, offsets=offsets, head_first=head_first)
+
+ if h is None:
+ h, _ = chunk_fwd_h(
+ k=k,
+ v=v,
+ g=None,
+ gk=g_cumsum,
+ gv=None,
+ h0=initial_state,
+ output_final_state=False,
+ states_in_fp32=True,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ dh, dh0 = chunk_bwd_dh(
+ q=q,
+ k=k,
+ v=v,
+ g=None,
+ gk=g_cumsum,
+ gv=None,
+ do=do,
+ h0=initial_state,
+ dht=dht,
+ scale=scale,
+ states_in_fp32=True,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ dv = chunk_gla_bwd_dv(
+ k=k,
+ g=g_cumsum,
+ A=A,
+ do=do,
+ dh=dh,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ # dq dk in fp32
+ dA = chunk_gla_bwd_dA(
+ v=v,
+ do=do,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ dq, dk = chunk_gla_bwd_dqk_intra(
+ q=q,
+ k=k,
+ g=g_cumsum,
+ dA=dA,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ dq, dk, dg = chunk_gla_bwd_dqkg(
+ q=q,
+ k=k,
+ v=v,
+ h=h,
+ g=g_cumsum,
+ do=do,
+ dh=dh,
+ dq=dq,
+ dk=dk,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ return dq, dk, dv, dg, dh0
+
+
+class ChunkGLAFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ q,
+ k,
+ v,
+ g,
+ scale,
+ initial_state,
+ output_final_state,
+ offsets,
+ head_first
+ ):
+ T = q.shape[2] if head_first else q.shape[1]
+ chunk_size = min(64, triton.next_power_of_2(T))
+
+ # 2-d indices denoting the offsets of chunks in each sequence
+ # for example, if the passed `offsets` is [0, 100, 356] and `chunk_size` is 64,
+ # then there are 2 and 4 chunks in the 1st and 2nd sequences respectively, and `indices` will be
+ # [[0, 0], [0, 1], [1, 0], [1, 1], [1, 2], [1, 3]]
+ indices = None
+ if offsets is not None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], chunk_size).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ g_cumsum, A, h, ht, o = chunk_gla_fwd(
+ q=q,
+ k=k,
+ v=v,
+ g=g,
+ g_cumsum=None,
+ scale=scale,
+ initial_state=initial_state,
+ output_final_state=output_final_state,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ # recompute g_cumsum in bwd pass
+ if g.dtype != torch.float32:
+ g_cumsum = None
+ else:
+ g = None
+ ctx.save_for_backward(q, k, v, g, g_cumsum, initial_state, A)
+ ctx.chunk_size = chunk_size
+ ctx.scale = scale
+ ctx.offsets = offsets
+ ctx.indices = indices
+ ctx.head_first = head_first
+ return o, ht
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, do, dht):
+ q, k, v, g, g_cumsum, initial_state, A = ctx.saved_tensors
+ chunk_size, scale, offsets, indices, head_first = ctx.chunk_size, ctx.scale, ctx.offsets, ctx.indices, ctx.head_first
+ dq, dk, dv, dg, dh0 = chunk_gla_bwd(
+ q=q,
+ k=k,
+ v=v,
+ g=g,
+ g_cumsum=g_cumsum,
+ scale=scale,
+ h=None,
+ A=A,
+ initial_state=initial_state,
+ do=do,
+ dht=dht,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ return dq.to(q), dk.to(k), dv.to(v), dg, None, dh0, None, None, None
+
+
+def chunk_gla(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+    scale: Optional[float] = None,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ g (torch.Tensor):
+ Forget gates of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]` applied to keys.
+        scale (Optional[float]):
+ Scale factor for the attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[N, H, K, V]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[N, H, K, V]`. Default: `False`.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format, which is not supported for variable-length inputs.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (torch.Tensor):
+ Final state of shape `[N, H, K, V]` if `output_final_state=True` else `None`.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+ >>> from fla.ops.gla import chunk_gla
+ # inputs with equal lengths
+ >>> B, T, H, K, V = 4, 2048, 4, 512, 512
+ >>> q = torch.randn(B, T, H, K, device='cuda')
+ >>> k = torch.randn(B, T, H, K, device='cuda')
+ >>> v = torch.randn(B, T, H, V, device='cuda')
+ >>> g = F.logsigmoid(torch.randn(B, T, H, K, device='cuda'))
+ >>> h0 = torch.randn(B, H, K, V, device='cuda')
+ >>> o, ht = chunk_gla(q, k, v, g,
+ initial_state=h0,
+ output_final_state=True,
+ head_first=False)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+ >>> q, k, v, g = map(lambda x: rearrange(x, 'b t h d -> 1 (b t) h d'), (q, k, v, g))
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, ht_var = chunk_gla(q, k, v, g,
+ initial_state=h0,
+ output_final_state=True,
+ offsets=offsets,
+ head_first=False)
+ >>> assert o.allclose(o_var.view(o.shape))
+ >>> assert ht.allclose(ht_var)
+ """
+ if offsets is not None:
+ if q.shape[0] != 1:
+            raise ValueError(f"The batch size is expected to be 1 rather than {q.shape[0]} when using `offsets`. "
+ f"Please flatten variable-length inputs before processing.")
+ if head_first:
+ raise RuntimeError("Sequences with variable lengths are not supported for head-first mode")
+ if initial_state is not None and initial_state.shape[0] != len(offsets) - 1:
+ raise ValueError(f"The number of initial states is expected to be equal to the number of input sequences, "
+ f"i.e., {len(offsets) - 1} rather than {initial_state.shape[0]}.")
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+ o, final_state = ChunkGLAFunction.apply(q, k, v, g, scale, initial_state, output_final_state, offsets, head_first)
+ return o, final_state
diff --git a/fla/ops/gla/fused_chunk.py b/fla/ops/gla/fused_chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..5313f789fc672a895ff1e31b1a4aa3910863d40e
--- /dev/null
+++ b/fla/ops/gla/fused_chunk.py
@@ -0,0 +1,644 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+# Gated Linear Attention Transformers with Hardware-Efficient Training: https://arxiv.org/abs/2312.06635
+
+from typing import Tuple
+
+import torch
+import torch.nn.functional as F
+import triton
+import triton.language as tl
+from einops import rearrange
+from packaging import version
+
+from fla.ops.utils import chunk_local_cumsum
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
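+# Precomputes the gated projections used by the fused-chunk kernels:
+# qg = q * exp(g) * scale and kg = k * exp(g_last - g), where g is the
+# within-chunk cumulative log decay and g_last its value at the chunk boundary.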
+@triton.jit
+def prepare_qg_kg(
+ q,
+ k,
+ g,
+ qg,
+ kg,
+ s_k_h,
+ scale,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr
+):
+
+ i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ p_q = q + i_bh * s_k_h + i_c * BT * K + i_k * BK + tl.arange(0, BK)
+ p_g = g + i_bh * s_k_h + i_c * BT * K + i_k * BK + tl.arange(0, BK)
+ p_k = k + i_bh * s_k_h + i_c * BT * K + i_k * BK + tl.arange(0, BK)
+ p_qg = qg + i_bh * s_k_h + i_c * BT * K + i_k * BK + tl.arange(0, BK)
+ p_kg = kg + i_bh * s_k_h + i_c * BT * K + i_k * BK + tl.arange(0, BK)
+
+ mask = (i_k * BK + tl.arange(0, BK)) < K
+
+ last_decay = tl.load(g + i_bh * s_k_h + (i_c * BT + BT - 1) * K + i_k * BK + tl.arange(0, BK))
+
+ for i in range(BT):
+ b_q = tl.load(p_q, mask=mask, other=0)
+ b_k = tl.load(p_k, mask=mask, other=0)
+ _g = tl.load(p_g, mask=mask, other=0).to(tl.float32)
+ b_q *= tl.exp(_g) * scale
+ b_k *= tl.exp(last_decay - _g)
+ tl.store(p_kg, b_k.to(p_kg.dtype.element_ty), mask=mask)
+ tl.store(p_qg, b_q.to(p_qg.dtype.element_ty), mask=mask)
+ p_q += K
+ p_g += K
+ p_k += K
+ p_kg += K
+ p_qg += K
+
+
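+# Walks each chunk backwards to (1) merge the inner- and inter-chunk gradient
+# contributions into dq/dk, rescaling the inter parts by the decays, and
+# (2) accumulate the per-chunk gate gradient dg = dq * q - dk * k as a reverse
+# cumulative sum; the cross-chunk correction is applied later on the host side.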
+@triton.jit
+def bwd_decay_global_cumsum(
+ dq_inner,
+ dq_inter,
+ dk_inner,
+ dk_inter,
+ q, k, g, dg,
+ s_k_h,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ K: tl.constexpr
+):
+ i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * K
+ p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * K
+ p_g = g + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * K
+ p_dg = dg + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * K
+ p_dq_inner = dq_inner + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * K
+ p_dk_inner = dk_inner + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * K
+ p_dq_inter = dq_inter + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * K
+ p_dk_inter = dk_inter + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * K
+ cum_grad_dg = tl.zeros([BK], dtype=tl.float32)
+ mask = (i_k * BK + tl.arange(0, BK)) < K
+ last_g = tl.zeros([BK], dtype=tl.float32)
+ for j in range(BT-1, -1, -1):
+ _g = tl.load(p_g, mask=mask, other=0).to(tl.float32)
+ if j == (BT-1):
+ last_g = _g
+ b_dq1 = tl.load(p_dq_inner, mask=mask, other=0)
+ b_dq2 = tl.load(p_dq_inter, mask=mask, other=0)
+ b_dq2 *= tl.exp(_g)
+ b_dq = b_dq1 + b_dq2
+ tl.store(p_dq_inter, b_dq, mask=mask)
+ b_dk1 = tl.load(p_dk_inner, mask=mask, other=0)
+ b_dk2 = tl.load(p_dk_inter, mask=mask, other=0)
+ b_dk2 *= tl.exp(last_g - _g)
+ b_dk = b_dk1 + b_dk2
+ tl.store(p_dk_inter, b_dk, mask=mask)
+ b_q = tl.load(p_q, mask=mask, other=0)
+ b_k = tl.load(p_k, mask=mask, other=0)
+ b_dg = b_dq * b_q - b_dk * b_k
+ cum_grad_dg += b_dg
+ tl.store(p_dg, cum_grad_dg.to(p_dg.dtype.element_ty), mask=mask)
+ p_g -= K
+ p_k -= K
+ p_q -= K
+ p_dq_inner -= K
+ p_dk_inner -= K
+ p_dq_inter -= K
+ p_dk_inter -= K
+ p_dg -= K
+
+
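+# Inter-chunk forward recurrence over pre-gated inputs (see prepare_qg_kg):
+# for each chunk, o = q @ h and h = h * exp(d_b)[:, None] + k^T @ v,
+# where d_b is the cumulative log decay at the last position of the chunk.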
+@triton.jit
+def fused_chunk_gla_fwd_kernel(
+ q, # query [B, H, L, K]
+ k, # key [B, H, L, K]
+ v, # value [B, H, L, V]
+ g, # cumulative sum of log decay [B, H, L, K]
+ o, # output [B, H, L, V]
+
+ h0, # initial state of the chunk [B, H, K, V]
+ ht, # final state of the chunk [B, H, K, V]
+
+ s_k_h, # stride size: L * K
+ s_k_t, # stride size: K
+ s_k_d, # stride size: 1
+
+ s_v_h, # stride size: L * V
+ s_v_t, # stride size: V
+ s_v_d, # stride size: 1
+
+ B: tl.constexpr, # batch size
+ H: tl.constexpr, # H
+ T: tl.constexpr, # T
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ USE_INITIAL_STATE: tl.constexpr,
+ STORE_FINAL_STATE: tl.constexpr,
+ CHECK: tl.constexpr
+):
+ # indices
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+
+ # make block pointers
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (0, i_k * BK), (BT, BK), (1, 0))
+ p_db = g + i_bh * s_k_h + (BT - 1) * s_k_t + i_k * BK + tl.arange(0, BK)
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, 0), (BK, BT), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (0, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + (i_bh + i_k * B * H) * s_v_h, (T, V), (s_v_t, s_v_d), (0, i_v * BV), (BT, BV), (1, 0))
+
+ if USE_INITIAL_STATE:
+ p_h = tl.make_block_ptr(h0 + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_h += tl.load(p_h, boundary_check=(0, 1)).to(tl.float32)
+
+ mask = (i_k * BK + tl.arange(0, BK)) < K
+
+ for i in range(0, tl.cdiv(T, BT)):
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ d_b = tl.load(p_db, mask=mask, other=0).to(tl.float32)
+ if CHECK and i == 0:
+ b_o = tl.dot(b_q.to(b_v.dtype), b_h.to(b_v.dtype), allow_tf32=False)
+ b_h = b_h * tl.exp(d_b)[:, None] + tl.dot(b_k.to(b_v.dtype), b_v, allow_tf32=False)
+ else:
+ b_o = tl.dot(b_q.to(b_v.dtype), b_h.to(b_v.dtype), allow_tf32=False)
+ b_h = b_h * tl.exp(d_b)[:, None] + tl.dot(b_k.to(b_v.dtype), b_v, allow_tf32=False)
+
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+ p_q = tl.advance(p_q, (BT, 0))
+ p_k = tl.advance(p_k, (0, BT))
+ p_v = tl.advance(p_v, (BT, 0))
+ p_o = tl.advance(p_o, (BT, 0))
+ p_db += BT * K
+
+ if STORE_FINAL_STATE:
+ p_final = tl.make_block_ptr(ht + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_final, b_h.to(p_final.dtype.element_ty), boundary_check=(0, 1))
+
+
+# Similar to Algorithm 1 of https://arxiv.org/abs/2006.16236
+@triton.jit
+def fused_chunk_gla_bwd_kernel(
+ q, k, v, g,
+ do, # gradient of output [B, H, L, V]
+ dq, # gradient of query [NV, B, H, L, K]
+ dk, # gradient of key [NV, B, H, L, K]
+ dv, # gradient of value [NK, B, H, L, V]
+
+ h0, # initial state of the chunk [B, H, K, V]
+
+ s_k_h, # stride size: L * K
+ s_k_t, # stride size: K
+ s_k_d, # stride size: 1
+
+ s_v_h, # stride size: L * V
+ s_v_t, # stride size: V
+ s_v_d, # stride size: 1
+ scale, # K ** -0.5
+
+ B: tl.constexpr, # B
+ H: tl.constexpr, # H
+ T: tl.constexpr, # T
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ # clamp_min, # minimum log value of the gate for numerical stability. default: -5
+ BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ USE_INITIAL_STATE: tl.constexpr,
+ CHECK: tl.constexpr
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ # [BV, BK]
+ b_h = tl.zeros([BV, BK], dtype=tl.float32)
+
+ if USE_INITIAL_STATE:
+ p_h = tl.make_block_ptr(h0 + i_bh * K * V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ b_h += tl.load(p_h, boundary_check=(0, 1)).to(tl.float32)
+
+ mask = (i_k * BK + tl.arange(0, BK)) < K
+ for i in range(0, tl.cdiv(T, BT)):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i * BT, i_k * BK), (BT, BK), (1, 0))
+ p_db = g + i_bh * s_k_h + ((i+1) * BT - 1) * s_k_t + i_k * BK + tl.arange(0, BK)
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i * BT), (BV, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dq = tl.make_block_ptr(dq + (i_bh+i_v*B*H)*s_k_h, (T, K), (s_k_t, s_k_d), (i * BT, i_k * BK), (BT, BK), (1, 0))
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+ # [BT, K]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ d_b = tl.load(p_db, mask=mask, other=0).to(tl.float32)
+
+ # [V, BT]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, V]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [V, K]
+ if CHECK and i == 0:
+ b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False)
+ b_h = b_h * tl.exp(d_b)[None, :] + tl.dot(b_v, b_k.to(b_v.dtype), allow_tf32=False)
+ else:
+ b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False)
+ b_h = b_h * tl.exp(d_b)[None, :] + tl.dot(b_v, b_k.to(b_v.dtype), allow_tf32=False)
+ b_dq *= scale
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+
+ # sync threads
+ b_h = None
+ tl.debug_barrier()
+ # [BK, BV]
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+
+ # cum = tl.zeros([BK], dtype=tl.float32)
+ for i in range(1, tl.cdiv(T, BT) + 1):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, T - i * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (T - i * BT, i_k * BK), (BT, BK), (1, 0))
+ p_db = g + i_bh * s_k_h + (T - (i-1) * BT - 1) * s_k_t + i_k * BK + tl.arange(0, BK)
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (i_bh + i_v * B * H) * s_k_h, (T, K),
+ (s_k_t, s_k_d), (T - i * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_bh + i_k * B * H) * s_v_h, (T, V),
+ (s_v_t, s_v_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0))
+ # [K, BT]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ # [BT, K]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, V]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_db = tl.load(p_db, mask=mask, other=0).to(tl.float32)
+
+ # inter-chunk
+ # [K, V]
+ if CHECK and i == 1:
+ b_dk = tl.trans(tl.dot(b_dh.to(b_v.dtype), tl.trans(b_v), allow_tf32=False))
+ b_dv = tl.dot((b_k).to(b_v.dtype), b_dh.to(b_v.dtype), allow_tf32=False)
+ b_dh = b_dh * tl.exp(b_db)[:, None] + tl.dot(b_q.to(b_do.dtype), b_do, allow_tf32=False)
+ else:
+ b_dk = tl.trans(tl.dot(b_dh.to(b_v.dtype), tl.trans(b_v), allow_tf32=False))
+ b_dv = tl.dot((b_k).to(b_v.dtype), b_dh.to(b_v.dtype), allow_tf32=False)
+ b_dh = b_dh * tl.exp(b_db)[:, None] + tl.dot(b_q.to(b_do.dtype), b_do, allow_tf32=False)
+
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+
+
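+# Intra-chunk forward kernel: for every position i within a chunk, computes the
+# causal scores A[i, j] = sum_k q_i[k] * k_j[k] * exp(g_i[k] - g_j[k]) for j <= i,
+# accumulated per K-block (the NK partial results are summed on the host).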
+@triton.jit
+def fwd_inner_chunk(
+ q, k, g, A,
+ s_k_h, # stride size: L * K
+ s_k_t, # stride size: K
+ s_k_d, # stride size: 1
+ scale, # K ** -0.5
+ B: tl.constexpr, # B
+ H: tl.constexpr, # H
+ T: tl.constexpr, # T
+ K: tl.constexpr, # K
+ BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size
+ BK: tl.constexpr # BLOCK SIZE along the K dimension
+):
+
+ i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+
+ p_g = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+
+ b_g = tl.load(p_g, boundary_check=(0, 1)).to(tl.float32)
+
+ mask = (i_k * BK + tl.arange(0, BK)) < K
+ o_i = tl.arange(0, BT)
+
+ p_q = q + i_bh * s_k_h + i_k * BK + i_t * BT * K + tl.arange(0, BK)
+ p_gq = g + i_bh * s_k_h + i_k * BK + i_t * BT * K + tl.arange(0, BK)
+ p_A = A + (i_bh + (i_k * B * H)) * (tl.cdiv(T, BT) * BT * BT) + i_t * BT * BT + tl.arange(0, BT)
+
+ for i in range(BT):
+ _q = tl.load(p_q, mask=mask, other=0) * scale
+ gq = tl.load(p_gq, mask=mask, other=0).to(tl.float32)
+ s = _q[None, :] * b_k * tl.exp(gq[None, :] - b_g)
+ score = tl.sum(s, axis=1)
+ score = tl.where(o_i <= i, score, 0)
+ tl.store(p_A, score.to(p_A.dtype.element_ty))
+ p_q += K
+ p_gq += K
+ p_A += BT
+
+
+@triton.jit
+def bwd_inner_chunk(
+ q,
+ k,
+ g,
+ dA,
+ dq,
+ dk,
+ s_k_h, # stride size: L * K
+ s_k_t, # stride size: K
+ s_k_d, # stride size: 1
+ T: tl.constexpr, # T
+ K: tl.constexpr, # K
+ # clamp_min, # minimum log value of the gate for numerical stability. default: -5
+ BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+):
+ i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ p_g = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ b_g = tl.load(p_g, boundary_check=(0, 1)).to(tl.float32)
+
+ mask = (i_k * BK + tl.arange(0, BK)) < K
+ o_i = tl.arange(0, BT)
+
+ p_q = q + i_bh * s_k_h + i_k * BK + i_t * BT * K + tl.arange(0, BK)
+ p_dq = dq + (i_bh) * s_k_h + i_k * BK + i_t * BT * K + tl.arange(0, BK)
+ p_gq = g + i_bh * s_k_h + i_k * BK + i_t * BT * K + tl.arange(0, BK)
+ p_dA = dA + i_bh * (tl.cdiv(T, BT) * BT * BT) + i_t * BT * BT + tl.arange(0, BT)
+
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+
+ for i in range(BT):
+ _q = tl.load(p_q, mask=mask, other=0)
+ gq = tl.load(p_gq, mask=mask, other=0).to(tl.float32)
+ score = tl.exp(gq[None, :] - b_g)
+ score = tl.where(o_i[:, None] <= i, score, 0)
+ _dA = tl.load(p_dA)
+ _dA = tl.where(o_i <= i, _dA, 0)
+ b_dk += (_dA[:, None] * score * _q[None, :])
+ b_dq = tl.sum(_dA[:, None] * score * b_k, axis=0)
+ tl.store(p_dq, b_dq, mask=mask)
+ p_q += K
+ p_dq += K
+ p_gq += K
+ p_dA += BT
+
+ p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ tl.store(p_dk, b_dk.to(dk.dtype.element_ty), boundary_check=(0, 1))
+
+
+class FusedChunkGLAFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(ctx, q, k, v, g, scale, initial_state, output_final_state):
+ ctx.g_dtype = g.dtype
+ ctx.scale = scale
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ BT = 16 # chunk_size
+ BK, BV = min(K, 64), min(V, 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_stages = 1
+ num_warps = 2
+
+ g_org = g
+        # the cumulative decay should be kept in float32, otherwise the error accumulates and is amplified.
+ g = chunk_local_cumsum(g_org, chunk_size=BT)
+ o = q.new_empty(NK, B, H, T, V)
+ q_g = torch.empty_like(q)
+ k_g = torch.empty_like(k)
+
+ grid = (NK, triton.cdiv(T, BT), B * H)
+ prepare_qg_kg[grid](
+ q, k, g, q_g, k_g,
+ q.stride(1),
+ scale,
+ K=K,
+ BT=BT,
+ BK=BK,
+ num_warps=1
+ )
+
+ if output_final_state:
+ final_state = q.new_empty(B, H, K, V, dtype=torch.float, requires_grad=False)
+ else:
+ final_state = None
+        # the bug still exists even with Triton 2.2 on H100 GPUs,
+        # so we always enable the initial-condition checks
+ CHECK = True
+ if version.parse(triton.__version__) < version.parse('2.2.0'):
+ import warnings
+            warnings.warn(
+                "Triton<2.2.0 detected when running this kernel, "
+                "which is known to have some weird compiler issues (refer to https://github.com/openai/triton/issues/2852) "
+                "that lead to significant precision loss. "
+                "We've added some initial condition checks to work around this, at the cost of some speed. "
+                "For optimal performance, it is recommended to install Triton>=2.2.0 (if possible)."
+            )
+ CHECK = True
+
+ grid = (NV, NK, B * H)
+ fused_chunk_gla_fwd_kernel[grid](
+ q_g, k_g, v, g, o, initial_state, final_state,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ B=B,
+ H=H,
+ T=T,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ STORE_FINAL_STATE=output_final_state,
+ CHECK=CHECK,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+
+ o = o.sum(0)
+
+ # intra-chunk
+ chunk_size = 16
+ num_chunk = T // chunk_size
+ v2 = rearrange(v, 'b h (n c) d -> b h n c d', n=num_chunk)
+ BK = min(K, 64)
+ NK = triton.cdiv(K, BK)
+ A = q.new_empty(NK, B, H, triton.cdiv(T, BT), BT, BT)
+ grid = (NK, triton.cdiv(T, BT), B * H)
+ fwd_inner_chunk[grid](
+ q, k, g, A,
+ q.stride(1), q.stride(2), q.stride(3),
+ scale,
+ B=B,
+ H=H,
+ T=T,
+ K=K,
+ BT=BT,
+ BK=BK,
+ num_stages=3,
+ num_warps=4
+ )
+ A = A.sum(0)
+ o2 = A @ v2
+ o2 = rearrange(o2, 'b h n c d -> b h (n c) d')
+ # combine inner and inter
+ o.add_(o2)
+ ctx.save_for_backward(q, k, v, g_org, A, initial_state)
+ ctx.CHECK = CHECK
+ return o.to(v), final_state
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dht=None):
+ q, k, v, g_org, A, initial_state = ctx.saved_tensors
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ scale = ctx.scale
+
+ # recomputation
+ # inter-chunk
+ BT = 16 # chunk_size
+ g = chunk_local_cumsum(g_org, chunk_size=BT)
+ BK, BV = min(K, 64), min(V, 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ q_g = torch.empty_like(q)
+ k_g = torch.empty_like(k)
+ grid = (NK, triton.cdiv(T, BT), B * H)
+ prepare_qg_kg[grid](
+ q, k, g, q_g, k_g,
+ q.stride(1),
+ scale,
+ K=K,
+ BT=BT,
+ BK=BK,
+ num_warps=1
+ )
+
+ # inter-chunk
+ BT = 16
+ BK, BV = min(triton.next_power_of_2(K), 64), min(triton.next_power_of_2(V), 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_stages = 1
+ num_warps = 2
+ dq = q.new_empty(NV, B, H, T, K)
+ dk = q.new_empty(NV, B, H, T, K)
+ dv = q.new_empty(NK, B, H, T, V)
+
+ grid = (NV, NK, B * H)
+
+ fused_chunk_gla_bwd_kernel[grid](
+ q_g, k_g, v, g, do, dq, dk, dv, initial_state,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ scale,
+ B=B,
+ H=H,
+ T=T,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ CHECK=ctx.CHECK,
+ num_warps=num_warps,
+ num_stages=num_stages,
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dv = dv.sum(0)
+
+        # intra-chunk
+ num_chunk = T // BT
+ v2 = rearrange(v, 'b h (n c) d -> b h n c d', n=num_chunk)
+ do2 = rearrange(do, 'b h (n c) d -> b h n c d', n=num_chunk)
+ dA2 = (do2 @ v2.transpose(-2, -1)) * scale
+ dv2 = A.transpose(-1, -2) @ do2
+ dv2 = rearrange(dv2, 'b h n c d -> b h (n c) d', n=num_chunk)
+
+ BK = min(triton.next_power_of_2(K), 16)
+ NK = triton.cdiv(K, BK)
+ dk2 = torch.empty_like(k)
+ dq2 = torch.empty_like(q)
+
+ grid = (NK, triton.cdiv(T, BT), B * H)
+ bwd_inner_chunk[grid](
+ q, k, g,
+ dA2, dq2, dk2,
+ q.stride(1), q.stride(2), q.stride(3),
+ T=T,
+ K=K,
+ BT=BT,
+ BK=BK,
+ num_warps=1,
+ num_stages=3
+ )
+
+ BK = min(triton.next_power_of_2(K), 32)
+ NK = triton.cdiv(K, BK)
+ dg = torch.empty_like(g, dtype=torch.float32)
+ grid = (NK, triton.cdiv(T, BT), B * H)
+ bwd_decay_global_cumsum[grid](
+ dq2, dq, dk2, dk, q, k, g, dg,
+ q.stride(1),
+ K=K,
+ BT=BT,
+ BK=BK,
+ num_warps=1,
+ num_stages=1
+ )
+ dg = rearrange(dg, 'b h (n c) d -> b h n c d', c=BT)
+
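+        # dg[..., 0, :] holds the per-chunk total (the within-chunk reverse cumsum
+        # evaluated at the first position); the exclusive reverse cumsum below adds,
+        # for every chunk, the totals of all later chunks.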
+ def rev_cumsum_exclusive(x):
+ cumsum_x = x.cumsum(-2)
+ rev_cumsum_x = cumsum_x[..., -1, None, :] - cumsum_x
+ return rev_cumsum_x
+
+ rev_cumsum_dg = rev_cumsum_exclusive(dg[..., 0, :])
+ dg.add_(rev_cumsum_dg.unsqueeze(-2))
+ dv.add_(dv2)
+ dg = rearrange(dg, 'b h n c d -> b h (n c) d')
+
+ return dq.to(q), dk.to(k), dv.to(v), dg.to(ctx.g_dtype), None, None, None
+
+
+def pad(x, chunk_size=16):
+ T = x.shape[-2]
+ padded_seq_len = ceildiv(T, chunk_size) * chunk_size
+ if x.shape[-2] % chunk_size != 0:
+ x = F.pad(x, (0, 0, 0, padded_seq_len - T))
+
+ return x
+
+
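+# ceiling division without floats: -(a // -b) == ceil(a / b) for positive b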
+def ceildiv(a, b):
+ return -(a // -b)
+
+
+def fused_chunk_gla(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+    scale: float = -1,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ if scale == -1:
+ scale = q.shape[-1] ** -0.5
+ if initial_state is not None:
+ initial_state = initial_state.detach()
+ seq_len = q.shape[-2]
+ q, k, v, g = map(lambda x: pad(x), [q, k, v, g])
+ if not head_first:
+ q, k, v, g = map(lambda x: x.transpose(1, 2), (q, k, v, g))
+ o, final_state = FusedChunkGLAFunction.apply(q, k, v, g, scale, initial_state, output_final_state)
+ o = o[..., :seq_len, :]
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o, final_state
diff --git a/fla/ops/gla/fused_recurrent.py b/fla/ops/gla/fused_recurrent.py
new file mode 100644
index 0000000000000000000000000000000000000000..68eb1492d58ce5c9229bd4b9bd3acd320bfb223d
--- /dev/null
+++ b/fla/ops/gla/fused_recurrent.py
@@ -0,0 +1,116 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+
+from fla.ops.common.fused_recurrent import fused_recurrent
+
+
+def fused_recurrent_gla(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ gk: Optional[torch.Tensor] = None,
+ gv: Optional[torch.Tensor] = None,
+    scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ gk (torch.Tensor):
+ Forget gates of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]` applied to keys.
+ gv (torch.Tensor):
+ Forget gates of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]` applied to values.
+        scale (Optional[float]):
+ Scale factor for the attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[N, H, K, V]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[N, H, K, V]`. Default: `False`.
+ reverse (Optional[bool]):
+ If `True`, process the state passing in reverse order. Default: `False`.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format, which is not supported for variable-length inputs.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (torch.Tensor):
+ Final state of shape `[N, H, K, V]` if `output_final_state=True` else `None`.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+ >>> from fla.ops.gla import fused_recurrent_gla
+ # inputs with equal lengths
+ >>> B, T, H, K, V = 4, 2048, 4, 512, 512
+ >>> q = torch.randn(B, T, H, K, device='cuda')
+ >>> k = torch.randn(B, T, H, K, device='cuda')
+ >>> v = torch.randn(B, T, H, V, device='cuda')
+ >>> g = F.logsigmoid(torch.randn(B, T, H, K, device='cuda'))
+ >>> h0 = torch.randn(B, H, K, V, device='cuda')
+ >>> o, ht = fused_recurrent_gla(q, k, v, g,
+ initial_state=h0,
+ output_final_state=True,
+ head_first=False)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+ >>> q, k, v, g = map(lambda x: rearrange(x, 'b t h d -> 1 (b t) h d'), (q, k, v, g))
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, ht_var = fused_recurrent_gla(q, k, v, g,
+ initial_state=h0,
+ output_final_state=True,
+ offsets=offsets,
+ head_first=False)
+ >>> assert o.allclose(o_var.view(o.shape))
+ >>> assert ht.allclose(ht_var)
+ """
+ if offsets is not None:
+ if q.shape[0] != 1:
+            raise ValueError(f"The batch size is expected to be 1 rather than {q.shape[0]} when using `offsets`. "
+ f"Please flatten variable-length inputs before processing.")
+ if head_first:
+ raise RuntimeError("Sequences with variable lengths are not supported for head-first mode")
+ if initial_state is not None and initial_state.shape[0] != len(offsets) - 1:
+ raise ValueError(f"The number of initial states is expected to be equal to the number of input sequences, "
+ f"i.e., {len(offsets) - 1} rather than {initial_state.shape[0]}.")
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ o, final_state = fused_recurrent(
+ q=q,
+ k=k,
+ v=v,
+ g=None,
+ gk=gk,
+ gv=gv,
+ scale=scale,
+ initial_state=initial_state,
+ output_final_state=output_final_state,
+ reverse=reverse,
+ offsets=offsets,
+ head_first=head_first
+ )
+ return o, final_state
diff --git a/fla/ops/gla/naive.py b/fla/ops/gla/naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..507a7395c0c28b0a9c54008e1735098cd3fbdc85
--- /dev/null
+++ b/fla/ops/gla/naive.py
@@ -0,0 +1,41 @@
+# -*- coding: utf-8 -*-
+
+from typing import Optional
+
+import torch
+
+
+def ceildiv(a, b):
+ return -(a // -b)
+
+
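+# Reference (non-fused) float32 implementation of the GLA recurrence, useful for
+# correctness checks against the Triton kernels:
+#     h_t = exp(gk_t)[..., None] * h_{t-1} + k_t v_t^T
+#     o_t = h_t^T (q_t * K**-0.5)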
+def naive_recurrent_gla(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ gk: torch.Tensor,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False
+):
+ dtype = q.dtype
+ q, k, v, gk = map(lambda x: x.float(), (q, k, v, gk))
+ B, H, T, K, V = *q.shape, v.shape[-1]
+ o = torch.zeros_like(v)
+ scale = K ** -0.5
+
+ h = q.new_zeros(B, H, K, V, dtype=torch.float32)
+ if initial_state is not None:
+ h += initial_state.float()
+
+ for i in range(T):
+ q_i = q[:, :, i] * scale
+ k_i = k[:, :, i]
+ v_i = v[:, :, i]
+ gk_i = gk[:, :, i].exp()
+ kv_i = k_i[..., None] * v_i[..., None, :]
+ h = h * gk_i[..., None] + kv_i
+ o[:, :, i] = (q_i[..., None] * h).sum(-2)
+
+ if not output_final_state:
+ h = None
+ return o.to(dtype), h
diff --git a/fla/ops/gsa/__init__.py b/fla/ops/gsa/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..ed8a88014ddfc3143e67d3a48c38a54b75d7f3d6
--- /dev/null
+++ b/fla/ops/gsa/__init__.py
@@ -0,0 +1,9 @@
+# -*- coding: utf-8 -*-
+
+from .chunk import chunk_gsa
+from .fused_recurrent import fused_recurrent_gsa
+
+__all__ = [
+ 'chunk_gsa',
+ 'fused_recurrent_gsa'
+]
diff --git a/fla/ops/gsa/chunk.py b/fla/ops/gsa/chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..45825184dce1765adc568eeec20b05208b9f8be9
--- /dev/null
+++ b/fla/ops/gsa/chunk.py
@@ -0,0 +1,1255 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+from einops import reduce
+
+from fla.ops.common.chunk_h import chunk_bwd_dh, chunk_fwd_h
+from fla.ops.gla.chunk import chunk_gla_bwd, chunk_gla_fwd
+from fla.ops.utils import chunk_local_cumsum, softmax_bwd, softmax_fwd
+from fla.utils import contiguous
+
+
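+# GSA forward (inter-chunk, key side): accumulates o += q @ h and the causal
+# intra-chunk scores A += q @ k^T over K-blocks, then gates the output with
+# exp(g). Each group of NG query heads shares one key/value head.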
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gsa_fwd_k_kernel_inter(
+ q,
+ k,
+ h,
+ g,
+ o,
+ A,
+ offsets,
+ indices,
+ scale,
+ T: tl.constexpr,
+ HQ: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NG: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_bg = i_bh // NG
+ i_b, i_hq = i_bh // HQ, i_bh % HQ
+ i_h = i_hq // NG
+ if USE_OFFSETS:
+ i_tg = i_t
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ else:
+ NT = tl.cdiv(T, BT)
+ i_tg = i_b * NT + i_t
+ bos, eos = i_b * T, i_b * T + T
+
+ o_i = tl.arange(0, BT)
+ m_s = o_i[:, None] >= o_i[None, :]
+
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ b_A = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bg * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_h = tl.make_block_ptr(h + (i_bg * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos * HQ + i_hq) * K, (T, K), (HQ*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_h = tl.make_block_ptr(h + (i_tg * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BK, BV]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ # [BT, BV]
+ b_o += tl.dot(b_q, b_h)
+ # [BT, BT]
+ b_A += tl.dot(b_q, b_k)
+ if HEAD_FIRST:
+ p_g = tl.make_block_ptr(g + i_bg * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_A = tl.make_block_ptr(A + i_bh * T*BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ else:
+ p_g = tl.make_block_ptr(g + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + (bos * HQ + i_hq) * V, (T, V), (HQ*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_A = tl.make_block_ptr(A + (bos * HQ + i_hq) * BT, (T, BT), (HQ*BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ # [BT, BV]
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+ b_o = b_o * tl.exp(b_g)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+ # [BT, BT]
+ b_A = tl.where(m_s, b_A, 0.)
+ if i_v == 0:
+ tl.store(p_A, b_A.to(p_A.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gsa_fwd_k_kernel_intra(
+ v,
+ g,
+ o,
+ A,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ HQ: tl.constexpr,
+ H: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BV: tl.constexpr,
+ NC: tl.constexpr,
+ NG: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_bg = i_bh // NG
+ i_b, i_hq = i_bh // HQ, i_bh % HQ
+ i_h = i_hq // NG
+ i_t, i_i = i_c // NC, i_c % NC
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ o_v = i_v * BV + tl.arange(0, BV)
+ m_v = o_v < V
+
+ if i_t * BT + i_i * BC > T:
+ return
+
+ if HEAD_FIRST:
+ p_g = tl.make_block_ptr(g + i_bg * T*V, (T, V), (V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + i_bg * T*V + min(i_t * BT + i_i * BC, T) * V + o_v, BV), BV)
+ else:
+ p_g = tl.make_block_ptr(g + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + (bos + min(i_t * BT + i_i * BC, T)) * H*V + i_h * V + o_v, BV), BV)
+ # [BV,]
+ b_gn = tl.load(p_gn, mask=m_v, other=0)
+ # [BC, BV]
+ b_o = tl.zeros([BC, BV], dtype=tl.float32)
+ for i_j in range(0, i_i):
+ if HEAD_FIRST:
+ p_A = tl.make_block_ptr(A + i_bh * T*BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bg * T*V, (T, V), (V, 1), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0))
+ p_gv = tl.make_block_ptr(g + i_bg * T*V, (T, V), (V, 1), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0))
+ else:
+ p_A = tl.make_block_ptr(A + (bos*HQ+i_hq) * BT, (T, BT), (HQ*BT, 1), (i_t*BT+i_i*BC, i_j * BC), (BC, BC), (1, 0))
+ p_v = tl.make_block_ptr(v + (bos*H+i_h) * V, (T, V), (H*V, 1), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0))
+ p_gv = tl.make_block_ptr(g + (bos*H+i_h) * V, (T, V), (H*V, 1), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0))
+ # [BC, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_gv = tl.load(p_gv, boundary_check=(0, 1))
+ b_vg = (b_v * tl.exp(b_gn[None, :] - b_gv)).to(b_v.dtype)
+ # [BC, BC]
+ b_A = tl.load(p_A, boundary_check=(0, 1))
+ b_o += tl.dot(b_A, b_vg)
+ # [BC, BV]
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+ b_o *= tl.exp(b_g - b_gn[None, :])
+
+ o_i = tl.arange(0, BC)
+ if HEAD_FIRST:
+ o_A = i_bh * T*BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_i * BC
+ else:
+ o_A = (bos + i_t * BT + i_i * BC + tl.arange(0, BC)) * HQ*BT + i_hq * BT + i_i * BC
+ m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T
+ for j in range(0, min(BC, T - i_t * BT - i_i * BC)):
+ if HEAD_FIRST:
+ p_v = tl.max_contiguous(tl.multiple_of(v + i_bg * T*V + (i_t * BT + i_i * BC + j) * V + o_v, BV), BV)
+ p_gv = tl.max_contiguous(tl.multiple_of(g + i_bg * T*V + (i_t * BT + i_i * BC + j) * V + o_v, BV), BV)
+ else:
+ p_v = tl.max_contiguous(tl.multiple_of(v + (bos + i_t * BT + i_i * BC + j) * H*V + i_h * V + o_v, BV), BV)
+ p_gv = tl.max_contiguous(tl.multiple_of(g + (bos + i_t * BT + i_i * BC + j) * H*V + i_h * V + o_v, BV), BV)
+ # [BC,]
+ b_A = tl.load(A + o_A + j, mask=m_A, other=0)
+ # [BV,]
+ b_v = tl.load(p_v, mask=m_v, other=0).to(tl.float32)
+ b_gv = tl.load(p_gv, mask=m_v, other=0).to(tl.float32)
+ # [BC, BV]
+ b_vg = b_v[None, :] * tl.exp(b_g - b_gv[None, :])
+ # avoid 0 * inf = inf
+ b_o += tl.where(o_i[:, None] >= j, b_A[:, None] * b_vg, 0.)
+ if HEAD_FIRST:
+ p_o = tl.make_block_ptr(o + i_bh * T*V, (T, V), (V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ else:
+ p_o = tl.make_block_ptr(o + (bos*HQ + i_hq) * V, (T, V), (HQ*V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ b_o += tl.load(p_o, boundary_check=(0, 1))
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gsa_bwd_k_kernel_dA(
+ v,
+ g,
+ do,
+ dA,
+ indices,
+ offsets,
+ scale,
+ B: tl.constexpr,
+ T: tl.constexpr,
+ HQ: tl.constexpr,
+ H: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BV: tl.constexpr,
+ NC: tl.constexpr,
+ NG: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_bg = i_bh // NG
+ i_b, i_hq = i_bh // HQ, i_bh % HQ
+ i_h = i_hq // NG
+ i_t, i_i, i_j = i_c // (NC * NC), (i_c % (NC * NC)) // NC, (i_c % (NC * NC)) % NC
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ all = T
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+ all = B * T
+
+ o_v = i_v * BV + tl.arange(0, BV)
+ m_v = o_v < V
+
+ if i_t * BT + i_i * BC > T:
+ return
+
+ if HEAD_FIRST:
+ p_dA = tl.make_block_ptr(dA+(i_v*B*H+i_bh)*T*BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0))
+ else:
+ p_dA = tl.make_block_ptr(dA+((i_v*all+bos)*HQ+i_hq)*BT, (T, BT), (HQ*BT, 1), (i_t*BT+i_i*BC, i_j*BC), (BC, BC), (1, 0))
+
+ # [BC, BC]
+ b_dA = tl.zeros([BC, BC], dtype=tl.float32)
+ if i_i > i_j:
+ if HEAD_FIRST:
+ p_v = tl.make_block_ptr(v + i_bg * T*V, (V, T), (1, V), (i_v * BV, i_t * BT + i_j * BC), (BV, BC), (0, 1))
+ p_gv = tl.make_block_ptr(g + i_bg * T*V, (V, T), (1, V), (i_v * BV, i_t * BT + i_j * BC), (BV, BC), (0, 1))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + i_bg * T*V + (i_t * BT + i_i * BC) * V + o_v, BV), BV)
+ p_g = tl.make_block_ptr(g + i_bg * T*V, (T, V), (V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ else:
+ p_v = tl.make_block_ptr(v + (bos*H+i_h) * V, (V, T), (1, H*V), (i_v * BV, i_t*BT + i_j*BC), (BV, BC), (0, 1))
+ p_gv = tl.make_block_ptr(g + (bos*H+i_h) * V, (V, T), (1, H*V), (i_v * BV, i_t*BT + i_j*BC), (BV, BC), (0, 1))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + (bos + i_t*BT + i_i*BC) * H*V + i_h * V + o_v, BV), BV)
+ p_g = tl.make_block_ptr(g + (bos*H+i_h) * V, (T, V), (H*V, 1), (i_t*BT + i_i*BC, i_v*BV), (BC, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + (bos*HQ+i_hq) * V, (T, V), (HQ*V, 1), (i_t*BT + i_i*BC, i_v*BV), (BC, BV), (1, 0))
+ # [BV,]
+ b_gn = tl.load(p_gn, mask=m_v, other=0.)
+ # [BC, BV]
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_do = (b_do * tl.exp(b_g - b_gn[None, :]) * scale).to(b_do.dtype)
+ # [BV, BC]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_gv = tl.load(p_gv, boundary_check=(0, 1))
+ b_vg = (b_v * tl.exp(b_gn[:, None] - b_gv)).to(b_v.dtype)
+ # [BC, BC]
+ b_dA = tl.dot(b_do, b_vg)
+ elif i_i == i_j:
+ if HEAD_FIRST:
+ p_g = tl.make_block_ptr(g + i_bg * T*V, (T, V), (V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_v = tl.max_contiguous(tl.multiple_of(v + i_bg * T*V + (i_t * BT + i_j * BC) * V + o_v, BV), BV)
+ p_gv = tl.max_contiguous(tl.multiple_of(g + i_bg * T*V + (i_t * BT + i_j * BC) * V + o_v, BV), BV)
+ else:
+ p_g = tl.make_block_ptr(g + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t*BT + i_i*BC, i_v*BV), (BC, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + (bos*HQ + i_hq) * V, (T, V), (HQ*V, 1), (i_t*BT + i_i*BC, i_v*BV), (BC, BV), (1, 0))
+ p_v = tl.max_contiguous(tl.multiple_of(v + (bos + i_t*BT + i_j*BC) * H*V + i_h * V + o_v, BV), BV)
+ p_gv = tl.max_contiguous(tl.multiple_of(g + (bos + i_t*BT + i_j*BC) * H*V + i_h * V + o_v, BV), BV)
+ # [BC, BV]
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1)) * scale
+ m_v = o_v < V
+
+ o_i = tl.arange(0, BC)
+ # [BC, BC]
+ m_dA = o_i[:, None] >= o_i[None, :]
+ for j in range(0, min(BC, T - i_t * BT - i_j * BC)):
+ # [BV,]
+ b_v = tl.load(p_v, mask=m_v, other=0).to(tl.float32)
+ b_gv = tl.load(p_gv, mask=m_v, other=0).to(tl.float32)
+ # [BC,]
+ b_dAj = tl.sum(b_do * b_v[None, :] * tl.exp(b_g - b_gv[None, :]), 1)
+ b_dA = tl.where((o_i == j)[None, :], b_dAj[:, None], b_dA)
+
+ p_v += (1 if HEAD_FIRST else H) * V
+ p_gv += (1 if HEAD_FIRST else H) * V
+ b_dA = tl.where(m_dA, b_dA, 0.)
+ tl.store(p_dA, b_dA.to(dA.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gsa_bwd_k_kernel_dqkvg(
+ q,
+ k,
+ v,
+ h,
+ g,
+ A,
+ do,
+ dh,
+ dq,
+ dk,
+ dv,
+ dg,
+ dgv,
+ dA,
+ offsets,
+ indices,
+ scale,
+ B: tl.constexpr,
+ T: tl.constexpr,
+ HQ: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NG: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_bg = i_bh // NG
+ i_b, i_hq = i_bh // HQ, i_bh % HQ
+ i_h = i_hq // NG
+ if USE_OFFSETS:
+ i_tg = i_t
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ all = T
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ else:
+ NT = tl.cdiv(T, BT)
+ i_tg = i_b * NT + i_t
+ bos, eos = i_b * T, i_b * T + T
+ all = B * T
+
+ o_i = tl.arange(0, BT)
+ o_t = min(i_t * BT + BT, T)
+ m_s = o_i[:, None] >= o_i[None, :]
+
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bg * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_A = tl.make_block_ptr(A + (i_k*B*H+i_bh) * T*BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos*HQ+i_hq) * K, (T, K), (HQ*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + (bos*H+i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_A = tl.make_block_ptr(A + ((i_k*all+bos)*HQ+i_hq)*BT, (T, BT), (HQ*BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, BT]
+ b_A = tl.dot((b_q * scale).to(b_q.dtype), tl.trans(b_k))
+ b_A = tl.where(m_s, b_A, 0.)
+ tl.store(p_A, b_A.to(p_A.dtype.element_ty), boundary_check=(0, 1))
+
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+ for i_v in range(tl.cdiv(V, BV)):
+ o_v = i_v * BV + tl.arange(0, BV)
+ if HEAD_FIRST:
+ p_v = tl.make_block_ptr(v + i_bg * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_g = tl.make_block_ptr(g + i_bg * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + i_bg * T*V + (o_t - 1) * V + o_v, BV), BV)
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_k*B*H+i_bh) * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dg = tl.make_block_ptr(dg + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dgv = tl.make_block_ptr(dgv + (i_k*B*H+i_bh) * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + i_bg * NT*K*V + i_t * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ p_dh = tl.make_block_ptr(dh + i_bh * NT*K*V + i_t * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_v = tl.make_block_ptr(v + (bos*H+i_h)*V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_g = tl.make_block_ptr(g + (bos*H+i_h)*V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + (bos + o_t - 1) * H*V + i_h * V + o_v, BV), BV)
+ p_do = tl.make_block_ptr(do + (bos*HQ+i_hq)*V, (T, V), (HQ*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + ((i_k*all+bos)*HQ+i_hq)*V, (T, V), (HQ*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dg = tl.make_block_ptr(dg + (bos*HQ+i_hq)*V, (T, V), (HQ*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dgv = tl.make_block_ptr(dgv+((i_k*all+bos)*HQ+i_hq)*V, (T, V), (HQ*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + (i_tg * H + i_h) * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ p_dh = tl.make_block_ptr(dh + (i_tg * HQ + i_hq) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ m_v = o_v < V
+
+ # [BV,]
+ b_gn = tl.load(p_gn, mask=m_v, other=0)
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+ b_gv = tl.exp(b_gn[None, :] - b_g)
+ # [BV, BK]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ # [BT, BV]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_do = (b_do * tl.exp(b_g) * scale).to(b_do.dtype)
+ # [BK, BV]
+ b_dh = tl.load(p_dh, boundary_check=(0, 1))
+ # [BV]
+ b_dg = tl.sum(tl.trans(b_h) * b_dh, 0) * tl.exp(b_gn)
+
+ b_dh = b_dh.to(b_k.dtype)
+ # [BT, BK]
+ b_dq += tl.dot(b_do, b_h.to(b_k.dtype))
+ b_dk += tl.dot((b_v * b_gv).to(b_v.dtype), tl.trans(b_dh))
+ # [BT, BV]
+ b_dv = tl.dot(b_k, b_dh) * b_gv
+ # [BV]
+ b_dg += tl.sum(b_dv * b_v, 0)
+
+ if i_k == 0:
+ b_dgv = tl.load(p_dg, boundary_check=(0, 1)) + b_dg[None, :]
+ else:
+ b_dgv = tl.zeros([BT, BV], dtype=tl.float32) + b_dg[None, :]
+
+ tl.store(p_dgv, b_dgv.to(p_dgv.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+ if HEAD_FIRST:
+ p_dA = tl.make_block_ptr(dA + i_bh * T*BT, (T, BT, ), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ p_dq = tl.make_block_ptr(dq + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_dA = tl.make_block_ptr(dA + (bos*HQ + i_hq) * BT, (T, BT), (HQ*BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ p_dq = tl.make_block_ptr(dq + (bos*HQ + i_hq) * K, (T, K), (HQ*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (bos*HQ + i_hq) * K, (T, K), (HQ*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ # [BT, BT]
+ b_dA = tl.load(p_dA, boundary_check=(0, 1))
+ # [BT, BK]
+ b_dq += tl.dot(b_dA, b_k)
+ b_dk += tl.dot(tl.trans(b_dA).to(b_k.dtype), b_q)
+
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_gsa_bwd_k_kernel_intra_dvg(
+ v,
+ g,
+ o,
+ A,
+ do,
+ dv,
+ dg,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ HQ: tl.constexpr,
+ H: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BV: tl.constexpr,
+ NC: tl.constexpr,
+ NG: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_bg = i_bh // NG
+ i_b, i_hq = i_bh // HQ, i_bh % HQ
+ i_h = i_hq // NG
+ i_t, i_i = i_c // NC, i_c % NC
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ o_v = i_v * BV + tl.arange(0, BV)
+ m_v = o_v < V
+
+ if i_t * BT + i_i * BC > T:
+ return
+
+ if HEAD_FIRST:
+ p_gv = tl.make_block_ptr(g + i_bg * T*V, (T, V), (V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + i_bg * T*V + (min(i_t * BT + i_i * BC + BC, T) - 1) * V + o_v, BV), BV)
+ else:
+ p_gv = tl.make_block_ptr(g + (bos*H+i_h)*V, (T, V), (H*V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_gn = tl.max_contiguous(tl.multiple_of(g + (bos + min(i_t * BT + i_i * BC + BC, T)-1)*H*V + i_h*V + o_v, BV), BV)
+ # [BV,]
+ b_gn = tl.load(p_gn, mask=m_v, other=0)
+ # [BC, BV]
+ b_gv = tl.load(p_gv, boundary_check=(0, 1))
+ b_dv = tl.zeros([BC, BV], dtype=tl.float32)
+ for i_j in range(i_i + 1, NC):
+ if HEAD_FIRST:
+ p_g = tl.make_block_ptr(g + i_bg * T*V, (T, V), (V, 1), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0))
+ p_A = tl.make_block_ptr(A + i_bh * T*BT, (BT, T), (1, BT), (i_i * BC, i_t * BT + i_j * BC), (BC, BC), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0))
+ else:
+ p_g = tl.make_block_ptr(g + (bos*H+i_h) * V, (T, V), (H*V, 1), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0))
+ p_A = tl.make_block_ptr(A + (bos*HQ+i_hq) * BT, (BT, T), (1, HQ*BT), (i_i*BC, i_t*BT + i_j*BC), (BC, BC), (0, 1))
+ p_do = tl.make_block_ptr(do + (bos*HQ+i_hq) * V, (T, V), (HQ*V, 1), (i_t*BT + i_j*BC, i_v*BV), (BC, BV), (1, 0))
+ # [BC, BV]
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_do = (b_do * tl.exp(b_g - b_gn[None, :])).to(b_do.dtype)
+ # [BC, BC]
+ b_A = tl.load(p_A, boundary_check=(0, 1))
+ b_dv += tl.dot(b_A, b_do)
+ b_dv *= tl.exp(b_gn[None, :] - b_gv)
+
+ o_i = tl.arange(0, BC)
+ o_c = i_i * BC + tl.arange(0, BC)
+
+ if HEAD_FIRST:
+ p_g = tl.max_contiguous(tl.multiple_of(g + i_bg * T*V + (i_t * BT + i_i * BC) * V + o_v, BV), BV)
+ p_A = tl.max_contiguous(tl.multiple_of(A + i_bh * T*BT + (i_t * BT + i_i * BC) * BT + o_c, BC), BC)
+ p_do = tl.max_contiguous(tl.multiple_of(do + i_bh * T*V + (i_t * BT + i_i * BC) * V + o_v, BV), BV)
+ else:
+ p_g = tl.max_contiguous(tl.multiple_of(g + (bos + i_t * BT + i_i * BC) * H*V + i_h * V + o_v, BV), BV)
+ p_A = tl.max_contiguous(tl.multiple_of(A + (bos + i_t*BT + i_i*BC) * HQ*BT + i_hq * BT + o_c, BC), BC)
+ p_do = tl.max_contiguous(tl.multiple_of(do + (bos + i_t*BT + i_i*BC) * HQ*V + i_hq * V + o_v, BV), BV)
+
+ for j in range(0, min(BC, T - i_t * BT - i_i * BC)):
+ # [BC,]
+ b_A = tl.load(p_A)
+ # [BV,]
+ b_g = tl.load(p_g, mask=m_v, other=0)
+ b_do = tl.load(p_do, mask=m_v, other=0)
+ # [BC, BV]
+ m_i = o_i[:, None] <= j
+ b_dv += tl.where(m_i, tl.exp(b_g[None, :] - b_gv) * b_A[:, None] * b_do[None, :], 0.)
+
+ p_g += (1 if HEAD_FIRST else H) * V
+ p_A += (1 if HEAD_FIRST else HQ) * BT
+ p_do += (1 if HEAD_FIRST else HQ) * V
+ if HEAD_FIRST:
+ p_o = tl.make_block_ptr(o + i_bh * T*V, (T, V), (V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bg * T*V, (T, V), (V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + i_bh * T*V, (T, V), (V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ p_dg = tl.make_block_ptr(dg + i_bh * T*V, (T, V), (V, 1), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0))
+ else:
+ p_o = tl.make_block_ptr(o + (bos*HQ+i_hq)*V, (T, V), (HQ*V, 1), (i_t*BT + i_i*BC, i_v*BV), (BC, BV), (1, 0))
+ p_v = tl.make_block_ptr(v + (bos*H+i_h)*V, (T, V), (H*V, 1), (i_t*BT + i_i*BC, i_v*BV), (BC, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + (bos*HQ+i_hq)*V, (T, V), (HQ*V, 1), (i_t*BT + i_i*BC, i_v*BV), (BC, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (bos*HQ+i_hq)*V, (T, V), (HQ*V, 1), (i_t*BT + i_i*BC, i_v*BV), (BC, BV), (1, 0))
+ p_dg = tl.make_block_ptr(dg + (bos*HQ+i_hq)*V, (T, V), (HQ*V, 1), (i_t*BT + i_i*BC, i_v*BV), (BC, BV), (1, 0))
+
+ b_o = tl.load(p_o, boundary_check=(0, 1)).to(tl.float32)
+ b_v = tl.load(p_v, boundary_check=(0, 1)).to(tl.float32)
+ b_do = tl.load(p_do, boundary_check=(0, 1)).to(tl.float32)
+ b_dv = b_dv + tl.load(p_dv, boundary_check=(0, 1)).to(tl.float32)
+ b_dg = b_o * b_do - b_v * b_dv
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), boundary_check=(0, 1))
+
+
+def chunk_gsa_fwd_v(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ scale: float = 1.,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
+ _, A, h, ht, o = chunk_gla_fwd(
+ q=q,
+ k=k,
+ v=v,
+ g=None,
+ g_cumsum=g,
+ scale=scale,
+ initial_state=initial_state,
+ output_final_state=output_final_state,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ return A, h, ht, o
+
+
+def chunk_gsa_fwd_k(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ h0: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ scale: float = 1.,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BC = min(16, BT)
+ BK = min(64, triton.next_power_of_2(K))
+ BV = min(64, triton.next_power_of_2(V))
+ HQ = q.shape[1] if head_first else q.shape[2]
+ NV = triton.cdiv(V, BV)
+ NC = triton.cdiv(BT, BC)
+ NG = HQ // H
+ num_warps = 4 if BK == 64 else 2
+ num_stages = 1
+
+ h, ht = chunk_fwd_h(
+ k=k,
+ v=v,
+ g=None,
+ gk=None,
+ gv=g,
+ h0=h0,
+ output_final_state=output_final_state,
+ states_in_fp32=False,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ o = v.new_empty(B, *((HQ, T) if head_first else (T, HQ)), V)
+ A = q.new_empty(B, *((HQ, T) if head_first else (T, HQ)), BT)
+ grid = (NV, NT, B * HQ)
+ chunk_gsa_fwd_k_kernel_inter[grid](
+ q,
+ k,
+ h,
+ g,
+ o,
+ A,
+ offsets=offsets,
+ indices=indices,
+ scale=scale,
+ T=T,
+ HQ=HQ,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ NG=NG,
+ HEAD_FIRST=head_first,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ grid = (NV, NT * NC, B * HQ)
+ chunk_gsa_fwd_k_kernel_intra[grid](
+ v,
+ g,
+ o,
+ A,
+ offsets=offsets,
+ indices=indices,
+ T=T,
+ HQ=HQ,
+ H=H,
+ V=V,
+ BT=BT,
+ BC=BC,
+ BV=BV,
+ NC=NC,
+ NG=NG,
+ HEAD_FIRST=head_first,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ return A, h, ht, o
+
+
+def chunk_gsa_bwd_v(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ h0: torch.Tensor,
+ h: torch.Tensor,
+ A: torch.Tensor,
+ do: torch.Tensor,
+ dht: torch.Tensor,
+ dg: torch.Tensor,
+ scale: float = 1.,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+):
+ dq, dk, dv, dg, dh0 = chunk_gla_bwd(
+ q=q,
+ k=k,
+ v=v,
+ g=None,
+ g_cumsum=g,
+ scale=scale,
+ initial_state=h0,
+ h=h,
+ A=A,
+ do=do,
+ dht=dht,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ return dq, dk, dv, dg, dh0
+
+
+def chunk_gsa_bwd_k(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ h: torch.Tensor,
+ h0: torch.Tensor,
+ o: torch.Tensor,
+ do: torch.Tensor,
+ dht: torch.Tensor,
+ dg: torch.Tensor,
+ scale: float = 1.,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+):
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BC = min(16, BT)
+ BK = min(64, triton.next_power_of_2(K))
+ BV = min(64, triton.next_power_of_2(V))
+ HQ = q.shape[1] if head_first else q.shape[2]
+ NC = triton.cdiv(BT, BC)
+ NK = triton.cdiv(K, BK)
+ NV = triton.cdiv(V, BV)
+ NG = HQ // H
+ num_warps = 4 if BK == 64 else 2
+ num_stages = 1
+
+ if h is None:
+ h, _ = chunk_fwd_h(
+ k=k,
+ v=v,
+ g=None,
+ gk=None,
+ gv=g,
+ h0=h0,
+ output_final_state=False,
+ states_in_fp32=False,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ dh, dh0 = chunk_bwd_dh(
+ q=q,
+ k=k,
+ v=v,
+ g=None,
+ gk=None,
+ gv=g,
+ do=do,
+ h0=h0,
+ dht=dht,
+ scale=scale,
+ states_in_fp32=True,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=BT
+ )
+ dA = q.new_empty(NV, B, *((HQ, T) if head_first else (T, HQ)), BT)
+ grid = (NV, NT * NC * NC, B * HQ)
+ chunk_gsa_bwd_k_kernel_dA[grid](
+ v,
+ g,
+ do,
+ dA,
+ offsets=offsets,
+ indices=indices,
+ scale=scale,
+ B=B,
+ T=T,
+ HQ=HQ,
+ H=H,
+ V=V,
+ BT=BT,
+ BC=BC,
+ BV=BV,
+ NC=NC,
+ NG=NG,
+ HEAD_FIRST=head_first,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ dA = dA.sum(0, dtype=dA.dtype)
+
+ A = do.new_empty(NK, B, *((HQ, T) if head_first else (T, HQ)), BT)
+ dq = torch.empty_like(q)
+ dk = k.new_empty(B, *((HQ, T) if head_first else (T, HQ)), K)
+ dv = v.new_empty(NK, B, *((HQ, T) if head_first else (T, HQ)), V)
+ dgv = g.new_empty(NK, B, *((HQ, T) if head_first else (T, HQ)), V, dtype=torch.float)
+ grid = (NK, NT, B * HQ)
+ chunk_gsa_bwd_k_kernel_dqkvg[grid](
+ q,
+ k,
+ v,
+ h,
+ g,
+ A,
+ do,
+ dh,
+ dq,
+ dk,
+ dv,
+ dg,
+ dgv,
+ dA,
+ offsets=offsets,
+ indices=indices,
+ scale=scale,
+ B=B,
+ T=T,
+ HQ=HQ,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ NG=NG,
+ HEAD_FIRST=head_first,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ A = A.sum(0, dtype=A.dtype)
+ dv = dv.sum(0, dtype=dv.dtype)
+ dgv = dgv.sum(0, dtype=dgv.dtype)
+
+ grid = (NV, NT * NC, B * HQ)
+ chunk_gsa_bwd_k_kernel_intra_dvg[grid](
+ v,
+ g,
+ o,
+ A,
+ do,
+ dv,
+ dg,
+ offsets=offsets,
+ indices=indices,
+ T=T,
+ HQ=HQ,
+ H=H,
+ V=V,
+ BT=BT,
+ BC=BC,
+ BV=BV,
+ NC=NC,
+ NG=NG,
+ HEAD_FIRST=head_first,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ dg = dgv.add_(chunk_local_cumsum(dg, chunk_size=BT, reverse=True, offsets=offsets, indices=indices, head_first=head_first))
+
+ return dq, dk, dv, dg, dh0
+
+
+def chunk_gsa_fwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ initial_state: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
+ output_final_state: bool = False,
+ scale: float = 1.,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
+ hk0, hv0 = None, None
+ if initial_state is not None:
+ hk0, hv0 = initial_state
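+ # GSA forward runs in two passes: a GLA-style pass over (q, k, s) reading the
+ # K->M ("key") memory to produce slot logits `ok`, a softmax over the M slots,
+ # and a second pass over (p, s, v) reading the M->V ("value") memory to
+ # produce the final output `ov`.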
+ Ak, hk, hkt, ok = chunk_gsa_fwd_k(
+ q=q,
+ k=k,
+ v=s,
+ g=g,
+ h0=hk0,
+ output_final_state=output_final_state,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+
+ # p is kept in fp32 for safe softmax backward
+ p = softmax_fwd(ok, dtype=torch.float)
+
+ qv = p.to(q.dtype)
+ Av, hv, hvt, ov = chunk_gsa_fwd_v(
+ q=qv,
+ k=s,
+ v=v,
+ g=g,
+ scale=1.,
+ initial_state=hv0,
+ output_final_state=output_final_state,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ return Ak, hk, hkt, ok, p, Av, hv, hvt, ov
+
+
+def chunk_gsa_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ ok: torch.Tensor,
+ p: torch.Tensor,
+ A: Tuple[torch.Tensor, torch.Tensor],
+ h: Tuple[torch.Tensor, torch.Tensor],
+ initial_state: Optional[Tuple[torch.Tensor, torch.Tensor]],
+ scale: float,
+ do: torch.Tensor,
+ dht: Tuple[torch.Tensor, torch.Tensor],
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+):
+ hk0, hv0 = None, None
+ if initial_state is not None:
+ hk0, hv0 = initial_state
+
+ _, Av = A
+ hk, hv = h
+ dhkt, dhvt = dht
+
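+ # the backward mirrors the forward in reverse: differentiate the value pass
+ # first, push the result through the softmax, then differentiate the key pass
+ # while accumulating the gate gradients.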
+ qv = p.to(q.dtype)
+ dqv, dsv, dv, dg, dhv0 = chunk_gsa_bwd_v(
+ q=qv,
+ k=s,
+ v=v,
+ g=g,
+ h0=hv0,
+ h=hv,
+ A=Av,
+ do=do,
+ dht=dhvt,
+ dg=None,
+ scale=1.,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+
+ # softmax gradient, equivalent to:
+ # dok = qv * (dqv - (qv * dqv).sum(-1, True))
+ dok = softmax_bwd(p, dqv, dtype=ok.dtype)
+
+ dq, dk, dsk, dg, dhk0 = chunk_gsa_bwd_k(
+ q=q,
+ k=k,
+ v=s,
+ g=g,
+ h0=hk0,
+ h=hk,
+ o=ok,
+ do=dok,
+ dht=dhkt,
+ dg=dg,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+
+ ds = dsv.add_(dsk)
+ if q.shape[1] != k.shape[1]:
+ dk, dv, ds, dg = map(lambda x: reduce(x, 'b (h g) ... -> b h ...', 'sum', h=k.shape[1]), (dk, dv, ds, dg))
+ dg = dg.to(s.dtype)
+ return dq, dk, dv, ds, dg, dhk0, dhv0
+
+
+class ChunkGSAFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ scale: float,
+ hk0: Optional[torch.Tensor],
+ hv0: Optional[torch.Tensor],
+ output_final_state: bool,
+ checkpoint_level: int,
+ offsets: Optional[torch.LongTensor],
+ head_first: bool = True
+ ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
+ T = q.shape[2] if head_first else q.shape[1]
+ chunk_size = min(64, triton.next_power_of_2(T))
+
+ # 2-d indices denoting the offsets of chunks in each sequence
+ # for example, if the passed `offsets` is [0, 100, 356] and `chunk_size` is 64,
+ # then there are 2 and 4 chunks in the 1st and 2nd sequences respectively, and `indices` will be
+ # [[0, 0], [0, 1], [1, 0], [1, 1], [1, 2], [1, 3]]
+ indices = None
+ if offsets is not None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], chunk_size).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ g_org, g = g, chunk_local_cumsum(g, chunk_size, offsets=offsets, indices=indices, head_first=head_first)
+ Ak, hk, hkt, ok, p, Av, hv, hvt, ov = chunk_gsa_fwd(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ g=g,
+ initial_state=(hk0, hv0),
+ output_final_state=output_final_state,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+
+ if checkpoint_level >= 1:
+ del g
+ g = g_org
+ if checkpoint_level > 1:
+ del hk
+ del hv
+ hk, hv = None, None
+ else:
+ hk0, hv0 = None, None
+
+ ctx.save_for_backward(q, k, v, s, g, ok, p, Av, hk0, hv0, hk, hv)
+ ctx.checkpoint_level = checkpoint_level
+ ctx.scale = scale
+ ctx.offsets = offsets
+ ctx.indices = indices
+ ctx.head_first = head_first
+ ctx.chunk_size = chunk_size
+ return ov, hkt, hvt
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, dov, dhkt=None, dhvt=None):
+ q, k, v, s, g, ok, p, Av, hk0, hv0, hk, hv = ctx.saved_tensors
+ scale = ctx.scale
+ offsets = ctx.offsets
+ indices = ctx.indices
+ head_first = ctx.head_first
+ chunk_size = ctx.chunk_size
+
+ if ctx.checkpoint_level >= 1:
+ g = chunk_local_cumsum(g, chunk_size, offsets=offsets, indices=indices, head_first=head_first)
+ dq, dk, dv, ds, dg, dhk0, dhv0 = chunk_gsa_bwd(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ g=g,
+ ok=ok,
+ p=p,
+ A=(None, Av),
+ h=(hk, hv),
+ initial_state=(hk0, hv0),
+ scale=scale,
+ do=dov,
+ dht=(dhkt, dhvt),
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ return dq, dk, dv, ds, dg, None, dhk0, dhv0, None, None, None, None
+
+
+def chunk_gsa(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: Optional[torch.Tensor] = None,
+ scale: Optional[float] = None,
+ initial_state: Optional[Tuple[torch.Tensor]] = None,
+ output_final_state: Optional[bool] = False,
+ checkpoint_level: Optional[int] = 2,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: Optional[bool] = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, HQ, T, K]` if `head_first=True` else `[B, T, HQ, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ GQA is performed if `H` is not equal to `HQ`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ s (torch.Tensor):
+ slot representations of shape `[B, H, T, M]` if `head_first=True` else `[B, T, H, M]`.
+ g (torch.Tensor):
+ Forget gates of shape `[B, H, T, M]` applied to keys.
+ If not provided, this function is equivalent to vanilla ABC.
+ scale (Optional[float]):
+ Scale factor for attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[Tuple[torch.Tensor]]):
+ Initial state tuple having tensors of shape `[N, H, K, M]` and `[N, H, M, V]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state tuple, having tensors of shape `[N, H, K, M]` and `[N, H, M, V]`.
+ Default: `False`.
+ checkpoint_level (Optional[int]):
+ Checkpointing level; higher values save more memory but require more recomputation during the backward pass.
+ Default: `2`:
+ - Level `0`: no memory saved, no recomputation.
+ - Level `1`: recompute the fp32 cumulative values during backward.
+ - Level `2`: recompute the fp32 cumulative values and forward hidden states during backward.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format, which is not supported for variable-length inputs.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (Tuple[torch.Tensor]):
+ Final state tuple having tensors of shape `[N, H, K, M]` and `[N, H, M, V]` if `output_final_state=True`.
+ `None` otherwise.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+ >>> from fla.ops.gsa import chunk_gsa
+ # inputs with equal lengths
+ >>> B, T, H, K, V, M = 4, 2048, 4, 512, 512, 64
+ >>> q = torch.randn(B, T, H, K, device='cuda')
+ >>> k = torch.randn(B, T, H, K, device='cuda')
+ >>> v = torch.randn(B, T, H, V, device='cuda')
+ >>> s = torch.randn(B, T, H, M, device='cuda')
+ >>> g = F.logsigmoid(torch.randn(B, T, H, M, device='cuda'))
+ >>> h0 = (torch.randn(B, H, K, M, device='cuda'), torch.randn(B, H, M, V, device='cuda'))
+ >>> o, (hk, hv) = chunk_gsa(q, k, v, s, g,
+ initial_state=h0,
+ output_final_state=True,
+ head_first=False)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+ >>> q, k, v, s, g = map(lambda x: rearrange(x, 'b t h d -> 1 (b t) h d'), (q, k, v, s, g))
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, (hk_var, hv_var) = chunk_gsa(q, k, v, s, g,
+ initial_state=h0,
+ output_final_state=True,
+ offsets=offsets,
+ head_first=False)
+ >>> assert o.allclose(o_var.view(o.shape))
+ >>> assert hk.allclose(hk_var)
+ >>> assert hv.allclose(hv_var)
+ """
+ if offsets is not None:
+ if q.shape[0] != 1:
+ raise ValueError(f"The batch size is expected to be 1 rather than {q.shape[0]} when using `offsets`."
+ f"Please flatten variable-length inputs before processing.")
+ if head_first:
+ raise RuntimeError("Sequences with variable lengths are not supported for head-first mode")
+ if initial_state is not None and initial_state[0].shape[0] != len(offsets) - 1:
+ raise ValueError(f"The number of initial states is expected to be equal to the number of input sequences, "
+ f"i.e., {len(offsets) - 1} rather than {initial_state[0].shape[0]}.")
+ assert checkpoint_level in [0, 1, 2]
+ if g is None:
+ # TODO: these three steps take a huge amount of time and ought to be optimized
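+ # z is the running log-normalizer of the slot logits; `g` then holds the log
+ # forget gates (z_{t-1} - z_t) and `s` the causally softmax-normalized slot
+ # scores, which recovers vanilla ABC as noted in the docstring above.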
+ z = s.float().logcumsumexp(2)
+ g = torch.cat((z[:, :, :1], z[:, :, :-1]), 2) - z
+ s = torch.exp(s - z).to(k.dtype)
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+
+ hk0, hv0 = None, None
+ if initial_state is not None:
+ hk0, hv0 = initial_state
+ o, *final_state = ChunkGSAFunction.apply(
+ q,
+ k,
+ v,
+ s,
+ g,
+ scale,
+ hk0,
+ hv0,
+ output_final_state,
+ checkpoint_level,
+ offsets,
+ head_first
+ )
+ return o, final_state
diff --git a/fla/ops/gsa/fused_recurrent.py b/fla/ops/gsa/fused_recurrent.py
new file mode 100644
index 0000000000000000000000000000000000000000..bebc04c6fae7a521199a533d10caece277f34630
--- /dev/null
+++ b/fla/ops/gsa/fused_recurrent.py
@@ -0,0 +1,565 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.common.fused_recurrent import (fused_recurrent_bwd_kernel,
+ fused_recurrent_fwd_kernel)
+from fla.ops.utils import chunk_global_cumsum
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.jit
+def fused_recurrent_gsa_inference_kernel(
+ q,
+ k,
+ v,
+ s,
+ g,
+ o,
+ hk0,
+ hv0,
+ hkt,
+ hvt,
+ scale,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ M: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NG: tl.constexpr
+):
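+ # single decoding step: update the K->M memory with the current key/slot pair,
+ # score the query against it, softmax over the M slots, then update the M->V
+ # memory and read the output with the softmaxed slot scores.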
+ i_bh = tl.program_id(0)
+ i_bg = i_bh // NG
+
+ b_s = tl.load(s + i_bg * M + tl.arange(0, M)).to(tl.float32)
+ b_g = tl.load(g + i_bg * M + tl.arange(0, M)).to(tl.float32)
+ b_g = tl.exp(b_g)
+
+ b_ok = tl.zeros([M], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ o_k = i_k * BK + tl.arange(0, BK)
+
+ p_hk0 = hk0 + i_bg * K * M + (o_k[None, :]) * M + tl.arange(0, M)[:, None]
+ # [BK,]
+ mask_k = o_k < K
+ # [M, BK]
+ mask_hk = (tl.arange(0, M) < M)[:, None] & mask_k[None, :]
+ # [M, BK]
+ b_hk = tl.load(p_hk0, mask=mask_hk, other=0.).to(tl.float32)
+ # [BK,]
+ b_q = tl.load(q + i_bh * K + o_k, mask=mask_k, other=0.).to(tl.float32) * scale
+ b_k = tl.load(k + i_bg * K + o_k, mask=mask_k, other=0.).to(tl.float32)
+ b_hk = b_hk * b_g[:, None] + b_k[None, :] * b_s[:, None]
+ b_ok += tl.sum(b_hk * b_q[None, :], axis=1)
+
+ if i_bh % NG == 0:
+ p_hkt = hkt + i_bg * K * M + o_k[None, :] * M + tl.arange(0, M)[:, None]
+ tl.store(p_hkt, b_hk.to(p_hkt.dtype.element_ty), mask=mask_hk)
+
+ b_qv = tl.softmax(b_ok)
+ for i_v in range(tl.cdiv(V, BV)):
+ o_v = i_v * BV + tl.arange(0, BV)
+
+ p_hv0 = hv0 + i_bg * M * V + tl.arange(0, M)[None, :] * V + o_v[:, None]
+ # [BV,]
+ mask_v = o_v < V
+ # [BV, M]
+ mask_hv = mask_v[:, None] & (tl.arange(0, M) < M)[None, :]
+ # [BV, M]
+ b_hv = tl.load(p_hv0, mask=mask_hv, other=0).to(tl.float32)
+ # [BV,]
+ b_v = tl.load(v + i_bg * V + o_v, mask=mask_v, other=0).to(tl.float32)
+ b_hv = b_hv * b_g[None, :] + b_s[None, :] * b_v[:, None]
+ b_ov = tl.sum(b_hv * b_qv[None, :], axis=1)
+
+ tl.store(o + i_bh * V + o_v, b_ov.to(o.dtype.element_ty), mask=mask_v)
+
+ if i_bh % NG == 0:
+ p_hvt = hvt + i_bg * M * V + tl.arange(0, M)[None, :] * V + o_v[:, None]
+ tl.store(p_hvt, b_hv.to(p_hvt.dtype.element_ty), mask=mask_hv)
+
+
+def fused_recurrent_gsa_inference(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ initial_state: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
+ output_final_state: bool = False,
+ scale: float = 1.,
+ head_first: bool = True
+) -> torch.Tensor:
+ if head_first:
+ B, H, T, K, V, M = *k.shape, v.shape[-1], s.shape[-1]
+ else:
+ B, T, H, K, V, M = *k.shape, v.shape[-1], s.shape[-1]
+ HQ = q.shape[1] if head_first else q.shape[2]
+ BK, BV = min(triton.next_power_of_2(K), 64), min(triton.next_power_of_2(V), 64)
+ NG = HQ // H
+
+ hk0, hv0 = None, None
+ if initial_state is not None:
+ hk0, hv0 = initial_state
+ hkt, hvt = None, None
+ if output_final_state:
+ if NG == 1:
+ hkt, hvt = hk0, hv0
+ else:
+ hkt, hvt = q.new_empty(B, H, K, M, dtype=torch.float), q.new_empty(B, H, M, V, dtype=torch.float)
+
+ o = v.new_empty(B, HQ, T, V) if head_first else v.new_empty(B, T, HQ, V)
+ grid = (B * HQ,)
+ fused_recurrent_gsa_inference_kernel[grid](
+ q,
+ k,
+ v,
+ s,
+ g,
+ o,
+ hk0,
+ hv0,
+ hkt,
+ hvt,
+ scale=scale,
+ K=K,
+ V=V,
+ M=M,
+ BK=BK,
+ BV=BV,
+ NG=NG
+ )
+ return o, (hkt, hvt)
+
+
+def fused_recurrent_gsa_fwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ initial_state: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
+ output_final_state: bool = False,
+ scale: float = 1.,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, Tuple[torch.Tensor]]:
+ if head_first:
+ B, H, T, K, V, M = *k.shape, v.shape[-1], s.shape[-1]
+ else:
+ B, T, H, K, V, M = *k.shape, v.shape[-1], s.shape[-1]
+ N = B if offsets is None else len(offsets) - 1
+ HQ = q.shape[1] if head_first else q.shape[2]
+ if HQ != H:
+ raise ValueError("GQA not supported yet.")
+
+ BK, BV, BM = min(triton.next_power_of_2(K), 64), min(triton.next_power_of_2(V), 64), min(M, 64)
+ NK, NV, NM = triton.cdiv(K, BK), triton.cdiv(V, BV), triton.cdiv(M, BM)
+
+ hk0, hv0 = None, None
+ if initial_state is not None:
+ hk0, hv0 = initial_state
+ hkt, hvt = None, None
+ if output_final_state:
+ hkt, hvt = q.new_empty(N, H, K, M, dtype=torch.float), q.new_empty(N, H, M, V, dtype=torch.float)
+
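+ # as in the chunked version, the forward consists of two recurrent passes:
+ # one over (q, k, s) with gates on the slot dimension producing slot logits
+ # `ok`, then a softmax over slots, and one over (qv, s, v) with gates on the
+ # key dimension producing the output `ov`.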
+ ok = q.new_empty(NK, *s.shape, dtype=torch.float)
+ gk, gv = None, g
+ grid = (NM, NK, N * H)
+ fused_recurrent_fwd_kernel[grid](
+ q=q,
+ k=k,
+ v=s,
+ g=None,
+ gk=gk,
+ gv=gv,
+ o=ok,
+ h0=hk0,
+ ht=hkt,
+ offsets=offsets,
+ scale=scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ V=M,
+ BK=BK,
+ BV=BM,
+ USE_G=False,
+ USE_GK=False,
+ USE_GV=True,
+ REVERSE=reverse,
+ HEAD_FIRST=head_first
+ )
+ ok = ok.sum(0)
+
+ qv = ok.softmax(-1, dtype=torch.float)
+ ov = q.new_empty(NM, *v.shape, dtype=torch.float)
+ gk, gv = g, None
+ grid = (NV, NM, N * H)
+ fused_recurrent_fwd_kernel[grid](
+ q=qv,
+ k=s,
+ v=v,
+ g=None,
+ gk=gk,
+ gv=gv,
+ o=ov,
+ h0=hv0,
+ ht=hvt,
+ offsets=offsets,
+ scale=1.,
+ B=B,
+ T=T,
+ H=H,
+ K=M,
+ V=V,
+ BK=BM,
+ BV=BV,
+ USE_G=False,
+ USE_GK=True,
+ USE_GV=False,
+ REVERSE=reverse,
+ HEAD_FIRST=head_first
+ )
+ ov = ov.sum(0)
+ return ok, hkt, qv, ov, hvt
+
+
+def fused_recurrent_gsa_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ qv: torch.Tensor,
+ hk0: Optional[torch.Tensor] = None,
+ hv0: Optional[torch.Tensor] = None,
+ ok: Optional[torch.Tensor] = None,
+ do: Optional[torch.Tensor] = None,
+ dhkt: Optional[torch.Tensor] = None,
+ dhvt: Optional[torch.Tensor] = None,
+ scale: float = 1.,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor]:
+ if head_first:
+ B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1]
+ else:
+ B, T, H, K, V, M = *q.shape, v.shape[-1], s.shape[-1]
+ N = B if offsets is None else len(offsets) - 1
+
+ BK, BV, BM = min(K, 64), min(V, 64), min(M, 64)
+ NK, NV, NM = triton.cdiv(K, BK), triton.cdiv(V, BV), triton.cdiv(M, BM)
+
+ if head_first:
+ dqv = q.new_empty(NV, B, H, T, M, dtype=torch.float)
+ dsv = q.new_empty(NV, B, H, T, M, dtype=torch.float)
+ dv = q.new_empty(NM, B, H, T, V, dtype=torch.float)
+ else:
+ dqv = q.new_empty(NV, B, T, H, M, dtype=torch.float)
+ dsv = q.new_empty(NV, B, T, H, M, dtype=torch.float)
+ dv = q.new_empty(NM, B, T, H, V, dtype=torch.float)
+ dhk0 = torch.empty_like(hk0) if hk0 is not None else None
+ dhv0 = torch.empty_like(hv0) if hv0 is not None else None
+
+ gk, gv = g, None
+ grid = (NV, NM, N * H)
+ fused_recurrent_bwd_kernel[grid](
+ q=qv,
+ k=s,
+ v=v,
+ g=None,
+ gk=gk,
+ gv=gv,
+ h0=hv0,
+ do=do,
+ dq=dqv,
+ dk=dsv,
+ dv=dv,
+ dht=dhvt,
+ dh0=dhv0,
+ offsets=offsets,
+ scale=1.,
+ B=B,
+ T=T,
+ H=H,
+ K=M,
+ V=V,
+ BK=BM,
+ BV=BV,
+ USE_G=False,
+ USE_GK=True,
+ USE_GV=False,
+ REVERSE=reverse,
+ HEAD_FIRST=head_first
+ )
+ dqv = dqv.sum(0)
+ dsv = dsv.sum(0)
+ dv = dv.sum(0)
+ dgk = chunk_global_cumsum(dqv * qv.float() - dsv * s.float(),
+ reverse=not reverse,
+ offsets=offsets,
+ head_first=head_first)
+
+ dok = qv * (dqv - (qv * dqv).sum(-1, True))
+ if head_first:
+ dq = q.new_empty(NM, B, H, T, K, dtype=torch.float)
+ dk = q.new_empty(NM, B, H, T, K, dtype=torch.float)
+ dsk = q.new_empty(NK, B, H, T, M, dtype=torch.float)
+ else:
+ dq = q.new_empty(NM, B, T, H, K, dtype=torch.float)
+ dk = q.new_empty(NM, B, T, H, K, dtype=torch.float)
+ dsk = q.new_empty(NK, B, T, H, M, dtype=torch.float)
+ gk, gv = None, g
+ grid = (NM, NK, N * H)
+ fused_recurrent_bwd_kernel[grid](
+ q=q,
+ k=k,
+ v=s,
+ g=None,
+ gk=gk,
+ gv=gv,
+ h0=hk0,
+ do=dok,
+ dq=dq,
+ dk=dk,
+ dv=dsk,
+ dht=dhkt,
+ dh0=dhk0,
+ offsets=offsets,
+ scale=scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ V=M,
+ BK=BK,
+ BV=BM,
+ USE_G=False,
+ USE_GK=False,
+ USE_GV=True,
+ REVERSE=reverse,
+ HEAD_FIRST=head_first
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dsk = dsk.sum(0)
+
+ dgv = chunk_global_cumsum(dok.float() * ok.float() - dsk * s.float(),
+ reverse=not reverse,
+ offsets=offsets,
+ head_first=head_first)
+
+ ds = dsk.add_(dsv)
+ dg = dgk.add_(dgv)
+
+ return dq, dk, dv, ds, dg, dhk0, dhv0
+
+
+class FusedRecurrentGSAFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(
+ ctx,
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ scale: Optional[float] = None,
+ hk0: Optional[torch.Tensor] = None,
+ hv0: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+ ) -> Tuple[torch.Tensor, Tuple[torch.Tensor]]:
+ T = q.shape[2] if head_first else q.shape[1]
+ if T == 1 and not q.requires_grad:
+ o, (hkt, hvt) = fused_recurrent_gsa_inference(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ g=g,
+ initial_state=(hk0, hv0),
+ output_final_state=output_final_state,
+ scale=scale,
+ head_first=head_first
+ )
+ return o, (hkt, hvt)
+ ok, hkt, qv, ov, hvt = fused_recurrent_gsa_fwd(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ g=g,
+ initial_state=(hk0, hv0),
+ output_final_state=output_final_state,
+ scale=scale,
+ reverse=reverse,
+ offsets=offsets,
+ head_first=head_first
+ )
+ ctx.save_for_backward(q, k, v, s, g, qv, hk0, hv0, ok)
+ ctx.scale = scale
+ ctx.reverse = reverse
+ ctx.offsets = offsets
+ ctx.head_first = head_first
+ return ov.to(q.dtype), hkt, hvt
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dhkt=None, dhvt=None):
+ q, k, v, s, g, qv, hk0, hv0, ok = ctx.saved_tensors
+ scale = ctx.scale
+ reverse = ctx.reverse
+ offsets = ctx.offsets
+ head_first = ctx.head_first
+
+ # gradients w.r.t. the final states combined with trainable gates are not supported yet.
+ if dhkt is not None or dhvt is not None:
+ if g is not None:
+ assert g.requires_grad is False, "Cannot load final state gradient and use gates at the same time"
+ dq, dk, dv, ds, dg, dhk0, dhv0 = fused_recurrent_gsa_bwd(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ g=g,
+ qv=qv,
+ hk0=hk0,
+ hv0=hv0,
+ ok=ok,
+ do=do,
+ dhkt=dhkt,
+ dhvt=dhvt,
+ scale=scale,
+ reverse=reverse,
+ offsets=offsets,
+ head_first=head_first
+ )
+ return dq.to(q), dk.to(k), dv.to(v), ds.to(s), dg.to(g), None, dhk0, dhv0, None, None, None, None
+
+
+def fused_recurrent_gsa(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: Optional[torch.Tensor] = None,
+ scale: Optional[float] = None,
+ initial_state: Optional[Tuple[torch.Tensor]] = None,
+ output_final_state: Optional[bool] = False,
+ reverse: Optional[bool] = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ s (torch.Tensor):
+ slot representations of shape `[B, H, T, M]` if `head_first=True` else `[B, T, H, M]`.
+ g (torch.Tensor):
+ Forget gates of shape `[B, H, T, M]` applied to keys.
+ scale (Optional[float]):
+ Scale factor for the attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[Tuple[torch.Tensor]]):
+ Initial state tuple having tensors of shape `[N, H, K, M]` and `[N, H, M, V]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state tuple, having tensors of shape `[N, H, K, M]` and `[N, H, M, V]`.
+ Default: `False`.
+ reverse (Optional[bool]):
+ If `True`, process the state passing in reverse order. Default: `False`.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format, which is not supported for variable-length inputs.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (Tuple[torch.Tensor]):
+ Final state tuple having tensors of shape `[N, H, K, M]` and `[N, H, M, V]`.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+ >>> from fla.ops.gsa import fused_recurrent_gsa
+ # inputs with equal lengths
+ >>> B, T, H, K, V, M = 4, 2048, 4, 512, 512, 64
+ >>> q = torch.randn(B, T, H, K, device='cuda')
+ >>> k = torch.randn(B, T, H, K, device='cuda')
+ >>> v = torch.randn(B, T, H, V, device='cuda')
+ >>> s = torch.randn(B, T, H, M, device='cuda')
+ >>> g = F.logsigmoid(torch.randn(B, T, H, M, device='cuda'))
+ >>> h0 = (torch.randn(B, H, K, M, device='cuda'), torch.randn(B, H, M, V, device='cuda'))
+ >>> o, (hk, hv) = fused_recurrent_gsa(q, k, v, s, g,
+ initial_state=h0,
+ output_final_state=True,
+ head_first=False)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+ >>> q, k, v, s, g = map(lambda x: rearrange(x, 'b t h d -> 1 (b t) h d'), (q, k, v, s, g))
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, (hk_var, hv_var) = fused_recurrent_gsa(q, k, v, s, g,
+ initial_state=h0,
+ output_final_state=True,
+ offsets=offsets,
+ head_first=False)
+ >>> assert o.allclose(o_var.view(o.shape))
+ >>> assert hk.allclose(hk_var)
+ >>> assert hv.allclose(hv_var)
+ """
+ if offsets is not None:
+ if q.shape[0] != 1:
+ raise ValueError(f"The batch size is expected to be 1 rather than {q.shape[0]} when using `offsets`."
+ f"Please flatten variable-length inputs before processing.")
+ if head_first:
+ raise RuntimeError("Sequences with variable lengths are not supported for head-first mode")
+ if initial_state is not None and initial_state[0].shape[0] != len(offsets) - 1:
+ raise ValueError(f"The number of initial states is expected to be equal to the number of input sequences, "
+ f"i.e., {len(offsets) - 1} rather than {initial_state[0].shape[0]}.")
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ if initial_state is None:
+ initial_state = (None, None)
+ o, final_state = FusedRecurrentGSAFunction.apply(
+ q,
+ k,
+ v,
+ s,
+ g,
+ scale,
+ *initial_state,
+ output_final_state,
+ reverse,
+ offsets,
+ head_first
+ )
+ return o, final_state
diff --git a/fla/ops/gsa/naive.py b/fla/ops/gsa/naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..699b2a4d1e5b4b8415a98c27da142a6130797685
--- /dev/null
+++ b/fla/ops/gsa/naive.py
@@ -0,0 +1,68 @@
+# -*- coding: utf-8 -*-
+
+from typing import Optional
+
+import torch
+from einops import repeat
+
+
+def naive_recurrent_gsa(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: Optional[torch.Tensor] = None,
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: Optional[bool] = False
+) -> torch.Tensor:
+ dtype = q.dtype
+
+ NG = q.shape[1]//k.shape[1]
+ # [batch_size, n_heads, seq_len, n_slots]
+ if g is None:
+ z = s.float().logcumsumexp(2)
+ g = torch.cat((z[:, :, :1], z[:, :, :-1]), 2) - z
+ s = torch.exp(s - z)
+ q, k, v, s, g = map(lambda x: x.float(), (q, k, v, s, g))
+ k, v, s, g = map(lambda x: repeat(x, 'b h t d -> b (h g) t d', g=NG), (k, v, s, g))
+ if initial_state is not None:
+ initial_state = tuple(map(lambda x: repeat(x, 'b h k v -> b (h g) k v', g=NG), initial_state))
+
+ B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1]
+
+ hk = torch.zeros(B, H, K, M, dtype=torch.float, device=q.device)
+ ok = torch.zeros_like(s)
+
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+
+ final_state = None
+ if initial_state is not None:
+ hk += initial_state[0]
+
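+ # first recurrence: accumulate the K->M memory and compute the slot logits `ok`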
+ for i in range(T):
+ q_i = q[:, :, i] * scale
+ k_i = k[:, :, i]
+ v_i = s[:, :, i]
+ g_i = g[:, :, i].exp()
+ hk = hk * g_i[..., None, :] + k_i[..., None] * v_i[..., None, :]
+ ok[:, :, i] = (q_i[..., None] * hk).sum(-2)
+
+ qv = ok.softmax(-1)
+ hv = torch.zeros(B, H, M, V, dtype=torch.float, device=q.device)
+ ov = torch.zeros_like(v)
+ if initial_state is not None:
+ hv += initial_state[1]
+
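+ # second recurrence: accumulate the M->V memory and read it out with the
+ # softmaxed slot scores `qv`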
+ for i in range(T):
+ q_i = qv[:, :, i]
+ k_i = s[:, :, i]
+ v_i = v[:, :, i]
+ g_i = g[:, :, i].exp()
+ hv = hv * g_i[..., :, None] + k_i[..., None] * v_i[..., None, :]
+ ov[:, :, i] = (q_i[..., None] * hv).sum(-2)
+
+ if output_final_state:
+ final_state = (hk.view(B, -1, NG, K, M)[:, :, 0], hv.view(B, -1, NG, M, V)[:, :, 0])
+ return ov.to(dtype), final_state
diff --git a/fla/ops/hgrn/__init__.py b/fla/ops/hgrn/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..f2012c3c15f125271df225ce755ed3b2dbe01a83
--- /dev/null
+++ b/fla/ops/hgrn/__init__.py
@@ -0,0 +1,9 @@
+# -*- coding: utf-8 -*-
+
+from .chunk import chunk_hgrn
+from .fused_recurrent import fused_recurrent_hgrn
+
+__all__ = [
+ 'chunk_hgrn',
+ 'fused_recurrent_hgrn'
+]
diff --git a/fla/ops/hgrn/chunk.py b/fla/ops/hgrn/chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..5d71cc90da258ff6a1112b0097ae686ed35d2b95
--- /dev/null
+++ b/fla/ops/hgrn/chunk.py
@@ -0,0 +1,289 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+# this file implements the chunkwise form of HGRN, inspired by
+# [Volodymyr Kyrylov's blog post](https://proger.github.io/posts/scan/chunk.html);
+# also refer to the `accelerated-scan` lib: https://github.com/proger/accelerated-scan
+
+# from tests on an H800 with B, D = 16, 128, the chunkwise form is considerably faster than the recurrent one:
+#
+# Performance:
+# seq_len chunk recurrent chunk_bwd recurrent_bwd
+# 0 128.0 0.039360 0.061056 0.312160 0.205008
+# 1 256.0 0.045824 0.123712 0.308784 0.297696
+# 2 512.0 0.058688 0.241952 0.310720 0.626528
+# 3 1024.0 0.088288 0.476992 0.313184 1.333152
+# 4 2048.0 0.169472 0.943264 0.452464 2.724864
+# 5 4096.0 0.329920 1.886144 0.881600 5.551520
+# 6 8192.0 0.647872 3.755040 1.740496 11.117184
+# 7 16384.0 1.272064 7.520576 3.446608 22.362528
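+
+# As a point of reference, a naive (illustrative, assumed) sequential version of
+# the recurrence these kernels compute could look like:
+#
+#     def naive_hgrn(x, g, h0=None):
+#         # x, g: [B, T, D]; recurrence: h[t] = exp(g[t]) * h[t-1] + x[t]
+#         B, T, D = x.shape
+#         h = x.new_zeros(B, D) if h0 is None else h0
+#         o = torch.empty_like(x)
+#         for t in range(T):
+#             h = g[:, t].float().exp() * h + x[:, t]
+#             o[:, t] = h
+#         return o
+#
+# The chunkwise form below first runs this scan independently inside each chunk
+# of length `BT` while storing the cumulative gates `gc` (`chunk_hgrn_fwd_kernel_h`),
+# then propagates each chunk's final state into the next chunk via
+# `o[t] += exp(gc[t]) * h_boundary` (`chunk_hgrn_fwd_kernel_o`).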
+
+from typing import Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({'BD': 32}, num_warps=1),
+ triton.Config({'BD': 32}, num_warps=2),
+ triton.Config({'BD': 32}, num_warps=4),
+ triton.Config({'BD': 32}, num_warps=8),
+ triton.Config({'BD': 64}, num_warps=1),
+ triton.Config({'BD': 64}, num_warps=2),
+ triton.Config({'BD': 64}, num_warps=4),
+ triton.Config({'BD': 64}, num_warps=8),
+ triton.Config({'BD': 128}, num_warps=1),
+ triton.Config({'BD': 128}, num_warps=2),
+ triton.Config({'BD': 128}, num_warps=4),
+ triton.Config({'BD': 128}, num_warps=8),
+ ],
+ key=['D']
+)
+@triton.jit
+def chunk_hgrn_fwd_kernel_h(
+ x,
+ g,
+ gc,
+ o,
+ h0,
+ T: tl.constexpr,
+ D: tl.constexpr,
+ BT: tl.constexpr,
+ BD: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr
+):
+ i_d, i_t, i_b = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ o_d = i_d * BD + tl.arange(0, BD)
+ mask = o_d < D
+
+ p_x = x + i_b * T * D + i_t * BT * D + o_d
+ p_g = g + i_b * T * D + i_t * BT * D + o_d
+ p_gc = gc + i_b * T * D + i_t * BT * D + o_d
+ p_o = o + i_b * T * D + i_t * BT * D + o_d
+
+ b_h = tl.zeros([BD], dtype=tl.float32)
+ b_gc = tl.zeros([BD], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ if i_t == 0:
+ b_h += tl.load(h0 + i_b * D + o_d, mask=mask, other=0).to(tl.float32)
+ for i in range(0, BT):
+ mask_t = mask & ((i_t * BT + i) < T)
+ b_x = tl.load(p_x, mask=mask_t, other=0).to(tl.float32)
+ b_g = tl.load(p_g, mask=mask_t, other=0).to(tl.float32)
+ b_h = tl.exp(b_g) * b_h + b_x
+ b_gc = b_gc + b_g
+ tl.store(p_gc, b_gc.to(p_o.dtype.element_ty), mask=mask_t)
+ tl.store(p_o, b_h.to(p_o.dtype.element_ty), mask=mask_t)
+
+ p_x += D
+ p_g += D
+ p_gc += D
+ p_o += D
+
+
+@triton.jit
+def chunk_hgrn_fwd_kernel_o(
+ gc,
+ o,
+ s_b,
+ s_t,
+ s_d,
+ T: tl.constexpr,
+ D: tl.constexpr,
+ BT: tl.constexpr,
+ BD: tl.constexpr
+):
+ i_d, i_b = tl.program_id(0), tl.program_id(1)
+ o_d = i_d * BD + tl.arange(0, BD)
+ mask = o_d < D
+
+ for i_t in range(1, tl.cdiv(T, BT)):
+ p_gc = tl.make_block_ptr(gc + i_b * s_b, (T, D), (s_t, s_d), (i_t * BT, i_d * BD), (BT, BD), (1, 0))
+ p_o = tl.make_block_ptr(o + i_b * s_b, (T, D), (s_t, s_d), (i_t * BT, i_d * BD), (BT, BD), (1, 0))
+
+ # [BD,]
+ b_h0 = tl.load(o + i_b * T * D + i_t * BT * D - D + o_d, mask=mask, other=0).to(tl.float32)
+ # [BT, BD]
+ b_gc = tl.load(p_gc, boundary_check=(0, 1)).to(tl.float32)
+ b_o = tl.load(p_o, boundary_check=(0, 1)).to(tl.float32)
+ b_o = b_o + tl.exp(b_gc) * b_h0[None, :]
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({'BD': 32}, num_warps=1),
+ triton.Config({'BD': 32}, num_warps=2),
+ triton.Config({'BD': 32}, num_warps=4),
+ triton.Config({'BD': 32}, num_warps=8),
+ triton.Config({'BD': 64}, num_warps=1),
+ triton.Config({'BD': 64}, num_warps=2),
+ triton.Config({'BD': 64}, num_warps=4),
+ triton.Config({'BD': 64}, num_warps=8),
+ triton.Config({'BD': 128}, num_warps=1),
+ triton.Config({'BD': 128}, num_warps=2),
+ triton.Config({'BD': 128}, num_warps=4),
+ triton.Config({'BD': 128}, num_warps=8),
+ ],
+ key=['D']
+)
+@triton.jit
+def chunk_hgrn_bwd_kernel_h(
+ g,
+ gc,
+ dx,
+ do,
+ T: tl.constexpr,
+ D: tl.constexpr,
+ BT: tl.constexpr,
+ BD: tl.constexpr
+):
+ i_d, i_t, i_b = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ o_d = i_d * BD + tl.arange(0, BD)
+ mask = o_d < D
+ BC = min(BT, T - i_t * BT)
+ NT = tl.num_programs(1)
+
+ p_g = g + (i_b * T + i_t * BT + BC - 1) * D + o_d
+ p_gc = gc + (i_b * T + i_t * BT + BC - 1) * D + o_d
+ p_dx = dx + (i_b * T + i_t * BT + BC - 1) * D + o_d
+ p_do = do + (i_b * T + i_t * BT + BC - 1) * D + o_d
+
+ if i_t == NT - 1:
+ b_gc = tl.zeros([BD], dtype=tl.float32)
+ else:
+ b_gc = tl.load(g + (i_b * T + i_t * BT + BT) * D + o_d, mask=mask, other=0).to(tl.float32)
+ b_dh = tl.zeros([BD], dtype=tl.float32)
+ for _ in range(BC - 1, -1, -1):
+ tl.store(p_gc, b_gc.to(p_gc.dtype.element_ty), mask=mask)
+
+ b_g = tl.load(p_g, mask=mask, other=0).to(tl.float32)
+ b_do = tl.load(p_do, mask=mask, other=0).to(tl.float32)
+
+ b_gc = b_gc + b_g
+ b_dh = b_dh + b_do
+ b_dx = b_dh
+ b_dh = b_dh * tl.exp(b_g)
+
+ tl.store(p_dx, b_dx.to(p_dx.dtype.element_ty), mask=mask)
+
+ p_g -= D
+ p_gc -= D
+ p_dx -= D
+ p_do -= D
+
+
+@triton.jit
+def chunk_hgrn_bwd_kernel_o(
+ g,
+ gc,
+ o,
+ dx,
+ dg,
+ s_b,
+ s_t,
+ s_d,
+ T: tl.constexpr,
+ D: tl.constexpr,
+ BT: tl.constexpr,
+ BD: tl.constexpr
+):
+ i_d, i_b = tl.program_id(0), tl.program_id(1)
+ o_d = i_d * BD + tl.arange(0, BD)
+ mask = o_d < D
+
+ for i_t in range(tl.cdiv(T, BT) - 1, -1, -1):
+ p_g = tl.make_block_ptr(g + i_b * s_b, (T, D), (s_t, s_d), (i_t * BT, i_d * BD), (BT, BD), (1, 0))
+ p_gc = tl.make_block_ptr(gc + i_b * s_b, (T, D), (s_t, s_d), (i_t * BT, i_d * BD), (BT, BD), (1, 0))
+ p_o = tl.make_block_ptr(o + i_b * s_b, (T, D), (s_t, s_d), (i_t * BT - 1, i_d * BD), (BT, BD), (1, 0))
+ p_dx = tl.make_block_ptr(dx + i_b * s_b, (T, D), (s_t, s_d), (i_t * BT, i_d * BD), (BT, BD), (1, 0))
+ p_dg = tl.make_block_ptr(dg + i_b * s_b, (T, D), (s_t, s_d), (i_t * BT, i_d * BD), (BT, BD), (1, 0))
+
+ # [BD,]
+ mask_t = mask & ((i_t + 1) * BT < T)
+ b_ht = tl.load(dx + i_b * T * D + (i_t + 1) * BT * D + o_d, mask=mask_t, other=0).to(tl.float32)
+ # [BT, BD]
+ b_g = tl.load(p_g, boundary_check=(0, 1)).to(tl.float32)
+ b_gc = tl.load(p_gc, boundary_check=(0, 1)).to(tl.float32)
+ b_o = tl.load(p_o, boundary_check=(0, 1)).to(tl.float32)
+ b_dx = tl.load(p_dx, boundary_check=(0, 1)).to(tl.float32)
+
+ b_dx = b_dx + tl.exp(b_gc) * b_ht[None, :]
+ b_dg = b_o * b_dx * tl.exp(b_g)
+ tl.store(p_dx, b_dx.to(p_dx.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), boundary_check=(0, 1))
+
+
+class ChunkHGRNFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(ctx, x, g, initial_state=None, output_final_state=False):
+ B, T, D = x.shape
+ BT, BD = 128, min(64, triton.next_power_of_2(D))
+ num_warps = 8 if BD == 64 else 4
+
+ gc = torch.empty_like(g, dtype=torch.float)
+ o = torch.empty_like(x, dtype=torch.float)
+ def grid(meta): return (triton.cdiv(D, meta['BD']), triton.cdiv(T, meta['BT']), B)
+ chunk_hgrn_fwd_kernel_h[grid](
+ x, g, gc, o, initial_state,
+ T=T, D=D, BT=BT,
+ USE_INITIAL_STATE=initial_state is not None
+ )
+ def grid(meta): return (triton.cdiv(D, meta['BD']), B)
+ chunk_hgrn_fwd_kernel_o[grid](
+ gc, o,
+ o.stride(-3), o.stride(-2), o.stride(-1),
+ T=T, D=D, BT=BT, BD=BD,
+ num_warps=num_warps
+ )
+ final_state = None
+ if output_final_state:
+ final_state = o[:, -1].clone()
+ o = o.to(x.dtype)
+ ctx.save_for_backward(g, o, initial_state)
+ return o, final_state
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, do, dht=None):
+ g, o, initial_state = ctx.saved_tensors
+ B, T, D = do.shape
+ BT, BD = 128, min(64, triton.next_power_of_2(D))
+ num_warps = 8 if BD == 64 else 4
+
+ gc = torch.empty_like(g, dtype=torch.float)
+ dx = torch.empty_like(o, dtype=torch.float)
+ def grid(meta): return (triton.cdiv(D, meta['BD']), triton.cdiv(T, meta['BT']), B)
+ chunk_hgrn_bwd_kernel_h[grid](
+ g, gc, dx, do,
+ T=T, D=D, BT=BT
+ )
+
+ dg = torch.empty_like(g, dtype=torch.float)
+ def grid(meta): return (triton.cdiv(D, meta['BD']), B)
+ chunk_hgrn_bwd_kernel_o[grid](
+ g, gc, o, dx, dg,
+ o.stride(-3), o.stride(-2), o.stride(-1),
+ T=T, D=D, BT=BT, BD=BD,
+ num_warps=num_warps
+ )
+ if initial_state is not None:
+ dg[:, 0] = (initial_state * dx[:, 0] * g[:, 0].float().exp()).to(dg.dtype)
+
+ return dx.to(o.dtype), dg, None, None
+
+
+def chunk_hgrn(
+ x: torch.Tensor,
+ g: torch.Tensor,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ return ChunkHGRNFunction.apply(x, g, initial_state, output_final_state)
diff --git a/fla/ops/hgrn/fused_recurrent.py b/fla/ops/hgrn/fused_recurrent.py
new file mode 100644
index 0000000000000000000000000000000000000000..3a88980db42b59820a771ee742bfc13675599bbe
--- /dev/null
+++ b/fla/ops/hgrn/fused_recurrent.py
@@ -0,0 +1,327 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({'BD': 32}, num_warps=1),
+ triton.Config({'BD': 32}, num_warps=2),
+ triton.Config({'BD': 32}, num_warps=4),
+ triton.Config({'BD': 32}, num_warps=8),
+ triton.Config({'BD': 64}, num_warps=1),
+ triton.Config({'BD': 64}, num_warps=2),
+ triton.Config({'BD': 64}, num_warps=4),
+ triton.Config({'BD': 64}, num_warps=8),
+ triton.Config({'BD': 128}, num_warps=1),
+ triton.Config({'BD': 128}, num_warps=2),
+ triton.Config({'BD': 128}, num_warps=4),
+ triton.Config({'BD': 128}, num_warps=8),
+ ],
+ key=['D']
+)
+@triton.heuristics({
+ 'USE_INITIAL_STATE': lambda args: args['h0'] is not None,
+ 'STORE_FINAL_STATE': lambda args: args['ht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.jit
+def fused_recurrent_hgrn_fwd_kernel(
+ x,
+ g,
+ o,
+ h0,
+ ht,
+ offsets,
+ T: tl.constexpr,
+ D: tl.constexpr,
+ BD: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr,
+ STORE_FINAL_STATE: tl.constexpr,
+ USE_OFFSETS: tl.constexpr
+):
+ i_d, i_n = tl.program_id(0), tl.program_id(1)
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int64), tl.load(offsets + i_n + 1).to(tl.int64)
+ T = eos - bos
+ else:
+ bos, eos = i_n * T, i_n * T + T
+
+ o_d = i_d * BD + tl.arange(0, BD)
+ mask = o_d < D
+
+ p_x = x + bos * D + o_d
+ p_g = g + bos * D + o_d
+ p_o = o + bos * D + o_d
+
+ b_h = tl.zeros([BD], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h0 = h0 + i_n * D + o_d
+ b_h += tl.load(p_h0, mask=mask, other=0).to(tl.float32)
+ for _ in range(0, T):
+ b_x = tl.load(p_x, mask=mask, other=0).to(tl.float32)
+ b_g = tl.load(p_g, mask=mask, other=0).to(tl.float32)
+ b_h = tl.exp(b_g) * b_h + b_x
+ tl.store(p_o, b_h.to(p_o.dtype.element_ty), mask=mask)
+
+ p_x += D
+ p_g += D
+ p_o += D
+
+ if STORE_FINAL_STATE:
+ p_ht = ht + i_n * D + o_d
+ tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), mask=mask)
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({'BD': 32}, num_warps=1),
+ triton.Config({'BD': 32}, num_warps=2),
+ triton.Config({'BD': 32}, num_warps=4),
+ triton.Config({'BD': 32}, num_warps=8),
+ triton.Config({'BD': 64}, num_warps=1),
+ triton.Config({'BD': 64}, num_warps=2),
+ triton.Config({'BD': 64}, num_warps=4),
+ triton.Config({'BD': 64}, num_warps=8),
+ triton.Config({'BD': 128}, num_warps=1),
+ triton.Config({'BD': 128}, num_warps=2),
+ triton.Config({'BD': 128}, num_warps=4),
+ triton.Config({'BD': 128}, num_warps=8),
+ ],
+ key=['D']
+)
+@triton.heuristics({
+ 'USE_INITIAL_STATE': lambda args: args['h0'] is not None,
+ 'USE_FINAL_STATE_GRADIENT': lambda args: args['dht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.jit
+def fused_recurrent_hgrn_bwd_kernel(
+ g,
+ o,
+ h0,
+ dx,
+ dg,
+ do,
+ dht,
+ dh0,
+ offsets,
+ T: tl.constexpr,
+ D: tl.constexpr,
+ BD: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr,
+ USE_FINAL_STATE_GRADIENT: tl.constexpr,
+ USE_OFFSETS: tl.constexpr
+):
+ i_d, i_n = tl.program_id(0), tl.program_id(1)
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int64), tl.load(offsets + i_n + 1).to(tl.int64)
+ T = eos - bos
+ else:
+ bos, eos = i_n * T, i_n * T + T
+
+ o_d = i_d * BD + tl.arange(0, BD)
+ mask = o_d < D
+
+ p_g = g + (bos + T - 1) * D + o_d
+ p_o = o + (bos + T - 2) * D + o_d
+ p_dx = dx + (bos + T - 1) * D + o_d
+ p_dg = dg + (bos + T - 1) * D + o_d
+ p_do = do + (bos + T - 1) * D + o_d
+
+ b_dh = tl.zeros([BD], dtype=tl.float32)
+ if USE_FINAL_STATE_GRADIENT:
+ p_dht = dht + i_n * D + o_d
+ b_dh += tl.load(p_dht, mask=mask, other=0).to(tl.float32)
+
+ for i in range(T - 1, -1, -1):
+ b_g = tl.load(p_g, mask=mask, other=0).to(tl.float32)
+ b_do = tl.load(p_do, mask=mask, other=0).to(tl.float32)
+ if i > 0:
+ b_o = tl.load(p_o, mask=mask, other=0).to(tl.float32)
+ elif USE_INITIAL_STATE:
+ b_o = tl.load(h0 + i_n * D + o_d, mask=mask, other=0).to(tl.float32)
+ else:
+ b_o = tl.zeros([BD], dtype=tl.float32)
+
+ b_dh = b_dh + b_do
+ b_dx = b_dh
+ b_dh = b_dh * tl.exp(b_g)
+ b_dg = b_dh * b_o
+ tl.store(p_dx, b_dx.to(p_dx.dtype.element_ty), mask=mask)
+ tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), mask=mask)
+
+ p_g -= D
+ p_o -= D
+ p_dx -= D
+ p_dg -= D
+ p_do -= D
+
+ if USE_INITIAL_STATE:
+ p_dh0 = dh0 + i_n * D + o_d
+ tl.store(p_dh0, b_dh.to(p_dh0.dtype.element_ty), mask=mask)
+
+
+def fused_recurrent_hgrn_fwd(
+ x: torch.Tensor,
+ g: torch.Tensor,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ B, T, D = x.shape
+ N = B if offsets is None else len(offsets) - 1
+
+ o = torch.empty_like(x)
+ final_state = x.new_empty(N, D) if output_final_state else None
+
+ def grid(meta): return (triton.cdiv(D, meta['BD']), N)
+ fused_recurrent_hgrn_fwd_kernel[grid](
+ x=x,
+ g=g,
+ o=o,
+ h0=initial_state,
+ ht=final_state,
+ offsets=offsets,
+ T=T,
+ D=D
+ )
+ return o, final_state
+
+
+def fused_recurrent_hgrn_bwd(
+ g: torch.Tensor,
+ o: torch.Tensor,
+ do: torch.Tensor,
+ dht: torch.Tensor = None,
+ initial_state: torch.Tensor = None,
+ offsets: Optional[torch.LongTensor] = None
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ B, T, D = do.shape
+ N = B if offsets is None else len(offsets) - 1
+
+ dx = torch.empty_like(o, dtype=torch.float)
+ dg = torch.empty_like(g, dtype=torch.float)
+ dh0 = torch.empty_like(initial_state, dtype=torch.float) if initial_state is not None else None
+ def grid(meta): return (triton.cdiv(D, meta['BD']), N)
+ fused_recurrent_hgrn_bwd_kernel[grid](
+ g=g,
+ o=o,
+ h0=initial_state,
+ dx=dx,
+ dg=dg,
+ do=do,
+ dht=dht,
+ dh0=dh0,
+ offsets=offsets,
+ T=T,
+ D=D
+ )
+ return dx, dg, dh0
+
+
+class FusedRecurrentHGRNFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ x: torch.Tensor,
+ g: torch.Tensor,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ offsets: Optional[torch.LongTensor] = None
+ ):
+ o, ht = fused_recurrent_hgrn_fwd(
+ x=x,
+ g=g,
+ initial_state=initial_state,
+ output_final_state=output_final_state,
+ offsets=offsets
+ )
+ ctx.save_for_backward(g, o, initial_state)
+ ctx.offsets = offsets
+ return o, ht
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, do, dht=None):
+ g, o, initial_state = ctx.saved_tensors
+ offsets = ctx.offsets
+
+ dx, dg, dh0 = fused_recurrent_hgrn_bwd(
+ g=g,
+ o=o,
+ do=do,
+ dht=dht,
+ initial_state=initial_state,
+ offsets=offsets
+ )
+ return dx, dg, dh0, None, None
+
+
+def fused_recurrent_hgrn(
+ x: torch.Tensor,
+ g: torch.Tensor,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ offsets: Optional[torch.LongTensor] = None
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ x (torch.Tensor):
+ inputs of shape `[B, T, D]`.
+ g (torch.Tensor):
+ Forget gates of shape `[B, T, D]`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[N, D]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[N, D]`. Default: `False`.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, T, D]`.
+ final_state (torch.Tensor):
+ Final state of shape `[N, D]` if `output_final_state=True` else `None`.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+ >>> from fla.ops.hgrn import fused_recurrent_hgrn
+ # inputs with equal lengths
+ >>> B, T, D = 4, 2048, 512
+ >>> x = torch.randn(B, T, D, device='cuda')
+ >>> g = F.logsigmoid(torch.randn(B, T, D, device='cuda'))
+ >>> h0 = torch.randn(B, D, device='cuda')
+ >>> o, ht = fused_recurrent_hgrn(x, g, initial_state=h0, output_final_state=True)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+ >>> x, g = map(lambda x: rearrange(x, 'b t d -> 1 (b t) d'), (x, g))
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = x.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, ht_var = fused_recurrent_hgrn(x, g, initial_state=h0, output_final_state=True, offsets=offsets)
+ >>> assert o.allclose(o_var.view(o.shape))
+ >>> assert ht.allclose(ht_var)
+ """
+ return FusedRecurrentHGRNFunction.apply(
+ x,
+ g,
+ initial_state,
+ output_final_state,
+ offsets
+ )
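
The fused recurrent kernel above implements the elementwise gated recurrence `h_t = exp(g_t) * h_{t-1} + x_t`. Below is a minimal cross-check against the naive reference (added in the next file); this is a sketch only, assuming a CUDA device and the module paths introduced in this diff:

```python
# A minimal sketch, assuming a CUDA device and the module paths added in this diff.
import torch
import torch.nn.functional as F

from fla.ops.hgrn.fused_recurrent import fused_recurrent_hgrn
from fla.ops.hgrn.naive import naive_recurrent_hgrn

B, T, D = 2, 128, 64
x = torch.randn(B, T, D, device='cuda')
g = F.logsigmoid(torch.randn(B, T, D, device='cuda'))  # forget gates, kept in log space
h0 = torch.randn(B, D, device='cuda')

o_ref, ht_ref = naive_recurrent_hgrn(x, g, initial_state=h0, output_final_state=True)
o_tri, ht_tri = fused_recurrent_hgrn(x, g, initial_state=h0, output_final_state=True)
assert torch.allclose(o_ref, o_tri, atol=1e-4)
assert torch.allclose(ht_ref, ht_tri, atol=1e-4)
```
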
diff --git a/fla/ops/hgrn/naive.py b/fla/ops/hgrn/naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..9bcddc1967b31c5181d330704c7b5ff2127e9d68
--- /dev/null
+++ b/fla/ops/hgrn/naive.py
@@ -0,0 +1,63 @@
+# -*- coding: utf-8 -*-
+
+from typing import Optional, Tuple
+
+import torch
+
+
+def naive_recurrent_hgrn(
+ x: torch.Tensor,
+ g: torch.Tensor,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: Optional[bool] = False
+) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
+ dtype = x.dtype
+ x, g = map(lambda i: i.float(), (x, g))
+ B, T, D = x.shape
+
+ h = torch.zeros(B, D, dtype=torch.float, device=x.device)
+ o = torch.zeros_like(x)
+
+ final_state = None
+ if initial_state is not None:
+ h += initial_state
+
+ for i in range(T):
+ h = g[:, i].exp() * h + x[:, i]
+ o[:, i] = h
+
+ if output_final_state:
+ final_state = h
+ return o.to(dtype), final_state
+
+
+def naive_chunk_hgrn(
+ x: torch.Tensor,
+ g: torch.Tensor,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: Optional[bool] = False,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
+ dtype = x.dtype
+ x, g = map(lambda i: i.float(), (x, g))
+ B, T, D = x.shape
+
+ gc = g.view(B, -1, chunk_size, D).cumsum(-2).view_as(g)
+ h = torch.zeros(B, D, dtype=torch.float, device=x.device)
+ o = torch.zeros_like(x)
+
+ final_state = None
+ if initial_state is not None:
+ h += initial_state
+
+ for i in range(0, T, chunk_size):
+ hp = h
+ h = torch.zeros(B, D, dtype=torch.float, device=x.device)
+ for j in range(i, i + chunk_size):
+ h = g[:, j].exp() * h + x[:, j]
+ o[:, j] = hp * gc[:, j].exp() + h
+ h = o[:, j].clone()
+
+ if output_final_state:
+ final_state = h
+ return o.to(dtype), final_state
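
`naive_chunk_hgrn` restarts the recurrence inside each chunk and folds the carried state back in through the cumulative gates `gc`, so it should match the fully recurrent reference whenever `T` is a multiple of `chunk_size` (an assumption baked into its reshape). A minimal sketch of that check:

```python
# A minimal sketch; assumes T is a multiple of chunk_size.
import torch
import torch.nn.functional as F

from fla.ops.hgrn.naive import naive_chunk_hgrn, naive_recurrent_hgrn

B, T, D = 2, 256, 32
x = torch.randn(B, T, D)
g = F.logsigmoid(torch.randn(B, T, D))

o_rec, _ = naive_recurrent_hgrn(x, g)
o_chk, _ = naive_chunk_hgrn(x, g, chunk_size=64)
assert torch.allclose(o_rec, o_chk, atol=1e-4)
```
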
diff --git a/fla/ops/linear_attn/__init__.py b/fla/ops/linear_attn/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..1a981054aaf9ab98b30ac08fa525bde73e68e7e4
--- /dev/null
+++ b/fla/ops/linear_attn/__init__.py
@@ -0,0 +1,11 @@
+# -*- coding: utf-8 -*-
+
+from .chunk import chunk_linear_attn
+from .fused_chunk import fused_chunk_linear_attn
+from .fused_recurrent import fused_recurrent_linear_attn
+
+__all__ = [
+ 'chunk_linear_attn',
+ 'fused_chunk_linear_attn',
+ 'fused_recurrent_linear_attn'
+]
diff --git a/fla/ops/linear_attn/chunk.py b/fla/ops/linear_attn/chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..3f00b056f9908eb6856fc8b455176336f30b05f8
--- /dev/null
+++ b/fla/ops/linear_attn/chunk.py
@@ -0,0 +1,374 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2023, Yu Zhang, Songlin Yang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.linear_attn.utils import normalize_output
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.jit
+def chunk_linear_attn_fwd_kernel_h(
+ k,
+ v,
+ h,
+ h0,
+ ht,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ s_h_h,
+ s_h_t,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NT: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr,
+ STORE_FINAL_STATE: tl.constexpr
+):
+ i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ # [BK, BV]
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+
+ if USE_INITIAL_STATE:
+ p_h0 = tl.make_block_ptr(h0 + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_h = tl.load(p_h0, boundary_check=(0, 1)).to(tl.float32)
+
+ for i_t in range(NT):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1))
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BK, BV]
+ b_h += tl.dot(b_k, b_v, allow_tf32=False)
+
+ if STORE_FINAL_STATE:
+ p_ht = tl.make_block_ptr(ht + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def chunk_linear_attn_fwd_kernel_o(
+ q,
+ k,
+ v,
+ h,
+ o,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ s_h_h,
+ s_h_t,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr
+):
+ i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ o_i = tl.arange(0, BT)
+ m_s = o_i[:, None] >= o_i[None, :]
+
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ b_s = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BK, BV]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ b_o += tl.dot(b_q, b_h, allow_tf32=False)
+ b_s += tl.dot(b_q, b_k, allow_tf32=False)
+ b_s = tl.where(m_s, b_s, 0)
+
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_o = (b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)) * scale
+
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def chunk_linear_attn_bwd_kernel_dh(
+ q,
+ do,
+ dh,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ s_h_h,
+ s_h_t,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NT: tl.constexpr
+):
+ i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ # [BK, BV]
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ for i_t in range(NT - 1, -1, -1):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1))
+ # [BK, BT]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BT, V]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [BK, BV]
+ b_dh += tl.dot(b_q, b_do.to(b_q.dtype), allow_tf32=False)
+
+
+@triton.jit
+def chunk_linear_attn_bwd_kernel_dqkv(
+ q,
+ k,
+ v,
+ h,
+ do,
+ dh,
+ dq,
+ dk,
+ dv,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ s_h_h,
+ s_h_t,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NT: tl.constexpr
+):
+ i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ n_bh = tl.num_programs(2)
+ o_i = tl.arange(0, BT)
+
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_s = tl.dot(b_k, b_q, allow_tf32=False) * scale
+ b_s = tl.where(o_i[:, None] <= o_i[None, :], b_s, 0)
+
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+ b_ds = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_v in range(tl.cdiv(V, BV)):
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + i_bh * s_h_h, (V, NT * K), (1, s_h_t), (i_v * BV, i_t * K + i_k * BK), (BV, BK), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dh = tl.make_block_ptr(dh + i_bh * s_h_h, (NT * K, V), (s_h_t, 1), (i_t * K + i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_k*n_bh+i_bh)*s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [BV, BK]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ # [BK, BV]
+ b_dh = tl.load(p_dh, boundary_check=(0, 1))
+ # [BT, BT]
+ b_ds += tl.dot(b_do, tl.trans(b_v), allow_tf32=False)
+ # [BT, BK]
+ b_dq += tl.dot(b_do, b_h, allow_tf32=False) * scale
+ b_dk += tl.dot(b_v, tl.trans(b_dh), allow_tf32=False)
+ # [BT, BV]
+ b_dv = tl.dot(b_k, b_dh, allow_tf32=False) + tl.dot(b_s.to(b_q.dtype), b_do, allow_tf32=False)
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+ # [BT, BT]
+ b_ds = tl.where(o_i[:, None] >= o_i[None, :], b_ds * scale, 0).to(b_q.dtype)
+ # [BT, BK]
+ b_dq += tl.dot(b_ds, b_k, allow_tf32=False)
+ b_dk += tl.trans(tl.dot(b_q, b_ds, allow_tf32=False))
+
+ p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+
+
+class ChunkLinearAttentionFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(ctx, q, k, v, scale, initial_state, output_final_state):
+ B, H, T, K, V = *q.shape, v.shape[-1]
+ BT = 64
+ BK, BV = min(64, triton.next_power_of_2(K)), min(64, triton.next_power_of_2(V))
+ NT, NK, NV = triton.cdiv(T, BT), triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_stages = 1
+ num_warps = 4 if BK == 64 else 2
+ ctx.scale = scale
+
+ final_state = None
+ if output_final_state:
+ final_state = q.new_empty(B, H, K, V, dtype=torch.float32, requires_grad=False)
+
+ h = q.new_empty(B, H, NT * K, V)
+ grid = (NK, NV, B * H)
+ chunk_linear_attn_fwd_kernel_h[grid](
+ k, v, h, initial_state, final_state,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ h.stride(1), h.stride(2),
+ T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT,
+ USE_INITIAL_STATE=initial_state is not None,
+ STORE_FINAL_STATE=output_final_state,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ grid = (NV, NT, B * H)
+ o = torch.empty_like(v)
+ chunk_linear_attn_fwd_kernel_o[grid](
+ q, k, v, h, o,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ h.stride(1), h.stride(2),
+ scale,
+ T=T, K=K, V=V, BT=BT, BK=BK, BV=BV,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ ctx.save_for_backward(q, k, v, h)
+ return o.to(q.dtype), final_state
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dht=None):
+ q, k, v, h = ctx.saved_tensors
+
+ B, H, T, K, V = *q.shape, v.shape[-1]
+ BT = 64
+ BK, BV = min(64, triton.next_power_of_2(K)), min(32 if q.dtype == torch.float32 else 64, triton.next_power_of_2(V))
+ NT, NK, NV = triton.cdiv(T, BT), triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_stages = 1
+ num_warps = 4 if BK == 64 else 2
+ scale = ctx.scale
+
+ dh = q.new_empty(B, H, NT * K, V)
+ grid = (NK, NV, B * H)
+ chunk_linear_attn_bwd_kernel_dh[grid](
+ q, do, dh,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ dh.stride(1), dh.stride(2),
+ scale,
+ T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+
+ grid = (NK, NT, B * H)
+ dq = torch.empty_like(q)
+ dk = torch.empty_like(k)
+ dv = v.new_empty(NK, *v.shape)
+ num_stages = 1
+ num_warps = 4 if BK == 64 else 2
+ chunk_linear_attn_bwd_kernel_dqkv[grid](
+ q, k, v, h, do, dh, dq, dk, dv,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ dh.stride(1), dh.stride(2),
+ scale,
+ T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ dv = dv.sum(0)
+ return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), None, None, None
+
+
+def chunk_linear_attn(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: Optional[float] = None,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ normalize: bool = True,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`
+ scale (Optional[float]):
+ Scale factor for the linear attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[B, H, K, V]`. Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[B, H, K, V]`. Default: `False`.
+ normalize (bool):
+ Whether to normalize the output. Default: `True`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format. Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`
+ final_state (torch.Tensor):
+ Final state of shape `[B, H, K, V]` if `output_final_state=True` else `None`
+ """
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+ if not head_first:
+ q, k, v = map(lambda x: x.transpose(1, 2), (q, k, v))
+ o, final_state = ChunkLinearAttentionFunction.apply(q, k, v, scale, initial_state, output_final_state)
+ if normalize:
+ o = normalize_output(q * scale, k, o)
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o, final_state
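
A minimal usage sketch for `chunk_linear_attn` (shapes follow the docstring above; a CUDA device is assumed):

```python
# A minimal sketch, assuming a CUDA device.
import torch

from fla.ops.linear_attn import chunk_linear_attn

B, H, T, K, V = 2, 4, 1024, 64, 64
q = torch.randn(B, H, T, K, device='cuda', dtype=torch.bfloat16)
k = torch.randn(B, H, T, K, device='cuda', dtype=torch.bfloat16)
v = torch.randn(B, H, T, V, device='cuda', dtype=torch.bfloat16)

# scale defaults to K ** -0.5; the output is normalized since normalize=True by default
o, final_state = chunk_linear_attn(q, k, v, output_final_state=True)
print(o.shape, final_state.shape)  # [B, H, T, V], [B, H, K, V]
```
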
diff --git a/fla/ops/linear_attn/fused_chunk.py b/fla/ops/linear_attn/fused_chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..c13ce2c19854b8b5a8e0204ed05ec42702b24d24
--- /dev/null
+++ b/fla/ops/linear_attn/fused_chunk.py
@@ -0,0 +1,336 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+from packaging import version
+
+from fla.ops.linear_attn.utils import normalize_output
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.jit
+def fused_chunk_linear_attn_fwd_kernel(
+ q, # query [B, H, T, K]
+ k, # key [B, H, T, K]
+ v, # value [B, H, T, V]
+ o, # output [B, H, T, V]
+ h0,
+ ht,
+ s_k_h, # stride size: T * K
+ s_k_t, # stride size: K
+ s_k_d, # stride size: 1
+ s_v_h, # stride size: T * V
+ s_v_t, # stride size: V
+ s_v_d, # stride size: 1
+ scale,
+ B, # batch size
+ H, # H
+ T, # T
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ USE_INITIAL_STATE: tl.constexpr,
+ STORE_FINAL_STATE: tl.constexpr,
+ CHECK: tl.constexpr
+):
+ # indices
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ o_i = tl.arange(0, BT)
+
+ # [BT, BT]
+ m_s = o_i[:, None] >= o_i[None, :]
+ # [BK, BV]
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+
+ # make block pointers
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (0, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, 0), (BK, BT), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (0, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + (i_bh+i_k*B*H) * s_v_h, (T, V), (s_v_t, s_v_d), (0, i_v * BV), (BT, BV), (1, 0))
+
+ if USE_INITIAL_STATE:
+ p_h0 = tl.make_block_ptr(h0 + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_h = tl.load(p_h0, boundary_check=(0, 1)).to(tl.float32)
+
+ for i in range(0, tl.cdiv(T, BT)):
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+
+ # [BT, BT]
+ b_s = tl.dot(b_q, b_k, allow_tf32=False)
+ b_s = tl.where(m_s, b_s, 0)
+ # [BT, BV]
+ b_o = tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False)
+ if CHECK and i == 0:
+ b_o += tl.dot(b_q, b_h.to(b_q.dtype), allow_tf32=False)
+ b_h = b_h + tl.dot(b_k, b_v, allow_tf32=False)
+ else:
+ b_o += tl.dot(b_q, b_h.to(b_q.dtype), allow_tf32=False)
+ b_h = b_h + tl.dot(b_k, b_v, allow_tf32=False)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+ p_q = tl.advance(p_q, (BT, 0))
+ p_k = tl.advance(p_k, (0, BT))
+ p_v = tl.advance(p_v, (BT, 0))
+ p_o = tl.advance(p_o, (BT, 0))
+
+ if STORE_FINAL_STATE:
+ p_ht = tl.make_block_ptr(ht + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def fused_chunk_linear_attn_bwd_kernel(
+ q, # query [B, H, T, K]
+ k, # key [B, H, T, K]
+ v, # value [B, H, T, V]
+ do, # gradient of output [B, H, T, V]
+ dq, # gradient of query [NV, B, H, T, K]
+ dk, # gradient of key [NV, B, H, T, K]
+ dv, # gradient of value [NK, B, H, T, V]
+
+ h0, # initial state of the chunk [B, H, K, V]
+
+ s_k_h, # stride size: T * K
+ s_k_t, # stride size: K
+ s_k_d, # stride size: 1
+ s_v_h, # stride size: T * V
+ s_v_t, # stride size: V
+ s_v_d, # stride size: 1
+ scale, # K ** -0.5
+ B, # B
+ H, # H
+ T, # T
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ USE_INITIAL_STATE: tl.constexpr,
+ CHECK: tl.constexpr
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ o_i = tl.arange(0, BT)
+
+ m_s = o_i[:, None] >= o_i[None, :]
+ # [BV, BK]
+ b_h = tl.zeros([BV, BK], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h = tl.make_block_ptr(h0 + i_bh * K * V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ b_h = tl.load(p_h, boundary_check=(0, 1)).to(tl.float32)
+
+ for i in range(0, tl.cdiv(T, BT)):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i * BT, i_k * BK), (BT, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i * BT), (BV, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dq = tl.make_block_ptr(dq + (i_bh + i_v*B*H) * s_k_h, (T, K), (s_k_t, s_k_d), (i*BT, i_k*BK), (BT, BK), (1, 0))
+
+ # [BT, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [V, BT]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, V]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+
+ # [BT, BT]
+ b_ds = tl.dot(b_do, b_v, allow_tf32=False)
+ b_ds = tl.where(m_s, b_ds, 0)
+ # [BT, BK]
+ b_dq = tl.dot(b_ds.to(b_k.dtype), b_k, allow_tf32=False)
+ # [BV, BK]
+ if CHECK and i == 0:
+ b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False)
+ b_h = b_h + tl.dot(b_v, b_k, allow_tf32=False)
+ else:
+ b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False)
+ b_h = b_h + tl.dot(b_v, b_k, allow_tf32=False)
+ b_dq *= scale
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+
+ # sync threads
+ b_h = None
+ tl.debug_barrier()
+ # [BK, BV]
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ m_s = o_i[:, None] <= o_i[None, :]
+ for i in range(1, tl.cdiv(T, BT) + 1):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, T - i * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (T - i * BT, i_k * BK), (BT, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (i_bh+i_v*B*H) * s_k_h, (T, K), (s_k_t, s_k_d), (T - i*BT, i_k*BK), (BT, BK), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_bh+i_k*B*H) * s_v_h, (T, V), (s_v_t, s_v_d), (T - i*BT, i_v*BV), (BT, BV), (1, 0))
+ # [BK, BT]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BT, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+
+ # [BT, BT]
+ b_s = tl.dot(b_k, b_q, allow_tf32=False)
+ b_s = tl.where(m_s, b_s, 0).to(b_q.dtype)
+ # [BT, BT]
+ b_ds = tl.dot(b_v, tl.trans(b_do), allow_tf32=False)
+ b_ds = tl.where(m_s, b_ds, 0).to(b_q.dtype)
+ # [BT, BK]
+ b_dk = tl.dot(b_ds, tl.trans(b_q), allow_tf32=False)
+ # [BT, BV]
+ b_dv = tl.dot(b_s, b_do, allow_tf32=False)
+ if CHECK and i == 1:
+ b_dk += tl.dot(b_v, tl.trans(b_dh).to(b_v.dtype), allow_tf32=False)
+ b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False)
+ b_dh += tl.dot(b_q, b_do, allow_tf32=False)
+ else:
+ b_dk += tl.dot(b_v, tl.trans(b_dh).to(b_v.dtype), allow_tf32=False)
+ b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False)
+ b_dh += tl.dot(b_q, b_do, allow_tf32=False)
+
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+
+
+class FusedChunkLinearAttentionFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(ctx, q, k, v, scale, initial_state, output_final_state):
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ BT = 64
+ BK, BV = min(triton.next_power_of_2(K), 64), min(triton.next_power_of_2(V), 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_warps = 4
+ num_stages = 1
+
+ o = q.new_empty(NK, B, H, T, V)
+ final_state = q.new_empty(B, H, K, V, dtype=torch.float) if output_final_state else None
+ # the bug still exists even with Triton 2.2 on H100 GPUs,
+ # so we always enable the initial condition checks
+ CHECK = True
+ if version.parse(triton.__version__) < version.parse('2.2.0'):
+ import warnings
+ warnings.warn(
+ "Triton<2.2.0 detected for running this kernel, "
+ "which is known to have some weird compiler issues (refer to https://github.com/openai/triton/issues/2852) "
+ "that lead to significant precision loss. "
+ "We've add some initial condition checks to resolve this, sadly at the sacrifice of the speed. "
+ "For optimal performance, it is recommended to install Triton>=2.2.0 (if possible)."
+ )
+ CHECK = True
+
+ grid = (NV, NK, B * H)
+ fused_chunk_linear_attn_fwd_kernel[grid](
+ q, k, v, o, initial_state, final_state,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ scale,
+ B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ STORE_FINAL_STATE=output_final_state,
+ CHECK=CHECK,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ o = o.sum(0) if NK > 1 else o[0]
+
+ ctx.save_for_backward(q, k, v, initial_state)
+ ctx.scale = scale
+ ctx.CHECK = CHECK
+ return o.to(q.dtype), final_state
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dht=None):
+ q, k, v, initial_state = ctx.saved_tensors
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ scale = ctx.scale
+
+ BT = 64
+ BK, BV = min(triton.next_power_of_2(K), 64), min(triton.next_power_of_2(V), 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_warps = 4
+ num_stages = 1
+
+ dq = q.new_empty(NV, B, H, T, K)
+ dk = q.new_empty(NV, B, H, T, K)
+ dv = q.new_empty(NK, B, H, T, V)
+ grid = (NV, NK, B * H)
+
+ fused_chunk_linear_attn_bwd_kernel[grid](
+ q, k, v, do, dq, dk, dv, initial_state,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ scale,
+ B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ CHECK=ctx.CHECK,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dv = dv.sum(0)
+ return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), None, None, None
+
+
+def fused_chunk_linear_attn(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: Optional[float] = None,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ normalize: bool = True,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`
+ scale (Optional[float]):
+ Scale factor for linear attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[B, H, K, V]`. Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[B, H, K, V]`. Default: `False`.
+ normalize (bool):
+ Whether to normalize the output. Default: `True`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format. Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`
+ final_state (torch.Tensor):
+ Final state of shape `[B, H, K, V]` if `output_final_state=True` else `None`
+ """
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+ if not head_first:
+ q, k, v = map(lambda x: x.transpose(1, 2), (q, k, v))
+ o, final_state = FusedChunkLinearAttentionFunction.apply(q, k, v, scale, initial_state, output_final_state)
+ if normalize:
+ o = normalize_output(q * scale, k, o)
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o, final_state
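
The fused-chunk variant shares the same interface; a minimal sketch with inputs in the non-head-first layout (`[B, T, H, ...]`), again assuming a CUDA device:

```python
# A minimal sketch, assuming a CUDA device.
import torch

from fla.ops.linear_attn import fused_chunk_linear_attn

B, T, H, K, V = 2, 512, 4, 64, 64
q = torch.randn(B, T, H, K, device='cuda', dtype=torch.bfloat16)
k = torch.randn(B, T, H, K, device='cuda', dtype=torch.bfloat16)
v = torch.randn(B, T, H, V, device='cuda', dtype=torch.bfloat16)

o, final_state = fused_chunk_linear_attn(q, k, v, output_final_state=True, head_first=False)
print(o.shape, final_state.shape)  # [B, T, H, V], [B, H, K, V]
```
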
diff --git a/fla/ops/linear_attn/fused_recurrent.py b/fla/ops/linear_attn/fused_recurrent.py
new file mode 100644
index 0000000000000000000000000000000000000000..c8be0d62328658bf9527032849afc332d43e1182
--- /dev/null
+++ b/fla/ops/linear_attn/fused_recurrent.py
@@ -0,0 +1,251 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.linear_attn.utils import normalize_output
+from fla.utils import contiguous
+
+
+@triton.jit
+def fused_recurrent_linear_attn_fwd_kernel(
+ q, # query [B, H, L, K]
+ k, # key [B, H, L, K]
+ v, # value [B, H, L, V]
+ o, # output [B, H, L, V]
+ h0,
+ ht, # final hidden state [B, H, K, V]
+
+ s_k_h, # stride size: L * K
+ s_v_h, # stride size: L * V
+
+ scale,
+ B, # batch size
+ H, # H
+ T, # T
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ USE_INITIAL_STATE: tl.constexpr, # whether to use initial state
+ STORE_FINAL_STATE: tl.constexpr, # whether to store final state
+):
+ # indices
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV)
+ p_o = o + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV)
+
+ mask_bk = (i_k * BK + tl.arange(0, BK)) < K
+ mask_bv = (i_v * BV + tl.arange(0, BV)) < V
+ mask_kv = mask_bk[None, :] & mask_bv[:, None]
+
+ b_h = tl.zeros([BV, BK], dtype=tl.float32)
+
+ if USE_INITIAL_STATE:
+ p_h0 = h0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ b_h += tl.load(p_h0, mask=mask_kv, other=0).to(tl.float32)
+
+ for _ in range(0, T):
+ b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32)
+ b_q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale
+
+ b_h += b_k[None, :] * b_v[:, None]
+ b_o = b_h * b_q[None, :]
+ b_o = tl.sum(b_o, axis=1)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), mask=mask_bv)
+
+ p_q += K
+ p_k += K
+ p_o += V
+ p_v += V
+
+ if STORE_FINAL_STATE:
+ p_ht = ht + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), mask=mask_kv)
+
+
+# Similar to Algorithm1 of https://arxiv.org/abs/2006.16236
+@triton.jit
+def fused_recurrent_linear_attn_bwd_kernel(
+ q, # query [B, H, L, K]
+ k, # key [B, H, L, K]
+ v, # value [B, H, L, V]
+
+ do, # gradient of output [B, H, L, V]
+ dq, # gradient of query [NV, B, H, L, K]
+ dk, # gradient of key [NV, B, H, L, K]
+ dv, # gradient of value [NK, B, H, L, V]
+ h0, # initial hidden state [B, H, K, V]
+
+ s_k_h, # stride size: L * K
+ s_v_h, # stride size: L * V
+ scale, # K ** -0.5
+
+ B, # B
+ H, # H
+ T, # T
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ USE_INITIAL_STATE: tl.constexpr, # whether to use initial state
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK)
+ p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV)
+ p_do = do + i_bh * s_v_h + i_v * BV + tl.arange(0, BV)
+
+ p_dq = dq + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK)
+ mask_bk = i_k * BK + tl.arange(0, BK) < K
+ mask_bv = i_v * BV + tl.arange(0, BV) < V
+
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+
+ if USE_INITIAL_STATE:
+ mask_kv = mask_bk[:, None] & mask_bv[None, :]
+ p_h0 = h0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ b_h += tl.load(p_h0, mask=mask_kv, other=0).to(tl.float32)
+
+ for _ in range(0, T):
+ b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32)
+ b_do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32)
+
+ b_h += b_k[:, None] * b_v[None, :]
+ _d_q = b_h * b_do[None, :]
+ d_q = tl.sum(_d_q, axis=1) * scale
+ tl.store(p_dq, d_q.to(p_dq.dtype.element_ty), mask=mask_bk)
+
+ p_k += K
+ p_do += V
+ p_v += V
+ p_dq += K
+
+ # sync threads
+ tl.debug_barrier()
+
+ p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (T - 1) * K
+ p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + (T - 1) * K
+ p_do = do + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + (T - 1) * V
+ p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + (T - 1) * V
+ p_dk = dk + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + (T - 1) * K
+ p_dv = dv + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV) + (T - 1) * V
+ d_h = tl.zeros([BK, BV], dtype=tl.float32)
+
+ for _ in range(T):
+ b_do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32)
+ b_q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale
+ b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32)
+ d_h += b_q[:, None] * b_do[None, :]
+ d_k = tl.sum(d_h * b_v[None, :], axis=1)
+ d_v = tl.sum(d_h * b_k[:, None], axis=0)
+
+ tl.store(p_dk, d_k.to(p_dk.dtype.element_ty), mask=mask_bk)
+ tl.store(p_dv, d_v.to(p_dv.dtype.element_ty), mask=mask_bv)
+
+ p_do -= V
+ p_q -= K
+ p_k -= K
+ p_v -= V
+ p_dk -= K
+ p_dv -= V
+
+
+class FusedRecurrentLinearAttentionFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(ctx, q, k, v, scale, initial_state=None, output_final_state=False):
+ B, H, T, K = q.shape
+ V = v.shape[-1]
+
+ BK, BV = min(K, 32), min(V, 32)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_warps = 1
+ num_stages = 1
+
+ o = q.new_empty(NK, B, H, T, V)
+ final_state = q.new_empty(B, H, K, V) if output_final_state else None
+
+ grid = (NV, NK, B * H)
+ fused_recurrent_linear_attn_fwd_kernel[grid](
+ q, k, v, o, initial_state, final_state,
+ q.stride(1),
+ v.stride(1), scale,
+ B=B, H=H, T=T, K=K, V=V, BK=BK, BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ STORE_FINAL_STATE=final_state is not None,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+
+ o = o.sum(0)
+ ctx.save_for_backward(q, k, v, initial_state)
+ ctx.scale = scale
+ return o, final_state
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, do, dht=None):
+ q, k, v, initial_state = ctx.saved_tensors
+ B, H, T, K = q.shape
+ V = v.shape[-1]
+ scale = ctx.scale
+
+ BK, BV = min(K, 32), min(V, 32)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_warps = 1
+ num_stages = 1
+
+ dq = q.new_empty(NV, B, H, T, K)
+ dk = q.new_empty(NV, B, H, T, K)
+ dv = q.new_empty(NK, B, H, T, V)
+ grid = (NV, NK, B * H)
+
+ fused_recurrent_linear_attn_bwd_kernel[grid](
+ q, k, v, do, dq, dk, dv, initial_state,
+ q.stride(1),
+ v.stride(1),
+ scale,
+ B=B, H=H, T=T, K=K, V=V, BK=BK, BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dv = dv.sum(0)
+ return dq, dk, dv, None, None, None
+
+
+def fused_recurrent_linear_attn(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: Optional[float] = None,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ normalize: bool = False,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+ if not head_first:
+ q, k, v = map(lambda x: x.transpose(1, 2), (q, k, v))
+ o, final_state = FusedRecurrentLinearAttentionFunction.apply(q, k, v, scale, initial_state, output_final_state)
+ if normalize:
+ o = normalize_output(q * scale, k, o)
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o, final_state
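
`fused_recurrent_linear_attn` keeps a running `[K, V]` state, so a long sequence can be processed in segments by feeding each segment's final state into the next call. A minimal sketch (with `normalize=False`, the default here, the segmented result should match a single full-length pass; a CUDA device is assumed):

```python
# A minimal sketch, assuming a CUDA device.
import torch

from fla.ops.linear_attn import fused_recurrent_linear_attn

B, H, T, K, V = 2, 4, 256, 64, 64
q = torch.randn(B, H, T, K, device='cuda')
k = torch.randn(B, H, T, K, device='cuda')
v = torch.randn(B, H, T, V, device='cuda')

# process the first half, then feed its final state into the second half
o1, s1 = fused_recurrent_linear_attn(q[:, :, :T//2], k[:, :, :T//2], v[:, :, :T//2],
                                     output_final_state=True)
o2, s2 = fused_recurrent_linear_attn(q[:, :, T//2:], k[:, :, T//2:], v[:, :, T//2:],
                                     initial_state=s1, output_final_state=True)

# with normalize=False (the default), the two-segment result matches a single pass
o, s = fused_recurrent_linear_attn(q, k, v, output_final_state=True)
assert torch.allclose(torch.cat([o1, o2], dim=2), o, atol=1e-4)
```
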
diff --git a/fla/ops/linear_attn/naive.py b/fla/ops/linear_attn/naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..b6ecf2718fcac8eef80f445ed02b95f36329f3c4
--- /dev/null
+++ b/fla/ops/linear_attn/naive.py
@@ -0,0 +1,36 @@
+# -*- coding: utf-8 -*-
+
+from typing import Optional
+
+import torch
+from einops import rearrange
+
+from fla.ops.linear_attn.utils import normalize_output
+
+
+def naive_chunk_linear_attn(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: Optional[float] = None,
+ normalize: bool = False
+) -> torch.Tensor:
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+ chunk_size = 64
+ q = rearrange(q, 'b h (n c) d -> b h n c d', c=chunk_size) * scale
+ k = rearrange(k, 'b h (n c) d -> b h n c d', c=chunk_size)
+ v = rearrange(v, 'b h (n c) d -> b h n c d', c=chunk_size)
+ kv = k.transpose(-1, -2) @ v
+ kv = kv.cumsum(2)
+ kv = torch.cat([torch.zeros_like(kv[:, :, :1]), kv[:, :, :-1]], dim=2)
+ inter = q @ kv
+ intra = ((
+ q @ k.transpose(-1, -2)).masked_fill_(
+ torch.triu(torch.ones(chunk_size, chunk_size, dtype=bool, device=q.device), diagonal=1),
+ 0
+ )) @ v
+ o = inter + intra
+ if normalize:
+ o = normalize_output(q * scale, k, o)
+ return rearrange(o, 'b h n c d -> b h (n c) d')
diff --git a/fla/ops/linear_attn/utils.py b/fla/ops/linear_attn/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..b444376833f5d512af6fc2db387db75a43a92e5d
--- /dev/null
+++ b/fla/ops/linear_attn/utils.py
@@ -0,0 +1,10 @@
+# -*- coding: utf-8 -*-
+
+import torch
+
+
+@torch.jit.script
+def normalize_output(q, k, o):
+ k = k.cumsum(-2)
+ z = (q * k).sum(-1, keepdim=True)
+ return o / (z + 1e-10)
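
For reference, `normalize_output` applies the standard linear-attention denominator: each position's output is divided by the query's dot product with the running sum of keys. In math (a restatement of the three lines above, with the same epsilon):

$$\tilde{o}_t = \frac{o_t}{\,q_t^{\top}\big(\sum_{s \le t} k_s\big) + 10^{-10}\,}$$
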
diff --git a/fla/ops/rebased/__init__.py b/fla/ops/rebased/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..6ec6a0cb31f7f635aa528cad753d5e19196a2028
--- /dev/null
+++ b/fla/ops/rebased/__init__.py
@@ -0,0 +1,7 @@
+# -*- coding: utf-8 -*-
+
+from .parallel import parallel_rebased
+
+__all__ = [
+ 'parallel_rebased'
+]
diff --git a/fla/ops/rebased/naive.py b/fla/ops/rebased/naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..e9436a0802c964485354082dcc9fbcd437e5d7f7
--- /dev/null
+++ b/fla/ops/rebased/naive.py
@@ -0,0 +1,48 @@
+# -*- coding: utf-8 -*-
+
+import torch
+
+from fla.ops.rebased.parallel import parallel_rebased
+
+
+def naive_parallel_rebased(q, k, v, use_scale=True, use_norm=True):
+ if use_scale:
+ q = q * (q.shape[-1] ** -0.5)
+ attn = q @ k.transpose(-2, -1)
+ attn = (attn ** 2)
+ attn.masked_fill_(~torch.tril(torch.ones(
+ q.shape[-2], q.shape[-2], dtype=torch.bool, device=q.device)), 0)
+ o = attn @ v
+ if use_norm:
+ z = attn.sum(-1)
+ return o / (z[..., None] + 1e-6)
+ else:
+ return o
+
+
+if __name__ == "__main__":
+ B = 4
+ H = 4
+ L = 128
+ # D = 15
+ dtype = torch.float32
+ q = (torch.randn(B, H, L, 16).cuda().to(dtype)).requires_grad_(True)
+ k = (torch.randn(B, H, L, 16).cuda().to(dtype)).requires_grad_(True)
+ v = torch.randn(B, H, L, 128).cuda().to(dtype).requires_grad_(True)
+
+ do = torch.randn_like(v).cuda()
+ ref = naive_parallel_rebased(q, k, v, True, True)
+ ref.backward(do, retain_graph=True)
+ ref_dq, q.grad = q.grad.clone(), None
+ ref_dk, k.grad = k.grad.clone(), None
+ ref_dv, v.grad = v.grad.clone(), None
+
+ tri = parallel_rebased(q, k, v, 1e-6, True, True)
+ tri.backward(do, retain_graph=True)
+ tri_dq, q.grad = q.grad.clone(), None
+ tri_dk, k.grad = k.grad.clone(), None
+ tri_dv, v.grad = v.grad.clone(), None
+ print((ref-tri).abs().max())
+ print((ref_dq-tri_dq).abs().max())
+ print((ref_dk-tri_dk).abs().max())
+ print((ref_dv-tri_dv).abs().max())
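
The naive reference above spells out the ReBased scoring that the parallel kernels below implement: causal attention with a squared dot-product kernel, optionally normalized by its row sum. As a worked restatement of the code:

$$o_t = \sum_{s \le t} \big(q_t^{\top} k_s\big)^2\, v_s, \qquad \tilde{o}_t = \frac{o_t}{\sum_{s \le t} \big(q_t^{\top} k_s\big)^2 + 10^{-6}}$$
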
diff --git a/fla/ops/rebased/parallel.py b/fla/ops/rebased/parallel.py
new file mode 100644
index 0000000000000000000000000000000000000000..7a0aecb81b6350fa08c143f7967452968321c1c8
--- /dev/null
+++ b/fla/ops/rebased/parallel.py
@@ -0,0 +1,440 @@
+
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+# Rebased: Linear Transformers with Learnable Kernel Functions are Better In-Context Models
+# https://github.com/corl-team/rebased/blob/main/flash_linear_attention/fla/ops/triton/rebased_fast/parallel.py
+
+
+@triton.jit
+def parallel_rebased_fwd_kernel(
+ q, # query [B, H, L, D_head_K]
+ k, # key [B, H, L, D_head_K]
+ v, # value [B, H, L, D_head_V]
+ o, # output [B, H, L, D_head_V]
+ z, # normalizer [B, H, L]
+ s_k_h, # stride size: L * D_head_K
+ s_k_t, # stride size: D_head_K
+ s_k_d, # stride size: 1
+ s_v_h, # stride size: L * D_head_V
+ s_v_t, # stride size: D_head_V
+ s_v_d, # stride size: 1
+ scale, # D_head_K ** -0.5
+ B, # batch size
+ H, # H
+ T, # T
+ K: tl.constexpr, # D_head_K
+ V: tl.constexpr, # D_head_V
+ BTL: tl.constexpr, # BLOCK SIZE along the sequence dimension for Q
+ BTS: tl.constexpr, # BLOCK SIZE along the sequence dimension for K/V
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+):
+ # i_c: chunk index. used for sequence parallelism
+ i_kv, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ NV = tl.cdiv(V, BV)
+ i_k = i_kv // (NV)
+ i_v = i_kv % (NV)
+
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_c * BTL, i_k * BK), (BTL, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, 0), (BK, BTS), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (0, i_v * BV), (BTS, BV), (1, 0))
+
+ # [BQ, BD] block Q, in the shared memory throughout the whole kernel
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ b_o = tl.zeros([BTL, BV], dtype=tl.float32)
+ b_z = tl.zeros([BTL], dtype=tl.float32)
+
+ # Q block and K block have no overlap
+ # no need for mask, thereby saving flops
+ for _ in range(0, i_c * BTL, BTS):
+ # [BK, BTS]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+
+ # [BTS, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BTL, BTS]
+ b_s = tl.dot(b_q, (b_k), allow_tf32=False)
+ b_s = b_s * b_s
+ b_z += tl.sum(b_s, axis=1)
+
+ # [BQ, BD]
+ b_o = b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)
+ p_k = tl.advance(p_k, (0, BTS))
+ p_v = tl.advance(p_v, (BTS, 0))
+
+ # # rescale interchunk output
+ tl.debug_barrier()
+ o_q = tl.arange(0, BTL)
+ # # sync threads, easy for compiler to optimize
+ # tl.debug_barrier()
+
+ o_k = tl.arange(0, BTS)
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_c * BTL), (BK, BTS), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_c * BTL, i_v * BV), (BTS, BV), (1, 0))
+ # Q block and K block have overlap. masks required
+ for _ in range(i_c * BTL, (i_c + 1) * BTL, BTS):
+ # [BK, BTS]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BTS, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BTL, BTS]
+ m_s = o_q[:, None] >= o_k[None, :]
+ b_s = tl.dot(b_q, b_k, allow_tf32=False)
+ b_s = b_s * b_s
+ b_s = tl.where(m_s, b_s, 0)
+ b_z += tl.sum(b_s, axis=1)
+ # [BTL, BV]
+ b_o += tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False)
+ p_k = tl.advance(p_k, (0, BTS))
+ p_v = tl.advance(p_v, (BTS, 0))
+ o_k += BTS
+
+ p_o = tl.make_block_ptr(o + (i_bh + B * H * i_k) * s_v_h, (T, V), (s_v_t, s_v_d), (i_c*BTL, i_v*BV), (BTL, BV), (1, 0))
+ p_z = z + (i_bh + B * H * i_k) * T + i_c * BTL + tl.arange(0, BTL)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_z, b_z.to(p_z.dtype.element_ty),
+ mask=((i_c * BTL + tl.arange(0, BTL)) < T))
+
+
+@triton.jit
+def _parallel_rebased_bwd_dq(
+ i_bh,
+ i_c,
+ i_k,
+ i_v,
+ i_h,
+ q,
+ k,
+ v,
+ do,
+ dz,
+ dq,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BTL: tl.constexpr,
+ BTS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr
+):
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d),
+ (i_c * BTL, i_v * BV), (BTL, BV), (1, 0))
+ p_q = tl.make_block_ptr(q + (i_bh) * s_k_h, (T, K),
+ (s_k_t, s_k_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0))
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype)
+ b_q = (b_q * scale).to(b_q.dtype)
+ b_dq = tl.zeros([BTL, BK], dtype=tl.float32)
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K),
+ (s_k_t, s_k_d), (0, i_k * BK), (BTS, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T),
+ (s_v_d, s_v_t), (i_v * BV, 0), (BV, BTS), (0, 1))
+ p_dz = dz + i_bh * T + i_c * BTL + tl.arange(0, BTL)
+ b_dz = tl.load(p_dz, mask=(i_c * BTL + tl.arange(0, BTL)) < T)
+
+ for _ in range(0, i_c * BTL, BTS):
+ # [BTS, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BV, BTS]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BTL, BTS]
+ b_ds = tl.dot(b_do, b_v, allow_tf32=False)
+ if i_v == 0:
+ b_ds += b_dz[:, None]
+ else:
+ b_ds = b_ds
+ b_s = tl.dot(b_q, tl.trans(b_k), allow_tf32=False)
+ # [BQ, BD]
+ b_dq += tl.dot((2 * b_ds * b_s).to(b_v.dtype), b_k, allow_tf32=False)
+ p_k = tl.advance(p_k, (BTS, 0))
+ p_v = tl.advance(p_v, (0, BTS))
+
+ b_dq *= scale
+ o_q = tl.arange(0, BTL)
+ o_k = tl.arange(0, BTS)
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K),
+ (s_k_t, s_k_d), (i_c * BTL, i_k * BK), (BTS, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T),
+ (s_v_d, s_v_t), (i_v * BV, i_c * BTL), (BV, BTS), (0, 1))
+ # Q block and K block have overlap. masks required
+ for _ in range(i_c * BTL, (i_c + 1) * BTL, BTS):
+ # [BTS, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BV, BTS]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BTL, BTS]
+ m_s = o_q[:, None] >= o_k[None, :]
+ b_ds = tl.dot(b_do, b_v, allow_tf32=False)
+ if i_v == 0:
+ b_ds += b_dz[:, None]
+ else:
+ b_ds = b_ds
+ b_ds = tl.where(m_s, b_ds, 0) * scale
+ b_s = tl.dot(b_q, tl.trans(b_k), allow_tf32=False)
+ b_s = tl.where(m_s, b_s, 0)
+ # [BTL, BK]
+ b_dq += tl.dot((2 * b_ds * b_s).to(b_k.dtype),
+ b_k, allow_tf32=False)
+ p_k = tl.advance(p_k, (BTS, 0))
+ p_v = tl.advance(p_v, (0, BTS))
+ o_k += BTS
+ p_dq = tl.make_block_ptr(dq + (i_bh + B * H * i_v) * s_k_h, (T, K),
+ (s_k_t, s_k_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0))
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ return
+
+
+@triton.jit
+def _parallel_rebased_bwd_dkv(
+ i_bh, i_c, i_k, i_v, i_h,
+ q, k, v, do, dz, dk, dv, s_k_h, s_k_t, s_k_d, s_v_h,
+ s_v_t, s_v_d,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BTL: tl.constexpr,
+ BTS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+):
+ # compute dk dv
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d),
+ (i_c * BTL, i_k * BK), (BTL, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d),
+ (i_c * BTL, i_v * BV), (BTL, BV), (1, 0))
+ b_k, b_v = tl.load(p_k, boundary_check=(0, 1)), tl.load(
+ p_v, boundary_check=(0, 1))
+ b_dk, b_dv = tl.zeros([BTL, BK], dtype=tl.float32), tl.zeros(
+ [BTL, BV], dtype=tl.float32)
+
+ for i in range((tl.cdiv(T, BTS) * BTS)-BTS, (i_c + 1) * BTL - BTS, -BTS):
+ p_q = tl.make_block_ptr(
+ q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i), (BK, BTS), (0, 1))
+ p_do = tl.make_block_ptr(
+ do + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i), (BV, BTS), (0, 1))
+ p_dz = dz + i_bh * T + i + tl.arange(0, BTS)
+ b_q = tl.load(p_q, boundary_check=(0, 1)) # [BK, BTS]
+ b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype) # [BV, BTS]
+ b_dz = tl.load(p_dz, mask=(i + tl.arange(0, BTS)) < T)
+ b_s = tl.dot(b_k.to(b_q.dtype), b_q, allow_tf32=False) * \
+ scale # [BTL, BTS]
+ b_s2 = b_s * b_s
+ b_dv += tl.dot(b_s2.to(b_q.dtype), tl.trans(b_do), allow_tf32=False)
+ b_ds = tl.dot(b_v, b_do, allow_tf32=False) * scale
+ if i_v == 0:
+ b_ds += b_dz[None, :] * scale
+ else:
+ b_ds = b_ds
+ b_dk += tl.dot((2 * b_ds * b_s).to(b_q.dtype),
+ tl.trans(b_q), allow_tf32=False)
+
+ tl.debug_barrier()
+ o_q, o_k = tl.arange(0, BTS), tl.arange(0, BTL)
+ for i in range(i_c*BTL, (i_c+1)*BTL, BTS):
+ p_q = tl.make_block_ptr(
+ q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i), (BK, BTS), (0, 1))
+ p_do = tl.make_block_ptr(
+ do + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i), (BV, BTS), (0, 1))
+ p_dz = dz + i_bh * T + i + tl.arange(0, BTS)
+ b_q = tl.load(p_q, boundary_check=(0, 1)) # [BD, BQ]
+ b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype)
+ b_dz = tl.load(p_dz, mask=(i + tl.arange(0, BTS)) < T)
+        # [BTL, BTS]
+ m_s = o_k[:, None] <= o_q[None, :]
+ b_s = tl.dot(b_k, b_q, allow_tf32=False) * scale
+ b_s2 = b_s * b_s
+ b_s = tl.where(m_s, b_s, 0)
+ b_s2 = tl.where(m_s, b_s2, 0)
+
+ b_ds = tl.dot(b_v, b_do, allow_tf32=False)
+ if i_v == 0:
+ b_ds += b_dz[None, :]
+ else:
+ b_ds = b_ds
+ b_ds = tl.where(m_s, b_ds, 0) * scale
+        # [BTL, BV]
+ b_dv += tl.dot(b_s2.to(b_q.dtype), tl.trans(b_do), allow_tf32=False)
+ b_dk += tl.dot((2 * b_ds * b_s).to(b_q.dtype),
+ tl.trans(b_q), allow_tf32=False)
+ o_q += BTS
+
+ p_dk = tl.make_block_ptr(dk + (i_bh + B * H * i_v) * s_k_h,
+ (T, K), (s_k_t, s_k_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_bh + B * H * i_k) * s_v_h,
+ (T, V), (s_v_t, s_v_d), (i_c*BTL, i_v*BV), (BTL, BV), (1, 0))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+ return
+
+
+@triton.jit
+def parallel_rebased_bwd_kernel(
+ q,
+ k,
+ v,
+ do,
+ dz,
+ dq,
+ dk,
+ dv,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BTL: tl.constexpr,
+ BTS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr
+):
+ i_kv, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ NV = tl.cdiv(V, BV)
+ i_k = i_kv // (NV)
+ i_v = i_kv % (NV)
+ i_h = i_bh % H
+ _parallel_rebased_bwd_dq(
+ i_bh, i_c, i_k, i_v, i_h,
+ q, k, v, do, dz, dq, s_k_h, s_k_t, s_k_d, s_v_h,
+ s_v_t, s_v_d, scale,
+ B=B, H=H, T=T, K=K, V=V, BTL=BTL, BTS=BTS, BK=BK, BV=BV
+ )
+ tl.debug_barrier()
+ _parallel_rebased_bwd_dkv(
+ i_bh, i_c, i_k, i_v, i_h,
+ q, k, v, do, dz, dk, dv, s_k_h, s_k_t, s_k_d, s_v_h,
+ s_v_t, s_v_d,
+ scale,
+ B=B, H=H, T=T, K=K, V=V, BTL=BTL, BTS=BTS, BK=BK, BV=BV
+ )
+
+
+class ParallelBasedFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(ctx, q, k, v, scale):
+ BTL, BTS = 128, 32
+ assert BTL % BTS == 0
+ # assert q.shape[-1] % 16 == 0
+ BK = min(128, triton.next_power_of_2(k.shape[-1]))
+ BV = min(128, triton.next_power_of_2(v.shape[-1]))
+ BK, BV = max(BK, 16), max(BV, 16)
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ num_stages = 2
+ num_warps = 4
+ NK = triton.cdiv(K, BK)
+ NV = triton.cdiv(V, BV)
+ grid = (NK * NV, triton.cdiv(T, BTL), B * H)
+
+        assert NK == 1, "NK > 1 is not supported: it would require synchronization across blocks."
+
+ o = torch.empty(NK, B, H, T, V, device=q.device)
+ z = torch.empty(NK, B, H, T, device=q.device)
+ parallel_rebased_fwd_kernel[grid](
+ q, k, v, o, z,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ scale,
+ B=B, H=H, T=T, K=K, V=V,
+ BTL=BTL, BTS=BTS, BK=BK, BV=BV,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ ctx.save_for_backward(q, k, v)
+ ctx.scale = scale
+ return o.sum(0).to(q.dtype), z.sum(0).to(q.dtype)
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dz):
+ q, k, v = ctx.saved_tensors
+ scale = ctx.scale
+ BTL, BTS = 64, 32
+ assert BTL % BTS == 0
+ BK = min(128, triton.next_power_of_2(k.shape[-1]))
+ BV = min(128, triton.next_power_of_2(v.shape[-1]))
+ BK, BV = max(BK, 16), max(BV, 16)
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ num_stages = 2
+ num_warps = 4
+ NK = triton.cdiv(K, BK)
+ NV = triton.cdiv(V, BV)
+ grid = (NK * NV, triton.cdiv(T, BTL), B * H)
+
+        assert NK == 1, "NK > 1 is not supported: it would require synchronization across blocks."
+
+ dq = torch.empty(NV, B, H, T, K, dtype=q.dtype, device=q.device)
+ dk = torch.empty(NV, B, H, T, K, dtype=q.dtype, device=q.device)
+ dv = torch.empty(NK, B, H, T, V, dtype=q.dtype, device=q.device)
+
+ parallel_rebased_bwd_kernel[grid](
+ q, k, v, do, dz, dq, dk, dv,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ scale,
+ B=B, H=H, T=T, K=K, V=V,
+ BTL=BTL, BTS=BTS, BK=BK, BV=BV,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+
+ return dq.sum(0).to(q.dtype), dk.sum(0).to(k.dtype), dv.sum(0).to(v.dtype), None
+
+
+triton_parallel_based = ParallelBasedFunction.apply
+
+
+def parallel_rebased(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ eps: float = 1e-5,
+ use_scale: bool = True,
+ use_normalize: bool = True,
+ return_both: bool = False,
+ head_first: bool = True
+):
+    assert q.shape[-1] <= 128, "only feature dimensions up to 128 are supported"
+ if use_scale:
+ scale = q.shape[-1] ** -0.5
+ else:
+ scale = 1
+ if not head_first:
+ q, k, v = map(lambda x: x.transpose(1, 2), (q, k, v))
+ o, z = triton_parallel_based(q, k, v, scale)
+ if return_both:
+ return o, z
+ if use_normalize:
+ o = o / (z[..., None] + eps)
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o.to(q.dtype)
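+
+
+# A minimal usage sketch of `parallel_rebased` (illustrative only; the shapes,
+# dtype and device below are assumptions, not requirements beyond the 128-dim limit):
+#
+#     q = torch.randn(2, 4, 1024, 16, device='cuda')
+#     k = torch.randn(2, 4, 1024, 16, device='cuda')
+#     v = torch.randn(2, 4, 1024, 64, device='cuda')
+#     o = parallel_rebased(q, k, v)  # [B, H, T, V] with the default head_first=True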
diff --git a/fla/ops/retention/__init__.py b/fla/ops/retention/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..a38ab43c9982c9751bb9db146b9d9fe05663964a
--- /dev/null
+++ b/fla/ops/retention/__init__.py
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+from .chunk import chunk_retention
+from .fused_chunk import fused_chunk_retention
+from .fused_recurrent import fused_recurrent_retention
+from .parallel import parallel_retention
+
+__all__ = [
+ 'chunk_retention',
+ 'fused_chunk_retention',
+ 'parallel_retention',
+ 'fused_recurrent_retention'
+]
diff --git a/fla/ops/retention/chunk.py b/fla/ops/retention/chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..6b3c152cdf7df3c01b5e9501ba3a246ec34de5f5
--- /dev/null
+++ b/fla/ops/retention/chunk.py
@@ -0,0 +1,845 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4)
+ ],
+ key=["BT", "BK", "BV"],
+)
+@triton.heuristics({
+ 'USE_INITIAL_STATE': lambda args: args['h0'] is not None,
+ 'STORE_FINAL_STATE': lambda args: args['ht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.jit
+def chunk_retention_fwd_kernel_h(
+ k,
+ v,
+ h,
+ h0,
+ ht,
+ offsets,
+ c_offsets,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr,
+ STORE_FINAL_STATE: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_v, i_nh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_n, i_h = i_nh // H, i_nh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ boh = tl.load(c_offsets + i_n).to(tl.int32)
+ else:
+ bos, eos = i_n * T, i_n * T + T
+ NT = tl.cdiv(T, BT)
+ boh = i_n * NT
+ b_b = tl.math.log2(1 - tl.math.exp2(-5 - i_h * 1.0))
+
+ o_i = tl.arange(0, BT)
+ d_b, d_i = tl.math.exp2(BT * b_b), tl.math.exp2((BT - o_i - 1) * b_b)
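+    # Worked out: the per-head decay is gamma_h = 1 - 2^(-5 - i_h), so b_b = log2(gamma_h)
+    # and exp2(n * b_b) = gamma_h^n. Hence d_b = gamma_h^BT is the decay applied to the
+    # running state over a full chunk, and d_i[i] = gamma_h^(BT - 1 - i) is the decay from
+    # position i to the end of the chunk, giving the chunkwise recurrence
+    #     h <- gamma_h^BT * h + sum_i gamma_h^(BT - 1 - i) * k_i^T v_i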
+
+ # [BK, BV]
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h0 = tl.make_block_ptr(h0 + i_nh * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_h = tl.load(p_h0, boundary_check=(0, 1)).to(tl.float32)
+
+ for i_t in range(NT):
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_nh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_v = tl.make_block_ptr(v + i_nh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + (i_nh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k + (bos*H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_v = tl.make_block_ptr(v + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + ((boh + i_t) * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1))
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BK, BV]
+ if i_t == NT - 1 and (T % BT) != 0:
+ d_b = tl.math.exp2((T % BT) * b_b)
+ d_i = tl.math.exp2(((T % BT) - o_i - 1) * b_b)
+ b_h = d_b * b_h + tl.dot(b_k, (b_v * d_i[:, None]).to(b_k.dtype), allow_tf32=False)
+
+ if STORE_FINAL_STATE:
+ p_ht = tl.make_block_ptr(ht + i_nh * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4)
+ ],
+ key=["BT", "BK", "BV"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_retention_fwd_kernel_o(
+ q,
+ k,
+ v,
+ h,
+ o,
+ offsets,
+ indices,
+ scale,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NT: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_tg = i_t
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ else:
+ NT = tl.cdiv(T, BT)
+ i_tg = i_b * NT + i_t
+ bos, eos = i_b * T, i_b * T + T
+ b_b = tl.math.log2(1 - tl.math.exp2(-5 - i_h * 1.0))
+
+ o_i = tl.arange(0, BT)
+ d_i = tl.math.exp2((o_i + 1) * b_b)
+ m_s = o_i[:, None] >= o_i[None, :]
+ d_s = tl.where(m_s, tl.math.exp2((o_i[:, None] - o_i[None, :]) * b_b), 0)
+
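+    # The output of a chunk combines an inter-chunk and an intra-chunk term:
+    #     O = (Q h) * d_i[:, None] + (Q K^T * D) V, all multiplied by `scale`,
+    # where h is the state carried over from previous chunks, d_i[i] = gamma_h^(i + 1),
+    # and D[i, j] = gamma_h^(i - j) for j <= i (the masked decay matrix d_s above).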
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ b_s = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_h = tl.make_block_ptr(h + (i_bh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_h = tl.make_block_ptr(h + (i_tg * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BK, BV]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ b_o += tl.dot(b_q, b_h, allow_tf32=False)
+ b_s += tl.dot(b_q, b_k, allow_tf32=False)
+
+ b_o = b_o * d_i[:, None]
+ b_s = b_s * d_s
+ if HEAD_FIRST:
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ else:
+ p_v = tl.make_block_ptr(v + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_o = (b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)) * scale
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4)
+ ],
+ key=["BT", "BK", "BV"],
+)
+@triton.heuristics({
+ 'STORE_INITIAL_STATE_GRADIENT': lambda args: args['dh0'] is not None,
+ 'USE_FINAL_STATE_GRADIENT': lambda args: args['dht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.jit
+def chunk_retention_bwd_kernel_dh(
+ q,
+ do,
+ dh,
+ dh0,
+ dht,
+ offsets,
+ c_offsets,
+ scale,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ STORE_INITIAL_STATE_GRADIENT: tl.constexpr,
+ USE_FINAL_STATE_GRADIENT: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_v, i_nh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_n, i_h = i_nh // H, i_nh % H
+ b_b = tl.math.log2(1 - tl.math.exp2(-5 - i_h * 1.0))
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ boh = tl.load(c_offsets + i_n).to(tl.int32)
+ else:
+ bos, eos = i_n * T, i_n * T + T
+ NT = tl.cdiv(T, BT)
+ boh = i_n * NT
+
+ o_i = tl.arange(0, BT)
+ d_i = tl.math.exp2((o_i + 1) * b_b)
+
+ # [BK, BV]
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_FINAL_STATE_GRADIENT:
+ p_dht = tl.make_block_ptr(dht + i_nh * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_dh += tl.load(p_dht, boundary_check=(0, 1)).to(tl.float32)
+ for i_t in range(NT - 1, -1, -1):
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_nh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + i_nh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dh = tl.make_block_ptr(dh + (i_nh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos*H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dh = tl.make_block_ptr(dh + ((boh+i_t) * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1))
+
+ d_b = tl.math.exp2(min(BT, T - i_t * BT) * b_b)
+ # [BK, BT]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BT, V]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [BK, BV]
+ b_dh = d_b * b_dh + tl.dot(b_q, (b_do * d_i[:, None]).to(b_q.dtype), allow_tf32=False)
+
+ if STORE_INITIAL_STATE_GRADIENT:
+ p_dh0 = tl.make_block_ptr(dh0 + i_nh * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_dh0, b_dh.to(p_dh0.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4)
+ ],
+ key=["BT", "BK", "BV"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_retention_bwd_kernel_dqkv(
+ q,
+ k,
+ v,
+ h,
+ do,
+ dh,
+ dq,
+ dk,
+ dv,
+ offsets,
+ indices,
+ scale,
+ B: tl.constexpr,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NT: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_tg = i_t
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ all = T
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ else:
+ NT = tl.cdiv(T, BT)
+ i_tg = i_b * NT + i_t
+ bos, eos = i_b * T, i_b * T + T
+ all = B * T
+ b_b = tl.math.log2(1 - tl.math.exp2(-5 - i_h * 1.0))
+
+ o_i = tl.arange(0, BT)
+ d_q, d_k = tl.math.exp2((o_i + 1) * b_b), tl.math.exp2((min(BT, T - i_t * BT) - o_i - 1) * b_b)
+ d_q = (d_q * scale).to(d_q.dtype)
+ m_s = o_i[:, None] >= o_i[None, :]
+ d_s = tl.where(m_s, tl.math.exp2((o_i[:, None] - o_i[None, :]) * b_b), 0) * scale
+
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos * H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_s = tl.dot(b_k, b_q, allow_tf32=False) * tl.trans(d_s)
+
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+ b_ds = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_v in range(tl.cdiv(V, BV)):
+ if HEAD_FIRST:
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_k * B*H + i_bh) * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + (i_bh * NT + i_t) * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ p_dh = tl.make_block_ptr(dh + (i_bh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_v = tl.make_block_ptr(v + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + ((i_k*all+bos)*H+i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + (i_tg * H + i_h) * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ p_dh = tl.make_block_ptr(dh + (i_tg * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [BV, BK]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ # [BK, BV]
+ b_dh = tl.load(p_dh, boundary_check=(0, 1))
+
+ # [BT, BT]
+ b_ds += tl.dot(b_do, tl.trans(b_v), allow_tf32=False)
+ # [BT, BK]
+ b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False)
+ b_dk += tl.dot(b_v, tl.trans(b_dh).to(b_v.dtype), allow_tf32=False)
+ # [BT, BV]
+ b_dv = tl.dot(b_k, b_dh, allow_tf32=False) * d_k[:, None] + tl.dot(b_s.to(b_q.dtype), b_do, allow_tf32=False)
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+ # [BT, BT]
+ b_ds = (b_ds * d_s).to(b_q.dtype)
+ # [BT, BK]
+ b_dq = b_dq * d_q[:, None] + tl.dot(b_ds, b_k, allow_tf32=False)
+ b_dk = b_dk * d_k[:, None] + tl.trans(tl.dot(b_q, b_ds, allow_tf32=False))
+
+ if HEAD_FIRST:
+ p_dq = tl.make_block_ptr(dq + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_dq = tl.make_block_ptr(dq + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+
+
+def chunk_retention_fwd_h(
+ k: torch.Tensor,
+ v: torch.Tensor,
+ h0: torch.Tensor,
+ output_final_state: bool,
+ offsets: Optional[torch.Tensor] = None,
+ c_offsets: Optional[torch.Tensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ BT = chunk_size
+ # N: the actual number of sequences in the batch with either equal or variable lengths
+ if offsets is None:
+ N, NT, c_offsets = B, triton.cdiv(T, BT), None
+ else:
+ N = len(offsets) - 1
+ if c_offsets is None:
+ c_offsets = torch.cat([offsets.new_tensor([0]), triton.cdiv(offsets[1:] - offsets[:-1], BT)]).cumsum(-1)
+ NT = c_offsets[-1]
+ BK = min(triton.next_power_of_2(K), 64)
+ BV = min(triton.next_power_of_2(V), 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+
+ if head_first:
+ h = k.new_empty(B, H, NT, K, V, dtype=k.dtype)
+ else:
+ h = k.new_empty(B, NT, H, K, V, dtype=k.dtype)
+ ht = k.new_empty(N, H, K, V, dtype=torch.float32) if output_final_state else None
+
+ grid = (NK, NV, N * H)
+ chunk_retention_fwd_kernel_h[grid](
+ k=k,
+ v=v,
+ h=h,
+ h0=h0,
+ ht=ht,
+ offsets=offsets,
+ c_offsets=c_offsets,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ return h, ht
+
+
+def chunk_retention_fwd_o(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ h: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> torch.Tensor:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(triton.next_power_of_2(K), 64)
+ BV = min(triton.next_power_of_2(V), 64)
+ NV = triton.cdiv(V, BV)
+
+ o = torch.empty_like(v)
+ grid = (NV, NT, B * H)
+ chunk_retention_fwd_kernel_o[grid](
+ q=q,
+ k=k,
+ v=v,
+ h=h,
+ o=o,
+ offsets=offsets,
+ indices=indices,
+ scale=scale,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ NT=NT,
+ HEAD_FIRST=head_first
+ )
+ return o
+
+
+def chunk_retention_bwd_dh(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ do: torch.Tensor,
+ h0: torch.Tensor,
+ dht: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.Tensor] = None,
+ c_offsets: Optional[torch.Tensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ BT = chunk_size
+ # N: the actual number of sequences in the batch with either equal or variable lengths
+ if offsets is None:
+ N, NT, c_offsets = B, triton.cdiv(T, BT), None
+ else:
+ N = len(offsets) - 1
+ if c_offsets is None:
+ c_offsets = torch.cat([offsets.new_tensor([0]), triton.cdiv(offsets[1:] - offsets[:-1], BT)]).cumsum(-1)
+ NT = c_offsets[-1]
+ BK = min(triton.next_power_of_2(K), 64)
+ BV = min(triton.next_power_of_2(V), 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+
+ if head_first:
+ dh = k.new_empty(B, H, NT, K, V, dtype=k.dtype)
+ else:
+ dh = k.new_empty(B, NT, H, K, V, dtype=k.dtype)
+ dh0 = torch.empty_like(h0, dtype=torch.float32) if h0 is not None else None
+
+ grid = (NK, NV, N * H)
+ chunk_retention_bwd_kernel_dh[grid](
+ q=q,
+ do=do,
+ dh=dh,
+ dh0=dh0,
+ dht=dht,
+ offsets=offsets,
+ c_offsets=c_offsets,
+ scale=scale,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ return dh, dh0
+
+
+def chunk_retention_bwd_dqkv(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ h: torch.Tensor,
+ do: torch.Tensor,
+ dh: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], chunk_size).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(64, triton.next_power_of_2(K))
+ BV = min(64, triton.next_power_of_2(V))
+ NK = triton.cdiv(K, BK)
+
+ dq = torch.empty_like(q)
+ dk = torch.empty_like(k)
+ dv = v.new_empty(NK, *v.shape)
+ grid = (NK, NT, B * H)
+ chunk_retention_bwd_kernel_dqkv[grid](
+ q=q,
+ k=k,
+ v=v,
+ h=h,
+ do=do,
+ dh=dh,
+ dq=dq,
+ dk=dk,
+ dv=dv,
+ offsets=offsets,
+ indices=indices,
+ scale=scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ NT=NT,
+ HEAD_FIRST=head_first
+ )
+ dv = dv.sum(0)
+ return dq, dk, dv
+
+
+def chunk_retention_fwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: float,
+ initial_state: torch.Tensor,
+ output_final_state: bool,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ h, ht = chunk_retention_fwd_h(
+ k=k,
+ v=v,
+ h0=initial_state,
+ output_final_state=output_final_state,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ o = chunk_retention_fwd_o(
+ q=q,
+ k=k,
+ v=v,
+ h=h,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ return o, ht
+
+
+def chunk_retention_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: Optional[float],
+ initial_state: Optional[torch.Tensor],
+ do: torch.Tensor,
+ dht: torch.Tensor,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
+ h, _ = chunk_retention_fwd_h(
+ k=k,
+ v=v,
+ h0=initial_state,
+ output_final_state=False,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ dh, dh0 = chunk_retention_bwd_dh(
+ q=q,
+ k=k,
+ v=v,
+ do=do,
+ h0=initial_state,
+ dht=dht,
+ scale=scale,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ dq, dk, dv = chunk_retention_bwd_dqkv(
+ q=q,
+ k=k,
+ v=v,
+ h=h,
+ do=do,
+ dh=dh,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ return dq, dk, dv, dh0
+
+
+class ChunkRetentionFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(
+ ctx,
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: float,
+ initial_state: torch.Tensor,
+ output_final_state: bool,
+ offsets: torch.LongTensor,
+ head_first: bool
+ ):
+ T = q.shape[2] if head_first else q.shape[1]
+ chunk_size = min(64, triton.next_power_of_2(T))
+
+ # 2-d indices denoting the offsets of chunks in each sequence
+ # for example, if the passed `offsets` is [0, 100, 356] and `chunk_size` is 64,
+ # then there are 2 and 4 chunks in the 1st and 2nd sequences respectively, and `indices` will be
+ # [[0, 0], [0, 1], [1, 0], [1, 1], [1, 2], [1, 3]]
+ indices = None
+ if offsets is not None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], chunk_size).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+
+ o, ht = chunk_retention_fwd(
+ q=q,
+ k=k,
+ v=v,
+ scale=scale,
+ initial_state=initial_state,
+ output_final_state=output_final_state,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ ctx.save_for_backward(q, k, v, initial_state)
+ ctx.scale = scale
+ ctx.offsets = offsets
+ ctx.indices = indices
+ ctx.head_first = head_first
+ ctx.chunk_size = chunk_size
+ return o.to(q.dtype), ht
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dht):
+ q, k, v, initial_state = ctx.saved_tensors
+ chunk_size, scale, offsets, indices, head_first = ctx.chunk_size, ctx.scale, ctx.offsets, ctx.indices, ctx.head_first
+ dq, dk, dv, dh0 = chunk_retention_bwd(
+ q=q,
+ k=k,
+ v=v,
+ scale=scale,
+ initial_state=initial_state,
+ do=do,
+ dht=dht,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), None, dh0, None, None, None
+
+
+def chunk_retention(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`
+        scale (Optional[float]):
+ Scale factor for the attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[N, H, K, V]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[N, H, K, V]`. Default: `False`.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format, which is not supported for variable-length inputs.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (torch.Tensor):
+ Final state of shape `[N, H, K, V]` if `output_final_state=True` else `None`.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+ >>> from fla.ops.retention import chunk_retention
+ # inputs with equal lengths
+ >>> B, T, H, K, V = 4, 2048, 4, 512, 512
+ >>> q = torch.randn(B, T, H, K, device='cuda')
+ >>> k = torch.randn(B, T, H, K, device='cuda')
+ >>> v = torch.randn(B, T, H, V, device='cuda')
+ >>> h0 = torch.randn(B, H, K, V, device='cuda')
+ >>> o, ht = chunk_retention(q, k, v,
+ initial_state=h0,
+ output_final_state=True,
+ head_first=False)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+ >>> q, k, v = map(lambda x: rearrange(x, 'b t h d -> 1 (b t) h d'), (q, k, v))
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, ht_var = chunk_retention(q, k, v,
+ initial_state=h0,
+ output_final_state=True,
+ offsets=offsets,
+ head_first=False)
+ >>> assert o.allclose(o_var.view(o.shape))
+ >>> assert ht.allclose(ht_var)
+ """
+ if offsets is not None:
+ if q.shape[0] != 1:
+            raise ValueError(f"The batch size is expected to be 1 rather than {q.shape[0]} when using `offsets`. "
+ f"Please flatten variable-length inputs before processing.")
+ if head_first:
+ raise RuntimeError("Sequences with variable lengths are not supported for head-first mode")
+ if initial_state is not None and initial_state.shape[0] != len(offsets) - 1:
+ raise ValueError(f"The number of initial states is expected to be equal to the number of input sequences, "
+ f"i.e., {len(offsets) - 1} rather than {initial_state.shape[0]}.")
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ o, final_state = ChunkRetentionFunction.apply(
+ q,
+ k,
+ v,
+ scale,
+ initial_state,
+ output_final_state,
+ offsets,
+ head_first
+ )
+ return o, final_state
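+
+
+# A minimal sanity check against the recurrent kernel (illustrative only; assumes
+# a CUDA device, small head-first inputs and a loose tolerance):
+#
+#     from fla.ops.retention import chunk_retention, fused_recurrent_retention
+#     q, k, v = (torch.randn(2, 4, 256, 64, device='cuda') for _ in range(3))
+#     o_chunk, _ = chunk_retention(q, k, v)
+#     o_rec, _ = fused_recurrent_retention(q, k, v)
+#     assert o_chunk.allclose(o_rec, atol=1e-3)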
diff --git a/fla/ops/retention/fused_chunk.py b/fla/ops/retention/fused_chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..ae490cdb8a58b4c1a0e557667532d174eced3439
--- /dev/null
+++ b/fla/ops/retention/fused_chunk.py
@@ -0,0 +1,353 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+from packaging import version
+
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.jit
+def fused_chunk_retention_fwd_kernel(
+ q,
+ k,
+ v,
+ o,
+ h0,
+ ht,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr,
+ STORE_FINAL_STATE: tl.constexpr,
+ CHECK: tl.constexpr
+):
+ # indices
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_h = i_bh % H
+
+ o_i = tl.arange(0, BT)
+ # decay rate given the head index
+ b_b = tl.math.log2(1 - tl.math.exp2(-5 - i_h * 1.0))
+
+ # d_b: overall decay for the entire chunk
+ # d_o: cumulative decay from the start of the chunk
+ # d_h: cumulative decay from the end of the chunk
+ d_b, d_o, d_h = tl.math.exp2(BT * b_b), tl.math.exp2((o_i + 1) * b_b), tl.math.exp2((BT - o_i - 1) * b_b)
+
+ # [BT, BT]
+ m_s = o_i[:, None] >= o_i[None, :]
+ d_s = tl.where(m_s, tl.math.exp2((o_i[:, None] - o_i[None, :]) * b_b), 0)
+ # [BK, BV]
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+
+ # make block pointers
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (0, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, 0), (BK, BT), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (0, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + (i_bh+i_k*B*H) * s_v_h, (T, V), (s_v_t, s_v_d), (0, i_v * BV), (BT, BV), (1, 0))
+
+ if USE_INITIAL_STATE:
+ p_h = tl.make_block_ptr(h0 + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_h = tl.load(p_h, boundary_check=(0, 1)).to(tl.float32)
+
+ NT = tl.cdiv(T, BT)
+ for i in range(0, NT):
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_k.dtype)
+
+ # [BT, BT]
+ b_s = tl.dot(b_q, b_k, allow_tf32=False) * d_s
+ # [BT, BV]
+ b_o = tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False)
+ if CHECK and i == 0:
+ b_o += tl.dot(b_q, b_h.to(b_q.dtype), allow_tf32=False) * d_o[:, None]
+ b_h = d_b * b_h + tl.dot(b_k, (b_v * d_h[:, None]).to(b_k.dtype), allow_tf32=False)
+ else:
+ b_o += tl.dot(b_q, b_h.to(b_q.dtype), allow_tf32=False) * d_o[:, None]
+ if i == NT - 1 and (T % BT) != 0:
+ d_b = tl.math.exp2((T % BT) * b_b)
+ d_h = tl.math.exp2(((T % BT) - o_i - 1) * b_b)
+ b_h = d_b * b_h + tl.dot(b_k, (b_v * d_h[:, None]).to(b_k.dtype), allow_tf32=False)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+ p_q = tl.advance(p_q, (BT, 0))
+ p_k = tl.advance(p_k, (0, BT))
+ p_v = tl.advance(p_v, (BT, 0))
+ p_o = tl.advance(p_o, (BT, 0))
+
+ if STORE_FINAL_STATE:
+ p_ht = tl.make_block_ptr(ht + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def fused_chunk_retention_bwd_kernel(
+ q,
+ k,
+ v,
+ do,
+ dq,
+ dk,
+ dv,
+
+ h0,
+
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr,
+ CHECK: tl.constexpr
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_h = i_bh % H
+
+ o_i = tl.arange(0, BT)
+ b_b = tl.math.log2(1 - tl.math.exp2(-5 - i_h * 1.0))
+ d_q, d_k = tl.math.exp2((o_i+1) * b_b) * scale, tl.math.exp2((BT - o_i - 1) * b_b)
+ d_b = tl.math.exp2(BT * b_b)
+
+ m_s = o_i[:, None] >= o_i[None, :]
+ d_s = tl.where(m_s, tl.math.exp2((o_i[:, None] - o_i[None, :]) * b_b), 0) * scale
+ # [BV, BK]
+ b_h = tl.zeros([BV, BK], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h = tl.make_block_ptr(h0 + i_bh * K * V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ b_h = tl.load(p_h, boundary_check=(0, 1)).to(tl.float32)
+
+ for i in range(0, tl.cdiv(T, BT)):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i * BT, i_k * BK), (BT, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i * BT), (BV, BT), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dq = tl.make_block_ptr(dq + (i_bh + i_v*B*H) * s_k_h, (T, K), (s_k_t, s_k_d), (i*BT, i_k*BK), (BT, BK), (1, 0))
+
+ # [BT, K]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [V, BT]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, V]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_dd = (b_do * d_q[:, None]).to(b_do.dtype)
+
+ # [BT, BT]
+ b_ds = tl.dot(b_do, b_v, allow_tf32=False)
+ b_ds = (b_ds * d_s).to(b_k.dtype)
+ # [BT, K]
+ b_dq = tl.dot(b_ds, b_k, allow_tf32=False)
+ # [V, K]
+ if CHECK and i == 0:
+ b_dq += tl.dot(b_dd, b_h.to(b_k.dtype), allow_tf32=False)
+ b_h = d_b * b_h + tl.dot((b_v * d_k[None, :]).to(b_k.dtype), b_k, allow_tf32=False)
+ else:
+ b_dq += tl.dot(b_dd, b_h.to(b_k.dtype), allow_tf32=False)
+ b_h = d_b * b_h + tl.dot((b_v * d_k[None, :]).to(b_k.dtype), b_k, allow_tf32=False)
+
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+
+ # sync threads
+ b_h = None
+ tl.debug_barrier()
+ d_s = tl.trans(d_s)
+ # [BK, BV]
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ for i in range(1, tl.cdiv(T, BT) + 1):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, T - i * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (T - i * BT, i_k * BK), (BT, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (i_bh+i_v*B*H) * s_k_h, (T, K), (s_k_t, s_k_d), (T - i*BT, i_k*BK), (BT, BK), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_bh+i_k*B*H) * s_v_h, (T, V), (s_v_t, s_v_d), (T - i*BT, i_v*BV), (BT, BV), (1, 0))
+ # [K, BT]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ # [BT, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_dd = (b_do * d_q[:, None]).to(b_do.dtype)
+
+ # [BT, BT]
+ b_ds = tl.dot(b_v, tl.trans(b_do), allow_tf32=False)
+ b_ds = (b_ds * d_s).to(b_k.dtype)
+
+ # [BT, BT]
+ b_s = tl.dot(b_k, b_q, allow_tf32=False) * d_s
+ # [BT, BK]
+ b_dk = tl.dot(b_ds, tl.trans(b_q), allow_tf32=False)
+ # [BT, BV]
+ b_dv = tl.dot(b_s.to(b_q.dtype), b_do, allow_tf32=False)
+ if CHECK and i == 1:
+ b_dk += tl.dot(b_v, tl.trans(b_dh).to(b_v.dtype), allow_tf32=False) * d_k[:, None]
+ b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False) * d_k[:, None]
+ b_dh = d_b * b_dh + tl.dot(b_q, b_dd, allow_tf32=False)
+ else:
+ b_dk += tl.dot(b_v, tl.trans(b_dh).to(b_v.dtype), allow_tf32=False) * d_k[:, None]
+ b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False) * d_k[:, None]
+ b_dh = d_b * b_dh + tl.dot(b_q, b_dd, allow_tf32=False)
+
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+
+
+class FusedChunkRetentionFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(ctx, q, k, v, scale, initial_state, output_final_state):
+ B, H, T, K, V = *k.shape, v.shape[-1]
+
+ BT = 64
+ BK, BV = min(triton.next_power_of_2(K), 64), min(triton.next_power_of_2(V), 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_stages = 1
+ num_warps = 4
+
+ o = q.new_empty(NK, B, H, T, V)
+
+ if output_final_state:
+ final_state = q.new_empty(B, H, K, V, dtype=torch.float32, requires_grad=False)
+ else:
+ final_state = None
+        # the bug still shows up even with Triton 2.2 on H100 GPUs,
+        # so the initial condition checks are always enabled
+ CHECK = True
+ if version.parse(triton.__version__) < version.parse('2.2.0'):
+ import warnings
+ warnings.warn(
+ "Triton<2.2.0 detected for running this kernel, "
+ "which is known to have some weird compiler issues (refer to https://github.com/openai/triton/issues/2852) "
+ "that lead to significant precision loss. "
+ "We've add some initial condition checks to resolve this, sadly at the sacrifice of the speed. "
+ "For optimal performance, it is recommended to install Triton>=2.2.0 (if possible)."
+ )
+ CHECK = True
+
+ grid = (NV, NK, B * H)
+ fused_chunk_retention_fwd_kernel[grid](
+ q, k, v, o, initial_state, final_state,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ scale,
+ B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ STORE_FINAL_STATE=output_final_state,
+ CHECK=CHECK,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+
+ o = o.sum(0)
+ ctx.save_for_backward(q, k, v, initial_state)
+        ctx.scale = scale
+        ctx.CHECK = CHECK
+ return o.to(q.dtype), final_state
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dht=None):
+ q, k, v, initial_state = ctx.saved_tensors
+ B, H, T, K, V = *k.shape, v.shape[-1]
+        # reuse the scale that was passed to the forward pass
+        scale = ctx.scale
+
+ BT = 64
+ BK, BV = min(triton.next_power_of_2(K), 64), min(triton.next_power_of_2(V), 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_stages = 1
+ num_warps = 4
+
+ dq = q.new_empty(NV, B, H, T, K)
+ dk = q.new_empty(NV, B, H, T, K)
+ dv = q.new_empty(NK, B, H, T, V)
+ grid = (NV, NK, B * H)
+
+ fused_chunk_retention_bwd_kernel[grid](
+ q, k, v, do, dq, dk, dv, initial_state,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ scale,
+ B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ CHECK=ctx.CHECK,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dv = dv.sum(0)
+ return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), None, None, None
+
+
+def fused_chunk_retention(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`
+        scale (Optional[float]):
+ Scale factor for the attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[B, H, K, V]`. Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[B, H, K, V]`. Default: `False`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (torch.Tensor):
+ Final state of shape `[B, H, K, V]` if `output_final_state=True` else `None`.
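+
+    Examples::
+        >>> import torch
+        >>> from fla.ops.retention import fused_chunk_retention
+        # a minimal illustrative example with equal-length, head-first inputs
+        >>> B, H, T, K, V = 4, 4, 2048, 64, 64
+        >>> q = torch.randn(B, H, T, K, device='cuda')
+        >>> k = torch.randn(B, H, T, K, device='cuda')
+        >>> v = torch.randn(B, H, T, V, device='cuda')
+        >>> o, ht = fused_chunk_retention(q, k, v, output_final_state=True)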
+ """
+    assert q.dim() == k.dim() == v.dim() == 4, "q, k, v must have 4 dimensions"
+    assert q.dtype == k.dtype == v.dtype, "q, k, v must have the same dtype"
+    if scale is None:
+        scale = k.shape[-1] ** -0.5
+    if not head_first:
+        # the kernels assume the head-first layout [B, H, T, D]
+        q, k, v = map(lambda x: x.transpose(1, 2), (q, k, v))
+    o, final_state = FusedChunkRetentionFunction.apply(q, k, v, scale, initial_state, output_final_state)
+    if not head_first:
+        o = o.transpose(1, 2)
+    return o, final_state
diff --git a/fla/ops/retention/fused_recurrent.py b/fla/ops/retention/fused_recurrent.py
new file mode 100644
index 0000000000000000000000000000000000000000..05a5b57c6ab8cb7724b4c1f08a695cd66286c2ba
--- /dev/null
+++ b/fla/ops/retention/fused_recurrent.py
@@ -0,0 +1,472 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous
+
+
+@triton.heuristics({
+ 'USE_INITIAL_STATE': lambda args: args['h0'] is not None,
+ 'STORE_FINAL_STATE': lambda args: args['ht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.jit
+def fused_recurrent_retention_fwd_kernel(
+ q,
+ k,
+ v,
+ o,
+ h0,
+ ht,
+ offsets,
+ scale,
+ B: tl.constexpr,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ REVERSE: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr,
+ STORE_FINAL_STATE: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_k, i_nh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_n, i_h = i_nh // H, i_nh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int64), tl.load(offsets + i_n + 1).to(tl.int64)
+ all = T
+ T = eos - bos
+ else:
+ bos, eos = i_n * T, i_n * T + T
+ all = B * T
+
+ # decay rate given the head index
+ b_b = (1 - tl.math.exp2(-5 - i_h * 1.0))
+
+ if HEAD_FIRST:
+ p_q = q + i_nh * T*K + i_k * BK + tl.arange(0, BK)
+ p_k = k + i_nh * T*K + i_k * BK + tl.arange(0, BK)
+ p_v = v + i_nh * T*V + i_v * BV + tl.arange(0, BV)
+ p_o = o + (i_k * B*H + i_nh) * T*V + i_v * BV + tl.arange(0, BV)
+ else:
+ p_q = q + (bos + ((T-1) if REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_k = k + (bos + ((T-1) if REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_v = v + (bos + ((T-1) if REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_o = o + ((i_k * all + bos) + ((T-1) if REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+
+ mask_k = (i_k * BK + tl.arange(0, BK)) < K
+ mask_v = (i_v * BV + tl.arange(0, BV)) < V
+ mask_h = mask_k[None, :] & mask_v[:, None]
+
+ b_h = tl.zeros([BV, BK], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h0 = h0 + i_nh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ b_h += tl.load(p_h0, mask=mask_h, other=0).to(tl.float32)
+
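+    # The loop below materializes the retention recurrence one step at a time:
+    #     S_t = gamma_h * S_{t-1} + k_t^T v_t,    o_t = (q_t * scale) S_t
+    # with S held as a [BV, BK] register tile (b_h) for this (i_k, i_v) block.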
+ for _ in range(0, T):
+ b_q = tl.load(p_q, mask=mask_k, other=0).to(tl.float32) * scale
+ b_k = tl.load(p_k, mask=mask_k, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_v, other=0).to(tl.float32)
+
+ b_h = b_b * b_h + b_k[None, :] * b_v[:, None]
+ b_o = b_h * b_q[None, :]
+ b_o = tl.sum(b_o, axis=1)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), mask=mask_v)
+
+ p_q += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * K
+ p_k += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * K
+ p_v += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * V
+ p_o += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * V
+
+ if STORE_FINAL_STATE:
+ p_ht = ht + i_nh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), mask=mask_h)
+
+
+@triton.heuristics({
+ 'USE_INITIAL_STATE': lambda args: args['h0'] is not None,
+ 'USE_FINAL_STATE_GRADIENT': lambda args: args['dht'] is not None,
+ 'USE_OFFSETS': lambda args: args['offsets'] is not None
+})
+@triton.jit
+def fused_recurrent_retention_bwd_kernel(
+ q,
+ k,
+ v,
+ h0,
+ do,
+ dq,
+ dk,
+ dv,
+ dh0,
+ dht,
+ offsets,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ REVERSE: tl.constexpr,
+ USE_INITIAL_STATE: tl.constexpr,
+ USE_FINAL_STATE_GRADIENT: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_k, i_nh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_n, i_h = i_nh // H, i_nh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_n).to(tl.int64), tl.load(offsets + i_n + 1).to(tl.int64)
+ all = T
+ T = eos - bos
+ else:
+ bos, eos = i_n * T, i_n * T + T
+ all = B * T
+
+ b_b = 1 - tl.math.exp2(-5 - i_h * 1.0)
+
+ if HEAD_FIRST:
+ p_k = k + i_nh * T*K + ((T-1) * K if REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ p_v = v + i_nh * T*V + ((T-1) * V if REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ p_do = do + i_nh * T*V + ((T-1) * V if REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ p_dq = dq + (i_v * B*H + i_nh) * T*K + ((T-1) * K if REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ else:
+ p_k = k + (bos + ((T-1) if REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_v = v + (bos + ((T-1) if REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_do = do + (bos + ((T-1) if REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_dq = dq + ((i_v * all + bos) + ((T-1) if REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ mask_k = i_k * BK + tl.arange(0, BK) < K
+ mask_v = i_v * BV + tl.arange(0, BV) < V
+ mask_h = mask_k[:, None] & mask_v[None, :]
+
+ b_h = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h0 = h0 + i_nh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ b_h += tl.load(p_h0, mask=mask_h, other=0).to(tl.float32)
+
+ for _ in range(0, T):
+ b_k = tl.load(p_k, mask=mask_k, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_v, other=0).to(tl.float32)
+ b_do = tl.load(p_do, mask=mask_v, other=0).to(tl.float32)
+
+ b_h = b_b * b_h + b_k[:, None] * b_v[None, :]
+ b_dq = tl.sum(b_h * b_do[None, :], axis=1) * scale
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), mask=mask_k)
+
+ p_k += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * K
+ p_v += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * V
+ p_do += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * V
+ p_dq += (-1 if REVERSE else 1) * (1 if HEAD_FIRST else H) * K
+
+ # sync threads
+ tl.debug_barrier()
+
+ if HEAD_FIRST:
+ p_q = q + i_nh * T*K + ((T - 1) * K if not REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ p_k = k + i_nh * T*K + ((T - 1) * K if not REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ p_v = v + i_nh * T*V + ((T - 1) * V if not REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ p_do = do + i_nh * T*V + ((T - 1) * V if not REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ p_dk = dk + (i_v * B*H + i_nh) * T*K + ((T - 1) * K if not REVERSE else 0) + i_k * BK + tl.arange(0, BK)
+ p_dv = dv + (i_k * B*H + i_nh) * T*V + ((T - 1) * V if not REVERSE else 0) + i_v * BV + tl.arange(0, BV)
+ else:
+ p_q = q + (bos + ((T - 1) if not REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_k = k + (bos + ((T - 1) if not REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_v = v + (bos + ((T - 1) if not REVERSE else 0))*H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_do = do + (bos + ((T - 1) if not REVERSE else 0))*H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+ p_dk = dk + ((i_v * all + bos) + ((T - 1) if not REVERSE else 0)) * H*K + i_h * K + i_k * BK + tl.arange(0, BK)
+ p_dv = dv + ((i_k * all + bos) + ((T - 1) if not REVERSE else 0)) * H*V + i_h * V + i_v * BV + tl.arange(0, BV)
+
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_FINAL_STATE_GRADIENT:
+ p_ht = dht + i_nh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ b_dh += tl.load(p_ht, mask=mask_h, other=0).to(tl.float32)
+
+ for _ in range(T):
+ b_q = tl.load(p_q, mask=mask_k, other=0).to(tl.float32) * scale
+ b_k = tl.load(p_k, mask=mask_k, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_v, other=0).to(tl.float32)
+ b_do = tl.load(p_do, mask=mask_v, other=0).to(tl.float32)
+
+ b_dh += b_q[:, None] * b_do[None, :]
+ b_dk = tl.sum(b_dh * b_v[None, :], axis=1)
+ b_dv = tl.sum(b_dh * b_k[:, None], axis=0)
+
+ b_dh *= b_b
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), mask=mask_k)
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), mask=mask_v)
+
+ p_q += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * K
+ p_k += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * K
+ p_v += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * V
+ p_do += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * V
+ p_dk += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * K
+ p_dv += (1 if REVERSE else -1) * (1 if HEAD_FIRST else H) * V
+
+ if USE_INITIAL_STATE:
+ p_dh0 = dh0 + i_nh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ tl.store(p_dh0, b_dh.to(p_dh0.dtype.element_ty), mask=mask_h)
+
+
+def fused_recurrent_retention_fwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ reverse: bool = False,
+ offsets: Optional[torch.Tensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ N = B if offsets is None else len(offsets) - 1
+ BK, BV = min(K, 64), min(V, 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+
+ h0 = initial_state
+ ht = q.new_empty(N, H, K, V, dtype=torch.float32) if output_final_state else None
+ o = q.new_empty(NK, *v.shape, dtype=torch.float)
+
+ grid = (NV, NK, N * H)
+ fused_recurrent_retention_fwd_kernel[grid](
+ q,
+ k,
+ v,
+ o,
+ h0,
+ ht,
+ offsets,
+ scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BK=BK,
+ BV=BV,
+ REVERSE=reverse,
+ HEAD_FIRST=head_first
+ )
+ o = o.sum(0)
+ return o.to(v.dtype), ht
+
+
+def fused_recurrent_retention_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ do: torch.Tensor,
+ dht: torch.Tensor,
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ reverse: bool = False,
+ offsets: Optional[torch.Tensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ N = B if offsets is None else len(offsets) - 1
+
+ BK, BV = min(K, 64), min(V, 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+
+ dq = q.new_empty(NV, *q.shape, dtype=torch.float)
+ dk = q.new_empty(NV, *k.shape, dtype=torch.float)
+ dv = q.new_empty(NK, *v.shape, dtype=torch.float)
+ h0 = initial_state
+ dh0 = torch.empty_like(initial_state) if initial_state is not None else None
+
+ grid = (NV, NK, N * H)
+ fused_recurrent_retention_bwd_kernel[grid](
+ q,
+ k,
+ v,
+ h0,
+ do,
+ dq,
+ dk,
+ dv,
+ dh0,
+ dht,
+ offsets,
+ scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BK=BK,
+ BV=BV,
+ REVERSE=reverse,
+ HEAD_FIRST=head_first
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dv = dv.sum(0)
+ return dq, dk, dv, dh0
+
+
+class FusedRecurrentRetentionFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(
+ ctx,
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+ ):
+ o, ht = fused_recurrent_retention_fwd(
+ q=q,
+ k=k,
+ v=v,
+ scale=scale,
+ initial_state=initial_state,
+ output_final_state=output_final_state,
+ reverse=reverse,
+ offsets=offsets,
+ head_first=head_first
+ )
+ ctx.save_for_backward(q, k, v, initial_state)
+ ctx.scale = scale
+ ctx.reverse = reverse
+ ctx.offsets = offsets
+ ctx.head_first = head_first
+ return o.to(v.dtype), ht
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, do, dht):
+ q, k, v, initial_state = ctx.saved_tensors
+ dq, dk, dv, dh0 = fused_recurrent_retention_bwd(
+ q=q,
+ k=k,
+ v=v,
+ do=do,
+ dht=dht,
+ scale=ctx.scale,
+ initial_state=initial_state,
+ reverse=ctx.reverse,
+ offsets=ctx.offsets,
+ head_first=ctx.head_first
+ )
+ return dq.to(q), dk.to(k), dv.to(v), None, dh0, None, None, None, None
+
+
+def fused_recurrent_retention(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+    scale: Optional[float] = None,
+    initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`
+        scale (Optional[float]):
+ Scale factor for the attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[N, H, K, V]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[N, H, K, V]`. Default: `False`.
+ reverse (Optional[bool]):
+            If `True`, process the sequence in reverse order (state passing runs backwards). Default: `False`.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format, which is not supported for variable-length inputs.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (torch.Tensor):
+ Final state of shape `[N, H, K, V]` if `output_final_state=True` else `None`.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+ >>> from fla.ops.retention import fused_recurrent_retention
+ # inputs with equal lengths
+ >>> B, T, H, K, V = 4, 2048, 4, 512, 512
+ >>> q = torch.randn(B, T, H, K, device='cuda')
+ >>> k = torch.randn(B, T, H, K, device='cuda')
+ >>> v = torch.randn(B, T, H, V, device='cuda')
+ >>> h0 = torch.randn(B, H, K, V, device='cuda')
+ >>> o, ht = fused_recurrent_retention(q, k, v,
+ initial_state=h0,
+ output_final_state=True,
+ head_first=False)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+ >>> q, k, v = map(lambda x: rearrange(x, 'b t h d -> 1 (b t) h d'), (q, k, v))
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, ht_var = fused_recurrent_retention(q, k, v,
+ initial_state=h0,
+ output_final_state=True,
+ offsets=offsets,
+ head_first=False)
+ >>> assert o.allclose(o_var.view(o.shape))
+ >>> assert ht.allclose(ht_var)
+ """
+ if offsets is not None:
+ if q.shape[0] != 1:
+ raise ValueError(f"The batch size is expected to be 1 rather than {q.shape[0]} when using `offsets`."
+ f"Please flatten variable-length inputs before processing.")
+ if head_first:
+ raise RuntimeError("Sequences with variable lengths are not supported for head-first mode")
+ if initial_state is not None and initial_state.shape[0] != len(offsets) - 1:
+ raise ValueError(f"The number of initial states is expected to be equal to the number of input sequences, "
+ f"i.e., {len(offsets) - 1} rather than {initial_state.shape[0]}.")
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ o, final_state = FusedRecurrentRetentionFunction.apply(
+ q,
+ k,
+ v,
+ scale,
+ initial_state,
+ output_final_state,
+ reverse,
+ offsets,
+ head_first
+ )
+ return o, final_state
diff --git a/fla/ops/retention/naive.py b/fla/ops/retention/naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..15611bf649779d2d956d2ab390b7d72dbb12201d
--- /dev/null
+++ b/fla/ops/retention/naive.py
@@ -0,0 +1,15 @@
+# -*- coding: utf-8 -*-
+
+import torch
+
+
+def naive_retention(q, k, v):
+ orig_type = q.dtype
+ q, k, v = q.float(), k.float(), v.float()
+ _, n_heads, seq_len, d_head = q.shape
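+    # per-head decay rates gamma_h = 1 - 2^(-5 - h), kept in log2 space, and
+    # the causal decay matrix D[i, j] = gamma_h^(i - j) for i >= j (else 0)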
+ s = (1 - q.new_tensor(2., dtype=torch.float).pow(-5. - q.new_tensor(range(n_heads), dtype=torch.float))).log2()
+ n = q.new_tensor(range(seq_len), dtype=torch.float)
+ n = torch.exp2((n.unsqueeze(-1) - n) * s.view(-1, 1, 1)) * n.unsqueeze(-1).ge(n)
+ s = torch.einsum('bhqd,bhkd,hqk->bhqk', q * d_head ** -0.5, k, n.to(q.dtype))
+ o = torch.einsum('bhqk,bhkd->bhqd', s, v)
+ return o.to(orig_type)
diff --git a/fla/ops/retention/parallel.py b/fla/ops/retention/parallel.py
new file mode 100644
index 0000000000000000000000000000000000000000..7321663e5a933596048a281c39190237ec77bee0
--- /dev/null
+++ b/fla/ops/retention/parallel.py
@@ -0,0 +1,509 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.heuristics({
+ 'NV': lambda args: triton.cdiv(args['V'], args['BV']),
+ 'OUTPUT_ATTENTIONS': lambda args: args['attn'] is not None
+})
+@triton.jit
+def parallel_retention_fwd_kernel(
+ q,
+ k,
+ v,
+ o,
+ attn,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NV: tl.constexpr,
+ OUTPUT_ATTENTIONS: tl.constexpr
+):
+ i_kv, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_k, i_v = i_kv // NV, i_kv % NV
+ i_h = i_bh % H
+ # decay rate given the head index
+ b_b = tl.math.log2(1 - tl.math.exp2(-5 - i_h * 1.0))
+ # cumulative decay from the end of the chunk
+ # [BS]
+ o_k = tl.arange(0, BS)
+ d_h = tl.math.exp2((BS - o_k) * b_b)
+
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (K, T), (1, K), (i_k * BK, 0), (BK, BS), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (0, i_v * BV), (BS, BV), (1, 0))
+ if OUTPUT_ATTENTIONS:
+ p_a = tl.make_block_ptr(attn + (i_k*B*H + i_bh) * T * T, (T, T), (T, 1), (i_t * BT, 0), (BT, BS), (1, 0))
+
+ # the Q block is kept in the shared memory throughout the whole kernel
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ # Q block and K block have no overlap
+ # no need for mask, thereby saving flops
+ for i in range(0, i_t * BT, BS):
+ # [BK, BS]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BS, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, BS]
+ b_s = tl.dot(b_q, b_k, allow_tf32=False) * d_h
+ # do this check to avoid some layout bugs
+        # [BT, BV]
+ if i > 0:
+ b_o = b_o * tl.math.exp2(b_b * BS)
+ b_o += tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)
+ p_k = tl.advance(p_k, (0, BS))
+ p_v = tl.advance(p_v, (BS, 0))
+ if OUTPUT_ATTENTIONS:
+ tl.store(p_a, b_s.to(p_a.dtype.element_ty), boundary_check=(0, 1))
+ p_a = tl.advance(p_a, (0, BS))
+
+ tl.debug_barrier()
+
+ o_q = tl.arange(0, BT)
+ d_q = tl.math.exp2(tl.arange(0, BT) * b_b)
+ # rescale interchunk output
+ b_o *= d_q[:, None]
+
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BS), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BS, BV), (1, 0))
+ if OUTPUT_ATTENTIONS:
+ p_a = tl.make_block_ptr(attn + (i_k*B*H + i_bh) * T * T, (T, T), (T, 1), (i_t * BT, i_t * BT), (BT, BS), (1, 0))
+
+ # Q block and K block have overlap.
+ # masks required
+ for _ in range(i_t * BT, (i_t + 1) * BT, BS):
+ # [BK, BS]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BS, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, BS]
+ m_s = o_q[:, None] >= o_k[None, :]
+ d_s = tl.where(m_s, tl.math.exp2((o_q[:, None] - o_k[None, :]) * b_b), 0)
+ b_s = tl.dot(b_q, b_k, allow_tf32=False) * d_s
+ # [BT, BV]
+ b_o += tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False)
+
+ if OUTPUT_ATTENTIONS:
+ tl.store(p_a, b_s.to(p_a.dtype.element_ty), boundary_check=(0, 1))
+ p_a = tl.advance(p_a, (0, BS))
+ p_k = tl.advance(p_k, (0, BS))
+ p_v = tl.advance(p_v, (BS, 0))
+ o_k += BS
+
+ p_o = tl.make_block_ptr(o + (i_bh + B * H * i_k) * T*V, (T, V), (V, 1), (i_t*BT, i_v*BV), (BT, BV), (1, 0))
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def parallel_retention_bwd_kernel_dq(
+ i_bh,
+ i_t,
+ i_k,
+ i_v,
+ i_h,
+ k,
+ v,
+ do,
+ dq,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+):
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (0, i_k * BK), (BS, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (V, T), (1, V), (i_v * BV, 0), (BV, BS), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+ # decay rate given the head index
+ b_b = tl.math.log2(1 - tl.math.exp2(-5 - i_h * 1.0))
+ # overall decay rate for an entire block
+ d_b = tl.math.exp2(b_b * BS)
+ # cumulative decay from the end of the chunk
+ d_h = tl.math.exp2((BS - tl.arange(0, BS)) * b_b)
+ for i in range(0, i_t * BT, BS):
+ # [BS, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BV, BS]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, BS]
+ b_ds = tl.dot(b_do, b_v, allow_tf32=False) * d_h[None, :]
+ # [BT, BK]
+ if i != 0:
+ b_dq *= d_b
+ b_dq += tl.dot(b_ds.to(b_v.dtype), b_k, allow_tf32=False)
+
+ p_k = tl.advance(p_k, (BS, 0))
+ p_v = tl.advance(p_v, (0, BS))
+ b_dq *= tl.math.exp2(tl.arange(0, BT) * b_b)[:, None] * scale
+
+ o_q = tl.arange(0, BT)
+ o_k = tl.arange(0, BS)
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BS, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (V, T), (1, V), (i_v * BV, i_t * BT), (BV, BS), (0, 1))
+ # Q block and K block have overlap. masks required
+ for _ in range(i_t * BT, (i_t + 1) * BT, BS):
+ # [BS, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BV, BS]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, BS]
+ m_s = o_q[:, None] >= o_k[None, :]
+ d_s = tl.where(m_s, tl.math.exp2((o_q[:, None] - o_k[None, :]) * b_b), 0)
+ b_ds = tl.dot(b_do, b_v, allow_tf32=False) * d_s * scale
+ # [BT, BK]
+ b_dq += tl.dot(b_ds.to(b_k.dtype), b_k, allow_tf32=False)
+
+ p_k = tl.advance(p_k, (BS, 0))
+ p_v = tl.advance(p_v, (0, BS))
+ o_k += BS
+ p_dq = tl.make_block_ptr(dq + (i_bh + B * H * i_v) * T*K, (T, K), (K, 1), (i_t*BT, i_k*BK), (BT, BK), (1, 0))
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def parallel_retention_bwd_kernel_dkv(
+ i_bh,
+ i_t,
+ i_k,
+ i_v,
+ i_h,
+ q,
+ k,
+ v,
+ do,
+ dk,
+ dv,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+):
+ # no overlap. no need for mask.
+ b_b = tl.math.log2(1 - tl.math.exp2(-5 - i_h * 1.0))
+ # overall decay rate for an entire block
+ d_b = tl.math.exp2(b_b * BS)
+ # compute dk dv
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ # [BT, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_dv = tl.zeros([BT, BV], dtype=tl.float32)
+
+ NTS = tl.cdiv(T, BS)
+ # [BT]
+ d_h = tl.math.exp2((BT - tl.arange(0, BT)) * b_b)
+ # [BT, BK]
+ b_kd = (b_k * d_h[:, None]).to(b_k.dtype)
+ # [BS]
+ d_q = tl.math.exp2(tl.arange(0, BS) * b_b)
+ for i in range(NTS * BS - BS, (i_t + 1) * BT - BS, -BS):
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i, i_k * BK), (BS, BK), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i, i_v * BV), (BS, BV), (1, 0))
+ # [BS, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ # [BS, BV]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_do = (b_do * d_q[:, None]).to(b_do.dtype)
+
+ b_dk *= d_b
+ b_dv *= d_b
+ # [BT, BS]
+ b_ds = tl.dot(b_v, tl.trans(b_do), allow_tf32=False)
+ b_s = tl.dot(b_kd, tl.trans(b_q), allow_tf32=False)
+ # [BT, BK]
+ b_dk += tl.dot(b_ds.to(b_q.dtype), b_q, allow_tf32=False)
+ # [BT, BV]
+ b_dv += tl.dot(b_s.to(b_do.dtype), b_do, allow_tf32=False)
+ b_dk *= d_h[:, None] * scale
+ b_dv *= scale
+
+ tl.debug_barrier()
+ o_q, o_k = tl.arange(0, BS), tl.arange(0, BT)
+ for i in range(i_t * BT, (i_t + 1) * BT, BS):
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i, i_k * BK), (BS, BK), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i, i_v * BV), (BS, BV), (1, 0))
+ # [BS, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ # [BS, BV]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [BT, BS]
+ m_s = o_k[:, None] <= o_q[None, :]
+ d_s = tl.where(m_s, tl.math.exp2((-o_k[:, None] + o_q[None, :]) * b_b.to(tl.float32)), 0) * scale
+
+ b_ds = tl.dot(b_v, tl.trans(b_do), allow_tf32=False) * d_s
+ b_s = tl.dot(b_k, tl.trans(b_q), allow_tf32=False) * d_s
+ # [BT, BK]
+ b_dk += tl.dot(b_ds.to(b_q.dtype), b_q, allow_tf32=False)
+ b_dv += tl.dot(b_s.to(b_q.dtype), b_do, allow_tf32=False)
+ o_q += BS
+ p_dk = tl.make_block_ptr(dk + (i_v * B * H + i_bh) * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_k * B * H + i_bh) * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.heuristics({
+ 'NV': lambda args: triton.cdiv(args['V'], args['BV'])
+})
+@triton.jit
+def parallel_retention_bwd_kernel(
+ q,
+ k,
+ v,
+ do,
+ dq,
+ dk,
+ dv,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NV: tl.constexpr
+):
+ i_kv, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_k, i_v = i_kv // NV, i_kv % NV
+ i_h = i_bh % H
+ parallel_retention_bwd_kernel_dq(
+ i_bh,
+ i_t,
+ i_k,
+ i_v,
+ i_h,
+ k,
+ v,
+ do,
+ dq,
+ scale,
+ B=B,
+ H=H,
+ T=T,
+ K=K,
+ V=V,
+ BT=BT,
+ BS=BS,
+ BK=BK,
+ BV=BV
+ )
+ tl.debug_barrier()
+ parallel_retention_bwd_kernel_dkv(
+ i_bh,
+ i_t,
+ i_k,
+ i_v,
+ i_h,
+ q,
+ k,
+ v,
+ do,
+ dk,
+ dv,
+ scale,
+ B,
+ H,
+ T,
+ K,
+ V,
+ BT,
+ BS,
+ BK,
+ BV
+ )
+
+
+def parallel_retention_fwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ scale: float,
+ output_attentions: bool = False
+):
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ BT, BS = 64, 32
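+    # wider K/V tiles (up to 256) are used on Hopper-class GPUs
+    # (compute capability >= 9); earlier architectures fall back to 128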
+ if torch.cuda.get_device_capability()[0] >= 9:
+ BK = min(256, triton.next_power_of_2(K))
+ BV = min(256, triton.next_power_of_2(V))
+ else:
+ BK = min(128, triton.next_power_of_2(K))
+ BV = min(128, triton.next_power_of_2(V))
+ NK = triton.cdiv(K, BK)
+ NV = triton.cdiv(V, BV)
+ assert BT % BS == 0
+
+ num_stages = 3 if K <= 64 else 2
+ num_warps = 4
+
+ grid = (NK * NV, triton.cdiv(T, BT), B * H)
+ o = torch.empty(NK, B, H, T, V, dtype=q.dtype, device=q.device)
+ attn = q.new_zeros(NK, B, H, T, T) if output_attentions else None
+ parallel_retention_fwd_kernel[grid](
+ q=q,
+ k=k,
+ v=v,
+ o=o,
+ attn=attn,
+ scale=scale,
+ B=B,
+ H=H,
+ T=T,
+ K=K,
+ V=V,
+ BT=BT,
+ BS=BS,
+ BK=BK,
+ BV=BV,
+ num_stages=num_stages,
+ num_warps=num_warps
+ )
+ o = o.sum(0)
+ if output_attentions:
+ attn = attn.sum(0)
+ return o, attn
+
+
+def parallel_retention_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ do: torch.Tensor,
+ scale: float,
+):
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ BT, BS = 64, 32
+ BK = min(128, triton.next_power_of_2(k.shape[-1]))
+ BV = min(128, triton.next_power_of_2(v.shape[-1]))
+ NK = triton.cdiv(K, BK)
+ NV = triton.cdiv(V, BV)
+ assert BT % BS == 0
+
+ num_stages = 3 if K <= 64 else 2
+ num_warps = 4
+
+ dq = torch.empty(NV, B, H, T, K, dtype=q.dtype, device=q.device)
+ dk = torch.empty(NV, B, H, T, K, dtype=q.dtype, device=q.device)
+ dv = torch.empty(NK, B, H, T, V, dtype=q.dtype, device=q.device)
+ grid = (NK * NV, triton.cdiv(T, BT), B * H)
+ parallel_retention_bwd_kernel[grid](
+ q=q,
+ k=k,
+ v=v,
+ do=do,
+ dq=dq,
+ dk=dk,
+ dv=dv,
+ scale=scale,
+ B=B,
+ H=H,
+ T=T,
+ K=K,
+ V=V,
+ BT=BT,
+ BS=BS,
+ BK=BK,
+ BV=BV,
+ num_stages=num_stages,
+ num_warps=num_warps
+ )
+ return dq, dk, dv
+
+
+class ParallelRetentionFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(ctx, q, k, v, scale, output_attentions):
+ o, attn = parallel_retention_fwd(q, k, v, scale, output_attentions)
+ ctx.save_for_backward(q, k, v)
+ ctx.scale = scale
+ return o.to(q.dtype), attn
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, d_attn=None):
+ q, k, v = ctx.saved_tensors
+ dq, dk, dv = parallel_retention_bwd(q, k, v, do, ctx.scale)
+ return dq.sum(0).to(q.dtype), dk.sum(0).to(k.dtype), dv.sum(0).to(v.dtype), None, None
+
+
+def parallel_retention(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+    scale: Optional[float] = None,
+ output_attentions: bool = False,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`
+        scale (Optional[float]):
+ Scale factor for the attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ output_attentions (bool):
+            Whether to output the materialized attention scores of shape `[B, H, T, T]`. Default: `False`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`
+ attn (torch.Tensor):
+ Attention scores of shape `[B, H, T, T]` if `output_attentions=True` else `None`
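+
+    Examples::
+        >>> # a minimal usage sketch; shapes below are illustrative only
+        >>> import torch
+        >>> from fla.ops.retention.parallel import parallel_retention
+        >>> B, H, T, K, V = 2, 4, 2048, 64, 64
+        >>> q = torch.randn(B, H, T, K, device='cuda')
+        >>> k = torch.randn(B, H, T, K, device='cuda')
+        >>> v = torch.randn(B, H, T, V, device='cuda')
+        >>> o, attn = parallel_retention(q, k, v, output_attentions=True)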
+ """
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ if not head_first:
+ q, k, v = map(lambda x: x.transpose(1, 2), (q, k, v))
+ o, attn = ParallelRetentionFunction.apply(q, k, v, scale, output_attentions)
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o, attn
diff --git a/fla/ops/rotary.py b/fla/ops/rotary.py
new file mode 100644
index 0000000000000000000000000000000000000000..0443685c1cda4f055d8d2c02737059cec1b6b47a
--- /dev/null
+++ b/fla/ops/rotary.py
@@ -0,0 +1,229 @@
+# Copyright (c) 2023, Tri Dao.
+# https://github.com/Dao-AILab/flash-attention/blob/main/flash_attn/ops/triton/rotary.py
+
+from typing import Optional, Union
+
+import torch
+import triton
+import triton.language as tl
+
+
+# @triton.autotune(
+# configs=[
+# triton.Config({"BLOCK_M": 2}),
+# triton.Config({"BLOCK_M": 4}),
+# triton.Config({"BLOCK_M": 8}),
+# triton.Config({"BLOCK_M": 16}),
+# ],
+# key=["CACHE_KEY_SEQLEN", "BLOCK_K", "INTERLEAVED"],
+# )
+@triton.jit
+def rotary_kernel(
+ OUT, # Pointers to matrices
+ X,
+ COS,
+ SIN,
+ CU_SEQLENS,
+ SEQLEN_OFFSETS, # this could be int or a pointer
+ # Matrix dimensions
+ seqlen,
+ nheads,
+ rotary_dim,
+ seqlen_ro,
+ CACHE_KEY_SEQLEN,
+ # strides
+ stride_out_batch,
+ stride_out_seqlen,
+ stride_out_nheads,
+ stride_out_headdim,
+ stride_x_batch,
+ stride_x_seqlen,
+ stride_x_nheads,
+ stride_x_headdim,
+ # Meta-parameters
+ BLOCK_K: tl.constexpr,
+ IS_SEQLEN_OFFSETS_TENSOR: tl.constexpr,
+ IS_VARLEN: tl.constexpr,
+ INTERLEAVED: tl.constexpr,
+ CONJUGATE: tl.constexpr,
+ BLOCK_M: tl.constexpr,
+):
+ pid_m = tl.program_id(axis=0)
+ pid_batch = tl.program_id(axis=1)
+ pid_head = tl.program_id(axis=2)
+ rotary_dim_half = rotary_dim // 2
+
+ if not IS_VARLEN:
+ X = X + pid_batch * stride_x_batch + pid_head * stride_x_nheads
+ OUT = OUT + pid_batch * stride_out_batch + pid_head * stride_out_nheads
+ else:
+ start_idx = tl.load(CU_SEQLENS + pid_batch)
+ seqlen = tl.load(CU_SEQLENS + pid_batch + 1) - start_idx
+ X = X + start_idx * stride_x_seqlen + pid_head * stride_x_nheads
+ OUT = OUT + start_idx * stride_out_seqlen + pid_head * stride_out_nheads
+
+ if pid_m * BLOCK_M >= seqlen:
+ return
+ rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
+ if not IS_SEQLEN_OFFSETS_TENSOR:
+ rm_cs = rm + SEQLEN_OFFSETS
+ else:
+ rm_cs = rm + tl.load(SEQLEN_OFFSETS + pid_batch)
+ rk = tl.arange(0, BLOCK_K)
+ rk_half = tl.arange(0, BLOCK_K // 2)
+
+ if not INTERLEAVED:
+ # Load the 1st and 2nd halves of X, do calculation, then store to 1st and 2nd halves of OUT
+ X = X + (rm[:, None] * stride_x_seqlen + rk_half[None, :] * stride_x_headdim)
+ COS = COS + (rm_cs[:, None] * rotary_dim_half + rk_half[None, :])
+ SIN = SIN + (rm_cs[:, None] * rotary_dim_half + rk_half[None, :])
+ cos = tl.load(COS, mask=(rm_cs[:, None] < seqlen_ro) & (rk_half[None, :] < rotary_dim_half), other=1.0).to(tl.float32)
+ sin = tl.load(SIN, mask=(rm_cs[:, None] < seqlen_ro) & (rk_half[None, :] < rotary_dim_half), other=0.0).to(tl.float32)
+ x0 = tl.load(X, mask=(rm[:, None] < seqlen) & (rk_half[None, :] < rotary_dim_half), other=0.0).to(tl.float32)
+ x1 = tl.load(
+ X + rotary_dim_half * stride_x_headdim,
+ mask=(rm[:, None] < seqlen) & (rk_half[None, :] < rotary_dim_half),
+ other=0.0,
+ ).to(tl.float32)
+ if CONJUGATE:
+ sin = -sin
+ o0 = x0 * cos - x1 * sin
+ o1 = x0 * sin + x1 * cos
+ # write back result
+ OUT = OUT + (rm[:, None] * stride_out_seqlen + rk_half[None, :] * stride_out_headdim)
+ tl.store(OUT, o0, mask=(rm[:, None] < seqlen) & (rk_half[None, :] < rotary_dim_half))
+ tl.store(
+ OUT + rotary_dim_half * stride_out_headdim,
+ o1,
+ mask=(rm[:, None] < seqlen) & (rk_half[None, :] < rotary_dim_half),
+ )
+ else:
+ # We don't want to load X[0, 2, 4, ...] and X[1, 3, 5, ...] separately since both are slow.
+ # Instead, we load x0 = X[0, 1, 2, 3, ...] and x1 = X[1, 0, 3, 2, ...].
+ # Loading x0 will be fast but x1 will be slow.
+ # Then we load cos = COS[0, 0, 1, 1, ...] and sin = SIN[0, 0, 1, 1, ...].
+        # Then we do the calculation and use tl.where to pick out the right outputs for the even
+        # and the odd indices.
+ rk_swap = rk + ((rk + 1) % 2) * 2 - 1 # 1, 0, 3, 2, 5, 4, ...
+ rk_repeat = tl.arange(0, BLOCK_K) // 2
+ X0 = X + (rm[:, None] * stride_x_seqlen + rk[None, :] * stride_x_headdim)
+ X1 = X + (rm[:, None] * stride_x_seqlen + rk_swap[None, :] * stride_x_headdim)
+ COS = COS + (rm_cs[:, None] * rotary_dim_half + rk_repeat[None, :])
+ SIN = SIN + (rm_cs[:, None] * rotary_dim_half + rk_repeat[None, :])
+ cos = tl.load(
+ COS,
+ mask=(rm_cs[:, None] < seqlen_ro) & (rk_repeat[None, :] < rotary_dim_half),
+ other=1.0,
+ ).to(tl.float32)
+ sin = tl.load(
+ SIN,
+ mask=(rm_cs[:, None] < seqlen_ro) & (rk_repeat[None, :] < rotary_dim_half),
+ other=0.0,
+ ).to(tl.float32)
+ x0 = tl.load(X0, mask=(rm[:, None] < seqlen) & (rk[None, :] < rotary_dim), other=0.0).to(tl.float32)
+ x1 = tl.load(X1, mask=(rm[:, None] < seqlen) & (rk_swap[None, :] < rotary_dim), other=0.0).to(tl.float32)
+ if CONJUGATE:
+ sin = -sin
+ x0_cos = x0 * cos
+ x1_sin = x1 * sin
+ out = tl.where(rk[None, :] % 2 == 0, x0_cos - x1_sin, x0_cos + x1_sin)
+ OUT = OUT + (rm[:, None] * stride_out_seqlen + rk[None, :] * stride_out_headdim)
+ tl.store(OUT, out, mask=(rm[:, None] < seqlen) & (rk[None, :] < rotary_dim))
+
+
+def apply_rotary(
+ x: torch.Tensor,
+ cos: torch.Tensor,
+ sin: torch.Tensor,
+ seqlen_offsets: Union[int, torch.Tensor] = 0,
+ cu_seqlens: Optional[torch.Tensor] = None,
+ max_seqlen: Optional[int] = None,
+ interleaved: bool = False,
+ inplace: bool = False,
+ conjugate: bool = False,
+) -> torch.Tensor:
+ """
+ Arguments:
+ x: (batch, seqlen, nheads, headdim) if cu_seqlens is None
+ else (total_seqlen, nheads, headdim).
+ cos: (seqlen_ro, rotary_dim / 2)
+ sin: (seqlen_ro, rotary_dim / 2)
+ seqlen_offsets: integer or integer tensor of size (batch,)
+ cu_seqlens: (batch + 1,) or None
+ max_seqlen: int
+ Returns:
+ y: (batch, seqlen, nheads, headdim)
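+    Example:
+        A minimal sketch (here rotary_dim equals headdim, with a standard 10000-base frequency schedule):
+            x = torch.randn(2, 128, 8, 64, device='cuda', dtype=torch.float16)
+            t = torch.arange(128, device='cuda', dtype=torch.float32)
+            inv_freq = 10000.0 ** (-torch.arange(0, 32, device='cuda', dtype=torch.float32) / 32)
+            freqs = torch.outer(t, inv_freq)
+            y = apply_rotary(x, freqs.cos().half(), freqs.sin().half())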
+ """
+ is_varlen = cu_seqlens is not None
+ if not is_varlen:
+ batch, seqlen, nheads, headdim = x.shape
+ else:
+ assert max_seqlen is not None, "If cu_seqlens is passed in, then max_seqlen must be passed"
+ _, nheads, headdim = x.shape
+ batch_p_1 = cu_seqlens.shape[0]
+ batch = batch_p_1 - 1
+ seqlen = max_seqlen
+ seqlen_ro, rotary_dim = cos.shape
+ assert sin.shape == cos.shape
+ rotary_dim *= 2
+ assert rotary_dim <= headdim, "rotary_dim must be <= headdim"
+ assert headdim <= 256, "Only support headdim <= 256"
+ assert seqlen_ro >= seqlen, "seqlen_ro must be >= seqlen"
+
+ assert cos.dtype == sin.dtype, f"cos and sin must have the same dtype, got {cos.dtype} and {sin.dtype}"
+ assert x.dtype == cos.dtype, f"Input and cos/sin must have the same dtype, got {x.dtype} and {cos.dtype}"
+
+ cos, sin = cos.contiguous(), sin.contiguous()
+ if isinstance(seqlen_offsets, torch.Tensor):
+ assert seqlen_offsets.shape == (batch,)
+ assert seqlen_offsets.dtype in [torch.int32, torch.int64]
+ seqlen_offsets = seqlen_offsets.contiguous()
+ else:
+ assert seqlen_offsets + seqlen <= seqlen_ro
+
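+    # head dims beyond `rotary_dim` are passed through unchanged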
+ output = torch.empty_like(x) if not inplace else x
+ if rotary_dim < headdim and not inplace:
+ output[..., rotary_dim:].copy_(x[..., rotary_dim:])
+
+ BLOCK_K = (
+ 32
+ if rotary_dim <= 32
+ else (64 if rotary_dim <= 64 else (128 if rotary_dim <= 128 else 256))
+ )
+ def grid(META): return (triton.cdiv(seqlen, META["BLOCK_M"]), batch, nheads) # noqa
+ BLOCK_M = 4 if interleaved else (8 if rotary_dim <= 64 else 4)
+
+ # Need this, otherwise Triton tries to launch from cuda:0 and we get
+ # ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)
+ with torch.cuda.device(x.device.index):
+ rotary_kernel[grid](
+ output, # data ptrs
+ x,
+ cos,
+ sin,
+ cu_seqlens,
+ seqlen_offsets,
+ seqlen, # shapes
+ nheads,
+ rotary_dim,
+ seqlen_ro,
+ # key for triton cache (limit number of compilations)
+ seqlen // 128,
+ # batch_strides if not varlen else 0
+ output.stride(0) if not is_varlen else 0,
+ output.stride(-3), # seqlen_stride or total_seqlen_stride
+ output.stride(-2), # nheads_stride
+ output.stride(-1), # headdim_stride
+ # batch_strides if not varlen else 0
+ x.stride(0) if not is_varlen else 0,
+ x.stride(-3), # seqlen stride or total_seqlen_stride
+ x.stride(-2), # nheads stride
+ x.stride(-1), # headdim stride
+ BLOCK_K,
+ isinstance(seqlen_offsets, torch.Tensor),
+ is_varlen,
+ interleaved,
+ conjugate,
+ BLOCK_M,
+ )
+ return output
diff --git a/fla/ops/rwkv4/__init__.py b/fla/ops/rwkv4/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..49de2cf83aeec67069b67e0972cfccef8a81383a
--- /dev/null
+++ b/fla/ops/rwkv4/__init__.py
@@ -0,0 +1,7 @@
+# -*- coding: utf-8 -*-
+
+from .fused_recurrent import fused_recurrent_rwkv4
+
+__all__ = [
+ 'fused_recurrent_rwkv4'
+]
diff --git a/fla/ops/rwkv4/fused_recurrent.py b/fla/ops/rwkv4/fused_recurrent.py
new file mode 100644
index 0000000000000000000000000000000000000000..27f8adf28e0f55a8e0a8ae2170639086c2de02fc
--- /dev/null
+++ b/fla/ops/rwkv4/fused_recurrent.py
@@ -0,0 +1,484 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Any, cast
+
+import torch
+import triton
+import triton.language as tl
+from torch import Tensor
+from torch.autograd.function import Function, FunctionCtx, once_differentiable
+
+
+def get_block_size_c(chans: int) -> int:
+ if chans < 32:
+ return 32
+ if chans < 64:
+ return 64
+ return 128
+
+
+@triton.jit
+def fused_recurrent_rwkv4_forward_kernel(
+ # W
+ w_ptr,
+ w_s_c,
+ # U
+ u_ptr,
+ u_s_c,
+ # K
+ k_ptr,
+ k_s_b,
+ k_s_t,
+ k_s_c,
+ # V
+ v_ptr,
+ v_s_b,
+ v_s_t,
+ v_s_c,
+ # State
+ state_ptr,
+ state_s_b,
+ state_s_abe,
+ state_s_c,
+ # WKV
+ wkv_ptr,
+ wkv_s_b,
+ wkv_s_t,
+ wkv_s_c,
+ # Output state
+ state_out_ptr,
+ state_out_s_b,
+ state_out_s_abe,
+ state_out_s_t,
+ state_out_s_c,
+ # Params
+ chans,
+ tsz,
+ BLOCK_SIZE_C: tl.constexpr,
+):
+ # Parallelize over the batch dimension.
+ b_idx = tl.program_id(0)
+ c_idx = tl.program_id(1)
+
+ cs = (c_idx * BLOCK_SIZE_C) + tl.arange(0, BLOCK_SIZE_C)
+ cmask = cs < chans
+
+ # Pointers to the batch (and possibly channel) for the input tensors.
+ k_ptr = k_ptr + b_idx * k_s_b
+ v_ptr = v_ptr + b_idx * v_s_b
+ alpha_ptr = state_ptr + b_idx * state_s_b
+ beta_ptr = state_ptr + b_idx * state_s_b + state_s_abe
+ eps_ptr = state_ptr + b_idx * state_s_b + 2 * state_s_abe
+
+ # Pointers to the batch (and possibly channel) for the output tensors.
+ wkv_ptr = wkv_ptr + b_idx * wkv_s_b
+ alpha_out_ptr = state_out_ptr + b_idx * state_out_s_b
+ beta_out_ptr = state_out_ptr + b_idx * state_out_s_b + state_out_s_abe
+ eps_out_ptr = state_out_ptr + b_idx * state_out_s_b + 2 * state_out_s_abe
+
+ # Loads parameters.
+ alpha = tl.load(alpha_ptr + cs * state_s_c, mask=cmask).to(tl.float32)
+ beta = tl.load(beta_ptr + cs * state_s_c, mask=cmask).to(tl.float32)
+ eps = tl.load(eps_ptr + cs * state_s_c, mask=cmask).to(tl.float32)
+ w = tl.load(w_ptr + cs * w_s_c, mask=cmask).to(tl.float32)
+ u = tl.load(u_ptr + cs * u_s_c, mask=cmask).to(tl.float32)
+
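+    # the loop keeps the recurrence numerically stable: `eps` tracks a running
+    # maximum of the exponents, so every exp() below sees a non-positive argument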
+ for t in range(tsz):
+ kt = tl.load(k_ptr + t * k_s_t + cs * k_s_c, mask=cmask).to(tl.float32)
+ vt = tl.load(v_ptr + t * v_s_t + cs * v_s_c, mask=cmask).to(tl.float32)
+
+ ukt = u + kt
+ tau = tl.maximum(ukt, eps)
+ e1a = tl.exp(eps - tau)
+ e2a = tl.exp(ukt - tau)
+ wkv = (e1a * alpha + e2a * vt) / (e1a * beta + e2a)
+ tl.store(wkv_ptr + t * wkv_s_t + cs * wkv_s_c, wkv, mask=cmask)
+
+ w_eps = w + eps
+ eps = tl.maximum(w_eps, kt)
+ e1b = tl.exp(w_eps - eps)
+ e2b = tl.exp(kt - eps)
+ alpha = e1b * alpha + e2b * vt
+ beta = e1b * beta + e2b
+ tl.store(alpha_out_ptr + t * state_out_s_t + cs * state_out_s_c, alpha, mask=cmask)
+ tl.store(beta_out_ptr + t * state_out_s_t + cs * state_out_s_c, beta, mask=cmask)
+ tl.store(eps_out_ptr + t * state_out_s_t + cs * state_out_s_c, eps, mask=cmask)
+
+
+def fused_recurrent_rwkv4_forward(
+ w: Tensor,
+ u: Tensor,
+ k: Tensor,
+ v: Tensor,
+ state: Tensor,
+) -> tuple[Tensor, Tensor]:
+ (bsz, tsz, chans) = k.shape
+
+ # New tensors to output.
+ wkvs = k.new_empty(bsz, tsz, chans)
+ state_out = k.new_empty(bsz, 3, tsz, chans)
+
+ # Constants.
+ block_size_c = get_block_size_c(chans)
+
+ def grid(meta: dict[str, Any]) -> tuple[int, ...]:
+ return (bsz, triton.cdiv(chans, meta["BLOCK_SIZE_C"]))
+
+ fused_recurrent_rwkv4_forward_kernel[grid](
+ # W
+ w,
+ w.stride(0),
+ # U
+ u,
+ u.stride(0),
+ # K
+ k,
+ k.stride(0),
+ k.stride(1),
+ k.stride(2),
+ # V
+ v,
+ v.stride(0),
+ v.stride(1),
+ v.stride(2),
+ # State
+ state,
+ state.stride(0),
+ state.stride(1),
+ state.stride(3),
+ # WKV
+ wkvs,
+ wkvs.stride(0),
+ wkvs.stride(1),
+ wkvs.stride(2),
+ # Output state
+ state_out,
+ state_out.stride(0),
+ state_out.stride(1),
+ state_out.stride(2),
+ state_out.stride(3),
+ # Params
+ chans,
+ tsz,
+ BLOCK_SIZE_C=block_size_c,
+ )
+
+ state_out = torch.cat((state, state_out), dim=2)
+
+ return wkvs, state_out
+
+
+@triton.jit
+def fused_recurrent_rwkv4_backward_kernel(
+ # W
+ w_ptr,
+ w_s_c,
+ # U
+ u_ptr,
+ u_s_c,
+ # K
+ k_ptr,
+ k_s_b,
+ k_s_t,
+ k_s_c,
+ # V
+ v_ptr,
+ v_s_b,
+ v_s_t,
+ v_s_c,
+ # State
+ state_ptr,
+ state_s_b,
+ state_s_abe,
+ state_s_t,
+ state_s_c,
+ # WKV grad
+ gwkv_ptr,
+ gwkv_s_b,
+ gwkv_s_t,
+ gwkv_s_c,
+ # Output state grad
+ gstate_out_ptr,
+ gstate_out_s_b,
+ gstate_out_s_abe,
+ gstate_out_s_c,
+ # W grad
+ gw_ptr,
+ gw_s_c,
+ # U grad
+ gu_ptr,
+ gu_s_c,
+ # K grad
+ gk_ptr,
+ gk_s_b,
+ gk_s_t,
+ gk_s_c,
+ # V grad
+ gv_ptr,
+ gv_s_b,
+ gv_s_t,
+ gv_s_c,
+ # State grad
+ gstate_ptr,
+ gstate_s_b,
+ gstate_s_abe,
+ gstate_s_c,
+ # Params
+ tsz,
+ chans,
+ BLOCK_SIZE_C: tl.constexpr,
+):
+ # Parallelize over the batch dimension.
+ b_idx = tl.program_id(0)
+ c_idx = tl.program_id(1)
+
+ cs = (c_idx * BLOCK_SIZE_C) + tl.arange(0, BLOCK_SIZE_C)
+ cmask = cs < chans
+
+ # Pointers to the batch (and possibly channel) for the input tensors.
+ k_ptr = k_ptr + b_idx * k_s_b
+ v_ptr = v_ptr + b_idx * v_s_b
+ alpha_ptr = state_ptr + b_idx * state_s_b
+ beta_ptr = state_ptr + b_idx * state_s_b + state_s_abe
+ eps_ptr = state_ptr + b_idx * state_s_b + 2 * state_s_abe
+
+ # Pointers to the batch (and possibly channel) for the output tensors.
+ gk_ptr = gk_ptr + b_idx * gk_s_b
+ gv_ptr = gv_ptr + b_idx * gv_s_b
+
+    # Pointers to gradients which were received by the function.
+ gwkv_ptr = gwkv_ptr + b_idx * gwkv_s_b
+ galpha_out_ptr = gstate_out_ptr + b_idx * gstate_out_s_b
+ gbeta_out_ptr = gstate_out_ptr + b_idx * gstate_out_s_b + gstate_out_s_abe
+ geps_out_ptr = gstate_out_ptr + b_idx * gstate_out_s_b + 2 * gstate_out_s_abe
+
+ # Loads parameters.
+ galpha = tl.load(galpha_out_ptr + gstate_out_s_c * cs, mask=cmask).to(tl.float32)
+ gbeta = tl.load(gbeta_out_ptr + gstate_out_s_c * cs, mask=cmask).to(tl.float32)
+ geps = tl.load(geps_out_ptr + gstate_out_s_c * cs, mask=cmask).to(tl.float32)
+ w = tl.load(w_ptr + w_s_c * cs, mask=cmask).to(tl.float32)
+ u = tl.load(u_ptr + u_s_c * cs, mask=cmask).to(tl.float32)
+
+ # Gradient accumulators.
+ gw = tl.zeros_like(w)
+ gu = tl.zeros_like(u)
+
+ alpha_prev = tl.load(alpha_ptr + tsz * state_s_t + state_s_c * cs, mask=cmask).to(tl.float32)
+ beta_prev = tl.load(beta_ptr + tsz * state_s_t + state_s_c * cs, mask=cmask).to(tl.float32)
+ eps_prev = tl.load(eps_ptr + tsz * state_s_t + state_s_c * cs, mask=cmask).to(tl.float32)
+
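+    # walk the sequence backwards, reloading the saved (alpha, beta, eps) states
+    # and accumulating gradients for w, u, k, v and the initial state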
+ for t in range(tsz):
+ tc = tsz - t - 1
+
+ kt = tl.load(k_ptr + tc * k_s_t + k_s_c * cs, mask=cmask).to(tl.float32)
+ vt = tl.load(v_ptr + tc * v_s_t + v_s_c * cs, mask=cmask).to(tl.float32)
+
+ alpha_curr = alpha_prev
+ beta_curr = beta_prev
+ eps_curr = eps_prev
+
+ alpha_prev = tl.load(alpha_ptr + tc * state_s_t + state_s_c * cs, mask=cmask).to(tl.float32)
+ beta_prev = tl.load(beta_ptr + tc * state_s_t + state_s_c * cs, mask=cmask).to(tl.float32)
+ eps_prev = tl.load(eps_ptr + tc * state_s_t + state_s_c * cs, mask=cmask).to(tl.float32)
+
+ ukt = u + kt
+ tau = tl.maximum(ukt, eps_prev)
+ e1 = tl.exp(eps_prev - tau)
+ e2 = tl.exp(ukt - tau)
+
+ euke = tl.exp(ukt + eps_prev - 2 * tau)
+
+ denom = e1 * beta_prev + e2
+ denom_sq = denom * denom
+
+ gwkvt = tl.load(gwkv_ptr + tc * gwkv_s_t + gwkv_s_c * cs, mask=cmask).to(tl.float32)
+
+ # Backpropagates wkv gradients.
+ guk = gwkvt * e2 * (e1 * beta_prev * vt - e1 * alpha_prev) / denom_sq
+ gu += guk
+ gk = guk
+ gv = gwkvt * e2 / denom
+
+ galpha_wkv = gwkvt * e1 / denom
+ gbeta_wkv = -gwkvt * e1 * (e2 * vt + e1 * alpha_prev) / denom_sq
+ geps_wkv_denom = e1 * beta_prev + e2
+ geps_wkv = gwkvt * euke * (alpha_prev - vt * beta_prev) / (geps_wkv_denom * geps_wkv_denom)
+
+ e1 = tl.exp(w + eps_prev - eps_curr)
+ e2 = tl.exp(kt - eps_curr)
+
+ # Backpropagates alpha gradients.
+ galpha_we = galpha * e1 * alpha_prev
+ gw += galpha_we
+ gk += galpha * e2 * vt
+ gv += galpha * e2
+ geps += galpha * -alpha_curr
+
+ # Backpropagates beta gradients.
+ gbeta_we = gbeta * e1 * beta_prev
+ gw += gbeta_we
+ gk += gbeta * e2
+ geps += gbeta * -beta_curr
+
+ # Backpropagates epsilon gradients.
+ geps_mask = w + eps_prev > kt
+ geps_we = tl.where(geps_mask, geps, tl.zeros_like(geps))
+ gw += geps_we
+ gk += tl.where(geps_mask, tl.zeros_like(geps), geps)
+
+ # Stores the gradients for k and v.
+ tl.store(gk_ptr + tc * gk_s_t + gk_s_c * cs, gk, mask=cmask)
+ tl.store(gv_ptr + tc * gv_s_t + gv_s_c * cs, gv, mask=cmask)
+
+ # Computes new gradients for alpha and beta.
+ galpha = galpha * e1 + galpha_wkv
+ gbeta = gbeta * e1 + gbeta_wkv
+ geps = galpha_we + gbeta_we + geps_we + geps_wkv
+
+ # Stores final gradients for alpha and beta.
+ galpha_ptr = gstate_ptr + b_idx * gstate_s_b
+ gbeta_ptr = gstate_ptr + b_idx * gstate_s_b + gstate_s_abe
+ geps_ptr = gstate_ptr + b_idx * gstate_s_b + 2 * gstate_s_abe
+ tl.store(galpha_ptr + gstate_s_c * cs, galpha, mask=cmask)
+ tl.store(gbeta_ptr + gstate_s_c * cs, gbeta, mask=cmask)
+ tl.store(geps_ptr + gstate_s_c * cs, geps, mask=cmask)
+
+ # Stores final gradients for w and u.
+ gw_temp = tl.load(gw_ptr + gw_s_c * cs, mask=cmask).to(tl.float32)
+ gw_temp += gw
+ tl.store(gw_ptr + gw_s_c * cs, gw_temp, mask=cmask)
+ gu_temp = tl.load(gu_ptr + gu_s_c * cs, mask=cmask).to(tl.float32)
+ gu_temp += gu
+ tl.store(gu_ptr + gu_s_c * cs, gu_temp, mask=cmask)
+
+
+def fused_recurrent_rwkv4_backward(
+ w: Tensor,
+ u: Tensor,
+ k: Tensor,
+ v: Tensor,
+ state: Tensor,
+ grad_wkv: Tensor,
+ grad_state: Tensor,
+) -> tuple[Tensor, Tensor, Tensor, Tensor, Tensor]:
+ bsz, tsz, chans = k.shape
+
+ gw = torch.zeros_like(w) # New tensors to output.
+ gu = torch.zeros_like(u)
+ gk = torch.empty_like(k)
+ gv = torch.empty_like(v)
+ gstate = k.new_empty(bsz, 3, 1, chans)
+
+ block_size_c = get_block_size_c(chans) # Constants.
+
+ def grid(meta: dict[str, Any]) -> tuple[int, ...]:
+ return (bsz, triton.cdiv(chans, meta["BLOCK_SIZE_C"]))
+
+ fused_recurrent_rwkv4_backward_kernel[grid](
+ # W
+ w,
+ w.stride(0),
+ # U
+ u,
+ u.stride(0),
+ # K
+ k,
+ k.stride(0),
+ k.stride(1),
+ k.stride(2),
+ # V
+ v,
+ v.stride(0),
+ v.stride(1),
+ v.stride(2),
+ # State
+ state,
+ state.stride(0),
+ state.stride(1),
+ state.stride(2),
+ state.stride(3),
+ # WKV grad
+ grad_wkv,
+ grad_wkv.stride(0),
+ grad_wkv.stride(1),
+ grad_wkv.stride(2),
+ # Output state grad
+ grad_state,
+ grad_state.stride(0),
+ grad_state.stride(1),
+ grad_state.stride(3),
+ # W grad
+ gw,
+ gw.stride(0),
+ # U grad
+ gu,
+ gu.stride(0),
+ # K grad
+ gk,
+ gk.stride(0),
+ gk.stride(1),
+ gk.stride(2),
+ # V grad
+ gv,
+ gv.stride(0),
+ gv.stride(1),
+ gv.stride(2),
+ # State grad
+ gstate,
+ gstate.stride(0),
+ gstate.stride(1),
+ gstate.stride(3),
+ # Params
+ tsz,
+ chans,
+ BLOCK_SIZE_C=block_size_c,
+ )
+
+ return gw, gu, gk, gv, gstate
+
+
+class FusedRecurrentRWKV4Function(Function):
+ @staticmethod
+ def forward(
+ ctx: FunctionCtx,
+ w: Tensor,
+ u: Tensor,
+ k: Tensor,
+ v: Tensor,
+ state: Tensor,
+ ) -> tuple[Tensor, Tensor]:
+ ctx.input_dtype = k.dtype
+
+ if (
+ w.device.type != "cuda"
+ or u.device.type != "cuda"
+ or k.device.type != "cuda"
+ or v.device.type != "cuda"
+ ):
+ raise ValueError(
+ "Calling the CUDA kernel for wkv attention requires all tensors to be on CUDA devices."
+ )
+
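+        # the decay parameter is mapped to w = -exp(w), so the per-step decay
+        # factor exp(w) stays in (0, 1)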
+ w = -torch.exp(w.float().contiguous())
+ if k.dtype == torch.float16:
+ u = u.float()
+ k = k.float()
+ v = v.float()
+ u = u.contiguous()
+ k = k.contiguous()
+ v = v.contiguous()
+ wkv, state_out = fused_recurrent_rwkv4_forward(w, u, k, v, state)
+ ctx.save_for_backward(w, u, k, v, state_out[:, :, :-1])
+ return wkv, state_out[:, :, -1:]
+
+ @staticmethod
+ @once_differentiable
+ def backward(ctx: FunctionCtx, gwkv: Tensor, gstate: Tensor) -> tuple[Tensor, Tensor, Tensor, Tensor, Tensor]:
+ w, u, k, v, state = cast(tuple[Tensor, ...], ctx.saved_tensors)
+ gw, gu, gk, gv, gstate = fused_recurrent_rwkv4_backward(w, u, k, v, state, gwkv, gstate)
+ return gw, gu, gk, gv, gstate
+
+
+def fused_recurrent_rwkv4(w: Tensor, u: Tensor, k: Tensor, v: Tensor, state: Tensor) -> tuple[Tensor, Tensor]:
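+    # shapes (inferred from the kernels above): w and u are (chans,); k and v are
+    # (batch, tsz, chans); state is typically (batch, 3, 1, chans) holding
+    # (alpha, beta, eps); returns wkv of shape (batch, tsz, chans) and the final
+    # state of shape (batch, 3, 1, chans)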
+ return FusedRecurrentRWKV4Function.apply(w, u, k, v, state)
diff --git a/fla/ops/rwkv6/__init__.py b/fla/ops/rwkv6/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..b3c7c218eb873a1a2115b5587530fe55f29a9d02
--- /dev/null
+++ b/fla/ops/rwkv6/__init__.py
@@ -0,0 +1,9 @@
+# -*- coding: utf-8 -*-
+
+from .chunk import chunk_rwkv6
+from .fused_recurrent import fused_recurrent_rwkv6
+
+__all__ = [
+ 'chunk_rwkv6',
+ 'fused_recurrent_rwkv6'
+]
diff --git a/fla/ops/rwkv6/chunk.py b/fla/ops/rwkv6/chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..444e6cb1d9a102ed91fe31d07cba107b77896bb0
--- /dev/null
+++ b/fla/ops/rwkv6/chunk.py
@@ -0,0 +1,936 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.common.chunk_h import chunk_fwd_h
+from fla.ops.gla.chunk import chunk_gla_bwd_dA, chunk_gla_bwd_dv
+from fla.utils import contiguous
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({'BS': 16}, num_warps=2),
+ triton.Config({'BS': 16}, num_warps=4),
+ triton.Config({'BS': 16}, num_warps=8),
+ triton.Config({'BS': 32}, num_warps=2),
+ triton.Config({'BS': 32}, num_warps=4),
+ triton.Config({'BS': 32}, num_warps=8),
+ triton.Config({'BS': 64}, num_warps=2),
+ triton.Config({'BS': 64}, num_warps=4),
+ triton.Config({'BS': 64}, num_warps=8),
+ ],
+ key=['S']
+)
+@triton.jit
+def chunk_rwkv6_fwd_cumsum_kernel(
+ s,
+ o,
+ o_minus_s,
+ s_s_h,
+ s_s_t,
+ s_s_d,
+ T: tl.constexpr,
+ S: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr
+):
+ i_s, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ o_i = tl.arange(0, BT)
+ m_s = tl.where(o_i[:, None] >= o_i[None, :], 1., 0.)
+
+ p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ p_o = tl.make_block_ptr(o + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ p_o_minus_s = tl.make_block_ptr(o_minus_s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ # [BT, BS]
+ b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32)
+ b_o = tl.dot(m_s, b_s, allow_tf32=False)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_o_minus_s, (b_o - b_s).to(p_o_minus_s.dtype.element_ty), boundary_check=(0, 1))
+
+
+def chunk_rwkv6_fwd_cumsum(g, BT):
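+    # per-chunk cumulative sums of the log decays `g`: `gi` is the inclusive
+    # cumsum and `ge = gi - g` the exclusive one, both computed by the kernel
+    # above via a lower-triangular matmul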
+ B, H, T, K = g.shape
+ NT = triton.cdiv(T, BT)
+    gi, ge = torch.empty_like(g, dtype=torch.float), torch.empty_like(g, dtype=torch.float)
+    def grid(meta): return (triton.cdiv(meta['S'], meta['BS']), NT, B * H)
+ chunk_rwkv6_fwd_cumsum_kernel[grid](
+ g, gi, ge,
+ g.stride(1), g.stride(2), g.stride(3),
+ T=T,
+ S=K,
+ BT=BT
+ )
+ return gi, ge
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BC", "BK"],
+)
+@triton.jit
+def chunk_rwkv6_fwd_A_kernel_intra_sub_inter(
+ q,
+ k,
+ gi, # cumulative decay inclusive
+ ge, # cumulative decay exclusive
+ A,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BK: tl.constexpr,
+ NC: tl.constexpr
+):
+ i_t, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_i, i_j = i_c // NC, i_c % NC
+ if i_i <= i_j:
+ return
+ if i_t * BT + i_i * BC >= T:
+ return
+ b_A = tl.zeros([BC, BC], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+        # q block exclusive
+ p_gq = tl.make_block_ptr(ge + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1))
+ # k block inclusive
+ p_gk = tl.make_block_ptr(gi + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1))
+ # the last position of the k block inclusive
+ p_gn = tl.make_block_ptr(gi + i_bh * s_k_h, (T * K,), (s_k_d,),
+ ((i_t * BT + i_j * BC + BC - 1) * K + i_k * BK,), (BK,), (0,))
+ # [BK,]
+ b_gn = tl.load(p_gn, boundary_check=(0,))
+ # [BC, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_gq = tl.load(p_gq, boundary_check=(0, 1))
+ b_qg = (b_q * tl.exp(b_gq - b_gn[None, :]) * scale)
+ # [BK, BC]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_gk = tl.load(p_gk, boundary_check=(0, 1))
+ b_kg = (b_k * tl.exp(b_gn[:, None] - b_gk))
+ # [BC, BC] using tf32 to improve precision here.
+ b_A += tl.dot(b_qg, b_kg)
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0))
+ tl.store(p_A, b_A.to(A.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BK", "BT"],
+)
+@triton.jit
+def chunk_rwkv6_fwd_A_kernel_intra_sub_intra(
+ q,
+ k,
+ gi,
+ ge,
+ u,
+ A,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ scale,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BK: tl.constexpr,
+ NC: tl.constexpr
+):
+ i_t, i_i, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ if i_t * BT + i_i * BC >= T:
+ return
+
+ i_j = i_i
+ i_h = i_bh % H
+ o_i = tl.arange(0, BC)
+ o_A = i_bh * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_j * BC
+ m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T
+ i_k = 0
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_g = tl.make_block_ptr(ge + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+    p_u = tl.make_block_ptr(u + i_h * s_k_t, (s_k_t,), (1,), (i_k * BK,), (BK,), (0,))
+ b_u = tl.load(p_u, boundary_check=(0,))
+
+ for j in range(0, min(BC, T-i_t*BT-i_i*BC)):
+ b_A = tl.zeros([BC], dtype=tl.float32)
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_j * BC + j) * K + i_k * BK,), (BK,), (0,))
+ p_gk = tl.make_block_ptr(gi + i_bh * s_k_h, (T*K,), (s_k_d,), ((i_t * BT + i_j * BC + j) * K + i_k * BK,), (BK,), (0,))
+ b_k = tl.load(p_k, boundary_check=(0,)).to(tl.float32)
+ b_gk = tl.load(p_gk, boundary_check=(0,)).to(tl.float32)
+ b_A += tl.sum(b_q * b_k[None, :] * tl.exp(b_g - b_gk[None, :]), 1)
+ b_A = tl.where(o_i > j, b_A * scale, 0.)
+ p_qi = tl.make_block_ptr(q + i_bh * s_k_h, (T * K,), (s_k_d,),
+ ((i_t * BT + i_j * BC + j) * K + i_k * BK,), (BK,), (0,))
+ b_qi = tl.load(p_qi, boundary_check=(0,))
+ A_jj = tl.sum(b_qi * b_k * b_u * scale)
+ b_A = tl.where(o_i != j, b_A, A_jj)
+ tl.store(A + o_A + j, b_A, mask=m_A)
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BC", "BK"],
+)
+@triton.jit
+def chunk_rwkv6_fwd_A_kernel_intra_sub_intra_split(
+ q,
+ k,
+ gi,
+ ge,
+ u,
+ A,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ scale,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BK: tl.constexpr,
+ NC: tl.constexpr
+):
+ i_k, i_tc, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ n_bh = tl.num_programs(2)
+ i_t, i_i = i_tc // NC, i_tc % NC
+ if i_t * BT + i_i * BC >= T:
+ return
+
+ i_j = i_i
+ i_h = i_bh % H
+ o_i = tl.arange(0, BC)
+ o_A = (i_bh + i_k * n_bh) * T * BC + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BC
+ m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_g = tl.make_block_ptr(ge + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_g = tl.load(p_g, boundary_check=(0, 1))
+    p_u = tl.make_block_ptr(u + i_h * s_k_t, (s_k_t,), (1,), (i_k * BK,), (BK,), (0,))
+ b_u = tl.load(p_u, boundary_check=(0,))
+
+ for j in range(0, min(BC, T-i_t*BT-i_i*BC)):
+ b_A = tl.zeros([BC], dtype=tl.float32)
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_j * BC + j) * K + i_k * BK,), (BK,), (0,))
+ p_gk = tl.make_block_ptr(gi + i_bh * s_k_h, (T*K,), (s_k_d,), ((i_t * BT + i_j * BC + j) * K + i_k * BK,), (BK,), (0,))
+ b_k = tl.load(p_k, boundary_check=(0,)).to(tl.float32)
+ b_gk = tl.load(p_gk, boundary_check=(0,)).to(tl.float32)
+ b_A += tl.sum(b_q * b_k[None, :] * tl.exp(b_g - b_gk[None, :]), 1)
+ b_A = tl.where(o_i > j, b_A * scale, 0.)
+ p_qi = tl.make_block_ptr(q + i_bh * s_k_h, (T * K,), (s_k_d,),
+ ((i_t * BT + i_j * BC + j) * K + i_k * BK,), (BK,), (0,))
+ b_qi = tl.load(p_qi, boundary_check=(0,))
+ A_jj = tl.sum(b_qi * b_k * b_u * scale)
+ b_A = tl.where(o_i != j, b_A, A_jj)
+ tl.store(A + o_A + j, b_A, mask=m_A)
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BC"],
+)
+@triton.jit
+def chunk_rwkv6_fwd_A_kernel_intra_sub_intra_merge(
+ A,
+ A2,
+ T: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ NK: tl.constexpr
+):
+ i_t, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ if i_t * BT + i_c * BC >= T:
+ return
+
+ n_bh = tl.num_programs(2)
+ b_A = tl.zeros([BC, BC], dtype=tl.float32)
+ for i_k in range(0, NK):
+ p_A = tl.make_block_ptr(A + (i_bh + i_k*n_bh) * T * BC, (T, BC), (BC, 1), (i_t * BT + i_c * BC, 0), (BC, BC), (1, 0))
+ b_A += tl.load(p_A, boundary_check=(0, 1))
+ p_A2 = tl.make_block_ptr(A2 + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_c * BC, i_c * BC), (BC, BC), (1, 0))
+ tl.store(p_A2, b_A.to(A2.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BK", "BV", "BT"],
+)
+@triton.jit
+def chunk_rwkv6_fwd_kernel_inter(
+ q,
+ v,
+ g,
+ h,
+ o,
+ A,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ s_h_h,
+ s_h_t,
+ s_h_d,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr
+):
+ i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_ge = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+ # [BT, BK]
+ b_g = tl.load(p_ge, boundary_check=(0, 1))
+ # [BT, BK]
+ b_qg = (b_q * tl.exp(b_g)).to(b_q.dtype)
+ # [BK, BV]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+        # works, but the reason is unclear (dkw); keep the seemingly redundant check below
+ # [BT, BV]
+ if i_k >= 0:
+ b_o += tl.dot(b_qg, b_h.to(b_qg.dtype))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BT, BT]
+ b_A = tl.load(p_A, boundary_check=(0, 1))
+ m_s = tl.arange(0, BT)[:, None] >= tl.arange(0, BT)[None, :]
+ b_A = tl.where(m_s, b_A, 0.)
+ b_o += tl.dot(b_A.to(b_v.dtype), b_v, allow_tf32=False)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BK", "NC", "BT"],
+)
+@triton.jit
+def chunk_rwkv6_bwd_kernel_intra(
+ q,
+ k,
+ gi,
+ ge,
+ dA,
+ dq,
+ dk,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ BT: tl.constexpr,
+ BC: tl.constexpr,
+ BK: tl.constexpr,
+ NC: tl.constexpr
+):
+ i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_t, i_i = i_c // NC, i_c % NC
+ if i_t * BT + i_i * BC >= T:
+ return
+
+ o_k = i_k * BK + tl.arange(0, BK)
+ o_q = i_t * BT + i_i * BC
+ m_k = o_k < K
+
+ p_ge = tl.make_block_ptr(ge + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ # [BC, BK]
+ b_ge = tl.load(p_ge, boundary_check=(0, 1))
+ b_dq = tl.zeros([BC, BK], dtype=tl.float32)
+ b_dk = tl.zeros([BC, BK], dtype=tl.float32)
+ o_i = tl.arange(0, BC)
+ m_dA = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T
+
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+
+ b_dq = tl.zeros([BC, BK], dtype=tl.float32)
+
+ if i_i > 0:
+ b_gn = tl.load(gi + i_bh * T * K + (o_q - 1) * K + o_k, mask=(m_k & (i_i > 0) & (o_q <= T)), other=0)
+ for i_j in range(0, i_i):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d),
+ (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_gk = tl.make_block_ptr(gi + i_bh * s_k_h, (T, K), (s_k_t, s_k_d),
+ (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0))
+ # [BC, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_gk = tl.load(p_gk, boundary_check=(0, 1))
+ b_kg = (b_k * tl.exp(b_gn[None, :] - b_gk))
+ # [BC, BC]
+ b_dA = tl.load(p_dA, boundary_check=(0, 1))
+ # [BC, BK]
+ b_dq += tl.dot(b_dA, b_kg)
+ b_dq *= tl.exp(b_ge - b_gn[None, :])
+
+ o_dA = i_bh * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_i * BC
+ for j in range(0, min(BC, T-i_t*BT-i_i*BC)):
+ p_kj = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i*BC+j) * K + i_k * BK,), (BK,), (0,))
+ p_gkj = tl.make_block_ptr(gi + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i*BC+j) * K + i_k * BK,), (BK,), (0,))
+ # [BC,]
+ b_dA = tl.load(dA + o_dA + j, mask=m_dA, other=0)
+ # [BK,]
+ b_kj = tl.load(p_kj, boundary_check=(0,)).to(tl.float32)
+ b_gkj = tl.load(p_gkj, boundary_check=(0,)).to(tl.float32)
+ # [BC, BK]
+ m_i = o_i[:, None] > j
+ # [BC, BK]
+ # (SY 09/17) important to not use bf16 for b_dA to have a good precision.
+ tmp = tl.exp(b_ge - b_gkj[None, :])
+ b_dq += tl.where(m_i, b_dA[:, None] * b_kj[None, :] * tmp, 0.)
+ p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ tl.debug_barrier()
+ b_dk = tl.zeros([BC, BK], dtype=tl.float32)
+ p_gk = tl.make_block_ptr(gi + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ # [BC, BK]
+ b_gk = tl.load(p_gk, boundary_check=(0, 1))
+
+ max_block_idx = min(NC, tl.cdiv(T-i_t*BT, BC))
+ if i_i < max_block_idx - 1:
+ p_gn = tl.make_block_ptr(gi + i_bh * s_k_h, (T*K,), (s_k_d,),
+ ((i_t * BT + i_i * BC + BC - 1) * K + i_k * BK,), (BK,), (0,))
+ # [BK,]
+ b_gn = tl.load(p_gn, boundary_check=(0,))
+ for i_j in range(i_i + 1, NC):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d),
+ (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_ge = tl.make_block_ptr(ge + i_bh * s_k_h, (T, K), (s_k_t, s_k_d),
+ (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0))
+ p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_j * BC, i_i * BC), (BC, BC), (1, 0))
+ # [BC, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_ge = tl.load(p_ge, boundary_check=(0, 1))
+ b_qg = b_q * tl.exp(b_ge - b_gn[None, :])
+ # [BC, BC]
+ b_dA = tl.load(p_dA, boundary_check=(0, 1))
+ # [BC, BK] fp32
+ b_dk += tl.dot(tl.trans(b_dA), b_qg, allow_tf32=False)
+ b_dk *= tl.exp(b_gn[None, :] - b_gk)
+ o_dA = i_bh * T * BT + (i_t * BT + i_i * BC) * BT + i_i * BC + tl.arange(0, BC)
+ for j in range(0, min(BC, T-i_t*BT-i_i*BC)):
+ p_qj = tl.make_block_ptr(q + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i * BC + j) * K + i_k * BK,), (BK,), (0,))
+ p_gqj = tl.make_block_ptr(ge + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i * BC + j) * K + i_k * BK,), (BK,), (0,))
+ # [BC,]
+ b_dA = tl.load(dA + o_dA + j * BT, mask=(i_t * BT + i_i * BC + j < T), other=0)
+ # [BK,]
+ b_qj = tl.load(p_qj, boundary_check=(0,)).to(tl.float32)
+ b_gqj = tl.load(p_gqj, boundary_check=(0,)).to(tl.float32)
+ # [BC, BK]
+ m_i = o_i[:, None] < j
+ b_dk += tl.where(m_i, b_dA[:, None] * b_qj[None, :] * tl.exp(b_gqj[None, :] - b_gk), 0.)
+ p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ # triton.Config({}, num_warps=1),
+ # triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BK", "BV", "BT"],
+)
+@triton.jit
+def chunk_rwkv6_bwd_kernel_inter(
+ q,
+ k,
+ v,
+ h,
+ gi,
+ ge,
+ u,
+ do,
+ dh,
+ dA,
+ dq,
+ dk,
+ dq2,
+ dk2,
+ dg,
+ du,
+ s_k_h,
+ s_k_t,
+ s_k_d,
+ s_v_h,
+ s_v_t,
+ s_v_d,
+ s_h_h,
+ s_h_t,
+ s_h_d,
+ scale,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr
+):
+ i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_h = i_bh % H
+ n_bh = tl.num_programs(2)
+
+ last_idx = min(T, i_t * BT + BT) - 1
+ p_gn = tl.make_block_ptr(gi + i_bh * s_k_h, (T * K,), (s_k_d,), (last_idx * K + i_k * BK,), (BK,), (0,))
+ b_gn = tl.load(p_gn, boundary_check=(0,))
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dgk = tl.zeros([BK,], dtype=tl.float32)
+
+ for i_v in range(tl.cdiv(V, BV)):
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * V * K, (V, K), (s_h_d, s_h_t), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * V * K, (V, K),
+ (s_h_d, s_h_t), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [BV, BK]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ b_dh = tl.load(p_dh, boundary_check=(0, 1))
+ # [BK]
+ b_dgk += tl.sum(b_h * b_dh, axis=0)
+ # [BT, BK]
+ b_dq += tl.dot(b_do, b_h.to(b_do.dtype))
+ b_dk += tl.dot(b_v, b_dh.to(b_v.dtype))
+ p_gk = tl.make_block_ptr(ge + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ b_dgk *= tl.exp(b_gn)
+ b_dq *= scale
+ b_gk = tl.load(p_gk, boundary_check=(0, 1))
+ p_gi = tl.make_block_ptr(gi + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ b_gi = tl.load(p_gi, boundary_check=(0, 1))
+ b_dq = b_dq * tl.exp(b_gk)
+ b_dk = b_dk * tl.exp(b_gn[None, :] - b_gi)
+ p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_dgk += tl.sum(b_dk * b_k, axis=0)
+
+ b_dq += tl.load(p_dq, boundary_check=(0, 1))
+ b_dk += tl.load(p_dk, boundary_check=(0, 1))
+ b_dg = b_q * b_dq - b_k * b_dk
+ b_dg = b_dg - tl.cumsum(b_dg, axis=0) + tl.sum(b_dg, axis=0)[None, :] + b_dgk[None, :] - b_q * b_dq
+
+ o_i = tl.arange(0, BT)
+ p_dA_dig = dA + i_bh * T * BT + (i_t * BT + o_i) * BT + o_i
+ b_dA_dig = tl.load(p_dA_dig, mask=(i_t * BT + o_i) < T, other=0)
+ p_u = tl.make_block_ptr(u + i_h * K, (K,), (1,), (i_k * BK,), (BK,), (0,))
+ b_u = tl.load(p_u, boundary_check=(0,))
+ # scale is already applied to b_dA_diag
+ b_dq += (b_dA_dig[:, None] * b_u[None, :] * b_k)
+ b_dk += (b_dA_dig[:, None] * b_u[None, :] * b_q)
+ b_du = tl.sum(b_dA_dig[:, None] * b_q * b_k, axis=0)
+    p_du = tl.make_block_ptr(du + (i_bh + i_t * n_bh) * K, (K,), (1,), (i_k * BK,), (BK,), (0,))
+ tl.store(p_du, b_du, boundary_check=(0,))
+
+ # Buggy due to strange triton compiler issue.
+ # m_s = tl.where(tl.arange(0, BT)[:, None] <= tl.arange(0, BT)[None, :], 1., 0.)
+ # b_dg = tl.dot(m_s, b_dg, allow_tf32=False) + b_dgk[None, :]
+ p_dg = tl.make_block_ptr(dg + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ # work around triton compiler bugs.
+ p_dq = tl.make_block_ptr(dq2 + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk2 + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), boundary_check=(0, 1))
+
+
+def chunk_rwkv6_fwd_intra_A_gated(q, k, gi, ge, u, scale, BT):
+ BC = 16
+ B, H, T, K = q.shape
+ A = q.new_empty(B, H, T, BT, dtype=torch.float32)
+ NC = triton.cdiv(BT, BC)
+ NT = triton.cdiv(T, BT)
+ grid = (triton.cdiv(T, BT), NC * NC, B * H)
+ BK = min(64, triton.next_power_of_2(K))
+ chunk_rwkv6_fwd_A_kernel_intra_sub_inter[grid](
+ q, k, gi, ge, A,
+ k.stride(1), k.stride(2), k.stride(3),
+ scale,
+ T=T, K=K, BT=BT, BC=BC, BK=BK, NC=NC
+ )
+ grid = (NT, NC, B * H)
+ # TODO: can we merge the two kernels?
+ # load the entire [BC, K] blocks into SRAM at once
+ if K <= 256:
+ chunk_rwkv6_fwd_A_kernel_intra_sub_intra[grid](
+ q, k, gi, ge, u, A,
+ k.stride(1), k.stride(2), k.stride(3),
+ scale,
+ H=H, T=T, K=K, BT=BT, BC=BC, BK=triton.next_power_of_2(K), NC=NC
+ )
+ # split then merge
+ else:
+ BK = 128
+ NK = triton.cdiv(K, BK)
+ A_intra = q.new_empty(NK, B, H, T, BC, dtype=torch.float32)
+ grid = (NK, NT * NC, B * H)
+ chunk_rwkv6_fwd_A_kernel_intra_sub_intra_split[grid](
+ q, k, gi, ge, u, A_intra,
+ k.stride(1), k.stride(2), k.stride(3),
+ scale,
+ H=H, T=T, K=K, BT=BT, BC=BC, BK=BK, NC=NC
+ )
+ grid = (NT, NC, B * H)
+ chunk_rwkv6_fwd_A_kernel_intra_sub_intra_merge[grid](
+ A_intra, A,
+ T=T, BT=BT, BC=BC, NK=NK
+ )
+ return A
+
+
+def chunk_rwkv6_fwd_o_gated_gk(q, v, g_cumsum, A, h, BT, scale):
+ B, H, T, K, V = *q.shape, v.shape[-1]
+ BV = min(32, triton.next_power_of_2(V))
+ BK = min(32, triton.next_power_of_2(K))
+ NV = triton.cdiv(V, BV)
+ NT = triton.cdiv(T, BT)
+ grid = (NV, NT, B * H)
+ o = torch.empty_like(v)
+ chunk_rwkv6_fwd_kernel_inter[grid](
+ q, v, g_cumsum, h, o, A,
+ q.stride(1), q.stride(2), q.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ h.stride(1), h.stride(2), h.stride(3),
+ scale,
+ T=T, K=K, V=V, BT=BT, BK=BK, BV=BV
+ )
+ return o
+
+
+def chunk_rwkv6_bwd_dqk_intra(q, k, g_cumsum_inclusive, g_cumsum_exclusive, dA, BT, scale):
+ B, H, T, K = q.shape
+ BC = 16
+ BK = min(64, triton.next_power_of_2(K))
+ NK = triton.cdiv(K, BK)
+ NT = triton.cdiv(T, BT)
+ NC = triton.cdiv(BT, BC)
+ dq = torch.empty_like(q, dtype=torch.float32)
+ dk = torch.empty_like(k, dtype=torch.float32)
+ grid = (NK, NT * NC, B * H)
+ chunk_rwkv6_bwd_kernel_intra[grid](
+ q, k, g_cumsum_inclusive, g_cumsum_exclusive, dA, dq, dk,
+ k.stride(1), k.stride(2), k.stride(3), scale,
+ T=T, K=K, BT=BT, BC=BC, BK=BK, NC=NC
+ )
+ return dq, dk
+
+
+def chunk_rwkv6_bwd_dqkgu(q, k, v, h, g_cumsum_inclusive, g_cumsum_exclusive, u, do, dh, dA, dq, dk, BT, scale):
+ B, H, T, K, V = *q.shape, v.shape[-1]
+ dg = torch.empty_like(g_cumsum_inclusive)
+ BK = 64
+ BV = 64
+ NK = triton.cdiv(K, BK)
+ NT = triton.cdiv(T, BT)
+ grid = (NK, NT, B * H)
+ # work around triton compiler bugs.
+ dq2 = torch.empty_like(dq)
+ dk2 = torch.empty_like(dk)
+ du = torch.empty(NT, B, H, K, dtype=torch.float32, device=u.device)
+ chunk_rwkv6_bwd_kernel_inter[grid](
+ q, k, v, h, g_cumsum_inclusive, g_cumsum_exclusive, u, do, dh, dA, dq, dk, dq2, dk2, dg, du,
+ k.stride(1), k.stride(2), k.stride(3),
+ v.stride(1), v.stride(2), v.stride(3),
+ h.stride(1), h.stride(2), h.stride(3),
+ scale, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV
+ )
+ du = du.sum([0, 1])
+ return dq2, dk2, dg, du
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BT", "BK", "BV"],
+)
+@triton.heuristics({
+ 'STORE_INITIAL_STATE_GRADIENT': lambda args: args['dh0'] is not None,
+ 'USE_FINAL_STATE_GRADIENT': lambda args: args['dht'] is not None
+})
+@triton.jit
+def chunk_rwkv6_bwd_kernel_dh(
+ q,
+ gi,
+ ge,
+ do,
+ dh,
+ dht,
+ dh0,
+ s_k_h,
+ s_k_t,
+ s_v_h,
+ s_v_t,
+ s_h_h,
+ s_h_t,
+ scale,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NT: tl.constexpr,
+ NG: tl.constexpr,
+ STORE_INITIAL_STATE_GRADIENT: tl.constexpr,
+ USE_FINAL_STATE_GRADIENT: tl.constexpr
+):
+ i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_bg = i_bh // NG
+ # [BK, BV]
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ if USE_FINAL_STATE_GRADIENT:
+ p_dht = tl.make_block_ptr(dht + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ b_dh += tl.load(p_dht, boundary_check=(0, 1)).to(tl.float32)
+
+ for i_t in range(NT - 1, -1, -1):
+ p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1))
+ last_idx = min(i_t * BT + BT, T) - 1
+ # [BK, BT]
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (1, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ # [BT, BV]
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ p_gk = tl.make_block_ptr(ge + i_bg * s_k_h, (K, T), (1, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ b_gk = tl.load(p_gk, boundary_check=(0, 1))
+ b_q = (b_q * tl.exp(b_gk) * scale).to(b_q.dtype)
+ p_gk_last = gi + i_bg * s_k_h + last_idx * K + i_k * BK + tl.arange(0, BK)
+ p_gk_last = tl.max_contiguous(tl.multiple_of(p_gk_last, BK), BK)
+ b_gk_last = tl.load(p_gk_last, mask=(i_k * BK + tl.arange(0, BK) < K), other=0.)
+ b_dh *= tl.exp(b_gk_last)[:, None]
+ b_dh += tl.dot(b_q, b_do)
+
+ if STORE_INITIAL_STATE_GRADIENT:
+ p_dh0 = tl.make_block_ptr(dh0 + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ tl.store(p_dh0, b_dh.to(p_dh0.dtype.element_ty), boundary_check=(0, 1))
+
+
+def chunk_rwkv6_bwd_dh(q, k, v, g_cumsum_inclusive, g_cumsum_exclusive, do, h0, dht, BT, scale, states_in_fp32=False):
+ HQ = q.shape[1]
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ BT = 64
+ BK = min(triton.next_power_of_2(K), 64)
+ BV = min(triton.next_power_of_2(V), 64)
+ NT, NK, NV = triton.cdiv(T, BT), triton.cdiv(K, BK), triton.cdiv(V, BV)
+ NG = HQ // H
+
+ dh = k.new_empty(B, HQ, NT * K, V, dtype=k.dtype if not states_in_fp32 else torch.float32)
+ if h0 is not None:
+ dh0 = torch.empty_like(h0, dtype=torch.float32) if h0.requires_grad else None
+ else:
+ dh0 = None
+ chunk_rwkv6_bwd_kernel_dh[(NK, NV, B * HQ)](
+ q, g_cumsum_inclusive, g_cumsum_exclusive, do, dh, dht, dh0,
+ q.stride(1), q.stride(2),
+ v.stride(1), v.stride(2),
+ dh.stride(1), dh.stride(2),
+ scale,
+ T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, NG=NG
+ )
+ return dh, dh0
+
+
+class ChunkRWKV6Function(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ def forward(ctx, q, k, v, g, u, scale, initial_state, output_final_state):
+ BT = 64
+ g_cumsum_inclusive, g_cumsum_exclusive = chunk_rwkv6_fwd_cumsum(g, BT=BT) # gi, ge for short
+ h, ht = chunk_fwd_h(
+ k=k,
+ v=v,
+ g=None,
+ gk=g_cumsum_inclusive,
+ gv=None,
+ h0=initial_state,
+ output_final_state=output_final_state,
+ states_in_fp32=False,
+ chunk_size=BT
+ )
+ A = chunk_rwkv6_fwd_intra_A_gated(q, k, g_cumsum_inclusive, g_cumsum_exclusive, u, scale, BT)
+ o = chunk_rwkv6_fwd_o_gated_gk(q, v, g_cumsum_exclusive, A, h, BT, scale)
+ ctx.save_for_backward(q, k, v, g, initial_state, A, u)
+ ctx.BT = BT
+ ctx.scale = scale
+ return o, ht
+
+ @staticmethod
+ @contiguous
+ def backward(ctx, do, dht):
+ q, k, v, g, initial_state, A, u = ctx.saved_tensors
+ BT, scale = ctx.BT, ctx.scale
+ g_cumsum_inclusive, g_cumsum_exclusive = chunk_rwkv6_fwd_cumsum(g, BT=BT) # gi, ge for short
+ h, _ = chunk_fwd_h(
+ k=k,
+ v=v,
+ g=None,
+ gk=g_cumsum_inclusive,
+ gv=None,
+ h0=initial_state,
+ output_final_state=False,
+ states_in_fp32=True,
+ chunk_size=BT
+ )
+ dh, dh0 = chunk_rwkv6_bwd_dh(
+ q=q,
+ k=k,
+ v=v,
+ g_cumsum_inclusive=g_cumsum_inclusive,
+ g_cumsum_exclusive=g_cumsum_exclusive,
+ do=do,
+ h0=initial_state,
+ dht=dht,
+ BT=BT,
+ scale=scale,
+ states_in_fp32=True
+ )
+ # dq dk in fp32
+ dA = chunk_gla_bwd_dA(v=v, do=do, scale=scale, chunk_size=BT)
+ dv = chunk_gla_bwd_dv(k=k, g=g_cumsum_inclusive, A=A, do=do, dh=dh, chunk_size=BT)
+ dq, dk = chunk_rwkv6_bwd_dqk_intra(
+ q=q,
+ k=k,
+ g_cumsum_inclusive=g_cumsum_inclusive,
+ g_cumsum_exclusive=g_cumsum_exclusive,
+ dA=dA,
+ BT=BT,
+ scale=scale
+ )
+ dq, dk, dg, du = chunk_rwkv6_bwd_dqkgu(
+ q=q,
+ k=k,
+ v=v,
+ h=h,
+ g_cumsum_inclusive=g_cumsum_inclusive,
+ g_cumsum_exclusive=g_cumsum_exclusive,
+ u=u,
+ do=do,
+ dh=dh,
+ dA=dA,
+ dq=dq,
+ dk=dk,
+ BT=BT,
+ scale=scale
+ )
+ return dq.to(q), dk.to(k), dv.to(v), dg.to(g), du.to(u), None, dh0, None
+
+
+def chunk_rwkv6(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ u: torch.Tensor,
+    scale: Optional[float] = None,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ g (torch.Tensor):
+ forget gates of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ u (torch.Tensor):
+            bonus representations of shape `[H, K]`.
+        scale (Optional[float]):
+            Scale factor for the RWKV6 attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[B, H, K, V]`. Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[B, H, K, V]`. Default: `False`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format. Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (Optional[torch.Tensor]):
+            Final state of shape `[B, H, K, V]` if `output_final_state=True` else `None`.
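+
+    Examples::
+        >>> import torch
+        >>> import torch.nn.functional as F
+        >>> from fla.ops.rwkv6 import chunk_rwkv6
+        # a minimal usage sketch: the sizes below are illustrative only and the
+        # import assumes `chunk_rwkv6` is re-exported by the package `__init__`
+        >>> B, H, T, K, V = 4, 4, 2048, 64, 64
+        >>> q = torch.randn(B, H, T, K, device='cuda')
+        >>> k = torch.randn(B, H, T, K, device='cuda')
+        >>> v = torch.randn(B, H, T, V, device='cuda')
+        >>> g = F.logsigmoid(torch.randn(B, H, T, K, device='cuda'))
+        >>> u = torch.randn(H, K, device='cuda')
+        >>> o, ht = chunk_rwkv6(q, k, v, g, u, output_final_state=True)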
+ """
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+ if not head_first:
+ q, k, v, g = map(lambda x: x.transpose(1, 2) if x is not None else None, (q, k, v, g))
+ o, final_state = ChunkRWKV6Function.apply(q, k, v, g, u, scale, initial_state, output_final_state)
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o, final_state
diff --git a/fla/ops/rwkv6/chunk_naive.py b/fla/ops/rwkv6/chunk_naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..4a2ac664f5079a20eabe9b11c19c1cff6755c658
--- /dev/null
+++ b/fla/ops/rwkv6/chunk_naive.py
@@ -0,0 +1,43 @@
+# -*- coding: utf-8 -*-
+
+import torch
+from einops import rearrange
+
+
+def naive_chunk_rwkv6(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ w: torch.Tensor,
+ u: torch.Tensor,
+ chunk_size: int = 32
+):
+ assert q.shape[-2] % chunk_size == 0
+ orig_dtype = q.dtype
+ num_chunk = q.shape[-2] // chunk_size
+ u = u.unsqueeze(0)
+
+ q, k, v, w = map(lambda x: rearrange(x, 'b h (n c) d -> b h n c d', c=chunk_size).float(), (q, k, v, w))
+
+ w_cumsum = w.cumsum(-2)
+
+ kw = k * (w_cumsum[..., -1, None, :] - w_cumsum).exp()
+ wkv = kw.transpose(-1, -2) @ v
+
+ wkv_new = torch.zeros_like(wkv)
+
+ for i in range(num_chunk - 1):
+ wkv_new[:, :, i+1] = (wkv_new[:, :, i] * w_cumsum[:, :, i, -1, :, None].exp()) + wkv[:, :, i]
+
+ o_inter = torch.einsum('b h n d p, b h n c d -> b h n c p', wkv_new, (q * (w_cumsum - w).exp()))
+
+ o_intra = torch.zeros_like(o_inter)
+ for i in range(chunk_size):
+ attn = (q[:, :, :, i, None] * k * (w_cumsum[:, :, :, i, None] - w[:, :, :, i, None] - w_cumsum).exp()).sum(-1)
+ mask = (torch.arange(0, chunk_size) < i).to(attn.device)
+ attn.masked_fill_(~mask, 0)
+ intra_inter_o = (attn.unsqueeze(-1) * v).sum(-2)
+ intra_intra_o = (q[:, :, :, i] * u.unsqueeze(2) * k[:, :, :, i]).sum(-1).unsqueeze(-1) * v[:, :, :, i]
+ o_intra[:, :, :, i] = intra_inter_o + intra_intra_o
+ o = o_inter + o_intra
+ return rearrange(o, 'b h n c d -> b h (n c) d').to(orig_dtype)
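+
+
+if __name__ == '__main__':
+    # A minimal sanity-check sketch, not part of the library API: it runs the
+    # chunked reference above against the plain step-by-step recurrence on small
+    # random inputs and prints the largest deviation. The shapes are illustrative
+    # and the import assumes `recurrent_naive.py` ships in the same package.
+    from fla.ops.rwkv6.recurrent_naive import naive_recurrent_rwkv6
+
+    torch.manual_seed(0)
+    B, H, T, K, V = 1, 2, 64, 16, 16
+    q = torch.randn(B, H, T, K)
+    k = torch.randn(B, H, T, K)
+    v = torch.randn(B, H, T, V)
+    w = torch.nn.functional.logsigmoid(torch.randn(B, H, T, K))
+    u = torch.randn(H, K)
+    # the chunked variant applies no extra scaling, so disable it in the recurrence
+    ref, _ = naive_recurrent_rwkv6(q, k, v, w, u, scale=1.)
+    out = naive_chunk_rwkv6(q, k, v, w, u, chunk_size=32)
+    print('max abs diff vs. recurrence:', (out - ref).abs().max().item())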
diff --git a/fla/ops/rwkv6/fused_recurrent.py b/fla/ops/rwkv6/fused_recurrent.py
new file mode 100644
index 0000000000000000000000000000000000000000..2e8643762401806790c5730c0c70775dd744d862
--- /dev/null
+++ b/fla/ops/rwkv6/fused_recurrent.py
@@ -0,0 +1,380 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.utils import chunk_global_cumsum
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.jit
+def fused_recurrent_rwkv6_fwd_kernel(
+ q, # query [B, H, T, K]
+ k, # key [B, H, T, K]
+ v, # value [B, H, T, V]
+ w, # log gate [B, H, T, K]
+ u, # bonus [B, H, K]
+ o, # output [B, H, T, V]
+ # initial hidden state initialization [B, H, K, V]
+ h0,
+ ht, # final hidden state [B, H, K, V]
+ s_k_h, # stride size: T * K
+ s_v_h, # stride size: T * V
+ scale, # K ** -0.5
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ USE_INITIAL_STATE: tl.constexpr, # whether to use initial state
+ STORE_FINAL_STATE: tl.constexpr, # whether to store final state
+ REVERSE: tl.constexpr, # whether to do autoregressive modeling in the reverse direction
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_h = i_bh % H
+
+ p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0)
+ p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0)
+ p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0)
+ p_o = o + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0)
+ p_w = w + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0)
+ p_u = u + i_h * K + tl.arange(0, BK) + i_k * BK
+
+ mask_bk = (i_k * BK + tl.arange(0, BK)) < K
+ mask_bv = (i_v * BV + tl.arange(0, BV)) < V
+ mask_kv = mask_bv[:, None] & mask_bk[None, :]
+
+ b_h = tl.zeros([BV, BK], dtype=tl.float32)
+ if USE_INITIAL_STATE:
+ p_h0 = h0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ b_h += tl.load(p_h0, mask=mask_kv, other=0).to(tl.float32)
+
+ b_u = tl.load(p_u, mask=mask_bk, other=0).to(tl.float32)
+ for _ in range(0, T):
+ b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32)
+ b_q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale
+ b_w = tl.load(p_w, mask=mask_bk, other=0).to(tl.float32)
+ b_w = tl.exp(b_w)
+ b_kv = b_k[None, :] * b_v[:, None]
+ b_o = (b_h + b_kv * b_u[None, :]) * b_q[None, :]
+ b_o = tl.sum(b_o, axis=1)
+ b_h = b_h * b_w[None, :]
+ b_h += b_kv
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), mask=mask_bv)
+ p_q += -K if REVERSE else K
+ p_k += -K if REVERSE else K
+ p_o += -V if REVERSE else V
+ p_v += -V if REVERSE else V
+ p_w += -K if REVERSE else K
+
+ if STORE_FINAL_STATE:
+ p_ht = ht + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), mask=mask_kv)
+
+
+# Similar to Algorithm 1 of https://arxiv.org/abs/2006.16236
+@triton.jit
+def fused_recurrent_rwkv6_bwd_kernel_dq(
+ # B: B, H: H, T: T, D: d_head
+ # NV: number of split in the V dimension. NK: number of split in the K dimension
+    k, # key [B, H, T, K]
+ v, # value [B, H, T, V]
+ w, # log gate [B, H, T, K]
+ u, # bonus [B, H, K]
+
+ do, # gradient of output [B, H, T, V]
+ dq, # gradient of query [NV, B, H, T, K]
+ dq_aux, # gradient of query_aux [NV, B, H, T, K]
+
+ # initial hidden state initialization [B, H, K, V]
+ h0,
+
+ s_k_h, # stride size: T * K
+ s_v_h, # stride size: T * V
+
+ scale, # K ** -0.5
+ B: tl.constexpr, # B
+ H: tl.constexpr, # H
+ T: tl.constexpr, # T
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ USE_INITIAL_STATE: tl.constexpr, # whether to use initial state
+ REVERSE: tl.constexpr, # whether to do autoregressive modeling in the reverse direction
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_h = i_bh % H
+ p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0)
+ p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0)
+ p_do = do + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0)
+ p_dq = dq + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0)
+ p_dq_aux = dq_aux + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0)
+ p_w = w + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0)
+ p_u = u + i_h * K + tl.arange(0, BK) + i_k * BK
+
+ mask_bk = i_k * BK + tl.arange(0, BK) < K
+ mask_bv = i_v * BV + tl.arange(0, BV) < V
+ mask_kv = mask_bv[:, None] & mask_bk[None, :]
+ b_u = tl.load(p_u, mask=mask_bk, other=0).to(tl.float32)
+ b_h = tl.zeros([BV, BK], dtype=tl.float32)
+
+ if USE_INITIAL_STATE:
+ p_h0 = h0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None])
+ b_h += tl.load(p_h0, mask=mask_kv, other=0).to(tl.float32)
+
+ for _ in range(0, T):
+ b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32)
+ b_kv = b_k[None, :] * b_v[:, None]
+ b_do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32)
+ b_w = tl.load(p_w, mask=mask_bk, other=0).to(tl.float32)
+ b_w = tl.exp(b_w)
+ h_q = b_h * b_do[:, None]
+ b_dq = tl.sum(h_q + b_kv * b_u[None, :] * b_do[:, None], axis=0)
+ b_dq *= scale
+ b_dq_aux = tl.sum(h_q, axis=0)
+ b_h = b_h * b_w[None, :]
+ b_h += b_kv
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), mask=mask_bk)
+ tl.store(p_dq_aux, b_dq_aux.to(p_dq_aux.dtype.element_ty), mask=mask_bk)
+ p_k += -K if REVERSE else K
+ p_do += -V if REVERSE else V
+ p_v += -V if REVERSE else V
+ p_w += -K if REVERSE else K
+ p_dq += -K if REVERSE else K
+ p_dq_aux += -K if REVERSE else K
+
+
+@triton.jit
+def fused_recurrent_rwkv6_bwd_kernel_dkv(
+ # B: B, H: H, T: T, D: d_head
+ # NV: number of split in the V dimension. NK: number of split in the K dimension
+ q, # query [B, H, T, K]
+    k, # key [B, H, T, K]
+ v, # value [B, H, T, V]
+ w, # log gate [B, H, T, K]
+ u, # bonus [B, H, K]
+
+ do, # gradient of output [B, H, T, V]
+ dk,
+ dk_aux,
+ dv,
+ dh0,
+
+ # initial hidden state initialization [B, H, K, V]
+ s_k_h, # stride size: T * K
+ s_v_h, # stride size: T * V
+
+ scale, # K ** -0.5
+ B: tl.constexpr, # B
+ H: tl.constexpr, # H
+ T: tl.constexpr, # T
+ K: tl.constexpr, # K
+ V: tl.constexpr, # V
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ USE_INITIAL_STATE: tl.constexpr, # whether to use initial state
+ REVERSE: tl.constexpr, # whether to do autoregressive modeling in the reverse direction
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_h = i_bh % H
+ p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0)
+ p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0)
+ p_do = do + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T - 1) * V if not REVERSE else 0)
+ p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T - 1) * V if not REVERSE else 0)
+ p_dk = dk + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0)
+ p_dk_aux = dk_aux + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0)
+ p_dv = dv + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV) + ((T - 1) * V if not REVERSE else 0)
+ p_w = w + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0)
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ mask_bk = i_k * BK + tl.arange(0, BK) < K
+ mask_bv = i_v * BV + tl.arange(0, BV) < V
+ mask_kv = mask_bk[:, None] & mask_bv[None, :]
+
+ p_u = u + i_h * K + tl.arange(0, BK) + i_k * BK
+ b_u = tl.load(p_u, mask=mask_bk, other=0).to(tl.float32)
+
+ for _ in range(T-1, -1, -1):
+ b_q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale
+ b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32)
+ b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32)
+ b_w = tl.load(p_w, mask=mask_bk, other=0).to(tl.float32)
+ b_do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32)
+ b_dkv = b_q[:, None] * b_do[None, :]
+ b_dk = tl.sum(b_dh * b_v[None, :], axis=1)
+ tl.store(p_dk_aux, b_dk.to(p_dk_aux.dtype.element_ty), mask=mask_bk)
+ b_dk += tl.sum(b_dkv * b_u[:, None] * b_v[None, :], axis=1)
+ b_dv = tl.sum((b_dh + (b_dkv * b_u[:, None])) * b_k[:, None], axis=0)
+
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), mask=mask_bk)
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), mask=mask_bv)
+ b_dh *= tl.exp(b_w)[:, None]
+ b_dh += b_dkv
+
+ p_q += K if REVERSE else -K
+ p_k += K if REVERSE else -K
+ p_v += V if REVERSE else -V
+ p_w += K if REVERSE else -K
+ p_do += V if REVERSE else -V
+ p_dk += K if REVERSE else -K
+ p_dk_aux += K if REVERSE else -K
+ p_dv += V if REVERSE else -V
+
+ if USE_INITIAL_STATE:
+ p_dh0 = dh0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :])
+ tl.store(p_dh0, b_dh.to(p_dh0.dtype.element_ty), mask=mask_kv)
+
+
+class FusedRecurrentRWKV6Function(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(ctx, r, k, v, w, u, scale=None, initial_state=None, output_final_state=False, reverse=False):
+ q = r
+ B, H, T, K, V = *q.shape, v.shape[-1]
+
+ BK, BV = min(triton.next_power_of_2(K), 32), min(triton.next_power_of_2(V), 32)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_stages = 1
+ num_warps = 1
+
+ final_state = q.new_empty(B, H, K, V) if output_final_state else None
+
+ o = q.new_empty(NK, B, H, T, V, dtype=torch.float32)
+ grid = (NV, NK, B * H)
+ fused_recurrent_rwkv6_fwd_kernel[grid](
+ q, k, v, w, u, o, initial_state, final_state,
+ k.stride(1),
+ v.stride(1),
+ scale,
+ B=B, H=H, T=T, K=K, V=V, BK=BK, BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ STORE_FINAL_STATE=final_state is not None,
+ REVERSE=reverse,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+
+ o = o.sum(0)
+ ctx.save_for_backward(q, k, v, w, u, initial_state)
+ ctx.scale = scale
+ ctx.reverse = reverse
+ return o.to(q.dtype), final_state
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dht=None):
+ q, k, v, w, u, initial_state = ctx.saved_tensors
+ B, H, T, K, V = *q.shape, v.shape[-1]
+ scale = ctx.scale
+
+ BK, BV = min(triton.next_power_of_2(K), 16), min(triton.next_power_of_2(V), 64)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+ num_stages = 1
+ num_warps = 1
+ dq = q.new_empty(NV, B, H, T, K, dtype=torch.float32)
+ dq_aux = torch.empty_like(dq)
+ grid = (NV, NK, B * H)
+
+ fused_recurrent_rwkv6_bwd_kernel_dq[grid](
+ k, v, w, u, do, dq, dq_aux, initial_state,
+ q.stride(1),
+ v.stride(1),
+ scale,
+ B=B, H=H, T=T, K=K, V=V, BK=BK, BV=BV,
+ USE_INITIAL_STATE=initial_state is not None,
+ REVERSE=ctx.reverse,
+ num_warps=num_warps,
+ num_stages=num_stages
+ )
+ dq = dq.sum(0).to(q)
+ dq_aux = dq_aux.sum(0)
+
+ BK, BV = min(triton.next_power_of_2(K), 32), min(triton.next_power_of_2(V), 32)
+ NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV)
+
+ dk = q.new_empty(NV, B, H, T, K, dtype=torch.float32)
+ dk_aux = q.new_empty(NV, B, H, T, K, dtype=torch.float32)
+ dv = q.new_empty(NK, B, H, T, V, dtype=torch.float32)
+ dh0 = initial_state.new_empty(B, H, K, V) if initial_state is not None else None
+ grid = (NV, NK, B * H)
+ fused_recurrent_rwkv6_bwd_kernel_dkv[grid](
+ q, k, v, w, u, do, dk, dk_aux, dv, dh0,
+ q.stride(1),
+ v.stride(1),
+ scale,
+ B=B, H=H, T=T, K=K, V=V, BK=BK, BV=BV,
+ num_warps=num_warps,
+ num_stages=num_stages,
+ USE_INITIAL_STATE=initial_state is not None,
+ REVERSE=ctx.reverse,
+ )
+ dk = dk.sum(0).to(k)
+ dv = dv.sum(0).to(v)
+ dk_aux = dk_aux.sum(0)
+
+ dw = (dq_aux * q * scale)[:, :, 1:] - (dk_aux * k)[:, :, 0:-1]
+ dw = torch.nn.functional.pad(dw, (0, 0, 0, 1, 0, 0, 0, 0), value=0)
+ dw = chunk_global_cumsum(dw, reverse=True).to(w)
+
+ du = ((do * v).sum(-1)[..., None] * k * q * scale).sum([0, -2]).to(u)
+ return dq, dk, dv, dw, du, None, dh0, None, None
+
+
+def fused_recurrent_rwkv6(
+ r: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ w: torch.Tensor,
+ u: torch.Tensor,
+ scale: float = -1,
+ initial_state: torch.Tensor = None,
+ output_final_state: bool = False,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ r (torch.Tensor):
+            receptance of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`. Alias: q, the query in linear attention.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ w (torch.Tensor):
+ data-dependent decays of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]` in log space! Alias: g.
+ u (torch.Tensor):
+            bonus of shape `[H, K]`.
+        scale (float):
+            Scale factor for the RWKV6 attention scores.
+            If set to `-1` (the default), it falls back to `1 / sqrt(K)`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[B, H, K, V]`. Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[B, H, K, V]`. Default: `False`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format. Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (Optional[torch.Tensor]):
+            Final state of shape `[B, H, K, V]` if `output_final_state=True` else `None`.
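+
+    Examples::
+        >>> import torch
+        >>> import torch.nn.functional as F
+        >>> from fla.ops.rwkv6 import fused_recurrent_rwkv6
+        # a minimal usage sketch: the sizes below are illustrative only and the
+        # import assumes `fused_recurrent_rwkv6` is re-exported by the package `__init__`
+        >>> B, H, T, K, V = 4, 4, 1024, 64, 64
+        >>> r = torch.randn(B, H, T, K, device='cuda')
+        >>> k = torch.randn(B, H, T, K, device='cuda')
+        >>> v = torch.randn(B, H, T, V, device='cuda')
+        >>> w = F.logsigmoid(torch.randn(B, H, T, K, device='cuda'))
+        >>> u = torch.randn(H, K, device='cuda')
+        >>> o, ht = fused_recurrent_rwkv6(r, k, v, w, u, output_final_state=True)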
+ """
+ if scale == -1:
+ scale = r.shape[-1] ** -0.5
+ if not head_first:
+ r, k, v, w = map(lambda x: x.transpose(1, 2), (r, k, v, w))
+ o, final_state = FusedRecurrentRWKV6Function.apply(r, k, v, w, u, scale, initial_state, output_final_state)
+ if not head_first:
+ o = o.transpose(1, 2)
+ return o, final_state
diff --git a/fla/ops/rwkv6/recurrent_naive.py b/fla/ops/rwkv6/recurrent_naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..ba2268759b5d4ce7f9be1be1f9c2e1a2f2a8e6c3
--- /dev/null
+++ b/fla/ops/rwkv6/recurrent_naive.py
@@ -0,0 +1,103 @@
+# -*- coding: utf-8 -*-
+
+from typing import Optional
+
+import torch
+
+
+def naive_recurrent_rwkv6(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ w: torch.Tensor,
+ u: torch.Tensor,
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: Optional[bool] = False
+):
+ orig_dtype = q.dtype
+ B, H, T, K, V = *q.shape, v.shape[-1]
+ q, k, v, w, u = map(lambda x: x.float(), (q, k, v, w, u))
+ h = torch.zeros(B, H, K, V, dtype=torch.float32, device=q.device)
+ o = torch.zeros_like(v)
+
+ if scale is None:
+ scale = K ** -0.5
+
+ if initial_state is not None:
+ h += initial_state
+
+ for i in range(T):
+ q_i = q[:, :, i, :] * scale
+ k_i = k[:, :, i]
+ v_i = v[:, :, i, :]
+ w_i = w[:, :, i].exp()
+ kv_i = k_i[..., None] * v_i[..., None, :]
+ o_i = (h + u[None, ..., None] * kv_i) * q_i[..., None]
+ o[:, :, i] = o_i.sum(-2)
+ h = h * w_i[..., None] + kv_i
+ ht = h if output_final_state else None
+ return o.to(orig_dtype), ht
+
+
+@torch.no_grad
+@torch.jit.script
+def naive_recurrent_rwkv6_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ w: torch.Tensor,
+ u: torch.Tensor,
+ o: torch.Tensor,
+ do: torch.Tensor,
+ initial_state: Optional[torch.Tensor] = None
+):
+    # explicit casts (rather than a generator expression) keep this function TorchScript-compatible
+    q, k, v, w, u, o, do = q.float(), k.float(), v.float(), w.float(), u.float(), o.float(), do.float()
+ B, H, T, K, V = q.shape[0], q.shape[1], q.shape[2], q.shape[3], v.shape[-1]
+ h = torch.zeros(B, H, K, V, dtype=torch.float32, device=q.device)
+ dq = torch.zeros_like(q)
+ dq_aux = torch.zeros_like(q)
+
+ if initial_state is not None:
+ h += initial_state
+
+ for i in range(T):
+ k_i = k[:, :, i]
+ v_i = v[:, :, i]
+ w_i = w[:, :, i].exp()
+ kv_i = k_i[..., None] * v_i[..., None, :]
+ h_i = (h + u[None, ..., None] * kv_i)
+ dq_i = (do[:, :, i, None, :] * h_i).sum(-1)
+ dq_aux_i = (do[:, :, i, None, :] * h).sum(-1)
+ dq[:, :, i] = dq_i
+ dq_aux[:, :, i] = dq_aux_i
+ h = h * w_i[..., None] + kv_i
+
+ du = torch.zeros_like(u)
+ dh = torch.zeros_like(h)
+ dk = torch.zeros_like(k)
+ dk_aux = torch.zeros_like(k)
+ dv = torch.zeros_like(v)
+
+ for i in range(T - 1, -1, -1):
+ d_kv_i = do[:, :, i, None, :] * q[:, :, i, :, None]
+ k_i = k[:, :, i]
+ v_i = v[:, :, i]
+ du_i = (d_kv_i * k_i[..., None] * v_i[..., None, :]).sum(-1)
+ du += du_i.sum(0)
+ dk_i = (dh * v_i[..., None, :]).sum(-1)
+ dk_aux[:, :, i] = dk_i
+ dk_i += (d_kv_i * u[None, ..., None] * v_i[..., None, :]).sum(-1)
+ dv_i = (d_kv_i * u[None, ..., None] * k_i[..., None]).sum(-2)
+ dv_i += (dh * k_i[..., None]).sum(-2)
+
+ dk[:, :, i] = dk_i
+ dv[:, :, i] = dv_i
+ dh = dh * w[:, :, i, :, None].exp() + d_kv_i
+
+ # dw = q * dq_aux - k * dk_aux
+ dw = torch.zeros_like(w)
+ for i in range(T - 2, -1, -1):
+ dw[:, :, i] = dw[:, :, i+1] + dq_aux[:, :, i+1] * q[:, :, i+1] - dk_aux[:, :, i] * k[:, :, i]
+
+ return dq, dk, dv, dw, du, dh
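+
+
+if __name__ == '__main__':
+    # A minimal gradient sanity-check sketch, not part of the library API: it
+    # compares the hand-written backward above with autograd applied to the
+    # naive forward on tiny random inputs and prints the largest deviations.
+    # The shapes are illustrative; `scale=1.` is used because the hand-written
+    # backward does not account for an extra scaling factor.
+    torch.manual_seed(0)
+    B, H, T, K, V = 1, 2, 32, 8, 8
+    q = torch.randn(B, H, T, K, requires_grad=True)
+    k = torch.randn(B, H, T, K, requires_grad=True)
+    v = torch.randn(B, H, T, V, requires_grad=True)
+    w = torch.nn.functional.logsigmoid(torch.randn(B, H, T, K)).requires_grad_()
+    u = torch.randn(H, K, requires_grad=True)
+    o, _ = naive_recurrent_rwkv6(q, k, v, w, u, scale=1.)
+    do = torch.randn_like(o)
+    o.backward(do)
+    dq, dk, dv, dw, du, _ = naive_recurrent_rwkv6_bwd(q, k, v, w, u, o, do)
+    for name, a, b in [('dq', dq, q.grad), ('dk', dk, k.grad), ('dv', dv, v.grad),
+                       ('dw', dw, w.grad), ('du', du, u.grad)]:
+        print(name, 'max abs diff:', (a - b).abs().max().item())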
diff --git a/fla/ops/scan/__init__.py b/fla/ops/scan/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..692bfc522dda32fcb629308b0c0341d70db63395
--- /dev/null
+++ b/fla/ops/scan/__init__.py
@@ -0,0 +1,9 @@
+# -*- coding: utf-8 -*-
+
+from .parallel import parallel_scan
+from .naive import naive_recurrent_scan
+
+__all__ = [
+ 'parallel_scan',
+ 'naive_recurrent_scan'
+]
diff --git a/fla/ops/scan/fused_recurrent.py b/fla/ops/scan/fused_recurrent.py
new file mode 100644
index 0000000000000000000000000000000000000000..47ffe9a6f77d50eaa55d4633e427c13f69e14877
--- /dev/null
+++ b/fla/ops/scan/fused_recurrent.py
@@ -0,0 +1,565 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.common.fused_recurrent import (fused_recurrent_bwd_kernel,
+ fused_recurrent_fwd_kernel)
+from fla.ops.utils import chunk_global_cumsum
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.jit
+def fused_recurrent_gsa_inference_kernel(
+ q,
+ k,
+ v,
+ s,
+ g,
+ o,
+ hk0,
+ hv0,
+ hkt,
+ hvt,
+ scale,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ M: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NG: tl.constexpr
+):
+ i_bh = tl.program_id(0)
+ i_bg = i_bh // NG
+
+ b_s = tl.load(s + i_bg * M + tl.arange(0, M)).to(tl.float32)
+ b_g = tl.load(g + i_bg * M + tl.arange(0, M)).to(tl.float32)
+ b_g = tl.exp(b_g)
+
+ b_ok = tl.zeros([M], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ o_k = i_k * BK + tl.arange(0, BK)
+
+ p_hk0 = hk0 + i_bg * K * M + (o_k[None, :]) * M + tl.arange(0, M)[:, None]
+ # [BK,]
+ mask_k = o_k < K
+ # [M, BK]
+ mask_hk = (tl.arange(0, M) < M)[:, None] & mask_k[None, :]
+ # [M, BK]
+ b_hk = tl.load(p_hk0, mask=mask_hk, other=0.).to(tl.float32)
+ # [BK,]
+ b_q = tl.load(q + i_bh * K + o_k, mask=mask_k, other=0.).to(tl.float32) * scale
+ b_k = tl.load(k + i_bg * K + o_k, mask=mask_k, other=0.).to(tl.float32)
+ b_hk = b_hk * b_g[:, None] + b_k[None, :] * b_s[:, None]
+ b_ok += tl.sum(b_hk * b_q[None, :], axis=1)
+
+ if i_bh % NG == 0:
+ p_hkt = hkt + i_bg * K * M + o_k[None, :] * M + tl.arange(0, M)[:, None]
+ tl.store(p_hkt, b_hk.to(p_hkt.dtype.element_ty), mask=mask_hk)
+
+ b_qv = tl.softmax(b_ok)
+ for i_v in range(tl.cdiv(V, BV)):
+ o_v = i_v * BV + tl.arange(0, BV)
+
+ p_hv0 = hv0 + i_bg * M * V + tl.arange(0, M)[None, :] * V + o_v[:, None]
+ # [BV,]
+ mask_v = o_v < V
+ # [BV, M]
+ mask_hv = mask_v[:, None] & (tl.arange(0, M) < M)[None, :]
+ # [BV, M]
+ b_hv = tl.load(p_hv0, mask=mask_hv, other=0).to(tl.float32)
+ # [BV,]
+ b_v = tl.load(v + i_bg * V + o_v, mask=mask_v, other=0).to(tl.float32)
+ b_hv = b_hv * b_g[None, :] + b_s[None, :] * b_v[:, None]
+ b_ov = tl.sum(b_hv * b_qv[None, :], axis=1)
+
+ tl.store(o + i_bh * V + o_v, b_ov.to(o.dtype.element_ty), mask=mask_v)
+
+ if i_bh % NG == 0:
+ p_hvt = hvt + i_bg * M * V + tl.arange(0, M)[None, :] * V + o_v[:, None]
+ tl.store(p_hvt, b_hv.to(p_hvt.dtype.element_ty), mask=mask_hv)
+
+
+def fused_recurrent_gsa_inference(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ initial_state: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
+ output_final_state: bool = False,
+ scale: float = 1.,
+ head_first: bool = True
+) -> torch.Tensor:
+ if head_first:
+ B, H, T, K, V, M = *k.shape, v.shape[-1], s.shape[-1]
+ else:
+ B, T, H, K, V, M = *k.shape, v.shape[-1], s.shape[-1]
+ HQ = q.shape[1] if head_first else q.shape[2]
+ BK, BV = min(triton.next_power_of_2(K), 64), min(triton.next_power_of_2(V), 64)
+ NG = HQ // H
+
+ hk0, hv0 = None, None
+ if initial_state is not None:
+ hk0, hv0 = initial_state
+ hkt, hvt = None, None
+ if output_final_state:
+ if NG == 1:
+ hkt, hvt = hk0, hv0
+ else:
+ hkt, hvt = q.new_empty(B, H, K, M, dtype=torch.float), q.new_empty(B, H, M, V, dtype=torch.float)
+
+ o = v.new_empty(B, HQ, T, V) if head_first else v.new_empty(B, T, HQ, V)
+ grid = (B * HQ,)
+ fused_recurrent_gsa_inference_kernel[grid](
+ q,
+ k,
+ v,
+ s,
+ g,
+ o,
+ hk0,
+ hv0,
+ hkt,
+ hvt,
+ scale=scale,
+ K=K,
+ V=V,
+ M=M,
+ BK=BK,
+ BV=BV,
+ NG=NG
+ )
+ return o, (hkt, hvt)
+
+
+def fused_recurrent_gsa_fwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ initial_state: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
+ output_final_state: bool = False,
+ scale: float = 1.,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, Tuple[torch.Tensor]]:
+ if head_first:
+ B, H, T, K, V, M = *k.shape, v.shape[-1], s.shape[-1]
+ else:
+ B, T, H, K, V, M = *k.shape, v.shape[-1], s.shape[-1]
+ N = B if offsets is None else len(offsets) - 1
+ HQ = q.shape[1] if head_first else q.shape[2]
+ if HQ != H:
+ raise ValueError("GQA not supported yet.")
+
+ BK, BV, BM = min(triton.next_power_of_2(K), 64), min(triton.next_power_of_2(V), 64), min(M, 64)
+ NK, NV, NM = triton.cdiv(K, BK), triton.cdiv(V, BV), triton.cdiv(M, BM)
+
+ hk0, hv0 = None, None
+ if initial_state is not None:
+ hk0, hv0 = initial_state
+ hkt, hvt = None, None
+ if output_final_state:
+ hkt, hvt = q.new_empty(N, H, K, M, dtype=torch.float), q.new_empty(N, H, M, V, dtype=torch.float)
+
+ ok = q.new_empty(NK, *s.shape, dtype=torch.float)
+ gk, gv = None, g
+ grid = (NM, NK, N * H)
+ fused_recurrent_fwd_kernel[grid](
+ q=q,
+ k=k,
+ v=s,
+ g=None,
+ gk=gk,
+ gv=gv,
+ o=ok,
+ h0=hk0,
+ ht=hkt,
+ offsets=offsets,
+ scale=scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ V=M,
+ BK=BK,
+ BV=BM,
+ USE_G=False,
+ USE_GK=False,
+ USE_GV=True,
+ REVERSE=reverse,
+ HEAD_FIRST=head_first
+ )
+ ok = ok.sum(0)
+
+ qv = ok.softmax(-1, dtype=torch.float)
+ ov = q.new_empty(NM, *v.shape, dtype=torch.float)
+ gk, gv = g, None
+ grid = (NV, NM, N * H)
+ fused_recurrent_fwd_kernel[grid](
+ q=qv,
+ k=s,
+ v=v,
+ g=None,
+ gk=gk,
+ gv=gv,
+ o=ov,
+ h0=hv0,
+ ht=hvt,
+ offsets=offsets,
+ scale=1.,
+ B=B,
+ T=T,
+ H=H,
+ K=M,
+ V=V,
+ BK=BM,
+ BV=BV,
+ USE_G=False,
+ USE_GK=True,
+ USE_GV=False,
+ REVERSE=reverse,
+ HEAD_FIRST=head_first
+ )
+ ov = ov.sum(0)
+ return ok, hkt, qv, ov, hvt
+
+
+def fused_recurrent_gsa_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ qv: torch.Tensor,
+ hk0: Optional[torch.Tensor] = None,
+ hv0: Optional[torch.Tensor] = None,
+ ok: Optional[torch.Tensor] = None,
+ do: Optional[torch.Tensor] = None,
+ dhkt: Optional[torch.Tensor] = None,
+ dhvt: Optional[torch.Tensor] = None,
+ scale: float = 1.,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor]:
+ if head_first:
+ B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1]
+ else:
+ B, T, H, K, V, M = *q.shape, v.shape[-1], s.shape[-1]
+ N = B if offsets is None else len(offsets) - 1
+
+ BK, BV, BM = min(K, 64), min(V, 64), min(M, 64)
+ NK, NV, NM = triton.cdiv(K, BK), triton.cdiv(V, BV), triton.cdiv(M, BM)
+
+ if head_first:
+ dqv = q.new_empty(NV, B, H, T, M, dtype=torch.float)
+ dsv = q.new_empty(NV, B, H, T, M, dtype=torch.float)
+ dv = q.new_empty(NM, B, H, T, V, dtype=torch.float)
+ else:
+ dqv = q.new_empty(NV, B, T, H, M, dtype=torch.float)
+ dsv = q.new_empty(NV, B, T, H, M, dtype=torch.float)
+ dv = q.new_empty(NM, B, T, H, V, dtype=torch.float)
+    dhk0 = torch.empty_like(hk0) if hk0 is not None else None
+    dhv0 = torch.empty_like(hv0) if hv0 is not None else None
+
+ gk, gv = g, None
+ grid = (NV, NM, N * H)
+ fused_recurrent_bwd_kernel[grid](
+ q=qv,
+ k=s,
+ v=v,
+ g=None,
+ gk=gk,
+ gv=gv,
+ h0=hv0,
+ do=do,
+ dq=dqv,
+ dk=dsv,
+ dv=dv,
+ dht=dhvt,
+ dh0=dhv0,
+ offsets=offsets,
+ scale=1.,
+ B=B,
+ T=T,
+ H=H,
+ K=M,
+ V=V,
+ BK=BM,
+ BV=BV,
+ USE_G=False,
+ USE_GK=True,
+ USE_GV=False,
+ REVERSE=reverse,
+ HEAD_FIRST=head_first
+ )
+ dqv = dqv.sum(0)
+ dsv = dsv.sum(0)
+ dv = dv.sum(0)
+ dgk = chunk_global_cumsum(dqv * qv.float() - dsv * s.float(),
+ reverse=not reverse,
+ offsets=offsets,
+ head_first=head_first)
+
+ dok = qv * (dqv - (qv * dqv).sum(-1, True))
+ if head_first:
+ dq = q.new_empty(NM, B, H, T, K, dtype=torch.float)
+ dk = q.new_empty(NM, B, H, T, K, dtype=torch.float)
+ dsk = q.new_empty(NK, B, H, T, M, dtype=torch.float)
+ else:
+ dq = q.new_empty(NM, B, T, H, K, dtype=torch.float)
+ dk = q.new_empty(NM, B, T, H, K, dtype=torch.float)
+ dsk = q.new_empty(NK, B, T, H, M, dtype=torch.float)
+ gk, gv = None, g
+ grid = (NM, NK, N * H)
+ fused_recurrent_bwd_kernel[grid](
+ q=q,
+ k=k,
+ v=s,
+ g=None,
+ gk=gk,
+ gv=gv,
+ h0=hk0,
+ do=dok,
+ dq=dq,
+ dk=dk,
+ dv=dsk,
+ dht=dhkt,
+ dh0=dhk0,
+ offsets=offsets,
+ scale=scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ V=M,
+ BK=BK,
+ BV=BM,
+ USE_G=False,
+ USE_GK=False,
+ USE_GV=True,
+ REVERSE=reverse,
+ HEAD_FIRST=head_first
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dsk = dsk.sum(0)
+
+ dgv = chunk_global_cumsum(dok.float() * ok.float() - dsk * s.float(),
+ reverse=not reverse,
+ offsets=offsets,
+ head_first=head_first)
+
+ ds = dsk.add_(dsv)
+ dg = dgk.add_(dgv)
+
+ return dq, dk, dv, ds, dg, dhk0, dhv0
+
+
+class FusedRecurrentGSAFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(
+ ctx,
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ scale: Optional[float] = None,
+ hk0: Optional[torch.Tensor] = None,
+ hv0: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+ ) -> Tuple[torch.Tensor, Tuple[torch.Tensor]]:
+ T = q.shape[2] if head_first else q.shape[1]
+ if T == 1 and not q.requires_grad:
+ o, (hkt, hvt) = fused_recurrent_gsa_inference(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ g=g,
+ initial_state=(hk0, hv0),
+ output_final_state=output_final_state,
+ scale=scale,
+ head_first=head_first
+ )
+ return o, (hkt, hvt)
+ ok, hkt, qv, ov, hvt = fused_recurrent_gsa_fwd(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ g=g,
+ initial_state=(hk0, hv0),
+ output_final_state=output_final_state,
+ scale=scale,
+ reverse=reverse,
+ offsets=offsets,
+ head_first=head_first
+ )
+ ctx.save_for_backward(q, k, v, s, g, qv, hk0, hv0, ok)
+ ctx.scale = scale
+ ctx.reverse = reverse
+ ctx.offsets = offsets
+ ctx.head_first = head_first
+ return ov.to(q.dtype), hkt, hvt
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dhkt=None, dhvt=None):
+ q, k, v, s, g, qv, hk0, hv0, ok = ctx.saved_tensors
+ scale = ctx.scale
+ reverse = ctx.reverse
+ offsets = ctx.offsets
+ head_first = ctx.head_first
+
+ # not supported yet.
+ if dhkt is not None or dhvt is not None:
+ if g is not None:
+ assert g.requires_grad is False, "Cannot load final state gradient and use gates at the same time"
+ dq, dk, dv, ds, dg, dhk0, dhv0 = fused_recurrent_gsa_bwd(
+ q=q,
+ k=k,
+ v=v,
+ s=s,
+ g=g,
+ qv=qv,
+ hk0=hk0,
+ hv0=hv0,
+ ok=ok,
+ do=do,
+ dhkt=dhkt,
+ dhvt=dhvt,
+ scale=scale,
+ reverse=reverse,
+ offsets=offsets,
+ head_first=head_first
+ )
+ return dq.to(q), dk.to(k), dv.to(v), ds.to(s), dg.to(g), None, dhk0, dhv0, None, None, None, None
+
+
+def fused_recurrent_gsa(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: Optional[torch.Tensor] = None,
+    scale: Optional[float] = None,
+ initial_state: Optional[Tuple[torch.Tensor]] = None,
+ output_final_state: Optional[bool] = False,
+ reverse: Optional[bool] = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ s (torch.Tensor):
+ slot representations of shape `[B, H, T, M]` if `head_first=True` else `[B, T, H, M]`.
+ g (torch.Tensor):
+ Forget gates of shape `[B, H, T, M]` applied to keys.
+        scale (Optional[float]):
+ Scale factor for the attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[Tuple[torch.Tensor]]):
+ Initial state tuple having tensors of shape `[N, H, K, M]` and `[N, H, M, V]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+            Whether to output the final state of shape `[N, H, K, M]` and `[N, H, M, V]`.
+ Default: `False`.
+ reverse (Optional[bool]):
+ If `True`, process the state passing in reverse order. Default: `False`.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format, which is not supported for variable-length inputs.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (Tuple[torch.Tensor]):
+ Final state tuple having tensors of shape `[N, H, K, M]` and `[N, H, M, V]`.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+ >>> from fla.ops.gsa import fused_recurrent_gsa
+ # inputs with equal lengths
+ >>> B, T, H, K, V, M = 4, 2048, 4, 512, 512, 64
+ >>> q = torch.randn(B, T, H, K, device='cuda')
+ >>> k = torch.randn(B, T, H, K, device='cuda')
+ >>> v = torch.randn(B, T, H, V, device='cuda')
+ >>> s = torch.randn(B, T, H, M, device='cuda')
+ >>> g = F.logsigmoid(torch.randn(B, T, H, M, device='cuda'))
+ >>> h0 = (torch.randn(B, H, K, M, device='cuda'), torch.randn(B, H, M, V, device='cuda'))
+ >>> o, (hk, hv) = fused_recurrent_gsa(q, k, v, s, g,
+ initial_state=h0,
+ output_final_state=True,
+ head_first=False)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+ >>> q, k, v, s, g = map(lambda x: rearrange(x, 'b t h d -> 1 (b t) h d'), (q, k, v, s, g))
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, (hk_var, hv_var) = fused_recurrent_gsa(q, k, v, s, g,
+ initial_state=h0,
+ output_final_state=True,
+ offsets=offsets,
+ head_first=False)
+ >>> assert o.allclose(o_var.view(o.shape))
+ >>> assert hk.allclose(hk_var)
+ >>> assert hv.allclose(hv_var)
+ """
+ if offsets is not None:
+ if q.shape[0] != 1:
+ raise ValueError(f"The batch size is expected to be 1 rather than {q.shape[0]} when using `offsets`."
+ f"Please flatten variable-length inputs before processing.")
+ if head_first:
+ raise RuntimeError("Sequences with variable lengths are not supported for head-first mode")
+ if initial_state is not None and initial_state[0].shape[0] != len(offsets) - 1:
+ raise ValueError(f"The number of initial states is expected to be equal to the number of input sequences, "
+ f"i.e., {len(offsets) - 1} rather than {initial_state[0].shape[0]}.")
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ if initial_state is None:
+ initial_state = (None, None)
+ o, *final_state = FusedRecurrentGSAFunction.apply(
+ q,
+ k,
+ v,
+ s,
+ g,
+ scale,
+ *initial_state,
+ output_final_state,
+ reverse,
+ offsets,
+ head_first
+ )
+ return o, final_state
diff --git a/fla/ops/scan/naive.py b/fla/ops/scan/naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..11a1dbc49e93148faca79c639539e1e988151c09
--- /dev/null
+++ b/fla/ops/scan/naive.py
@@ -0,0 +1,62 @@
+# -*- coding: utf-8 -*-
+
+from typing import Optional, Tuple
+
+import torch
+from einops import repeat
+
+
+def naive_recurrent_scan(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ window_size: int,
+ alibi: torch.Tensor,
+ mask: torch.Tensor,
+    scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: Optional[bool] = False,
+ head_first: Optional[bool] = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ B, H, T, C, W, S = *q.shape, window_size, g.shape[-1]
+ Tk = k.shape[2]
+
+ if scale is None:
+ scale = C ** -0.5
+
+ sg = torch.einsum("bhts, bhtc -> bhtsc", g, s) # (B, H, T, S, C)
+ gi = 1 - g # (B, H, T, S)
+ prev_state = initial_state if initial_state is not None else torch.zeros((B, H, S, C), device=q.device, dtype=q.dtype)
+ outs = []
+
+ for t in range(T): # this will only loop more than once in the first prefill pass
+ prev_state = torch.einsum("bhs, bhsc -> bhsc", gi[:, :, t], prev_state) # (B, H, S, C)
+ state = prev_state + sg[:, :, t] # (B, H, S, C)
+
+ if T == Tk: # first prefill pass
+ k_window = k[:, :, max(0, t - W):t] # (B, H, W, C)
+ v_window = v[:, :, max(0, t - W):t]
+ else: # subsequent passes
+ k_window = k
+ v_window = v
+ Tw = k_window.shape[-2]
+ # if the window crop is less than W, pad with zeros on the left
+ if Tw < W:
+ k_window = torch.cat((torch.zeros((B, H, W - Tw, C), device=k.device, dtype=k.dtype), k_window), dim=2)
+ v_window = torch.cat((torch.zeros((B, H, W - Tw, C), device=v.device, dtype=v.dtype), v_window), dim=2)
+ all_keys = torch.cat((state, k_window), dim=2) # (B, H, S, C) + (B, H, W, C) -> (B, H, S+W, C)
+ all_values = torch.cat((state, v_window), dim=2) # (B, H, S, C) + (B, H, W, C) -> (B, H, S+W, C)
+ scores = torch.einsum("bhc, bhxc -> bhx", q[:, :, 0], all_keys) * scale # (B, H, C) @ (B, H, S+W, C) -> (B, H, S+W)
+ scores += alibi[:, Tw] # (B, H, S+W)
+ scores = scores.masked_fill(mask[Tw] == 0, float("-inf"))
+ scores = torch.softmax(scores, dim=-1)
+ out = torch.einsum("bhx, bhxc -> bhc", scores, all_values)
+ outs.append(out)
+
+ prev_state = state
+ final_state = prev_state
+ outs = torch.stack(outs, dim=2)
+
+ return outs, final_state
diff --git a/fla/ops/scan/parallel.py b/fla/ops/scan/parallel.py
new file mode 100644
index 0000000000000000000000000000000000000000..f956384e8ec0064d547b30387dff275a6663d82d
--- /dev/null
+++ b/fla/ops/scan/parallel.py
@@ -0,0 +1,1086 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import math
+import torch
+import triton
+import triton.language as tl
+
+# triton kernel
+# @triton.autotune(
+# configs=[
+# triton.Config({'BLOCK_SIZE_S': 16, 'BLOCK_SIZE_W': 16}, num_warps=2),
+# # triton.Config({'BLOCK_SIZE_S': 16, 'BLOCK_SIZE_W': 16}, num_warps=4),
+# # triton.Config({'BLOCK_SIZE_S': 16, 'BLOCK_SIZE_W': 16}, num_warps=8),
+# # triton.Config({'BLOCK_SIZE_S': 32, 'BLOCK_SIZE_W': 32}, num_warps=2),
+# # triton.Config({'BLOCK_SIZE_S': 32, 'BLOCK_SIZE_W': 32}, num_warps=4),
+# # triton.Config({'BLOCK_SIZE_S': 32, 'BLOCK_SIZE_W': 32}, num_warps=8),
+# # triton.Config({'BLOCK_SIZE_S': 64, 'BLOCK_SIZE_W': 64}, num_warps=2),
+# # triton.Config({'BLOCK_SIZE_S': 64, 'BLOCK_SIZE_W': 64}, num_warps=4),
+# # triton.Config({'BLOCK_SIZE_S': 64, 'BLOCK_SIZE_W': 64}, num_warps=8),
+# ],
+# key=[]
+# )
+@triton.autotune(
+ configs=[
+ triton.Config({'BLOCK_SIZE_S': 8, 'BLOCK_SIZE_W': 8}, num_warps=2),
+ triton.Config({'BLOCK_SIZE_S': 8, 'BLOCK_SIZE_W': 8}, num_warps=4),
+ triton.Config({'BLOCK_SIZE_S': 8, 'BLOCK_SIZE_W': 8}, num_warps=8),
+ triton.Config({'BLOCK_SIZE_S': 16, 'BLOCK_SIZE_W': 16}, num_warps=2),
+ triton.Config({'BLOCK_SIZE_S': 16, 'BLOCK_SIZE_W': 16}, num_warps=4),
+ triton.Config({'BLOCK_SIZE_S': 16, 'BLOCK_SIZE_W': 16}, num_warps=8),
+ # triton.Config({'BLOCK_SIZE_S': 32, 'BLOCK_SIZE_W': 32}, num_warps=2),
+ # triton.Config({'BLOCK_SIZE_S': 32, 'BLOCK_SIZE_W': 32}, num_warps=4),
+ # triton.Config({'BLOCK_SIZE_S': 32, 'BLOCK_SIZE_W': 32}, num_warps=8),
+ # triton.Config({'BLOCK_SIZE_S': 64, 'BLOCK_SIZE_W': 64}, num_warps=2),
+ # triton.Config({'BLOCK_SIZE_S': 64, 'BLOCK_SIZE_W': 64}, num_warps=4),
+ # triton.Config({'BLOCK_SIZE_S': 64, 'BLOCK_SIZE_W': 64}, num_warps=8),
+ ],
+ key=[]
+)
+@triton.jit
+def afak_fwd_kernel(
+ q_ptr, k_ptr, states_ptr, y_ptr,
+    B: tl.constexpr, T: tl.constexpr, S: tl.constexpr, C: tl.constexpr, W: tl.constexpr,
+ BLOCK_SIZE_S: tl.constexpr,
+ BLOCK_SIZE_W: tl.constexpr,
+):
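+    # Each program computes one block of attention logits y[b_id, t_id, :]:
+    # the first S entries are dot products of the query at step t_id with its
+    # S state vectors, the last W entries are dot products with the keys in the
+    # sliding window ending at t_id (out-of-window keys are loaded as zeros).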
+ # Use multiple program IDs for better parallelization
+ b_id = tl.program_id(axis=0)
+ t_id = tl.program_id(axis=1)
+ sw_block_id = tl.program_id(axis=2)
+    num_s_blocks = tl.cdiv(S, BLOCK_SIZE_S)
+    num_w_blocks = tl.cdiv(W, BLOCK_SIZE_W)
+ SW = S + W
+
+ # Compute base pointers
+ q_base = q_ptr + b_id * T * C
+ k_base = k_ptr + b_id * T * C
+ states_base = states_ptr + b_id * T * S * C
+ y_base = y_ptr + b_id * T * W
+
+ # Fetch the query at [b_id, t_id, :]
+ q_block_ptr = tl.make_block_ptr(
+ base=q_ptr,
+ shape=(B, T, C),
+ strides=(T * C, C, 1),
+ offsets=(b_id, t_id, 0),
+ block_shape=(1, 1, C),
+ order=(0, 1, 2),
+ )
+ q = tl.load(q_block_ptr) # (1, 1, C)
+
+ if sw_block_id < num_s_blocks:
+ s_first_id = sw_block_id * BLOCK_SIZE_S
+ # Fetch the states at [b_id, t_id, s_first_id:s_first_id+BLOCK_SIZE_S, :]
+ s_block_ptr = tl.make_block_ptr(
+ base=states_ptr,
+ shape=(B, T, S, C),
+ strides=(T * S * C, S * C, C, 1),
+ offsets=(b_id, t_id, s_first_id, 0),
+ block_shape=(1, 1, BLOCK_SIZE_S, C),
+ order=(0, 1, 2, 3),
+ )
+ s = tl.load(s_block_ptr) # (1, 1, BLOCK_SIZE_S, C)
+ o = q[:, :, None, :] * s # (1, 1, BLOCK_SIZE_S, C)
+ o = tl.sum(o, axis=-1) # (1, 1, BLOCK_SIZE_S)
+ # Store the result
+ y_block_ptr = tl.make_block_ptr(
+ base=y_ptr,
+ shape=(B, T, SW),
+ strides=(T * SW, SW, 1),
+ offsets=(b_id, t_id, s_first_id),
+ block_shape=(1, 1, BLOCK_SIZE_S),
+ order=(0, 1, 2),
+ )
+ tl.store(y_block_ptr, o.to(y_block_ptr.dtype.element_ty)) # (1, 1, BLOCK_SIZE_S)
+ else:
+ w_first_id = (sw_block_id - num_s_blocks) * BLOCK_SIZE_W
+ # Fetch the key at [b_id, t_id-W+1+(w_block_id*BLOCK_SIZE_W):t_id+(w_block_id*BLOCK_SIZE_W), :]
+ # need to load the keys manually because make_block_ptr doesn't support masks
+ tw_offs = tl.arange(0, BLOCK_SIZE_W)
+ c_offs = tl.arange(0, C)
+ k_block_ptr = k_base + (t_id - W + 1 + (w_first_id + tw_offs[:, None])) * C + c_offs[None, :]
+ mask = w_first_id + tl.arange(0, BLOCK_SIZE_W)[:, None] > (W - t_id - 2)
+ k = tl.load(k_block_ptr, mask=mask) # (BLOCK_SIZE_W, C)
+ # Compute the dot product (but not with tl.dot because it has a minimum size of 16)
+ y = q * k[None, :] # (1, BLOCK_SIZE_W, C)
+ y = tl.sum(y, axis=-1) # (1, BLOCK_SIZE_W)
+ # Store the result
+ y_block_ptr = tl.make_block_ptr(
+ base=y_ptr,
+ shape=(B, T, SW),
+ strides=(T * SW, SW, 1),
+ offsets=(b_id, t_id, S + w_first_id),
+ block_shape=(1, 1, BLOCK_SIZE_W),
+ order=(0, 1, 2),
+ )
+ tl.store(y_block_ptr, y[None, :].to(y_block_ptr.dtype.element_ty)) # (1, 1, BLOCK_SIZE_W)
+
+# @triton.autotune(
+# configs=[
+# triton.Config({
+# 'BLOCK_SIZE_C': bs_c,
+# }, num_warps=warps)
+# for bs_c in [16] #, 32, 64]
+# for warps in [2] # 4, 8]
+# ],
+# key=[]
+# )
+@triton.autotune(
+ configs=[
+ triton.Config({
+ 'BLOCK_SIZE_C': bs_c,
+ }, num_warps=warps)
+ for bs_c in [16, 32, 64]
+ for warps in [2, 4, 8]
+ ],
+ key=[]
+)
+@triton.jit
+def afak_bwd_kernel(
+ q_ptr, k_ptr, states_ptr, dy_ptr, dq_ptr, dk_ptr, ds_ptr,
+ B: tl.constexpr, T: tl.constexpr, S: tl.constexpr, C: tl.constexpr, W: tl.constexpr,
+ BLOCK_SIZE_C: tl.constexpr,
+):
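+    # Each program handles one (batch, time step, channel block) tile: dq is
+    # accumulated from both the window logits and the state logits at step t_id,
+    # ds is the outer product of the state-logit gradients with the query, and
+    # dk gathers a diagonal of dy so that key t_id receives contributions from
+    # the queries at steps t_id .. t_id+W-1.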
+ # Use multiple program IDs for better parallelization
+ b_id = tl.program_id(axis=0)
+ t_id = tl.program_id(axis=1)
+ c_block_id = tl.program_id(axis=2)
+ c_first_id = c_block_id * BLOCK_SIZE_C
+ SW = S + W
+
+ # Compute base pointers
+ q_base = q_ptr + b_id * T * C
+ k_base = k_ptr + b_id * T * C
+ dy_base = dy_ptr + b_id * T * SW
+ dq_base = dq_ptr + b_id * T * C
+ dk_base = dk_ptr + b_id * T * C
+
+ # First calculate the gradients for q
+ # Fetch original keys at [b_id, t_id-W+1:t_id, c_first_id:c_first_id+BLOCK_SIZE_C]
+ # using a block ptr also disallows the use of masks when loading, so let's just make a ptr manually
+ tw_offs = tl.arange(0, W)
+ c_offs = tl.arange(0, BLOCK_SIZE_C)
+ k_block_ptr = k_base + (t_id - W + 1 + tw_offs[:, None]) * C + c_first_id + c_offs[None, :]
+ mask = tl.arange(0, W)[:, None] > (W - t_id - 2)
+ k = tl.load(k_block_ptr, mask=mask) # (W, BLOCK_SIZE_C)
+ # Fetch output gradients at [b_id, t_id, S:W]
+ dy_block_ptr = tl.make_block_ptr(
+ base=dy_ptr,
+ shape=(B, T, SW),
+ strides=(T * SW, SW, 1),
+ offsets=(b_id, t_id, S),
+ block_shape=(1, 1, W),
+ order=(0, 1, 2),
+ )
+ dy = tl.load(dy_block_ptr) # (1, 1, W)
+ # Compute the gradients for q
+ dqk = dy.permute(0, 2, 1) * k[None, :] # (1, W, BLOCK_SIZE_C)
+ dqk = tl.sum(dqk, axis=1) # (1, BLOCK_SIZE_C)
+ # Then we also have to add the gradients from the states
+ # Fetch the states at [b_id, t_id, c_first_id:c_first_id+BLOCK_SIZE_C]
+ s_block_ptr = tl.make_block_ptr(
+ base=states_ptr,
+ shape=(B, T, S, C),
+ strides=(T * S * C, S * C, C, 1),
+ offsets=(b_id, t_id, 0, c_first_id),
+ block_shape=(1, 1, S, BLOCK_SIZE_C),
+ order=(0, 1, 2, 3),
+ )
+ s = tl.load(s_block_ptr) # (1, 1, S, BLOCK_SIZE_C)
+ # Fetch the output gradients at [b_id, t_id, :S]
+ dy_block_ptr = tl.make_block_ptr(
+ base=dy_ptr,
+ shape=(B, T, SW),
+ strides=(T * SW, SW, 1),
+ offsets=(b_id, t_id, 0),
+ block_shape=(1, 1, S),
+ order=(0, 1, 2),
+ )
+ dy = tl.load(dy_block_ptr) # (1, 1, S)
+ # Compute the gradients for q
+ dqs = dy[:, :, :, None] * s # (1, 1, S, BLOCK_SIZE_C)
+ dqs = tl.sum(dqs, axis=2) # (1, 1, BLOCK_SIZE_C)
+ dq = dqk[None, :] + dqs # (1, 1, BLOCK_SIZE_C)
+ # Store the result
+ dq_block_ptr = tl.make_block_ptr(
+ base=dq_ptr,
+ shape=(B, T, C),
+ strides=(T * C, C, 1),
+ offsets=(b_id, t_id, c_first_id),
+ block_shape=(1, 1, BLOCK_SIZE_C),
+ order=(0, 1, 2),
+ )
+ tl.store(dq_block_ptr, dq.to(dq_block_ptr.dtype.element_ty)) # (1, 1, BLOCK_SIZE_C)
+
+ # Calculate the gradients for states while we're at it
+ # Fetch the query at [b_id, t_id, c_first_id:c_first_id+BLOCK_SIZE_C]
+ q_block_ptr = tl.make_block_ptr(
+ base=q_ptr,
+ shape=(B, T, C),
+ strides=(T * C, C, 1),
+ offsets=(b_id, t_id, c_first_id),
+ block_shape=(1, 1, BLOCK_SIZE_C),
+ order=(0, 1, 2),
+ )
+ q = tl.load(q_block_ptr) # (1, 1, BLOCK_SIZE_C)
+ # Compute the gradients for states
+ ds = dy[:, :, :, None] * q[:, :, None, :] # (1, 1, S, BLOCK_SIZE_C)
+ # Store the result
+ ds_block_ptr = tl.make_block_ptr(
+ base=ds_ptr,
+ shape=(B, T, S, C),
+ strides=(T * S * C, S * C, C, 1),
+ offsets=(b_id, t_id, 0, c_first_id),
+ block_shape=(1, 1, S, BLOCK_SIZE_C),
+ order=(0, 1, 2, 3),
+ )
+ tl.store(ds_block_ptr, ds.to(ds_block_ptr.dtype.element_ty)) # (1, 1, S, BLOCK_SIZE_C)
+
+ # Then calculate the gradients for k
+ # same thing here, let's just make the ptr manually
+ tw_offs = tl.arange(0, W)
+ c_offs = tl.arange(0, BLOCK_SIZE_C)
+ q_block_ptr = q_base + (t_id + tw_offs[:, None]) * C + c_first_id + c_offs[None, :]
+ mask = tl.arange(0, W)[:, None] < T - t_id
+ q = tl.load(q_block_ptr, mask=mask) # (W, BLOCK_SIZE_C)
+ # Fetch original gradients at [b_id, t_id, :]
+    # This one is tricky because we have to fetch a diagonal from dy,
+    # running from [b_id, t_id, S+W-1] down to [b_id, t_id+W-1, S]
+ w_offs = tl.arange(0, W)
+ diag_dy_base = dy_base + t_id * SW + S + tl.flip(w_offs, 0)
+ dy_block_ptr = diag_dy_base + w_offs * SW
+ mask = tl.arange(0, W) < T - t_id
+ dy = tl.load(dy_block_ptr, mask=mask) # (W)
+ # Compute the gradients for k
+ dk = dy.reshape(W, 1) * q # (W, BLOCK_SIZE_C)
+ dk = tl.sum(dk, axis=0) # (BLOCK_SIZE_C)
+ # Store the result
+ dk_block_ptr = tl.make_block_ptr(
+ base=dk_ptr,
+ shape=(B, T, C),
+ strides=(T * C, C, 1),
+ offsets=(b_id, t_id, c_first_id),
+ block_shape=(1, 1, BLOCK_SIZE_C),
+ order=(0, 1, 2),
+ )
+ tl.store(dk_block_ptr, dk.reshape(1, 1, BLOCK_SIZE_C).to(dk_block_ptr.dtype.element_ty)) # (1, 1, BLOCK_SIZE_C)
+
+class AttendFoldedAllKeysTriton(torch.autograd.Function):
+ # @torch.compiler.disable
+ @staticmethod
+ def forward(ctx, q, k, states, W):
+ B, T, C = q.shape
+ B, T, S, C = states.shape
+ q = q.contiguous()
+ k = k.contiguous()
+ states = states.contiguous()
+ ctx.save_for_backward(q, k, states)
+ ctx.W = W
+
+ # Calculate grid dimensions
+ grid = lambda meta: (B, T, triton.cdiv(S, meta['BLOCK_SIZE_S']) + triton.cdiv(W, meta['BLOCK_SIZE_W']))
+
+ # Allocate output tensor
+ y = torch.zeros((B, T, S+W), dtype=q.dtype, device=q.device).contiguous()
+
+ # Launch kernel
+ afak_fwd_kernel[grid](
+ q, k, states, y,
+ B, T, S, C, W,
+ )
+
+ return y
+
+ # @torch.compiler.disable
+ @staticmethod
+ def backward(ctx, grad_output):
+ grad_output = grad_output.contiguous()
+ q, k, states = ctx.saved_tensors
+ B, T, S, C = states.shape
+ W = ctx.W
+
+ # Calculate grid dimensions
+ grid = lambda meta: (B, T, triton.cdiv(C, meta['BLOCK_SIZE_C']))
+
+ gq = torch.zeros_like(q).contiguous()
+ gk = torch.zeros_like(k).contiguous()
+ gs = torch.zeros_like(states).contiguous()
+
+ # Launch kernel
+ afak_bwd_kernel[grid](
+ q, k, states, grad_output, gq, gk, gs,
+ B, T, S, C, W
+ )
+
+ return gq, gk, gs, None
+
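+# A naive PyTorch reference for the semantics of AttendFoldedAllKeysTriton.forward, kept only as a
+# readability sketch; `attend_folded_all_keys_ref` is a hypothetical helper, not used by the kernels above.
+def attend_folded_all_keys_ref(q: torch.Tensor, k: torch.Tensor, states: torch.Tensor, W: int) -> torch.Tensor:
+    # q, k: (B, T, C); states: (B, T, S, C); returns y: (B, T, S + W)
+    B, T, C = q.shape
+    y_states = torch.einsum('btc,btsc->bts', q, states)
+    k_pad = torch.cat([k.new_zeros(B, W - 1, C), k], dim=1)  # left-pad so every step sees W keys
+    k_win = k_pad.unfold(1, W, 1)                            # (B, T, C, W), window ending at t
+    y_keys = torch.einsum('btc,btcw->btw', q, k_win)
+    return torch.cat([y_states, y_keys], dim=-1)
+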
+# triton kernel
+# @triton.autotune(
+# configs=[
+# triton.Config({
+# 'BLOCK_SIZE_C': bs_c,
+# }, num_warps=warps)
+# for bs_c in [16] #, 32, 64]
+# for warps in [2] # 4, 8]
+# ],
+# key=[]
+# )
+@triton.autotune(
+ configs=[
+ triton.Config({
+ 'BLOCK_SIZE_C': bs_c,
+ }, num_warps=warps)
+ for bs_c in [16, 32, 64]
+ for warps in [2, 4, 8]
+ ],
+ key=[]
+)
+@triton.jit
+def afav_fwd_kernel(
+ s_ptr, v_ptr, states_ptr, y_ptr,
+ B: tl.constexpr, T: tl.constexpr, S: tl.constexpr, C: tl.constexpr, W: tl.constexpr,
+ BLOCK_SIZE_C: tl.constexpr,
+):
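+    # Output kernel: intended to fold the attention weights back into an output, i.e.
+    #   y[b, t, c] = sum_s scores[b, t, s] * states[b, t, s, c]
+    #              + sum_j scores[b, t, S + j] * v[b, t - W + 1 + j, c],
+    # with each program handling one (batch, time) pair and one block of channels.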
+ # Use multiple program IDs for better parallelization
+ b_id = tl.program_id(axis=0)
+ t_id = tl.program_id(axis=1)
+ c_block_id = tl.program_id(axis=2)
+ c_first_id = c_block_id * BLOCK_SIZE_C
+ SW = S + W
+
+ # Compute base pointers
+    s_base = s_ptr + b_id * T * SW
+ v_base = v_ptr + b_id * T * C
+ y_base = y_ptr + b_id * T * C
+
+ # First we accumulate the values
+ # Fetch the scores at [b_id, t_id, S:W]
+ sv_block_ptr = tl.make_block_ptr(
+ base=s_ptr,
+ shape=(B, T, SW),
+ strides=(T * SW, SW, 1),
+ offsets=(b_id, t_id, S),
+ block_shape=(1, 1, W),
+ order=(0, 1, 2),
+ )
+ sv = tl.load(sv_block_ptr) # (1, 1, W)
+ # Fetch the value at [b_id, t_id-W+1:t_id, c_first_id:c_first_id+BLOCK_SIZE_C]
+    # need to load the values manually because make_block_ptr doesn't support masks
+ tw_offs = tl.arange(0, W)
+ c_offs = tl.arange(0, BLOCK_SIZE_C)
+ v_block_ptr = v_base + (t_id - W + 1 + tw_offs[:, None]) * C + c_first_id + c_offs[None, :]
+ mask = tl.arange(0, W)[:, None] > (W - t_id - 2)
+    v = tl.load(v_block_ptr, mask=mask)  # (W, BLOCK_SIZE_C)
+
+ # We already fetched output gradients dy at [b_id, t_id, :] w/ size (1, 1, C)
+ # Compute the gradients for v
+ dv = dy * s.reshape(1, BLOCK_SIZE_W, 1) # (1, BLOCK_SIZE_W, C)
+
+        # Compute the gradients for the window scores
+ dsv = dy * v[None, :] # (1, BLOCK_SIZE_W, C)
+ dsv = tl.sum(dsv, axis=-1) # (1, BLOCK_SIZE_W)
+
+ # Store the result
+ dsv_block_ptr = tl.make_block_ptr(
+ base=ds_ptr,
+ shape=(B, T, SW),
+ strides=(T * SW, SW, 1),
+ offsets=(b_id, t_id, S+w_first_id),
+ block_shape=(1, 1, BLOCK_SIZE_W),
+ order=(0, 1, 2),
+ )
+ tl.store(dsv_block_ptr, dsv[None, :].to(dsv_block_ptr.dtype.element_ty)) # (1, 1, BLOCK_SIZE_W)
+
+ # Store the result
+ # need to make a ptr manually because make_block_ptr doesn't support masks
+ tw_offs = tl.arange(0, BLOCK_SIZE_W)
+ c_offs = tl.arange(0, C)
+ dv_block_ptr = dv_base + (t_id - W + 1 + (w_first_id + tw_offs[:, None])) * C + c_offs[None, :]
+ mask = w_first_id + tl.arange(0, BLOCK_SIZE_W)[:, None] > (W - t_id - 2)
+ # now we have to atomically add the gradients to the original values
+ tl.atomic_add(dv_block_ptr[None, :], dv)
+ else:
+ s_first_id = sw_block_id * BLOCK_SIZE_S
+ # Here we calculate the gradients for s[:, :, :S] and for states
+ # First calculate the gradients for s
+ # Fetch states at [b_id, t_id, s_first_id:s_first_id+BLOCK_SIZE_S, :]
+ states_block_ptr = tl.make_block_ptr(
+ base=states_ptr,
+ shape=(B, T, S, C),
+ strides=(T * S * C, S * C, C, 1),
+ offsets=(b_id, t_id, s_first_id, 0),
+ block_shape=(1, 1, BLOCK_SIZE_S, C),
+ order=(0, 1, 2, 3),
+ )
+ states = tl.load(states_block_ptr) # (1, 1, BLOCK_SIZE_S, C)
+ # Fetch original output gradients at [b_id, t_id, :]
+ dy_block_ptr = tl.make_block_ptr(
+ base=dy_ptr,
+ shape=(B, T, C),
+ strides=(T * C, C, 1),
+ offsets=(b_id, t_id, 0),
+ block_shape=(1, 1, C),
+ order=(0, 1, 2),
+ )
+ dy = tl.load(dy_block_ptr) # (1, 1, C)
+ # Fetch the scores at [b_id, t_id, :S]
+ ss_block_ptr = tl.make_block_ptr(
+ base=s_ptr,
+ shape=(B, T, SW),
+ strides=(T * SW, SW, 1),
+ offsets=(b_id, t_id, s_first_id),
+ block_shape=(1, 1, BLOCK_SIZE_S),
+ order=(0, 1, 2),
+ )
+ ss = tl.load(ss_block_ptr) # (1, 1, BLOCK_SIZE_S)
+
+ # Compute the gradients for s
+ dss = dy[:, :, None, :] * states # (1, 1, BLOCK_SIZE_S, C)
+ dss = tl.sum(dss, axis=-1) # (1, 1, BLOCK_SIZE_S)
+
+ # Then calculate the gradients for states
+ dstates = dy[:, :, None, :] * ss[:, :, :, None] # (1, 1, BLOCK_SIZE_S, C)
+
+ # Store the result gradients of s at [b_id, t_id, :S]
+ dss_block_ptr = tl.make_block_ptr(
+ base=ds_ptr,
+ shape=(B, T, SW),
+ strides=(T * SW, SW, 1),
+ offsets=(b_id, t_id, s_first_id),
+ block_shape=(1, 1, BLOCK_SIZE_S),
+ order=(0, 1, 2),
+ )
+ tl.store(dss_block_ptr, dss.to(dss_block_ptr.dtype.element_ty)) # (1, 1, BLOCK_SIZE_S)
+
+ # Store the result gradients of states at [b_id, t_id, s_first_id:s_first_id+BLOCK_SIZE_S, :]
+ dstates_block_ptr = tl.make_block_ptr(
+ base=dstates_ptr,
+ shape=(B, T, S, C),
+ strides=(T * S * C, S * C, C, 1),
+ offsets=(b_id, t_id, s_first_id, 0),
+ block_shape=(1, 1, BLOCK_SIZE_S, C),
+ order=(0, 1, 2, 3),
+ )
+ tl.store(dstates_block_ptr, dstates.to(dstates_block_ptr.dtype.element_ty)) # (1, 1, BLOCK_SIZE_S, C)
+
+class AccumulateFoldedAllValuesTriton(torch.autograd.Function):
+ # @torch.compiler.disable
+ @staticmethod
+ def forward(ctx, s, v, states, W):
+ B, T, S, C = states.shape
+ s = s.contiguous()
+ v = v.contiguous()
+ states = states.contiguous()
+ ctx.save_for_backward(s, v, states)
+ ctx.W = W
+
+ # Calculate grid dimensions
+ grid = lambda meta: (B, T, triton.cdiv(C, meta['BLOCK_SIZE_C']))
+
+ # Allocate output tensor
+ y = torch.zeros((B, T, C), dtype=v.dtype, device=v.device).contiguous()
+
+ # Launch kernel
+ afav_fwd_kernel[grid](
+ s, v, states, y,
+ B, T, S, C, W,
+ )
+
+ return y
+
+ # @torch.compiler.disable
+ @staticmethod
+ def backward(ctx, grad_output):
+ grad_output = grad_output.contiguous()
+ s, v, states = ctx.saved_tensors
+ B, T, S, C = states.shape
+ W = ctx.W
+
+ # Calculate grid dimensions
+ grid = lambda meta: (B, T, triton.cdiv(S, meta['BLOCK_SIZE_S']) + triton.cdiv(W, meta['BLOCK_SIZE_W']))
+
+ gs = torch.zeros_like(s).contiguous()
+        # for gv we want an additional W-1 steps at the start of the time dimension because the atomic add cannot be masked
+ gv = torch.zeros((B, T+W-1, C), device=v.device).contiguous()
+ gst = torch.zeros_like(states).contiguous()
+
+ # Launch kernel
+ afav_bwd_kernel[grid](
+ s, v, states, grad_output, gs, gv, gst,
+ B, T, S, C, W,
+ )
+
+        # Drop the additional W-1 steps at the start of the time dimension for gv
+ return gs, gv[:, W-1:].to(s.dtype), gst, None
+
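+# A naive sketch of the intended semantics of AccumulateFoldedAllValuesTriton.forward;
+# `accumulate_folded_all_values_ref` is a hypothetical reference, not used by the kernels above.
+def accumulate_folded_all_values_ref(s: torch.Tensor, v: torch.Tensor, states: torch.Tensor, W: int) -> torch.Tensor:
+    # s: (B, T, S + W) attention weights; v: (B, T, C); states: (B, T, S, C); returns o: (B, T, C)
+    B, T, C = v.shape
+    S = states.shape[2]
+    o_states = torch.einsum('bts,btsc->btc', s[:, :, :S], states)
+    v_pad = torch.cat([v.new_zeros(B, W - 1, C), v], dim=1)
+    v_win = v_pad.unfold(1, W, 1)                            # (B, T, C, W), window ending at t
+    o_values = torch.einsum('btw,btcw->btc', s[:, :, S:], v_win)
+    return o_states + o_values
+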
+# @triton.autotune(
+# configs=[
+# triton.Config({
+# 'BLOCK_SIZE': bs,
+# 'BLOCK_SIZE_S': bs_s,
+# 'BLOCK_SIZE_C': bs_c
+# }, num_warps=warps)
+# for bs in [16] #, 32, 64]
+# for bs_s in [16] #, 32, 64]
+# for bs_c in [16] #, 32, 64]
+# for warps in [2] # 4, 8]
+# ],
+# key=[]
+# )
+@triton.autotune(
+ configs=[
+ triton.Config({
+ 'BLOCK_SIZE': bs,
+ 'BLOCK_SIZE_S': bs_s,
+ 'BLOCK_SIZE_C': bs_c
+ }, num_warps=warps)
+ for bs in [16, 32, 64]
+ for bs_s in [8, 16] #, 32, 64]
+ for bs_c in [16, 32, 64]
+ for warps in [2, 4, 8]
+ ],
+ key=[]
+)
+@triton.jit
+def cg2d_fwd_kernel(
+ xg_ptr, gi_ptr,
+ B: tl.constexpr, S: tl.constexpr, C: tl.constexpr, T: tl.constexpr, nstages: tl.constexpr,
+ BLOCK_SIZE: tl.constexpr,
+ # Add more constants for tiling
+ BLOCK_SIZE_S: tl.constexpr,
+ BLOCK_SIZE_C: tl.constexpr,
+):
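+    # In-place parallel scan over the time dimension: after all stages, xg[b, t] holds the solution
+    # of the gated recurrence y_t = x_t + g_t * y_{t-1} (per slot and channel) and gi[b, t] holds the
+    # running product of the gates; the combination pattern takes log2(T) stages.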
+ # Use multiple program IDs for better parallelization
+ pid = tl.program_id(axis=0)
+ # Compute batch, spatial, and channel indices
+ num_s_blocks = tl.cdiv(S, BLOCK_SIZE_S)
+ num_c_blocks = tl.cdiv(C, BLOCK_SIZE_C)
+ b = pid // (num_s_blocks * num_c_blocks)
+ rem = pid % (num_s_blocks * num_c_blocks)
+ s_block = rem // num_c_blocks
+ c_block = rem % num_c_blocks
+
+ # Compute actual indices
+ s_offs = tl.arange(0, BLOCK_SIZE_S)
+ c_offs = tl.arange(0, BLOCK_SIZE_C)
+ s_mask = s_offs < (S - s_block * BLOCK_SIZE_S)
+ c_mask = c_offs < (C - c_block * BLOCK_SIZE_C)
+ s_offs = s_block * BLOCK_SIZE_S + s_offs
+ c_offs = c_block * BLOCK_SIZE_C + c_offs
+
+ # Compute base pointers
+ xg_base = xg_ptr + b * T * S * C
+ gi_base = gi_ptr + b * T * S
+
+ # Precompute stages for better efficiency
+ # nstages = tl.ceil(tl.log2(float(T))).to(tl.int32)
+ offs = tl.arange(0, BLOCK_SIZE)
+
+ for stage in tl.range(nstages): # CHANGE BACK TO tl.static_range() IN FINAL VERSION
+ group_stride = 1 << stage
+ # Process multiple elements per thread using BLOCK_SIZE
+ for block_start in tl.range(0, T//2, BLOCK_SIZE):
+ block_mask = offs < (T//2 - block_start)
+ block_s_mask = block_mask[:, None] & s_mask[None, :]
+ block_s_c_mask = block_mask[:, None, None] & s_mask[None, :, None] & c_mask[None, None, :]
+
+ # Compute indices with vectorization
+ initial_indices = group_stride + ((offs + block_start) // group_stride) * group_stride * 2
+ t_targets = initial_indices + ((offs + block_start) % group_stride)
+ t_adders = initial_indices - 1
+
+ xg_targets_ptr = xg_base + t_targets[:, None, None] * S * C + s_offs[None, :, None] * C + c_offs[None, None, :] # (BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ xg_adders_ptr = xg_base + t_adders[:, None, None] * S * C + s_offs[None, :, None] * C + c_offs[None, None, :] # (BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ gi_targets_ptr = gi_base + t_targets[:, None] * S + s_offs[None, :] # (BLOCK_SIZE, BLOCK_SIZE_S)
+ gi_adders_ptr = gi_base + t_adders[:, None] * S + s_offs[None, :] # (BLOCK_SIZE, BLOCK_SIZE_S)
+
+ xg_targets = tl.load(xg_targets_ptr, mask=block_s_c_mask) # (BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ xg_adders = tl.load(xg_adders_ptr, mask=block_s_c_mask) # (BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ gi_targets = tl.load(gi_targets_ptr, mask=block_s_mask) # (BLOCK_SIZE, BLOCK_SIZE_S)
+ gi_adders = tl.load(gi_adders_ptr, mask=block_s_mask) # (BLOCK_SIZE, BLOCK_SIZE_S)
+
+ # Compute and store results
+ xg_targets += xg_adders * gi_targets[:, :, None] # (BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ # Update gates
+ gi_targets *= gi_adders # (BLOCK_SIZE, BLOCK_SIZE_S)
+
+ tl.store(xg_targets_ptr, xg_targets.to(xg_targets_ptr.dtype.element_ty), mask=block_s_c_mask) # (BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ tl.store(gi_targets_ptr, gi_targets.to(gi_targets_ptr.dtype.element_ty), mask=block_s_mask) # (BLOCK_SIZE, BLOCK_SIZE_S)
+
+# @triton.autotune(
+# configs=[
+# triton.Config({
+# 'BLOCK_SIZE': bs,
+# 'BLOCK_SIZE_S': bs_s,
+# 'BLOCK_SIZE_C': bs_c
+# }, num_warps=warps)
+# for bs in [16] #, 32, 64]
+# for bs_s in [16] #, 32, 64]
+# for bs_c in [16] #, 32, 64]
+# for warps in [2] #, 32, 64]
+# ],
+# key=[]
+# )
+@triton.autotune(
+ configs=[
+ triton.Config({
+ 'BLOCK_SIZE': bs,
+ 'BLOCK_SIZE_S': bs_s,
+ 'BLOCK_SIZE_C': bs_c
+ }, num_warps=warps)
+ for bs in [16, 32, 64]
+ for bs_s in [8, 16] #, 32, 64]
+ for bs_c in [16, 32, 64]
+ for warps in [2, 4, 8]
+ ],
+ key=[]
+)
+@triton.jit
+def cg2d_gxg_bwd_kernel(
+ gi_ptr, go_ptr,
+ B: tl.constexpr, S: tl.constexpr, C: tl.constexpr, T: tl.constexpr, nstages: tl.constexpr,
+ BLOCK_SIZE: tl.constexpr,
+ BLOCK_SIZE_S: tl.constexpr,
+ BLOCK_SIZE_C: tl.constexpr,
+):
+ # Similar structure to forward kernel with reversed indices
+ pid = tl.program_id(axis=0)
+ num_s_blocks = tl.cdiv(S, BLOCK_SIZE_S)
+ num_c_blocks = tl.cdiv(C, BLOCK_SIZE_C)
+ b = pid // (num_s_blocks * num_c_blocks)
+ rem = pid % (num_s_blocks * num_c_blocks)
+ s_block = rem // num_c_blocks
+ c_block = rem % num_c_blocks
+
+ s_offs = tl.arange(0, BLOCK_SIZE_S)
+ c_offs = tl.arange(0, BLOCK_SIZE_C)
+ s_mask = s_offs < (S - s_block * BLOCK_SIZE_S)
+ c_mask = c_offs < (C - c_block * BLOCK_SIZE_C)
+ s_offs = s_block * BLOCK_SIZE_S + s_offs
+ c_offs = c_block * BLOCK_SIZE_C + c_offs
+
+ gi_base = gi_ptr + b * T * S
+ go_base = go_ptr + b * T * S * C
+
+ # nstages = tl.ceil(tl.log2(float(T))).to(tl.int32)
+ offs = tl.arange(0, BLOCK_SIZE)
+
+ for stage in tl.range(nstages): # CHANGE BACK TO tl.static_range() IN FINAL VERSION
+ group_stride = 1 << stage
+ for block_start in tl.range(0, T//2, BLOCK_SIZE):
+ block_mask = offs < (T//2 - block_start)
+ block_s_mask = block_mask[:, None] & s_mask[None, :]
+ block_s_c_mask = block_mask[:, None, None] & s_mask[None, :, None] & c_mask[None, None, :]
+
+ initial_indices = T - 1 - group_stride - ((offs + block_start) // group_stride) * group_stride * 2
+ t_targets = initial_indices - ((offs + block_start) % group_stride)
+ t_adders = initial_indices + 1
+
+ go_targets_ptr = go_base + t_targets[:, None, None] * S * C + s_offs[None, :, None] * C + c_offs[None, None, :] # (BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ go_adders_ptr = go_base + t_adders[:, None, None] * S * C + s_offs[None, :, None] * C + c_offs[None, None, :] # (BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ gi_targets_ptr = gi_base + t_targets[:, None] * S + s_offs[None, :] # (BLOCK_SIZE, BLOCK_SIZE_S)
+ gi_adders_ptr = gi_base + t_adders[:, None] * S + s_offs[None, :] # (BLOCK_SIZE, BLOCK_SIZE_S)
+
+ # Load with block masking
+ go_targets = tl.load(go_targets_ptr, mask=block_s_c_mask) # (BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ go_adders = tl.load(go_adders_ptr, mask=block_s_c_mask) # (BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ gi_targets = tl.load(gi_targets_ptr, mask=block_s_mask) # (BLOCK_SIZE, BLOCK_SIZE_S)
+ gi_adders = tl.load(gi_adders_ptr, mask=block_s_mask) # (BLOCK_SIZE, BLOCK_SIZE_S)
+
+ # Compute and store results
+ go_targets += go_adders * gi_targets[:, :, None] # (BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ gi_targets *= gi_adders # (BLOCK_SIZE, BLOCK_SIZE_S)
+
+ tl.store(go_targets_ptr, go_targets.to(go_targets_ptr.dtype.element_ty), mask=block_s_c_mask) # (BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ tl.store(gi_targets_ptr, gi_targets.to(gi_targets_ptr.dtype.element_ty), mask=block_s_mask) # (BLOCK_SIZE, BLOCK_SIZE_S)
+
+# @triton.autotune(
+# configs=[
+# triton.Config({
+# 'BLOCK_SIZE': bs,
+# 'BLOCK_SIZE_S': bs_s,
+# 'BLOCK_SIZE_C': bs_c
+# }, num_warps=warps)
+# for bs in [16] #, 32, 64]
+# for bs_s in [16] #, 32, 64]
+# for bs_c in [16] #, 32, 64]
+# for warps in [2] #, 4, 8]
+# ],
+# key=[]
+# )
+@triton.autotune(
+ configs=[
+ triton.Config({
+ 'BLOCK_SIZE': bs,
+ 'BLOCK_SIZE_S': bs_s,
+ 'BLOCK_SIZE_C': bs_c
+ }, num_warps=warps)
+ for bs in [16, 32, 64]
+ for bs_s in [8, 16] #, 32, 64]
+ for bs_c in [16, 32, 64]
+ for warps in [2, 4, 8]
+ ],
+ key=[]
+)
+@triton.jit
+def cg2d_ggi_bwd_kernel(
+ go_ptr, y_ptr, grad_gi_ptr,
+ B: tl.constexpr, S: tl.constexpr, C: tl.constexpr, T: tl.constexpr,
+ BLOCK_SIZE: tl.constexpr,
+ BLOCK_SIZE_S: tl.constexpr,
+ BLOCK_SIZE_C: tl.constexpr
+):
+ b = tl.program_id(axis=0)
+ pid = tl.program_id(axis=1)
+ num_t_blocks = tl.cdiv(T, BLOCK_SIZE)
+ num_s_blocks = tl.cdiv(S, BLOCK_SIZE_S)
+ num_c_blocks = tl.cdiv(C, BLOCK_SIZE_C)
+ t_block = pid // (num_s_blocks * num_c_blocks)
+ rem = pid % (num_s_blocks * num_c_blocks)
+ s_block = rem // num_c_blocks
+ c_block = rem % num_c_blocks
+
+ t_offs = tl.arange(0, BLOCK_SIZE)
+ s_offs = tl.arange(0, BLOCK_SIZE_S)
+ c_offs = tl.arange(0, BLOCK_SIZE_C)
+ t_mask = t_offs < (T - t_block * BLOCK_SIZE)
+ s_mask = s_offs < (S - s_block * BLOCK_SIZE_S)
+ c_mask = c_offs < (C - c_block * BLOCK_SIZE_C)
+ t_offs = t_block * BLOCK_SIZE + t_offs
+ s_offs = s_block * BLOCK_SIZE_S + s_offs
+ c_offs = c_block * BLOCK_SIZE_C + c_offs
+
+ # Compute grad_gi
+ # torch:
+ # grad_gi = grad_output * y
+ # grad_gi = grad_gi.sum(-1)
+ grad_gi_base = grad_gi_ptr + b * T * S
+ t_first_id = t_block * BLOCK_SIZE
+ s_first_id = s_block * BLOCK_SIZE_S
+ c_first_id = c_block * BLOCK_SIZE_C
+ # We can use make_block_ptr since the blocks we need are contiguous
+ go_block_ptr = tl.make_block_ptr(
+ base=go_ptr,
+ shape=(B, T, S, C),
+ strides=(T * S * C, S * C, C, 1),
+ offsets=(b, t_first_id, s_first_id, c_first_id),
+ block_shape=(1, BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C),
+ order=(0, 1, 2, 3)
+ )
+ go_block = tl.load(go_block_ptr) # (1, BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ y_block_ptr = tl.make_block_ptr(
+ base=y_ptr,
+ shape=(B, T, S, C),
+ strides=(T * S * C, S * C, C, 1),
+ offsets=(b, t_first_id, s_first_id, c_first_id), # y is already shifted to the right by 1
+ block_shape=(1, BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C),
+ order=(0, 1, 2, 3)
+ )
+ y_block = tl.load(y_block_ptr) # (1, BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+
+ grad_gi = go_block * y_block # (1, BLOCK_SIZE, BLOCK_SIZE_S, BLOCK_SIZE_C)
+ grad_gi = tl.sum(grad_gi, axis=-1) # (1, BLOCK_SIZE, BLOCK_SIZE_S)
+
+    # Need atomic adds to accumulate the partial channel sums across C blocks, so build the pointer manually (tl.atomic_add does not take block pointers)
+ grad_gi_block_ptr = grad_gi_base + t_offs[:, None] * S + s_offs[None, :]
+ grad_gi_mask = t_mask[:, None] & s_mask[None, :]
+ tl.atomic_add(grad_gi_block_ptr[None, :], grad_gi, mask=grad_gi_mask[None, :])
+
+class CumulativeGating2DTriton(torch.autograd.Function):
+ # @torch.compiler.disable
+ @staticmethod
+ def forward(ctx, xg, gi):
+ xg = xg.contiguous()
+ gi = gi.contiguous()
+ orig_gi = gi.clone()
+ B, T, S, C = xg.shape
+
+ # Calculate grid dimensions
+ grid = lambda meta: (B * triton.cdiv(S, meta['BLOCK_SIZE_S']) * triton.cdiv(C, meta['BLOCK_SIZE_C']),)
+
+ # Launch kernel
+ nstages = math.ceil(math.log2(T))
+ cg2d_fwd_kernel[grid](
+ xg, gi,
+ B, S, C, T, nstages,
+ )
+
+ ctx.save_for_backward(xg, orig_gi)
+ return xg
+
+ # @torch.compiler.disable
+ @staticmethod
+ def backward(ctx, grad_output):
+ grad_output = grad_output.contiguous()
+ y, gi = ctx.saved_tensors
+ B, T, S, C = y.shape
+
+ # Calculate grid dimensions
+ grid = lambda meta: (B * triton.cdiv(S, meta['BLOCK_SIZE_S']) * triton.cdiv(C, meta['BLOCK_SIZE_C']),)
+
+ gi = torch.cat((gi[:, 1:], torch.ones_like(gi[:, -1:])), dim=1).contiguous()
+ grad_xg = grad_output.clone()
+ y = torch.cat((torch.zeros_like(y[:, :1]), y[:, :-1]), dim=1).contiguous()
+ grad_gi = torch.zeros((B, T, S), device=gi.device).contiguous() # torch.zeros_like(gi)
+
+ # Launch kernel
+ nstages = math.ceil(math.log2(T))
+ cg2d_gxg_bwd_kernel[grid](
+ gi, grad_xg,
+ B, S, C, T, nstages,
+ )
+
+ # Launch kernel
+ grid = lambda meta: (B, triton.cdiv(T, meta['BLOCK_SIZE']) * triton.cdiv(S, meta['BLOCK_SIZE_S']) * triton.cdiv(C, meta['BLOCK_SIZE_C']))
+ cg2d_ggi_bwd_kernel[grid](
+ grad_xg, y, grad_gi,
+ B, S, C, T,
+ )
+
+ return grad_xg, grad_gi.to(gi.dtype)
+
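+# A naive reference for what CumulativeGating2DTriton computes, kept as a readability sketch;
+# `cumulative_gating_2d_ref` is a hypothetical helper, not used by the kernels above.
+def cumulative_gating_2d_ref(xg: torch.Tensor, gi: torch.Tensor) -> torch.Tensor:
+    # xg: (B, T, S, C) gated inputs; gi: (B, T, S) per-slot gates broadcast over channels.
+    y = torch.empty_like(xg)
+    prev = torch.zeros_like(xg[:, 0])
+    for t in range(xg.shape[1]):
+        prev = xg[:, t] + gi[:, t, :, None] * prev
+        y[:, t] = prev
+    return y
+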
+# Parallel Semi-Compressed Attention
+def parallel_scan(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ s: torch.Tensor,
+ g: torch.Tensor,
+ window_size: int,
+ num_heads: int,
+ alibi: torch.Tensor,
+ mask: torch.Tensor,
+    scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: Optional[bool] = False,
+ checkpoint_level: Optional[int] = 2,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: Optional[bool] = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, HQ, T, K]` if `head_first=True` else `[B, T, HQ, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ GQA is performed if `H` is not equal to `HQ`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ s (torch.Tensor):
+ slot representations of shape `[B, H, T, M]` if `head_first=True` else `[B, T, H, M]`.
+ g (torch.Tensor):
+            Gates of shape `[B, H, T, M]` controlling the slot-state update `state_t = g_t * s_t + (1 - g_t) * state_{t-1}`.
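+        window_size (int):
+            Size `W` of the sliding window of recent keys/values that each query also attends to.
+        num_heads (int):
+            Number of attention heads `H`, used to unflatten the packed `B*H` leading dimension when applying `alibi` and `mask`.
+        alibi (torch.Tensor):
+            Additive attention bias of shape `[H, T, M+window_size]`, broadcast over the batch.
+        mask (torch.Tensor):
+            Attention mask of shape `[T, M+window_size]`; positions where it equals 0 are filled with `-inf` before the softmax.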
+        scale (Optional[float]):
+ Scale factor for attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[Tuple[torch.Tensor]]):
+ Initial state tuple having tensors of shape `[N, H, K, M]` and `[N, H, M, V]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state tuple, having tensors of shape `[N, H, K, M]` and `[N, H, M, V]`.
+ Default: `False`.
+ checkpoint_level (Optional[int]):
+ Checkpointing level; higher values will save more memories and do more recomputations during backward.
+ Default: `2`:
+ - Level `0`: no memory saved, no recomputation.
+ - Level `1`: recompute the fp32 cumulative values during backward.
+ - Level `2`: recompute the fp32 cumulative values and forward hidden states during backward.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format, which is not supported for variable-length inputs.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (Tuple[torch.Tensor]):
+ Final state tuple having tensors of shape `[N, H, K, M]` and `[N, H, M, V]` if `output_final_state=True`.
+ `None` otherwise.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+        >>> from fla.ops.gsa import chunk_gsa
+ # inputs with equal lengths
+ >>> B, T, H, K, V, M = 4, 2048, 4, 512, 512, 64
+ >>> q = torch.randn(B, T, H, K, device='cuda')
+ >>> k = torch.randn(B, T, H, K, device='cuda')
+ >>> v = torch.randn(B, T, H, V, device='cuda')
+ >>> s = torch.randn(B, T, H, M, device='cuda')
+ >>> g = F.logsigmoid(torch.randn(B, T, H, M, device='cuda'))
+ >>> h0 = (torch.randn(B, H, K, M, device='cuda'), torch.randn(B, H, M, V, device='cuda'))
+ >>> o, (hk, hv) = chunk_gsa(q, k, v, s, g,
+ initial_state=h0,
+ output_final_state=True,
+ head_first=False)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+ >>> q, k, v, s, g = map(lambda x: rearrange(x, 'b t h d -> 1 (b t) h d'), (q, k, v, s, g))
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, (hk_var, hv_var) = chunk_gsa(q, k, v, s, g,
+ initial_state=h0,
+ output_final_state=True,
+ offsets=offsets,
+ head_first=False)
+ >>> assert o.allclose(o_var.view(o.shape))
+ >>> assert hk.allclose(hk_var)
+ >>> assert hv.allclose(hv_var)
+ """
+ if offsets is not None:
+ if q.shape[0] != 1:
+            raise ValueError(f"The batch size is expected to be 1 rather than {q.shape[0]} when using `offsets`. "
+                             f"Please flatten variable-length inputs before processing.")
+ if head_first:
+ raise RuntimeError("Sequences with variable lengths are not supported for head-first mode")
+ if initial_state is not None and initial_state[0].shape[0] != len(offsets) - 1:
+ raise ValueError(f"The number of initial states is expected to be equal to the number of input sequences, "
+ f"i.e., {len(offsets) - 1} rather than {initial_state[0].shape[0]}.")
+ assert checkpoint_level in [0, 1, 2]
+
+ if scale is None:
+ scale = q.shape[-1] ** -0.5
+
+ BH, T, S = g.shape
+ # Do semi-compressed attention
+ sg = torch.einsum('bts,btc->btsc', g, s)
+ gi = 1 - g
+ states = CumulativeGating2DTriton.apply(sg, gi) # states (B*H, T, S, C) at all time steps
+ scores = AttendFoldedAllKeysTriton.apply(q, k, states, window_size) * scale # scores (B*H, T, S+W)
+ # bring back to (B, H, T, S+W) to apply alibi with shape (H, T, S+W)
+ scores = scores.view(-1, num_heads, T, S + window_size) + alibi[:, :T]
+ scores = scores.masked_fill(mask[:T] == 0, float('-inf'))
+ scores = torch.softmax(scores, dim=-1).view(BH, T, S + window_size)
+ o = AccumulateFoldedAllValuesTriton.apply(scores, v, states, window_size) # outputs (B*H, T, C)
+
+ final_state = None # TODO: fix for inference
+ return o, final_state
diff --git a/fla/ops/simple_gla/README.md b/fla/ops/simple_gla/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..2a64f3dcdee7ff9863089a6b47ef694f6234ab8f
--- /dev/null
+++ b/fla/ops/simple_gla/README.md
@@ -0,0 +1,10 @@
+# Simple GLA
+
+The gating mechanism used in [Gated RFA](https://arxiv.org/abs/2103.02143), [Mamba2](https://arxiv.org/abs/2405.21060) and [YOCO](https://arxiv.org/abs/2405.05254) (a.k.a. Gated RetNet).
+
+Compared to GLA, the gating is head-wise rather than elementwise.
+As a result, the RetNet training kernel can be adapted to use matmuls without numerical instability.
+It is faster than GLA but less expressive, and serves as a baseline for GLA.
+
+$S_{t+1} = g_{t+1} \odot S_{t} + K_{t+1} V_{t+1}^{\top}$ where $g$ is a scalar.
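+
+A minimal PyTorch sketch of this recurrence (the `[B, H, T, K]`/`[B, H, T, V]` shapes, the already-exponentiated per-head decay `g`, and the helper name are assumptions for illustration; the actual kernels in this package work on log decays and chunked inputs):
+
+```py
+import torch
+
+def simple_gla_recurrence(q, k, v, g):
+    # q, k: [B, H, T, K]; v: [B, H, T, V]; g: [B, H, T] head-wise scalar decay in (0, 1)
+    B, H, T, K = q.shape
+    S = q.new_zeros(B, H, K, v.shape[-1])
+    o = torch.empty_like(v)
+    for t in range(T):
+        S = g[:, :, t, None, None] * S + k[:, :, t, :, None] * v[:, :, t, None, :]
+        o[:, :, t] = torch.einsum('bhk,bhkv->bhv', q[:, :, t], S)
+    return o
+```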
diff --git a/fla/ops/simple_gla/__init__.py b/fla/ops/simple_gla/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..561e3afbf81e8ab1b0fe738e5c5e8d1e1626868e
--- /dev/null
+++ b/fla/ops/simple_gla/__init__.py
@@ -0,0 +1,11 @@
+# -*- coding: utf-8 -*-
+
+from .chunk import chunk_simple_gla
+from .fused_recurrent import fused_recurrent_simple_gla
+from .parallel import parallel_simple_gla
+
+__all__ = [
+ 'chunk_simple_gla',
+ 'fused_recurrent_simple_gla',
+ 'parallel_simple_gla'
+]
diff --git a/fla/ops/simple_gla/chunk.py b/fla/ops/simple_gla/chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..4095aee8a4be100c52b294e2c61fd11398bf940f
--- /dev/null
+++ b/fla/ops/simple_gla/chunk.py
@@ -0,0 +1,760 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.common.chunk_h import chunk_bwd_dh, chunk_fwd_h
+from fla.ops.utils import chunk_local_cumsum
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=4),
+ ],
+ key=["BT", "BK", "BV"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_simple_gla_fwd_kernel_o(
+ q,
+ k,
+ v,
+ h,
+ g,
+ o,
+ offsets,
+ indices,
+ scale,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NT: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_tg = i_t
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ else:
+ NT = tl.cdiv(T, BT)
+ i_tg = i_b * NT + i_t
+ bos, eos = i_b * T, i_b * T + T
+
+ o_i = tl.arange(0, BT)
+ m_s = o_i[:, None] >= o_i[None, :]
+
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ b_s = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_h = tl.make_block_ptr(h + (i_bh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_h = tl.make_block_ptr(h + (i_tg * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ # [BK, BT]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BK, BV]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ b_o += tl.dot(b_q, b_h, allow_tf32=False)
+ b_s += tl.dot(b_q, b_k, allow_tf32=False)
+
+ if HEAD_FIRST:
+ p_g = tl.make_block_ptr(g + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ else:
+ p_g = tl.make_block_ptr(g + bos * H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ p_v = tl.make_block_ptr(v + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_o = tl.make_block_ptr(o + (bos * H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ b_g = tl.load(p_g, boundary_check=(0,))
+ b_o = b_o * tl.exp(b_g)[:, None]
+ b_s = b_s * tl.exp(b_g[:, None] - b_g[None, :])
+ b_s = tl.where(m_s, b_s, 0)
+
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_o = (b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)) * scale
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8)
+ ],
+ key=["BT", "BK", "BV"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_simple_gla_bwd_kernel_dqkg(
+ q,
+ k,
+ v,
+ h,
+ g,
+ do,
+ dh,
+ dq,
+ dk,
+ dg,
+ offsets,
+ indices,
+ scale,
+ B: tl.constexpr,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_tg = i_t
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ all = T
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ else:
+ NT = tl.cdiv(T, BT)
+ i_tg = i_b * NT + i_t
+ bos, eos = i_b * T, i_b * T + T
+ all = B * T
+ o_i = tl.arange(0, BT)
+
+ if HEAD_FIRST:
+ p_g = tl.make_block_ptr(g + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ b_g_last = tl.load(g + i_bh * T + min(i_t * BT + BT, T) - 1)
+ else:
+ p_g = tl.make_block_ptr(g + bos * H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ b_g_last = tl.load(g + (bos + min(i_t * BT + BT, T) - 1) * H + i_h)
+ b_g = tl.load(p_g, boundary_check=(0,))
+
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+ b_ds = tl.zeros([BT, BT], dtype=tl.float32)
+ b_dg = tl.zeros([BT,], dtype=tl.float32)
+ b_dg_last = tl.zeros([1,], dtype=tl.float32)
+
+ for i_v in range(tl.cdiv(V, BV)):
+ if HEAD_FIRST:
+ p_v = tl.make_block_ptr(v + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + (i_bh * NT + i_t) * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ p_dh = tl.make_block_ptr(dh + (i_bh * NT + i_t) * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ else:
+ p_v = tl.make_block_ptr(v + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_h = tl.make_block_ptr(h + (i_tg * H + i_h) * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ p_dh = tl.make_block_ptr(dh + (i_tg * H + i_h) * K*V, (V, K), (1, V), (i_v * BV, i_k * BK), (BV, BK), (0, 1))
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [BV, BK]
+ b_h = tl.load(p_h, boundary_check=(0, 1))
+ b_dh = tl.load(p_dh, boundary_check=(0, 1))
+
+ b_dg_last += (tl.sum(b_h * b_dh))
+ b_ds += tl.dot(b_do, tl.trans(b_v))
+ b_dq += tl.dot(b_do, b_h.to(b_do.dtype))
+ b_dk += tl.dot(b_v, b_dh.to(b_v.dtype))
+
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dq = tl.make_block_ptr(dq + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dg = tl.make_block_ptr(dg + (i_k*B*H + i_bh) * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ else:
+ p_q = tl.make_block_ptr(q + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+        p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dq = tl.make_block_ptr(dq + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dg = tl.make_block_ptr(dg + (i_k*all + bos) * H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_dg_last *= tl.exp(b_g_last)
+ b_dq = b_dq * tl.exp(b_g)[:, None] * scale
+ b_dk = b_dk * tl.exp(-b_g + b_g_last)[:, None]
+ b_dg_last += tl.sum(b_dk * b_k)
+ b_ds = tl.where(o_i[:, None] >= o_i[None, :], b_ds * scale * tl.exp(b_g[:, None] - b_g[None, :]), 0)
+ b_ds = b_ds.to(b_k.dtype)
+ # [BT, BK]
+ b_dq += tl.dot(b_ds, b_k)
+ b_dk += tl.dot(tl.trans(b_ds), b_q)
+ b_dg += tl.sum(b_q * b_dq - b_k * b_dk, axis=1)
+ # (SY 09/21) revcumsum in a separate kernel due to strange triton compiler issue
+ # b_dg = tl.dot(tl.where(o_i[:, None] <= o_i[None, :], 1., 0.), b_dg, allow_tf32=False) + b_dg_last)
+ b_dg = tl.where(o_i < min(BT, T-i_t*BT) - 1, b_dg, b_dg + b_dg_last)
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), boundary_check=(0,))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ ],
+ key=["BT", "BK", "BV"],
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_simple_gla_bwd_kernel_dv(
+ q,
+ k,
+ g,
+ do,
+ dv,
+ dh,
+ offsets,
+ indices,
+ scale,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ USE_OFFSETS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr
+):
+ i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_tg = i_t
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ NT = tl.cdiv(T, BT)
+ else:
+ NT = tl.cdiv(T, BT)
+ i_tg = i_b * NT + i_t
+ bos, eos = i_b * T, i_b * T + T
+
+ if HEAD_FIRST:
+ b_g = tl.load(g + i_bh * T + i_t * BT + tl.arange(0, BT))
+ b_g_last = tl.load(g + i_bh * T + min(i_t * BT + BT, T) - 1)
+ else:
+ b_g = tl.load(g + (bos + i_t * BT + tl.arange(0, BT)) * H + i_h)
+ b_g_last = tl.load(g + (bos + min(i_t * BT + BT, T) - 1) * H + i_h)
+ b_dv = tl.zeros([BT, BV], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dh = tl.make_block_ptr(dh + (i_bh * NT + i_t) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+ else:
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dh = tl.make_block_ptr(dh + (i_tg * H + i_h) * K*V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0))
+
+ # [BT, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BK, BV]
+ b_dh = tl.load(p_dh, boundary_check=(0, 1))
+ b_dv += tl.dot(b_k, b_dh.to(b_k.dtype)) * tl.exp(-b_g + b_g_last)[:, None]
+
+ b_A = tl.zeros([BT, BT], dtype=tl.float32)
+ for i_k in range(tl.cdiv(K, BK)):
+ if HEAD_FIRST:
+ p_q = tl.make_block_ptr(q + i_bh * T*K, (K, T), (1, K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + i_bh * T*K, (T, K), (K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ else:
+ p_q = tl.make_block_ptr(q + (bos * H + i_h) * K, (K, T), (1, H*K), (i_k * BK, i_t * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + (bos * H + i_h) * K, (T, K), (H*K, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_A += tl.dot(b_k, b_q, allow_tf32=False)
+
+ mask = (tl.arange(0, BT)[:, None] <= tl.arange(0, BT)[None, :]) & (i_t * BT + tl.arange(0, BT) < T)
+ b_A = b_A * tl.exp(b_g[None, :] - b_g[:, None]) * scale
+ b_A = tl.where(mask, b_A, 0).to(do.dtype.element_ty)
+
+ if HEAD_FIRST:
+ p_do = tl.make_block_ptr(do + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + i_bh * T*V, (T, V), (V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ else:
+ p_do = tl.make_block_ptr(do + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (bos*H + i_h) * V, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_dv += tl.dot(b_A, b_do)
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+
+
+def chunk_simple_gla_fwd_o(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ h: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> torch.Tensor:
+ if head_first:
+ B, H, T, K, V = *q.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *q.shape, v.shape[-1]
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(triton.next_power_of_2(K), 64)
+ BV = min(triton.next_power_of_2(V), 64)
+ NV = triton.cdiv(V, BV)
+
+ o = torch.empty_like(v)
+ grid = (NV, NT, B * H)
+ chunk_simple_gla_fwd_kernel_o[grid](
+ q,
+ k,
+ v,
+ h,
+ g,
+ o,
+ offsets,
+ indices,
+ scale,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ NT=NT,
+ HEAD_FIRST=head_first
+ )
+ return o
+
+
+def chunk_simple_gla_bwd_dqkg(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ h: torch.Tensor,
+ do: torch.Tensor,
+ dh: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
+ if head_first:
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, v.shape[-1]
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(triton.next_power_of_2(K), 64)
+ BV = min(triton.next_power_of_2(V), 64)
+ NK = triton.cdiv(K, BK)
+
+ dq = torch.empty_like(q)
+ dk = torch.empty_like(k)
+ dg = torch.empty(NK, *g.shape, dtype=torch.float32, device=g.device)
+ grid = (NK, NT, B * H)
+ chunk_simple_gla_bwd_kernel_dqkg[grid](
+ q,
+ k,
+ v,
+ h,
+ g,
+ do,
+ dh,
+ dq,
+ dk,
+ dg,
+ offsets,
+ indices,
+ scale,
+ B=B,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ dg = chunk_local_cumsum(dg.sum(0), chunk_size, reverse=True, offsets=offsets, head_first=head_first)
+ return dq, dk, dg
+
+
+def chunk_simple_gla_bwd_dv(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ g: torch.Tensor,
+ do: torch.Tensor,
+ dh: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> torch.Tensor:
+ if head_first:
+ B, H, T, K, V = *k.shape, do.shape[-1]
+ else:
+ B, T, H, K, V = *k.shape, do.shape[-1]
+ BT = min(chunk_size, triton.next_power_of_2(T))
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+ NT = len(indices)
+ BK = min(triton.next_power_of_2(K), 64)
+ BV = min(triton.next_power_of_2(V), 64)
+ NV = triton.cdiv(V, BV)
+
+ dv = torch.empty_like(do)
+ grid = (NV, NT, B * H)
+ chunk_simple_gla_bwd_kernel_dv[grid](
+ q,
+ k,
+ g,
+ do,
+ dv,
+ dh,
+ offsets,
+ indices,
+ scale,
+ T=T,
+ H=H,
+ K=K,
+ V=V,
+ BT=BT,
+ BK=BK,
+ BV=BV,
+ HEAD_FIRST=head_first
+ )
+ return dv
+
+
+def chunk_simple_gla_fwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ scale: float,
+ initial_state: torch.Tensor,
+ output_final_state: bool,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
+ g = chunk_local_cumsum(g, chunk_size, offsets=offsets, head_first=head_first)
+ h, ht = chunk_fwd_h(
+ k=k,
+ v=v,
+ g=g,
+ gk=None,
+ gv=None,
+ h0=initial_state,
+ output_final_state=output_final_state,
+ states_in_fp32=False,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ o = chunk_simple_gla_fwd_o(
+ q=q,
+ k=k,
+ v=v,
+ g=g,
+ h=h,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ return g, o, ht
+
+
+def chunk_simple_gla_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ initial_state: torch.Tensor,
+ do: torch.Tensor,
+ dht: torch.Tensor,
+ scale: float,
+ offsets: Optional[torch.LongTensor] = None,
+ indices: Optional[torch.LongTensor] = None,
+ head_first: bool = True,
+ chunk_size: int = 64
+) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
+ # (SY 09/22) states_in_fp32 seems not affecting the error of dg but for safety, set to True
+ h, _ = chunk_fwd_h(
+ k=k,
+ v=v,
+ g=g,
+ gk=None,
+ gv=None,
+ h0=initial_state,
+ output_final_state=False,
+ states_in_fp32=True,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ dh, dh0 = chunk_bwd_dh(
+ q=q,
+ k=k,
+ v=v,
+ g=g,
+ gk=None,
+ gv=None,
+ do=do,
+ h0=initial_state,
+ dht=dht,
+ scale=scale,
+ states_in_fp32=True,
+ offsets=offsets,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ dq, dk, dg = chunk_simple_gla_bwd_dqkg(
+ q=q,
+ k=k,
+ v=v,
+ g=g,
+ h=h,
+ do=do,
+ dh=dh,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ dv = chunk_simple_gla_bwd_dv(
+ q=q,
+ k=k,
+ g=g,
+ do=do,
+ dh=dh,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ return dq, dk, dv, dg, dh0
+
+
+class ChunkSimpleGLAFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(
+ ctx,
+ q,
+ k,
+ v,
+ g,
+ scale,
+ initial_state,
+ output_final_state,
+ offsets,
+ head_first
+ ):
+ T = q.shape[2] if head_first else q.shape[1]
+ chunk_size = min(64, triton.next_power_of_2(T))
+
+ # 2-d indices denoting the offsets of chunks in each sequence
+ # for example, if the passed `offsets` is [0, 100, 356] and `chunk_size` is 64,
+ # then there are 2 and 4 chunks in the 1st and 2nd sequences respectively, and `indices` will be
+ # [[0, 0], [0, 1], [1, 0], [1, 1], [1, 2], [1, 3]]
+ indices = None
+ if offsets is not None:
+ indices = torch.cat([torch.arange(n) for n in triton.cdiv(offsets[1:] - offsets[:-1], chunk_size).tolist()])
+ indices = torch.stack([indices.eq(0).cumsum(0) - 1, indices], 1).to(offsets)
+
+ g, o, ht = chunk_simple_gla_fwd(
+ q=q,
+ k=k,
+ v=v,
+ g=g,
+ scale=scale,
+ initial_state=initial_state,
+ output_final_state=output_final_state,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ ctx.save_for_backward(q, k, v, g, initial_state)
+ ctx.chunk_size = chunk_size
+ ctx.scale = scale
+ ctx.offsets = offsets
+ ctx.indices = indices
+ ctx.head_first = head_first
+ return o.to(q.dtype), ht
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, dht):
+ chunk_size, scale, offsets, indices, head_first = ctx.chunk_size, ctx.scale, ctx.offsets, ctx.indices, ctx.head_first
+ q, k, v, g, initial_state = ctx.saved_tensors
+ dq, dk, dv, dg, dh0 = chunk_simple_gla_bwd(
+ q=q,
+ k=k,
+ v=v,
+ g=g,
+ initial_state=initial_state,
+ do=do,
+ dht=dht,
+ scale=scale,
+ offsets=offsets,
+ indices=indices,
+ head_first=head_first,
+ chunk_size=chunk_size
+ )
+ return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), dg.to(g.dtype), None, dh0, None, None, None
+
+
+def chunk_simple_gla(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor, # log decay
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ g (torch.Tensor):
+ Forget gates of shape `[B, H, T]` if `head_first=True` else `[B, T, H]`.
+ Compared to GLA, the gating is head-wise instead of elementwise.
+        scale (Optional[float]):
+ Scale factor for the attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[N, H, K, V]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[N, H, K, V]`. Default: `False`.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format, which is not supported for variable-length inputs.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (torch.Tensor):
+ Final state of shape `[N, H, K, V]` if `output_final_state=True` else `None`.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+ >>> from fla.ops.simple_gla import chunk_simple_gla
+ # inputs with equal lengths
+ >>> B, T, H, K, V = 4, 2048, 4, 512, 512
+ >>> q = torch.randn(B, T, H, K, device='cuda')
+ >>> k = torch.randn(B, T, H, K, device='cuda')
+ >>> v = torch.randn(B, T, H, V, device='cuda')
+ >>> g = F.logsigmoid(torch.randn(B, T, H, device='cuda'))
+ >>> o, ht = chunk_simple_gla(q, k, v, g,
+ initial_state=None,
+ output_final_state=True,
+ head_first=False)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+ >>> q, k, v, g = map(lambda x: rearrange(x, 'b t ... -> 1 (b t) ...'), (q, k, v, g))
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, ht_var = chunk_simple_gla(q, k, v, g,
+ initial_state=None,
+ output_final_state=True,
+ offsets=offsets,
+ head_first=False)
+ >>> assert o.allclose(o_var.view(o.shape))
+ >>> assert ht.allclose(ht_var)
+ """
+ if offsets is not None:
+ if q.shape[0] != 1:
+            raise ValueError(f"The batch size is expected to be 1 rather than {q.shape[0]} when using `offsets`. "
+                             f"Please flatten variable-length inputs before processing.")
+ if head_first:
+ raise RuntimeError("Sequences with variable lengths are not supported for head-first mode")
+ if initial_state is not None and initial_state.shape[0] != len(offsets) - 1:
+ raise ValueError(f"The number of initial states is expected to be equal to the number of input sequences, "
+ f"i.e., {len(offsets) - 1} rather than {initial_state.shape[0]}.")
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ o, final_state = ChunkSimpleGLAFunction.apply(
+ q,
+ k,
+ v,
+ g,
+ scale,
+ initial_state,
+ output_final_state,
+ offsets,
+ head_first
+ )
+ return o, final_state
diff --git a/fla/ops/simple_gla/fused_recurrent.py b/fla/ops/simple_gla/fused_recurrent.py
new file mode 100644
index 0000000000000000000000000000000000000000..a155609360f0835a91d78e79113e711284c2277a
--- /dev/null
+++ b/fla/ops/simple_gla/fused_recurrent.py
@@ -0,0 +1,112 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+
+from fla.ops.common.fused_recurrent import fused_recurrent
+
+
+def fused_recurrent_simple_gla(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ scale: Optional[float] = None,
+ initial_state: Optional[torch.Tensor] = None,
+ output_final_state: bool = False,
+ reverse: bool = False,
+ offsets: Optional[torch.LongTensor] = None,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+ q (torch.Tensor):
+ queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ k (torch.Tensor):
+ keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+ v (torch.Tensor):
+ values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ g (torch.Tensor):
+ Forget gates of shape `[B, H, T]` if `head_first=True` else `[B, T, H]`.
+ Compared to GLA, the gating is head-wise instead of elementwise.
+        scale (Optional[float]):
+ Scale factor for the attention scores.
+ If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+ initial_state (Optional[torch.Tensor]):
+ Initial state of shape `[N, H, K, V]` for `N` input sequences.
+ For equal-length input sequences, `N` equals the batch size `B`.
+ Default: `None`.
+ output_final_state (Optional[bool]):
+ Whether to output the final state of shape `[N, H, K, V]`. Default: `False`.
+ reverse (Optional[bool]):
+ If `True`, process the state passing in reverse order. Default: `False`.
+ offsets (Optional[torch.LongTensor]):
+ Offsets of shape `[N+1]` defining the bos/eos positions of `N` variable-length sequences in the batch.
+ For example,
+ if `offsets` is `[0, 1, 3, 6, 10, 15]`, there are `N=5` sequences with lengths 1, 2, 3, 4 and 5 respectively.
+ If provided, the inputs are concatenated and the batch size `B` is expected to be 1.
+ Default: `None`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format, which is not supported for variable-length inputs.
+ Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ final_state (torch.Tensor):
+ Final state of shape `[N, H, K, V]` if `output_final_state=True` else `None`.
+
+ Examples::
+ >>> import torch
+ >>> import torch.nn.functional as F
+ >>> from einops import rearrange
+ >>> from fla.ops.simple_gla import fused_recurrent_simple_gla
+ # inputs with equal lengths
+ >>> B, T, H, K, V = 4, 2048, 4, 512, 512
+ >>> q = torch.randn(B, T, H, K, device='cuda')
+ >>> k = torch.randn(B, T, H, K, device='cuda')
+ >>> v = torch.randn(B, T, H, V, device='cuda')
+        >>> g = F.logsigmoid(torch.randn(B, T, H, device='cuda'))
+ >>> h0 = torch.randn(B, H, K, V, device='cuda')
+ >>> o, ht = fused_recurrent_simple_gla(q, k, v, g,
+ initial_state=h0,
+ output_final_state=True,
+ head_first=False)
+ # for variable-length inputs, the batch size `B` is expected to be 1 and `offsets` is required
+        >>> q, k, v = map(lambda x: rearrange(x, 'b t h d -> 1 (b t) h d'), (q, k, v))
+        >>> g = rearrange(g, 'b t h -> 1 (b t) h')
+ # for a batch with 4 sequences, offsets with 5 start/end positions are expected
+ >>> offsets = q.new_tensor([0, 2048, 4096, 6144, 8192], dtype=torch.long)
+ >>> o_var, ht_var = fused_recurrent_simple_gla(q, k, v, g,
+ initial_state=h0,
+ output_final_state=True,
+ offsets=offsets,
+ head_first=False)
+ >>> assert o.allclose(o_var.view(o.shape))
+ >>> assert ht.allclose(ht_var)
+ """
+ if offsets is not None:
+ if q.shape[0] != 1:
+ raise ValueError(f"The batch size is expected to be 1 rather than {q.shape[0]} when using `offsets`."
+ f"Please flatten variable-length inputs before processing.")
+ if head_first:
+ raise RuntimeError("Sequences with variable lengths are not supported for head-first mode")
+ if initial_state is not None and initial_state.shape[0] != len(offsets) - 1:
+ raise ValueError(f"The number of initial states is expected to be equal to the number of input sequences, "
+ f"i.e., {len(offsets) - 1} rather than {initial_state.shape[0]}.")
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ o, final_state = fused_recurrent(
+ q=q,
+ k=k,
+ v=v,
+ g=g,
+ scale=scale,
+ initial_state=initial_state,
+ output_final_state=output_final_state,
+ reverse=reverse,
+ offsets=offsets,
+ head_first=head_first
+ )
+ return o, final_state
diff --git a/fla/ops/simple_gla/naive.py b/fla/ops/simple_gla/naive.py
new file mode 100644
index 0000000000000000000000000000000000000000..5fcc96ebeb720cc8b9699793ee6bdf8d3d39fdaa
--- /dev/null
+++ b/fla/ops/simple_gla/naive.py
@@ -0,0 +1,54 @@
+# -*- coding: utf-8 -*-
+
+import torch
+from einops import rearrange
+
+
+def torch_simple_gla(q, k, v, g, chunk_size=64, scale=None):
+ if scale is None:
+ scale = (q.shape[-1] ** -0.5)
+ q = rearrange(q, 'b h (n c) d -> b h n c d', c=chunk_size) * scale
+ k = rearrange(k, 'b h (n c) d -> b h n c d', c=chunk_size)
+ v = rearrange(v, 'b h (n c) d -> b h n c d', c=chunk_size)
+ g = rearrange(g, 'b h (n c) -> b h n c', c=chunk_size)
+ g = g.cumsum(-1)
+ kv = k.transpose(-1, -2) @ (v * (-g + g[:, :, :, -1, None]).exp()[..., None])
+ S = torch.zeros_like(kv)
+
+ for i in range(1, g.shape[-2]):
+ S[:, :, i] = S[:, :, i-1].clone() * g[:, :, i-1, -1, None, None].exp() + kv[:, :, i-1]
+
+ inter = (q * g[..., None].exp()) @ S
+ attn = q @ k.transpose(-1, -2)
+ attn = attn * (g[..., None] - g[..., None, :]).exp()
+ attn = attn.masked_fill(torch.triu(torch.ones(chunk_size, chunk_size, dtype=bool, device=q.device), diagonal=1), 0)
+ intra = attn @ v
+ o = inter + intra
+ return rearrange(o, 'b h n c d -> b h (n c) d')
+
+
+def torch_simple_gla_recurrent(q, k, v, g, scale=None, initial_state=None, output_final_state=True):
+ B, H, T, DK = q.shape
+ original_dtype = q.dtype
+ q, k, v, g = q.float(), k.float(), v.float(), g.float()
+ if scale is None:
+ scale = DK ** -0.5
+ q = q * scale
+ _, _, _, DV = v.shape
+    if initial_state is None:
+        S = torch.zeros(B, H, DK, DV, dtype=torch.float, device=q.device)
+    else:
+        S = initial_state.float()
+ o = torch.zeros(B, H, T, DV).to(q)
+ for i in range(T):
+ gate = g[:, :, i].exp()
+ key = k[:, :, i]
+ value = v[:, :, i]
+ kv = key.unsqueeze(-1) * value.unsqueeze(-2)
+ S = S.clone() * gate.unsqueeze(-1).unsqueeze(-1) + kv
+ q_i = q[:, :, i, :]
+ o_i = (q_i.unsqueeze(-1) * S).sum(-2)
+ o[:, :, i] = o_i
+ if not output_final_state:
+ S = None
+ return o.to(original_dtype), S
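+
+
+if __name__ == '__main__':
+    # A minimal consistency-check sketch: the chunked and recurrent reference
+    # implementations above are expected to agree when the gates are given in
+    # log space and `T` is a multiple of `chunk_size`.
+    B, H, T, D = 2, 2, 128, 32
+    q, k, v = (torch.randn(B, H, T, D) for _ in range(3))
+    g = torch.nn.functional.logsigmoid(torch.randn(B, H, T))
+    o_chunk = torch_simple_gla(q, k, v, g, chunk_size=64)
+    o_rec, _ = torch_simple_gla_recurrent(q, k, v, g)
+    assert torch.allclose(o_chunk, o_rec, atol=1e-4)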
diff --git a/fla/ops/simple_gla/parallel.py b/fla/ops/simple_gla/parallel.py
new file mode 100644
index 0000000000000000000000000000000000000000..00c21469677f151fdfa4c9777288615c18bfad30
--- /dev/null
+++ b/fla/ops/simple_gla/parallel.py
@@ -0,0 +1,607 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2024, Songlin Yang, Yu Zhang
+
+from typing import Optional, Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.ops.utils import chunk_global_cumsum, chunk_local_cumsum
+from fla.utils import autocast_custom_bwd, autocast_custom_fwd, contiguous
+
+
+@triton.heuristics({
+ 'NV': lambda args: triton.cdiv(args['V'], args['BV']),
+ 'OUTPUT_ATTENTIONS': lambda args: args['attn'] is not None
+})
+@triton.jit
+def parallel_simple_gla_fwd_kernel(
+ q,
+ k,
+ v,
+ g,
+ o,
+ attn,
+ s_k_h,
+ s_k_t,
+ s_v_h,
+ s_v_t,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NV: tl.constexpr,
+ OUTPUT_ATTENTIONS: tl.constexpr
+):
+ i_kv, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_k, i_v = i_kv // NV, i_kv % NV
+
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ if OUTPUT_ATTENTIONS:
+ p_a = tl.make_block_ptr(attn + (i_k * B * H + i_bh) * T * T, (T, T), (T, 1), (i_t * BT, 0), (BT, BS), (1, 0))
+
+ # the Q block is kept in the shared memory throughout the whole kernel
+ # [BT, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_q = (b_q * scale).to(b_q.dtype)
+
+ b_o = tl.zeros([BT, BV], dtype=tl.float32)
+ # Q block and K block have no overlap
+ # no need for mask, thereby saving flops
+ for i_s in range(0, i_t * BT, BS):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (1, s_k_t), (i_k * BK, i_s), (BK, BS), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, 1), (i_s, i_v * BV), (BS, BV), (1, 0))
+ p_g = tl.make_block_ptr(g + i_bh * T, (T,), (1,), (i_s,), (BS,), (0,))
+ # [BK, BS]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BS, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BS,]
+ b_g = tl.load(p_g, boundary_check=(0,))
+
+ b_gn = tl.load(g + i_bh * T + min(i_s + BS, T) - 1)
+ b_gp = tl.load(g + i_bh * T + i_s - 1) if i_s % BT > 0 else 0.
+
+ b_kg = (b_k * tl.exp(b_gn - b_g)).to(b_k.dtype)
+ # [BT, BS]
+ b_s = tl.dot(b_q, b_kg, allow_tf32=False)
+ # do this check to avoid some layout bugs
+    # [BT, BV]
+ if i_s > 0:
+ b_o = b_o * tl.exp(b_gn - b_gp)
+ b_o += tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)
+
+ if OUTPUT_ATTENTIONS:
+ tl.store(p_a, b_s.to(p_a.dtype.element_ty), boundary_check=(0, 1))
+ p_a = tl.advance(p_a, (0, BS))
+
+ tl.debug_barrier()
+
+ p_g = tl.make_block_ptr(g + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ # [BT,]
+ b_gq = tl.load(p_g, boundary_check=(0,))
+ # rescale interchunk output
+ b_o *= tl.exp(b_gq)[:, None]
+
+ if OUTPUT_ATTENTIONS:
+ p_a = tl.make_block_ptr(attn + (i_k * B * H + i_bh) * T * T, (T, T), (T, 1), (i_t * BT, i_t * BT), (BT, BS), (1, 0))
+
+ # [BT]
+ o_q = i_t * BT + tl.arange(0, BT)
+ # [BS]
+ o_k = i_t * BT + tl.arange(0, BS)
+ # Q block and K block have overlap.
+ # masks required
+ for i_s in range(i_t * BT, min((i_t + 1) * BT, T), BS):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (1, s_k_t), (i_k * BK, i_s), (BK, BS), (0, 1))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, 1), (i_s, i_v * BV), (BS, BV), (1, 0))
+ p_gk = tl.make_block_ptr(g + i_bh * T, (T,), (1,), (i_s,), (BS,), (0,))
+ # [BK, BS]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BS, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BS,]
+ b_gk = tl.load(p_gk, boundary_check=(0,))
+ # [BT, BS]
+ m_s = o_q[:, None] >= o_k[None, :]
+ b_s = tl.where(m_s, tl.dot(b_q, b_k, allow_tf32=False) * tl.exp(b_gq[:, None] - b_gk[None, :]), 0)
+ # [BT, BV]
+ b_o += tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False)
+
+ if OUTPUT_ATTENTIONS:
+ tl.store(p_a, b_s.to(p_a.dtype.element_ty), boundary_check=(0, 1))
+ p_a = tl.advance(p_a, (0, BS))
+ o_k += BS
+
+ p_o = tl.make_block_ptr(o + (i_bh + B * H * i_k) * s_v_h, (T, V), (s_v_t, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.jit
+def parallel_simple_gla_bwd_kernel_dq(
+ i_bh,
+ i_t,
+ i_k,
+ i_v,
+ q,
+ k,
+ v,
+ g,
+ do,
+ dq,
+ dg,
+ s_k_h,
+ s_k_t,
+ s_v_h,
+ s_v_t,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+):
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ # [BT, BV]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [BT, BK]
+ b_dq = tl.zeros([BT, BK], dtype=tl.float32)
+
+ for i_s in range(0, i_t * BT, BS):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, 1), (i_s, i_k * BK), (BS, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T), (1, s_v_t), (i_v * BV, i_s), (BV, BS), (0, 1))
+ p_g = tl.make_block_ptr(g + i_bh * T, (T,), (1,), (i_s,), (BS,), (0,))
+ # [BS, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BV, BS]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BS]
+ b_g = tl.load(p_g, boundary_check=(0,))
+
+ b_gn = tl.load(g + i_bh * T + min(i_s + BS, T) - 1)
+ b_gp = tl.load(g + i_bh * T + i_s - 1) if i_s % BT > 0 else 0.
+ # [BT, BS]
+ b_ds = tl.dot(b_do, b_v, allow_tf32=False) * tl.exp(b_gn - b_g)[None, :]
+ # [BT, BK]
+ if i_s > 0:
+ b_dq *= tl.exp(b_gn - b_gp)
+ b_dq += tl.dot(b_ds.to(b_v.dtype), b_k, allow_tf32=False)
+
+ p_gq = tl.make_block_ptr(g + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ # [BT,]
+ b_gq = tl.load(p_gq, boundary_check=(0,))
+ # [BT, BK]
+ b_dq *= tl.exp(b_gq)[:, None] * scale
+
+ # [BT]
+ o_q = i_t * BT + tl.arange(0, BT)
+ # [BS]
+ o_k = i_t * BT + tl.arange(0, BS)
+ # Q block and K block have overlap. masks required
+ for i_s in range(i_t * BT, min((i_t + 1) * BT, T), BS):
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, 1), (i_s, i_k * BK), (BS, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T), (1, s_v_t), (i_v * BV, i_s), (BV, BS), (0, 1))
+ p_gk = tl.make_block_ptr(g + i_bh * T, (T,), (1,), (i_s,), (BS,), (0,))
+ # [BS, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ # [BV, BS]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ # [BS]
+ b_gk = tl.load(p_gk, boundary_check=(0,))
+ # [BT, BS]
+ m_s = o_q[:, None] >= o_k[None, :]
+ b_ds = tl.where(m_s, tl.dot(b_do, b_v, allow_tf32=False) * tl.exp((b_gq[:, None] - b_gk[None, :])), 0) * scale
+ # [BT, BK]
+ b_dq += tl.dot(b_ds.to(b_k.dtype), b_k, allow_tf32=False)
+
+ o_k += BS
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dq = tl.make_block_ptr(dq + (i_v * B * H + i_bh) * s_k_h, (T, K), (s_k_t, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dg = tl.make_block_ptr(dg + (i_v * B * H + i_bh) * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ b_dg = tl.sum(b_dq * b_q, 1)
+ tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), boundary_check=(0,))
+
+
+@triton.jit
+def parallel_simple_gla_bwd_kernel_dkv(
+ i_bh,
+ i_t,
+ i_k,
+ i_v,
+ q,
+ k,
+ v,
+ g,
+ do,
+ dk,
+ dv,
+ dg,
+ s_k_h,
+ s_k_t,
+ s_v_h,
+ s_v_t,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+):
+ # compute dk dv
+ p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_gk = tl.make_block_ptr(g + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ # [BT, BK]
+ b_k = tl.load(p_k, boundary_check=(0, 1))
+ b_dk = tl.zeros([BT, BK], dtype=tl.float32)
+ # [BT, BV]
+ b_v = tl.load(p_v, boundary_check=(0, 1))
+ b_dv = tl.zeros([BT, BV], dtype=tl.float32)
+ # [BT,]
+ b_gk = tl.load(p_gk, boundary_check=(0,))
+
+ NTS = tl.cdiv(T, BS)
+ # [BT, BK]
+ b_kg = (b_k * tl.exp(tl.load(g + i_bh * T + min(i_t * BT + BT, T) - 1) - b_gk)[:, None]).to(b_k.dtype)
+
+ for i_s in range(NTS * BS - BS, (i_t + 1) * BT - BS, -BS):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, 1), (i_s, i_k * BK), (BS, BK), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, 1), (i_s, i_v * BV), (BS, BV), (1, 0))
+ p_gq = tl.make_block_ptr(g + i_bh * T, (T,), (1,), (i_s,), (BS,), (0,))
+ # [BS, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ # [BS,]
+ b_gq = tl.load(p_gq, boundary_check=(0,))
+
+ b_gp = tl.load(g + i_bh * T + min(i_s + BS, T) - 1)
+ b_gn = tl.load(g + i_bh * T + i_s - 1) if i_s % BT > 0 else 0.
+ # [BS, BV]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ b_do = (b_do * tl.exp(b_gq - b_gn)[:, None]).to(b_do.dtype)
+
+ # overall decay rate for an entire block
+ # [BS, BK]
+ b_dk *= tl.exp(b_gp - b_gn)
+ # [BS, BV]
+ b_dv *= tl.exp(b_gp - b_gn)
+ # [BT, BS]
+ b_ds = tl.dot(b_v, tl.trans(b_do), allow_tf32=False)
+ b_s = tl.dot(b_kg, tl.trans(b_q), allow_tf32=False)
+ # [BT, BK]
+ b_dk += tl.dot(b_ds.to(b_q.dtype), b_q, allow_tf32=False)
+ # [BT, BV]
+ b_dv += tl.dot(b_s.to(b_do.dtype), b_do, allow_tf32=False)
+
+ # [BT, BK]
+ b_dk *= tl.exp(tl.load(g + i_bh * T + min(T, i_t * BT + BT) - 1) - b_gk)[:, None] * scale
+ # [BT, BV]
+ b_dv *= scale
+
+ tl.debug_barrier()
+ o_q = i_t * BT + tl.arange(0, BS)
+ o_k = i_t * BT + tl.arange(0, BT)
+ for i_s in range(i_t * BT, min((i_t + 1) * BT, T), BS):
+ p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, 1), (i_s, i_k * BK), (BS, BK), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, 1), (i_s, i_v * BV), (BS, BV), (1, 0))
+ p_gq = tl.make_block_ptr(g + i_bh * T, (T,), (1,), (i_s,), (BS,), (0,))
+ # [BS, BK]
+ b_q = tl.load(p_q, boundary_check=(0, 1))
+ # [BS, BV]
+ b_do = tl.load(p_do, boundary_check=(0, 1))
+ # [BS]
+ b_gq = tl.load(p_gq, boundary_check=(0,))
+ # [BT, BS]
+ m_s = o_k[:, None] <= o_q[None, :]
+ d_s = tl.where(m_s, tl.exp(-b_gk[:, None] + b_gq[None, :]), 0) * scale
+
+ b_ds = tl.dot(b_v, tl.trans(b_do), allow_tf32=False) * d_s
+ b_s = tl.dot(b_k, tl.trans(b_q), allow_tf32=False) * d_s
+ # [BT, BK]
+ b_dk += tl.dot(b_ds.to(b_q.dtype), b_q, allow_tf32=False)
+ b_dv += tl.dot(b_s.to(b_q.dtype), b_do, allow_tf32=False)
+ o_q += BS
+ p_dk = tl.make_block_ptr(dk + (i_v * B * H + i_bh) * s_k_h, (T, K), (s_k_t, 1), (i_t * BT, i_k * BK), (BT, BK), (1, 0))
+ p_dv = tl.make_block_ptr(dv + (i_k * B * H + i_bh) * s_v_h, (T, V), (s_v_t, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dg = tl.make_block_ptr(dg + (i_v * B * H + i_bh) * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1))
+ tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1))
+
+ b_dg = tl.load(p_dg, boundary_check=(0,))
+ b_dg -= tl.sum(b_dk * b_k, 1)
+ tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), boundary_check=(0,))
+
+
+@triton.heuristics({
+ 'NV': lambda args: triton.cdiv(args['V'], args['BV'])
+})
+@triton.jit
+def parallel_simple_gla_bwd_kernel(
+ q,
+ k,
+ v,
+ g,
+ do,
+ dq,
+ dk,
+ dv,
+ dg,
+ s_k_h,
+ s_k_t,
+ s_v_h,
+ s_v_t,
+ scale,
+ B: tl.constexpr,
+ H: tl.constexpr,
+ T: tl.constexpr,
+ K: tl.constexpr,
+ V: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+ NV: tl.constexpr
+):
+ i_kv, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_k, i_v = i_kv // NV, i_kv % NV
+
+ parallel_simple_gla_bwd_kernel_dq(
+ i_bh,
+ i_t,
+ i_k,
+ i_v,
+ q,
+ k,
+ v,
+ g,
+ do,
+ dq,
+ dg,
+ s_k_h,
+ s_k_t,
+ s_v_h,
+ s_v_t,
+ scale,
+ B=B,
+ H=H,
+ T=T,
+ K=K,
+ V=V,
+ BT=BT,
+ BS=BS,
+ BK=BK,
+ BV=BV
+ )
+ tl.debug_barrier()
+ parallel_simple_gla_bwd_kernel_dkv(
+ i_bh,
+ i_t,
+ i_k,
+ i_v,
+ q,
+ k,
+ v,
+ g,
+ do,
+ dk,
+ dv,
+ dg,
+ s_k_h,
+ s_k_t,
+ s_v_h,
+ s_v_t,
+ scale,
+ B,
+ H,
+ T,
+ K,
+ V,
+ BT,
+ BS,
+ BK,
+ BV
+ )
+
+
+def parallel_simple_gla_fwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ scale: float,
+ output_attentions: bool = False,
+ chunk_size: int = 128
+):
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ BT, BS = chunk_size, 32
+ if torch.cuda.get_device_capability()[0] >= 9:
+ BK = min(256, triton.next_power_of_2(K))
+ BV = min(256, triton.next_power_of_2(V))
+ else:
+ BK = min(128, triton.next_power_of_2(K))
+ BV = min(128, triton.next_power_of_2(V))
+ NK = triton.cdiv(K, BK)
+ NV = triton.cdiv(V, BV)
+ assert BT % BS == 0
+
+ num_stages = 3 if K <= 64 else 2
+ num_warps = 4
+
+ # local cumulative decay in log space
+ g = chunk_local_cumsum(g, BT)
+
+ grid = (NK * NV, triton.cdiv(T, BT), B * H)
+ o = torch.empty(NK, B, H, T, V, dtype=q.dtype, device=q.device)
+ attn = q.new_zeros(NK, B, H, T, T) if output_attentions else None
+ parallel_simple_gla_fwd_kernel[grid](
+ q=q,
+ k=k,
+ v=v,
+ g=g,
+ o=o,
+ attn=attn,
+ s_k_h=k.stride(1),
+ s_k_t=k.stride(2),
+ s_v_h=v.stride(1),
+ s_v_t=v.stride(2),
+ scale=scale,
+ B=B,
+ H=H,
+ T=T,
+ K=K,
+ V=V,
+ BT=BT,
+ BS=BS,
+ BK=BK,
+ BV=BV,
+ num_stages=num_stages,
+ num_warps=num_warps
+ )
+ o = o.sum(0)
+ if output_attentions:
+ attn = attn.sum(0)
+ return o, g, attn
+
+
+def parallel_simple_gla_bwd(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+ do: torch.Tensor,
+ scale: float,
+ chunk_size: int = 128
+):
+ B, H, T, K, V = *k.shape, v.shape[-1]
+ BT, BS = chunk_size, 32
+ BK = min(128, triton.next_power_of_2(k.shape[-1]))
+ BV = min(128, triton.next_power_of_2(v.shape[-1]))
+ NK = triton.cdiv(K, BK)
+ NV = triton.cdiv(V, BV)
+ assert BT % BS == 0
+
+ num_stages = 3 if K <= 64 else 2
+ num_warps = 4
+
+ dq = torch.empty(NV, B, H, T, K, dtype=q.dtype, device=q.device)
+ dk = torch.empty(NV, B, H, T, K, dtype=q.dtype, device=q.device)
+ dv = torch.empty(NK, B, H, T, V, dtype=q.dtype, device=q.device)
+ dg = torch.empty(NV, B, H, T, dtype=torch.float, device=q.device)
+ grid = (NK * NV, triton.cdiv(T, BT), B * H)
+ parallel_simple_gla_bwd_kernel[grid](
+ q=q,
+ k=k,
+ v=v,
+ g=g,
+ do=do,
+ dq=dq,
+ dk=dk,
+ dv=dv,
+ dg=dg,
+ s_k_h=k.stride(1),
+ s_k_t=k.stride(2),
+ s_v_h=v.stride(1),
+ s_v_t=v.stride(2),
+ scale=scale,
+ B=B,
+ H=H,
+ T=T,
+ K=K,
+ V=V,
+ BT=BT,
+ BS=BS,
+ BK=BK,
+ BV=BV,
+ num_stages=num_stages,
+ num_warps=num_warps
+ )
+ dq = dq.sum(0)
+ dk = dk.sum(0)
+ dv = dv.sum(0)
+ dg = dg.sum(0)
+ dg = chunk_global_cumsum(dg, reverse=True)
+ return dq, dk, dv, dg
+
+
+class ParallelSimpleGLAFunction(torch.autograd.Function):
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_fwd
+ def forward(ctx, q, k, v, g, scale, output_attentions):
+ BT = 128
+ ctx.dtype = g.dtype
+ o, g, attn = parallel_simple_gla_fwd(q, k, v, g, scale, output_attentions, BT)
+ ctx.save_for_backward(q, k, v, g)
+ ctx.scale = scale
+ ctx.BT = BT
+ return o.to(q.dtype), attn
+
+ @staticmethod
+ @contiguous
+ @autocast_custom_bwd
+ def backward(ctx, do, da=None):
+ q, k, v, g = ctx.saved_tensors
+ dq, dk, dv, dg = parallel_simple_gla_bwd(q, k, v, g, do, ctx.scale, ctx.BT)
+ return dq.to(q), dk.to(k), dv.to(v), dg.to(ctx.dtype), None, None
+
+
+def parallel_simple_gla(
+ q: torch.Tensor,
+ k: torch.Tensor,
+ v: torch.Tensor,
+ g: torch.Tensor,
+    scale: Optional[float] = None,
+ output_attentions: bool = False,
+ head_first: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+ r"""
+ Args:
+        q (torch.Tensor):
+            queries of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+        k (torch.Tensor):
+            keys of shape `[B, H, T, K]` if `head_first=True` else `[B, T, H, K]`.
+        v (torch.Tensor):
+            values of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+        g (torch.Tensor):
+            Forget gates of shape `[B, H, T]` if `head_first=True` else `[B, T, H]`, applied to keys.
+            Compared to GLA, the gating is head-wise instead of elementwise.
+        scale (Optional[float]):
+            Scale factor for attention scores.
+            If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+        output_attentions (bool):
+            Whether to output the materialized attention scores of shape `[B, H, T, T]`. Default: `False`.
+ head_first (Optional[bool]):
+ Whether the inputs are in the head-first format. Default: `True`.
+
+ Returns:
+ o (torch.Tensor):
+ Outputs of shape `[B, H, T, V]` if `head_first=True` else `[B, T, H, V]`.
+ attn (torch.Tensor):
+ Attention scores of shape `[B, H, T, T]` if `output_attentions=True` else `None`
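+
+    Examples::
+        >>> # a minimal usage sketch (illustrative); assumes a CUDA device with Triton available
+        >>> import torch
+        >>> import torch.nn.functional as F
+        >>> from fla.ops.simple_gla.parallel import parallel_simple_gla
+        >>> B, H, T, K, V = 2, 4, 256, 64, 64
+        >>> q = torch.randn(B, H, T, K, device='cuda')
+        >>> k = torch.randn(B, H, T, K, device='cuda')
+        >>> v = torch.randn(B, H, T, V, device='cuda')
+        >>> g = F.logsigmoid(torch.randn(B, H, T, device='cuda'))
+        >>> o, attn = parallel_simple_gla(q, k, v, g, output_attentions=True)
+        >>> o.shape, attn.shape  # (B, H, T, V) and (B, H, T, T)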
+ """
+ if scale is None:
+ scale = k.shape[-1] ** -0.5
+ if not head_first:
+ q, k, v, g = map(lambda x: x.transpose(1, 2) if x is not None else None, (q, k, v, g))
+ o, attn = ParallelSimpleGLAFunction.apply(q, k, v, g, scale, output_attentions)
+ if not head_first:
+ o = o.transpose(1, 2).contiguous()
+ return o, attn
diff --git a/fla/ops/utils/__init__.py b/fla/ops/utils/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..383a8dc9c9b184b52c8d17873ed47b20b7c274fb
--- /dev/null
+++ b/fla/ops/utils/__init__.py
@@ -0,0 +1,38 @@
+# -*- coding: utf-8 -*-
+
+from .cumsum import (chunk_global_cumsum, chunk_global_cumsum_scalar,
+ chunk_global_cumsum_scalar_kernel,
+ chunk_global_cumsum_vector,
+ chunk_global_cumsum_vector_kernel, chunk_local_cumsum,
+ chunk_local_cumsum_scalar,
+ chunk_local_cumsum_scalar_kernel,
+ chunk_local_cumsum_vector,
+ chunk_local_cumsum_vector_kernel)
+from .logcumsumexp import logcumsumexp_fwd_kernel
+from .logsumexp import logsumexp_fwd, logsumexp_fwd_kernel
+from .matmul import addmm, matmul, matmul_kernel
+from .softmax import (softmax_bwd, softmax_bwd_kernel, softmax_fwd,
+ softmax_fwd_kernel)
+
+__all__ = [
+ 'chunk_global_cumsum',
+ 'chunk_global_cumsum_scalar',
+ 'chunk_global_cumsum_scalar_kernel',
+ 'chunk_global_cumsum_vector',
+ 'chunk_global_cumsum_vector_kernel',
+ 'chunk_local_cumsum',
+ 'chunk_local_cumsum_scalar',
+ 'chunk_local_cumsum_scalar_kernel',
+ 'chunk_local_cumsum_vector',
+ 'chunk_local_cumsum_vector_kernel',
+ 'logcumsumexp_fwd_kernel',
+ 'logsumexp_fwd',
+ 'logsumexp_fwd_kernel',
+ 'addmm',
+ 'matmul',
+ 'matmul_kernel',
+ 'softmax_bwd',
+ 'softmax_bwd_kernel',
+ 'softmax_fwd',
+ 'softmax_fwd_kernel',
+]
diff --git a/fla/ops/utils/cumsum.py b/fla/ops/utils/cumsum.py
new file mode 100644
index 0000000000000000000000000000000000000000..efdea0c52abf93ed7c8214579d91bb8b78af1ab1
--- /dev/null
+++ b/fla/ops/utils/cumsum.py
@@ -0,0 +1,632 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2023-2024, Songlin Yang, Yu Zhang
+
+from typing import Optional
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8)
+ ],
+ key=['BT']
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_local_cumsum_scalar_kernel(
+ s,
+ o,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ BT: tl.constexpr,
+ HEAD_FIRST: tl.constexpr,
+ USE_OFFSETS: tl.constexpr
+):
+ i_t, i_bh = tl.program_id(0), tl.program_id(1)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ if HEAD_FIRST:
+ p_s = tl.make_block_ptr(s + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ p_o = tl.make_block_ptr(o + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ else:
+ p_s = tl.make_block_ptr(s + bos*H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ p_o = tl.make_block_ptr(o + bos*H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ # [BT]
+ b_s = tl.load(p_s, boundary_check=(0,)).to(tl.float32)
+ b_o = tl.cumsum(b_s, axis=0)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0,))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8)
+ ],
+ key=['BT']
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_local_reversed_cumsum_scalar_kernel(
+ s,
+ o,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ BT: tl.constexpr,
+ HEAD_FIRST: tl.constexpr,
+ USE_OFFSETS: tl.constexpr
+):
+ i_t, i_bh = tl.program_id(0), tl.program_id(1)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ if HEAD_FIRST:
+ p_s = tl.make_block_ptr(s + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ p_o = tl.make_block_ptr(o + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ else:
+ p_s = tl.make_block_ptr(s + bos*H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ p_o = tl.make_block_ptr(o + bos*H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ # [BT]
+ b_s = tl.load(p_s, boundary_check=(0,)).to(tl.float32)
+ b_z = tl.sum(b_s, axis=0)
+ b_o = b_z[None] - tl.cumsum(b_s, axis=0) + b_s
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0,))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({'BS': 16}, num_warps=2),
+ triton.Config({'BS': 16}, num_warps=4),
+ triton.Config({'BS': 16}, num_warps=8),
+ triton.Config({'BS': 32}, num_warps=2),
+ triton.Config({'BS': 32}, num_warps=4),
+ triton.Config({'BS': 32}, num_warps=8),
+ triton.Config({'BS': 64}, num_warps=2),
+ triton.Config({'BS': 64}, num_warps=4),
+ triton.Config({'BS': 64}, num_warps=8),
+ ],
+ key=['S', 'BT']
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_local_cumsum_vector_kernel(
+ s,
+ o,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ S: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr,
+ USE_OFFSETS: tl.constexpr
+):
+ i_s, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ o_i = tl.arange(0, BT)
+ m_s = tl.where(o_i[:, None] >= o_i[None, :], 1., 0.)
+
+ if HEAD_FIRST:
+ p_s = tl.make_block_ptr(s + i_bh * T*S, (T, S), (S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ p_o = tl.make_block_ptr(o + i_bh * T*S, (T, S), (S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ else:
+ p_s = tl.make_block_ptr(s + (bos * H + i_h) * S, (T, S), (H*S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ p_o = tl.make_block_ptr(o + (bos * H + i_h) * S, (T, S), (H*S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ # [BT, BS]
+ b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32)
+ b_o = tl.dot(m_s, b_s, allow_tf32=False)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({'BS': 16}, num_warps=2),
+ triton.Config({'BS': 16}, num_warps=4),
+ triton.Config({'BS': 16}, num_warps=8),
+ triton.Config({'BS': 32}, num_warps=2),
+ triton.Config({'BS': 32}, num_warps=4),
+ triton.Config({'BS': 32}, num_warps=8),
+ triton.Config({'BS': 64}, num_warps=2),
+ triton.Config({'BS': 64}, num_warps=4),
+ triton.Config({'BS': 64}, num_warps=8),
+ ],
+ key=['S', 'BT']
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_local_reversed_cumsum_vector_kernel(
+ s,
+ o,
+ offsets,
+ indices,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ S: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr,
+ USE_OFFSETS: tl.constexpr
+):
+ i_s, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ i_n, i_t = tl.load(indices + i_t * 2).to(tl.int32), tl.load(indices + i_t * 2 + 1).to(tl.int32)
+ bos, eos = tl.load(offsets + i_n).to(tl.int32), tl.load(offsets + i_n + 1).to(tl.int32)
+ T = eos - bos
+ else:
+ bos, eos = i_b * T, i_b * T + T
+
+ o_i = tl.arange(0, BT)
+ m_s = tl.where(o_i[:, None] <= o_i[None, :], 1., 0.)
+
+ if HEAD_FIRST:
+ p_s = tl.make_block_ptr(s + i_bh * T*S, (T, S), (S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ p_o = tl.make_block_ptr(o + i_bh * T*S, (T, S), (S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ else:
+ p_s = tl.make_block_ptr(s + (bos * H + i_h) * S, (T, S), (H*S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ p_o = tl.make_block_ptr(o + (bos * H + i_h) * S, (T, S), (H*S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ # [BT, BS]
+ b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32)
+ b_o = tl.dot(m_s, b_s, allow_tf32=False)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({'BT': 16}, num_warps=2),
+ triton.Config({'BT': 32}, num_warps=4),
+ triton.Config({'BT': 32}, num_warps=2),
+ triton.Config({'BT': 64}, num_warps=8),
+ triton.Config({'BT': 64}, num_warps=4),
+ ],
+ key=[]
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_global_cumsum_scalar_kernel(
+ s,
+ o,
+ offsets,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ BT: tl.constexpr,
+ HEAD_FIRST: tl.constexpr,
+ USE_OFFSETS: tl.constexpr
+):
+ i_bh = tl.program_id(0)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_b).to(tl.int32), tl.load(offsets + i_b + 1).to(tl.int32)
+ else:
+ bos, eos = i_b * T, i_b * T + T
+ T = eos - bos
+
+ b_z = tl.zeros([], dtype=tl.float32)
+ for i_t in range(tl.cdiv(T, BT)):
+ if HEAD_FIRST:
+ p_s = tl.make_block_ptr(s + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ p_o = tl.make_block_ptr(o + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ else:
+ p_s = tl.make_block_ptr(s + bos*H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ p_o = tl.make_block_ptr(o + bos*H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ b_s = tl.load(p_s, boundary_check=(0,)).to(tl.float32)
+ b_o = tl.cumsum(b_s, axis=0) + b_z[None]
+ b_z += tl.sum(b_s, axis=0)
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0,))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({'BT': 16}, num_warps=2),
+ triton.Config({'BT': 32}, num_warps=4),
+ triton.Config({'BT': 32}, num_warps=2),
+ triton.Config({'BT': 64}, num_warps=8),
+ triton.Config({'BT': 64}, num_warps=4),
+ ],
+ key=[]
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_global_reversed_cumsum_scalar_kernel(
+ s,
+ o,
+ offsets,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ BT: tl.constexpr,
+ HEAD_FIRST: tl.constexpr,
+ USE_OFFSETS: tl.constexpr
+):
+ i_bh = tl.program_id(0)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_b).to(tl.int32), tl.load(offsets + i_b + 1).to(tl.int32)
+ else:
+ bos, eos = i_b * T, i_b * T + T
+ T = eos - bos
+
+ b_z = tl.zeros([], dtype=tl.float32)
+ for i_t in range(tl.cdiv(T, BT) - 1, -1, -1):
+ if HEAD_FIRST:
+ p_s = tl.make_block_ptr(s + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ p_o = tl.make_block_ptr(o + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,))
+ else:
+ p_s = tl.make_block_ptr(s + bos*H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ p_o = tl.make_block_ptr(o + bos*H + i_h, (T,), (H,), (i_t * BT,), (BT,), (0,))
+ b_s = tl.load(p_s, boundary_check=(0,)).to(tl.float32)
+ b_zz = tl.sum(b_s, axis=0)
+ b_z += b_zz
+ b_o = b_s - tl.cumsum(b_s, axis=0) + b_z[None]
+ tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0,))
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({'BT': 16}, num_warps=2),
+ triton.Config({'BT': 16}, num_warps=4),
+ triton.Config({'BT': 16}, num_warps=8),
+ triton.Config({'BT': 32}, num_warps=2),
+ triton.Config({'BT': 32}, num_warps=4),
+ triton.Config({'BT': 32}, num_warps=8),
+ triton.Config({'BT': 64}, num_warps=2),
+ triton.Config({'BT': 64}, num_warps=4),
+ triton.Config({'BT': 64}, num_warps=8),
+ ],
+ key=['S']
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_global_cumsum_vector_kernel(
+ s,
+ z,
+ offsets,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ S: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr,
+ USE_OFFSETS: tl.constexpr
+):
+ i_s, i_bh = tl.program_id(0), tl.program_id(1)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_b).to(tl.int32), tl.load(offsets + i_b + 1).to(tl.int32)
+ else:
+ bos, eos = i_b * T, i_b * T + T
+ T = eos - bos
+
+ o_i = tl.arange(0, BT)
+ m_s = tl.where(o_i[:, None] >= o_i[None, :], 1., 0.)
+
+ b_z = tl.zeros([BS], dtype=tl.float32)
+ for i_t in range(tl.cdiv(T, BT)):
+ if HEAD_FIRST:
+ p_s = tl.make_block_ptr(s + i_bh * T*S, (T, S), (S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ p_z = tl.make_block_ptr(z + i_bh * T*S, (T, S), (S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ else:
+ p_s = tl.make_block_ptr(s + (bos * H + i_h) * S, (T, S), (H*S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ p_z = tl.make_block_ptr(z + (bos * H + i_h) * S, (T, S), (H*S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ # [BT, BS]
+ b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32)
+ b_c = b_z[None, :] + tl.dot(m_s, b_s, allow_tf32=False)
+ tl.store(p_z, b_c.to(p_z.dtype.element_ty), boundary_check=(0, 1))
+ if i_t >= 0:
+ b_z += tl.sum(b_s, 0)
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({'BT': 16}, num_warps=2),
+ triton.Config({'BT': 16}, num_warps=4),
+ triton.Config({'BT': 16}, num_warps=8),
+ triton.Config({'BT': 32}, num_warps=2),
+ triton.Config({'BT': 32}, num_warps=4),
+ triton.Config({'BT': 32}, num_warps=8),
+ triton.Config({'BT': 64}, num_warps=2),
+ triton.Config({'BT': 64}, num_warps=4),
+ triton.Config({'BT': 64}, num_warps=8),
+ ],
+ key=['S']
+)
+@triton.heuristics({'USE_OFFSETS': lambda args: args['offsets'] is not None})
+@triton.jit
+def chunk_global_reversed_cumsum_vector_kernel(
+ s,
+ z,
+ offsets,
+ T: tl.constexpr,
+ H: tl.constexpr,
+ S: tl.constexpr,
+ BT: tl.constexpr,
+ BS: tl.constexpr,
+ HEAD_FIRST: tl.constexpr,
+ USE_OFFSETS: tl.constexpr
+):
+ i_s, i_bh = tl.program_id(0), tl.program_id(1)
+ i_b, i_h = i_bh // H, i_bh % H
+ if USE_OFFSETS:
+ bos, eos = tl.load(offsets + i_b).to(tl.int32), tl.load(offsets + i_b + 1).to(tl.int32)
+ else:
+ bos, eos = i_b * T, i_b * T + T
+ T = eos - bos
+
+ o_i = tl.arange(0, BT)
+ m_s = tl.where(o_i[:, None] <= o_i[None, :], 1., 0.)
+
+ b_z = tl.zeros([BS], dtype=tl.float32)
+ for i_t in range(tl.cdiv(T, BT) - 1, -1, -1):
+ if HEAD_FIRST:
+ p_s = tl.make_block_ptr(s + i_bh * T*S, (T, S), (S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ p_z = tl.make_block_ptr(z + i_bh * T*S, (T, S), (S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ else:
+ p_s = tl.make_block_ptr(s + (bos * H + i_h) * S, (T, S), (H*S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ p_z = tl.make_block_ptr(z + (bos * H + i_h) * S, (T, S), (H*S, 1), (i_t * BT, i_s * BS), (BT, BS), (1, 0))
+ # [BT, BS]
+ b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32)
+ b_c = b_z[None, :] + tl.dot(m_s, b_s, allow_tf32=False)
+ tl.store(p_z, b_c.to(p_z.dtype.element_ty), boundary_check=(0, 1))
+
+ if i_t >= 0:
+ b_z += tl.sum(b_s, 0)
+
+
+def chunk_local_cumsum_scalar(
+ g: torch.Tensor,
+ chunk_size: int,
+ reverse: bool = False,
+ offsets: Optional[torch.Tensor] = None,
+ indices: Optional[torch.Tensor] = None,
+ head_first: bool = True
+) -> torch.Tensor:
+ if head_first:
+ B, H, T = g.shape
+ else:
+ B, T, H = g.shape
+ if offsets is not None:
+ B = len(offsets) - 1
+ BT = chunk_size
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([
+ torch.stack([offsets.new_full((n,), i), offsets.new_tensor(range(n))], 1)
+ for i, n in enumerate(triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist())
+ ])
+ NT = len(indices)
+ g_org, g = g, torch.empty_like(g, dtype=torch.float)
+ grid = (NT, B * H)
+ if reverse:
+ chunk_local_reversed_cumsum_scalar_kernel[grid](
+ g_org,
+ g,
+ offsets,
+ indices,
+ T=T,
+ H=H,
+ BT=BT,
+ HEAD_FIRST=head_first
+ )
+ else:
+ chunk_local_cumsum_scalar_kernel[grid](
+ g_org,
+ g,
+ offsets,
+ indices,
+ T=T,
+ H=H,
+ BT=BT,
+ HEAD_FIRST=head_first
+ )
+ return g
+
+
+def chunk_local_cumsum_vector(
+ g: torch.Tensor,
+ chunk_size: int,
+ reverse: bool = False,
+ offsets: Optional[torch.Tensor] = None,
+ indices: Optional[torch.Tensor] = None,
+ head_first: bool = True
+) -> torch.Tensor:
+ if head_first:
+ B, H, T, S = g.shape
+ else:
+ B, T, H, S = g.shape
+ BT = chunk_size
+ if offsets is None:
+ NT = triton.cdiv(T, BT)
+ else:
+ if indices is None:
+ indices = torch.cat([
+ torch.stack([offsets.new_full((n,), i), offsets.new_tensor(range(n))], 1)
+ for i, n in enumerate(triton.cdiv(offsets[1:] - offsets[:-1], BT).tolist())
+ ])
+ NT = len(indices)
+ g_org, g = g, torch.empty_like(g, dtype=torch.float)
+ def grid(meta): return (triton.cdiv(meta['S'], meta['BS']), NT, B * H)
+    # keep the cumulative normalizer in fp32
+ # this kernel is equivalent to
+ # g = g.view(B, H, NT, BT, -1).cumsum(-2).view(B, H, T, -1)
+ if reverse:
+ chunk_local_reversed_cumsum_vector_kernel[grid](
+ g_org,
+ g,
+ offsets,
+ indices,
+ T=T,
+ H=H,
+ S=S,
+ BT=BT,
+ HEAD_FIRST=head_first
+ )
+ else:
+ chunk_local_cumsum_vector_kernel[grid](
+ g_org,
+ g,
+ offsets,
+ indices,
+ T=T,
+ H=H,
+ S=S,
+ BT=BT,
+ HEAD_FIRST=head_first
+ )
+ return g
+
+
+@contiguous
+def chunk_local_cumsum(
+ g: torch.Tensor,
+ chunk_size: int,
+ reverse: bool = False,
+ offsets: Optional[torch.Tensor] = None,
+ indices: Optional[torch.Tensor] = None,
+ head_first: bool = True
+) -> torch.Tensor:
+ if offsets is not None:
+ assert not head_first, "Sequences with variable lengths are not supported for head-first mode"
+ assert g.shape[0] == 1, "Only batch size 1 is supported when offsets are provided"
+ if len(g.shape) == 3:
+ return chunk_local_cumsum_scalar(g, chunk_size, reverse, offsets, indices, head_first)
+ elif len(g.shape) == 4:
+ return chunk_local_cumsum_vector(g, chunk_size, reverse, offsets, indices, head_first)
+ else:
+ raise ValueError(f"Unsupported input shape {g.shape}. "
+ f"which should be (B, H, T, dim) if `head_first=True` "
+ f"or (batch_size, num_heads, seq_len) otherwise")
+
+
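+# Usage note (illustrative sketch): for head-first scalar gates `g` of shape [B, H, T]
+# without `offsets`, `chunk_local_cumsum(g, chunk_size)` is expected to match
+# `g.float().view(B, H, -1, chunk_size).cumsum(-1).view(B, H, T)`,
+# i.e. a cumulative sum restarted at every chunk boundary,
+# assuming `T` is a multiple of `chunk_size`.
+
+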
+@contiguous
+def chunk_global_cumsum_scalar(
+ s: torch.Tensor,
+ dtype: Optional[torch.dtype] = None,
+ reverse: bool = False,
+ offsets: Optional[torch.Tensor] = None,
+ head_first: bool = True
+) -> torch.Tensor:
+ dtype = dtype or s.dtype
+ if head_first:
+ B, H, T = s.shape
+ else:
+ B, T, H = s.shape
+ if offsets is not None:
+ B = len(offsets) - 1
+ grid = (B * H,)
+ z = torch.empty_like(s, dtype=dtype)
+ if reverse:
+ chunk_global_reversed_cumsum_scalar_kernel[grid](
+ s,
+ z,
+ offsets,
+ T=T,
+ H=H,
+ HEAD_FIRST=head_first
+ )
+ else:
+ chunk_global_cumsum_scalar_kernel[grid](
+ s,
+ z,
+ offsets,
+ T=T,
+ H=H,
+ HEAD_FIRST=head_first
+ )
+ return z
+
+
+@contiguous
+def chunk_global_cumsum_vector(
+ s: torch.Tensor,
+ dtype: Optional[torch.dtype] = None,
+ reverse: bool = False,
+ offsets: Optional[torch.Tensor] = None,
+ head_first: bool = True
+) -> torch.Tensor:
+ dtype = dtype or s.dtype
+ if head_first:
+ B, H, T, S = s.shape
+ else:
+ B, T, H, S = s.shape
+ BS = min(32, S)
+ if offsets is not None:
+ B = len(offsets) - 1
+ grid = (triton.cdiv(S, BS), B * H)
+ z = torch.empty_like(s, dtype=dtype)
+ if reverse:
+ chunk_global_reversed_cumsum_vector_kernel[grid](
+ s,
+ z,
+ offsets,
+ T=T,
+ H=H,
+ S=S,
+ BS=BS,
+ HEAD_FIRST=head_first
+ )
+ else:
+ chunk_global_cumsum_vector_kernel[grid](
+ s,
+ z,
+ offsets,
+ T=T,
+ H=H,
+ S=S,
+ BS=BS,
+ HEAD_FIRST=head_first
+ )
+ return z
+
+
+@contiguous
+def chunk_global_cumsum(
+ s: torch.Tensor,
+ dtype: Optional[torch.dtype] = None,
+ reverse: bool = False,
+ offsets: Optional[torch.Tensor] = None,
+ head_first: bool = True
+) -> torch.Tensor:
+ if offsets is not None:
+ assert not head_first, "Sequences with variable lengths are not supported for head-first mode"
+ assert s.shape[0] == 1, "Only batch size 1 is supported when offsets are provided"
+ if len(s.shape) == 3:
+ return chunk_global_cumsum_scalar(s, dtype, reverse, offsets, head_first)
+ elif len(s.shape) == 4:
+ return chunk_global_cumsum_vector(s, dtype, reverse, offsets, head_first)
+ else:
+ raise ValueError(f"Unsupported input shape {s.shape}. "
+ f"which should be [B, H, T]/[B, H, T, D] if `head_first=True` "
+ f"or [B, T, H]/[B, T, H, D] otherwise")
diff --git a/fla/ops/utils/logcumsumexp.py b/fla/ops/utils/logcumsumexp.py
new file mode 100644
index 0000000000000000000000000000000000000000..2d6c3028c7bb359b630e9d641dc8b9c8c26a56b4
--- /dev/null
+++ b/fla/ops/utils/logcumsumexp.py
@@ -0,0 +1,61 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2023-2024, Songlin Yang, Yu Zhang
+
+import triton
+import triton.language as tl
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({'BT': 16}, num_warps=2),
+ triton.Config({'BT': 16}, num_warps=4),
+ triton.Config({'BT': 16}, num_warps=8),
+ triton.Config({'BT': 32}, num_warps=2),
+ triton.Config({'BT': 32}, num_warps=4),
+ triton.Config({'BT': 32}, num_warps=8),
+ triton.Config({'BT': 64}, num_warps=2),
+ triton.Config({'BT': 64}, num_warps=4),
+ triton.Config({'BT': 64}, num_warps=8),
+ ],
+ key=['S']
+)
+@triton.jit
+def logcumsumexp_fwd_kernel(
+ s,
+ z,
+ s_s_h,
+ s_s_t,
+ s_s_d,
+ T: tl.constexpr,
+ S: tl.constexpr,
+ BT: tl.constexpr
+):
+ i_bh = tl.program_id(0)
+ o_i = tl.arange(0, BT)
+ m_s = tl.where(o_i[:, None] >= o_i[None, :], 1., 0.)
+
+ b_mp = tl.full([S,], float('-inf'), dtype=tl.float32)
+ b_zp = tl.zeros([S,], dtype=tl.float32)
+ for i_t in range(tl.cdiv(T, BT)):
+ p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, 0), (BT, S), (1, 0))
+ p_z = tl.make_block_ptr(z + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, 0), (BT, S), (1, 0))
+
+ # [BT, S]
+ b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32)
+ # [S,]
+ b_mc = tl.max(b_s, 0)
+ # workaround for compiler bugs
+ if i_t > 0:
+ b_mc = tl.maximum(b_mp, b_mc)
+ b_zp = b_zp * tl.exp(b_mp - b_mc)
+ # [BT, S]
+ b_s = tl.exp(b_s - b_mc)
+ b_z = tl.dot(m_s, b_s, allow_tf32=False) + b_zp
+ # [S,]
+ b_zc = tl.max(b_z, 0)
+ b_mp = b_mc
+ b_zp = b_zc
+ # [BT, BS]
+ # small eps to prevent underflows
+ b_z = tl.log(tl.where(b_z != 0, b_z, 1e-20)) + b_mc
+ tl.store(p_z, b_z.to(p_z.dtype.element_ty), boundary_check=(0, 1))
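+
+
+# Usage note (illustrative sketch): for each batch-head this kernel computes a numerically
+# stable cumulative logsumexp over the time dimension, i.e. for an input viewed as [T, S]
+# it is expected to match `torch.logcumsumexp(x.float(), dim=0)` column by column.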
diff --git a/fla/ops/utils/logsumexp.py b/fla/ops/utils/logsumexp.py
new file mode 100644
index 0000000000000000000000000000000000000000..58c6188a12d2cd06e921fe0504d5f938a0f100a8
--- /dev/null
+++ b/fla/ops/utils/logsumexp.py
@@ -0,0 +1,82 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2023-2024, Songlin Yang, Yu Zhang
+
+from typing import Optional
+
+import torch
+import triton
+import triton.language as tl
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32),
+ ],
+ key=['D']
+)
+@triton.heuristics({
+ 'HAS_SCALE': lambda args: args['scale'] is not None
+})
+@triton.jit
+def logsumexp_fwd_kernel(
+ x,
+ z,
+ scale,
+ D: tl.constexpr,
+ B: tl.constexpr,
+ HAS_SCALE: tl.constexpr
+):
+ i_n, i_d = tl.program_id(0), tl.program_id(1)
+ o_d = i_d * B + tl.arange(0, B)
+ m_d = o_d < D
+
+ b_x = tl.load(x + i_n * D + o_d, mask=m_d, other=-float('inf'))
+ if HAS_SCALE:
+ b_x = b_x * scale
+ b_m = tl.max(b_x, 0)
+ b_z = tl.log(tl.sum(tl.exp(b_x - b_m), 0)) + b_m
+ tl.store(z + i_n * tl.cdiv(D, B) + i_d, b_z)
+
+
+def logsumexp_fwd(
+ x,
+ scale: Optional[float] = None,
+ dtype: Optional[torch.dtype] = None
+):
+ r"""
+ Compute the logsumexp of the input tensor over the last dimension.
+
+ Args:
+ x (Tensor):
+ The input tensor of any shape.
+ scale (Optional[float]):
+ The scale applied to the input tensor. Default: `None`.
+ dtype (Optional[torch.dtype]):
+ The data type of the output tensor. Default: `None`.
+ Returns:
+ Tensor: The logsumexp of the input tensor.
+ """
+
+ shape = x.shape
+ x = x.view(-1, shape[-1])
+ N, D = x.shape
+ B = min(triton.next_power_of_2(D), 64 * 1024)
+ ND = triton.cdiv(D, B)
+
+ z = x.new_empty(N, ND, dtype=torch.float)
+ logsumexp_fwd_kernel[(N, ND)](
+ x=x,
+ z=z,
+ scale=scale,
+ D=D,
+ B=B
+ )
+ z = z.logsumexp(-1).view(*shape[:-1])
+ if dtype is not None and dtype != torch.float:
+ z = z.to(dtype)
+ return z
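+
+
+# Usage note (illustrative sketch): `logsumexp_fwd(x, scale)` reduces over the last dimension
+# and is expected to match `torch.logsumexp(x.float() * scale, dim=-1)` up to numerical
+# precision (`scale=None` means no scaling); the result is fp32 unless `dtype` is given.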
diff --git a/fla/ops/utils/matmul.py b/fla/ops/utils/matmul.py
new file mode 100644
index 0000000000000000000000000000000000000000..7dfedd3a5249bd7205fd96683987b14fa0cd126b
--- /dev/null
+++ b/fla/ops/utils/matmul.py
@@ -0,0 +1,194 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2023-2024, Songlin Yang, Yu Zhang
+
+# code adapted from
+# https://triton-lang.org/main/getting-started/tutorials/03-matrix-multiplication.html
+
+from typing import Optional
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous
+
+
+# `triton.jit`'ed functions can be auto-tuned by using the `triton.autotune` decorator, which consumes:
+# - A list of `triton.Config` objects that define different configurations of
+# meta-parameters (e.g., `BM`) and compilation options (e.g., `num_warps`) to try
+# - An auto-tuning *key* whose change in values will trigger evaluation of all the
+# provided configs
+@triton.autotune(
+ configs=[
+ triton.Config({'BM': 128, 'BK': 64, 'BN': 256, 'G': 4}, num_stages=3, num_warps=8),
+ triton.Config({'BM': 64, 'BK': 32, 'BN': 256, 'G': 4}, num_stages=4, num_warps=4),
+ triton.Config({'BM': 128, 'BK': 32, 'BN': 128, 'G': 4}, num_stages=4, num_warps=4),
+ triton.Config({'BM': 128, 'BK': 32, 'BN': 64, 'G': 4}, num_stages=4, num_warps=4),
+ triton.Config({'BM': 64, 'BK': 32, 'BN': 128, 'G': 4}, num_stages=4, num_warps=4),
+ triton.Config({'BM': 128, 'BK': 32, 'BN': 32, 'G': 4}, num_stages=4, num_warps=4),
+ triton.Config({'BM': 64, 'BK': 32, 'BN': 32, 'G': 4}, num_stages=5, num_warps=2),
+ triton.Config({'BM': 32, 'BK': 32, 'BN': 64, 'G': 4}, num_stages=5, num_warps=2),
+ # Good config for fp8 inputs.
+ triton.Config({'BM': 128, 'BK': 128, 'BN': 256, 'G': 4}, num_stages=3, num_warps=8),
+ triton.Config({'BM': 256, 'BK': 128, 'BN': 128, 'G': 4}, num_stages=3, num_warps=8),
+ triton.Config({'BM': 256, 'BK': 128, 'BN': 64, 'G': 4}, num_stages=4, num_warps=4),
+ triton.Config({'BM': 64, 'BK': 128, 'BN': 256, 'G': 4}, num_stages=4, num_warps=4),
+ triton.Config({'BM': 128, 'BK': 128, 'BN': 128, 'G': 4}, num_stages=4, num_warps=4),
+ triton.Config({'BM': 128, 'BK': 64, 'BN': 64, 'G': 4}, num_stages=4, num_warps=4),
+ triton.Config({'BM': 64, 'BK': 64, 'BN': 128, 'G': 4}, num_stages=4, num_warps=4),
+ triton.Config({'BM': 128, 'BK': 64, 'BN': 32, 'G': 4}, num_stages=4, num_warps=4)
+ ],
+ key=['M', 'N', 'K'],
+)
+@triton.heuristics({
+ 'HAS_INPUT': lambda args: args['input'] is not None,
+ 'HAS_ALPHA': lambda args: args['alpha'] is not None,
+ 'HAS_BETA': lambda args: args['beta'] is not None
+})
+@triton.jit
+def matmul_kernel(
+ # Pointers to matrices
+ a,
+ b,
+ c,
+ input,
+ alpha,
+ beta,
+ # Matrix dimensions
+ M,
+ N,
+ K,
+ # The stride variables represent how much to increase the ptr by when moving by 1
+ # element in a particular dimension. E.g. `s_am` is how much to increase `a`
+ # by to get the element one row down (A has M rows).
+ s_am,
+ s_ak,
+ s_bk,
+ s_bn,
+ s_cm,
+ s_cn,
+ # Meta-parameters
+ BM: tl.constexpr,
+ BK: tl.constexpr,
+ BN: tl.constexpr,
+ G: tl.constexpr,
+ ACTIVATION: tl.constexpr,
+ HAS_INPUT: tl.constexpr,
+ HAS_ALPHA: tl.constexpr,
+ HAS_BETA: tl.constexpr
+):
+ """Kernel for computing the matmul C = A x B.
+ A has shape (M, K), B has shape (K, N) and C has shape (M, N)
+ """
+ # -----------------------------------------------------------
+ # Map program ids `pid` to the block of C it should compute.
+    # This is done in a grouped ordering to promote L2 data reuse.
+    # See the `L2 Cache Optimizations` section of the Triton matmul tutorial (linked above) for details.
+ NM, NN = tl.num_programs(0), tl.num_programs(1)
+ i_m, i_n = tl.program_id(0), tl.program_id(1)
+ i_m, i_n = tl.swizzle2d(i_m, i_n, NM, NN, G)
+
+ # ----------------------------------------------------------
+ # Create pointers for the first blocks of A and B.
+ # We will advance this pointer as we move in the K direction
+ # and accumulate
+ # `p_a` is a block of [BM, BK] pointers
+ # `p_b` is a block of [BK, BN] pointers
+    # See the `Pointer Arithmetic` section of the Triton matmul tutorial (linked above) for details
+ o_am = (i_m * BM + tl.arange(0, BM)) % M
+ o_bn = (i_n * BN + tl.arange(0, BN)) % N
+ o_k = tl.arange(0, BK)
+
+ p_a = a + (o_am[:, None] * s_am + o_k[None, :] * s_ak)
+ p_b = b + (o_k[:, None] * s_bk + o_bn[None, :] * s_bn)
+
+ b_acc = tl.zeros((BM, BN), dtype=tl.float32)
+ for k in range(0, tl.cdiv(K, BK)):
+ # Load the next block of A and B, generate a mask by checking the K dimension.
+ # If it is out of bounds, set it to 0.
+ b_a = tl.load(p_a, mask=o_k[None, :] < K - k * BK, other=0.0)
+ b_b = tl.load(p_b, mask=o_k[:, None] < K - k * BK, other=0.0)
+ # We accumulate along the K dimension.
+ b_acc += tl.dot(b_a, b_b, allow_tf32=False)
+ # Advance the ptrs to the next K block.
+ p_a += BK * s_ak
+ p_b += BK * s_bk
+
+ o_cm = i_m * BM + tl.arange(0, BM)
+ o_cn = i_n * BN + tl.arange(0, BN)
+ mask = (o_cm[:, None] < M) & (o_cn[None, :] < N)
+
+ b_c = b_acc
+ # You can fuse arbitrary activation functions here
+ # while the b_acc is still in FP32!
+ if ACTIVATION == "leaky_relu":
+ b_c = leaky_relu(b_c)
+ if HAS_ALPHA:
+ b_c *= tl.load(alpha)
+ if HAS_INPUT:
+ p_i = input + s_cm * o_cm[:, None] + s_cn * o_cn[None, :]
+ b_i = tl.load(p_i, mask=mask, other=0.0).to(tl.float32)
+ if HAS_BETA:
+ b_i *= tl.load(beta)
+ b_c += b_i
+
+ # -----------------------------------------------------------
+ # Write back the block of the output matrix C with masks.
+ p_c = c + s_cm * o_cm[:, None] + s_cn * o_cn[None, :]
+ tl.store(p_c, b_c.to(c.dtype.element_ty), mask=mask)
+
+
+# We can fuse `leaky_relu` by providing it as an `ACTIVATION` meta-parameter in `matmul_kernel`.
+@triton.jit
+def leaky_relu(x):
+ return tl.where(x >= 0, x, 0.01 * x)
+
+
+@contiguous
+def matmul(a, b, activation=''):
+ assert a.shape[1] == b.shape[0], 'Incompatible dimensions (A: {}x{}, B: {}x{})'.format(*a.shape, *b.shape)
+
+ M, K = a.shape
+ K, N = b.shape
+ # Allocates output.
+ c = a.new_empty(M, N)
+ # 1D launch kernel where each block gets its own program.
+
+ def grid(meta): return (triton.cdiv(M, meta['BM']), triton.cdiv(N, meta['BN']))
+ matmul_kernel[grid](
+ a, b, c, None, None, None,
+ M, N, K,
+ a.stride(0), a.stride(1),
+ b.stride(0), b.stride(1),
+ c.stride(0), c.stride(1),
+ ACTIVATION=activation,
+ )
+ return c
+
+
+@contiguous
+def addmm(
+ x: torch.Tensor,
+ a: torch.Tensor,
+ b: torch.Tensor,
+ alpha: Optional[float] = None,
+ beta: Optional[float] = None,
+ inplace: Optional[bool] = False
+) -> torch.Tensor:
+ assert a.shape[1] == b.shape[0], 'Incompatible dimensions (A: {}x{}, B: {}x{})'.format(*a.shape, *b.shape)
+
+ M, K = a.shape
+ K, N = b.shape
+ # Allocates output.
+ c = x if inplace else a.new_empty(M, N)
+
+ def grid(meta): return (triton.cdiv(M, meta['BM']), triton.cdiv(N, meta['BN']))
+ matmul_kernel[grid](
+ a, b, c, x, alpha, beta,
+ M, N, K,
+ a.stride(0), a.stride(1),
+ b.stride(0), b.stride(1),
+ c.stride(0), c.stride(1),
+ ACTIVATION=None,
+ )
+ return c
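+
+
+# Usage note (illustrative sketch): on 2D CUDA tensors these helpers mirror their PyTorch
+# counterparts, i.e. `matmul(a, b)` ~ `a @ b` and
+# `addmm(x, a, b, alpha, beta)` ~ `beta * x + alpha * (a @ b)`,
+# with `alpha`/`beta` acting as 1 when left as `None`.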
diff --git a/fla/ops/utils/softmax.py b/fla/ops/utils/softmax.py
new file mode 100644
index 0000000000000000000000000000000000000000..ff54cea1ea7c0a60d7b360b0d91e98fc20c48de0
--- /dev/null
+++ b/fla/ops/utils/softmax.py
@@ -0,0 +1,109 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2023-2024, Songlin Yang, Yu Zhang
+
+from typing import Optional
+
+import torch
+import triton
+import triton.language as tl
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32)
+ ],
+ key=['D']
+)
+@triton.jit
+def softmax_fwd_kernel(
+ x,
+ p,
+ D: tl.constexpr,
+ B: tl.constexpr
+):
+ i_n = tl.program_id(0)
+ o_d = tl.arange(0, B)
+ m_d = o_d < D
+
+ b_x = tl.load(x + i_n * D + o_d, mask=m_d, other=-float('inf'))
+ b_m = tl.max(b_x, 0)
+ b_x = tl.exp(b_x - b_m)
+ b_p = b_x / tl.sum(b_x, 0)
+
+ tl.store(p + i_n * D + o_d, b_p.to(p.dtype.element_ty), mask=m_d)
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32)
+ ],
+ key=['D']
+)
+@triton.jit
+def softmax_bwd_kernel(
+ p,
+ dp,
+ ds,
+ D: tl.constexpr,
+ B: tl.constexpr
+):
+ i_n = tl.program_id(0)
+ o_d = tl.arange(0, B)
+ m_d = o_d < D
+
+ b_p = tl.load(p + i_n * D + o_d, mask=m_d, other=0.)
+ b_dp = tl.load(dp + i_n * D + o_d, mask=m_d, other=0.)
+ b_pp = tl.sum(b_p * b_dp, 0)
+ b_ds = b_p * b_dp - b_p * b_pp
+ tl.store(ds + i_n * D + o_d, b_ds.to(ds.dtype.element_ty), mask=m_d)
+
+
+def softmax_fwd(
+ x: torch.Tensor,
+ dtype: Optional[torch.dtype] = torch.float
+) -> torch.Tensor:
+ shape = x.shape
+ x = x.view(-1, x.shape[-1])
+
+ N, D = x.shape
+ B = triton.next_power_of_2(D)
+
+ p = torch.empty_like(x, dtype=dtype)
+ softmax_fwd_kernel[(N,)](
+ x=x,
+ p=p,
+ D=D,
+ B=B
+ )
+ return p.view(*shape)
+
+
+def softmax_bwd(
+ p: torch.Tensor,
+ dp: torch.Tensor,
+ dtype: Optional[torch.dtype] = torch.float
+) -> torch.Tensor:
+ shape = p.shape
+ p = p.view(-1, p.shape[-1])
+ ds = torch.empty_like(p, dtype=dtype)
+
+ N, D = p.shape
+ B = triton.next_power_of_2(D)
+ softmax_bwd_kernel[(N,)](
+ p=p,
+ dp=dp,
+ ds=ds,
+ D=D,
+ B=B
+ )
+ return ds.view(*shape)
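+
+
+# Usage note: `softmax_fwd(x)` is expected to match `x.float().softmax(-1)`, and
+# `softmax_bwd(p, dp)` returns the corresponding input gradient
+# `p * (dp - (p * dp).sum(-1, keepdim=True))`.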
diff --git a/fla/utils.py b/fla/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..604c755abbf91d7f7407f475e22a7b122b79c4c8
--- /dev/null
+++ b/fla/utils.py
@@ -0,0 +1,48 @@
+# -*- coding: utf-8 -*-
+
+import functools
+
+import torch
+from packaging import version
+
+
+def contiguous(fn):
+ """
+ Make sure all input tensors are contiguous.
+ """
+ @functools.wraps(fn)
+ def wrapper(ctx, *args, **kwargs):
+ return fn(ctx,
+ *(i if not isinstance(i, torch.Tensor) else i.contiguous() for i in args),
+ **{k: (v if not isinstance(v, torch.Tensor) else v.contiguous()) for k, v in kwargs.items()})
+ return wrapper
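+
+
+# Usage sketch (illustrative): `contiguous` is typically stacked on `torch.autograd.Function`
+# methods, e.g.
+#
+#     class Scale(torch.autograd.Function):
+#         @staticmethod
+#         @contiguous
+#         def forward(ctx, x, alpha):
+#             return x * alpha
+#
+# Here `ctx` is forwarded unchanged while `x` (and any other tensor arguments) are made
+# contiguous before `forward` runs.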
+
+
+def require_version(version, hint):
+ """
+ Perform a runtime check of the dependency versions, using the exact same syntax used by pip.
+ """
+ def decorator(fn):
+ @functools.wraps(fn)
+ def wrapper(ctx, *args, **kwargs):
+ from transformers.utils.versions import require_version
+ require_version(version, hint)
+ return fn(ctx,
+ *(i if not isinstance(i, torch.Tensor) else i.contiguous() for i in args),
+ **{k: (v if not isinstance(v, torch.Tensor) else v.contiguous()) for k, v in kwargs.items()})
+ return wrapper
+ return decorator
+
+
+def checkpoint(func):
+ def wrapper(*args, **kwargs):
+ return torch.utils.checkpoint.checkpoint(func, *args, **kwargs)
+ return wrapper
+
+
+if version.parse(torch.__version__) >= version.parse("2.4"):
+ autocast_custom_fwd = functools.partial(torch.amp.custom_fwd, device_type="cuda")
+ autocast_custom_bwd = functools.partial(torch.amp.custom_bwd, device_type="cuda")
+else:
+ autocast_custom_fwd = torch.cuda.amp.custom_fwd
+ autocast_custom_bwd = torch.cuda.amp.custom_bwd
diff --git a/flame/__init__.py b/flame/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/flame/__pycache__/__init__.cpython-312.pyc b/flame/__pycache__/__init__.cpython-312.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..9bbf08b4cfc5a632583ad6ee8921de9f0f5bc340
Binary files /dev/null and b/flame/__pycache__/__init__.cpython-312.pyc differ
diff --git a/flame/__pycache__/data.cpython-312.pyc b/flame/__pycache__/data.cpython-312.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..d7b50f7b4a6ace23f8c05b80c7935997cb1f7deb
Binary files /dev/null and b/flame/__pycache__/data.cpython-312.pyc differ
diff --git a/flame/__pycache__/logging.cpython-312.pyc b/flame/__pycache__/logging.cpython-312.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..608991b0075a4b4e2f4bd2093e88475f67905cc8
Binary files /dev/null and b/flame/__pycache__/logging.cpython-312.pyc differ
diff --git a/flame/__pycache__/parser.cpython-312.pyc b/flame/__pycache__/parser.cpython-312.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..2c51a4e9fb8a40cd98bba88487a938155d2f4063
Binary files /dev/null and b/flame/__pycache__/parser.cpython-312.pyc differ
diff --git a/flame/data.py b/flame/data.py
new file mode 100644
index 0000000000000000000000000000000000000000..fd2806b43dede99ea494702c99afa22e0dfacf0e
--- /dev/null
+++ b/flame/data.py
@@ -0,0 +1,86 @@
+# -*- coding: utf-8 -*-
+
+from dataclasses import dataclass
+from typing import Any, Dict, List, Union
+
+import numpy as np
+import torch
+from transformers import PreTrainedTokenizer
+
+
+@dataclass
+class DataCollatorForLanguageModeling:
+ """
+ Data collator used for language modeling. Inputs are dynamically padded to the maximum length of a batch if they
+ are not all of the same length.
+
+ Args:
+ tokenizer ([`PreTrainedTokenizer`] or [`PreTrainedTokenizerFast`]):
+ The tokenizer used for encoding the data.
+ varlen (`bool`):
+ Whether to return sequences with variable lengths.
+ If `True`, the offsets indicating the start and end of each sequence will be returned.
+ For example, if the sequence lengths are `[4, 8, 12]`,
+ the returned `input_ids` will be a long flattened tensor of shape `[1, 24]`, with `offsets` being `[0, 4, 12, 24]`.
+ If `False`, the `input_ids` with shape `[batch_size, seq_len]` will be returned directly.
+ return_tensors (`str`):
+ The type of Tensor to return. Allowable values are "pt".
+
+    Each item in the dataset is expected to be a dict (or `BatchEncoding`) containing an
+    `"input_ids"` entry, whose value may be a Python list, a NumPy array or a tensor of
+    token ids; plain lists of token ids are also accepted.
+
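+    Example (illustrative only; assumes a Llama-style tokenizer with `add_bos_token=True`
+    and `bos_token_id=1`):
+
+    >>> collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, varlen=True)
+    >>> batch = collator([{'input_ids': [1, 5, 6, 7]}, {'input_ids': [1, 8, 9]}])
+    >>> batch['input_ids']
+    tensor([[1, 5, 6, 7, 1, 8, 9]])
+    >>> batch['offsets']
+    tensor([0, 4, 7])
+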
+ """
+
+ tokenizer: PreTrainedTokenizer
+ varlen: bool = False
+ return_tensors: str = "pt"
+
+ def __call__(
+ self,
+ examples: List[Union[List[int], Dict[str, Any]]]
+ ) -> Dict[str, Any]:
+ if not isinstance(examples[0], Dict):
+ examples = [{'input_ids': x} for x in examples]
+ if isinstance(examples[0]['input_ids'], List):
+ examples = [{'input_ids': torch.tensor(x['input_ids'], dtype=torch.long)} for x in examples]
+ elif isinstance(examples[0]['input_ids'], np.ndarray):
+ examples = [{'input_ids': torch.from_numpy(x['input_ids'])} for x in examples]
+
+ if not self.varlen:
+ length_of_first = examples[0]['input_ids'].size(0)
+ # Check if padding is necessary.
+ if all(x['input_ids'].size(0) == length_of_first for x in examples):
+ batch = {'input_ids': torch.stack([x['input_ids'] for x in examples], dim=0)}
+ else:
+ # If yes, check if we have a `pad_token`.
+ if self.tokenizer._pad_token is None:
+ raise ValueError(
+ f"You are attempting to pad samples but the tokenizer you are using "
+ f"({self.tokenizer.__class__.__name__}) does not have a pad token."
+ )
+ batch = self.tokenizer.pad(examples, return_tensors=self.return_tensors, return_attention_mask=False)
+ else:
+ batch = {'input_ids': torch.cat([x['input_ids'] for x in examples], dim=0).unsqueeze(0)}
+ if self.tokenizer.add_bos_token:
+ offsets = []
+ if batch['input_ids'][0, 0] != self.tokenizer.bos_token_id:
+ offsets.append(torch.tensor([0], dtype=torch.long))
+ offsets.append(torch.where(batch['input_ids'].eq(self.tokenizer.bos_token_id))[1])
+ offsets.append(torch.tensor([len(batch['input_ids'][0])], dtype=torch.long))
+ batch['offsets'] = torch.cat(offsets, dim=0)
+ elif self.tokenizer.add_eos_token:
+ offsets = [torch.tensor([0], dtype=torch.long)]
+ offsets.append(torch.where(batch['input_ids'].eq(self.tokenizer.eos_token_id))[1] + 1)
+ if batch['input_ids'][0, -1] != self.tokenizer.eos_token_id:
+ offsets.append(torch.tensor([len(batch['input_ids'][0])], dtype=torch.long))
+ batch['offsets'] = torch.cat(offsets, dim=0)
+ else:
+ raise ValueError("You must allow the tokenizer to add either a bos or eos token as separators.")
+ labels = batch['input_ids'].clone()
+ if self.tokenizer.pad_token_id is not None:
+ labels[labels == self.tokenizer.pad_token_id] = -100
+ batch["labels"] = labels
+ return batch
diff --git a/flame/logging.py b/flame/logging.py
new file mode 100644
index 0000000000000000000000000000000000000000..5ebdf2e1281a187f246430ed8b1074b40ebfb450
--- /dev/null
+++ b/flame/logging.py
@@ -0,0 +1,149 @@
+# -*- coding: utf-8 -*-
+
+import json
+import logging
+import os
+import sys
+import time
+
+from transformers.trainer_callback import (ExportableState, TrainerCallback,
+ TrainerControl, TrainerState)
+from transformers.training_args import TrainingArguments
+
+
+class LoggerHandler(logging.Handler):
+ r"""
+ Logger handler used in Web UI.
+ """
+
+ def __init__(self):
+ super().__init__()
+ self.log = ""
+
+ def reset(self):
+ self.log = ""
+
+ def emit(self, record):
+ if record.name == "httpx":
+ return
+ log_entry = self.format(record)
+ self.log += log_entry
+ self.log += "\n\n"
+
+
+def get_logger(name: str) -> logging.Logger:
+ r"""
+    Gets a standard logger with a stream handler to stdout.
+ """
+ formatter = logging.Formatter(
+ fmt="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S"
+ )
+ handler = logging.StreamHandler(sys.stdout)
+ handler.setFormatter(formatter)
+
+ logger = logging.getLogger(name)
+ logger.setLevel(logging.INFO)
+ logger.addHandler(handler)
+
+ return logger
+
+
+def reset_logging() -> None:
+ r"""
+    Removes the basic config of the root logger (unused in this script).
+ """
+ root = logging.getLogger()
+ list(map(root.removeHandler, root.handlers))
+ list(map(root.removeFilter, root.filters))
+
+
+logger = get_logger(__name__)
+
+LOG_FILE_NAME = "trainer_log.jsonl"
+
+
+class LogCallback(TrainerCallback, ExportableState):
+ def __init__(self, start_time: float = None, elapsed_time: float = None):
+
+ self.start_time = time.time() if start_time is None else start_time
+ self.elapsed_time = 0 if elapsed_time is None else elapsed_time
+ self.last_time = self.start_time
+
+ def on_train_begin(
+ self,
+ args: TrainingArguments,
+ state: TrainerState,
+ control: TrainerControl,
+ **kwargs
+ ):
+ r"""
+ Event called at the beginning of training.
+ """
+ if state.is_local_process_zero:
+ if not args.resume_from_checkpoint:
+ self.start_time = time.time()
+ self.elapsed_time = 0
+ else:
+ self.start_time = state.stateful_callbacks['LogCallback']['start_time']
+ self.elapsed_time = state.stateful_callbacks['LogCallback']['elapsed_time']
+
+ if args.save_on_each_node:
+ if not state.is_local_process_zero:
+ return
+ else:
+ if not state.is_world_process_zero:
+ return
+
+ self.last_time = time.time()
+ if os.path.exists(os.path.join(args.output_dir, LOG_FILE_NAME)) and args.overwrite_output_dir:
+ logger.warning("Previous log file in this folder will be deleted.")
+ os.remove(os.path.join(args.output_dir, LOG_FILE_NAME))
+
+ def on_log(
+ self,
+ args: TrainingArguments,
+ state: TrainerState,
+ control: TrainerControl,
+ logs,
+ **kwargs
+ ):
+ if args.save_on_each_node:
+ if not state.is_local_process_zero:
+ return
+ else:
+ if not state.is_world_process_zero:
+ return
+
+ self.elapsed_time += time.time() - self.last_time
+ self.last_time = time.time()
+ if 'num_input_tokens_seen' in logs:
+ logs['num_tokens'] = logs.pop('num_input_tokens_seen')
+ state.log_history[-1].pop('num_input_tokens_seen')
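+            # Average tokens per device per second since training started (assumes
+            # `num_input_tokens_seen` is already aggregated across all processes).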
+ throughput = logs['num_tokens'] / args.world_size / self.elapsed_time
+ state.log_history[-1]['throughput'] = logs['throughput'] = throughput
+ state.stateful_callbacks["LogCallback"] = self.state()
+
+ logs = dict(
+ current_steps=state.global_step,
+ total_steps=state.max_steps,
+ loss=state.log_history[-1].get("loss", None),
+ eval_loss=state.log_history[-1].get("eval_loss", None),
+ predict_loss=state.log_history[-1].get("predict_loss", None),
+ learning_rate=state.log_history[-1].get("learning_rate", None),
+ epoch=state.log_history[-1].get("epoch", None),
+ percentage=round(state.global_step / state.max_steps * 100, 2) if state.max_steps != 0 else 100,
+ )
+
+ os.makedirs(args.output_dir, exist_ok=True)
+        with open(os.path.join(args.output_dir, LOG_FILE_NAME), "a", encoding="utf-8") as f:
+ f.write(json.dumps(logs) + "\n")
+
+ def state(self) -> dict:
+ return {
+ 'start_time': self.start_time,
+ 'elapsed_time': self.elapsed_time
+ }
+
+ @classmethod
+ def from_state(cls, state):
+ return cls(state['start_time'], state['elapsed_time'])
diff --git a/flame/parser.py b/flame/parser.py
new file mode 100644
index 0000000000000000000000000000000000000000..621a0ef157f0e6c96f016d8f21abe4e6b2b8772d
--- /dev/null
+++ b/flame/parser.py
@@ -0,0 +1,90 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+from dataclasses import dataclass, field
+from typing import Optional
+
+import transformers
+from transformers import HfArgumentParser, TrainingArguments as HfTrainingArguments
+
+from flame.logging import get_logger
+
+logger = get_logger(__name__)
+
+
+@dataclass
+class TrainingArguments(HfTrainingArguments):
+
+ model_name_or_path: str = field(
+ default=None,
+ metadata={
+            "help": "Path to the model weights or a model identifier from huggingface.co/models or modelscope.cn/models."
+ },
+ )
+ tokenizer: str = field(
+ default="mistralai/Mistral-7B-v0.1",
+ metadata={"help": "Name of the tokenizer to use."}
+ )
+ use_fast_tokenizer: bool = field(
+ default=False,
+        metadata={"help": "Whether or not to use a fast tokenizer (backed by the `tokenizers` library)."},
+ )
+ from_config: bool = field(
+ default=True,
+        metadata={"help": "Whether to initialize the model from a config (i.e., train from scratch) rather than loading pretrained weights."},
+ )
+ dataset: Optional[str] = field(
+ default=None,
+ metadata={"help": "The dataset(s) to use. Use commas to separate multiple datasets."},
+ )
+ dataset_name: Optional[str] = field(
+ default=None,
+ metadata={"help": "The name of provided dataset(s) to use."},
+ )
+ cache_dir: str = field(
+ default=None,
+ metadata={"help": "Path to the cached tokenized dataset."},
+ )
+ split: str = field(
+ default="train",
+ metadata={"help": "Which dataset split to use for training and evaluation."},
+ )
+ streaming: bool = field(
+ default=False,
+ metadata={"help": "Enable dataset streaming."},
+ )
+ hf_hub_token: Optional[str] = field(
+ default=None,
+ metadata={"help": "Auth token to log in with Hugging Face Hub."},
+ )
+ preprocessing_num_workers: Optional[int] = field(
+ default=None,
+ metadata={"help": "The number of processes to use for the pre-processing."},
+ )
+ buffer_size: int = field(
+ default=2048,
+ metadata={"help": "Size of the buffer to randomly sample examples from in dataset streaming."},
+ )
+ context_length: int = field(
+ default=2048,
+ metadata={"help": "The context length of the tokenized inputs in the dataset."},
+ )
+
+
+def get_train_args():
+ parser = HfArgumentParser(TrainingArguments)
+ args, unknown_args = parser.parse_args_into_dataclasses(return_remaining_strings=True)
+
+ if unknown_args:
+ print(parser.format_help())
+ print("Got unknown args, potentially deprecated arguments: {}".format(unknown_args))
+ raise ValueError("Some specified arguments are not used by the HfArgumentParser: {}".format(unknown_args))
+
+ if args.should_log:
+ transformers.utils.logging.set_verbosity(args.get_process_log_level())
+ transformers.utils.logging.enable_default_handler()
+ transformers.utils.logging.enable_explicit_format()
+ # set seeds manually
+ transformers.set_seed(args.seed)
+ return args
diff --git a/model.safetensors b/model.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..1631c6c8953ab70d5cd91247b9a26ad48607e150
--- /dev/null
+++ b/model.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:077ea186538f6591ceaaa5ff30ddde7385b35d39ebb98b7e10538dc67e588936
+size 169434248
diff --git a/preprocess.py b/preprocess.py
new file mode 100644
index 0000000000000000000000000000000000000000..d175b0cb8f0a26737295989c7f4a5a2c0d7d8517
--- /dev/null
+++ b/preprocess.py
@@ -0,0 +1,114 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import annotations
+
+import argparse
+import logging
+from itertools import chain
+from typing import Any, Dict, List, Optional
+
+from datasets import load_dataset
+from transformers import AutoTokenizer
+
+logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
+
+
+def tokenize(
+ examples: Dict[str, List[Any]],
+ tokenizer: AutoTokenizer,
+ context_length: int
+) -> Dict[str, List[List[int]]]:
+ """
+ Tokenize the input text and split into chunks of specified context length.
+
+ Args:
+ examples:
+ Dictionary containing the input text.
+ tokenizer:
+ Initialized tokenizer.
+ context_length:
+ Length of each context chunk.
+
+ Returns:
+ Dictionary containing tokenized and chunked input ids
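+
+    Example (illustrative): with context_length=4 and texts that tokenize to 10 ids in
+    total, the concatenated ids are split into two chunks of 4 and the trailing 2 ids
+    are dropped.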
+ """
+ text = examples['text']
+ input_ids = tokenizer(text)['input_ids']
+ input_ids = list(chain(*input_ids))
+ total_length = len(input_ids)
+ total_length = (total_length // context_length) * context_length
+    # Any trailing tokens that do not fill a full context_length chunk are discarded
+ return {'input_ids': [input_ids[i:i+context_length] for i in range(0, total_length, context_length)]}
+
+
+def preprocess(
+ dataset: str,
+ name: Optional[str] = None,
+ split: str = 'train',
+ output: str = 'data',
+ model: str = 'mistralai/Mistral-7B-v0.1',
+ num_proc: int = 64,
+ context_length: int = 8192
+) -> None:
+ """
+ Load, tokenize, and save the processed dataset.
+
+ Args:
+ dataset:
+ Path or name of the dataset.
+ name:
+ Name of the dataset configuration.
+ split:
+ Dataset split to process.
+ output:
+ Output directory.
+ model:
+ Model name for tokenizer.
+ num_proc:
+ Number of processes for parallel processing.
+ context_length:
+ Context length for tokenization.
+ """
+ tokenized_path = f'{output}/{dataset}/{name}/{split}' if name is not None else f'{output}/{dataset}/{split}'
+
+ logging.info(f'Initializing tokenizer of {model}')
+ tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)
+ logging.info(f'Tokenizer initialized: {tokenizer}')
+
+ logging.info(f'Loading dataset: {dataset}')
+ dataset = load_dataset(dataset, name=name, split=split)
+
+ remove_columns = list(next(iter(dataset)).keys())
+ logging.info('Tokenizing and processing dataset')
+ dataset = dataset.map(
+ lambda examples: tokenize(examples, tokenizer, context_length),
+ batched=True,
+ remove_columns=remove_columns,
+ num_proc=num_proc,
+ desc="Running tokenizer on dataset"
+ )
+
+ logging.info(f'Saving processed dataset to {tokenized_path}')
+ dataset.save_to_disk(tokenized_path, num_proc=num_proc)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(description="Preprocess and tokenize dataset")
+ parser.add_argument("--dataset", default="HuggingFaceFW/fineweb-edu", help="Path or name of the dataset")
+ parser.add_argument("--name", default=None, help="Name of the dataset configuration")
+ parser.add_argument("--split", default="train", help="Dataset split to process")
+ parser.add_argument("--output", default="data", help="Output directory")
+ parser.add_argument("--model", default="mistralai/Mistral-7B-v0.1", help="Model name for tokenizer")
+ parser.add_argument("--num_proc", type=int, default=64, help="Number of processes for parallel processing")
+ parser.add_argument("--context_length", type=int, default=8192, help="Context length for tokenization")
+ args = parser.parse_args()
+
+ preprocess(
+ dataset=args.dataset,
+ name=args.name,
+ split=args.split,
+ output=args.output,
+ model=args.model,
+ num_proc=args.num_proc,
+ context_length=args.context_length
+ )
diff --git a/profile.sh b/profile.sh
new file mode 100644
index 0000000000000000000000000000000000000000..0678f13bcc5390387c3e3e501559060d58dff082
--- /dev/null
+++ b/profile.sh
@@ -0,0 +1,194 @@
+args=$@
+for arg in $args; do
+ eval "$arg"
+done
+
+echo "model: ${model:=mistralai/Mistral-7B-v0.1}"
+echo "tokenizer: ${tokenizer:=mistralai/Mistral-7B-v0.1}"
+echo "project: ${project:=fla}"
+echo "type: ${type:=gla}"
+echo "data: ${data:=}"
+echo "name: ${name:=}"
+echo "cache: ${cache:=}"
+echo "seed: ${seed:=42}"
+echo "context: ${context:=2048}"
+echo "steps: ${steps:=0}"
+echo "save: ${save:=2048}"
+echo "limit: ${limit:=16}"
+echo "preprocessing: ${preprocessing:=32}"
+echo "workers: ${workers:=32}"
+echo "logging: ${logging:=32}"
+echo "config: ${config:=configs/deepspeed.yaml}"
+echo "push: ${push:=False}"
+
+echo "lr: ${lr:=3e-4}"
+echo "scheduler: ${scheduler:=cosine_with_min_lr}"
+echo "epochs: ${epochs:=1}"
+echo "optim: ${optim:=adamw_torch_fused}"
+echo "decay: ${decay:=0.01}"
+echo "beta1: ${beta1:=0.9}"
+echo "beta2: ${beta2:=0.95}"
+echo "norm: ${norm:=1.0}"
+echo "batch: ${batch:=32}"
+echo "update: ${update:=4}"
+echo "warmup: ${warmup:=512}"
+echo "path: ${path:=}"
+echo "checkpoint: ${checkpoint:=}"
+echo "node: ${node:=}"
+echo "rank: ${rank:=}"
+echo "ip: ${ip:=}"
+echo "port: ${port:=}"
+echo "nodes: ${nodes:=1}"
+
+params="--model_name_or_path $model \
+ --tokenizer $tokenizer \
+ --use_fast_tokenizer \
+ --do_train \
+ --dataset $data \
+ --context_length $context \
+ --streaming \
+ --preprocessing_num_workers $preprocessing \
+ --dataloader_num_workers $workers \
+ --dataloader_prefetch_factor 2 \
+ --ignore_data_skip \
+ --output_dir $path \
+ --overwrite_output_dir \
+ --logging_steps $logging \
+ --include_num_input_tokens_seen \
+ --save_steps $save \
+ --save_total_limit $limit \
+ --learning_rate $lr \
+ --lr_scheduler_type $scheduler \
+ --warmup_steps $warmup \
+ --optim $optim \
+ --weight_decay $decay \
+ --adam_beta1=$beta1 \
+ --adam_beta2=$beta2 \
+ --max_grad_norm $norm \
+ --num_train_epochs $epochs \
+ --per_device_train_batch_size $batch \
+ --gradient_accumulation_steps $update \
+ --seed $seed \
+ --push_to_hub $push \
+ --bf16"
+
+if [ $steps -gt 0 ]; then
+ params+=" --max_steps $steps"
+fi
+
+if [ "$name" != "" ]; then
+ params+=" --dataset_name $name"
+fi
+if [ "$cache" != "" ]; then
+ params+=" --cache_dir $cache"
+fi
+if [ "$checkpoint" != "" ]; then
+ params+=" --resume_from_checkpoint $checkpoint"
+fi
+if [ "$WANDB_DISABLED" != "true" ]; then
+ params+=" --report_to wandb \
+ --run_name $type.$(basename $path)"
+else
+ params+=" --report_to none"
+fi
+
+NUM_GPUS=$(nvidia-smi --list-gpus | wc -l)
+echo "Launching training..."
+accelerate_params=""
+if [ "$rank" != "" ]; then
+ accelerate_params+=" --machine_rank $rank \
+ --num_processes $((nodes * $NUM_GPUS)) \
+ --num_machines $nodes \
+ --main_process_ip $ip \
+ --main_process_port $port \
+ --same_network"
+fi
+
+if [[ $config == *"deepspeed"* ]]; then
+cat <<EOF > "configs/ds_config.json"
+{
+ "train_batch_size": "auto",
+ "train_micro_batch_size_per_gpu": "auto",
+ "gradient_accumulation_steps": "auto",
+ "gradient_clipping": "auto",
+ "zero_allow_untested_optimizer": true,
+ "bf16": {
+ "enabled": true
+ },
+ "zero_optimization": {
+ "stage": 2,
+ "allgather_partitions": true,
+ "allgather_bucket_size": 5e8,
+ "reduce_scatter": true,
+ "reduce_bucket_size": 5e8,
+ "overlap_comm": false,
+ "contiguous_gradients": true
+ }
+}
+EOF
+cat <<EOF > $config
+compute_environment: LOCAL_MACHINE
+distributed_type: DEEPSPEED
+deepspeed_config:
+ deepspeed_config_file: configs/ds_config.json
+ zero3_init_flag: true
+machine_rank: 0
+main_training_function: main
+num_machines: 1
+num_processes: $NUM_GPUS
+use_cpu: false
+EOF
+fi
+if [[ $config == *"fsdp"* ]]; then
+cat <<EOF > $config
+compute_environment: LOCAL_MACHINE
+distributed_type: FSDP
+fsdp_config:
+ fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
+ fsdp_backward_prefetch: BACKWARD_PRE
+ fsdp_forward_prefetch: false
+ fsdp_cpu_ram_efficient_loading: true
+ fsdp_offload_params: false
+ fsdp_sharding_strategy: HYBRID_SHARD_ZERO2
+ fsdp_state_dict_type: SHARDED_STATE_DICT
+ fsdp_sync_module_states: true
+ fsdp_use_orig_params: true
+machine_rank: 0
+main_training_function: main
+mixed_precision: bf16
+num_machines: $nodes
+num_processes: $((nodes * $NUM_GPUS))
+rdzv_backend: static
+same_network: true
+tpu_env: []
+tpu_use_cluster: false
+tpu_use_sudo: false
+use_cpu: false
+EOF
+fi
+
+cat $config
+
+set -x
+mkdir -p $path profiling
+cp * $path
+cp -r configs $path
+cp -r flame $path
+cp -r ../fla $path
+
+export TRANSFORMERS_OFFLINE=1
+export HF_DATASETS_OFFLINE=1
+if [ "$date" == "" ]; then
+ date=$(date +%Y%m%d%H%M)
+fi
+# export WANDB_RESUME=allow
+# export WANDB_NAME="$type.$(basename $path)"
+# export WANDB_PROJECT=$project
+# export WANDB_RUN_ID="$WANDB_NAME-$date"
+export WANDB_MODE=offline
+export HF_HUB_OFFLINE=0
+export TRITON_PRINT_AUTOTUNING=1
+ncu --set all -o profiling/train-profile python run.py $params
+
+echo "RUNNING DONE!"
diff --git a/run.py b/run.py
new file mode 100644
index 0000000000000000000000000000000000000000..b9e1be1999ee910335e700112954c30b9b1e43e5
--- /dev/null
+++ b/run.py
@@ -0,0 +1,75 @@
+# -*- coding: utf-8 -*-
+
+from datasets import load_from_disk
+from transformers import (AutoConfig, AutoModelForCausalLM, AutoTokenizer,
+ Trainer)
+
+import fla # noqa
+from flame.data import DataCollatorForLanguageModeling
+from flame.logging import LogCallback, get_logger
+from flame.parser import get_train_args
+
+logger = get_logger(__name__)
+
+
+def main():
+ args = get_train_args()
+ logger.info(args)
+
+ tokenizer = AutoTokenizer.from_pretrained(
+ args.tokenizer,
+ use_fast=args.use_fast_tokenizer,
+ trust_remote_code=True,
+ add_bos_token=True,
+ add_eos_token=False
+ )
+ if tokenizer.pad_token_id is None:
+ tokenizer.pad_token = tokenizer.eos_token
+        logger.info("No pad token found, using the eos token as the pad token: {}".format(tokenizer.pad_token))
+ if args.from_config:
+ logger.info("All model params are randomly initialized for from-scratch training.")
+ model = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(args.model_name_or_path))
+ else:
+ logger.info(f"Loading pretrained checkpoint {args.model_name_or_path}")
+ model = AutoModelForCausalLM.from_pretrained(args.model_name_or_path)
+ model.train()
+
+ trainable_params, all_param = model.num_parameters(only_trainable=True), model.num_parameters()
+ logger.info(f"% of trainable params: {trainable_params:d} / {all_param:d} = {trainable_params / all_param:.2%}")
+ logger.info(f"{tokenizer}\n{model}\n{model.config}")
+
+ logger.info(f"Loading the `{args.split}` split directly from the cache {args.cache_dir}...")
+ dataset = load_from_disk(args.cache_dir)
+ logger.info(f"{dataset}")
+ logger.info(f"Shuffling the dataset with seed {args.seed}")
+ dataset = dataset.shuffle(seed=args.seed)
+ data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer)
+
+ if args.lr_scheduler_type == 'cosine_with_min_lr':
+ args.lr_scheduler_kwargs = {'min_lr_rate': 0.1}
+ if args.lr_scheduler_type == 'warmup_stable_decay':
+ args.lr_scheduler_kwargs = {
+ 'num_stable_steps': args.max_steps * 0.9 - args.warmup_steps,
+ 'num_decay_steps': args.max_steps * 0.1
+ }
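+        # Illustrative numbers: with max_steps=20480 and warmup_steps=1024 this gives
+        # 20480 * 0.9 - 1024 = 17408 stable steps and 20480 * 0.1 = 2048 decay steps.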
+
+ trainer = Trainer(
+ model=model,
+ args=args,
+ tokenizer=tokenizer,
+ data_collator=data_collator,
+ callbacks=[LogCallback()],
+ train_dataset=dataset
+ )
+
+ results = trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
+ trainer.save_model()
+ tokenizer.save_pretrained(trainer.args.output_dir)
+
+ trainer.log_metrics("train", results.metrics)
+ trainer.save_metrics("train", results.metrics)
+ trainer.save_state()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/special_tokens_map.json b/special_tokens_map.json
new file mode 100644
index 0000000000000000000000000000000000000000..72ecfeeb7e14d244c936169d2ed139eeae235ef1
--- /dev/null
+++ b/special_tokens_map.json
@@ -0,0 +1,24 @@
+{
+ "bos_token": {
+    "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+    "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+  "pad_token": "</s>",
+ "unk_token": {
+    "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+}
diff --git a/tokenizer.json b/tokenizer.json
new file mode 100644
index 0000000000000000000000000000000000000000..65d75227e9e5faa60d4f81dee357d9222a450184
--- /dev/null
+++ b/tokenizer.json
@@ -0,0 +1,268053 @@
+{
+ "version": "1.0",
+ "truncation": null,
+ "padding": null,
+ "added_tokens": [
+ {
+ "id": 0,
+      "content": "<unk>",
+ "single_word": false,
+ "lstrip": false,
+ "rstrip": false,
+ "normalized": false,
+ "special": true
+ },
+ {
+ "id": 1,
+      "content": "<s>",
+ "single_word": false,
+ "lstrip": false,
+ "rstrip": false,
+ "normalized": false,
+ "special": true
+ },
+ {
+ "id": 2,
+      "content": "</s>",
+ "single_word": false,
+ "lstrip": false,
+ "rstrip": false,
+ "normalized": false,
+ "special": true
+ }
+ ],
+ "normalizer": null,
+ "pre_tokenizer": {
+ "type": "Metaspace",
+ "replacement": "▁",
+ "prepend_scheme": "first",
+ "split": false
+ },
+ "post_processor": {
+ "type": "TemplateProcessing",
+ "single": [
+ {
+ "SpecialToken": {
+          "id": "<s>",
+ "type_id": 0
+ }
+ },
+ {
+ "Sequence": {
+ "id": "A",
+ "type_id": 0
+ }
+ }
+ ],
+ "pair": [
+ {
+ "SpecialToken": {
+          "id": "<s>",
+ "type_id": 0
+ }
+ },
+ {
+ "Sequence": {
+ "id": "A",
+ "type_id": 0
+ }
+ },
+ {
+ "SpecialToken": {
+          "id": "<s>",
+ "type_id": 1
+ }
+ },
+ {
+ "Sequence": {
+ "id": "B",
+ "type_id": 1
+ }
+ }
+ ],
+ "special_tokens": {
+      "<s>": {
+        "id": "<s>",
+ "ids": [
+ 1
+ ],
+ "tokens": [
+          "<s>"
+ ]
+ }
+ }
+ },
+ "decoder": {
+ "type": "Sequence",
+ "decoders": [
+ {
+ "type": "Replace",
+ "pattern": {
+ "String": "▁"
+ },
+ "content": " "
+ },
+ {
+ "type": "ByteFallback"
+ },
+ {
+ "type": "Fuse"
+ },
+ {
+ "type": "Strip",
+ "content": " ",
+ "start": 1,
+ "stop": 0
+ }
+ ]
+ },
+ "model": {
+ "type": "BPE",
+ "dropout": null,
+    "unk_token": "<unk>",
+ "continuing_subword_prefix": null,
+ "end_of_word_suffix": null,
+ "fuse_unk": true,
+ "byte_fallback": true,
+ "ignore_merges": false,
+ "vocab": {
+      "<unk>": 0,
+      "<s>": 1,
+      "</s>": 2,
+ "<0x00>": 3,
+ "<0x01>": 4,
+ "<0x02>": 5,
+ "<0x03>": 6,
+ "<0x04>": 7,
+ "<0x05>": 8,
+ "<0x06>": 9,
+ "<0x07>": 10,
+ "<0x08>": 11,
+ "<0x09>": 12,
+ "<0x0A>": 13,
+ "<0x0B>": 14,
+ "<0x0C>": 15,
+ "<0x0D>": 16,
+ "<0x0E>": 17,
+ "<0x0F>": 18,
+ "<0x10>": 19,
+ "<0x11>": 20,
+ "<0x12>": 21,
+ "<0x13>": 22,
+ "<0x14>": 23,
+ "<0x15>": 24,
+ "<0x16>": 25,
+ "<0x17>": 26,
+ "<0x18>": 27,
+ "<0x19>": 28,
+ "<0x1A>": 29,
+ "<0x1B>": 30,
+ "<0x1C>": 31,
+ "<0x1D>": 32,
+ "<0x1E>": 33,
+ "<0x1F>": 34,
+ "<0x20>": 35,
+ "<0x21>": 36,
+ "<0x22>": 37,
+ "<0x23>": 38,
+ "<0x24>": 39,
+ "<0x25>": 40,
+ "<0x26>": 41,
+ "<0x27>": 42,
+ "<0x28>": 43,
+ "<0x29>": 44,
+ "<0x2A>": 45,
+ "<0x2B>": 46,
+ "<0x2C>": 47,
+ "<0x2D>": 48,
+ "<0x2E>": 49,
+ "<0x2F>": 50,
+ "<0x30>": 51,
+ "<0x31>": 52,
+ "<0x32>": 53,
+ "<0x33>": 54,
+ "<0x34>": 55,
+ "<0x35>": 56,
+ "<0x36>": 57,
+ "<0x37>": 58,
+ "<0x38>": 59,
+ "<0x39>": 60,
+ "<0x3A>": 61,
+ "<0x3B>": 62,
+ "<0x3C>": 63,
+ "<0x3D>": 64,
+ "<0x3E>": 65,
+ "<0x3F>": 66,
+ "<0x40>": 67,
+ "<0x41>": 68,
+ "<0x42>": 69,
+ "<0x43>": 70,
+ "<0x44>": 71,
+ "<0x45>": 72,
+ "<0x46>": 73,
+ "<0x47>": 74,
+ "<0x48>": 75,
+ "<0x49>": 76,
+ "<0x4A>": 77,
+ "<0x4B>": 78,
+ "<0x4C>": 79,
+ "<0x4D>": 80,
+ "<0x4E>": 81,
+ "<0x4F>": 82,
+ "<0x50>": 83,
+ "<0x51>": 84,
+ "<0x52>": 85,
+ "<0x53>": 86,
+ "<0x54>": 87,
+ "<0x55>": 88,
+ "<0x56>": 89,
+ "<0x57>": 90,
+ "<0x58>": 91,
+ "<0x59>": 92,
+ "<0x5A>": 93,
+ "<0x5B>": 94,
+ "<0x5C>": 95,
+ "<0x5D>": 96,
+ "<0x5E>": 97,
+ "<0x5F>": 98,
+ "<0x60>": 99,
+ "<0x61>": 100,
+ "<0x62>": 101,
+ "<0x63>": 102,
+ "<0x64>": 103,
+ "<0x65>": 104,
+ "<0x66>": 105,
+ "<0x67>": 106,
+ "<0x68>": 107,
+ "<0x69>": 108,
+ "<0x6A>": 109,
+ "<0x6B>": 110,
+ "<0x6C>": 111,
+ "<0x6D>": 112,
+ "<0x6E>": 113,
+ "<0x6F>": 114,
+ "<0x70>": 115,
+ "<0x71>": 116,
+ "<0x72>": 117,
+ "<0x73>": 118,
+ "<0x74>": 119,
+ "<0x75>": 120,
+ "<0x76>": 121,
+ "<0x77>": 122,
+ "<0x78>": 123,
+ "<0x79>": 124,
+ "<0x7A>": 125,
+ "<0x7B>": 126,
+ "<0x7C>": 127,
+ "<0x7D>": 128,
+ "<0x7E>": 129,
+ "<0x7F>": 130,
+ "<0x80>": 131,
+ "<0x81>": 132,
+ "<0x82>": 133,
+ "<0x83>": 134,
+ "<0x84>": 135,
+ "<0x85>": 136,
+ "<0x86>": 137,
+ "<0x87>": 138,
+ "<0x88>": 139,
+ "<0x89>": 140,
+ "<0x8A>": 141,
+ "<0x8B>": 142,
+ "<0x8C>": 143,
+ "<0x8D>": 144,
+ "<0x8E>": 145,
+ "<0x8F>": 146,
+ "<0x90>": 147,
+ "<0x91>": 148,
+ "<0x92>": 149,
+ "<0x93>": 150,
+ "<0x94>": 151,
+ "<0x95>": 152,
+ "<0x96>": 153,
+ "<0x97>": 154,
+ "<0x98>": 155,
+ "<0x99>": 156,
+ "<0x9A>": 157,
+ "<0x9B>": 158,
+ "<0x9C>": 159,
+ "<0x9D>": 160,
+ "<0x9E>": 161,
+ "<0x9F>": 162,
+ "<0xA0>": 163,
+ "<0xA1>": 164,
+ "<0xA2>": 165,
+ "<0xA3>": 166,
+ "<0xA4>": 167,
+ "<0xA5>": 168,
+ "<0xA6>": 169,
+ "<0xA7>": 170,
+ "<0xA8>": 171,
+ "<0xA9>": 172,
+ "<0xAA>": 173,
+ "<0xAB>": 174,
+ "<0xAC>": 175,
+ "<0xAD>": 176,
+ "<0xAE>": 177,
+ "<0xAF>": 178,
+ "<0xB0>": 179,
+ "<0xB1>": 180,
+ "<0xB2>": 181,
+ "<0xB3>": 182,
+ "<0xB4>": 183,
+ "<0xB5>": 184,
+ "<0xB6>": 185,
+ "<0xB7>": 186,
+ "<0xB8>": 187,
+ "<0xB9>": 188,
+ "<0xBA>": 189,
+ "<0xBB>": 190,
+ "<0xBC>": 191,
+ "<0xBD>": 192,
+ "<0xBE>": 193,
+ "<0xBF>": 194,
+ "<0xC0>": 195,
+ "<0xC1>": 196,
+ "<0xC2>": 197,
+ "<0xC3>": 198,
+ "<0xC4>": 199,
+ "<0xC5>": 200,
+ "<0xC6>": 201,
+ "<0xC7>": 202,
+ "<0xC8>": 203,
+ "<0xC9>": 204,
+ "<0xCA>": 205,
+ "<0xCB>": 206,
+ "<0xCC>": 207,
+ "<0xCD>": 208,
+ "<0xCE>": 209,
+ "<0xCF>": 210,
+ "<0xD0>": 211,
+ "<0xD1>": 212,
+ "<0xD2>": 213,
+ "<0xD3>": 214,
+ "<0xD4>": 215,
+ "<0xD5>": 216,
+ "<0xD6>": 217,
+ "<0xD7>": 218,
+ "<0xD8>": 219,
+ "<0xD9>": 220,
+ "<0xDA>": 221,
+ "<0xDB>": 222,
+ "<0xDC>": 223,
+ "<0xDD>": 224,
+ "<0xDE>": 225,
+ "<0xDF>": 226,
+ "<0xE0>": 227,
+ "<0xE1>": 228,
+ "<0xE2>": 229,
+ "<0xE3>": 230,
+ "<0xE4>": 231,
+ "<0xE5>": 232,
+ "<0xE6>": 233,
+ "<0xE7>": 234,
+ "<0xE8>": 235,
+ "<0xE9>": 236,
+ "<0xEA>": 237,
+ "<0xEB>": 238,
+ "<0xEC>": 239,
+ "<0xED>": 240,
+ "<0xEE>": 241,
+ "<0xEF>": 242,
+ "<0xF0>": 243,
+ "<0xF1>": 244,
+ "<0xF2>": 245,
+ "<0xF3>": 246,
+ "<0xF4>": 247,
+ "<0xF5>": 248,
+ "<0xF6>": 249,
+ "<0xF7>": 250,
+ "<0xF8>": 251,
+ "<0xF9>": 252,
+ "<0xFA>": 253,
+ "<0xFB>": 254,
+ "<0xFC>": 255,
+ "<0xFD>": 256,
+ "<0xFE>": 257,
+ "<0xFF>": 258,
+ "▁▁": 259,
+ "▁▁▁▁": 260,
+ "▁t": 261,
+ "in": 262,
+ "er": 263,
+ "▁a": 264,
+ "he": 265,
+ "on": 266,
+ "re": 267,
+ "▁s": 268,
+ "en": 269,
+ "at": 270,
+ "or": 271,
+ "▁the": 272,
+ "▁▁▁▁▁▁▁▁": 273,
+ "es": 274,
+ "▁w": 275,
+ "an": 276,
+ "▁c": 277,
+ "is": 278,
+ "it": 279,
+ "ou": 280,
+ "▁d": 281,
+ "al": 282,
+ "ar": 283,
+ "▁p": 284,
+ "▁f": 285,
+ "ed": 286,
+ "▁b": 287,
+ "ing": 288,
+ "▁o": 289,
+ "▁m": 290,
+ "le": 291,
+ "nd": 292,
+ "as": 293,
+ "ic": 294,
+ "▁h": 295,
+ "ion": 296,
+ "▁in": 297,
+ "▁to": 298,
+ "et": 299,
+ "om": 300,
+ "el": 301,
+ "▁of": 302,
+ "st": 303,
+ "▁and": 304,
+ "▁l": 305,
+ "▁th": 306,
+ "▁n": 307,
+ "ent": 308,
+ "il": 309,
+ "ct": 310,
+ "ro": 311,
+ "▁re": 312,
+ "id": 313,
+ "am": 314,
+ "▁I": 315,
+ "ad": 316,
+ "▁e": 317,
+ "▁S": 318,
+ "▁g": 319,
+ "▁T": 320,
+ "im": 321,
+ "ot": 322,
+ "ac": 323,
+ "ur": 324,
+ "▁(": 325,
+ "ig": 326,
+ "▁=": 327,
+ "ol": 328,
+ "ut": 329,
+ "▁A": 330,
+ "se": 331,
+ "▁u": 332,
+ "ve": 333,
+ "▁C": 334,
+ "if": 335,
+ "ow": 336,
+ "▁y": 337,
+ "ch": 338,
+ "ay": 339,
+ "▁de": 340,
+ "▁st": 341,
+ "▁|": 342,
+ "ver": 343,
+ ");": 344,
+ "▁\"": 345,
+ "ly": 346,
+ "▁be": 347,
+ "**": 348,
+ "▁is": 349,
+ "od": 350,
+ "▁M": 351,
+ "ation": 352,
+ "ul": 353,
+ "▁for": 354,
+ "▁▁▁▁▁": 355,
+ "▁on": 356,
+ "ag": 357,
+ "ce": 358,
+ "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁": 359,
+ "ter": 360,
+ "ir": 361,
+ "th": 362,
+ "▁v": 363,
+ "qu": 364,
+ "▁B": 365,
+ "em": 366,
+ "▁P": 367,
+ "▁you": 368,
+ "▁that": 369,
+ "un": 370,
+ "▁{": 371,
+ "ith": 372,
+ "ri": 373,
+ "est": 374,
+ "ab": 375,
+ "--": 376,
+ "ap": 377,
+ "▁it": 378,
+ "▁con": 379,
+ "ate": 380,
+ "us": 381,
+ "▁H": 382,
+ "um": 383,
+ "▁D": 384,
+ "os": 385,
+ "pe": 386,
+ "▁-": 387,
+ "▁wh": 388,
+ "▁al": 389,
+ "▁as": 390,
+ "and": 391,
+ "ist": 392,
+ "▁L": 393,
+ "▁W": 394,
+ "▁with": 395,
+ "▁an": 396,
+ "ere": 397,
+ "▁*": 398,
+ "▁R": 399,
+ "▁he": 400,
+ "▁F": 401,
+ "oc": 402,
+ "▁was": 403,
+ "ers": 404,
+ "ke": 405,
+ "out": 406,
+ "ht": 407,
+ "▁r": 408,
+ "ess": 409,
+ "op": 410,
+ "res": 411,
+ "ie": 412,
+ "▁E": 413,
+ "▁\\": 414,
+ "▁The": 415,
+ "end": 416,
+ "ld": 417,
+ "▁N": 418,
+ "ort": 419,
+ "▁G": 420,
+ "//": 421,
+ "▁#": 422,
+ "our": 423,
+ "te": 424,
+ "ill": 425,
+ "ain": 426,
+ "▁se": 427,
+ "▁▁▁▁▁▁": 428,
+ "▁$": 429,
+ "▁pro": 430,
+ "ore": 431,
+ "▁com": 432,
+ "ame": 433,
+ "tr": 434,
+ "▁ne": 435,
+ "rom": 436,
+ "ub": 437,
+ "▁at": 438,
+ "▁ex": 439,
+ "ant": 440,
+ "ue": 441,
+ "▁or": 442,
+ "▁}": 443,
+ "art": 444,
+ "ction": 445,
+ "▁k": 446,
+ "pt": 447,
+ "nt": 448,
+ "iv": 449,
+ "de": 450,
+ "▁O": 451,
+ "pl": 452,
+ "urn": 453,
+ "ight": 454,
+ "all": 455,
+ "▁this": 456,
+ "ser": 457,
+ "ave": 458,
+ "▁not": 459,
+ "▁are": 460,
+ "▁j": 461,
+ "▁le": 462,
+ "iz": 463,
+ "▁'": 464,
+ "age": 465,
+ "ment": 466,
+ "▁tr": 467,
+ "ack": 468,
+ "ust": 469,
+ "()": 470,
+ "->": 471,
+ "ity": 472,
+ "ine": 473,
+ "ould": 474,
+ "▁J": 475,
+ "og": 476,
+ "▁from": 477,
+ "▁we": 478,
+ "ell": 479,
+ "▁sh": 480,
+ "▁en": 481,
+ "ure": 482,
+ "port": 483,
+ "▁ch": 484,
+ "ne": 485,
+ "▁by": 486,
+ "per": 487,
+ "ard": 488,
+ "ass": 489,
+ "ge": 490,
+ "ak": 491,
+ "are": 492,
+ "ok": 493,
+ "av": 494,
+ "ive": 495,
+ "ff": 496,
+ "ies": 497,
+ "ath": 498,
+ "turn": 499,
+ "▁U": 500,
+ "int": 501,
+ "----": 502,
+ "▁im": 503,
+ "ost": 504,
+ "ial": 505,
+ "▁have": 506,
+ "ind": 507,
+ "ip": 508,
+ "ans": 509,
+ "xt": 510,
+ "▁do": 511,
+ "cl": 512,
+ "▁if": 513,
+ "con": 514,
+ "ia": 515,
+ "▁his": 516,
+ "ult": 517,
+ "rou": 518,
+ "▁su": 519,
+ "ra": 520,
+ "▁un": 521,
+ "able": 522,
+ "▁<": 523,
+ "▁K": 524,
+ "ome": 525,
+ "▁qu": 526,
+ "get": 527,
+ "▁me": 528,
+ "ast": 529,
+ "ect": 530,
+ "▁##": 531,
+ "to": 532,
+ "▁cl": 533,
+ "▁ab": 534,
+ "ice": 535,
+ "ire": 536,
+ "ber": 537,
+ "one": 538,
+ "ich": 539,
+ "hen": 540,
+ "▁can": 541,
+ "▁Th": 542,
+ "▁la": 543,
+ "▁all": 544,
+ "ime": 545,
+ "ile": 546,
+ "ide": 547,
+ "\",": 548,
+ "▁pl": 549,
+ "▁V": 550,
+ "ru": 551,
+ "orm": 552,
+ "▁had": 553,
+ "ud": 554,
+ "ase": 555,
+ "ord": 556,
+ "),": 557,
+ "▁▁▁▁▁▁▁▁▁▁▁▁": 558,
+ "▁her": 559,
+ "▁In": 560,
+ "ace": 561,
+ "▁but": 562,
+ "ata": 563,
+ "::": 564,
+ "****": 565,
+ "ong": 566,
+ "▁&": 567,
+ "..": 568,
+ "▁▁▁▁▁▁▁▁▁▁▁▁▁": 569,
+ "ite": 570,
+ "ype": 571,
+ "act": 572,
+ "ode": 573,
+ "▁your": 574,
+ "▁out": 575,
+ "▁go": 576,
+ "lic": 577,
+ "ally": 578,
+ "▁so": 579,
+ "ork": 580,
+ "au": 581,
+ "▁up": 582,
+ "▁_": 583,
+ "ll": 584,
+ "==": 585,
+ "▁my": 586,
+ "pp": 587,
+ "cc": 588,
+ "▁//": 589,
+ "▁they": 590,
+ "gh": 591,
+ "▁us": 592,
+ "ib": 593,
+ "ions": 594,
+ "ach": 595,
+ "ens": 596,
+ "▁ar": 597,
+ "ob": 598,
+ "elf": 599,
+ "ook": 600,
+ "ated": 601,
+ "ang": 602,
+ "ign": 603,
+ "▁return": 604,
+ "▁res": 605,
+ "ck": 606,
+ "ous": 607,
+ "ст": 608,
+ ").": 609,
+ "▁п": 610,
+ ".\"": 611,
+ "на": 612,
+ "▁i": 613,
+ "ail": 614,
+ "ep": 615,
+ "▁ad": 616,
+ "ance": 617,
+ "(\"": 618,
+ "▁**": 619,
+ "ther": 620,
+ "ake": 621,
+ "▁will": 622,
+ "▁comp": 623,
+ "▁one": 624,
+ "▁get": 625,
+ "ov": 626,
+ "▁Y": 627,
+ "ary": 628,
+ "ock": 629,
+ "▁she": 630,
+ "che": 631,
+ "ft": 632,
+ "▁new": 633,
+ "▁des": 634,
+ "▁li": 635,
+ "ence": 636,
+ "▁sa": 637,
+ "ress": 638,
+ "▁el": 639,
+ "▁und": 640,
+ "eg": 641,
+ "fer": 642,
+ "ry": 643,
+ "ear": 644,
+ "ose": 645,
+ "very": 646,
+ "',": 647,
+ "▁+": 648,
+ "▁в": 649,
+ "▁He": 650,
+ "ublic": 651,
+ "▁their": 652,
+ "ize": 653,
+ "▁were": 654,
+ "ink": 655,
+ "own": 656,
+ "In": 657,
+ "{\\": 658,
+ "▁has": 659,
+ "▁per": 660,
+ "▁It": 661,
+ "▁St": 662,
+ "her": 663,
+ "ject": 664,
+ "ра": 665,
+ "ild": 666,
+ "so": 667,
+ "▁sp": 668,
+ "ни": 669,
+ "du": 670,
+ "row": 671,
+ "alue": 672,
+ "set": 673,
+ "form": 674,
+ "com": 675,
+ "▁man": 676,
+ "ont": 677,
+ "ull": 678,
+ "▁cont": 679,
+ "▁more": 680,
+ "ick": 681,
+ "▁would": 682,
+ "▁ev": 683,
+ "▁about": 684,
+ "ition": 685,
+ "▁z": 686,
+ "ound": 687,
+ "ree": 688,
+ "▁Ch": 689,
+ "▁which": 690,
+ "io": 691,
+ "();": 692,
+ "▁who": 693,
+ "err": 694,
+ "ory": 695,
+ "ount": 696,
+ "ations": 697,
+ "▁с": 698,
+ "ring": 699,
+ "": 700,
+ "▁fe": 701,
+ "ко": 702,
+ "но": 703,
+ "▁dis": 704,
+ "ma": 705,
+ "▁them": 706,
+ "▁any": 707,
+ "▁no": 708,
+ "--------": 709,
+ "▁pre": 710,
+ "▁te": 711,
+ "▁ro": 712,
+ "▁him": 713,
+ "▁:": 714,
+ "up": 715,
+ "▁int": 716,
+ "▁ag": 717,
+ "St": 718,
+ "ark": 719,
+ "ex": 720,
+ "ph": 721,
+ "ient": 722,
+ "ely": 723,
+ "▁pr": 724,
+ "ER": 725,
+ "▁import": 726,
+ "▁time": 727,
+ "ро": 728,
+ "pro": 729,
+ "User": 730,
+ "lo": 731,
+ "▁/": 732,
+ "▁[": 733,
+ "ors": 734,
+ "=\"": 735,
+ "▁there": 736,
+ "▁like": 737,
+ "old": 738,
+ "▁when": 739,
+ "vers": 740,
+ "▁some": 741,
+ "ings": 742,
+ "))": 743,
+ "▁part": 744,
+ "ical": 745,
+ "▁fun": 746,
+ "▁kn": 747,
+ "ays": 748,
+ "ier": 749,
+ "▁been": 750,
+ "ove": 751,
+ "▁sc": 752,
+ "ian": 753,
+ "▁over": 754,
+ "iel": 755,
+ "▁▁▁▁▁▁▁▁▁▁": 756,
+ "▁pe": 757,
+ "rib": 758,
+ "put": 759,
+ "ec": 760,
+ "eth": 761,
+ "aram": 762,
+ "app": 763,
+ "▁–": 764,
+ "▁stat": 765,
+ "pon": 766,
+ "▁what": 767,
+ "ption": 768,
+ "we": 769,
+ "ade": 770,
+ "▁work": 771,
+ "text": 772,
+ "▁said": 773,
+ "▁###": 774,
+ "IN": 775,
+ "▁just": 776,
+ "irst": 777,
+ "▁into": 778,
+ "▁const": 779,
+ "ource": 780,
+ "tt": 781,
+ "ps": 782,
+ "pr": 783,
+ "erv": 784,
+ "itt": 785,
+ "ug": 786,
+ "_{": 787,
+ "ents": 788,
+ "ish": 789,
+ "ener": 790,
+ "▁inter": 791,
+ "ple": 792,
+ "oll": 793,
+ "mer": 794,
+ "ater": 795,
+ "ool": 796,
+ "ef": 797,
+ "▁public": 798,
+ "▁other": 799,
+ "ре": 800,
+ "▁def": 801,
+ "▁@": 802,
+ "го": 803,
+ "oint": 804,
+ "▁off": 805,
+ "oid": 806,
+ "return": 807,
+ "▁set": 808,
+ "wo": 809,
+ "fter": 810,
+ "sh": 811,
+ "********": 812,
+ "▁our": 813,
+ "riv": 814,
+ "iss": 815,
+ "▁We": 816,
+ "ng": 817,
+ "▁ob": 818,
+ "ss": 819,
+ "gr": 820,
+ "▁than": 821,
+ "pect": 822,
+ "ied": 823,
+ "sc": 824,
+ "iew": 825,
+ "der": 826,
+ "yst": 827,
+ "ev": 828,
+ "▁could": 829,
+ "ann": 830,
+ "enc": 831,
+ "ON": 832,
+ "ix": 833,
+ "anc": 834,
+ "▁also": 835,
+ "reat": 836,
+ "▁am": 837,
+ "▁bec": 838,
+ "▁и": 839,
+ "ual": 840,
+ "pec": 841,
+ "▁.": 842,
+ "▁bl": 843,
+ "lect": 844,
+ "ople": 845,
+ "ys": 846,
+ "▁gr": 847,
+ "ict": 848,
+ "ik": 849,
+ "tring": 850,
+ "▁This": 851,
+ "▁back": 852,
+ "▁о": 853,
+ "▁fin": 854,
+ "atch": 855,
+ "Con": 856,
+ "('": 857,
+ "erm": 858,
+ "▁==": 859,
+ "__": 860,
+ "name": 861,
+ ",\"": 862,
+ "▁did": 863,
+ "ise": 864,
+ "▁only": 865,
+ "ruct": 866,
+ "les": 867,
+ "▁then": 868,
+ "ause": 869,
+ "ва": 870,
+ "▁its": 871,
+ "rit": 872,
+ "▁know": 873,
+ "ield": 874,
+ "▁class": 875,
+ "▁>": 876,
+ "▁em": 877,
+ "▁$\\": 878,
+ "▁year": 879,
+ "wn": 880,
+ "},": 881,
+ "▁del": 882,
+ "ale": 883,
+ "ty": 884,
+ "fig": 885,
+ "sp": 886,
+ "hed": 887,
+ "round": 888,
+ "ew": 889,
+ "▁di": 890,
+ "▁der": 891,
+ "ри": 892,
+ "red": 893,
+ "this": 894,
+ "let": 895,
+ "RE": 896,
+ "ax": 897,
+ "fr": 898,
+ "essage": 899,
+ "ough": 900,
+ "▁comm": 901,
+ "fo": 902,
+ "uch": 903,
+ "oy": 904,
+ "▁people": 905,
+ "ystem": 906,
+ "▁first": 907,
+ "▁function": 908,
+ "ange": 909,
+ "▁how": 910,
+ "▁et": 911,
+ "ah": 912,
+ "▁look": 913,
+ "то": 914,
+ "und": 915,
+ "▁under": 916,
+ "ка": 917,
+ "▁!": 918,
+ "ray": 919,
+ "ST": 920,
+ "ific": 921,
+ "ли": 922,
+ "read": 923,
+ "▁bet": 924,
+ "ious": 925,
+ "arg": 926,
+ "▁need": 927,
+ "math": 928,
+ "▁на": 929,
+ "ert": 930,
+ "▁op": 931,
+ "▁acc": 932,
+ "Pro": 933,
+ "▁est": 934,
+ "▁Un": 935,
+ "▁ent": 936,
+ "▁rec": 937,
+ "▁use": 938,
+ "ен": 939,
+ "▁par": 940,
+ "az": 941,
+ "▁д": 942,
+ "▁Wh": 943,
+ "self": 944,
+ "▁ke": 945,
+ "та": 946,
+ "▁want": 947,
+ "▁end": 948,
+ "▁don": 949,
+ "ek": 950,
+ "ren": 951,
+ "Name": 952,
+ "▁=>": 953,
+ "▁app": 954,
+ "▁que": 955,
+ "igh": 956,
+ "▁bu": 957,
+ "equ": 958,
+ "vel": 959,
+ "▁act": 960,
+ "cre": 961,
+ "AT": 962,
+ "▁var": 963,
+ "cess": 964,
+ "====": 965,
+ "Ex": 966,
+ "▁add": 967,
+ "▁mod": 968,
+ "ung": 969,
+ "▁where": 970,
+ "ning": 971,
+ "▁fl": 972,
+ "als": 973,
+ "tern": 974,
+ "}}": 975,
+ "▁Al": 976,
+ "▁pos": 977,
+ "ank": 978,
+ "▁ap": 979,
+ "eng": 980,
+ "▁“": 981,
+ "ble": 982,
+ "▁reg": 983,
+ "^{": 984,
+ "▁She": 985,
+ "▁*/": 986,
+ "ude": 987,
+ "add": 988,
+ "▁two": 989,
+ "▁col": 990,
+ "▁sm": 991,
+ "air": 992,
+ "▁may": 993,
+ "fore": 994,
+ "▁You": 995,
+ "rough": 996,
+ "▁che": 997,
+ "▁att": 998,
+ "oth": 999,
+ "ла": 1000,
+ "▁co": 1001,
+ "ates": 1002,
+ "▁rem": 1003,
+ "ood": 1004,
+ "Type": 1005,
+ "led": 1006,
+ "ful": 1007,
+ "▁self": 1008,
+ "of": 1009,
+ "▁Ar": 1010,
+ "que": 1011,
+ "▁every": 1012,
+ "ref": 1013,
+ "The": 1014,
+ "▁And": 1015,
+ "▁rel": 1016,
+ "OR": 1017,
+ "Id": 1018,
+ "▁even": 1019,
+ "EN": 1020,
+ "▁hand": 1021,
+ "ait": 1022,
+ "▁should": 1023,
+ "▁after": 1024,
+ "▁dif": 1025,
+ "ght": 1026,
+ "ife": 1027,
+ "ator": 1028,
+ "ash": 1029,
+ "ribut": 1030,
+ "umber": 1031,
+ "▁see": 1032,
+ "ms": 1033,
+ "▁call": 1034,
+ "yn": 1035,
+ "dd": 1036,
+ "▁es": 1037,
+ "▁make": 1038,
+ "other": 1039,
+ "▁—": 1040,
+ "\");": 1041,
+ "str": 1042,
+ "▁long": 1043,
+ "lement": 1044,
+ "▁wor": 1045,
+ "its": 1046,
+ "▁If": 1047,
+ "alse": 1048,
+ "ль": 1049,
+ "ward": 1050,
+ "▁по": 1051,
+ "val": 1052,
+ "ons": 1053,
+ "▁Z": 1054,
+ "▁now": 1055,
+ "data": 1056,
+ "amp": 1057,
+ "ense": 1058,
+ "▁through": 1059,
+ "▁down": 1060,
+ "att": 1061,
+ "▁static": 1062,
+ "ics": 1063,
+ "##": 1064,
+ "pos": 1065,
+ "▁void": 1066,
+ "aw": 1067,
+ "oun": 1068,
+ "▁way": 1069,
+ "ible": 1070,
+ "vent": 1071,
+ "ower": 1072,
+ "▁think": 1073,
+ "ts": 1074,
+ "*/": 1075,
+ "▁again": 1076,
+ "ating": 1077,
+ "те": 1078,
+ "ner": 1079,
+ "▁most": 1080,
+ "line": 1081,
+ "ym": 1082,
+ "▁sub": 1083,
+ "erson": 1084,
+ "▁requ": 1085,
+ "AL": 1086,
+ "AR": 1087,
+ "abel": 1088,
+ "ond": 1089,
+ "));": 1090,
+ "▁Se": 1091,
+ "▁But": 1092,
+ "alk": 1093,
+ "▁An": 1094,
+ "new": 1095,
+ "▁because": 1096,
+ "ger": 1097,
+ "ular": 1098,
+ "roup": 1099,
+ "ta": 1100,
+ "...": 1101,
+ "▁cons": 1102,
+ "▁right": 1103,
+ "▁fr": 1104,
+ "be": 1105,
+ "ily": 1106,
+ "ки": 1107,
+ "▁ph": 1108,
+ "ead": 1109,
+ "?\"": 1110,
+ "▁gu": 1111,
+ "▁else": 1112,
+ "▁som": 1113,
+ "rent": 1114,
+ "co": 1115,
+ "ement": 1116,
+ "▁str": 1117,
+ "ault": 1118,
+ "▁з": 1119,
+ "ло": 1120,
+ "sert": 1121,
+ "var": 1122,
+ "type": 1123,
+ "▁Com": 1124,
+ "ле": 1125,
+ "ins": 1126,
+ "me": 1127,
+ "way": 1128,
+ "ident": 1129,
+ "▁prov": 1130,
+ "▁м": 1131,
+ "▁true": 1132,
+ "▁Pro": 1133,
+ "fl": 1134,
+ "▁sl": 1135,
+ "▁As": 1136,
+ "}\\": 1137,
+ "ID": 1138,
+ "ues": 1139,
+ "▁inst": 1140,
+ "▁name": 1141,
+ "ox": 1142,
+ "▁)": 1143,
+ "li": 1144,
+ "ames": 1145,
+ "Res": 1146,
+ "▁sur": 1147,
+ "param": 1148,
+ "▁start": 1149,
+ "aj": 1150,
+ "SE": 1151,
+ "ask": 1152,
+ "IT": 1153,
+ "String": 1154,
+ "▁ass": 1155,
+ "▁play": 1156,
+ "ting": 1157,
+ "ton": 1158,
+ "▁before": 1159,
+ "▁pol": 1160,
+ "arch": 1161,
+ "▁well": 1162,
+ "Com": 1163,
+ "any": 1164,
+ "olog": 1165,
+ "▁err": 1166,
+ "▁these": 1167,
+ "ars": 1168,
+ "eb": 1169,
+ "▁br": 1170,
+ "▁incl": 1171,
+ "▁hel": 1172,
+ "ern": 1173,
+ "ody": 1174,
+ "во": 1175,
+ "▁ind": 1176,
+ "----------------": 1177,
+ "▁data": 1178,
+ "▁good": 1179,
+ "LE": 1180,
+ "],": 1181,
+ "▁av": 1182,
+ "▁ac": 1183,
+ "ider": 1184,
+ "не": 1185,
+ "▁Q": 1186,
+ "▁min": 1187,
+ "▁much": 1188,
+ "ci": 1189,
+ "els": 1190,
+ "▁cur": 1191,
+ "▁value": 1192,
+ "ery": 1193,
+ "uf": 1194,
+ "▁loc": 1195,
+ "reak": 1196,
+ "ative": 1197,
+ "imes": 1198,
+ "Cl": 1199,
+ "▁,": 1200,
+ "▁ser": 1201,
+ "▁die": 1202,
+ "▁trans": 1203,
+ "▁result": 1204,
+ "ext": 1205,
+ "▁aut": 1206,
+ "land": 1207,
+ "▁&&": 1208,
+ "Ch": 1209,
+ "ten": 1210,
+ "}$": 1211,
+ "▁type": 1212,
+ "cond": 1213,
+ "ices": 1214,
+ "▁very": 1215,
+ "▁own": 1216,
+ "▁fil": 1217,
+ "ities": 1218,
+ "▁produ": 1219,
+ "▁read": 1220,
+ "▁form": 1221,
+ "▁case": 1222,
+ "ather": 1223,
+ "ти": 1224,
+ "да": 1225,
+ "ер": 1226,
+ "Th": 1227,
+ "aut": 1228,
+ "▁spec": 1229,
+ "ij": 1230,
+ "bl": 1231,
+ "ility": 1232,
+ "▁é": 1233,
+ "▁er": 1234,
+ "▁does": 1235,
+ "▁here": 1236,
+ "the": 1237,
+ "ures": 1238,
+ "▁%": 1239,
+ "min": 1240,
+ "▁null": 1241,
+ "rap": 1242,
+ "\")": 1243,
+ "rr": 1244,
+ "List": 1245,
+ "right": 1246,
+ "▁User": 1247,
+ "UL": 1248,
+ "ational": 1249,
+ "▁being": 1250,
+ "AN": 1251,
+ "sk": 1252,
+ "▁car": 1253,
+ "ole": 1254,
+ "▁dist": 1255,
+ "plic": 1256,
+ "ollow": 1257,
+ "▁pres": 1258,
+ "▁such": 1259,
+ "ream": 1260,
+ "ince": 1261,
+ "gan": 1262,
+ "▁For": 1263,
+ "\":": 1264,
+ "son": 1265,
+ "rivate": 1266,
+ "▁years": 1267,
+ "▁serv": 1268,
+ "▁made": 1269,
+ "def": 1270,
+ ";\r": 1271,
+ "▁gl": 1272,
+ "▁bel": 1273,
+ "▁list": 1274,
+ "▁cor": 1275,
+ "▁det": 1276,
+ "ception": 1277,
+ "egin": 1278,
+ "▁б": 1279,
+ "▁char": 1280,
+ "trans": 1281,
+ "▁fam": 1282,
+ "▁!=": 1283,
+ "ouse": 1284,
+ "▁dec": 1285,
+ "ica": 1286,
+ "▁many": 1287,
+ "aking": 1288,
+ "▁à": 1289,
+ "▁sim": 1290,
+ "ages": 1291,
+ "uff": 1292,
+ "ased": 1293,
+ "man": 1294,
+ "▁Sh": 1295,
+ "iet": 1296,
+ "irect": 1297,
+ "▁Re": 1298,
+ "▁differ": 1299,
+ "▁find": 1300,
+ "ethod": 1301,
+ "▁\r": 1302,
+ "ines": 1303,
+ "▁inv": 1304,
+ "▁point": 1305,
+ "▁They": 1306,
+ "▁used": 1307,
+ "ctions": 1308,
+ "▁still": 1309,
+ "ió": 1310,
+ "ined": 1311,
+ "▁while": 1312,
+ "It": 1313,
+ "ember": 1314,
+ "▁say": 1315,
+ "▁help": 1316,
+ "▁cre": 1317,
+ "▁x": 1318,
+ "▁Tr": 1319,
+ "ument": 1320,
+ "▁sk": 1321,
+ "ought": 1322,
+ "ually": 1323,
+ "message": 1324,
+ "▁Con": 1325,
+ "▁mon": 1326,
+ "ared": 1327,
+ "work": 1328,
+ "):": 1329,
+ "ister": 1330,
+ "arn": 1331,
+ "ized": 1332,
+ "Data": 1333,
+ "orn": 1334,
+ "▁head": 1335,
+ "DE": 1336,
+ "▁Le": 1337,
+ "▁person": 1338,
+ "ments": 1339,
+ "ength": 1340,
+ "▁false": 1341,
+ "▁med": 1342,
+ "▁De": 1343,
+ "ache": 1344,
+ "ited": 1345,
+ "▁let": 1346,
+ "▁show": 1347,
+ "▁same": 1348,
+ "uss": 1349,
+ "▁gener": 1350,
+ "▁у": 1351,
+ "cur": 1352,
+ "▁real": 1353,
+ "ced": 1354,
+ "\">": 1355,
+ "struct": 1356,
+ "begin": 1357,
+ "cept": 1358,
+ "▁bo": 1359,
+ "ired": 1360,
+ "▁Fr": 1361,
+ "▁stud": 1362,
+ "dev": 1363,
+ "Ar": 1364,
+ "(\\": 1365,
+ "▁Cl": 1366,
+ "ween": 1367,
+ "▁too": 1368,
+ "▁test": 1369,
+ "▁day": 1370,
+ "oh": 1371,
+ "▁follow": 1372,
+ "ature": 1373,
+ "ze": 1374,
+ "ien": 1375,
+ "reg": 1376,
+ "ces": 1377,
+ "uring": 1378,
+ "amb": 1379,
+ "ina": 1380,
+ "cri": 1381,
+ "▁ed": 1382,
+ "SS": 1383,
+ "uck": 1384,
+ "▁/*": 1385,
+ "CT": 1386,
+ "▁There": 1387,
+ "▁take": 1388,
+ "par": 1389,
+ "ule": 1390,
+ "cal": 1391,
+ "for": 1392,
+ "****************": 1393,
+ "source": 1394,
+ "▁those": 1395,
+ "col": 1396,
+ "▁eff": 1397,
+ "mod": 1398,
+ "cont": 1399,
+ "}{": 1400,
+ "▁around": 1401,
+ "press": 1402,
+ "by": 1403,
+ "▁going": 1404,
+ "ponse": 1405,
+ "▁С": 1406,
+ "▁line": 1407,
+ "date": 1408,
+ "code": 1409,
+ "['": 1410,
+ "▁life": 1411,
+ "ason": 1412,
+ "▁using": 1413,
+ "▁val": 1414,
+ "▁du": 1415,
+ "yp": 1416,
+ "▁▁▁▁▁▁▁▁▁▁▁▁▁▁": 1417,
+ "▁On": 1418,
+ "▁found": 1419,
+ "olut": 1420,
+ "']": 1421,
+ "arent": 1422,
+ "▁string": 1423,
+ "▁met": 1424,
+ "▁wr": 1425,
+ "ush": 1426,
+ "string": 1427,
+ "size": 1428,
+ "▁ver": 1429,
+ "▁each": 1430,
+ "value": 1431,
+ "▁last": 1432,
+ "▁got": 1433,
+ "ven": 1434,
+ "back": 1435,
+ "Set": 1436,
+ "ey": 1437,
+ "rol": 1438,
+ "▁cr": 1439,
+ "thing": 1440,
+ "ret": 1441,
+ "és": 1442,
+ "ism": 1443,
+ "▁between": 1444,
+ "Ob": 1445,
+ "ething": 1446,
+ "mp": 1447,
+ "▁lo": 1448,
+ "ats": 1449,
+ "▁New": 1450,
+ "ви": 1451,
+ "ado": 1452,
+ "dex": 1453,
+ "ди": 1454,
+ "▁pass": 1455,
+ "wh": 1456,
+ "▁den": 1457,
+ "Get": 1458,
+ "apt": 1459,
+ "▁ask": 1460,
+ "▁sup": 1461,
+ "Value": 1462,
+ "ны": 1463,
+ "▁try": 1464,
+ "lation": 1465,
+ "day": 1466,
+ "ness": 1467,
+ "ets": 1468,
+ "▁exper": 1469,
+ "Tr": 1470,
+ "▁Mar": 1471,
+ "serv": 1472,
+ "br": 1473,
+ "▁number": 1474,
+ "inal": 1475,
+ "cent": 1476,
+ "/*": 1477,
+ "not": 1478,
+ "ional": 1479,
+ "▁final": 1480,
+ "')": 1481,
+ "▁run": 1482,
+ "over": 1483,
+ "▁never": 1484,
+ "uc": 1485,
+ "▁high": 1486,
+ "yle": 1487,
+ "▁ins": 1488,
+ "▁best": 1489,
+ "ittle": 1490,
+ "ric": 1491,
+ "▁sign": 1492,
+ "▁dem": 1493,
+ "iness": 1494,
+ "gy": 1495,
+ "▁war": 1496,
+ "ished": 1497,
+ "▁giv": 1498,
+ "key": 1499,
+ "▁X": 1500,
+ "($": 1501,
+ "▁child": 1502,
+ "less": 1503,
+ "ways": 1504,
+ "incl": 1505,
+ "rop": 1506,
+ "raw": 1507,
+ "://": 1508,
+ "▁«": 1509,
+ "no": 1510,
+ "indow": 1511,
+ "fe": 1512,
+ "riend": 1513,
+ "▁les": 1514,
+ "▁los": 1515,
+ "file": 1516,
+ "formation": 1517,
+ "ccess": 1518,
+ "▁В": 1519,
+ "na": 1520,
+ "▁il": 1521,
+ "ision": 1522,
+ "ler": 1523,
+ "▁art": 1524,
+ "Cont": 1525,
+ "▁world": 1526,
+ "▁turn": 1527,
+ "▁really": 1528,
+ "▁Ex": 1529,
+ "ма": 1530,
+ "▁П": 1531,
+ "ters": 1532,
+ "arget": 1533,
+ "Err": 1534,
+ "▁happ": 1535,
+ "time": 1536,
+ "▁So": 1537,
+ "div": 1538,
+ "▁didn": 1539,
+ "ada": 1540,
+ "oot": 1541,
+ "})": 1542,
+ "▁sch": 1543,
+ "▁cle": 1544,
+ "▁something": 1545,
+ "().": 1546,
+ "▁cour": 1547,
+ "ever": 1548,
+ "ants": 1549,
+ "▁?": 1550,
+ "To": 1551,
+ "▁`": 1552,
+ "try": 1553,
+ "ux": 1554,
+ "ais": 1555,
+ "ross": 1556,
+ "hip": 1557,
+ "▁rep": 1558,
+ "label": 1559,
+ "▁both": 1560,
+ "*,": 1561,
+ "ott": 1562,
+ "ми": 1563,
+ "ane": 1564,
+ "▁open": 1565,
+ "ww": 1566,
+ "▁come": 1567,
+ "▁ext": 1568,
+ "rem": 1569,
+ "_{\\": 1570,
+ "▁old": 1571,
+ "ched": 1572,
+ "._": 1573,
+ "ME": 1574,
+ "ify": 1575,
+ "gg": 1576,
+ "Col": 1577,
+ "view": 1578,
+ "▁bus": 1579,
+ "▁must": 1580,
+ "▁different": 1581,
+ "log": 1582,
+ "ists": 1583,
+ "roll": 1584,
+ "ai": 1585,
+ "▁за": 1586,
+ "▁system": 1587,
+ "ivers": 1588,
+ "atus": 1589,
+ "ote": 1590,
+ "med": 1591,
+ "].": 1592,
+ "akes": 1593,
+ "RO": 1594,
+ "▁cent": 1595,
+ "gram": 1596,
+ "▁private": 1597,
+ "▁great": 1598,
+ "\";": 1599,
+ "opy": 1600,
+ "▁feel": 1601,
+ "▁How": 1602,
+ "////": 1603,
+ "IC": 1604,
+ "▁dr": 1605,
+ "ains": 1606,
+ "lock": 1607,
+ "En": 1608,
+ "▁Sch": 1609,
+ "▁mat": 1610,
+ "▁home": 1611,
+ "perty": 1612,
+ "test": 1613,
+ "loc": 1614,
+ "▁wom": 1615,
+ "sw": 1616,
+ "arly": 1617,
+ "▁En": 1618,
+ "▁ко": 1619,
+ "den": 1620,
+ "ста": 1621,
+ "▁а": 1622,
+ "eter": 1623,
+ "▁includ": 1624,
+ "ULL": 1625,
+ "▁mem": 1626,
+ "▁po": 1627,
+ "▁little": 1628,
+ "▁arg": 1629,
+ "▁},": 1630,
+ "include": 1631,
+ "eta": 1632,
+ "▁place": 1633,
+ "idth": 1634,
+ "ustom": 1635,
+ "▁||": 1636,
+ "▁tem": 1637,
+ "ried": 1638,
+ "▁fact": 1639,
+ "ience": 1640,
+ "▁Pl": 1641,
+ "opt": 1642,
+ "ele": 1643,
+ "go": 1644,
+ "AC": 1645,
+ "inter": 1646,
+ "========": 1647,
+ "(),": 1648,
+ "ots": 1649,
+ "ral": 1650,
+ "ique": 1651,
+ "aving": 1652,
+ "ml": 1653,
+ "▁thought": 1654,
+ "frac": 1655,
+ "▁care": 1656,
+ "());": 1657,
+ "▁put": 1658,
+ "▁might": 1659,
+ "▁Amer": 1660,
+ "▁(!": 1661,
+ "ample": 1662,
+ "alth": 1663,
+ "▁few": 1664,
+ "▁state": 1665,
+ "sub": 1666,
+ "▁Or": 1667,
+ "];": 1668,
+ "▁size": 1669,
+ "▁Sp": 1670,
+ "▁without": 1671,
+ "▁poss": 1672,
+ "eq": 1673,
+ "play": 1674,
+ "▁expect": 1675,
+ "▁second": 1676,
+ "▁String": 1677,
+ "uild": 1678,
+ "▁next": 1679,
+ "++": 1680,
+ "requ": 1681,
+ "▁All": 1682,
+ "▁men": 1683,
+ "▁When": 1684,
+ "iter": 1685,
+ "ament": 1686,
+ "net": 1687,
+ "▁К": 1688,
+ "ron": 1689,
+ "aint": 1690,
+ "▁Is": 1691,
+ "ве": 1692,
+ "pend": 1693,
+ "translation": 1694,
+ "▁го": 1695,
+ "че": 1696,
+ "▁van": 1697,
+ "▁another": 1698,
+ "▁ret": 1699,
+ "▁La": 1700,
+ "Mod": 1701,
+ "ION": 1702,
+ "list": 1703,
+ "▁post": 1704,
+ "da": 1705,
+ "ware": 1706,
+ "▁word": 1707,
+ "Error": 1708,
+ "▁seem": 1709,
+ "▁contin": 1710,
+ "atic": 1711,
+ "▁three": 1712,
+ "Object": 1713,
+ "▁partic": 1714,
+ "$.": 1715,
+ "▁mark": 1716,
+ "▁vis": 1717,
+ "rc": 1718,
+ "▁sw": 1719,
+ "ptions": 1720,
+ "▁break": 1721,
+ "▁things": 1722,
+ "ute": 1723,
+ "ui": 1724,
+ "▁That": 1725,
+ "urs": 1726,
+ "gl": 1727,
+ "ру": 1728,
+ "▁file": 1729,
+ "use": 1730,
+ "igned": 1731,
+ "part": 1732,
+ "Un": 1733,
+ "▁equ": 1734,
+ "(&": 1735,
+ "▁lead": 1736,
+ "rm": 1737,
+ "ained": 1738,
+ "▁Be": 1739,
+ "path": 1740,
+ "▁small": 1741,
+ "ager": 1742,
+ "▁always": 1743,
+ "▁El": 1744,
+ "▁order": 1745,
+ "▁ey": 1746,
+ "▁won": 1747,
+ "ape": 1748,
+ "▁left": 1749,
+ "ava": 1750,
+ "item": 1751,
+ "hor": 1752,
+ "▁away": 1753,
+ "bb": 1754,
+ "fun": 1755,
+ "▁Ind": 1756,
+ "mb": 1757,
+ "▁struct": 1758,
+ "▁process": 1759,
+ "▁support": 1760,
+ ");\r": 1761,
+ "ión": 1762,
+ "LO": 1763,
+ "▁oper": 1764,
+ "UT": 1765,
+ "▁·": 1766,
+ "PE": 1767,
+ "load": 1768,
+ "off": 1769,
+ "▁No": 1770,
+ "ives": 1771,
+ "ican": 1772,
+ "▁ve": 1773,
+ "action": 1774,
+ "';": 1775,
+ "▁vo": 1776,
+ "$,": 1777,
+ "▁Gr": 1778,
+ "pre": 1779,
+ "ny": 1780,
+ "aining": 1781,
+ "ior": 1782,
+ "init": 1783,
+ "lection": 1784,
+ "arm": 1785,
+ "umn": 1786,
+ "ags": 1787,
+ "ци": 1788,
+ "ско": 1789,
+ "version": 1790,
+ "▁To": 1791,
+ "▁ref": 1792,
+ "stand": 1793,
+ "▁At": 1794,
+ "ift": 1795,
+ "▁ein": 1796,
+ "face": 1797,
+ "bo": 1798,
+ "ified": 1799,
+ "ved": 1800,
+ "sum": 1801,
+ "une": 1802,
+ "ital": 1803,
+ "ump": 1804,
+ "comm": 1805,
+ "▁mov": 1806,
+ "elt": 1807,
+ "▁von": 1808,
+ "velop": 1809,
+ "ctor": 1810,
+ "head": 1811,
+ "cle": 1812,
+ "▁build": 1813,
+ "inc": 1814,
+ ".'": 1815,
+ "bs": 1816,
+ "info": 1817,
+ "chn": 1818,
+ "▁week": 1819,
+ "▁book": 1820,
+ "HE": 1821,
+ "bar": 1822,
+ "icense": 1823,
+ "▁What": 1824,
+ "▁quest": 1825,
+ "urch": 1826,
+ "ato": 1827,
+ "left": 1828,
+ "▁mar": 1829,
+ "▁top": 1830,
+ "FF": 1831,
+ "▁friend": 1832,
+ "▁beh": 1833,
+ "▁field": 1834,
+ "▁against": 1835,
+ "ract": 1836,
+ "ization": 1837,
+ "user": 1838,
+ "chen": 1839,
+ "▁keep": 1840,
+ "AD": 1841,
+ "itor": 1842,
+ "▁non": 1843,
+ "ird": 1844,
+ "ope": 1845,
+ "▁rest": 1846,
+ "▁dev": 1847,
+ "▁__": 1848,
+ "▁una": 1849,
+ "▁term": 1850,
+ "IS": 1851,
+ "▁pop": 1852,
+ "rist": 1853,
+ "▁since": 1854,
+ "ves": 1855,
+ "▁hard": 1856,
+ "pi": 1857,
+ "util": 1858,
+ "▁soc": 1859,
+ "ene": 1860,
+ "Exception": 1861,
+ "▁local": 1862,
+ "▁direct": 1863,
+ "▁sure": 1864,
+ "▁bro": 1865,
+ "▁da": 1866,
+ "▁": 1867,
+ "▁current": 1868,
+ "':": 1869,
+ "Wh": 1870,
+ "▁information": 1871,
+ "▁ide": 1872,
+ "▁better": 1873,
+ "Text": 1874,
+ "raph": 1875,
+ "▁stand": 1876,
+ "▁check": 1877,
+ "▁к": 1878,
+ "▁na": 1879,
+ "((": 1880,
+ "outh": 1881,
+ "aps": 1882,
+ "▁unt": 1883,
+ "bf": 1884,
+ "▁conf": 1885,
+ "▁spe": 1886,
+ "itle": 1887,
+ "▁Col": 1888,
+ "class": 1889,
+ "ural": 1890,
+ "bers": 1891,
+ "MA": 1892,
+ "ession": 1893,
+ "▁М": 1894,
+ "Info": 1895,
+ "▁Br": 1896,
+ "▁eas": 1897,
+ "ervice": 1898,
+ "aus": 1899,
+ "ari": 1900,
+ "по": 1901,
+ "▁coun": 1902,
+ "де": 1903,
+ "())": 1904,
+ "ling": 1905,
+ "ED": 1906,
+ "ably": 1907,
+ "▁pat": 1908,
+ "org": 1909,
+ "▁id": 1910,
+ "▁г": 1911,
+ "▁tell": 1912,
+ "lex": 1913,
+ "▁allow": 1914,
+ "reen": 1915,
+ "my": 1916,
+ "▁consider": 1917,
+ "▁team": 1918,
+ "lease": 1919,
+ "htt": 1920,
+ "▁Pr": 1921,
+ "/**": 1922,
+ "▁sing": 1923,
+ "Requ": 1924,
+ "Re": 1925,
+ "ides": 1926,
+ "ches": 1927,
+ "▁object": 1928,
+ "ially": 1929,
+ "By": 1930,
+ "ся": 1931,
+ "ided": 1932,
+ "▁free": 1933,
+ "▁proble": 1934,
+ "cite": 1935,
+ "▁);": 1936,
+ "ission": 1937,
+ "▁during": 1938,
+ "▁--": 1939,
+ "ither": 1940,
+ "ля": 1941,
+ "▁leg": 1942,
+ "▁sit": 1943,
+ "ically": 1944,
+ "▁key": 1945,
+ "leg": 1946,
+ "tra": 1947,
+ "▁mom": 1948,
+ "▁expl": 1949,
+ "▁develop": 1950,
+ "▁event": 1951,
+ "▁NULL": 1952,
+ "ohn": 1953,
+ "▁///": 1954,
+ "▁business": 1955,
+ "ча": 1956,
+ "▁prof": 1957,
+ "error": 1958,
+ "▁por": 1959,
+ "▁commun": 1960,
+ "Ind": 1961,
+ "ium": 1962,
+ "Test": 1963,
+ "▁Ad": 1964,
+ "ouble": 1965,
+ "▁son": 1966,
+ "rite": 1967,
+ "ready": 1968,
+ "▁{\r": 1969,
+ "▁thing": 1970,
+ "ня": 1971,
+ "▁Ph": 1972,
+ "ped": 1973,
+ "сь": 1974,
+ "ived": 1975,
+ "You": 1976,
+ "arl": 1977,
+ "const": 1978,
+ "../": 1979,
+ "Se": 1980,
+ "Sh": 1981,
+ "▁power": 1982,
+ "ribute": 1983,
+ "▁My": 1984,
+ "▁talk": 1985,
+ "itch": 1986,
+ "▁called": 1987,
+ "▁came": 1988,
+ "▁belie": 1989,
+ "UR": 1990,
+ "Add": 1991,
+ "▁Res": 1992,
+ "aster": 1993,
+ "ella": 1994,
+ "obal": 1995,
+ "▁until": 1996,
+ "▁hum": 1997,
+ "CO": 1998,
+ "ately": 1999,
+ "####": 2000,
+ "public": 2001,
+ "[]": 2002,
+ "▁room": 2003,
+ "len": 2004,
+ "▁family": 2005,
+ "por": 2006,
+ "▁program": 2007,
+ "▁hist": 2008,
+ "▁mus": 2009,
+ "arge": 2010,
+ "oney": 2011,
+ "Im": 2012,
+ "else": 2013,
+ "ails": 2014,
+ "af": 2015,
+ "▁love": 2016,
+ "är": 2017,
+ "ases": 2018,
+ "pha": 2019,
+ "ours": 2020,
+ "dis": 2021,
+ "map": 2022,
+ "iver": 2023,
+ "ör": 2024,
+ "▁Bl": 2025,
+ "ateg": 2026,
+ "state": 2027,
+ "State": 2028,
+ "ertain": 2029,
+ "▁effect": 2030,
+ "print": 2031,
+ "▁big": 2032,
+ "index": 2033,
+ "▁pub": 2034,
+ "vert": 2035,
+ "ero": 2036,
+ "md": 2037,
+ "▁method": 2038,
+ "▁game": 2039,
+ "ries": 2040,
+ "lete": 2041,
+ "Item": 2042,
+ "ING": 2043,
+ "resent": 2044,
+ "ality": 2045,
+ "pty": 2046,
+ "ley": 2047,
+ "ocument": 2048,
+ "▁beg": 2049,
+ "TR": 2050,
+ "}.": 2051,
+ "▁school": 2052,
+ "hes": 2053,
+ "до": 2054,
+ "▁lot": 2055,
+ "▁took": 2056,
+ "▁adv": 2057,
+ "▁cap": 2058,
+ "MP": 2059,
+ "unk": 2060,
+ "▁light": 2061,
+ "▁later": 2062,
+ ".,": 2063,
+ "Key": 2064,
+ "itions": 2065,
+ "▁enough": 2066,
+ "▁/**": 2067,
+ "▁went": 2068,
+ "ão": 2069,
+ "▁though": 2070,
+ "▁group": 2071,
+ "▁mean": 2072,
+ "ски": 2073,
+ "AP": 2074,
+ "▁num": 2075,
+ "▁cond": 2076,
+ "ні": 2077,
+ "▁given": 2078,
+ "▁why": 2079,
+ "▁rece": 2080,
+ "▁side": 2081,
+ "▁far": 2082,
+ "Context": 2083,
+ "ме": 2084,
+ "▁log": 2085,
+ "View": 2086,
+ "▁<<": 2087,
+ "fil": 2088,
+ "aces": 2089,
+ "ency": 2090,
+ "oad": 2091,
+ "ered": 2092,
+ "▁product": 2093,
+ "ET": 2094,
+ "▁param": 2095,
+ "▁prote": 2096,
+ "tes": 2097,
+ "Time": 2098,
+ "je": 2099,
+ "olution": 2100,
+ "▁ра": 2101,
+ "▁month": 2102,
+ "ference": 2103,
+ "▁appe": 2104,
+ "▁face": 2105,
+ "ened": 2106,
+ "tract": 2107,
+ "▁less": 2108,
+ "AS": 2109,
+ "ée": 2110,
+ "▁give": 2111,
+ "▁kind": 2112,
+ "▁count": 2113,
+ "count": 2114,
+ "▁stop": 2115,
+ "▁gover": 2116,
+ "ka": 2117,
+ "▁error": 2118,
+ "ences": 2119,
+ "▁mil": 2120,
+ "alf": 2121,
+ "ync": 2122,
+ "vious": 2123,
+ "ho": 2124,
+ "▁night": 2125,
+ "era": 2126,
+ "▁про": 2127,
+ "▁sol": 2128,
+ "men": 2129,
+ "▁water": 2130,
+ "ering": 2131,
+ "▁lim": 2132,
+ "Param": 2133,
+ "▁house": 2134,
+ "▁System": 2135,
+ "▁pay": 2136,
+ "▁:=": 2137,
+ "uro": 2138,
+ "oci": 2139,
+ "zy": 2140,
+ "▁already": 2141,
+ ",\\": 2142,
+ "length": 2143,
+ "▁si": 2144,
+ "▁interest": 2145,
+ "aff": 2146,
+ "cted": 2147,
+ "ention": 2148,
+ "▁до": 2149,
+ "ume": 2150,
+ "▁appro": 2151,
+ "bre": 2152,
+ "IG": 2153,
+ "▁throw": 2154,
+ "mathcal": 2155,
+ "irl": 2156,
+ "▁prom": 2157,
+ "oss": 2158,
+ "▁request": 2159,
+ "equation": 2160,
+ "ology": 2161,
+ "mit": 2162,
+ "▁pack": 2163,
+ "ino": 2164,
+ "array": 2165,
+ "za": 2166,
+ "til": 2167,
+ "UN": 2168,
+ "▁present": 2169,
+ "▁organ": 2170,
+ "File": 2171,
+ "▁orig": 2172,
+ "▁full": 2173,
+ "istr": 2174,
+ "▁flo": 2175,
+ "hr": 2176,
+ "▁assert": 2177,
+ "ards": 2178,
+ "url": 2179,
+ "enn": 2180,
+ "sl": 2181,
+ "▁А": 2182,
+ "▁cho": 2183,
+ "▁level": 2184,
+ "OT": 2185,
+ "word": 2186,
+ "▁body": 2187,
+ "▁user": 2188,
+ "ía": 2189,
+ "Qu": 2190,
+ "▁main": 2191,
+ "AB": 2192,
+ "ploy": 2193,
+ "Event": 2194,
+ "▁super": 2195,
+ "oken": 2196,
+ "▁Н": 2197,
+ "As": 2198,
+ "thers": 2199,
+ "мо": 2200,
+ "ку": 2201,
+ "▁days": 2202,
+ "▁done": 2203,
+ "▁view": 2204,
+ "side": 2205,
+ "си": 2206,
+ "');": 2207,
+ "▁vol": 2208,
+ "▁tot": 2209,
+ "case": 2210,
+ "▁aff": 2211,
+ "Request": 2212,
+ "▁Man": 2213,
+ "\\\\": 2214,
+ "▁John": 2215,
+ "▁Б": 2216,
+ "orth": 2217,
+ "▁je": 2218,
+ "▁une": 2219,
+ "la": 2220,
+ "[\"": 2221,
+ "field": 2222,
+ "▁US": 2223,
+ "ico": 2224,
+ "▁perform": 2225,
+ "ailable": 2226,
+ "Config": 2227,
+ "Or": 2228,
+ "▁model": 2229,
+ "ales": 2230,
+ "▁create": 2231,
+ "▁ann": 2232,
+ "ances": 2233,
+ "IL": 2234,
+ "ination": 2235,
+ "▁Im": 2236,
+ "ante": 2237,
+ "ana": 2238,
+ "ан": 2239,
+ "▁told": 2240,
+ "config": 2241,
+ "\"]": 2242,
+ "met": 2243,
+ "lt": 2244,
+ "▁text": 2245,
+ "▁May": 2246,
+ "▁org": 2247,
+ "▁port": 2248,
+ "Pl": 2249,
+ "ently": 2250,
+ "▁door": 2251,
+ "US": 2252,
+ "▁(*": 2253,
+ "kt": 2254,
+ "ES": 2255,
+ "ential": 2256,
+ "▁iss": 2257,
+ "▁inc": 2258,
+ "Node": 2259,
+ "ively": 2260,
+ "▁asked": 2261,
+ "irt": 2262,
+ "▁Te": 2263,
+ "▁report": 2264,
+ "▁chang": 2265,
+ "сти": 2266,
+ "▁along": 2267,
+ "▁change": 2268,
+ "Size": 2269,
+ "▁ever": 2270,
+ "▁occ": 2271,
+ "ury": 2272,
+ "▁mind": 2273,
+ "order": 2274,
+ "point": 2275,
+ "сто": 2276,
+ "▁whe": 2277,
+ "▁important": 2278,
+ "des": 2279,
+ "▁Not": 2280,
+ "▁writ": 2281,
+ "▁eyes": 2282,
+ "▁desc": 2283,
+ "most": 2284,
+ "ks": 2285,
+ "▁bit": 2286,
+ "▁▁▁": 2287,
+ "▁success": 2288,
+ "ть": 2289,
+ "бо": 2290,
+ "core": 2291,
+ "}(": 2292,
+ "▁array": 2293,
+ "lin": 2294,
+ "lish": 2295,
+ "▁following": 2296,
+ "Field": 2297,
+ "ids": 2298,
+ "hing": 2299,
+ "▁cal": 2300,
+ "Is": 2301,
+ "aring": 2302,
+ "lev": 2303,
+ "alt": 2304,
+ "CH": 2305,
+ "▁dé": 2306,
+ "alpha": 2307,
+ "▁four": 2308,
+ "▁law": 2309,
+ "▁се": 2310,
+ "iron": 2311,
+ "▁disc": 2312,
+ "се": 2313,
+ "ken": 2314,
+ "node": 2315,
+ "▁Par": 2316,
+ "▁Eng": 2317,
+ "▁move": 2318,
+ "▁License": 2319,
+ "cul": 2320,
+ "ione": 2321,
+ ")$": 2322,
+ "▁tw": 2323,
+ "We": 2324,
+ "sel": 2325,
+ "▁With": 2326,
+ "▁once": 2327,
+ "Service": 2328,
+ "bol": 2329,
+ "ured": 2330,
+ "ida": 2331,
+ "▁Qu": 2332,
+ "▁grow": 2333,
+ "▁conne": 2334,
+ "EX": 2335,
+ "▁htt": 2336,
+ "▁};": 2337,
+ "▁walk": 2338,
+ "▁init": 2339,
+ "nal": 2340,
+ "ender": 2341,
+ "cription": 2342,
+ "mber": 2343,
+ "lected": 2344,
+ "po": 2345,
+ "▁nil": 2346,
+ "▁prob": 2347,
+ "чи": 2348,
+ "▁Ste": 2349,
+ "ison": 2350,
+ "ands": 2351,
+ "osed": 2352,
+ "же": 2353,
+ "▁His": 2354,
+ "ür": 2355,
+ "Man": 2356,
+ "Element": 2357,
+ "▁able": 2358,
+ "Index": 2359,
+ "search": 2360,
+ "▁mag": 2361,
+ "ар": 2362,
+ "▁course": 2363,
+ "▁Car": 2364,
+ "▁exp": 2365,
+ "aph": 2366,
+ "▁mit": 2367,
+ "▁doesn": 2368,
+ "▁default": 2369,
+ "/>": 2370,
+ "aim": 2371,
+ "▁service": 2372,
+ "▁within": 2373,
+ "angu": 2374,
+ "▁Д": 2375,
+ "uffer": 2376,
+ "AG": 2377,
+ "▁Do": 2378,
+ "▁incre": 2379,
+ "▁understand": 2380,
+ "}^": 2381,
+ "▁looked": 2382,
+ "gen": 2383,
+ "ailed": 2384,
+ "▁е": 2385,
+ "ayer": 2386,
+ "▁One": 2387,
+ "▁bas": 2388,
+ "▁job": 2389,
+ "mu": 2390,
+ "but": 2391,
+ "elta": 2392,
+ "▁Christ": 2393,
+ "uration": 2394,
+ "▁record": 2395,
+ "▁Univers": 2396,
+ "ivid": 2397,
+ "valid": 2398,
+ "▁Р": 2399,
+ "▁hold": 2400,
+ "▁table": 2401,
+ "ones": 2402,
+ "link": 2403,
+ "▁Ge": 2404,
+ "▁offer": 2405,
+ "ster": 2406,
+ "Form": 2407,
+ "={": 2408,
+ "▁не": 2409,
+ "stance": 2410,
+ "▁govern": 2411,
+ "▁techn": 2412,
+ "▁prim": 2413,
+ "*.": 2414,
+ "cho": 2415,
+ "max": 2416,
+ "▁fore": 2417,
+ "▁Can": 2418,
+ "▁polit": 2419,
+ "ories": 2420,
+ "▁times": 2421,
+ "▁dans": 2422,
+ "▁air": 2423,
+ "▁anything": 2424,
+ "▁sever": 2425,
+ "acy": 2426,
+ "}_": 2427,
+ "He": 2428,
+ "▁least": 2429,
+ "ips": 2430,
+ "ENT": 2431,
+ "do": 2432,
+ "▁от": 2433,
+ "▁cost": 2434,
+ ".”": 2435,
+ "▁children": 2436,
+ "ability": 2437,
+ "But": 2438,
+ "▁path": 2439,
+ "result": 2440,
+ "acter": 2441,
+ "▁element": 2442,
+ "ee": 2443,
+ "▁wait": 2444,
+ "▁money": 2445,
+ "Map": 2446,
+ "td": 2447,
+ "oin": 2448,
+ "iving": 2449,
+ "icht": 2450,
+ "icy": 2451,
+ "sch": 2452,
+ "ste": 2453,
+ "ду": 2454,
+ "ored": 2455,
+ "oud": 2456,
+ "ille": 2457,
+ "ised": 2458,
+ "plication": 2459,
+ "▁custom": 2460,
+ "▁having": 2461,
+ "ponent": 2462,
+ "▁By": 2463,
+ "ules": 2464,
+ "ued": 2465,
+ "atter": 2466,
+ "And": 2467,
+ "itive": 2468,
+ "Def": 2469,
+ "▁moment": 2470,
+ "aterial": 2471,
+ "Class": 2472,
+ "ograph": 2473,
+ "ike": 2474,
+ "▁large": 2475,
+ "▁####": 2476,
+ "▁either": 2477,
+ "duct": 2478,
+ "▁Then": 2479,
+ "▁Gu": 2480,
+ "olean": 2481,
+ "pert": 2482,
+ "▁Get": 2483,
+ "▁Ab": 2484,
+ "▁short": 2485,
+ "On": 2486,
+ "iment": 2487,
+ "▁project": 2488,
+ "cript": 2489,
+ "▁including": 2490,
+ "ния": 2491,
+ "▁making": 2492,
+ "▁someone": 2493,
+ "▁Fl": 2494,
+ "▁sat": 2495,
+ "▁company": 2496,
+ "ocus": 2497,
+ "pu": 2498,
+ "▁God": 2499,
+ "ification": 2500,
+ "No": 2501,
+ "▁sn": 2502,
+ "ano": 2503,
+ "ga": 2504,
+ "▁au": 2505,
+ "▁cou": 2506,
+ "ás": 2507,
+ "ended": 2508,
+ "ту": 2509,
+ "ober": 2510,
+ "▁nothing": 2511,
+ "▁net": 2512,
+ "▁pot": 2513,
+ "▁typ": 2514,
+ "▁item": 2515,
+ "rew": 2516,
+ "Att": 2517,
+ "▁young": 2518,
+ "}\r": 2519,
+ "nder": 2520,
+ "start": 2521,
+ "▁Sc": 2522,
+ "*)": 2523,
+ "▁enc": 2524,
+ "▁women": 2525,
+ "▁looking": 2526,
+ "▁ро": 2527,
+ "▁health": 2528,
+ "Path": 2529,
+ "▁After": 2530,
+ "▁mult": 2531,
+ "▁{\\": 2532,
+ "▁land": 2533,
+ "orld": 2534,
+ "▁Des": 2535,
+ "▁eng": 2536,
+ "input": 2537,
+ "▁Pol": 2538,
+ "\"\"": 2539,
+ "Code": 2540,
+ "▁supp": 2541,
+ "ainer": 2542,
+ "heck": 2543,
+ "▁mor": 2544,
+ "▁mill": 2545,
+ "▁aw": 2546,
+ "fs": 2547,
+ "▁doing": 2548,
+ "tings": 2549,
+ "ades": 2550,
+ "▁toget": 2551,
+ "▁certain": 2552,
+ "▁together": 2553,
+ "CE": 2554,
+ "ideo": 2555,
+ "▁American": 2556,
+ "ony": 2557,
+ "idd": 2558,
+ "II": 2559,
+ "ged": 2560,
+ "ables": 2561,
+ "▁ident": 2562,
+ "iod": 2563,
+ "▁parent": 2564,
+ "For": 2565,
+ "ambda": 2566,
+ "ando": 2567,
+ "=\\": 2568,
+ "aged": 2569,
+ "ending": 2570,
+ "Int": 2571,
+ "▁possible": 2572,
+ "▁со": 2573,
+ "ivity": 2574,
+ "num": 2575,
+ "rt": 2576,
+ "ajor": 2577,
+ "create": 2578,
+ "ride": 2579,
+ "▁knew": 2580,
+ "bit": 2581,
+ "itional": 2582,
+ "▁lik": 2583,
+ "▁Her": 2584,
+ "ension": 2585,
+ "\".": 2586,
+ "oto": 2587,
+ "▁exist": 2588,
+ "aken": 2589,
+ "▁actually": 2590,
+ "ca": 2591,
+ "▁Г": 2592,
+ "хо": 2593,
+ "inn": 2594,
+ "All": 2595,
+ "buf": 2596,
+ "▁Me": 2597,
+ "▁seen": 2598,
+ "ops": 2599,
+ "▁▁▁▁▁▁▁▁▁": 2600,
+ "Not": 2601,
+ "▁control": 2602,
+ "▁respon": 2603,
+ "};": 2604,
+ "ilt": 2605,
+ "isk": 2606,
+ "▁bad": 2607,
+ "▁often": 2608,
+ "▁past": 2609,
+ "aper": 2610,
+ "▁reason": 2611,
+ "eters": 2612,
+ "▁wanted": 2613,
+ "ura": 2614,
+ "table": 2615,
+ "ormal": 2616,
+ "width": 2617,
+ "га": 2618,
+ "ptr": 2619,
+ "▁dest": 2620,
+ "▁design": 2621,
+ "▁sound": 2622,
+ "▁plan": 2623,
+ "▁base": 2624,
+ "hand": 2625,
+ "gs": 2626,
+ "▁says": 2627,
+ "function": 2628,
+ "▁tri": 2629,
+ "mt": 2630,
+ "▁invest": 2631,
+ "▁available": 2632,
+ "ayout": 2633,
+ "▁och": 2634,
+ "▁las": 2635,
+ "illed": 2636,
+ "Val": 2637,
+ "▁ф": 2638,
+ "iety": 2639,
+ "mon": 2640,
+ "Hand": 2641,
+ "Fr": 2642,
+ "iam": 2643,
+ "pace": 2644,
+ "▁Ob": 2645,
+ "▁para": 2646,
+ "▁meet": 2647,
+ "▁sum": 2648,
+ "Message": 2649,
+ "ici": 2650,
+ "▁known": 2651,
+ "▁gen": 2652,
+ "amma": 2653,
+ "arr": 2654,
+ "▁tre": 2655,
+ "oke": 2656,
+ "uth": 2657,
+ "~\\": 2658,
+ "▁experience": 2659,
+ "icle": 2660,
+ "▁Il": 2661,
+ "▁sent": 2662,
+ "▁others": 2663,
+ "▁soft": 2664,
+ "IP": 2665,
+ "▁max": 2666,
+ "ball": 2667,
+ "▁market": 2668,
+ "▁pour": 2669,
+ "pression": 2670,
+ "eps": 2671,
+ "▁saw": 2672,
+ "▁across": 2673,
+ "▁Su": 2674,
+ "Over": 2675,
+ "ние": 2676,
+ "ulation": 2677,
+ "▁Reg": 2678,
+ "▁+=": 2679,
+ "body": 2680,
+ ")\\": 2681,
+ "▁print": 2682,
+ "▁при": 2683,
+ "db": 2684,
+ "ources": 2685,
+ "wards": 2686,
+ "▁black": 2687,
+ "со": 2688,
+ "ili": 2689,
+ "▁Ed": 2690,
+ "▁complet": 2691,
+ "▁single": 2692,
+ "▁IN": 2693,
+ "ached": 2694,
+ "bt": 2695,
+ "▁code": 2696,
+ "▁bool": 2697,
+ "▁area": 2698,
+ "▁require": 2699,
+ "▁problem": 2700,
+ "aced": 2701,
+ "Equ": 2702,
+ "▁config": 2703,
+ "vec": 2704,
+ "ney": 2705,
+ "cy": 2706,
+ "Al": 2707,
+ "▁account": 2708,
+ "ymbol": 2709,
+ "▁ste": 2710,
+ "ges": 2711,
+ "Array": 2712,
+ "empl": 2713,
+ "context": 2714,
+ "Des": 2715,
+ "Result": 2716,
+ "ecut": 2717,
+ "▁target": 2718,
+ "▁getting": 2719,
+ "\"/>": 2720,
+ "ogle": 2721,
+ "▁himself": 2722,
+ "▁wasn": 2723,
+ "▁block": 2724,
+ "▁ant": 2725,
+ "▁York": 2726,
+ "▁become": 2727,
+ "iff": 2728,
+ "ports": 2729,
+ "reate": 2730,
+ "='": 2731,
+ "cd": 2732,
+ "location": 2733,
+ "ет": 2734,
+ "▁access": 2735,
+ "gress": 2736,
+ "ros": 2737,
+ "Up": 2738,
+ "▁working": 2739,
+ "▁Am": 2740,
+ "iqu": 2741,
+ "cer": 2742,
+ "▁((": 2743,
+ "▁Per": 2744,
+ "▁func": 2745,
+ "▁girl": 2746,
+ "▁above": 2747,
+ "pen": 2748,
+ "пи": 2749,
+ "ido": 2750,
+ "▁version": 2751,
+ "TY": 2752,
+ "▁;": 2753,
+ "mary": 2754,
+ "abled": 2755,
+ "annel": 2756,
+ "▁example": 2757,
+ "▁context": 2758,
+ "OP": 2759,
+ "▁red": 2760,
+ "▁cir": 2761,
+ "sm": 2762,
+ "Log": 2763,
+ "▁space": 2764,
+ "▁fut": 2765,
+ "▁Gener": 2766,
+ "ills": 2767,
+ "▁dri": 2768,
+ "_.": 2769,
+ "▁felt": 2770,
+ "▁offic": 2771,
+ "▁===": 2772,
+ "ii": 2773,
+ "▁started": 2774,
+ "▁Т": 2775,
+ "▁});": 2776,
+ "js": 2777,
+ "▁front": 2778,
+ "▁almost": 2779,
+ "irm": 2780,
+ "!\"": 2781,
+ "signed": 2782,
+ "▁yet": 2783,
+ "▁trad": 2784,
+ "ients": 2785,
+ "ama": 2786,
+ "▁input": 2787,
+ "lim": 2788,
+ "па": 2789,
+ "▁ка": 2790,
+ "▁camp": 2791,
+ "ibr": 2792,
+ "fect": 2793,
+ "unt": 2794,
+ "▁half": 2795,
+ "▁cover": 2796,
+ "anguage": 2797,
+ "▁ben": 2798,
+ "ha": 2799,
+ "▁diff": 2800,
+ "_\\": 2801,
+ "▁об": 2802,
+ "])": 2803,
+ "odes": 2804,
+ "hel": 2805,
+ "ios": 2806,
+ "▁О": 2807,
+ "▁mot": 2808,
+ "▁social": 2809,
+ "////////": 2810,
+ "▁stre": 2811,
+ "ground": 2812,
+ "ів": 2813,
+ "object": 2814,
+ "ples": 2815,
+ "reed": 2816,
+ "▁een": 2817,
+ "▁based": 2818,
+ "▁range": 2819,
+ "An": 2820,
+ "urg": 2821,
+ "▁learn": 2822,
+ "▁exc": 2823,
+ "▁imp": 2824,
+ "▁means": 2825,
+ "▁wur": 2826,
+ "ends": 2827,
+ "void": 2828,
+ "▁std": 2829,
+ "▁particular": 2830,
+ "ja": 2831,
+ "▁source": 2832,
+ "default": 2833,
+ "py": 2834,
+ "▁als": 2835,
+ "scri": 2836,
+ "status": 2837,
+ "▁story": 2838,
+ "▁begin": 2839,
+ "▁position": 2840,
+ "▁special": 2841,
+ "php": 2842,
+ "▁bar": 2843,
+ "▁pract": 2844,
+ "call": 2845,
+ "▁das": 2846,
+ "▁rad": 2847,
+ "▁close": 2848,
+ "www": 2849,
+ "ере": 2850,
+ "gu": 2851,
+ "▁Er": 2852,
+ "▁dom": 2853,
+ "AM": 2854,
+ "▁bed": 2855,
+ "▁several": 2856,
+ "aul": 2857,
+ "box": 2858,
+ "▁low": 2859,
+ "pack": 2860,
+ "Reg": 2861,
+ "Of": 2862,
+ "atures": 2863,
+ "én": 2864,
+ "eder": 2865,
+ "uilder": 2866,
+ "cast": 2867,
+ "conom": 2868,
+ "raft": 2869,
+ "▁makes": 2870,
+ "Loc": 2871,
+ "http": 2872,
+ "▁abs": 2873,
+ "resh": 2874,
+ "▁Will": 2875,
+ "break": 2876,
+ "▁options": 2877,
+ "fort": 2878,
+ "▁из": 2879,
+ "▁anal": 2880,
+ "▁env": 2881,
+ "({": 2882,
+ "event": 2883,
+ "▁page": 2884,
+ "ternal": 2885,
+ "▁distribut": 2886,
+ "▁food": 2887,
+ "check": 2888,
+ "CK": 2889,
+ "▁во": 2890,
+ "assert": 2891,
+ "án": 2892,
+ "base": 2893,
+ "▁whole": 2894,
+ "ación": 2895,
+ "OD": 2896,
+ "▁turned": 2897,
+ "igma": 2898,
+ "▁response": 2899,
+ "▁University": 2900,
+ "▁div": 2901,
+ "apter": 2902,
+ "▁results": 2903,
+ "▁represent": 2904,
+ "▁everything": 2905,
+ "▁Cent": 2906,
+ "utes": 2907,
+ "rix": 2908,
+ "▁Some": 2909,
+ "▁behind": 2910,
+ "▁creat": 2911,
+ "place": 2912,
+ "su": 2913,
+ "▁Part": 2914,
+ "umb": 2915,
+ "mathbb": 2916,
+ "ping": 2917,
+ "▁match": 2918,
+ "Out": 2919,
+ "dom": 2920,
+ "▁situ": 2921,
+ "dr": 2922,
+ "ara": 2923,
+ "▁window": 2924,
+ "ns": 2925,
+ "lished": 2926,
+ "▁Ver": 2927,
+ "▁message": 2928,
+ "▁Em": 2929,
+ "▁human": 2930,
+ "perties": 2931,
+ "лу": 2932,
+ "lem": 2933,
+ "ORT": 2934,
+ "▁early": 2935,
+ "▁quick": 2936,
+ "▁та": 2937,
+ "roid": 2938,
+ "▁country": 2939,
+ "▁due": 2940,
+ "▁Die": 2941,
+ "▁trying": 2942,
+ "▁live": 2943,
+ "▁press": 2944,
+ "INT": 2945,
+ "With": 2946,
+ "oved": 2947,
+ "▁specific": 2948,
+ "▁fall": 2949,
+ "uk": 2950,
+ "yl": 2951,
+ "▁general": 2952,
+ "му": 2953,
+ "ну": 2954,
+ "▁names": 2955,
+ "where": 2956,
+ "▁These": 2957,
+ "▁sil": 2958,
+ "ét": 2959,
+ "▁ener": 2960,
+ "▁Now": 2961,
+ "▁address": 2962,
+ "Response": 2963,
+ "▁Mr": 2964,
+ "▁answ": 2965,
+ "▁film": 2966,
+ "▁strong": 2967,
+ "▁bring": 2968,
+ "▁United": 2969,
+ "▁ge": 2970,
+ "▁woman": 2971,
+ "New": 2972,
+ "ett": 2973,
+ ".)": 2974,
+ "ename": 2975,
+ "▁AN": 2976,
+ "▁describ": 2977,
+ "за": 2978,
+ "ising": 2979,
+ "EL": 2980,
+ "ql": 2981,
+ "▁fur": 2982,
+ "ying": 2983,
+ "▁Cal": 2984,
+ "▁Dr": 2985,
+ "ERR": 2986,
+ "▁\\\\": 2987,
+ "angle": 2988,
+ "urope": 2989,
+ "▁city": 2990,
+ "▁index": 2991,
+ "▁action": 2992,
+ "▁However": 2993,
+ "▁fig": 2994,
+ "ias": 2995,
+ "▁question": 2996,
+ "▁Jan": 2997,
+ "▁Med": 2998,
+ "▁Cont": 2999,
+ "amed": 3000,
+ "Call": 3001,
+ "plied": 3002,
+ "tty": 3003,
+ "▁individ": 3004,
+ "page": 3005,
+ "▁comb": 3006,
+ "section": 3007,
+ "▁Comm": 3008,
+ "uel": 3009,
+ "▁het": 3010,
+ "▁Bar": 3011,
+ "agement": 3012,
+ "fin": 3013,
+ "▁major": 3014,
+ "oper": 3015,
+ "api": 3016,
+ "room": 3017,
+ "▁„": 3018,
+ "▁hab": 3019,
+ "зи": 3020,
+ "▁auf": 3021,
+ "current": 3022,
+ "ni": 3023,
+ "▁include": 3024,
+ "▁qui": 3025,
+ "va": 3026,
+ "UE": 3027,
+ "▁idea": 3028,
+ ",'": 3029,
+ "▁required": 3030,
+ "▁heart": 3031,
+ "ibility": 3032,
+ "iction": 3033,
+ "Model": 3034,
+ "write": 3035,
+ "▁content": 3036,
+ "▁wer": 3037,
+ "▁hands": 3038,
+ "zen": 3039,
+ "char": 3040,
+ "}^{": 3041,
+ "▁mass": 3042,
+ "ply": 3043,
+ "▁nat": 3044,
+ "rel": 3045,
+ "▁dat": 3046,
+ "================": 3047,
+ "imal": 3048,
+ "▁probably": 3049,
+ "unch": 3050,
+ "▁mer": 3051,
+ "ilar": 3052,
+ "ires": 3053,
+ "▁watch": 3054,
+ "SI": 3055,
+ "▁cult": 3056,
+ "▁mother": 3057,
+ "▁government": 3058,
+ "ording": 3059,
+ "▁()": 3060,
+ "▁pri": 3061,
+ "▁link": 3062,
+ "group": 3063,
+ "OL": 3064,
+ "▁near": 3065,
+ "▁Ser": 3066,
+ "Ser": 3067,
+ "ito": 3068,
+ "▁values": 3069,
+ "▁java": 3070,
+ "fully": 3071,
+ "Count": 3072,
+ "++)": 3073,
+ "▁vi": 3074,
+ "▁white": 3075,
+ "mat": 3076,
+ "ctx": 3077,
+ "▁conc": 3078,
+ "▁stay": 3079,
+ "ging": 3080,
+ "▁clear": 3081,
+ "▁copy": 3082,
+ "selves": 3083,
+ "▁provide": 3084,
+ "▁words": 3085,
+ "comp": 3086,
+ "args": 3087,
+ "▁pick": 3088,
+ "uly": 3089,
+ "▁vari": 3090,
+ "▁believe": 3091,
+ "▁Co": 3092,
+ "Property": 3093,
+ "Group": 3094,
+ "▁ten": 3095,
+ "ischen": 3096,
+ "eturn": 3097,
+ "ival": 3098,
+ "System": 3099,
+ "CL": 3100,
+ "bed": 3101,
+ "▁total": 3102,
+ "▁ist": 3103,
+ "Input": 3104,
+ "uments": 3105,
+ "Manager": 3106,
+ "ши": 3107,
+ "▁win": 3108,
+ "leep": 3109,
+ "PI": 3110,
+ "ного": 3111,
+ "ruction": 3112,
+ "▁inte": 3113,
+ "App": 3114,
+ "avor": 3115,
+ "▁respect": 3116,
+ "ators": 3117,
+ "▁como": 3118,
+ "▁cut": 3119,
+ "FA": 3120,
+ "▁sus": 3121,
+ "▁App": 3122,
+ "rect": 3123,
+ "FI": 3124,
+ "▁began": 3125,
+ "oph": 3126,
+ "▁sort": 3127,
+ "though": 3128,
+ "је": 3129,
+ "icro": 3130,
+ "Trans": 3131,
+ "лі": 3132,
+ "▁Inst": 3133,
+ "request": 3134,
+ "ор": 3135,
+ "▁relations": 3136,
+ "-\\": 3137,
+ "Status": 3138,
+ "жи": 3139,
+ "▁father": 3140,
+ "cs": 3141,
+ "▁sex": 3142,
+ "isch": 3143,
+ "vo": 3144,
+ "}_{": 3145,
+ "aven": 3146,
+ "▁Ne": 3147,
+ "ATE": 3148,
+ "itten": 3149,
+ "▁ess": 3150,
+ "TH": 3151,
+ "ights": 3152,
+ "▁hom": 3153,
+ "▁today": 3154,
+ "▁zu": 3155,
+ "ita": 3156,
+ "▁isn": 3157,
+ "▁opt": 3158,
+ "ogn": 3159,
+ "ér": 3160,
+ "▁whether": 3161,
+ "ixed": 3162,
+ "phi": 3163,
+ "idence": 3164,
+ "ald": 3165,
+ "Client": 3166,
+ "At": 3167,
+ "▁death": 3168,
+ "▁Let": 3169,
+ "ius": 3170,
+ "ги": 3171,
+ "▁ре": 3172,
+ "ben": 3173,
+ ")\r": 3174,
+ "ba": 3175,
+ ">": 3176,
+ "avel": 3177,
+ "▁miss": 3178,
+ "▁node": 3179,
+ "▁($": 3180,
+ "▁color": 3181,
+ "▁obt": 3182,
+ "tot": 3183,
+ "▁пре": 3184,
+ "CON": 3185,
+ "ette": 3186,
+ "▁Go": 3187,
+ "Fl": 3188,
+ "▁Don": 3189,
+ "▁crit": 3190,
+ "▁ri": 3191,
+ "post": 3192,
+ "▁->": 3193,
+ "▁Just": 3194,
+ "What": 3195,
+ "atal": 3196,
+ "▁Min": 3197,
+ "▁Cor": 3198,
+ "▁dark": 3199,
+ "rl": 3200,
+ "▁larg": 3201,
+ "ding": 3202,
+ "ón": 3203,
+ "ouch": 3204,
+ "▁um": 3205,
+ "▁elect": 3206,
+ "▁dam": 3207,
+ "▁needs": 3208,
+ "▁matter": 3209,
+ "▁rather": 3210,
+ "from": 3211,
+ "ram": 3212,
+ "▁і": 3213,
+ "▁taken": 3214,
+ "▁deal": 3215,
+ "▁period": 3216,
+ "▁Mon": 3217,
+ "▁Л": 3218,
+ "▁Aug": 3219,
+ "run": 3220,
+ "mm": 3221,
+ "elle": 3222,
+ "▁export": 3223,
+ "Sc": 3224,
+ "vis": 3225,
+ "abor": 3226,
+ "▁author": 3227,
+ "ère": 3228,
+ "▁remember": 3229,
+ "▁redu": 3230,
+ "▁List": 3231,
+ "▁focus": 3232,
+ "▁character": 3233,
+ "Table": 3234,
+ "▁individual": 3235,
+ "▁needed": 3236,
+ "bum": 3237,
+ "▁style": 3238,
+ "inary": 3239,
+ "ersion": 3240,
+ "oute": 3241,
+ "▁Pe": 3242,
+ "▁hon": 3243,
+ "mut": 3244,
+ "see": 3245,
+ "▁became": 3246,
+ "▁dire": 3247,
+ "▁document": 3248,
+ "sec": 3249,
+ "ening": 3250,
+ "▁visit": 3251,
+ "▁fac": 3252,
+ "tx": 3253,
+ "down": 3254,
+ "plit": 3255,
+ "▁phys": 3256,
+ "itting": 3257,
+ "joy": 3258,
+ "▁hig": 3259,
+ "This": 3260,
+ "Ad": 3261,
+ "▁Brit": 3262,
+ "▁employ": 3263,
+ "▁ré": 3264,
+ "▁т": 3265,
+ "lambda": 3266,
+ "▁impro": 3267,
+ "▁Bo": 3268,
+ "iding": 3269,
+ "▁online": 3270,
+ "mem": 3271,
+ "atform": 3272,
+ "▁War": 3273,
+ "▁cas": 3274,
+ "asure": 3275,
+ "▁pur": 3276,
+ "medi": 3277,
+ "Dis": 3278,
+ "▁Germ": 3279,
+ "pc": 3280,
+ "са": 3281,
+ "▁friends": 3282,
+ "▁Mc": 3283,
+ "DI": 3284,
+ "▁plus": 3285,
+ "▁Set": 3286,
+ "iddle": 3287,
+ "itut": 3288,
+ "▁depend": 3289,
+ "rest": 3290,
+ "▁Je": 3291,
+ "▁hor": 3292,
+ "▁entire": 3293,
+ "Query": 3294,
+ "▁refer": 3295,
+ "▁hot": 3296,
+ "▁Aust": 3297,
+ "▁common": 3298,
+ "ці": 3299,
+ "▁pull": 3300,
+ "▁Add": 3301,
+ "▁season": 3302,
+ "▁invol": 3303,
+ "▁World": 3304,
+ "client": 3305,
+ "now": 3306,
+ "true": 3307,
+ "append": 3308,
+ "itted": 3309,
+ "empt": 3310,
+ "){": 3311,
+ "///": 3312,
+ "▁prop": 3313,
+ "imate": 3314,
+ "SC": 3315,
+ "▁hours": 3316,
+ "▁hope": 3317,
+ "andom": 3318,
+ "ід": 3319,
+ "istic": 3320,
+ "▁property": 3321,
+ "sg": 3322,
+ ">(": 3323,
+ "▁write": 3324,
+ "mark": 3325,
+ "find": 3326,
+ "▁personal": 3327,
+ "][": 3328,
+ "rown": 3329,
+ "Ph": 3330,
+ "▁foot": 3331,
+ "▁research": 3332,
+ "ironment": 3333,
+ "▁nom": 3334,
+ "▁instance": 3335,
+ "▁held": 3336,
+ "De": 3337,
+ "▁members": 3338,
+ "▁fire": 3339,
+ "▁history": 3340,
+ "▁map": 3341,
+ "▁discuss": 3342,
+ "▁espec": 3343,
+ "▁taking": 3344,
+ "▁services": 3345,
+ "▁indust": 3346,
+ "igen": 3347,
+ "▁Ass": 3348,
+ "▁expected": 3349,
+ "▁wurde": 3350,
+ "dir": 3351,
+ "▁among": 3352,
+ "▁sugg": 3353,
+ "rec": 3354,
+ "Inter": 3355,
+ "block": 3356,
+ "▁Rep": 3357,
+ "▁pain": 3358,
+ "▁five": 3359,
+ "▁fund": 3360,
+ "rid": 3361,
+ "arrow": 3362,
+ "▁treat": 3363,
+ "▁heard": 3364,
+ "▁determ": 3365,
+ "icult": 3366,
+ "▁sense": 3367,
+ "ese": 3368,
+ "Fun": 3369,
+ "▁months": 3370,
+ "json": 3371,
+ ",”": 3372,
+ "TI": 3373,
+ "orage": 3374,
+ "▁У": 3375,
+ "▁everyone": 3376,
+ "▁clos": 3377,
+ "iers": 3378,
+ "airs": 3379,
+ "define": 3380,
+ "If": 3381,
+ "osp": 3382,
+ "▁wonder": 3383,
+ "NA": 3384,
+ "query": 3385,
+ "pg": 3386,
+ "ites": 3387,
+ "▁material": 3388,
+ "yd": 3389,
+ "Read": 3390,
+ "html": 3391,
+ "TE": 3392,
+ "Pr": 3393,
+ "^{\\": 3394,
+ "▁gave": 3395,
+ "▁IS": 3396,
+ "▁suggest": 3397,
+ "Override": 3398,
+ "rodu": 3399,
+ "From": 3400,
+ "▁Europe": 3401,
+ "PO": 3402,
+ "▁soon": 3403,
+ "host": 3404,
+ "▁Ber": 3405,
+ "....": 3406,
+ "▁Har": 3407,
+ "▁energy": 3408,
+ "><": 3409,
+ "aves": 3410,
+ "▁easy": 3411,
+ "▁bre": 3412,
+ "frame": 3413,
+ "▁ground": 3414,
+ "with": 3415,
+ "▁inside": 3416,
+ "ief": 3417,
+ "▁mo": 3418,
+ "pm": 3419,
+ "pan": 3420,
+ "igr": 3421,
+ "▁om": 3422,
+ "next": 3423,
+ "omet": 3424,
+ "▁status": 3425,
+ "▁}\r": 3426,
+ "▁music": 3427,
+ "ora": 3428,
+ "iles": 3429,
+ "ki": 3430,
+ "▁esc": 3431,
+ "▁bes": 3432,
+ "▁Dis": 3433,
+ "▁host": 3434,
+ "▁comes": 3435,
+ "used": 3436,
+ "▁future": 3437,
+ "lick": 3438,
+ "aid": 3439,
+ "▁compet": 3440,
+ "▁voice": 3441,
+ "▁load": 3442,
+ "evel": 3443,
+ "▁neg": 3444,
+ "▁command": 3445,
+ "▁für": 3446,
+ "▁pie": 3447,
+ "▁quite": 3448,
+ "▁blo": 3449,
+ "agn": 3450,
+ "ilon": 3451,
+ "▁claim": 3452,
+ "▁teach": 3453,
+ "▁previous": 3454,
+ "▁site": 3455,
+ "color": 3456,
+ "attr": 3457,
+ "▁accept": 3458,
+ "▁exact": 3459,
+ ")}": 3460,
+ "aft": 3461,
+ "roller": 3462,
+ "он": 3463,
+ "oo": 3464,
+ "Date": 3465,
+ "▁ou": 3466,
+ "sy": 3467,
+ "▁pretty": 3468,
+ "▁image": 3469,
+ "BU": 3470,
+ "▁terms": 3471,
+ "▁search": 3472,
+ "▁è": 3473,
+ "▁Val": 3474,
+ "▁‘": 3475,
+ "▁Dav": 3476,
+ "MS": 3477,
+ "src": 3478,
+ "mar": 3479,
+ "incip": 3480,
+ "▁couldn": 3481,
+ "ados": 3482,
+ "▁dro": 3483,
+ "beta": 3484,
+ "imum": 3485,
+ "▁minutes": 3486,
+ "▁grand": 3487,
+ "▁»": 3488,
+ "▁Our": 3489,
+ "Str": 3490,
+ "VER": 3491,
+ "maz": 3492,
+ "▁original": 3493,
+ "ini": 3494,
+ "▁coll": 3495,
+ "loat": 3496,
+ "▁os": 3497,
+ "});": 3498,
+ "summary": 3499,
+ "▁wall": 3500,
+ "Color": 3501,
+ "▁vers": 3502,
+ "▁della": 3503,
+ "▁\"\"\"": 3504,
+ "mathbf": 3505,
+ "zer": 3506,
+ "aur": 3507,
+ "▁track": 3508,
+ "▁associ": 3509,
+ "▁suff": 3510,
+ "▁inde": 3511,
+ "ague": 3512,
+ "▁Apr": 3513,
+ "Le": 3514,
+ "roups": 3515,
+ "board": 3516,
+ "▁attack": 3517,
+ "▁series": 3518,
+ "▁instead": 3519,
+ "ham": 3520,
+ "book": 3521,
+ "▁six": 3522,
+ "▁Rec": 3523,
+ "▁coming": 3524,
+ "urt": 3525,
+ "▁global": 3526,
+ "▁necess": 3527,
+ "lege": 3528,
+ "Pos": 3529,
+ "▁leave": 3530,
+ "▁pod": 3531,
+ "ategory": 3532,
+ "uz": 3533,
+ "▁deep": 3534,
+ "▁km": 3535,
+ "▁outside": 3536,
+ "has": 3537,
+ "options": 3538,
+ "▁Sm": 3539,
+ "Sub": 3540,
+ "rows": 3541,
+ "▁ви": 3542,
+ "▁States": 3543,
+ "▁wrong": 3544,
+ "▁however": 3545,
+ "▁sem": 3546,
+ "▁catch": 3547,
+ "\"),": 3548,
+ "model": 3549,
+ "▁http": 3550,
+ "▁option": 3551,
+ "rie": 3552,
+ "▁ста": 3553,
+ "▁är": 3554,
+ "▁enjoy": 3555,
+ "nu": 3556,
+ "▁pas": 3557,
+ "▁amount": 3558,
+ "▁respons": 3559,
+ "▁Intern": 3560,
+ "▁myself": 3561,
+ "▁opp": 3562,
+ "▁Sim": 3563,
+ "▁sens": 3564,
+ "Ed": 3565,
+ "▁(\\": 3566,
+ "▁students": 3567,
+ "нов": 3568,
+ "▁points": 3569,
+ "arning": 3570,
+ "UP": 3571,
+ "elling": 3572,
+ "▁cannot": 3573,
+ "Be": 3574,
+ "▁length": 3575,
+ "null": 3576,
+ "uint": 3577,
+ "wise": 3578,
+ "▁double": 3579,
+ "ige": 3580,
+ "ista": 3581,
+ "▁estab": 3582,
+ "anch": 3583,
+ "▁ago": 3584,
+ "▁bound": 3585,
+ "▁fa": 3586,
+ "▁clean": 3587,
+ "▁simple": 3588,
+ "mi": 3589,
+ "########": 3590,
+ "ifier": 3591,
+ "▁General": 3592,
+ "▁seemed": 3593,
+ "ena": 3594,
+ "▁age": 3595,
+ "ной": 3596,
+ "endif": 3597,
+ "AA": 3598,
+ "▁caus": 3599,
+ "▁educ": 3600,
+ "▁cell": 3601,
+ "Gener": 3602,
+ "space": 3603,
+ "▁Your": 3604,
+ "▁beaut": 3605,
+ "gt": 3606,
+ "▁limit": 3607,
+ "▁date": 3608,
+ "Util": 3609,
+ "▁National": 3610,
+ "ows": 3611,
+ "pat": 3612,
+ "quad": 3613,
+ "▁ok": 3614,
+ "▁И": 3615,
+ "arth": 3616,
+ "hat": 3617,
+ "▁community": 3618,
+ "oul": 3619,
+ "▁econom": 3620,
+ "Component": 3621,
+ "bor": 3622,
+ "usion": 3623,
+ "▁below": 3624,
+ "earch": 3625,
+ "ores": 3626,
+ "ban": 3627,
+ "▁August": 3628,
+ "▁further": 3629,
+ "sigma": 3630,
+ "▁ha": 3631,
+ "ji": 3632,
+ "▁comput": 3633,
+ "гра": 3634,
+ "▁None": 3635,
+ "▁ter": 3636,
+ "▁anyone": 3637,
+ "▁task": 3638,
+ "ente": 3639,
+ "position": 3640,
+ "pped": 3641,
+ "▁aus": 3642,
+ "Attribute": 3643,
+ "req": 3644,
+ "addr": 3645,
+ "light": 3646,
+ "ше": 3647,
+ "▁arm": 3648,
+ "cover": 3649,
+ "upport": 3650,
+ "▁Gl": 3651,
+ "▁San": 3652,
+ "▁writing": 3653,
+ "▁lost": 3654,
+ "▁Mark": 3655,
+ "▁gre": 3656,
+ "TYPE": 3657,
+ "▁South": 3658,
+ "▁perfect": 3659,
+ "▁package": 3660,
+ "▁infl": 3661,
+ "haps": 3662,
+ "▁Ang": 3663,
+ "respon": 3664,
+ "ris": 3665,
+ "ptember": 3666,
+ "▁building": 3667,
+ "VAL": 3668,
+ "free": 3669,
+ "▁ce": 3670,
+ "HT": 3671,
+ "▁From": 3672,
+ "ds": 3673,
+ "roy": 3674,
+ "achine": 3675,
+ "nown": 3676,
+ "▁saying": 3677,
+ "▁бы": 3678,
+ "oe": 3679,
+ "Ref": 3680,
+ "▁network": 3681,
+ "parent": 3682,
+ "uge": 3683,
+ "▁similar": 3684,
+ ">\r": 3685,
+ "Builder": 3686,
+ "▁living": 3687,
+ "▁continue": 3688,
+ "anger": 3689,
+ "▁Red": 3690,
+ "▁hair": 3691,
+ "anced": 3692,
+ "ians": 3693,
+ "▁dead": 3694,
+ "▁boolean": 3695,
+ "ication": 3696,
+ "▁де": 3697,
+ "▁client": 3698,
+ "uct": 3699,
+ "▁•": 3700,
+ "SP": 3701,
+ "older": 3702,
+ "пе": 3703,
+ "udio": 3704,
+ "▁deg": 3705,
+ "asing": 3706,
+ "▁step": 3707,
+ "▁pers": 3708,
+ "ção": 3709,
+ "obj": 3710,
+ "oz": 3711,
+ "ula": 3712,
+ "▁round": 3713,
+ "▁upon": 3714,
+ "▁resource": 3715,
+ "▁valid": 3716,
+ "▁II": 3717,
+ "bug": 3718,
+ "std": 3719,
+ "▁ang": 3720,
+ "span": 3721,
+ "pol": 3722,
+ "ialog": 3723,
+ "▁phot": 3724,
+ "?'": 3725,
+ "DB": 3726,
+ "▁Fin": 3727,
+ "VE": 3728,
+ "Em": 3729,
+ "▁cam": 3730,
+ "target": 3731,
+ "pected": 3732,
+ "Hel": 3733,
+ "▁ut": 3734,
+ "▁Test": 3735,
+ "▁town": 3736,
+ "align": 3737,
+ "▁webs": 3738,
+ "inner": 3739,
+ "augh": 3740,
+ "▁except": 3741,
+ "▁initial": 3742,
+ "enty": 3743,
+ "lich": 3744,
+ "▁Aut": 3745,
+ "top": 3746,
+ "▁fail": 3747,
+ "ona": 3748,
+ "▁benef": 3749,
+ "anks": 3750,
+ "ische": 3751,
+ ".*": 3752,
+ "▁signific": 3753,
+ "▁contact": 3754,
+ "Rec": 3755,
+ "ario": 3756,
+ "ottom": 3757,
+ "▁relationship": 3758,
+ "]);": 3759,
+ "▁На": 3760,
+ "Head": 3761,
+ "format": 3762,
+ "▁ét": 3763,
+ "▁More": 3764,
+ "actory": 3765,
+ "portun": 3766,
+ "+\\": 3767,
+ "▁simply": 3768,
+ "▁ep": 3769,
+ "▁Russ": 3770,
+ "ní": 3771,
+ "ua": 3772,
+ "erc": 3773,
+ "▁longer": 3774,
+ "inition": 3775,
+ "ector": 3776,
+ "aption": 3777,
+ "▁profess": 3778,
+ "▁Mus": 3779,
+ "ilities": 3780,
+ "ès": 3781,
+ "▁Act": 3782,
+ "offset": 3783,
+ "▁ill": 3784,
+ "band": 3785,
+ "▁Ag": 3786,
+ "▁По": 3787,
+ "би": 3788,
+ "content": 3789,
+ "icon": 3790,
+ "▁works": 3791,
+ "ynam": 3792,
+ "plement": 3793,
+ "Resource": 3794,
+ "Action": 3795,
+ "▁difficult": 3796,
+ "▁West": 3797,
+ "▁video": 3798,
+ "▁THE": 3799,
+ "▁decl": 3800,
+ "ondon": 3801,
+ "ded": 3802,
+ "}{\\": 3803,
+ "ocr": 3804,
+ "▁City": 3805,
+ "▁я": 3806,
+ "uer": 3807,
+ "cz": 3808,
+ "▁imag": 3809,
+ "cr": 3810,
+ "ete": 3811,
+ "idget": 3812,
+ "▁Mod": 3813,
+ "▁forward": 3814,
+ "▁pict": 3815,
+ "orge": 3816,
+ "▁subject": 3817,
+ "update": 3818,
+ "attle": 3819,
+ "sa": 3820,
+ "▁Ant": 3821,
+ "▁running": 3822,
+ "▁sal": 3823,
+ "conne": 3824,
+ "▁output": 3825,
+ "adata": 3826,
+ "ML": 3827,
+ "Check": 3828,
+ "ledge": 3829,
+ "▁paper": 3830,
+ "params": 3831,
+ "avy": 3832,
+ "▁af": 3833,
+ "▁eine": 3834,
+ "▁jour": 3835,
+ "AY": 3836,
+ "▁itself": 3837,
+ "▁Str": 3838,
+ "style": 3839,
+ "That": 3840,
+ "▁million": 3841,
+ "▁language": 3842,
+ "OS": 3843,
+ "ving": 3844,
+ "▁ма": 3845,
+ "▁то": 3846,
+ ")(": 3847,
+ "▁buy": 3848,
+ "./": 3849,
+ "▁...": 3850,
+ "▁tried": 3851,
+ "▁compl": 3852,
+ "▁activ": 3853,
+ "apped": 3854,
+ "Button": 3855,
+ "Token": 3856,
+ "▁provided": 3857,
+ "iber": 3858,
+ "▁created": 3859,
+ "curity": 3860,
+ "End": 3861,
+ "ał": 3862,
+ "uster": 3863,
+ "izing": 3864,
+ "omb": 3865,
+ "▁sich": 3866,
+ "▁compon": 3867,
+ "▁See": 3868,
+ "▁uint": 3869,
+ "▁label": 3870,
+ "vol": 3871,
+ "ów": 3872,
+ "ocol": 3873,
+ "▁received": 3874,
+ "▁intern": 3875,
+ "це": 3876,
+ "Run": 3877,
+ "▁road": 3878,
+ "▁Oct": 3879,
+ "▁Comp": 3880,
+ "▁study": 3881,
+ "▁те": 3882,
+ "Act": 3883,
+ "▁tour": 3884,
+ "▁State": 3885,
+ "▁added": 3886,
+ "https": 3887,
+ "stream": 3888,
+ "▁lower": 3889,
+ "▁box": 3890,
+ "▁Sk": 3891,
+ "▁themselves": 3892,
+ "▁cross": 3893,
+ "▁echo": 3894,
+ "▁device": 3895,
+ "pose": 3896,
+ "▁games": 3897,
+ "PL": 3898,
+ "Window": 3899,
+ "ises": 3900,
+ "title": 3901,
+ "Stream": 3902,
+ "zt": 3903,
+ "▁Sw": 3904,
+ "▁role": 3905,
+ "iant": 3906,
+ "ku": 3907,
+ "sequ": 3908,
+ "▁late": 3909,
+ "▁sold": 3910,
+ "ря": 3911,
+ "Comm": 3912,
+ "▁entre": 3913,
+ "▁dog": 3914,
+ "device": 3915,
+ "Par": 3916,
+ "▁likely": 3917,
+ "^{-": 3918,
+ "▁len": 3919,
+ "▁Paul": 3920,
+ "▁tool": 3921,
+ "Off": 3922,
+ "▁famil": 3923,
+ "▁draw": 3924,
+ "apping": 3925,
+ "▁events": 3926,
+ "cret": 3927,
+ "rought": 3928,
+ "Content": 3929,
+ "▁software": 3930,
+ "ria": 3931,
+ "msg": 3932,
+ "gamma": 3933,
+ "▁hear": 3934,
+ "Oper": 3935,
+ "▁yourself": 3936,
+ "▁liter": 3937,
+ "emp": 3938,
+ "▁separ": 3939,
+ "▁З": 3940,
+ "▁title": 3941,
+ "Method": 3942,
+ "mathrm": 3943,
+ "▁slow": 3944,
+ "▁Rom": 3945,
+ "!!": 3946,
+ "▁tax": 3947,
+ "ска": 3948,
+ "emplate": 3949,
+ "oi": 3950,
+ "▁Art": 3951,
+ "false": 3952,
+ "astic": 3953,
+ "сть": 3954,
+ "ocket": 3955,
+ "▁ens": 3956,
+ "TO": 3957,
+ "amente": 3958,
+ "local": 3959,
+ "chie": 3960,
+ "▁pan": 3961,
+ "ний": 3962,
+ "chema": 3963,
+ "▁North": 3964,
+ "зо": 3965,
+ "▁>=": 3966,
+ "Aut": 3967,
+ "▁dig": 3968,
+ "▁seems": 3969,
+ "▁morning": 3970,
+ "sole": 3971,
+ "umer": 3972,
+ "delta": 3973,
+ "ité": 3974,
+ "abase": 3975,
+ "raf": 3976,
+ "▁observ": 3977,
+ "▁Est": 3978,
+ "▁seg": 3979,
+ "▁[]": 3980,
+ "▁Pres": 3981,
+ "iful": 3982,
+ "push": 3983,
+ "▁Off": 3984,
+ "ipe": 3985,
+ "ati": 3986,
+ "▁dim": 3987,
+ "ceed": 3988,
+ "Ent": 3989,
+ "____": 3990,
+ "entry": 3991,
+ "▁fight": 3992,
+ "▁cred": 3993,
+ "▁OR": 3994,
+ "▁Dep": 3995,
+ "${": 3996,
+ "лен": 3997,
+ "Create": 3998,
+ "▁April": 3999,
+ "ministr": 4000,
+ "FL": 4001,
+ "▁Ap": 4002,
+ "▁Here": 4003,
+ "private": 4004,
+ "Instance": 4005,
+ "iem": 4006,
+ "▁office": 4007,
+ "▁third": 4008,
+ "▁update": 4009,
+ "Line": 4010,
+ "tag": 4011,
+ "▁especially": 4012,
+ "▁года": 4013,
+ "▁cu": 4014,
+ "▁kill": 4015,
+ "aught": 4016,
+ "▁swe": 4017,
+ "Options": 4018,
+ "IM": 4019,
+ "CC": 4020,
+ "▁compan": 4021,
+ "just": 4022,
+ "▁While": 4023,
+ "izer": 4024,
+ "▁мо": 4025,
+ "ке": 4026,
+ "▁auto": 4027,
+ "▁band": 4028,
+ "мен": 4029,
+ "iques": 4030,
+ "▁ple": 4031,
+ "NO": 4032,
+ "▁OF": 4033,
+ "▁song": 4034,
+ "▁Acc": 4035,
+ "EXT": 4036,
+ "ensor": 4037,
+ "ining": 4038,
+ "▁lat": 4039,
+ "big": 4040,
+ "▁King": 4041,
+ "och": 4042,
+ "si": 4043,
+ "▁Hist": 4044,
+ "▁quality": 4045,
+ "mode": 4046,
+ "▁opportun": 4047,
+ "▁wouldn": 4048,
+ ":**": 4049,
+ "output": 4050,
+ "▁feet": 4051,
+ "▁mis": 4052,
+ "df": 4053,
+ "aging": 4054,
+ "▁ме": 4055,
+ "▁tro": 4056,
+ "▁defined": 4057,
+ "▁review": 4058,
+ "▁Fil": 4059,
+ ">>": 4060,
+ "▁princip": 4061,
+ "Base": 4062,
+ "dict": 4063,
+ "verage": 4064,
+ "icient": 4065,
+ "IF": 4066,
+ "▁hit": 4067,
+ "Page": 4068,
+ "▁perm": 4069,
+ "cel": 4070,
+ "ít": 4071,
+ "▁express": 4072,
+ "▁indic": 4073,
+ "▁September": 4074,
+ "image": 4075,
+ "▁products": 4076,
+ "▁media": 4077,
+ "change": 4078,
+ "igger": 4079,
+ "▁send": 4080,
+ "last": 4081,
+ "ming": 4082,
+ "pa": 4083,
+ "uary": 4084,
+ "▁speak": 4085,
+ "ный": 4086,
+ "ще": 4087,
+ "ysis": 4088,
+ "lying": 4089,
+ "▁ч": 4090,
+ "like": 4091,
+ "ры": 4092,
+ "ві": 4093,
+ "▁Mich": 4094,
+ "MO": 4095,
+ "▁Jah": 4096,
+ "ensive": 4097,
+ "▁share": 4098,
+ "▁development": 4099,
+ "CP": 4100,
+ "spec": 4101,
+ "▁fast": 4102,
+ "het": 4103,
+ "HO": 4104,
+ "▁particip": 4105,
+ "Block": 4106,
+ "▁viol": 4107,
+ "▁frame": 4108,
+ "▁qual": 4109,
+ "tre": 4110,
+ "▁Ф": 4111,
+ "▁toward": 4112,
+ "fg": 4113,
+ "Box": 4114,
+ "Column": 4115,
+ "▁milit": 4116,
+ "▁March": 4117,
+ "▁various": 4118,
+ "pass": 4119,
+ "▁Park": 4120,
+ "▁Ben": 4121,
+ "Frame": 4122,
+ "▁normal": 4123,
+ "open": 4124,
+ "px": 4125,
+ "▁phone": 4126,
+ "▁Even": 4127,
+ "▁ma": 4128,
+ "ibrary": 4129,
+ "Start": 4130,
+ "idden": 4131,
+ "rho": 4132,
+ "graph": 4133,
+ "acing": 4134,
+ "'.": 4135,
+ "arter": 4136,
+ "mes": 4137,
+ "inst": 4138,
+ "▁ir": 4139,
+ "active": 4140,
+ "▁fem": 4141,
+ "▁moved": 4142,
+ "▁store": 4143,
+ "▁price": 4144,
+ "\").": 4145,
+ "berg": 4146,
+ "▁nov": 4147,
+ "▁card": 4148,
+ "ellow": 4149,
+ "▁party": 4150,
+ "▁Mor": 4151,
+ "ael": 4152,
+ "▁percent": 4153,
+ "▁training": 4154,
+ "▁ing": 4155,
+ "imer": 4156,
+ "▁Sam": 4157,
+ "Default": 4158,
+ "▁fuck": 4159,
+ "▁complete": 4160,
+ "uid": 4161,
+ "▁details": 4162,
+ "▁led": 4163,
+ "Point": 4164,
+ "▁Count": 4165,
+ "▁regard": 4166,
+ "zo": 4167,
+ "▁Bro": 4168,
+ "▁recogn": 4169,
+ "▁Hol": 4170,
+ "UM": 4171,
+ "element": 4172,
+ "Mode": 4173,
+ "▁exam": 4174,
+ "▁EX": 4175,
+ "Image": 4176,
+ "verse": 4177,
+ "riter": 4178,
+ "soft": 4179,
+ "▁introdu": 4180,
+ "▁surpr": 4181,
+ "Buffer": 4182,
+ "lector": 4183,
+ "aren": 4184,
+ "anged": 4185,
+ "▁Pat": 4186,
+ "▁Pal": 4187,
+ "▁contr": 4188,
+ "Handler": 4189,
+ "▁features": 4190,
+ "iple": 4191,
+ "▁CON": 4192,
+ "Fil": 4193,
+ "▁Port": 4194,
+ "▁thinking": 4195,
+ "doc": 4196,
+ "wer": 4197,
+ "▁worked": 4198,
+ "PC": 4199,
+ "cm": 4200,
+ "dat": 4201,
+ "PRO": 4202,
+ "▁Every": 4203,
+ "▁era": 4204,
+ "▁First": 4205,
+ "gn": 4206,
+ "▁immedi": 4207,
+ "ovember": 4208,
+ "apan": 4209,
+ "▁extra": 4210,
+ "▁section": 4211,
+ "▁June": 4212,
+ "▁via": 4213,
+ "▁gone": 4214,
+ "come": 4215,
+ "▁stri": 4216,
+ "^\\": 4217,
+ "antly": 4218,
+ "▁arch": 4219,
+ "Source": 4220,
+ "▁conv": 4221,
+ "▁London": 4222,
+ "Number": 4223,
+ "▁questions": 4224,
+ "andid": 4225,
+ "▁played": 4226,
+ "env": 4227,
+ "▁School": 4228,
+ "▁natural": 4229,
+ "can": 4230,
+ "▁news": 4231,
+ "DR": 4232,
+ "▁chall": 4233,
+ "▁Soc": 4234,
+ "▁э": 4235,
+ "▁attempt": 4236,
+ "*}": 4237,
+ "Null": 4238,
+ "rote": 4239,
+ "▁bi": 4240,
+ "▁written": 4241,
+ "▁blood": 4242,
+ "▁happened": 4243,
+ "▁cause": 4244,
+ "ashing": 4245,
+ "▁William": 4246,
+ "adem": 4247,
+ "▁brought": 4248,
+ "▁display": 4249,
+ "ima": 4250,
+ "▁finally": 4251,
+ "tab": 4252,
+ "▁returned": 4253,
+ "ных": 4254,
+ "nie": 4255,
+ "▁q": 4256,
+ "▁hers": 4257,
+ "▁Pre": 4258,
+ "▁dou": 4259,
+ "buffer": 4260,
+ "▁effort": 4261,
+ "aine": 4262,
+ "xy": 4263,
+ "▁histor": 4264,
+ "enu": 4265,
+ "▁arriv": 4266,
+ "▁Dem": 4267,
+ "▁favor": 4268,
+ "▁handle": 4269,
+ "SET": 4270,
+ "▁Public": 4271,
+ "rupt": 4272,
+ "▁ur": 4273,
+ "▁force": 4274,
+ "▁és": 4275,
+ "ube": 4276,
+ "Pre": 4277,
+ "рі": 4278,
+ "iny": 4279,
+ "theta": 4280,
+ "isf": 4281,
+ "▁national": 4282,
+ "Equal": 4283,
+ "rench": 4284,
+ "▁wife": 4285,
+ "▁capt": 4286,
+ "▁Inter": 4287,
+ "tau": 4288,
+ "▁sleep": 4289,
+ "../../": 4290,
+ "▁issue": 4291,
+ "▁member": 4292,
+ "▁await": 4293,
+ "▁Dan": 4294,
+ "zi": 4295,
+ "inate": 4296,
+ "▁sym": 4297,
+ "chan": 4298,
+ "▁Jack": 4299,
+ "▁English": 4300,
+ "▁sz": 4301,
+ "ributes": 4302,
+ "▁ign": 4303,
+ "ál": 4304,
+ "▁appear": 4305,
+ "rad": 4306,
+ "idge": 4307,
+ "▁couple": 4308,
+ "▁ship": 4309,
+ "lig": 4310,
+ "web": 4311,
+ "▁usually": 4312,
+ "▁ready": 4313,
+ "▁vill": 4314,
+ "▁Why": 4315,
+ "ebru": 4316,
+ "▁grad": 4317,
+ "ords": 4318,
+ "▁inf": 4319,
+ "▁loss": 4320,
+ "▁od": 4321,
+ "▁Phil": 4322,
+ "server": 4323,
+ "▁Up": 4324,
+ "▁buff": 4325,
+ "▁filename": 4326,
+ "ABLE": 4327,
+ "iting": 4328,
+ "efore": 4329,
+ "()->": 4330,
+ "▁conditions": 4331,
+ "vm": 4332,
+ "eld": 4333,
+ "itz": 4334,
+ "▁Trans": 4335,
+ "▁weight": 4336,
+ "▁higher": 4337,
+ "▁rate": 4338,
+ "▁accom": 4339,
+ "vider": 4340,
+ "OM": 4341,
+ "▁ways": 4342,
+ "coming": 4343,
+ "▁lock": 4344,
+ "▁etc": 4345,
+ "▁avec": 4346,
+ "▁takes": 4347,
+ "▁Char": 4348,
+ "▁November": 4349,
+ "method": 4350,
+ "▁Austral": 4351,
+ "▁America": 4352,
+ "long": 4353,
+ "cember": 4354,
+ "▁political": 4355,
+ "flow": 4356,
+ "▁maybe": 4357,
+ "▁amb": 4358,
+ "Layout": 4359,
+ "iled": 4360,
+ "omen": 4361,
+ "ola": 4362,
+ "icip": 4363,
+ "partial": 4364,
+ "True": 4365,
+ "▁floor": 4366,
+ "▁Def": 4367,
+ "▁concern": 4368,
+ "yr": 4369,
+ "▁shows": 4370,
+ "ih": 4371,
+ "▁answer": 4372,
+ "acc": 4373,
+ "▁ball": 4374,
+ "▁Rev": 4375,
+ "▁sun": 4376,
+ "▁quickly": 4377,
+ "▁somet": 4378,
+ "mente": 4379,
+ "▁Mal": 4380,
+ "undred": 4381,
+ "▁issues": 4382,
+ "ecause": 4383,
+ "pes": 4384,
+ "▁player": 4385,
+ "▁parents": 4386,
+ "▁popular": 4387,
+ "▁mode": 4388,
+ "▁mention": 4389,
+ "NE": 4390,
+ "Load": 4391,
+ "▁regular": 4392,
+ "aved": 4393,
+ "?:": 4394,
+ "year": 4395,
+ "func": 4396,
+ "▁performance": 4397,
+ "▁July": 4398,
+ "thern": 4399,
+ "▁website": 4400,
+ "ford": 4401,
+ "PR": 4402,
+ "ela": 4403,
+ "level": 4404,
+ "uit": 4405,
+ "flags": 4406,
+ "▁worth": 4407,
+ "▁correspon": 4408,
+ "▁British": 4409,
+ "sim": 4410,
+ "▁alone": 4411,
+ "▁har": 4412,
+ "▁ones": 4413,
+ "obile": 4414,
+ "▁dru": 4415,
+ "chi": 4416,
+ "▁David": 4417,
+ "▁problems": 4418,
+ "▁column": 4419,
+ "();\r": 4420,
+ "ZE": 4421,
+ "▁relig": 4422,
+ "ological": 4423,
+ "▁region": 4424,
+ "ady": 4425,
+ "IO": 4426,
+ "ander": 4427,
+ "Net": 4428,
+ "▁built": 4429,
+ "▁install": 4430,
+ "▁approach": 4431,
+ "Cur": 4432,
+ "▁fine": 4433,
+ "▁talking": 4434,
+ "▁changes": 4435,
+ "Style": 4436,
+ "▁Mart": 4437,
+ "лю": 4438,
+ "response": 4439,
+ "teger": 4440,
+ "{\r": 4441,
+ "irit": 4442,
+ "▁protected": 4443,
+ "▁rele": 4444,
+ "ership": 4445,
+ "тель": 4446,
+ "unsigned": 4447,
+ "ialize": 4448,
+ "▁https": 4449,
+ "Tag": 4450,
+ "▁$(": 4451,
+ "more": 4452,
+ "ypes": 4453,
+ "▁stream": 4454,
+ "etch": 4455,
+ "▁engine": 4456,
+ "KE": 4457,
+ "cmd": 4458,
+ "script": 4459,
+ "ttp": 4460,
+ "▁avoid": 4461,
+ "▁terr": 4462,
+ "▁rock": 4463,
+ "▁ful": 4464,
+ "Update": 4465,
+ "▁environment": 4466,
+ "▁prec": 4467,
+ "▁са": 4468,
+ "▁cases": 4469,
+ "▁offset": 4470,
+ "▁rais": 4471,
+ "lib": 4472,
+ "ées": 4473,
+ "aa": 4474,
+ "yt": 4475,
+ "▁arr": 4476,
+ "opyright": 4477,
+ "first": 4478,
+ "▁util": 4479,
+ "▁feature": 4480,
+ "posed": 4481,
+ "ffect": 4482,
+ "жа": 4483,
+ "itude": 4484,
+ "ements": 4485,
+ "asc": 4486,
+ "ador": 4487,
+ "lections": 4488,
+ "▁club": 4489,
+ "]{": 4490,
+ "▁*)": 4491,
+ "ство": 4492,
+ "▁imm": 4493,
+ "▁former": 4494,
+ "▁rights": 4495,
+ "▁decided": 4496,
+ "▁rev": 4497,
+ "▁ment": 4498,
+ "ani": 4499,
+ "▁stru": 4500,
+ "▁attention": 4501,
+ "artment": 4502,
+ "▁Ital": 4503,
+ "alle": 4504,
+ "▁bis": 4505,
+ "gener": 4506,
+ "▁integr": 4507,
+ "ello": 4508,
+ "rypt": 4509,
+ "▁achie": 4510,
+ "nes": 4511,
+ "▁stra": 4512,
+ "sb": 4513,
+ "▁types": 4514,
+ "▁RE": 4515,
+ "Init": 4516,
+ "▁comment": 4517,
+ "▁addition": 4518,
+ "▁ID": 4519,
+ "ART": 4520,
+ "FO": 4521,
+ "щи": 4522,
+ "Conne": 4523,
+ "▁squ": 4524,
+ "▁considered": 4525,
+ "idad": 4526,
+ "▁October": 4527,
+ "cial": 4528,
+ "▁Of": 4529,
+ "▁travel": 4530,
+ "▁boy": 4531,
+ "').": 4532,
+ "uy": 4533,
+ "illa": 4534,
+ "istry": 4535,
+ "▁va": 4536,
+ "▁Che": 4537,
+ "ERT": 4538,
+ "ende": 4539,
+ "ungen": 4540,
+ "aby": 4541,
+ "▁Rober": 4542,
+ "▁playing": 4543,
+ "ils": 4544,
+ "▁sam": 4545,
+ "▁execut": 4546,
+ "▁Us": 4547,
+ "▁mut": 4548,
+ "▁bal": 4549,
+ "asse": 4550,
+ "▁kids": 4551,
+ "▁financ": 4552,
+ "gor": 4553,
+ "▁Sec": 4554,
+ "bert": 4555,
+ "▁High": 4556,
+ "▁је": 4557,
+ "▁kept": 4558,
+ "button": 4559,
+ "itory": 4560,
+ "▁Rem": 4561,
+ "▁DE": 4562,
+ "▁reach": 4563,
+ "▁bur": 4564,
+ "Label": 4565,
+ "át": 4566,
+ "ago": 4567,
+ "▁passed": 4568,
+ "▁behav": 4569,
+ "xFF": 4570,
+ "▁Return": 4571,
+ "STR": 4572,
+ "▁Les": 4573,
+ "▁ord": 4574,
+ "ala": 4575,
+ "inger": 4576,
+ "▁Since": 4577,
+ "▁experi": 4578,
+ "▁shall": 4579,
+ "▁star": 4580,
+ "non": 4581,
+ "▁gun": 4582,
+ "▁Bel": 4583,
+ "▁obj": 4584,
+ "ares": 4585,
+ "rs": 4586,
+ "▁weeks": 4587,
+ "nen": 4588,
+ "▁Stre": 4589,
+ "oring": 4590,
+ "▁î": 4591,
+ "▁serious": 4592,
+ "times": 4593,
+ "▁House": 4594,
+ "▁roll": 4595,
+ "▁register": 4596,
+ "▁module": 4597,
+ "▁applic": 4598,
+ "IR": 4599,
+ "▁cook": 4600,
+ "aux": 4601,
+ "▁save": 4602,
+ "▁Cr": 4603,
+ ",\r": 4604,
+ "▁states": 4605,
+ "▁empty": 4606,
+ "▁autom": 4607,
+ "figure": 4608,
+ "iance": 4609,
+ "▁happy": 4610,
+ "▁fn": 4611,
+ "▁jud": 4612,
+ "▁hat": 4613,
+ "ACK": 4614,
+ "▁Fe": 4615,
+ "$-": 4616,
+ "ivil": 4617,
+ "oted": 4618,
+ "▁sizeof": 4619,
+ "▁situation": 4620,
+ "▁lives": 4621,
+ "▁feeling": 4622,
+ "▁risk": 4623,
+ "▁January": 4624,
+ "▁Object": 4625,
+ "▁recomm": 4626,
+ "▁вы": 4627,
+ "▁potential": 4628,
+ "eah": 4629,
+ "▁complex": 4630,
+ "printf": 4631,
+ "istance": 4632,
+ "irth": 4633,
+ "lik": 4634,
+ "aste": 4635,
+ "▁whose": 4636,
+ "Arg": 4637,
+ "▁modern": 4638,
+ "iones": 4639,
+ "▁че": 4640,
+ "▁sett": 4641,
+ "▁Mag": 4642,
+ "ae": 4643,
+ "▁condition": 4644,
+ "Length": 4645,
+ "▁fit": 4646,
+ "ounds": 4647,
+ "▁changed": 4648,
+ "▁guy": 4649,
+ "filter": 4650,
+ "atever": 4651,
+ "éd": 4652,
+ "remove": 4653,
+ "▁hop": 4654,
+ "▁Out": 4655,
+ "▁Rich": 4656,
+ "child": 4657,
+ "▁included": 4658,
+ "$\\": 4659,
+ "▁Tom": 4660,
+ "eline": 4661,
+ "▁sometimes": 4662,
+ "▁drink": 4663,
+ "▁quant": 4664,
+ "▁please": 4665,
+ "▁Int": 4666,
+ "rief": 4667,
+ "▁exactly": 4668,
+ "cing": 4669,
+ "▁allowed": 4670,
+ "build": 4671,
+ "▁beautiful": 4672,
+ "▁Well": 4673,
+ "▁looks": 4674,
+ "▁ü": 4675,
+ "▁chance": 4676,
+ "▁wrote": 4677,
+ "▁nor": 4678,
+ "▁failed": 4679,
+ "Met": 4680,
+ "▁prior": 4681,
+ "▁hundred": 4682,
+ "ской": 4683,
+ "oria": 4684,
+ "▁cy": 4685,
+ "▁web": 4686,
+ "▁mess": 4687,
+ "leq": 4688,
+ "dy": 4689,
+ "tex": 4690,
+ "▁anim": 4691,
+ "atur": 4692,
+ "▁structure": 4693,
+ "option": 4694,
+ "▁actual": 4695,
+ "▁Franc": 4696,
+ "enced": 4697,
+ ".": 4698,
+ "▁flow": 4699,
+ "▁Afr": 4700,
+ "det": 4701,
+ "▁Ke": 4702,
+ "ety": 4703,
+ "ский": 4704,
+ "▁stuff": 4705,
+ "itter": 4706,
+ "▁args": 4707,
+ "▁album": 4708,
+ "▁]": 4709,
+ "ugin": 4710,
+ "SU": 4711,
+ "Per": 4712,
+ "▁circ": 4713,
+ "▁correct": 4714,
+ "▁lines": 4715,
+ "▁completely": 4716,
+ "known": 4717,
+ "▁tree": 4718,
+ "root": 4719,
+ "▁Japan": 4720,
+ "oles": 4721,
+ "endo": 4722,
+ "▁location": 4723,
+ "▁Х": 4724,
+ "▁mid": 4725,
+ "aling": 4726,
+ "GL": 4727,
+ "iano": 4728,
+ "▁{}": 4729,
+ "lang": 4730,
+ "▁equip": 4731,
+ "ERROR": 4732,
+ "▁memory": 4733,
+ "▁(\"": 4734,
+ "▁nature": 4735,
+ "google": 4736,
+ "abs": 4737,
+ "BC": 4738,
+ "▁gets": 4739,
+ "Command": 4740,
+ "TER": 4741,
+ "aled": 4742,
+ "cp": 4743,
+ "▁purch": 4744,
+ "▁Den": 4745,
+ "▁herself": 4746,
+ "▁Ir": 4747,
+ "▁sie": 4748,
+ "gar": 4749,
+ "Ap": 4750,
+ "▁nel": 4751,
+ "ota": 4752,
+ ")]": 4753,
+ "cor": 4754,
+ "acht": 4755,
+ "(*": 4756,
+ "irtual": 4757,
+ "▁police": 4758,
+ "▁skin": 4759,
+ "ship": 4760,
+ "efined": 4761,
+ "aughter": 4762,
+ "inding": 4763,
+ "▁Sl": 4764,
+ "▁influ": 4765,
+ "▁mount": 4766,
+ "▁az": 4767,
+ "▁wood": 4768,
+ "otes": 4769,
+ "ega": 4770,
+ "▁according": 4771,
+ "▁namespace": 4772,
+ "Delta": 4773,
+ "stant": 4774,
+ "▁published": 4775,
+ "aker": 4776,
+ "▁Black": 4777,
+ "ln": 4778,
+ "▁industry": 4779,
+ "SON": 4780,
+ "Rep": 4781,
+ "▁choice": 4782,
+ "▁inn": 4783,
+ "kl": 4784,
+ "▁pal": 4785,
+ "▁aud": 4786,
+ "▁standard": 4787,
+ "▁knowledge": 4788,
+ "**,": 4789,
+ "▁Frank": 4790,
+ "sq": 4791,
+ "Output": 4792,
+ "▁för": 4793,
+ "Valid": 4794,
+ "ugh": 4795,
+ "▁books": 4796,
+ "▁James": 4797,
+ "ko": 4798,
+ "▁companies": 4799,
+ "anning": 4800,
+ "▁vict": 4801,
+ "▁repl": 4802,
+ "▁sche": 4803,
+ "▁happen": 4804,
+ "fty": 4805,
+ "acity": 4806,
+ "ira": 4807,
+ "▁implement": 4808,
+ "ского": 4809,
+ "number": 4810,
+ "SH": 4811,
+ "iro": 4812,
+ "▁fear": 4813,
+ "▁touch": 4814,
+ "▁cast": 4815,
+ "ASS": 4816,
+ "▁consist": 4817,
+ "Task": 4818,
+ "▁sig": 4819,
+ "ба": 4820,
+ "igation": 4821,
+ "▁Most": 4822,
+ "▁Der": 4823,
+ "}(\\": 4824,
+ ":\"": 4825,
+ "▁Fig": 4826,
+ "ali": 4827,
+ "iner": 4828,
+ "'),": 4829,
+ "▁Coun": 4830,
+ "(_": 4831,
+ "▁distributed": 4832,
+ "NAME": 4833,
+ "▁mur": 4834,
+ "▁career": 4835,
+ "~~": 4836,
+ "pers": 4837,
+ "aries": 4838,
+ "enses": 4839,
+ "▁Also": 4840,
+ "Version": 4841,
+ "▁unique": 4842,
+ "▁France": 4843,
+ "BA": 4844,
+ "ky": 4845,
+ "▁Febru": 4846,
+ "▁died": 4847,
+ "omega": 4848,
+ "▁Form": 4849,
+ "▁width": 4850,
+ "tocol": 4851,
+ "▁lie": 4852,
+ "She": 4853,
+ "ém": 4854,
+ "▁straight": 4855,
+ "▁nach": 4856,
+ "▁stood": 4857,
+ "olds": 4858,
+ "▁goes": 4859,
+ "cell": 4860,
+ "▁till": 4861,
+ "LI": 4862,
+ "draw": 4863,
+ "▁satisf": 4864,
+ "▁reading": 4865,
+ "ATION": 4866,
+ "▁Are": 4867,
+ "▁Ac": 4868,
+ ")*": 4869,
+ "▁additional": 4870,
+ "wood": 4871,
+ "cil": 4872,
+ "пу": 4873,
+ "ULT": 4874,
+ "▁bill": 4875,
+ "mas": 4876,
+ "ania": 4877,
+ "су": 4878,
+ "anz": 4879,
+ "height": 4880,
+ "jo": 4881,
+ "▁dos": 4882,
+ "\\\"": 4883,
+ "▁/>": 4884,
+ "▁production": 4885,
+ "iger": 4886,
+ "▁ст": 4887,
+ "show": 4888,
+ "▁population": 4889,
+ "▁park": 4890,
+ "▁Ze": 4891,
+ "▁necessary": 4892,
+ "▁trust": 4893,
+ "▁shown": 4894,
+ "module": 4895,
+ "GE": 4896,
+ "▁lay": 4897,
+ "▁announ": 4898,
+ "▁className": 4899,
+ "▁calcul": 4900,
+ "Function": 4901,
+ "▁Sal": 4902,
+ "OK": 4903,
+ "TP": 4904,
+ "▁entry": 4905,
+ "▁Stud": 4906,
+ "▁items": 4907,
+ "▁security": 4908,
+ "Entry": 4909,
+ "float": 4910,
+ "ls": 4911,
+ "ibly": 4912,
+ "▁contribut": 4913,
+ "▁Check": 4914,
+ "MD": 4915,
+ "▁improve": 4916,
+ "Part": 4917,
+ "▁systems": 4918,
+ "Bl": 4919,
+ "▁policy": 4920,
+ "▁screen": 4921,
+ "▁Any": 4922,
+ "▁opened": 4923,
+ "alloc": 4924,
+ "▁December": 4925,
+ "▁É": 4926,
+ "▁email": 4927,
+ "ader": 4928,
+ "=>": 4929,
+ "▁Hen": 4930,
+ "▁info": 4931,
+ "▁float": 4932,
+ "▁switch": 4933,
+ "ран": 4934,
+ "urance": 4935,
+ "▁assum": 4936,
+ "ustr": 4937,
+ "▁groups": 4938,
+ "▁Read": 4939,
+ "▁wat": 4940,
+ "Sp": 4941,
+ "вер": 4942,
+ "RAN": 4943,
+ "hib": 4944,
+ "ALL": 4945,
+ "▁hus": 4946,
+ "Spec": 4947,
+ "\"))": 4948,
+ "▁French": 4949,
+ "▁Class": 4950,
+ "▁president": 4951,
+ "▁definit": 4952,
+ "▁Nor": 4953,
+ "▁Thom": 4954,
+ "aign": 4955,
+ "Width": 4956,
+ "Do": 4957,
+ "▁{@": 4958,
+ "agon": 4959,
+ "▁Lu": 4960,
+ "▁followed": 4961,
+ "MM": 4962,
+ "asons": 4963,
+ "tmp": 4964,
+ "▁throws": 4965,
+ "ITY": 4966,
+ "ном": 4967,
+ "▁fair": 4968,
+ "▁pen": 4969,
+ "ég": 4970,
+ "▁interface": 4971,
+ "▁saf": 4972,
+ "oon": 4973,
+ "Back": 4974,
+ "▁speed": 4975,
+ "▁extends": 4976,
+ "empty": 4977,
+ "▁пере": 4978,
+ "▁proper": 4979,
+ "▁driv": 4980,
+ "фи": 4981,
+ "▁center": 4982,
+ "header": 4983,
+ "▁})": 4984,
+ "wa": 4985,
+ "▁middle": 4986,
+ "▁choose": 4987,
+ "▁Stad": 4988,
+ "SO": 4989,
+ "Factory": 4990,
+ "Dev": 4991,
+ "icles": 4992,
+ "▁application": 4993,
+ "▁models": 4994,
+ "pite": 4995,
+ "cap": 4996,
+ "xi": 4997,
+ "ospital": 4998,
+ "▁dream": 4999,
+ "END": 5000,
+ "▁contract": 5001,
+ "icrosoft": 5002,
+ "▁thous": 5003,
+ "izes": 5004,
+ "▁да": 5005,
+ "▁CO": 5006,
+ "▁direction": 5007,
+ "▁``": 5008,
+ "▁drive": 5009,
+ "Max": 5010,
+ "cia": 5011,
+ "▁continu": 5012,
+ "▁Alex": 5013,
+ "▁gold": 5014,
+ "▁prep": 5015,
+ "▁origin": 5016,
+ "▁rap": 5017,
+ "Op": 5018,
+ "ously": 5019,
+ "▁areas": 5020,
+ "PORT": 5021,
+ "она": 5022,
+ "▁safe": 5023,
+ "▁professional": 5024,
+ "apache": 5025,
+ "▁temper": 5026,
+ "sz": 5027,
+ "▁unit": 5028,
+ "▁cop": 5029,
+ "eqn": 5030,
+ "Listener": 5031,
+ "▁format": 5032,
+ "select": 5033,
+ "▁comfort": 5034,
+ "▁meant": 5035,
+ "iday": 5036,
+ "eme": 5037,
+ "▁active": 5038,
+ "▁note": 5039,
+ "▁Mil": 5040,
+ "only": 5041,
+ "▁<=": 5042,
+ "▁neigh": 5043,
+ "ao": 5044,
+ "▁blue": 5045,
+ "▁TV": 5046,
+ "Child": 5047,
+ "▁reached": 5048,
+ "Address": 5049,
+ "ств": 5050,
+ "▁closed": 5051,
+ "inder": 5052,
+ "olo": 5053,
+ "▁alt": 5054,
+ "▁adm": 5055,
+ "Format": 5056,
+ "UI": 5057,
+ "▁Ham": 5058,
+ "▁frequ": 5059,
+ "▁independ": 5060,
+ "▁easily": 5061,
+ "▁Land": 5062,
+ "▁tor": 5063,
+ "ography": 5064,
+ "infty": 5065,
+ "▁Work": 5066,
+ "iven": 5067,
+ "▁County": 5068,
+ "▁src": 5069,
+ "}$,": 5070,
+ "parse": 5071,
+ "CD": 5072,
+ "▁Cour": 5073,
+ "▁fol": 5074,
+ "Entity": 5075,
+ "pgf": 5076,
+ "▁China": 5077,
+ "▁Sub": 5078,
+ "hood": 5079,
+ "▁fields": 5080,
+ "▁yes": 5081,
+ "rend": 5082,
+ "▁towards": 5083,
+ "▁staff": 5084,
+ "▁Air": 5085,
+ "▁station": 5086,
+ "atives": 5087,
+ "▁impact": 5088,
+ "вы": 5089,
+ "▁directly": 5090,
+ "issions": 5091,
+ "iva": 5092,
+ "|\\": 5093,
+ "Ptr": 5094,
+ "▁Sant": 5095,
+ "Pol": 5096,
+ "▁progress": 5097,
+ "itar": 5098,
+ "▁parts": 5099,
+ "▁plant": 5100,
+ "▁absolut": 5101,
+ "▁guess": 5102,
+ "eqref": 5103,
+ "▁tim": 5104,
+ "▁Lou": 5105,
+ "▁cool": 5106,
+ "alu": 5107,
+ "▁mouth": 5108,
+ "них": 5109,
+ "▁height": 5110,
+ "gest": 5111,
+ "▁Post": 5112,
+ "▁board": 5113,
+ "▁tit": 5114,
+ "▁hour": 5115,
+ "▁server": 5116,
+ "▁players": 5117,
+ "rier": 5118,
+ "Link": 5119,
+ "▁President": 5120,
+ "](": 5121,
+ "▁construct": 5122,
+ "handle": 5123,
+ "}$.": 5124,
+ "rying": 5125,
+ "▁shop": 5126,
+ "iana": 5127,
+ "exp": 5128,
+ "Helper": 5129,
+ "Offset": 5130,
+ "aches": 5131,
+ "▁connection": 5132,
+ "▁difference": 5133,
+ "service": 5134,
+ "▁gas": 5135,
+ "▁priv": 5136,
+ "▁univers": 5137,
+ "▁wish": 5138,
+ "Rem": 5139,
+ "Url": 5140,
+ "geb": 5141,
+ "So": 5142,
+ "ensions": 5143,
+ "Module": 5144,
+ "SIZE": 5145,
+ "▁prem": 5146,
+ "window": 5147,
+ "▁dies": 5148,
+ "del": 5149,
+ "▁row": 5150,
+ "▁average": 5151,
+ "xim": 5152,
+ "▁pu": 5153,
+ "anç": 5154,
+ "Det": 5155,
+ "ker": 5156,
+ "ya": 5157,
+ "▁Det": 5158,
+ "▁på": 5159,
+ "▁named": 5160,
+ "▁decision": 5161,
+ "win": 5162,
+ "▁George": 5163,
+ "arily": 5164,
+ "▁solution": 5165,
+ "▁multiple": 5166,
+ "ategy": 5167,
+ "▁learning": 5168,
+ "▁secret": 5169,
+ "DO": 5170,
+ "▁nice": 5171,
+ "////////////////": 5172,
+ "Su": 5173,
+ "itation": 5174,
+ "▁join": 5175,
+ "▁elements": 5176,
+ "▁emer": 5177,
+ "tilde": 5178,
+ "▁dep": 5179,
+ "▁shot": 5180,
+ "▁platform": 5181,
+ "othing": 5182,
+ "My": 5183,
+ "edia": 5184,
+ "oms": 5185,
+ "aily": 5186,
+ "([": 5187,
+ "▁dress": 5188,
+ "▁official": 5189,
+ "estern": 5190,
+ "▁discover": 5191,
+ "▁mi": 5192,
+ "ные": 5193,
+ "CA": 5194,
+ "oding": 5195,
+ "▁Found": 5196,
+ "▁affect": 5197,
+ "Vis": 5198,
+ "stract": 5199,
+ "iced": 5200,
+ "debug": 5201,
+ "▁related": 5202,
+ "▁spect": 5203,
+ "ushed": 5204,
+ "сько": 5205,
+ "▁bank": 5206,
+ "▁cele": 5207,
+ "AND": 5208,
+ "olf": 5209,
+ "ем": 5210,
+ "▁fill": 5211,
+ "▁gives": 5212,
+ "▁бу": 5213,
+ "aron": 5214,
+ "▁Jes": 5215,
+ "REG": 5216,
+ "▁sudd": 5217,
+ "dated": 5218,
+ "vi": 5219,
+ "▁gi": 5220,
+ "send": 5221,
+ "cpp": 5222,
+ "▁spent": 5223,
+ "ande": 5224,
+ "▁operation": 5225,
+ "process": 5226,
+ "▁inform": 5227,
+ "▁Free": 5228,
+ "yond": 5229,
+ "▁perhaps": 5230,
+ "▁surv": 5231,
+ "▁Loc": 5232,
+ "▁concl": 5233,
+ "▁раз": 5234,
+ "▁Over": 5235,
+ "hol": 5236,
+ "raz": 5237,
+ "Write": 5238,
+ "▁giving": 5239,
+ "rd": 5240,
+ "instance": 5241,
+ "▁released": 5242,
+ "▁Ro": 5243,
+ "RA": 5244,
+ "▁practice": 5245,
+ "▁graph": 5246,
+ "▁increase": 5247,
+ "▁figure": 5248,
+ "Filter": 5249,
+ "HECK": 5250,
+ "idx": 5251,
+ "▁glass": 5252,
+ "ski": 5253,
+ "comes": 5254,
+ "▁cat": 5255,
+ "▁cold": 5256,
+ "goto": 5257,
+ "ufact": 5258,
+ "▁Copyright": 5259,
+ "}}\\": 5260,
+ "▁streng": 5261,
+ "▁dir": 5262,
+ "token": 5263,
+ "▁occur": 5264,
+ "arlier": 5265,
+ "▁measure": 5266,
+ "▁sec": 5267,
+ "▁más": 5268,
+ "▁Net": 5269,
+ "▁argument": 5270,
+ "▁sou": 5271,
+ "▁moving": 5272,
+ "▁prefer": 5273,
+ "mask": 5274,
+ "<<": 5275,
+ "▁breath": 5276,
+ "▁physical": 5277,
+ "▁positive": 5278,
+ "▁sor": 5279,
+ "▁depart": 5280,
+ "▁remove": 5281,
+ "▁kit": 5282,
+ "▁meeting": 5283,
+ "▁Data": 5284,
+ "ograf": 5285,
+ "actions": 5286,
+ "▁parameters": 5287,
+ "▁Att": 5288,
+ "esch": 5289,
+ "▁involved": 5290,
+ "ät": 5291,
+ "LL": 5292,
+ "Bar": 5293,
+ "▁си": 5294,
+ "ech": 5295,
+ "GET": 5296,
+ "▁prevent": 5297,
+ "▁beyond": 5298,
+ "▁Other": 5299,
+ "än": 5300,
+ "byte": 5301,
+ "▁sudden": 5302,
+ "olve": 5303,
+ "▁но": 5304,
+ "LOG": 5305,
+ "unit": 5306,
+ "▁truth": 5307,
+ "rat": 5308,
+ "SD": 5309,
+ "▁eat": 5310,
+ "▁Mad": 5311,
+ "▁provides": 5312,
+ "▁session": 5313,
+ "Dele": 5314,
+ "▁convers": 5315,
+ "center": 5316,
+ "▁continued": 5317,
+ "otion": 5318,
+ "cache": 5319,
+ "display": 5320,
+ "▁protect": 5321,
+ "ams": 5322,
+ "▁pow": 5323,
+ "CTION": 5324,
+ "▁Mac": 5325,
+ "mo": 5326,
+ "ха": 5327,
+ "▁distance": 5328,
+ "▁Time": 5329,
+ "gi": 5330,
+ "▁sequ": 5331,
+ "Target": 5332,
+ "сле": 5333,
+ "Server": 5334,
+ "▁wide": 5335,
+ "close": 5336,
+ "▁cru": 5337,
+ "Ext": 5338,
+ "▁select": 5339,
+ "▁pattern": 5340,
+ "\"));": 5341,
+ "Provider": 5342,
+ "URL": 5343,
+ "▁green": 5344,
+ "▁waiting": 5345,
+ "proto": 5346,
+ "▁immediately": 5347,
+ "common": 5348,
+ "azione": 5349,
+ "river": 5350,
+ "▁sen": 5351,
+ "▁!==": 5352,
+ "▁February": 5353,
+ "urb": 5354,
+ "▁Sen": 5355,
+ "dest": 5356,
+ "": 5357,
+ "▁edge": 5358,
+ "▁mais": 5359,
+ "gorith": 5360,
+ "cpu": 5361,
+ "▁education": 5362,
+ "▁associated": 5363,
+ "None": 5364,
+ "hi": 5365,
+ "▁poor": 5366,
+ "sem": 5367,
+ "▁Wil": 5368,
+ "▁bud": 5369,
+ "▁auch": 5370,
+ "eller": 5371,
+ "▁Life": 5372,
+ "▁files": 5373,
+ "▁leading": 5374,
+ "▁obtain": 5375,
+ "▁Jul": 5376,
+ "atory": 5377,
+ "гу": 5378,
+ "itable": 5379,
+ "▁onto": 5380,
+ "▁born": 5381,
+ "orem": 5382,
+ "▁Street": 5383,
+ "▁maint": 5384,
+ "Params": 5385,
+ "rip": 5386,
+ "▁ST": 5387,
+ "uv": 5388,
+ "main": 5389,
+ "▁▁▁▁▁▁▁": 5390,
+ "▁recent": 5391,
+ "Web": 5392,
+ "ova": 5393,
+ "ца": 5394,
+ "aise": 5395,
+ "yles": 5396,
+ "▁described": 5397,
+ "▁beginning": 5398,
+ "▁Day": 5399,
+ "▁Vol": 5400,
+ "▁huge": 5401,
+ "Has": 5402,
+ "ancy": 5403,
+ "Header": 5404,
+ "▁aren": 5405,
+ "ван": 5406,
+ "▁ensure": 5407,
+ "▁pet": 5408,
+ "mult": 5409,
+ "▁Like": 5410,
+ "▁management": 5411,
+ "PS": 5412,
+ "while": 5413,
+ "▁background": 5414,
+ "ounter": 5415,
+ "bool": 5416,
+ "FC": 5417,
+ "Num": 5418,
+ "RL": 5419,
+ "▁excl": 5420,
+ "▁eye": 5421,
+ "img": 5422,
+ "▁rom": 5423,
+ "▁Hel": 5424,
+ "Option": 5425,
+ "▁stopped": 5426,
+ "▁thread": 5427,
+ "totype": 5428,
+ ")))": 5429,
+ "▁stage": 5430,
+ "▁über": 5431,
+ "▁although": 5432,
+ "Types": 5433,
+ "▁Oh": 5434,
+ "▁eight": 5435,
+ "▁description": 5436,
+ "''": 5437,
+ "ön": 5438,
+ "▁surface": 5439,
+ "▁International": 5440,
+ "▁charg": 5441,
+ "▁collection": 5442,
+ "▁users": 5443,
+ "▁obvious": 5444,
+ "▁century": 5445,
+ "icks": 5446,
+ "▁article": 5447,
+ "▁\"\\": 5448,
+ "dim": 5449,
+ "▁sin": 5450,
+ "enge": 5451,
+ "Control": 5452,
+ "▁commit": 5453,
+ "ensity": 5454,
+ "▁tra": 5455,
+ "criptor": 5456,
+ "▁NOT": 5457,
+ "well": 5458,
+ "▁Michael": 5459,
+ "▁nod": 5460,
+ "▁mort": 5461,
+ "ivo": 5462,
+ "isation": 5463,
+ "▁Po": 5464,
+ "▁Paris": 5465,
+ "▁administr": 5466,
+ "burg": 5467,
+ "cdot": 5468,
+ "▁military": 5469,
+ "▁Best": 5470,
+ "▁Ка": 5471,
+ "INE": 5472,
+ "▁throughout": 5473,
+ "Sl": 5474,
+ "▁impl": 5475,
+ "control": 5476,
+ "▁Ч": 5477,
+ "▁uit": 5478,
+ "▁unsigned": 5479,
+ "▁Mary": 5480,
+ "Char": 5481,
+ "мі": 5482,
+ "▁threat": 5483,
+ "▁court": 5484,
+ "ville": 5485,
+ "▁ш": 5486,
+ "▁Cam": 5487,
+ ".\r": 5488,
+ "▁currently": 5489,
+ "rot": 5490,
+ "▁Date": 5491,
+ "▁shit": 5492,
+ "▁${\\": 5493,
+ "unn": 5494,
+ "Us": 5495,
+ "▁buffer": 5496,
+ "▁sont": 5497,
+ "▁letter": 5498,
+ "inated": 5499,
+ "Change": 5500,
+ "▁href": 5501,
+ "▁lack": 5502,
+ "▁oil": 5503,
+ "▁Cons": 5504,
+ "▁Jer": 5505,
+ "BUG": 5506,
+ "iforn": 5507,
+ "▁properties": 5508,
+ "▁random": 5509,
+ "▁brother": 5510,
+ "▁piece": 5511,
+ "бу": 5512,
+ "istics": 5513,
+ "▁technology": 5514,
+ "global": 5515,
+ "▁transform": 5516,
+ "erd": 5517,
+ "▁Because": 5518,
+ "PECT": 5519,
+ "pret": 5520,
+ "▁году": 5521,
+ "▁Met": 5522,
+ "▁psy": 5523,
+ "▁од": 5524,
+ "▁god": 5525,
+ "▁Del": 5526,
+ "based": 5527,
+ "▁voor": 5528,
+ "▁Call": 5529,
+ "SA": 5530,
+ "▁filter": 5531,
+ "▁includes": 5532,
+ "olutions": 5533,
+ "fd": 5534,
+ "▁wind": 5535,
+ "▁бо": 5536,
+ "▁ability": 5537,
+ "card": 5538,
+ "▁numer": 5539,
+ "address": 5540,
+ "▁goal": 5541,
+ "ashington": 5542,
+ "▁slight": 5543,
+ "aba": 5544,
+ "▁Log": 5545,
+ "Settings": 5546,
+ "adow": 5547,
+ "▁pi": 5548,
+ "iring": 5549,
+ "FT": 5550,
+ "▁numbers": 5551,
+ "conf": 5552,
+ "task": 5553,
+ "▁în": 5554,
+ "ты": 5555,
+ "▁receive": 5556,
+ "▁root": 5557,
+ "▁India": 5558,
+ "patch": 5559,
+ "él": 5560,
+ "▁summer": 5561,
+ "▁methods": 5562,
+ "▁places": 5563,
+ "▁Ма": 5564,
+ "▁capital": 5565,
+ "▁evidence": 5566,
+ "▁German": 5567,
+ "\\,": 5568,
+ "DA": 5569,
+ "ecute": 5570,
+ "column": 5571,
+ "▁functions": 5572,
+ "▁counter": 5573,
+ "▁arms": 5574,
+ "▁feed": 5575,
+ "vey": 5576,
+ "hent": 5577,
+ "MAX": 5578,
+ "▁acqu": 5579,
+ "▁apply": 5580,
+ "▁husband": 5581,
+ "▁killed": 5582,
+ "▁Spec": 5583,
+ "entity": 5584,
+ "▁earlier": 5585,
+ "▁Miss": 5586,
+ "▁setting": 5587,
+ "itect": 5588,
+ "▁ded": 5589,
+ "Row": 5590,
+ "▁ran": 5591,
+ "▁Yes": 5592,
+ "▁financial": 5593,
+ "session": 5594,
+ "lear": 5595,
+ "ishing": 5596,
+ "▁nearly": 5597,
+ "▁dur": 5598,
+ "▁machine": 5599,
+ "xff": 5600,
+ "bro": 5601,
+ "▁symbol": 5602,
+ "lands": 5603,
+ "Acc": 5604,
+ "di": 5605,
+ "▁Robert": 5606,
+ "prop": 5607,
+ "urity": 5608,
+ "▁#####": 5609,
+ "▁walked": 5610,
+ "▁international": 5611,
+ "▁Е": 5612,
+ "Yes": 5613,
+ "▁release": 5614,
+ "▁starting": 5615,
+ "static": 5616,
+ "▁bei": 5617,
+ "allow": 5618,
+ "▁People": 5619,
+ "ez": 5620,
+ "▁parameter": 5621,
+ "Cache": 5622,
+ "▁$$": 5623,
+ "ampions": 5624,
+ "▁Mer": 5625,
+ "▁kom": 5626,
+ "leted": 5627,
+ "ois": 5628,
+ "▁Open": 5629,
+ "types": 5630,
+ "▁fue": 5631,
+ "acters": 5632,
+ "▁reference": 5633,
+ "Equals": 5634,
+ "▁aware": 5635,
+ "▁hol": 5636,
+ "▁demand": 5637,
+ "lor": 5638,
+ "▁veh": 5639,
+ "▁notice": 5640,
+ "▁component": 5641,
+ "fn": 5642,
+ "▁analysis": 5643,
+ "match": 5644,
+ "▁effective": 5645,
+ "product": 5646,
+ "ник": 5647,
+ "▁legal": 5648,
+ "ей": 5649,
+ "semb": 5650,
+ "▁located": 5651,
+ "▁су": 5652,
+ "QL": 5653,
+ "inct": 5654,
+ "eto": 5655,
+ "Draw": 5656,
+ "▁scale": 5657,
+ "ров": 5658,
+ "▁wants": 5659,
+ "How": 5660,
+ "▁wel": 5661,
+ "isions": 5662,
+ "▁deliver": 5663,
+ "under": 5664,
+ "▁deb": 5665,
+ "▁ju": 5666,
+ "values": 5667,
+ "▁sister": 5668,
+ "ков": 5669,
+ "▁Create": 5670,
+ "▁Inc": 5671,
+ "▁aux": 5672,
+ "▁White": 5673,
+ "Menu": 5674,
+ "aud": 5675,
+ "resource": 5676,
+ "▁cab": 5677,
+ "▁lif": 5678,
+ "▁culture": 5679,
+ "iche": 5680,
+ "▁whatever": 5681,
+ "▁designed": 5682,
+ "▁repe": 5683,
+ "▁Mont": 5684,
+ "▁charge": 5685,
+ "Names": 5686,
+ "▁insp": 5687,
+ "▁customers": 5688,
+ "osa": 5689,
+ "▁daughter": 5690,
+ "▁East": 5691,
+ "EQ": 5692,
+ "▁opin": 5693,
+ "▁Fre": 5694,
+ "▁seek": 5695,
+ "▁push": 5696,
+ "▁nav": 5697,
+ "▁burn": 5698,
+ "arden": 5699,
+ "hash": 5700,
+ "▁opportunity": 5701,
+ "▁Mat": 5702,
+ "oyal": 5703,
+ "▁pun": 5704,
+ "scale": 5705,
+ "ynamic": 5706,
+ "▁Type": 5707,
+ "iling": 5708,
+ "▁query": 5709,
+ "▁mist": 5710,
+ "ror": 5711,
+ "force": 5712,
+ "▁Once": 5713,
+ "▁medical": 5714,
+ "lie": 5715,
+ "▁student": 5716,
+ "ederal": 5717,
+ "▁lov": 5718,
+ "iform": 5719,
+ "▁altern": 5720,
+ "bin": 5721,
+ "oder": 5722,
+ "▁returns": 5723,
+ "register": 5724,
+ "uts": 5725,
+ "CI": 5726,
+ "▁Tor": 5727,
+ "CR": 5728,
+ "▁Los": 5729,
+ "amily": 5730,
+ "aire": 5731,
+ "++;": 5732,
+ "Controller": 5733,
+ "wide": 5734,
+ "xx": 5735,
+ "rowser": 5736,
+ "▁Book": 5737,
+ "Container": 5738,
+ "pload": 5739,
+ "▁Ev": 5740,
+ "▁tal": 5741,
+ "▁theory": 5742,
+ "eqnarray": 5743,
+ "бе": 5744,
+ "▁reported": 5745,
+ "▁meaning": 5746,
+ "▁sy": 5747,
+ "ribe": 5748,
+ "icate": 5749,
+ "hold": 5750,
+ "▁offers": 5751,
+ "▁templ": 5752,
+ "css": 5753,
+ "▁picture": 5754,
+ "▁async": 5755,
+ "▁stock": 5756,
+ "▁internal": 5757,
+ "ti": 5758,
+ "BO": 5759,
+ "Ver": 5760,
+ "спо": 5761,
+ "▁demon": 5762,
+ "▁laugh": 5763,
+ "▁End": 5764,
+ "▁kon": 5765,
+ "▁ideas": 5766,
+ "▁candid": 5767,
+ "Mem": 5768,
+ "izz": 5769,
+ "refix": 5770,
+ "▁AND": 5771,
+ "egen": 5772,
+ "El": 5773,
+ "▁campaign": 5774,
+ "Http": 5775,
+ "▁Rob": 5776,
+ "ді": 5777,
+ "▁bul": 5778,
+ "▁Ко": 5779,
+ "▁countries": 5780,
+ "».": 5781,
+ "▁expression": 5782,
+ "▁England": 5783,
+ "sf": 5784,
+ "▁certainly": 5785,
+ "agen": 5786,
+ "▁ча": 5787,
+ "▁ANY": 5788,
+ "▁connect": 5789,
+ "FE": 5790,
+ "▁android": 5791,
+ "▁Gold": 5792,
+ "▁oppos": 5793,
+ "overn": 5794,
+ "▁Commun": 5795,
+ ",_": 5796,
+ "asion": 5797,
+ "La": 5798,
+ "▁firm": 5799,
+ "▁Although": 5800,
+ "▁Good": 5801,
+ "▁Law": 5802,
+ "erve": 5803,
+ "▁brand": 5804,
+ "Min": 5805,
+ "fill": 5806,
+ "'],": 5807,
+ "▁Jew": 5808,
+ "iler": 5809,
+ "ingle": 5810,
+ "ithub": 5811,
+ "▁Div": 5812,
+ "▁cert": 5813,
+ "Height": 5814,
+ "rael": 5815,
+ "There": 5816,
+ "itute": 5817,
+ "▁amaz": 5818,
+ "look": 5819,
+ "▁SE": 5820,
+ "▁jo": 5821,
+ "▁pulled": 5822,
+ "▁resources": 5823,
+ "▁Max": 5824,
+ "▁agreed": 5825,
+ "asy": 5826,
+ "▁treatment": 5827,
+ "\">": 5828,
+ "ман": 5829,
+ "▁Err": 5830,
+ "orig": 5831,
+ "cos": 5832,
+ "▁Maybe": 5833,
+ "otal": 5834,
+ "▁train": 5835,
+ "▁Service": 5836,
+ "▁ih": 5837,
+ "▁spirit": 5838,
+ "Comp": 5839,
+ "sqrt": 5840,
+ "▁broad": 5841,
+ "}[": 5842,
+ "▁shape": 5843,
+ "▁doc": 5844,
+ "how": 5845,
+ "▁tag": 5846,
+ "atalog": 5847,
+ "sd": 5848,
+ "▁meas": 5849,
+ "▁Ро": 5850,
+ "▁exception": 5851,
+ "▁Tw": 5852,
+ "▁interesting": 5853,
+ "ATA": 5854,
+ "▁Rel": 5855,
+ "ár": 5856,
+ "▁useful": 5857,
+ "useum": 5858,
+ "▁bottom": 5859,
+ "▁otherwise": 5860,
+ "▁agree": 5861,
+ "cht": 5862,
+ "then": 5863,
+ "▁significant": 5864,
+ "}/": 5865,
+ "▁channel": 5866,
+ "icial": 5867,
+ "тив": 5868,
+ "vare": 5869,
+ "▁enter": 5870,
+ "Eng": 5871,
+ "uj": 5872,
+ "URE": 5873,
+ "queue": 5874,
+ "ono": 5875,
+ "▁contains": 5876,
+ "MI": 5877,
+ "▁nation": 5878,
+ "▁rules": 5879,
+ "fol": 5880,
+ "▁pa": 5881,
+ "arp": 5882,
+ "▁quiet": 5883,
+ "▁thus": 5884,
+ "ipped": 5885,
+ "annot": 5886,
+ "udes": 5887,
+ "():": 5888,
+ "names": 5889,
+ "▁compos": 5890,
+ "▁inj": 5891,
+ "una": 5892,
+ "bind": 5893,
+ "▁fully": 5894,
+ "ras": 5895,
+ "Utils": 5896,
+ "anges": 5897,
+ "dule": 5898,
+ "▁Christian": 5899,
+ "▁reve": 5900,
+ "änd": 5901,
+ "▁collect": 5902,
+ "▁celebr": 5903,
+ "anda": 5904,
+ "ín": 5905,
+ "join": 5906,
+ "▁paid": 5907,
+ "Core": 5908,
+ "Ge": 5909,
+ ".$": 5910,
+ "▁fif": 5911,
+ "▁uma": 5912,
+ "▁~": 5913,
+ "ervices": 5914,
+ "▁recently": 5915,
+ "desc": 5916,
+ "▁heavy": 5917,
+ "▁rule": 5918,
+ "▁Please": 5919,
+ "psi": 5920,
+ "▁console": 5921,
+ "▁fort": 5922,
+ ".\\": 5923,
+ "▁Washington": 5924,
+ "▁gar": 5925,
+ "▁Group": 5926,
+ "▁interview": 5927,
+ "anned": 5928,
+ "sql": 5929,
+ "▁anc": 5930,
+ "ја": 5931,
+ "Pack": 5932,
+ "▁Club": 5933,
+ "▁mask": 5934,
+ "▁concept": 5935,
+ "▁['": 5936,
+ "▁selected": 5937,
+ "▁Use": 5938,
+ "▁ele": 5939,
+ "ears": 5940,
+ "▁race": 5941,
+ "hy": 5942,
+ "Om": 5943,
+ "▁steps": 5944,
+ "ila": 5945,
+ "ests": 5946,
+ "eds": 5947,
+ "▁street": 5948,
+ "ners": 5949,
+ "▁birth": 5950,
+ "pop": 5951,
+ "▁ли": 5952,
+ "MB": 5953,
+ "кра": 5954,
+ "cir": 5955,
+ "epsilon": 5956,
+ "▁constant": 5957,
+ "ques": 5958,
+ "adas": 5959,
+ "▁knows": 5960,
+ "▁Py": 5961,
+ "cles": 5962,
+ "▁cit": 5963,
+ "▁pair": 5964,
+ "inese": 5965,
+ "▁Peter": 5966,
+ "▁finished": 5967,
+ "▁master": 5968,
+ "▁twenty": 5969,
+ "▁fell": 5970,
+ "▁central": 5971,
+ "▁mes": 5972,
+ "rev": 5973,
+ "STAT": 5974,
+ "stat": 5975,
+ "▁allows": 5976,
+ "▁gro": 5977,
+ "Click": 5978,
+ "▁stories": 5979,
+ "Fe": 5980,
+ "år": 5981,
+ "▁baby": 5982,
+ "encia": 5983,
+ "▁einer": 5984,
+ "Are": 5985,
+ "ebug": 5986,
+ "store": 5987,
+ "\",\"": 5988,
+ "lam": 5989,
+ "▁sv": 5990,
+ "ции": 5991,
+ "NULL": 5992,
+ "▁Leg": 5993,
+ "▁movie": 5994,
+ "▁hous": 5995,
+ "▁learned": 5996,
+ "bon": 5997,
+ "▁transfer": 5998,
+ "ifornia": 5999,
+ "psilon": 6000,
+ "▁Soft": 6001,
+ "▁commer": 6002,
+ "▁hadn": 6003,
+ "▁Ein": 6004,
+ "▁Two": 6005,
+ "craft": 6006,
+ "Process": 6007,
+ "▁под": 6008,
+ "argin": 6009,
+ "▁estim": 6010,
+ "▁Mem": 6011,
+ "ika": 6012,
+ "▁Tod": 6013,
+ "duc": 6014,
+ "▁danger": 6015,
+ "rive": 6016,
+ "Don": 6017,
+ "▁Que": 6018,
+ "hal": 6019,
+ "▁mm": 6020,
+ "▁Sur": 6021,
+ "Order": 6022,
+ "▁distribution": 6023,
+ "fa": 6024,
+ "▁Many": 6025,
+ "plicit": 6026,
+ "Empty": 6027,
+ "Handle": 6028,
+ "▁token": 6029,
+ "▁epis": 6030,
+ "▁assist": 6031,
+ "▁purpose": 6032,
+ "▁ц": 6033,
+ "NU": 6034,
+ "iders": 6035,
+ "rate": 6036,
+ "They": 6037,
+ "Parameter": 6038,
+ "Dec": 6039,
+ "▁strugg": 6040,
+ "▁shoot": 6041,
+ "IV": 6042,
+ "▁Great": 6043,
+ "▁Sil": 6044,
+ "▁loved": 6045,
+ "▁click": 6046,
+ "▁reserv": 6047,
+ "▁ве": 6048,
+ "▁spread": 6049,
+ "▁og": 6050,
+ "▁${": 6051,
+ "▁miles": 6052,
+ "▁successful": 6053,
+ "oj": 6054,
+ "▁Direct": 6055,
+ "▁ax": 6056,
+ "▁growth": 6057,
+ "Work": 6058,
+ "▁church": 6059,
+ "Inst": 6060,
+ "ICE": 6061,
+ "sten": 6062,
+ "род": 6063,
+ "▁Center": 6064,
+ "ses": 6065,
+ "got": 6066,
+ "delete": 6067,
+ "▁Ma": 6068,
+ "%%": 6069,
+ "▁crow": 6070,
+ "DF": 6071,
+ "front": 6072,
+ "▁blog": 6073,
+ "▁computer": 6074,
+ "ная": 6075,
+ "▁mir": 6076,
+ "▁Super": 6077,
+ "','": 6078,
+ "▁multi": 6079,
+ "▁gru": 6080,
+ "▁Jo": 6081,
+ "▁Canada": 6082,
+ "▁Thomas": 6083,
+ "▁larger": 6084,
+ "▁compar": 6085,
+ "Current": 6086,
+ "that": 6087,
+ "▁drop": 6088,
+ "ент": 6089,
+ "▁Republic": 6090,
+ "▁dise": 6091,
+ "▁effects": 6092,
+ "▁girls": 6093,
+ "encies": 6094,
+ "ellig": 6095,
+ "▁Note": 6096,
+ "▁Associ": 6097,
+ "▁uses": 6098,
+ "elled": 6099,
+ "▁warm": 6100,
+ "thread": 6101,
+ "font": 6102,
+ "▁zum": 6103,
+ "▁follows": 6104,
+ "▁whom": 6105,
+ "TA": 6106,
+ "▁wild": 6107,
+ "▁AR": 6108,
+ "iable": 6109,
+ "▁True": 6110,
+ "Position": 6111,
+ "▁sell": 6112,
+ "cher": 6113,
+ "▁Bus": 6114,
+ "▁lean": 6115,
+ "ACE": 6116,
+ "▁served": 6117,
+ "hw": 6118,
+ "▁Cur": 6119,
+ "▁north": 6120,
+ "Dat": 6121,
+ "▁>>": 6122,
+ "command": 6123,
+ "atz": 6124,
+ "▁mal": 6125,
+ "став": 6126,
+ "▁Press": 6127,
+ "▁characters": 6128,
+ "▁zero": 6129,
+ "AGE": 6130,
+ "rapper": 6131,
+ "▁kitchen": 6132,
+ "aming": 6133,
+ "▁restr": 6134,
+ "XX": 6135,
+ "▁College": 6136,
+ "▁Array": 6137,
+ "▁fresh": 6138,
+ "▁shift": 6139,
+ "▁specified": 6140,
+ "plete": 6141,
+ "ITE": 6142,
+ "▁Camp": 6143,
+ "rial": 6144,
+ "cb": 6145,
+ "▁TH": 6146,
+ "IB": 6147,
+ "osen": 6148,
+ "▁ú": 6149,
+ "▁params": 6150,
+ "ignment": 6151,
+ "adding": 6152,
+ "▁degree": 6153,
+ "Local": 6154,
+ "Oh": 6155,
+ "▁zur": 6156,
+ "▁levels": 6157,
+ "CS": 6158,
+ "finished": 6159,
+ "Case": 6160,
+ "riage": 6161,
+ "Vector": 6162,
+ "▁sea": 6163,
+ "antic": 6164,
+ "▁League": 6165,
+ "▁therefore": 6166,
+ "One": 6167,
+ "Return": 6168,
+ "Access": 6169,
+ "vas": 6170,
+ "▁ос": 6171,
+ "▁rat": 6172,
+ "Big": 6173,
+ "▁behavior": 6174,
+ "kr": 6175,
+ "▁undefined": 6176,
+ "▁Es": 6177,
+ "▁appeared": 6178,
+ "eles": 6179,
+ "▁WAR": 6180,
+ "Stat": 6181,
+ "▁Google": 6182,
+ "▁credit": 6183,
+ "▁File": 6184,
+ "anging": 6185,
+ "house": 6186,
+ "romise": 6187,
+ "gent": 6188,
+ "▁habit": 6189,
+ "▁society": 6190,
+ "▁encour": 6191,
+ "▁paint": 6192,
+ "pet": 6193,
+ "▁UK": 6194,
+ "aws": 6195,
+ "onom": 6196,
+ "Gl": 6197,
+ "}_{\\": 6198,
+ "eless": 6199,
+ "emy": 6200,
+ "▁Cong": 6201,
+ "▁developed": 6202,
+ "▁images": 6203,
+ "▁ö": 6204,
+ "▁font": 6205,
+ "clear": 6206,
+ "gin": 6207,
+ "▁Lord": 6208,
+ "▁transport": 6209,
+ "▁::": 6210,
+ "▁cup": 6211,
+ "ulate": 6212,
+ "▁During": 6213,
+ "priv": 6214,
+ "▁extrem": 6215,
+ "▁Di": 6216,
+ "▁doubt": 6217,
+ "Py": 6218,
+ "ifying": 6219,
+ "split": 6220,
+ "ego": 6221,
+ "github": 6222,
+ "▁),": 6223,
+ "ROM": 6224,
+ "▁chair": 6225,
+ "▁trade": 6226,
+ "▁nicht": 6227,
+ "Top": 6228,
+ "Store": 6229,
+ "▁parte": 6230,
+ "project": 6231,
+ "nia": 6232,
+ "▁від": 6233,
+ "war": 6234,
+ "▁Prof": 6235,
+ "▁caught": 6236,
+ "Thread": 6237,
+ "ства": 6238,
+ "author": 6239,
+ "▁doll": 6240,
+ "▁harm": 6241,
+ "▁Gen": 6242,
+ "tree": 6243,
+ "etime": 6244,
+ "cfg": 6245,
+ "▁guys": 6246,
+ "▁California": 6247,
+ "▁Green": 6248,
+ "▁movement": 6249,
+ "iej": 6250,
+ "▁statement": 6251,
+ "▁seeing": 6252,
+ "▁haven": 6253,
+ "vention": 6254,
+ "SL": 6255,
+ "chedul": 6256,
+ "iert": 6257,
+ "▁primary": 6258,
+ "▁civil": 6259,
+ "rian": 6260,
+ "▁button": 6261,
+ "▁lived": 6262,
+ "Pass": 6263,
+ "sor": 6264,
+ "▁watching": 6265,
+ "▁skills": 6266,
+ "tee": 6267,
+ "Level": 6268,
+ "▁scient": 6269,
+ "hs": 6270,
+ "▁agre": 6271,
+ "cat": 6272,
+ "▁tend": 6273,
+ "▁Mill": 6274,
+ "▁Cap": 6275,
+ "ORD": 6276,
+ "gle": 6277,
+ "▁сво": 6278,
+ "»,": 6279,
+ "▁ahead": 6280,
+ "vest": 6281,
+ "▁Jose": 6282,
+ "ischer": 6283,
+ "și": 6284,
+ "▁leaving": 6285,
+ "▁для": 6286,
+ "▁south": 6287,
+ "▁consum": 6288,
+ "Range": 6289,
+ "▁activities": 6290,
+ "Sec": 6291,
+ "▁sales": 6292,
+ "▁fix": 6293,
+ "▁jed": 6294,
+ "rum": 6295,
+ "vector": 6296,
+ "▁spot": 6297,
+ "▁manufact": 6298,
+ "кт": 6299,
+ "orrow": 6300,
+ "sign": 6301,
+ "▁college": 6302,
+ "▁driver": 6303,
+ "▁definitely": 6304,
+ "▁spend": 6305,
+ "mission": 6306,
+ "зу": 6307,
+ "atively": 6308,
+ "bi": 6309,
+ "Callback": 6310,
+ "▁particularly": 6311,
+ "▁hell": 6312,
+ "▁pool": 6313,
+ "PRE": 6314,
+ "▁clearly": 6315,
+ "PT": 6316,
+ "othes": 6317,
+ "▁Id": 6318,
+ "Location": 6319,
+ "▁Run": 6320,
+ "▁fixed": 6321,
+ "▁Hand": 6322,
+ "bal": 6323,
+ "double": 6324,
+ "Can": 6325,
+ "Omega": 6326,
+ "▁challeng": 6327,
+ "▁standing": 6328,
+ "iten": 6329,
+ "▁mechan": 6330,
+ "▁durch": 6331,
+ "▁dell": 6332,
+ "▁raised": 6333,
+ "▁weak": 6334,
+ "▁Du": 6335,
+ "grad": 6336,
+ "▁scene": 6337,
+ "poss": 6338,
+ "▁ton": 6339,
+ "▁earth": 6340,
+ "ulations": 6341,
+ "▁strength": 6342,
+ "aked": 6343,
+ "▁remain": 6344,
+ "▁Bi": 6345,
+ "▁customer": 6346,
+ "range": 6347,
+ "▁interested": 6348,
+ "ONE": 6349,
+ "▁coff": 6350,
+ "require": 6351,
+ "▁Only": 6352,
+ "▁Web": 6353,
+ "▁farm": 6354,
+ "▁activity": 6355,
+ "▁rout": 6356,
+ "bling": 6357,
+ "SY": 6358,
+ "▁Richard": 6359,
+ "▁Ref": 6360,
+ "▁кон": 6361,
+ "▁jun": 6362,
+ "born": 6363,
+ "ijn": 6364,
+ "Configuration": 6365,
+ "uman": 6366,
+ "EE": 6367,
+ "▁married": 6368,
+ "▁За": 6369,
+ "▁fat": 6370,
+ "▁kid": 6371,
+ "▁Tur": 6372,
+ "▁offered": 6373,
+ "nic": 6374,
+ "▁Big": 6375,
+ "Gamma": 6376,
+ "▁Health": 6377,
+ "▁TR": 6378,
+ "▁się": 6379,
+ "▁construction": 6380,
+ "▁Church": 6381,
+ "▁Bet": 6382,
+ "bus": 6383,
+ "▁earn": 6384,
+ "rict": 6385,
+ "▁пра": 6386,
+ "▁brain": 6387,
+ "▁fra": 6388,
+ "▁Op": 6389,
+ "FIG": 6390,
+ "ema": 6391,
+ "▁European": 6392,
+ "▁Saint": 6393,
+ "ARE": 6394,
+ "uri": 6395,
+ "▁River": 6396,
+ "{}": 6397,
+ "▁sitting": 6398,
+ "▁understanding": 6399,
+ "▁plans": 6400,
+ "ropri": 6401,
+ "▁older": 6402,
+ "▁pressure": 6403,
+ "Impl": 6404,
+ "▁peace": 6405,
+ "Connection": 6406,
+ "▁fi": 6407,
+ "rich": 6408,
+ "▁shut": 6409,
+ "apers": 6410,
+ "Port": 6411,
+ "▁Look": 6412,
+ "rim": 6413,
+ "auth": 6414,
+ "auto": 6415,
+ "▁highly": 6416,
+ "▁unless": 6417,
+ "▁Wal": 6418,
+ "▁ren": 6419,
+ "ws": 6420,
+ "▁core": 6421,
+ "(-": 6422,
+ "▁clim": 6423,
+ "ruit": 6424,
+ "▁callback": 6425,
+ "hest": 6426,
+ "▁Charles": 6427,
+ "▁Long": 6428,
+ "}=": 6429,
+ "ър": 6430,
+ "▁shared": 6431,
+ "ulated": 6432,
+ "gorithm": 6433,
+ "▁Home": 6434,
+ "▁village": 6435,
+ "ees": 6436,
+ "sv": 6437,
+ "▁restaur": 6438,
+ "rey": 6439,
+ "▁Cast": 6440,
+ "▁Person": 6441,
+ "кий": 6442,
+ "▁organiz": 6443,
+ "▁Rad": 6444,
+ "ponents": 6445,
+ "▁werden": 6446,
+ "▁bow": 6447,
+ "sen": 6448,
+ "ami": 6449,
+ "Interface": 6450,
+ "▁basis": 6451,
+ "▁Company": 6452,
+ "ernel": 6453,
+ "itu": 6454,
+ "Hash": 6455,
+ "▁aan": 6456,
+ "▁х": 6457,
+ "▁smile": 6458,
+ "xml": 6459,
+ "▁scen": 6460,
+ "amm": 6461,
+ "tool": 6462,
+ "aria": 6463,
+ "▁accur": 6464,
+ "settings": 6465,
+ "▁Jesus": 6466,
+ "acement": 6467,
+ "power": 6468,
+ "(!": 6469,
+ "▁calls": 6470,
+ "▁basic": 6471,
+ "▁settings": 6472,
+ "ript": 6473,
+ "pool": 6474,
+ "ctors": 6475,
+ "▁Foundation": 6476,
+ "▁weap": 6477,
+ "KEY": 6478,
+ "foot": 6479,
+ "▁radio": 6480,
+ "▁helped": 6481,
+ "mann": 6482,
+ "▁jump": 6483,
+ "▁tick": 6484,
+ "▁growing": 6485,
+ "aten": 6486,
+ "real": 6487,
+ "▁increasing": 6488,
+ "Device": 6489,
+ "varepsilon": 6490,
+ "▁sets": 6491,
+ "▁advant": 6492,
+ "Open": 6493,
+ "▁reasons": 6494,
+ "▁supposed": 6495,
+ "oes": 6496,
+ "ede": 6497,
+ "teen": 6498,
+ "ifdef": 6499,
+ "▁delete": 6500,
+ "▁&=": 6501,
+ "▁Bill": 6502,
+ "▁aim": 6503,
+ "▁Ok": 6504,
+ "▁Av": 6505,
+ "reci": 6506,
+ "acks": 6507,
+ "iste": 6508,
+ "Properties": 6509,
+ "▁tmp": 6510,
+ "▁dei": 6511,
+ "PER": 6512,
+ "DC": 6513,
+ "sta": 6514,
+ "нии": 6515,
+ "▁limited": 6516,
+ "▁greater": 6517,
+ "description": 6518,
+ "ori": 6519,
+ "aints": 6520,
+ "▁hy": 6521,
+ "▁Mel": 6522,
+ "▁CH": 6523,
+ "cons": 6524,
+ "▁surround": 6525,
+ "▁Who": 6526,
+ "arc": 6527,
+ "▁telev": 6528,
+ "itution": 6529,
+ "▁equal": 6530,
+ "кі": 6531,
+ "▁Israel": 6532,
+ "äh": 6533,
+ "▁Caption": 6534,
+ "▁exerc": 6535,
+ "empor": 6536,
+ "▁++": 6537,
+ "▁lib": 6538,
+ "make": 6539,
+ "▁MA": 6540,
+ "copy": 6541,
+ "friend": 6542,
+ "▁кото": 6543,
+ "▁damage": 6544,
+ "▁\\,": 6545,
+ "oded": 6546,
+ "▁none": 6547,
+ "▁evalu": 6548,
+ "ston": 6549,
+ ">,": 6550,
+ "FOR": 6551,
+ "▁norm": 6552,
+ "appe": 6553,
+ "Session": 6554,
+ "▁adult": 6555,
+ "▁hospital": 6556,
+ "▁recommend": 6557,
+ "property": 6558,
+ "stein": 6559,
+ "final": 6560,
+ "▁nu": 6561,
+ "second": 6562,
+ "▁aspect": 6563,
+ "\")]": 6564,
+ "жен": 6565,
+ "amento": 6566,
+ "▁rac": 6567,
+ "save": 6568,
+ "▁football": 6569,
+ "Ab": 6570,
+ "ungs": 6571,
+ "abil": 6572,
+ "▁Arch": 6573,
+ "system": 6574,
+ "hist": 6575,
+ "▁luck": 6576,
+ "render": 6577,
+ "▁sein": 6578,
+ "ioni": 6579,
+ "▁rot": 6580,
+ "▁corner": 6581,
+ "▁appropri": 6582,
+ "▁Software": 6583,
+ "▁tele": 6584,
+ "Delete": 6585,
+ "▁According": 6586,
+ "▁prison": 6587,
+ "▁lic": 6588,
+ "▁ми": 6589,
+ "term": 6590,
+ "sets": 6591,
+ "▁vel": 6592,
+ "▁rank": 6593,
+ "▁existing": 6594,
+ "▁Vir": 6595,
+ "▁trip": 6596,
+ "▁му": 6597,
+ "avax": 6598,
+ "▁ris": 6599,
+ "▁define": 6600,
+ "▁heat": 6601,
+ "car": 6602,
+ "▁convert": 6603,
+ "email": 6604,
+ "▁Under": 6605,
+ "▁Ш": 6606,
+ "▁Grand": 6607,
+ "▁exists": 6608,
+ "sys": 6609,
+ "eff": 6610,
+ "▁Top": 6611,
+ "▁č": 6612,
+ "▁tempor": 6613,
+ "▁arguments": 6614,
+ "▁supported": 6615,
+ "ensed": 6616,
+ "▁Francis": 6617,
+ "▁coord": 6618,
+ "▁achieve": 6619,
+ "▁Name": 6620,
+ "▁Jahr": 6621,
+ "▁Gi": 6622,
+ "she": 6623,
+ "▁Dev": 6624,
+ "▁alla": 6625,
+ "▁WIT": 6626,
+ "agment": 6627,
+ "custom": 6628,
+ "alls": 6629,
+ "&&": 6630,
+ "WE": 6631,
+ "▁holding": 6632,
+ "prototype": 6633,
+ "▁fing": 6634,
+ "▁bag": 6635,
+ "▁Party": 6636,
+ "stack": 6637,
+ "▁economic": 6638,
+ "▁Gal": 6639,
+ "idents": 6640,
+ "▁Jun": 6641,
+ "▁showed": 6642,
+ "osh": 6643,
+ "▁Bay": 6644,
+ "mail": 6645,
+ "▁SO": 6646,
+ "▁\"<": 6647,
+ "graphics": 6648,
+ "▁fu": 6649,
+ "click": 6650,
+ "▁battle": 6651,
+ "{{": 6652,
+ "▁Event": 6653,
+ "rior": 6654,
+ "chaft": 6655,
+ "▁favorite": 6656,
+ "usive": 6657,
+ "support": 6658,
+ "bm": 6659,
+ "Kind": 6660,
+ "▁safety": 6661,
+ "▁Ent": 6662,
+ "cup": 6663,
+ "▁Australia": 6664,
+ "▁destroy": 6665,
+ "▁organization": 6666,
+ "iden": 6667,
+ "################": 6668,
+ "dec": 6669,
+ "▁za": 6670,
+ "▁seven": 6671,
+ "arely": 6672,
+ "▁flag": 6673,
+ "Dir": 6674,
+ "▁Carl": 6675,
+ "▁doctor": 6676,
+ "▁variety": 6677,
+ "▁Lin": 6678,
+ "▁tom": 6679,
+ "^{(": 6680,
+ "Bo": 6681,
+ "antes": 6682,
+ "▁mine": 6683,
+ "▁Mit": 6684,
+ "▁describe": 6685,
+ "Args": 6686,
+ "LS": 6687,
+ "API": 6688,
+ "▁Luc": 6689,
+ "phone": 6690,
+ "▁science": 6691,
+ "▁Oper": 6692,
+ "Next": 6693,
+ "▁investig": 6694,
+ "▁demonstr": 6695,
+ "▁Govern": 6696,
+ "▁objects": 6697,
+ "▁Louis": 6698,
+ "▁Returns": 6699,
+ "▁han": 6700,
+ "nam": 6701,
+ "▁comme": 6702,
+ "▁presence": 6703,
+ "▁pel": 6704,
+ "▁detect": 6705,
+ ")=": 6706,
+ "▁Chinese": 6707,
+ "▁rich": 6708,
+ "▁classes": 6709,
+ "▁expand": 6710,
+ "▁Dom": 6711,
+ "▁Dec": 6712,
+ "sn": 6713,
+ "peed": 6714,
+ "▁Jim": 6715,
+ "should": 6716,
+ "▁Smith": 6717,
+ "▁pages": 6718,
+ "▁Jean": 6719,
+ "rics": 6720,
+ "▁Sund": 6721,
+ "ads": 6722,
+ "▁Their": 6723,
+ "unicip": 6724,
+ "ву": 6725,
+ "▁download": 6726,
+ "▁stress": 6727,
+ "▁Pet": 6728,
+ "menu": 6729,
+ "reme": 6730,
+ "▁compared": 6731,
+ "Ste": 6732,
+ "IND": 6733,
+ "container": 6734,
+ "▁Indian": 6735,
+ "oren": 6736,
+ "▁ses": 6737,
+ "▁Whe": 6738,
+ "▁roku": 6739,
+ "▁established": 6740,
+ "▁generally": 6741,
+ "▁fle": 6742,
+ "__(": 6743,
+ "=\"+": 6744,
+ "Var": 6745,
+ "▁Make": 6746,
+ "▁removed": 6747,
+ "zz": 6748,
+ "ün": 6749,
+ "▁mix": 6750,
+ "erk": 6751,
+ "iation": 6752,
+ "outer": 6753,
+ "SK": 6754,
+ "▁becomes": 6755,
+ "▁Hall": 6756,
+ "scious": 6757,
+ "▁watched": 6758,
+ "▁gather": 6759,
+ "▁Result": 6760,
+ "proof": 6761,
+ "pay": 6762,
+ "▁produced": 6763,
+ "▁|=": 6764,
+ "▁border": 6765,
+ "▁din": 6766,
+ "▁script": 6767,
+ "▁actions": 6768,
+ "▁mas": 6769,
+ "ща": 6770,
+ "ooth": 6771,
+ "▁Techn": 6772,
+ "Json": 6773,
+ "▁filled": 6774,
+ "ден": 6775,
+ "undle": 6776,
+ "сту": 6777,
+ "Tool": 6778,
+ "▁king": 6779,
+ "▁ven": 6780,
+ "stra": 6781,
+ "▁predict": 6782,
+ "▁lui": 6783,
+ "▁WARRAN": 6784,
+ "▁Fun": 6785,
+ "Script": 6786,
+ "▁powerful": 6787,
+ "▁lose": 6788,
+ "atically": 6789,
+ "▁daily": 6790,
+ "▁ring": 6791,
+ "▁arrived": 6792,
+ "Stack": 6793,
+ "scope": 6794,
+ "▁Back": 6795,
+ "elij": 6796,
+ "▁ze": 6797,
+ "keys": 6798,
+ "{\"": 6799,
+ "VID": 6800,
+ "▁license": 6801,
+ "what": 6802,
+ "▁proced": 6803,
+ "rant": 6804,
+ "estival": 6805,
+ "agram": 6806,
+ "▁LO": 6807,
+ "▁Henry": 6808,
+ "▁flags": 6809,
+ "Down": 6810,
+ "scription": 6811,
+ "▁families": 6812,
+ "isse": 6813,
+ "bour": 6814,
+ "▁Bur": 6815,
+ "—\"": 6816,
+ "▁brief": 6817,
+ "▁creating": 6818,
+ "▁clients": 6819,
+ "rangle": 6820,
+ "▁amazing": 6821,
+ "▁sind": 6822,
+ "▁covered": 6823,
+ "Well": 6824,
+ "сте": 6825,
+ "тор": 6826,
+ "▁Bas": 6827,
+ "total": 6828,
+ "▁Init": 6829,
+ "▁sand": 6830,
+ "Unit": 6831,
+ "▁murder": 6832,
+ "▁bright": 6833,
+ "▁trav": 6834,
+ "icans": 6835,
+ "▁attribute": 6836,
+ "fc": 6837,
+ "▁placed": 6838,
+ "EST": 6839,
+ "Vari": 6840,
+ "▁cos": 6841,
+ "▁attract": 6842,
+ "anel": 6843,
+ "}).": 6844,
+ "bytes": 6845,
+ "▁parse": 6846,
+ "▁belong": 6847,
+ "BN": 6848,
+ "▁Sol": 6849,
+ "Po": 6850,
+ "`,": 6851,
+ "▁calling": 6852,
+ "▁?>": 6853,
+ "▁iter": 6854,
+ "▁url": 6855,
+ "▁evening": 6856,
+ "reek": 6857,
+ "▁honest": 6858,
+ "▁director": 6859,
+ "RC": 6860,
+ "▁solid": 6861,
+ "▁phil": 6862,
+ "iene": 6863,
+ "FAULT": 6864,
+ "cope": 6865,
+ "▁History": 6866,
+ "▁Team": 6867,
+ "reedom": 6868,
+ "▁ru": 6869,
+ "UB": 6870,
+ "▁worse": 6871,
+ "imo": 6872,
+ "Mat": 6873,
+ "▁Mex": 6874,
+ "actor": 6875,
+ "▁vor": 6876,
+ "ться": 6877,
+ "▁experiment": 6878,
+ "▁Play": 6879,
+ "▁Another": 6880,
+ "▁happens": 6881,
+ "uan": 6882,
+ "▁patients": 6883,
+ "▁rend": 6884,
+ "▁Mo": 6885,
+ "▁Tex": 6886,
+ "▁wed": 6887,
+ "tn": 6888,
+ "insert": 6889,
+ "▁па": 6890,
+ "▁anti": 6891,
+ "Match": 6892,
+ "ampionship": 6893,
+ "▁forces": 6894,
+ "▁Hot": 6895,
+ "▁phase": 6896,
+ "▁template": 6897,
+ "stop": 6898,
+ "icated": 6899,
+ "▁managed": 6900,
+ "wait": 6901,
+ "▁*(": 6902,
+ "GB": 6903,
+ "▁appoint": 6904,
+ "ła": 6905,
+ "▁stick": 6906,
+ "▁FOR": 6907,
+ "▁Vis": 6908,
+ "tor": 6909,
+ "▁př": 6910,
+ "quest": 6911,
+ "uses": 6912,
+ "\");\r": 6913,
+ "▁suddenly": 6914,
+ "éc": 6915,
+ "ND": 6916,
+ "urop": 6917,
+ "ред": 6918,
+ "▁insurance": 6919,
+ "access": 6920,
+ "unfinished": 6921,
+ "▁tamb": 6922,
+ "▁sac": 6923,
+ "▁Court": 6924,
+ "▁missing": 6925,
+ "▁Where": 6926,
+ "▁Sum": 6927,
+ "}^{\\": 6928,
+ "▁sua": 6929,
+ "_,": 6930,
+ "▁thick": 6931,
+ "▁Trump": 6932,
+ "▁operations": 6933,
+ "FS": 6934,
+ "▁deux": 6935,
+ "dz": 6936,
+ "Template": 6937,
+ "▁\"/": 6938,
+ "▁odd": 6939,
+ "▁reality": 6940,
+ "▁teams": 6941,
+ "▁cer": 6942,
+ "oma": 6943,
+ "▁și": 6944,
+ "▁cloud": 6945,
+ "▁Department": 6946,
+ "Ne": 6947,
+ "▁requires": 6948,
+ "items": 6949,
+ "▁III": 6950,
+ "rightarrow": 6951,
+ ")->": 6952,
+ "▁writer": 6953,
+ "replace": 6954,
+ "▁thr": 6955,
+ "jen": 6956,
+ "▁ot": 6957,
+ "▁occup": 6958,
+ "▁eventually": 6959,
+ "▁Math": 6960,
+ "▁conserv": 6961,
+ "amer": 6962,
+ "▁Fort": 6963,
+ "▁dry": 6964,
+ "▁sexual": 6965,
+ "▁costs": 6966,
+ "▁forms": 6967,
+ "▁Vict": 6968,
+ "PAR": 6969,
+ "framework": 6970,
+ "▁ди": 6971,
+ "Operation": 6972,
+ "зна": 6973,
+ "which": 6974,
+ "▁tight": 6975,
+ "Invalid": 6976,
+ "▁partner": 6977,
+ "▁пред": 6978,
+ "▁thank": 6979,
+ "▁guard": 6980,
+ "hem": 6981,
+ "Body": 6982,
+ "▁emot": 6983,
+ "IX": 6984,
+ "fast": 6985,
+ "що": 6986,
+ "ño": 6987,
+ "night": 6988,
+ "▁Sci": 6989,
+ "ника": 6990,
+ "▁TO": 6991,
+ "▁individuals": 6992,
+ "сси": 6993,
+ "}),": 6994,
+ "False": 6995,
+ "(\"%": 6996,
+ "▁optim": 6997,
+ "▁-->": 6998,
+ "▁factor": 6999,
+ "▁smaller": 7000,
+ "▁contain": 7001,
+ "spect": 7002,
+ "Engine": 7003,
+ "▁announced": 7004,
+ "▁Democr": 7005,
+ "▁rob": 7006,
+ "▁flat": 7007,
+ "osoph": 7008,
+ "Search": 7009,
+ "ahl": 7010,
+ "▁Exception": 7011,
+ "▁Ol": 7012,
+ "equals": 7013,
+ "▁unter": 7014,
+ "shape": 7015,
+ "NS": 7016,
+ "Obj": 7017,
+ "▁species": 7018,
+ "weight": 7019,
+ "you": 7020,
+ "▁este": 7021,
+ "▁View": 7022,
+ "▁mission": 7023,
+ "▁journal": 7024,
+ "Values": 7025,
+ "▁einem": 7026,
+ "ismo": 7027,
+ "▁projects": 7028,
+ "▁Das": 7029,
+ "rible": 7030,
+ "▁serve": 7031,
+ "▁opening": 7032,
+ "▁hur": 7033,
+ "▁programs": 7034,
+ "▁USA": 7035,
+ "iliar": 7036,
+ "idos": 7037,
+ "Br": 7038,
+ "estamp": 7039,
+ "▁tools": 7040,
+ "anner": 7041,
+ "RT": 7042,
+ "▁Start": 7043,
+ "▁bath": 7044,
+ "▁coffee": 7045,
+ "orter": 7046,
+ "internal": 7047,
+ "files": 7048,
+ "INVAL": 7049,
+ "ako": 7050,
+ "dt": 7051,
+ "▁Second": 7052,
+ "▁alloc": 7053,
+ "▁ended": 7054,
+ "acional": 7055,
+ "▁manager": 7056,
+ "▁Sun": 7057,
+ "agg": 7058,
+ "▁leader": 7059,
+ "olved": 7060,
+ "▁что": 7061,
+ "▁traditional": 7062,
+ "shot": 7063,
+ "rup": 7064,
+ "CF": 7065,
+ "▁Each": 7066,
+ "wr": 7067,
+ "▁Som": 7068,
+ "▁materials": 7069,
+ "▁msg": 7070,
+ "▁syn": 7071,
+ "▁produce": 7072,
+ "▁storage": 7073,
+ "subsection": 7074,
+ "▁Sie": 7075,
+ "▁IP": 7076,
+ "CESS": 7077,
+ "▁wa": 7078,
+ "Record": 7079,
+ "▁marketing": 7080,
+ "plet": 7081,
+ "Dialog": 7082,
+ "▁mentioned": 7083,
+ "▁Na": 7084,
+ "▁Union": 7085,
+ "▁API": 7086,
+ "▁negative": 7087,
+ "txt": 7088,
+ "▁easier": 7089,
+ "legal": 7090,
+ "Dep": 7091,
+ "▁novel": 7092,
+ "eur": 7093,
+ "ació": 7094,
+ "▁Bud": 7095,
+ "▁carry": 7096,
+ "schaft": 7097,
+ "▁broken": 7098,
+ "▁trees": 7099,
+ ">();": 7100,
+ "▁emb": 7101,
+ "ieder": 7102,
+ "▁route": 7103,
+ "ikel": 7104,
+ "▁listen": 7105,
+ "ashion": 7106,
+ "▁Mrs": 7107,
+ "▁equipment": 7108,
+ "agger": 7109,
+ "▁Thus": 7110,
+ "▁matrix": 7111,
+ "alla": 7112,
+ "▁Tour": 7113,
+ "▁conversation": 7114,
+ "Mon": 7115,
+ "ournal": 7116,
+ "▁minute": 7117,
+ "Am": 7118,
+ "Api": 7119,
+ "▁forget": 7120,
+ "Me": 7121,
+ "levant": 7122,
+ "temp": 7123,
+ "▁telling": 7124,
+ "move": 7125,
+ "▁independent": 7126,
+ "toString": 7127,
+ "edit": 7128,
+ "▁Jac": 7129,
+ "azz": 7130,
+ "react": 7131,
+ "▁cin": 7132,
+ "▁Prov": 7133,
+ "isted": 7134,
+ "▁hash": 7135,
+ "onna": 7136,
+ "iki": 7137,
+ "▁generated": 7138,
+ "Render": 7139,
+ "▁psych": 7140,
+ "nav": 7141,
+ "▁entr": 7142,
+ "пра": 7143,
+ "rx": 7144,
+ "ATH": 7145,
+ "▁assume": 7146,
+ "Tree": 7147,
+ "sembly": 7148,
+ "▁Matt": 7149,
+ "caption": 7150,
+ "▁solutions": 7151,
+ "▁faith": 7152,
+ "▁digital": 7153,
+ "▁excell": 7154,
+ "▁Version": 7155,
+ "Debug": 7156,
+ "▁жи": 7157,
+ "▁carried": 7158,
+ "reset": 7159,
+ "▁slowly": 7160,
+ "ancing": 7161,
+ "▁owner": 7162,
+ "▁Ter": 7163,
+ "▁Did": 7164,
+ "▁gest": 7165,
+ "▁été": 7166,
+ "▁proof": 7167,
+ "Font": 7168,
+ "▁nob": 7169,
+ "Co": 7170,
+ "▁GNU": 7171,
+ "▁liber": 7172,
+ "itness": 7173,
+ "▁hij": 7174,
+ "▁vert": 7175,
+ "ша": 7176,
+ "FLAG": 7177,
+ "MENT": 7178,
+ "▁Son": 7179,
+ "Mult": 7180,
+ "▁district": 7181,
+ "connect": 7182,
+ "jection": 7183,
+ "lymp": 7184,
+ "▁realized": 7185,
+ "mos": 7186,
+ "ye": 7187,
+ "▁render": 7188,
+ "rio": 7189,
+ "▁interpret": 7190,
+ "▁slightly": 7191,
+ "fix": 7192,
+ "▁studies": 7193,
+ "▁rid": 7194,
+ "atre": 7195,
+ "▁benefits": 7196,
+ "▁Face": 7197,
+ "ivery": 7198,
+ "рия": 7199,
+ "document": 7200,
+ "▁asking": 7201,
+ "Last": 7202,
+ "arante": 7203,
+ "▁Martin": 7204,
+ "▁Ell": 7205,
+ "▁vector": 7206,
+ "▁forced": 7207,
+ "оло": 7208,
+ "PH": 7209,
+ "WR": 7210,
+ "▁Kl": 7211,
+ "▁sky": 7212,
+ "▁strategy": 7213,
+ "ocked": 7214,
+ "▁neck": 7215,
+ "ści": 7216,
+ "OUT": 7217,
+ ")),": 7218,
+ "Custom": 7219,
+ "▁wie": 7220,
+ "▁sweet": 7221,
+ "▁temp": 7222,
+ "▁foreign": 7223,
+ "▁hall": 7224,
+ "astr": 7225,
+ "Ass": 7226,
+ "MODE": 7227,
+ "▁maximum": 7228,
+ "annels": 7229,
+ "▁tip": 7230,
+ "▁seconds": 7231,
+ "▁stack": 7232,
+ "iga": 7233,
+ "▁raise": 7234,
+ "enable": 7235,
+ "oir": 7236,
+ "▁soul": 7237,
+ "Ke": 7238,
+ ")$.": 7239,
+ "▁Tim": 7240,
+ "ALSE": 7241,
+ "iser": 7242,
+ "contin": 7243,
+ "bel": 7244,
+ "▁mad": 7245,
+ "lichen": 7246,
+ "abe": 7247,
+ "safe": 7248,
+ "▁concent": 7249,
+ "bound": 7250,
+ "▁Requ": 7251,
+ "switch": 7252,
+ "▁stone": 7253,
+ "▁transl": 7254,
+ "▁vac": 7255,
+ "andon": 7256,
+ "▁Fore": 7257,
+ "▁sounds": 7258,
+ "▁Pop": 7259,
+ "▁HT": 7260,
+ "lia": 7261,
+ "enter": 7262,
+ "▁helps": 7263,
+ "edy": 7264,
+ "ствен": 7265,
+ "anted": 7266,
+ "▁Its": 7267,
+ "▁Step": 7268,
+ "Icon": 7269,
+ "▁EXPECT": 7270,
+ "ialized": 7271,
+ "Post": 7272,
+ "aze": 7273,
+ "▁Carol": 7274,
+ "▁req": 7275,
+ "▁critical": 7276,
+ "DS": 7277,
+ "▁seat": 7278,
+ "aped": 7279,
+ "▁upper": 7280,
+ "▁Sy": 7281,
+ "▁explain": 7282,
+ "▁'./": 7283,
+ "utils": 7284,
+ "possible": 7285,
+ "▁dont": 7286,
+ "Host": 7287,
+ "▁approxim": 7288,
+ "Async": 7289,
+ "▁grab": 7290,
+ "▁sources": 7291,
+ "▁Mos": 7292,
+ "▁Germany": 7293,
+ "▁rub": 7294,
+ "CHAN": 7295,
+ "▁rain": 7296,
+ "▁truly": 7297,
+ "▁joined": 7298,
+ "▁": 7299,
+ "▁Lo": 7300,
+ "Description": 7301,
+ "akt": 7302,
+ "▁Ann": 7303,
+ "^*": 7304,
+ "idae": 7305,
+ "(:": 7306,
+ "tw": 7307,
+ "Mar": 7308,
+ "produ": 7309,
+ "▁spoke": 7310,
+ "ют": 7311,
+ "▁walking": 7312,
+ "▁nodded": 7313,
+ "Props": 7314,
+ "Enabled": 7315,
+ "irk": 7316,
+ "FILE": 7317,
+ "equal": 7318,
+ "pping": 7319,
+ "oli": 7320,
+ "EV": 7321,
+ "enz": 7322,
+ "eting": 7323,
+ "▁sample": 7324,
+ "▁artist": 7325,
+ "[$": 7326,
+ "ità": 7327,
+ "йо": 7328,
+ "props": 7329,
+ "bu": 7330,
+ "ев": 7331,
+ "▁responsible": 7332,
+ "MT": 7333,
+ "▁caused": 7334,
+ "▁theme": 7335,
+ "▁Was": 7336,
+ "▁Before": 7337,
+ "acle": 7338,
+ "▁року": 7339,
+ "cu": 7340,
+ "DEV": 7341,
+ "▁hung": 7342,
+ "textbf": 7343,
+ "▁spin": 7344,
+ "▁latest": 7345,
+ "entially": 7346,
+ "▁Program": 7347,
+ "Metadata": 7348,
+ "password": 7349,
+ "▁hurt": 7350,
+ "кс": 7351,
+ "▁Aus": 7352,
+ "sey": 7353,
+ "allet": 7354,
+ "xF": 7355,
+ "▁Road": 7356,
+ "ется": 7357,
+ "▁rent": 7358,
+ "ция": 7359,
+ "▁Assert": 7360,
+ "іль": 7361,
+ "ück": 7362,
+ "▁sites": 7363,
+ "Document": 7364,
+ "▁obtained": 7365,
+ "▁ci": 7366,
+ "▁[\"": 7367,
+ "▁completed": 7368,
+ "aset": 7369,
+ "raid": 7370,
+ "▁sorry": 7371,
+ "▁fab": 7372,
+ "▁schools": 7373,
+ "ходи": 7374,
+ "▁scr": 7375,
+ "▁incor": 7376,
+ "▁'/": 7377,
+ "▁spr": 7378,
+ "▁Text": 7379,
+ "▁commercial": 7380,
+ "ingly": 7381,
+ "▁opinion": 7382,
+ "▁Star": 7383,
+ "Sign": 7384,
+ "▁javax": 7385,
+ "wi": 7386,
+ "lat": 7387,
+ "▁Key": 7388,
+ "varphi": 7389,
+ "ды": 7390,
+ "▁connected": 7391,
+ "▁adjust": 7392,
+ "▁Az": 7393,
+ "▁planning": 7394,
+ "---": 7395,
+ "Integer": 7396,
+ "auf": 7397,
+ "expected": 7398,
+ "▁fant": 7399,
+ "▁tou": 7400,
+ "Parent": 7401,
+ "▁Lat": 7402,
+ "▁thoughts": 7403,
+ "▁Jud": 7404,
+ "Parameters": 7405,
+ "Gr": 7406,
+ "ром": 7407,
+ "IA": 7408,
+ "▁Bob": 7409,
+ "lict": 7410,
+ "lan": 7411,
+ "omic": 7412,
+ "▁apart": 7413,
+ "▁trou": 7414,
+ "▁appreci": 7415,
+ "▁Christmas": 7416,
+ "irq": 7417,
+ "thon": 7418,
+ "▁Error": 7419,
+ "▁score": 7420,
+ "rome": 7421,
+ "▁neighbor": 7422,
+ "▁Mur": 7423,
+ "admin": 7424,
+ "▁Film": 7425,
+ "Rect": 7426,
+ "▁configuration": 7427,
+ "▁cs": 7428,
+ "gun": 7429,
+ "channel": 7430,
+ "▁Report": 7431,
+ "▁strateg": 7432,
+ "▁workers": 7433,
+ "fields": 7434,
+ "Schema": 7435,
+ "appa": 7436,
+ "olic": 7437,
+ "EO": 7438,
+ "▁Charl": 7439,
+ "▁Cup": 7440,
+ "png": 7441,
+ "▁Hill": 7442,
+ "owe": 7443,
+ "▁mostly": 7444,
+ "”.": 7445,
+ "▁finish": 7446,
+ "▁Со": 7447,
+ "▁stars": 7448,
+ "player": 7449,
+ "▁inner": 7450,
+ "component": 7451,
+ "tim": 7452,
+ "IE": 7453,
+ "▁ther": 7454,
+ "▁smart": 7455,
+ "▁sad": 7456,
+ "▁Council": 7457,
+ "area": 7458,
+ "lay": 7459,
+ "▁ба": 7460,
+ "▁gradu": 7461,
+ "▁chem": 7462,
+ "▁ho": 7463,
+ "Select": 7464,
+ "▁instr": 7465,
+ "▁kl": 7466,
+ "ifications": 7467,
+ "Long": 7468,
+ "▁sobre": 7469,
+ "▁Old": 7470,
+ "west": 7471,
+ "},\\": 7472,
+ "ingu": 7473,
+ "▁spring": 7474,
+ "▁nur": 7475,
+ "example": 7476,
+ "When": 7477,
+ "▁advice": 7478,
+ "▁ult": 7479,
+ "ennis": 7480,
+ "▁Love": 7481,
+ "▁\"\"": 7482,
+ "▁increased": 7483,
+ "▁finding": 7484,
+ "irty": 7485,
+ "istrict": 7486,
+ "▁layer": 7487,
+ "template": 7488,
+ "First": 7489,
+ "ным": 7490,
+ "igration": 7491,
+ "rency": 7492,
+ "owie": 7493,
+ "▁np": 7494,
+ "▁selection": 7495,
+ "▁Nach": 7496,
+ "▁PRO": 7497,
+ "▁polic": 7498,
+ "▁database": 7499,
+ "▁byte": 7500,
+ "▁providing": 7501,
+ "mac": 7502,
+ "▁metal": 7503,
+ "modules": 7504,
+ "▁Georg": 7505,
+ "▁Sa": 7506,
+ "▁establish": 7507,
+ "...\"": 7508,
+ "iu": 7509,
+ "kin": 7510,
+ "▁eth": 7511,
+ "▁Sand": 7512,
+ "▁Chapter": 7513,
+ "▁gal": 7514,
+ "▁ice": 7515,
+ "Red": 7516,
+ "▁dal": 7517,
+ "▁principal": 7518,
+ "Msg": 7519,
+ "▁remains": 7520,
+ "нг": 7521,
+ "Title": 7522,
+ "Rel": 7523,
+ "Display": 7524,
+ "Non": 7525,
+ "▁definition": 7526,
+ "▁attr": 7527,
+ "▁signal": 7528,
+ "hl": 7529,
+ "▁sel": 7530,
+ "▁volume": 7531,
+ "▁cache": 7532,
+ "hens": 7533,
+ "▁wird": 7534,
+ "[\\": 7535,
+ "NOT": 7536,
+ "▁election": 7537,
+ "utt": 7538,
+ "▁Window": 7539,
+ "ental": 7540,
+ "ifest": 7541,
+ "xf": 7542,
+ "▁Ра": 7543,
+ "▁overall": 7544,
+ "blic": 7545,
+ "▁editor": 7546,
+ "aden": 7547,
+ "▁cart": 7548,
+ "Left": 7549,
+ "uls": 7550,
+ "bing": 7551,
+ "Right": 7552,
+ "▁sé": 7553,
+ "Sim": 7554,
+ "▁camera": 7555,
+ "▁fav": 7556,
+ "Decl": 7557,
+ "spring": 7558,
+ "▁errors": 7559,
+ "Tab": 7560,
+ "println": 7561,
+ "▁Bern": 7562,
+ "nab": 7563,
+ "▁Base": 7564,
+ "▁auth": 7565,
+ "▁apparent": 7566,
+ "▁presented": 7567,
+ "▁remained": 7568,
+ "▁wet": 7569,
+ "Enc": 7570,
+ "INFO": 7571,
+ "▁Sing": 7572,
+ "package": 7573,
+ ")));": 7574,
+ "▁Social": 7575,
+ "▁Mass": 7576,
+ "▁despite": 7577,
+ "▁mobile": 7578,
+ "▁labor": 7579,
+ "Go": 7580,
+ "▁esp": 7581,
+ "▁Table": 7582,
+ "▁expert": 7583,
+ "▁flex": 7584,
+ "▁profession": 7585,
+ "▁pil": 7586,
+ "Collection": 7587,
+ "LOCK": 7588,
+ "▁applied": 7589,
+ "aller": 7590,
+ "orph": 7591,
+ "ENSE": 7592,
+ "▁был": 7593,
+ "▁db": 7594,
+ "overline": 7595,
+ "▁Code": 7596,
+ "▁bytes": 7597,
+ "▁trouble": 7598,
+ "▁насе": 7599,
+ "DD": 7600,
+ "▁Year": 7601,
+ "mbox": 7602,
+ "▁keeping": 7603,
+ "▁kick": 7604,
+ "äng": 7605,
+ "▁corresponding": 7606,
+ "▁library": 7607,
+ "▁*/\r": 7608,
+ "callback": 7609,
+ "ums": 7610,
+ "▁json": 7611,
+ "▁Mount": 7612,
+ "▁Stand": 7613,
+ "IGHT": 7614,
+ "▁News": 7615,
+ "▁comments": 7616,
+ "returns": 7617,
+ "Cal": 7618,
+ "▁award": 7619,
+ "▁bought": 7620,
+ "includegraphics": 7621,
+ "▁ле": 7622,
+ "dot": 7623,
+ "ronic": 7624,
+ "▁extremely": 7625,
+ "▁minor": 7626,
+ "ifer": 7627,
+ "java": 7628,
+ "endar": 7629,
+ "layout": 7630,
+ "plies": 7631,
+ "▁buf": 7632,
+ "▁Island": 7633,
+ "▁About": 7634,
+ "▁west": 7635,
+ "▁Scott": 7636,
+ "ACT": 7637,
+ "Why": 7638,
+ "▁largest": 7639,
+ "▁container": 7640,
+ "▁temperature": 7641,
+ "▁£": 7642,
+ "▁reduce": 7643,
+ "▁foi": 7644,
+ "han": 7645,
+ "▁bod": 7646,
+ "▁Van": 7647,
+ "▁nullptr": 7648,
+ "▁dating": 7649,
+ "▁chain": 7650,
+ "Flags": 7651,
+ "iento": 7652,
+ "sort": 7653,
+ "▁fan": 7654,
+ "▁determine": 7655,
+ "▁wear": 7656,
+ "BE": 7657,
+ "▁appropriate": 7658,
+ "лся": 7659,
+ "тов": 7660,
+ "▁goals": 7661,
+ "▁Map": 7662,
+ "▁Sar": 7663,
+ "▁Option": 7664,
+ "▁hate": 7665,
+ "▁zijn": 7666,
+ ",-": 7667,
+ "▁implied": 7668,
+ "bits": 7669,
+ "▁Men": 7670,
+ "skip": 7671,
+ "▁Mond": 7672,
+ "▁Hon": 7673,
+ "▁prove": 7674,
+ "van": 7675,
+ "▁traff": 7676,
+ "▁intr": 7677,
+ "pic": 7678,
+ "▁dropped": 7679,
+ "▁werd": 7680,
+ "▁separate": 7681,
+ "isa": 7682,
+ "▁tab": 7683,
+ "tml": 7684,
+ "▁\"$": 7685,
+ "mutex": 7686,
+ "▁Pan": 7687,
+ "serve": 7688,
+ "▁hotel": 7689,
+ "▁Last": 7690,
+ "step": 7691,
+ "▁vir": 7692,
+ "Rule": 7693,
+ "istan": 7694,
+ "oting": 7695,
+ "arks": 7696,
+ "(__": 7697,
+ "▁els": 7698,
+ "Player": 7699,
+ "]]": 7700,
+ "вич": 7701,
+ "ych": 7702,
+ "exception": 7703,
+ "=\"../": 7704,
+ "▁imagine": 7705,
+ "\"},": 7706,
+ "icago": 7707,
+ "eler": 7708,
+ "▁vs": 7709,
+ "▁Africa": 7710,
+ "▁Business": 7711,
+ "ocks": 7712,
+ "▁prz": 7713,
+ "▁fucking": 7714,
+ "▁picked": 7715,
+ "▁ві": 7716,
+ "▁\",": 7717,
+ "▁bott": 7718,
+ "▁failure": 7719,
+ "[:": 7720,
+ "▁Gar": 7721,
+ "apes": 7722,
+ "uple": 7723,
+ "▁fer": 7724,
+ "▁purchase": 7725,
+ "▁пер": 7726,
+ "▁bird": 7727,
+ "Widget": 7728,
+ "▁Sunday": 7729,
+ "▁Amaz": 7730,
+ "▁consult": 7731,
+ "utsch": 7732,
+ "anto": 7733,
+ "Storage": 7734,
+ "▁header": 7735,
+ "ühr": 7736,
+ "▁Ha": 7737,
+ "▁Association": 7738,
+ "▁sight": 7739,
+ "Cell": 7740,
+ "▁profile": 7741,
+ "▁female": 7742,
+ "ån": 7743,
+ "▁wid": 7744,
+ "zn": 7745,
+ "Direct": 7746,
+ "▁stret": 7747,
+ "aat": 7748,
+ "▁patient": 7749,
+ "here": 7750,
+ "▁Atl": 7751,
+ "inet": 7752,
+ "Definition": 7753,
+ "imary": 7754,
+ "Policy": 7755,
+ "▁dut": 7756,
+ "▁majority": 7757,
+ "сі": 7758,
+ "▁Project": 7759,
+ "ById": 7760,
+ "▁believed": 7761,
+ "▁Music": 7762,
+ "зы": 7763,
+ "anti": 7764,
+ "▁oder": 7765,
+ "Channel": 7766,
+ "▁sle": 7767,
+ "▁sequence": 7768,
+ "▁pieces": 7769,
+ "▁kne": 7770,
+ "▁absolutely": 7771,
+ "▁Philip": 7772,
+ "abilities": 7773,
+ "Que": 7774,
+ "▁Kar": 7775,
+ "Execut": 7776,
+ "▁Devel": 7777,
+ "▁electric": 7778,
+ "full": 7779,
+ "rolled": 7780,
+ "Dom": 7781,
+ "▁river": 7782,
+ "▁healthy": 7783,
+ "▁extern": 7784,
+ "fit": 7785,
+ "▁coach": 7786,
+ "▁Kr": 7787,
+ "asta": 7788,
+ "Compat": 7789,
+ "▁exit": 7790,
+ "▁Const": 7791,
+ "after": 7792,
+ "▁shoulder": 7793,
+ "▁jobs": 7794,
+ "zone": 7795,
+ "▁sale": 7796,
+ "ixel": 7797,
+ "▁determined": 7798,
+ "▁anyway": 7799,
+ "orf": 7800,
+ "▁Ger": 7801,
+ "allel": 7802,
+ "rees": 7803,
+ "asm": 7804,
+ "ims": 7805,
+ "▁records": 7806,
+ "▁corpor": 7807,
+ "▁intellig": 7808,
+ "▁Prem": 7809,
+ "▁driving": 7810,
+ "▁marriage": 7811,
+ "▁Thank": 7812,
+ "▁willing": 7813,
+ "MC": 7814,
+ "Fields": 7815,
+ "Items": 7816,
+ "▁micro": 7817,
+ "▁lift": 7818,
+ "irection": 7819,
+ "Account": 7820,
+ "▁architect": 7821,
+ "track": 7822,
+ "▁prin": 7823,
+ "PA": 7824,
+ "▁runs": 7825,
+ "▁Texas": 7826,
+ "isher": 7827,
+ "ensure": 7828,
+ "▁Both": 7829,
+ "ком": 7830,
+ "▁Color": 7831,
+ "Register": 7832,
+ "▁Joe": 7833,
+ "geq": 7834,
+ "lets": 7835,
+ "ading": 7836,
+ "▁army": 7837,
+ "▁Bank": 7838,
+ "otic": 7839,
+ "Product": 7840,
+ "import": 7841,
+ "▁Wed": 7842,
+ "▁cry": 7843,
+ "grade": 7844,
+ "dig": 7845,
+ "gal": 7846,
+ "кла": 7847,
+ "ested": 7848,
+ "ões": 7849,
+ "gers": 7850,
+ "ologie": 7851,
+ "том": 7852,
+ "razy": 7853,
+ "▁dinner": 7854,
+ "QU": 7855,
+ "▁fingers": 7856,
+ "ULE": 7857,
+ "claim": 7858,
+ "▁advantage": 7859,
+ "▁variable": 7860,
+ "▁medic": 7861,
+ "▁male": 7862,
+ "▁circum": 7863,
+ "▁мі": 7864,
+ "▁internet": 7865,
+ "WN": 7866,
+ "▁lab": 7867,
+ "azine": 7868,
+ "чно": 7869,
+ "▁loop": 7870,
+ "▁pred": 7871,
+ "▁consequ": 7872,
+ "▁balance": 7873,
+ "fortun": 7874,
+ "▁gift": 7875,
+ "▁drug": 7876,
+ "▁cash": 7877,
+ "ских": 7878,
+ "rg": 7879,
+ "istribut": 7880,
+ "▁highest": 7881,
+ "ême": 7882,
+ "emph": 7883,
+ "emon": 7884,
+ "▁performed": 7885,
+ "cut": 7886,
+ "▁closer": 7887,
+ "▁becoming": 7888,
+ "▁\"\",": 7889,
+ "star": 7890,
+ "pub": 7891,
+ "▁prepar": 7892,
+ "▁vote": 7893,
+ "ilde": 7894,
+ "▁impress": 7895,
+ "▁employees": 7896,
+ "▁einen": 7897,
+ "▁smooth": 7898,
+ "▁snow": 7899,
+ "▁purs": 7900,
+ "▁voc": 7901,
+ "▁Microsoft": 7902,
+ "PU": 7903,
+ "▁income": 7904,
+ "inos": 7905,
+ "▁operator": 7906,
+ "▁equival": 7907,
+ "▁password": 7908,
+ "ción": 7909,
+ "success": 7910,
+ "▁emp": 7911,
+ "HOUT": 7912,
+ "▁ca": 7913,
+ "flag": 7914,
+ "illy": 7915,
+ "crete": 7916,
+ "frak": 7917,
+ "▁hidden": 7918,
+ "▁\"%": 7919,
+ "ERN": 7920,
+ "рова": 7921,
+ "▁UN": 7922,
+ "roke": 7923,
+ "miss": 7924,
+ "▁split": 7925,
+ "Reference": 7926,
+ ")$,": 7927,
+ "eper": 7928,
+ "▁NO": 7929,
+ "▁square": 7930,
+ "sur": 7931,
+ "чен": 7932,
+ "ester": 7933,
+ "нь": 7934,
+ "}\"": 7935,
+ "rawn": 7936,
+ "rule": 7937,
+ "▁audience": 7938,
+ "este": 7939,
+ "ems": 7940,
+ "ICENSE": 7941,
+ "▁Ill": 7942,
+ "USE": 7943,
+ "▁bon": 7944,
+ "bur": 7945,
+ "▁sick": 7946,
+ "▁horse": 7947,
+ "▁Educ": 7948,
+ "▁benefit": 7949,
+ "▁cro": 7950,
+ "Application": 7951,
+ "▁corre": 7952,
+ "▁guarante": 7953,
+ "DATA": 7954,
+ "▁explained": 7955,
+ "TX": 7956,
+ "▁ont": 7957,
+ "▁Flor": 7958,
+ "▁reports": 7959,
+ "▁Real": 7960,
+ "uded": 7961,
+ "lean": 7962,
+ "▁citiz": 7963,
+ "▁decide": 7964,
+ "WS": 7965,
+ "▁domain": 7966,
+ "▁reflect": 7967,
+ "▁minimum": 7968,
+ "▁legs": 7969,
+ "▁smiled": 7970,
+ "fi": 7971,
+ "▁pure": 7972,
+ "▁Custom": 7973,
+ "▁essential": 7974,
+ "▁observed": 7975,
+ "Bytes": 7976,
+ "▁ctx": 7977,
+ "▁rates": 7978,
+ "mbre": 7979,
+ "▁worry": 7980,
+ ")^": 7981,
+ "▁Research": 7982,
+ "Root": 7983,
+ "Windows": 7984,
+ "ulture": 7985,
+ "▁relative": 7986,
+ "▁seu": 7987,
+ "▁nie": 7988,
+ "▁shook": 7989,
+ "iously": 7990,
+ "▁advert": 7991,
+ "See": 7992,
+ "▁Central": 7993,
+ "▁batter": 7994,
+ "▁signed": 7995,
+ "TS": 7996,
+ "oni": 7997,
+ "▁prepared": 7998,
+ "gate": 7999,
+ "▁Care": 8000,
+ "care": 8001,
+ "▁supply": 8002,
+ "Exp": 8003,
+ "bolds": 8004,
+ "▁trail": 8005,
+ "▁fish": 8006,
+ "▁units": 8007,
+ "venue": 8008,
+ "хи": 8009,
+ "▁Wood": 8010,
+ "▁category": 8011,
+ "▁ble": 8012,
+ "▁override": 8013,
+ "foo": 8014,
+ "▁influence": 8015,
+ "enth": 8016,
+ "rij": 8017,
+ "▁adapt": 8018,
+ "icians": 8019,
+ "deleted": 8020,
+ "▁vision": 8021,
+ "ctrl": 8022,
+ "Lambda": 8023,
+ "tp": 8024,
+ "mond": 8025,
+ "aturday": 8026,
+ "normal": 8027,
+ "▁thousand": 8028,
+ "▁Profess": 8029,
+ "▁disease": 8030,
+ "clip": 8031,
+ "▁гра": 8032,
+ "boldsymbol": 8033,
+ "OB": 8034,
+ "▁challenge": 8035,
+ "▁motion": 8036,
+ "▁whis": 8037,
+ "▁leaders": 8038,
+ "▁colon": 8039,
+ "▁suit": 8040,
+ "mid": 8041,
+ "ampion": 8042,
+ "ág": 8043,
+ "▁views": 8044,
+ "▁appears": 8045,
+ "ancel": 8046,
+ "▁zwe": 8047,
+ "IST": 8048,
+ "▁leaves": 8049,
+ "▁enh": 8050,
+ "Active": 8051,
+ "▁dit": 8052,
+ "ificate": 8053,
+ "matrix": 8054,
+ "Expression": 8055,
+ "Reader": 8056,
+ "▁mental": 8057,
+ "embre": 8058,
+ "▁decor": 8059,
+ "arts": 8060,
+ "▁vent": 8061,
+ "nel": 8062,
+ "lines": 8063,
+ "upid": 8064,
+ "erved": 8065,
+ "▁boys": 8066,
+ "аль": 8067,
+ "MOD": 8068,
+ "isl": 8069,
+ "▁[[": 8070,
+ "phy": 8071,
+ "▁..": 8072,
+ "▁agent": 8073,
+ "▁Services": 8074,
+ "▁iron": 8075,
+ "▁components": 8076,
+ "▁fre": 8077,
+ "ictionary": 8078,
+ "▁tests": 8079,
+ ".~\\": 8080,
+ "obs": 8081,
+ "▁Ми": 8082,
+ "▁обла": 8083,
+ "▁assess": 8084,
+ "▁Friday": 8085,
+ "▁weather": 8086,
+ "kg": 8087,
+ "стра": 8088,
+ ".}": 8089,
+ "endant": 8090,
+ "anna": 8091,
+ "▁Japanese": 8092,
+ "cmp": 8093,
+ "▁Army": 8094,
+ "onym": 8095,
+ "▁relax": 8096,
+ "dates": 8097,
+ "▁Russian": 8098,
+ "▁excellent": 8099,
+ "'))": 8100,
+ "ILITY": 8101,
+ "▁showing": 8102,
+ "▁Daniel": 8103,
+ "мя": 8104,
+ "▁Main": 8105,
+ "Phi": 8106,
+ "▁Rock": 8107,
+ "▁grew": 8108,
+ "▁yield": 8109,
+ "ière": 8110,
+ "seg": 8111,
+ "}}$": 8112,
+ "▁strict": 8113,
+ "▁vehicle": 8114,
+ "UD": 8115,
+ "AF": 8116,
+ "Sw": 8117,
+ "▁chest": 8118,
+ "▁officer": 8119,
+ "▁ear": 8120,
+ "HER": 8121,
+ "noon": 8122,
+ "▁journey": 8123,
+ "NT": 8124,
+ "▁divers": 8125,
+ "▁Finally": 8126,
+ "Found": 8127,
+ "▁AS": 8128,
+ "rik": 8129,
+ "▁constr": 8130,
+ "▁sust": 8131,
+ "account": 8132,
+ "▁walls": 8133,
+ "▁entirely": 8134,
+ "Iter": 8135,
+ "cha": 8136,
+ "ishes": 8137,
+ "IVE": 8138,
+ "▁prime": 8139,
+ "▁…": 8140,
+ "xe": 8141,
+ "uten": 8142,
+ "arse": 8143,
+ "▁Pa": 8144,
+ "pute": 8145,
+ "äl": 8146,
+ "▁protection": 8147,
+ "▁keys": 8148,
+ "May": 8149,
+ "Byte": 8150,
+ "Const": 8151,
+ "BL": 8152,
+ "▁пе": 8153,
+ "▁spl": 8154,
+ "▁clothes": 8155,
+ "ashed": 8156,
+ "Mark": 8157,
+ "ème": 8158,
+ "▁fait": 8159,
+ "▁introduced": 8160,
+ "unlock": 8161,
+ "▁Instead": 8162,
+ "ansion": 8163,
+ "region": 8164,
+ "▁Americans": 8165,
+ "▁indeed": 8166,
+ "widget": 8167,
+ "▁realize": 8168,
+ "▁fro": 8169,
+ "BIT": 8170,
+ "▁React": 8171,
+ "READ": 8172,
+ "asket": 8173,
+ "never": 8174,
+ "▁poll": 8175,
+ "icol": 8176,
+ "▁prev": 8177,
+ "▁hyp": 8178,
+ "▁Fur": 8179,
+ "cloud": 8180,
+ "▁Lee": 8181,
+ "pling": 8182,
+ "▁Child": 8183,
+ "▁ideal": 8184,
+ "Selector": 8185,
+ "STATUS": 8186,
+ "ucture": 8187,
+ "▁wine": 8188,
+ "▁possibly": 8189,
+ "▁putting": 8190,
+ "▁riv": 8191,
+ "▁wearing": 8192,
+ "▁Source": 8193,
+ "▁Cas": 8194,
+ "Changed": 8195,
+ "▁thanks": 8196,
+ "TIME": 8197,
+ "▁sport": 8198,
+ "▁Award": 8199,
+ "▁glad": 8200,
+ "▁Pass": 8201,
+ "▁Pos": 8202,
+ "sche": 8203,
+ "▁CD": 8204,
+ "▁afford": 8205,
+ "▁Women": 8206,
+ "▁District": 8207,
+ "▁identity": 8208,
+ "▁parties": 8209,
+ ":%": 8210,
+ "▁drag": 8211,
+ "▁mai": 8212,
+ "!(": 8213,
+ "langle": 8214,
+ "▁knowing": 8215,
+ "Project": 8216,
+ "▁regarding": 8217,
+ "▁Joseph": 8218,
+ "ге": 8219,
+ "▁Dar": 8220,
+ "▁Hor": 8221,
+ "▁animals": 8222,
+ "▁extension": 8223,
+ "ская": 8224,
+ "▁Han": 8225,
+ "btn": 8226,
+ "aciones": 8227,
+ "▁familiar": 8228,
+ "holder": 8229,
+ ":\r": 8230,
+ "stood": 8231,
+ "▁liked": 8232,
+ "CODE": 8233,
+ "▁enable": 8234,
+ "▁ped": 8235,
+ "iti": 8236,
+ "hab": 8237,
+ "DIR": 8238,
+ "▁beat": 8239,
+ "ті": 8240,
+ "▁Minister": 8241,
+ "▁py": 8242,
+ "Pat": 8243,
+ "▁exhib": 8244,
+ "▁Build": 8245,
+ "▁Field": 8246,
+ "ician": 8247,
+ "▁collabor": 8248,
+ "▁quarter": 8249,
+ "▁False": 8250,
+ "km": 8251,
+ "▁virtual": 8252,
+ "owa": 8253,
+ "▁Jon": 8254,
+ "amin": 8255,
+ "uen": 8256,
+ "▁ин": 8257,
+ "imation": 8258,
+ "oving": 8259,
+ "▁testing": 8260,
+ "sect": 8261,
+ "ITION": 8262,
+ "!\\": 8263,
+ "apy": 8264,
+ "▁transition": 8265,
+ "ository": 8266,
+ "ODO": 8267,
+ "PD": 8268,
+ "né": 8269,
+ "▁generate": 8270,
+ "▁native": 8271,
+ "▁('": 8272,
+ "▁elle": 8273,
+ "RR": 8274,
+ "▁hun": 8275,
+ "_->": 8276,
+ "agnost": 8277,
+ "▁proposed": 8278,
+ "▁Game": 8279,
+ "▁efforts": 8280,
+ "вя": 8281,
+ "tc": 8282,
+ "ск": 8283,
+ "▁intent": 8284,
+ "▁Bre": 8285,
+ "isc": 8286,
+ "▁protest": 8287,
+ "▁holds": 8288,
+ "ometry": 8289,
+ "▁Have": 8290,
+ "▁detail": 8291,
+ "▁WITHOUT": 8292,
+ "yer": 8293,
+ "▁Kon": 8294,
+ "▁noticed": 8295,
+ "▁requirements": 8296,
+ "DEBUG": 8297,
+ "kins": 8298,
+ "▁Span": 8299,
+ "▁cars": 8300,
+ "meta": 8301,
+ "▁kil": 8302,
+ "▁Bron": 8303,
+ "▁experienced": 8304,
+ "▁remind": 8305,
+ "ourse": 8306,
+ "▁Western": 8307,
+ "tered": 8308,
+ "▁devices": 8309,
+ "▁pictures": 8310,
+ "▁tut": 8311,
+ "\"`": 8312,
+ "▁impossible": 8313,
+ "▁rail": 8314,
+ "▁feels": 8315,
+ "icas": 8316,
+ "illing": 8317,
+ "▁accident": 8318,
+ "▁'@": 8319,
+ "________": 8320,
+ "▁notes": 8321,
+ "oman": 8322,
+ "Parser": 8323,
+ "▁discovered": 8324,
+ "▁Roman": 8325,
+ "▁budget": 8326,
+ "▁guide": 8327,
+ "king": 8328,
+ "▁incred": 8329,
+ "olar": 8330,
+ "enden": 8331,
+ "Desc": 8332,
+ "▁wave": 8333,
+ "бли": 8334,
+ "igt": 8335,
+ "▁restrict": 8336,
+ "▁Ret": 8337,
+ "▁mac": 8338,
+ "ур": 8339,
+ "BS": 8340,
+ "ís": 8341,
+ "▁generation": 8342,
+ "dem": 8343,
+ "alo": 8344,
+ "бра": 8345,
+ "▁ordered": 8346,
+ "drop": 8347,
+ "▁pp": 8348,
+ "▁Review": 8349,
+ "▁literally": 8350,
+ "▁Sir": 8351,
+ "▁Yeah": 8352,
+ "▁density": 8353,
+ "riz": 8354,
+ "inde": 8355,
+ "▁gain": 8356,
+ "▁panel": 8357,
+ "jet": 8358,
+ "▁Times": 8359,
+ "▁nella": 8360,
+ "▁previously": 8361,
+ "points": 8362,
+ "Send": 8363,
+ "▁Brown": 8364,
+ "each": 8365,
+ "▁trigger": 8366,
+ "ometimes": 8367,
+ "icos": 8368,
+ "GR": 8369,
+ "Panel": 8370,
+ "ogen": 8371,
+ "▁cm": 8372,
+ "ructions": 8373,
+ "▁kiss": 8374,
+ "▁solo": 8375,
+ "▁famous": 8376,
+ "ran": 8377,
+ "про": 8378,
+ "▁thro": 8379,
+ "Graph": 8380,
+ "imit": 8381,
+ "▁Value": 8382,
+ "▁starts": 8383,
+ "ipeline": 8384,
+ "hd": 8385,
+ "TC": 8386,
+ "▁discussion": 8387,
+ "▁truck": 8388,
+ "aka": 8389,
+ "Only": 8390,
+ "▁Equ": 8391,
+ "▁kö": 8392,
+ "▁Bes": 8393,
+ "▁critic": 8394,
+ "▁propos": 8395,
+ "▁batt": 8396,
+ "▁Section": 8397,
+ "Show": 8398,
+ "gp": 8399,
+ "STATE": 8400,
+ "POST": 8401,
+ "▁Nord": 8402,
+ "▁innov": 8403,
+ "▁crim": 8404,
+ "axis": 8405,
+ "▁Turn": 8406,
+ "conn": 8407,
+ "Runtime": 8408,
+ "▁remaining": 8409,
+ "oston": 8410,
+ "▁Э": 8411,
+ "▁windows": 8412,
+ "▁Royal": 8413,
+ "▁vide": 8414,
+ "PP": 8415,
+ "chron": 8416,
+ "▁san": 8417,
+ "▁rise": 8418,
+ "▁delle": 8419,
+ "▁Dur": 8420,
+ "▁rapid": 8421,
+ "cert": 8422,
+ "LA": 8423,
+ "edge": 8424,
+ "▁\\]": 8425,
+ "▁entered": 8426,
+ "▁laws": 8427,
+ "▁photo": 8428,
+ "▁applications": 8429,
+ "▁Berlin": 8430,
+ "▁arrest": 8431,
+ "▁federal": 8432,
+ "▁Russia": 8433,
+ "▁usual": 8434,
+ "▁raw": 8435,
+ "▁più": 8436,
+ "être": 8437,
+ "JSON": 8438,
+ "SION": 8439,
+ "xture": 8440,
+ "istent": 8441,
+ "▁Power": 8442,
+ "Bit": 8443,
+ "▁capacity": 8444,
+ "▁cards": 8445,
+ "UID": 8446,
+ "iments": 8447,
+ "▁dar": 8448,
+ "▁Chicago": 8449,
+ "▁comfortable": 8450,
+ "tip": 8451,
+ "bas": 8452,
+ "▁mu": 8453,
+ "▁enemy": 8454,
+ "yan": 8455,
+ "▁фи": 8456,
+ "▁updated": 8457,
+ "ango": 8458,
+ "Ev": 8459,
+ "Effect": 8460,
+ "osing": 8461,
+ "rence": 8462,
+ "▁Congress": 8463,
+ "▁defe": 8464,
+ "▁ip": 8465,
+ "▁tout": 8466,
+ "▁freedom": 8467,
+ "▁ao": 8468,
+ "▁Therefore": 8469,
+ "Edit": 8470,
+ "▁Virgin": 8471,
+ "REE": 8472,
+ "argo": 8473,
+ "▁Dam": 8474,
+ "▁traffic": 8475,
+ "ños": 8476,
+ "▁alle": 8477,
+ "▁depth": 8478,
+ "Now": 8479,
+ "▁sides": 8480,
+ "▁годи": 8481,
+ "Descriptor": 8482,
+ "▁artikel": 8483,
+ "▁narrow": 8484,
+ "___": 8485,
+ "kw": 8486,
+ "uto": 8487,
+ "▁Facebook": 8488,
+ "tegr": 8489,
+ "boolean": 8490,
+ "nik": 8491,
+ "bd": 8492,
+ "Track": 8493,
+ "▁gran": 8494,
+ "reshold": 8495,
+ "вет": 8496,
+ "wrap": 8497,
+ "▁noise": 8498,
+ "igu": 8499,
+ "▁Bon": 8500,
+ "▁wy": 8501,
+ "linux": 8502,
+ "cks": 8503,
+ "▁fans": 8504,
+ "▁mach": 8505,
+ "▁prices": 8506,
+ "év": 8507,
+ "outs": 8508,
+ "standing": 8509,
+ "▁categ": 8510,
+ ";\\": 8511,
+ "▁decre": 8512,
+ "▁Saturday": 8513,
+ "▁menu": 8514,
+ "▁Nov": 8515,
+ "▁Yet": 8516,
+ "▁так": 8517,
+ "liche": 8518,
+ "▁Academ": 8519,
+ "▁communication": 8520,
+ "using": 8521,
+ "▁Society": 8522,
+ "▁nuc": 8523,
+ "pective": 8524,
+ "orial": 8525,
+ "▁afraid": 8526,
+ "▁animal": 8527,
+ "▁turning": 8528,
+ "dst": 8529,
+ "mathfrak": 8530,
+ "lers": 8531,
+ "▁lots": 8532,
+ "▁á": 8533,
+ "▁Tra": 8534,
+ "np": 8535,
+ "▁rose": 8536,
+ "▁GL": 8537,
+ "▁helping": 8538,
+ "▁winter": 8539,
+ "▁ком": 8540,
+ "Mock": 8541,
+ "▁investment": 8542,
+ "Use": 8543,
+ "▁Canad": 8544,
+ "нд": 8545,
+ "Copy": 8546,
+ "▁fly": 8547,
+ "SER": 8548,
+ "▁Far": 8549,
+ "▁Ros": 8550,
+ "amil": 8551,
+ "▁fighting": 8552,
+ "▁religious": 8553,
+ "super": 8554,
+ "screen": 8555,
+ "▁furn": 8556,
+ "▁surprised": 8557,
+ "▁replied": 8558,
+ "Activity": 8559,
+ "▁Down": 8560,
+ "▁insert": 8561,
+ "▁Olymp": 8562,
+ "▁pointed": 8563,
+ "▁Card": 8564,
+ "driver": 8565,
+ "▁Da": 8566,
+ "!--": 8567,
+ "roud": 8568,
+ "undo": 8569,
+ "▁messages": 8570,
+ "▁Point": 8571,
+ "VM": 8572,
+ "▁plane": 8573,
+ "xc": 8574,
+ "▁television": 8575,
+ "ён": 8576,
+ "▁thousands": 8577,
+ "▁cris": 8578,
+ "▁delay": 8579,
+ "▁Next": 8580,
+ "▁nombre": 8581,
+ "▁tu": 8582,
+ "▁skip": 8583,
+ "road": 8584,
+ "istration": 8585,
+ "▁tur": 8586,
+ "▁Develop": 8587,
+ "▁Па": 8588,
+ "▁дру": 8589,
+ "▁wonderful": 8590,
+ ">&": 8591,
+ "▁Liber": 8592,
+ "▁scope": 8593,
+ "▁manage": 8594,
+ "▁dass": 8595,
+ "▁recall": 8596,
+ "PM": 8597,
+ "▁relevant": 8598,
+ "▁Earth": 8599,
+ "▁как": 8600,
+ "▁apr": 8601,
+ "▁ASS": 8602,
+ "ién": 8603,
+ "▁SH": 8604,
+ "oom": 8605,
+ "itet": 8606,
+ "none": 8607,
+ "asi": 8608,
+ "▁motor": 8609,
+ "▁Show": 8610,
+ "nb": 8611,
+ "▁factors": 8612,
+ "▁forest": 8613,
+ "▁вре": 8614,
+ "thm": 8615,
+ "▁municip": 8616,
+ "▁turns": 8617,
+ "▁Division": 8618,
+ "EC": 8619,
+ "▁disappe": 8620,
+ "structor": 8621,
+ "▁somewhere": 8622,
+ "▁African": 8623,
+ "▁Institute": 8624,
+ "Grid": 8625,
+ "▁teacher": 8626,
+ "uries": 8627,
+ "▁respectively": 8628,
+ "▁SD": 8629,
+ "▁alive": 8630,
+ "▁pou": 8631,
+ "▁Water": 8632,
+ "фе": 8633,
+ "▁changing": 8634,
+ "▁afternoon": 8635,
+ "▁orders": 8636,
+ "Ret": 8637,
+ "Pointer": 8638,
+ "▁sav": 8639,
+ "erg": 8640,
+ "oked": 8641,
+ "essions": 8642,
+ "▁Fire": 8643,
+ "aret": 8644,
+ "imm": 8645,
+ "▁desire": 8646,
+ "▁що": 8647,
+ "▁Design": 8648,
+ "uture": 8649,
+ "▁Office": 8650,
+ "▁cmd": 8651,
+ "▁eating": 8652,
+ "Network": 8653,
+ "▁rough": 8654,
+ "operator": 8655,
+ "IGN": 8656,
+ "▁sports": 8657,
+ "▁weren": 8658,
+ "▁noted": 8659,
+ "▁twice": 8660,
+ "III": 8661,
+ "▁anx": 8662,
+ "▁elim": 8663,
+ "▁ав": 8664,
+ "▁io": 8665,
+ "▁speech": 8666,
+ "▁condu": 8667,
+ "elles": 8668,
+ "idade": 8669,
+ "▁advance": 8670,
+ "RI": 8671,
+ "oca": 8672,
+ "/\\": 8673,
+ "apshot": 8674,
+ "▁tail": 8675,
+ "models": 8676,
+ "ogy": 8677,
+ "▁Jeff": 8678,
+ "iration": 8679,
+ "▁Kore": 8680,
+ "▁leads": 8681,
+ "bat": 8682,
+ "Adapter": 8683,
+ "category": 8684,
+ "angular": 8685,
+ "▁saved": 8686,
+ "▁uniform": 8687,
+ "▁né": 8688,
+ "▁businesses": 8689,
+ "Hist": 8690,
+ "▁ар": 8691,
+ "domain": 8692,
+ "▁Si": 8693,
+ "raise": 8694,
+ "▁warn": 8695,
+ "hetic": 8696,
+ "▁Gro": 8697,
+ ")).": 8698,
+ "}>": 8699,
+ "зе": 8700,
+ "▁Amazon": 8701,
+ "▁Organ": 8702,
+ "▁Lake": 8703,
+ "▁agreement": 8704,
+ "xa": 8705,
+ "▁perman": 8706,
+ "▁containing": 8707,
+ "▁strange": 8708,
+ "сті": 8709,
+ "▁stupid": 8710,
+ "▁speaking": 8711,
+ "▁Internet": 8712,
+ "prefix": 8713,
+ "esc": 8714,
+ "Assert": 8715,
+ "prote": 8716,
+ "▁manner": 8717,
+ "▁Sz": 8718,
+ "unte": 8719,
+ "iot": 8720,
+ "Profile": 8721,
+ "oven": 8722,
+ "▁formed": 8723,
+ "▁lit": 8724,
+ "▁economy": 8725,
+ "▁cz": 8726,
+ "wid": 8727,
+ "REQ": 8728,
+ "▁chosen": 8729,
+ "▁Produ": 8730,
+ "oster": 8731,
+ "stances": 8732,
+ "awa": 8733,
+ "▁Ren": 8734,
+ "▁confirm": 8735,
+ "▁Бо": 8736,
+ "▁billion": 8737,
+ "▁déc": 8738,
+ "ých": 8739,
+ "▁illustr": 8740,
+ "TIES": 8741,
+ "▁Pub": 8742,
+ "▁ban": 8743,
+ "aded": 8744,
+ "ahn": 8745,
+ "▁Cath": 8746,
+ "nonumber": 8747,
+ "▁worst": 8748,
+ "▁Ме": 8749,
+ "▁suggested": 8750,
+ "stats": 8751,
+ "▁cant": 8752,
+ "▁align": 8753,
+ "kappa": 8754,
+ "▁hen": 8755,
+ "▁initi": 8756,
+ "'])": 8757,
+ "BI": 8758,
+ "▁garden": 8759,
+ "▁secure": 8760,
+ "▁\\[": 8761,
+ "handler": 8762,
+ "elli": 8763,
+ "ldots": 8764,
+ "secut": 8765,
+ "▁extended": 8766,
+ "}-": 8767,
+ "anie": 8768,
+ "▁Find": 8769,
+ "▁Museum": 8770,
+ "▁Conne": 8771,
+ "yy": 8772,
+ "▁passion": 8773,
+ "akers": 8774,
+ "ahr": 8775,
+ "ologies": 8776,
+ "▁equation": 8777,
+ "▁occasion": 8778,
+ "Let": 8779,
+ "']['": 8780,
+ "Print": 8781,
+ "anes": 8782,
+ "iente": 8783,
+ "▁Today": 8784,
+ "LECT": 8785,
+ "▁Af": 8786,
+ ",,": 8787,
+ "▁Та": 8788,
+ "▁```": 8789,
+ "even": 8790,
+ "sin": 8791,
+ "urer": 8792,
+ "▁°": 8793,
+ "otimes": 8794,
+ "▁IO": 8795,
+ "▁poet": 8796,
+ "()));": 8797,
+ "▁−": 8798,
+ "▁adopt": 8799,
+ "phere": 8800,
+ "#[": 8801,
+ "▁centre": 8802,
+ "oves": 8803,
+ "▁ans": 8804,
+ "dp": 8805,
+ "▁Kir": 8806,
+ "▁applicable": 8807,
+ "fp": 8808,
+ "▁visual": 8809,
+ "▁okay": 8810,
+ "oro": 8811,
+ "▁opportunities": 8812,
+ "Repository": 8813,
+ "▁ll": 8814,
+ "▁Rod": 8815,
+ "▁shel": 8816,
+ "▁launch": 8817,
+ "▁conven": 8818,
+ "▁Spe": 8819,
+ "Amer": 8820,
+ "▁cette": 8821,
+ "Cond": 8822,
+ "dep": 8823,
+ "Own": 8824,
+ "▁hook": 8825,
+ "▁dict": 8826,
+ "▁Those": 8827,
+ "▁fellow": 8828,
+ "▁philosoph": 8829,
+ "vin": 8830,
+ "ferences": 8831,
+ "hav": 8832,
+ "▁adding": 8833,
+ "iverse": 8834,
+ "game": 8835,
+ "▁Blue": 8836,
+ "▁clin": 8837,
+ "note": 8838,
+ "▁Ram": 8839,
+ "мер": 8840,
+ "covery": 8841,
+ "ña": 8842,
+ "▁би": 8843,
+ "▁fashion": 8844,
+ "▁broke": 8845,
+ "▁'\\": 8846,
+ "▁reader": 8847,
+ "ное": 8848,
+ "ности": 8849,
+ "▁payment": 8850,
+ "▁Lic": 8851,
+ "▁lips": 8852,
+ "▁academ": 8853,
+ "▁Mot": 8854,
+ "ells": 8855,
+ "CHECK": 8856,
+ "▁ру": 8857,
+ "▁MS": 8858,
+ "Editor": 8859,
+ "▁zone": 8860,
+ "iture": 8861,
+ "▁IT": 8862,
+ "runtime": 8863,
+ "▁proceed": 8864,
+ "лов": 8865,
+ "▁Maria": 8866,
+ "olver": 8867,
+ "▁Thanks": 8868,
+ "▁shouldn": 8869,
+ "▁Joh": 8870,
+ "▁Model": 8871,
+ "▁Sov": 8872,
+ "!'": 8873,
+ "Di": 8874,
+ "▁cancer": 8875,
+ "Ident": 8876,
+ "▁exchange": 8877,
+ "iller": 8878,
+ "inf": 8879,
+ "LEN": 8880,
+ "(){": 8881,
+ "aga": 8882,
+ "\"],": 8883,
+ "uh": 8884,
+ "▁Ken": 8885,
+ "▁photos": 8886,
+ "▁tiny": 8887,
+ "▁gent": 8888,
+ "ül": 8889,
+ "▁Take": 8890,
+ "idel": 8891,
+ "outing": 8892,
+ "Internal": 8893,
+ "▁cells": 8894,
+ "ним": 8895,
+ "hard": 8896,
+ "▁Town": 8897,
+ "obe": 8898,
+ "plex": 8899,
+ "тер": 8900,
+ "tons": 8901,
+ "▁concentr": 8902,
+ "mock": 8903,
+ "vc": 8904,
+ "áz": 8905,
+ "▁Championship": 8906,
+ "▁бе": 8907,
+ "??": 8908,
+ "éri": 8909,
+ "aly": 8910,
+ "▁Ц": 8911,
+ "ierte": 8912,
+ "▁totally": 8913,
+ "▁Auf": 8914,
+ "▁ourselves": 8915,
+ "▁Self": 8916,
+ "Forms": 8917,
+ "ighter": 8918,
+ "▁island": 8919,
+ "fmt": 8920,
+ "▁rc": 8921,
+ "▁tells": 8922,
+ "BB": 8923,
+ "dit": 8924,
+ "▁variables": 8925,
+ "▁intended": 8926,
+ "izont": 8927,
+ "▁plays": 8928,
+ "dam": 8929,
+ "seq": 8930,
+ "▁Sup": 8931,
+ "▁cultural": 8932,
+ "▁scream": 8933,
+ "__,": 8934,
+ "cipl": 8935,
+ "Timeout": 8936,
+ "▁ж": 8937,
+ "orte": 8938,
+ "▁replaced": 8939,
+ "EM": 8940,
+ "▁abandon": 8941,
+ "▁Special": 8942,
+ "ellen": 8943,
+ "▁Bru": 8944,
+ "irmed": 8945,
+ "Te": 8946,
+ "olt": 8947,
+ "ju": 8948,
+ "Argument": 8949,
+ "▁neut": 8950,
+ "scape": 8951,
+ "▁Ray": 8952,
+ "▁Polit": 8953,
+ "▁crowd": 8954,
+ "▁Windows": 8955,
+ "iego": 8956,
+ "▁escape": 8957,
+ "▁Apache": 8958,
+ "sync": 8959,
+ "eben": 8960,
+ "ifies": 8961,
+ "ether": 8962,
+ "Meta": 8963,
+ "▁biggest": 8964,
+ "Game": 8965,
+ "▁transaction": 8966,
+ "Env": 8967,
+ "▁Мо": 8968,
+ "▁plenty": 8969,
+ "▁mel": 8970,
+ "пре": 8971,
+ "▁motiv": 8972,
+ "▁ор": 8973,
+ "organ": 8974,
+ "▁mock": 8975,
+ "▁$_": 8976,
+ "ене": 8977,
+ "▁Number": 8978,
+ "cknow": 8979,
+ "▁Update": 8980,
+ "zero": 8981,
+ "▁surprise": 8982,
+ "cean": 8983,
+ "pdf": 8984,
+ "Global": 8985,
+ "▁attend": 8986,
+ "▁fond": 8987,
+ "▁understood": 8988,
+ "Nav": 8989,
+ "▁Mic": 8990,
+ "=$": 8991,
+ "oking": 8992,
+ "▁Stadium": 8993,
+ "Close": 8994,
+ "▁competition": 8995,
+ "▁soldiers": 8996,
+ "▁OP": 8997,
+ "agne": 8998,
+ "▁Anton": 8999,
+ "Main": 9000,
+ "ák": 9001,
+ "▁#[": 9002,
+ "▁Commit": 9003,
+ "pyx": 9004,
+ "▁east": 9005,
+ "▁Order": 9006,
+ "Float": 9007,
+ "▁accepted": 9008,
+ "▁monitor": 9009,
+ "▁pad": 9010,
+ "onic": 9011,
+ "▁pushed": 9012,
+ "▁replace": 9013,
+ "CRE": 9014,
+ "▁ride": 9015,
+ "found": 9016,
+ "=%": 9017,
+ "вой": 9018,
+ "▁matches": 9019,
+ "▁Lie": 9020,
+ "▁experiences": 9021,
+ "Pool": 9022,
+ "ups": 9023,
+ "AV": 9024,
+ "▁existence": 9025,
+ "▁thin": 9026,
+ "▁magn": 9027,
+ "COMP": 9028,
+ "home": 9029,
+ "▁ni": 9030,
+ "▁wurden": 9031,
+ "лав": 9032,
+ "▁teeth": 9033,
+ "▁Stan": 9034,
+ "appro": 9035,
+ "anny": 9036,
+ "ifts": 9037,
+ "▁unknown": 9038,
+ "▁homes": 9039,
+ "▁entity": 9040,
+ "cie": 9041,
+ "ление": 9042,
+ "iar": 9043,
+ "▁compliance": 9044,
+ "▁focused": 9045,
+ "uzz": 9046,
+ "=\\\"": 9047,
+ "components": 9048,
+ "Attr": 9049,
+ "allery": 9050,
+ "▁identify": 9051,
+ "Ok": 9052,
+ "pie": 9053,
+ "▁Still": 9054,
+ "▁offering": 9055,
+ "▁busy": 9056,
+ "ctl": 9057,
+ "itors": 9058,
+ "▁concerned": 9059,
+ "▁brown": 9060,
+ "clk": 9061,
+ "Selected": 9062,
+ "▁Block": 9063,
+ "▁egy": 9064,
+ "icing": 9065,
+ "▁URL": 9066,
+ "▁topic": 9067,
+ "▁Product": 9068,
+ "▁чи": 9069,
+ "▁trial": 9070,
+ "▁weekend": 9071,
+ "lu": 9072,
+ "▁IV": 9073,
+ "▁Egy": 9074,
+ "xC": 9075,
+ "▁nove": 9076,
+ "▁lett": 9077,
+ "enne": 9078,
+ "()).": 9079,
+ ".**": 9080,
+ "▁promise": 9081,
+ "election": 9082,
+ "Auth": 9083,
+ "rv": 9084,
+ "ril": 9085,
+ "▁conduct": 9086,
+ "▁maintain": 9087,
+ "▁boat": 9088,
+ "▁opposite": 9089,
+ "spin": 9090,
+ "webpack": 9091,
+ "anta": 9092,
+ "▁orient": 9093,
+ "▁suc": 9094,
+ "▁exercise": 9095,
+ "▁efficient": 9096,
+ "▁tradition": 9097,
+ "▁zw": 9098,
+ "▁Sud": 9099,
+ "going": 9100,
+ "▁Pier": 9101,
+ "inv": 9102,
+ "ipes": 9103,
+ "ensuremath": 9104,
+ "▁conver": 9105,
+ "creen": 9106,
+ "▁terror": 9107,
+ "▁Dou": 9108,
+ "▁invalid": 9109,
+ "ceived": 9110,
+ "▁Arab": 9111,
+ "▁wire": 9112,
+ "application": 9113,
+ "shift": 9114,
+ "Generic": 9115,
+ "▁Plan": 9116,
+ "▁Wall": 9117,
+ "▁directory": 9118,
+ "▁egg": 9119,
+ "▁wealth": 9120,
+ "random": 9121,
+ "attribute": 9122,
+ "▁hide": 9123,
+ "Serial": 9124,
+ "cam": 9125,
+ "▁ital": 9126,
+ "▁Line": 9127,
+ "▁CHECK": 9128,
+ "ployment": 9129,
+ "▁massive": 9130,
+ "▁extract": 9131,
+ "chain": 9132,
+ "Rest": 9133,
+ "▁Las": 9134,
+ "▁bear": 9135,
+ "▁links": 9136,
+ "▁newsp": 9137,
+ "▁FC": 9138,
+ "Card": 9139,
+ "aks": 9140,
+ "▁visible": 9141,
+ "▁Marc": 9142,
+ "▁Boston": 9143,
+ "▁reserved": 9144,
+ "▁roof": 9145,
+ "licenses": 9146,
+ "dc": 9147,
+ "▁Information": 9148,
+ "▁witness": 9149,
+ "Sk": 9150,
+ "*),": 9151,
+ "Scope": 9152,
+ "'];": 9153,
+ "▁Mir": 9154,
+ "uding": 9155,
+ "▁trend": 9156,
+ "rep": 9157,
+ "▁musical": 9158,
+ "▁neither": 9159,
+ "▁Creat": 9160,
+ "▁positions": 9161,
+ "LC": 9162,
+ "ridge": 9163,
+ "▁officers": 9164,
+ "▁violence": 9165,
+ "▁Tem": 9166,
+ "▁Sus": 9167,
+ "▁Way": 9168,
+ "After": 9169,
+ "acket": 9170,
+ "▁Sou": 9171,
+ "acer": 9172,
+ "||": 9173,
+ "▁remark": 9174,
+ "water": 9175,
+ "ně": 9176,
+ "▁Са": 9177,
+ "▁sed": 9178,
+ "Each": 9179,
+ "▁photograph": 9180,
+ "▁letters": 9181,
+ "▁invent": 9182,
+ "▁Mas": 9183,
+ "▁songs": 9184,
+ "ól": 9185,
+ "kind": 9186,
+ "▁Non": 9187,
+ "▁dust": 9188,
+ "**:": 9189,
+ "nabla": 9190,
+ ".\",": 9191,
+ "Lock": 9192,
+ "▁До": 9193,
+ "▁cluster": 9194,
+ "loss": 9195,
+ "▁ASSERT": 9196,
+ "fall": 9197,
+ "▁reject": 9198,
+ "▁Spring": 9199,
+ "▁wedding": 9200,
+ "▁grav": 9201,
+ "ression": 9202,
+ "limit": 9203,
+ "RES": 9204,
+ "]}": 9205,
+ "▁listed": 9206,
+ "▁Tele": 9207,
+ "hline": 9208,
+ "▁chief": 9209,
+ "MEM": 9210,
+ "дар": 9211,
+ "▁expensive": 9212,
+ "trace": 9213,
+ "▁Rog": 9214,
+ "▁Coll": 9215,
+ "▁Author": 9216,
+ "▁Board": 9217,
+ "▁Capt": 9218,
+ "TEXT": 9219,
+ "▁recon": 9220,
+ "esta": 9221,
+ "▁properly": 9222,
+ "▁&\\": 9223,
+ "leton": 9224,
+ "iker": 9225,
+ "Gu": 9226,
+ "▁Kom": 9227,
+ "oco": 9228,
+ "▁anymore": 9229,
+ "▁taste": 9230,
+ "▁Santa": 9231,
+ "gex": 9232,
+ "▁Secret": 9233,
+ "▁talent": 9234,
+ "▁moments": 9235,
+ "▁Ba": 9236,
+ "▁extr": 9237,
+ "▁Commission": 9238,
+ "▁modify": 9239,
+ "▁Figure": 9240,
+ "▁domin": 9241,
+ "▁plot": 9242,
+ "enger": 9243,
+ "utch": 9244,
+ "▁cities": 9245,
+ "▁nut": 9246,
+ "profile": 9247,
+ "▁Stat": 9248,
+ "▁nodes": 9249,
+ "▁ns": 9250,
+ "essages": 9251,
+ "impl": 9252,
+ "icker": 9253,
+ "▁examples": 9254,
+ "abeth": 9255,
+ "▁stated": 9256,
+ "fire": 9257,
+ "bul": 9258,
+ "▁dangerous": 9259,
+ "▁Pay": 9260,
+ "▁Gre": 9261,
+ "▁Monday": 9262,
+ "esome": 9263,
+ "igan": 9264,
+ "rund": 9265,
+ "prise": 9266,
+ "fail": 9267,
+ "▁Never": 9268,
+ "Av": 9269,
+ "▁linear": 9270,
+ "▁ul": 9271,
+ "WAR": 9272,
+ "рен": 9273,
+ "▁AT": 9274,
+ "▁dop": 9275,
+ "▁nou": 9276,
+ "Dest": 9277,
+ "▁claims": 9278,
+ "enda": 9279,
+ "▁crazy": 9280,
+ "gel": 9281,
+ "oggle": 9282,
+ "▁representation": 9283,
+ "inen": 9284,
+ "▁alternative": 9285,
+ "DM": 9286,
+ "ABILITY": 9287,
+ "faces": 9288,
+ "▁doors": 9289,
+ "ativ": 9290,
+ "Look": 9291,
+ "▁JSON": 9292,
+ "▁appearance": 9293,
+ "бря": 9294,
+ "SQL": 9295,
+ "▁silence": 9296,
+ "udo": 9297,
+ "▁Director": 9298,
+ "Statement": 9299,
+ "selected": 9300,
+ "high": 9301,
+ "prime": 9302,
+ "▁ignore": 9303,
+ "▁colors": 9304,
+ "ushing": 9305,
+ "▁virt": 9306,
+ "manager": 9307,
+ "▁remote": 9308,
+ "ło": 9309,
+ "small": 9310,
+ "▁crime": 9311,
+ "rb": 9312,
+ "▁creation": 9313,
+ "▁flight": 9314,
+ "▁Sign": 9315,
+ "ILE": 9316,
+ "▁DO": 9317,
+ "comment": 9318,
+ "▁Cost": 9319,
+ ".__": 9320,
+ "▁Cop": 9321,
+ "▁vom": 9322,
+ "▁Science": 9323,
+ "ления": 9324,
+ "oop": 9325,
+ "interface": 9326,
+ "▁WARRANTIES": 9327,
+ "▁Page": 9328,
+ "******": 9329,
+ "ском": 9330,
+ "TRUE": 9331,
+ "▁repeated": 9332,
+ "▁его": 9333,
+ "шо": 9334,
+ "▁roz": 9335,
+ "Pe": 9336,
+ "▁ISBN": 9337,
+ "irts": 9338,
+ "poses": 9339,
+ "})$": 9340,
+ "▁І": 9341,
+ "children": 9342,
+ "bles": 9343,
+ "ECT": 9344,
+ "▁iz": 9345,
+ "▁builder": 9346,
+ "▁Media": 9347,
+ "iat": 9348,
+ "▁contrast": 9349,
+ "”,": 9350,
+ "▁Link": 9351,
+ "▁Education": 9352,
+ "▁joint": 9353,
+ "▁external": 9354,
+ "▁роз": 9355,
+ "▁bits": 9356,
+ "FORM": 9357,
+ "erman": 9358,
+ "wp": 9359,
+ "▁Mike": 9360,
+ "▁Master": 9361,
+ "▁senior": 9362,
+ "▁Nav": 9363,
+ "▁recorded": 9364,
+ "eling": 9365,
+ "esh": 9366,
+ "fx": 9367,
+ "кан": 9368,
+ "▁tall": 9369,
+ "▁Johnson": 9370,
+ "▁sono": 9371,
+ "▁anche": 9372,
+ "icken": 9373,
+ "loop": 9374,
+ "iciency": 9375,
+ "emporary": 9376,
+ "▁Does": 9377,
+ "▁relation": 9378,
+ "мы": 9379,
+ "was": 9380,
+ "low": 9381,
+ "ichte": 9382,
+ "▁Jones": 9383,
+ "▁bedroom": 9384,
+ "DIS": 9385,
+ "▁magnet": 9386,
+ "▁Engine": 9387,
+ "▁feelings": 9388,
+ "GC": 9389,
+ "▁torn": 9390,
+ "▁relationships": 9391,
+ "▁Ре": 9392,
+ "▁proud": 9393,
+ "▁twe": 9394,
+ "oval": 9395,
+ "▁waste": 9396,
+ "▁reduced": 9397,
+ "ilton": 9398,
+ "BP": 9399,
+ "▁forgot": 9400,
+ "▁bodies": 9401,
+ "▁Haw": 9402,
+ "lag": 9403,
+ "▁www": 9404,
+ "door": 9405,
+ "▁sufficient": 9406,
+ "▁dollars": 9407,
+ "Len": 9408,
+ "▁talked": 9409,
+ "▁bond": 9410,
+ "▁Bor": 9411,
+ "}}{": 9412,
+ "rod": 9413,
+ "Password": 9414,
+ "quare": 9415,
+ "▁lights": 9416,
+ "eren": 9417,
+ "▁thirty": 9418,
+ "NC": 9419,
+ "▁TODO": 9420,
+ "▁respond": 9421,
+ "ких": 9422,
+ "direct": 9423,
+ "ação": 9424,
+ "▁heav": 9425,
+ "Media": 9426,
+ "exit": 9427,
+ "License": 9428,
+ "`.": 9429,
+ "▁mixed": 9430,
+ "▁desk": 9431,
+ "▁teaching": 9432,
+ "▁maj": 9433,
+ "▁nerv": 9434,
+ "inations": 9435,
+ "typeof": 9436,
+ "▁coast": 9437,
+ "▁же": 9438,
+ "▁beside": 9439,
+ "ummy": 9440,
+ "Doc": 9441,
+ "▁schedule": 9442,
+ "▁recover": 9443,
+ "▁Further": 9444,
+ "▁steel": 9445,
+ "boot": 9446,
+ "▁Perhaps": 9447,
+ "▁съ": 9448,
+ "▁Os": 9449,
+ "rick": 9450,
+ "▁Ви": 9451,
+ "Support": 9452,
+ "▁(_": 9453,
+ "nil": 9454,
+ "pis": 9455,
+ "xpected": 9456,
+ "▁processing": 9457,
+ "Build": 9458,
+ "arian": 9459,
+ "▁icon": 9460,
+ "▁CA": 9461,
+ "wick": 9462,
+ "=(": 9463,
+ "▁algorithm": 9464,
+ "▁Young": 9465,
+ "▁Management": 9466,
+ "▁ancient": 9467,
+ "ность": 9468,
+ "oti": 9469,
+ "▁combination": 9470,
+ "world": 9471,
+ "nn": 9472,
+ "▁dram": 9473,
+ "enabled": 9474,
+ "Ac": 9475,
+ "CCESS": 9476,
+ "aration": 9477,
+ "▁blocks": 9478,
+ "▁Angeles": 9479,
+ "▁Qual": 9480,
+ "▁succeed": 9481,
+ "network": 9482,
+ "▁oblig": 9483,
+ "springframework": 9484,
+ "▁Tre": 9485,
+ "okes": 9486,
+ "mun": 9487,
+ "▁Network": 9488,
+ "Del": 9489,
+ "▁estate": 9490,
+ "▁liqu": 9491,
+ "▁pob": 9492,
+ "▁dad": 9493,
+ "▁distinct": 9494,
+ "▁Tit": 9495,
+ "▁Lear": 9496,
+ "ferred": 9497,
+ "android": 9498,
+ "▁subsequ": 9499,
+ "▁Florida": 9500,
+ "subset": 9501,
+ "▁whisper": 9502,
+ "Vol": 9503,
+ "ulous": 9504,
+ "▁crew": 9505,
+ "▁lug": 9506,
+ "pid": 9507,
+ "ocity": 9508,
+ "skb": 9509,
+ "▁tea": 9510,
+ "ун": 9511,
+ "▁honor": 9512,
+ "▁Ins": 9513,
+ "▁gew": 9514,
+ "Details": 9515,
+ "eneath": 9516,
+ "atar": 9517,
+ "▁_{": 9518,
+ "amen": 9519,
+ "▁setup": 9520,
+ "Transaction": 9521,
+ "▁blank": 9522,
+ "Failed": 9523,
+ "job": 9524,
+ "▁pret": 9525,
+ "ße": 9526,
+ "loor": 9527,
+ "ří": 9528,
+ "ncia": 9529,
+ "▁anywhere": 9530,
+ "▁Light": 9531,
+ "▁Ak": 9532,
+ "BD": 9533,
+ "▁excited": 9534,
+ "agers": 9535,
+ "▁warning": 9536,
+ "▁processes": 9537,
+ "hu": 9538,
+ "▁youth": 9539,
+ "▁dogs": 9540,
+ "▁oct": 9541,
+ "▁nine": 9542,
+ "Writer": 9543,
+ "grid": 9544,
+ "▁importance": 9545,
+ "estic": 9546,
+ "▁carefully": 9547,
+ "master": 9548,
+ "▁decisions": 9549,
+ "▁pin": 9550,
+ "▁crack": 9551,
+ "TEST": 9552,
+ "▁Local": 9553,
+ "▁Right": 9554,
+ "▁vast": 9555,
+ "▁faster": 9556,
+ "▁institut": 9557,
+ "▁annual": 9558,
+ "LAN": 9559,
+ "▁episode": 9560,
+ "▁XV": 9561,
+ "▁delivery": 9562,
+ "tl": 9563,
+ "FP": 9564,
+ "circ": 9565,
+ "▁typically": 9566,
+ "igo": 9567,
+ "▁intel": 9568,
+ "nat": 9569,
+ "xb": 9570,
+ "стро": 9571,
+ ")-": 9572,
+ "▁Bal": 9573,
+ "▁Jos": 9574,
+ "▁gonna": 9575,
+ "▁Rest": 9576,
+ "jor": 9577,
+ "onia": 9578,
+ "orship": 9579,
+ "overy": 9580,
+ "LINE": 9581,
+ "]:": 9582,
+ "Queue": 9583,
+ "▁compare": 9584,
+ "▁apartment": 9585,
+ "▁rul": 9586,
+ "Dr": 9587,
+ "gency": 9588,
+ "▁obviously": 9589,
+ "zie": 9590,
+ "ycl": 9591,
+ "fortunately": 9592,
+ "▁stepped": 9593,
+ "▁Seg": 9594,
+ "▁Which": 9595,
+ "▁PC": 9596,
+ "▁ast": 9597,
+ "endor": 9598,
+ "▁permission": 9599,
+ "COL": 9600,
+ "▁TEST": 9601,
+ "Pay": 9602,
+ "ères": 9603,
+ "▁studied": 9604,
+ "▁accompl": 9605,
+ "role": 9606,
+ "Where": 9607,
+ "protobuf": 9608,
+ "metadata": 9609,
+ "Job": 9610,
+ "▁Four": 9611,
+ "plements": 9612,
+ "disable": 9613,
+ "▁loud": 9614,
+ "▁happening": 9615,
+ "▁Using": 9616,
+ "rog": 9617,
+ "▁depends": 9618,
+ "ím": 9619,
+ "'\\": 9620,
+ "▁taught": 9621,
+ "shared": 9622,
+ "▁attributes": 9623,
+ "▁Action": 9624,
+ "▁dess": 9625,
+ "▁houses": 9626,
+ "▁reset": 9627,
+ "▁bien": 9628,
+ "▁explicit": 9629,
+ "LOW": 9630,
+ "->_": 9631,
+ "▁PM": 9632,
+ "Category": 9633,
+ "oice": 9634,
+ "into": 9635,
+ "▁mail": 9636,
+ "▁authority": 9637,
+ "▁unable": 9638,
+ "filename": 9639,
+ "ék": 9640,
+ "лей": 9641,
+ "▁sector": 9642,
+ "appoint": 9643,
+ "▁hang": 9644,
+ "▁cel": 9645,
+ "related": 9646,
+ "itate": 9647,
+ "▁'<": 9648,
+ "amber": 9649,
+ "▁cheap": 9650,
+ "▁enabled": 9651,
+ "▁division": 9652,
+ "Any": 9653,
+ "▁hier": 9654,
+ "▁Head": 9655,
+ "ntax": 9656,
+ "uda": 9657,
+ "▁limitations": 9658,
+ "▁studio": 9659,
+ "media": 9660,
+ "▁circle": 9661,
+ "нова": 9662,
+ "▁laug": 9663,
+ "acts": 9664,
+ "▁Во": 9665,
+ "ód": 9666,
+ "pled": 9667,
+ "LOC": 9668,
+ "Expr": 9669,
+ ">:": 9670,
+ "▁prés": 9671,
+ "▁laughed": 9672,
+ "▁Three": 9673,
+ "лы": 9674,
+ "▁ends": 9675,
+ "▁fundament": 9676,
+ "▁inher": 9677,
+ "▁liv": 9678,
+ "bid": 9679,
+ "▁responsibility": 9680,
+ "▁checked": 9681,
+ "▁Pac": 9682,
+ "▁fault": 9683,
+ "▁yellow": 9684,
+ "▁salt": 9685,
+ "▁Francisco": 9686,
+ "▁^": 9687,
+ "▁ON": 9688,
+ "▁beauty": 9689,
+ "yg": 9690,
+ "▁Aff": 9691,
+ "▁Eq": 9692,
+ "▁magic": 9693,
+ "▁handler": 9694,
+ "xE": 9695,
+ "▁numerous": 9696,
+ "▁hole": 9697,
+ "▁rooms": 9698,
+ "cción": 9699,
+ "▁Arm": 9700,
+ "person": 9701,
+ "▁buildings": 9702,
+ "▁plate": 9703,
+ "bled": 9704,
+ "errors": 9705,
+ "▁Again": 9706,
+ "▁Default": 9707,
+ "▁Hard": 9708,
+ "tó": 9709,
+ "hus": 9710,
+ "▁dimension": 9711,
+ "iale": 9712,
+ "▁Mult": 9713,
+ "▁Government": 9714,
+ "Func": 9715,
+ "▁blow": 9716,
+ "▁rect": 9717,
+ "erra": 9718,
+ "connection": 9719,
+ "▁passing": 9720,
+ "ßen": 9721,
+ "phas": 9722,
+ "ensional": 9723,
+ "record": 9724,
+ "cohol": 9725,
+ "▁Harry": 9726,
+ "izontal": 9727,
+ "▁finger": 9728,
+ "▁younger": 9729,
+ "▁SC": 9730,
+ "operation": 9731,
+ "BY": 9732,
+ "heim": 9733,
+ "▁Bad": 9734,
+ "▁storm": 9735,
+ "▁Nat": 9736,
+ "▁buying": 9737,
+ "▁Sometimes": 9738,
+ "▁Ста": 9739,
+ "essed": 9740,
+ "▁damn": 9741,
+ "▁meg": 9742,
+ "umes": 9743,
+ "ünd": 9744,
+ "тра": 9745,
+ "▁silver": 9746,
+ "wd": 9747,
+ "hidden": 9748,
+ "ardo": 9749,
+ "▁communities": 9750,
+ "▁diet": 9751,
+ "otted": 9752,
+ "▁bat": 9753,
+ "ancer": 9754,
+ "▁fmt": 9755,
+ "▁Pen": 9756,
+ "▁til": 9757,
+ "Enum": 9758,
+ "PATH": 9759,
+ "▁matters": 9760,
+ "timeout": 9761,
+ "------------": 9762,
+ "kan": 9763,
+ "▁Corpor": 9764,
+ "=\"../../": 9765,
+ "▁Ale": 9766,
+ "hentication": 9767,
+ "▁complic": 9768,
+ "▁Security": 9769,
+ "OFF": 9770,
+ "Rad": 9771,
+ "apse": 9772,
+ "▁dance": 9773,
+ "▁permissions": 9774,
+ "▁warrant": 9775,
+ "▁lad": 9776,
+ "▁isol": 9777,
+ "dl": 9778,
+ "▁Au": 9779,
+ "yes": 9780,
+ "▁tv": 9781,
+ "▁provider": 9782,
+ "▁terrible": 9783,
+ "▁department": 9784,
+ "eral": 9785,
+ "▁implementation": 9786,
+ "SR": 9787,
+ "▁hearing": 9788,
+ "▁Kn": 9789,
+ "FR": 9790,
+ "tv": 9791,
+ "▁diss": 9792,
+ "FUN": 9793,
+ "▁durante": 9794,
+ "osis": 9795,
+ "▁tasks": 9796,
+ "▁Blo": 9797,
+ "вод": 9798,
+ "▁branch": 9799,
+ "▁politics": 9800,
+ "▁Elle": 9801,
+ "▁leadership": 9802,
+ "expr": 9803,
+ "▁techniques": 9804,
+ "prec": 9805,
+ "Sigma": 9806,
+ "imately": 9807,
+ "tk": 9808,
+ "achment": 9809,
+ "▁Enter": 9810,
+ "▁creative": 9811,
+ "▁зна": 9812,
+ "appy": 9813,
+ "unched": 9814,
+ "▁'',": 9815,
+ "onder": 9816,
+ "{-": 9817,
+ "NUM": 9818,
+ "▁narr": 9819,
+ "Memory": 9820,
+ "▁winning": 9821,
+ "▁Follow": 9822,
+ "*/\r": 9823,
+ "vision": 9824,
+ "resents": 9825,
+ "zione": 9826,
+ "▁latter": 9827,
+ "▁requests": 9828,
+ "▁margin": 9829,
+ "▁{\"": 9830,
+ "video": 9831,
+ "cn": 9832,
+ "▁Image": 9833,
+ "Tim": 9834,
+ "CONFIG": 9835,
+ "▁allowing": 9836,
+ "▁combined": 9837,
+ "PUT": 9838,
+ "▁instanceof": 9839,
+ "igin": 9840,
+ "▁pero": 9841,
+ "▁''": 9842,
+ "▁confidence": 9843,
+ "▁equivalent": 9844,
+ "pad": 9845,
+ "effect": 9846,
+ "RX": 9847,
+ "▁lang": 9848,
+ "strong": 9849,
+ "▁bridge": 9850,
+ "aya": 9851,
+ "▁treated": 9852,
+ "▁forth": 9853,
+ "SW": 9854,
+ "▁accounts": 9855,
+ "▁PO": 9856,
+ "▁listening": 9857,
+ "Route": 9858,
+ "()))": 9859,
+ "cpy": 9860,
+ "▁reform": 9861,
+ "▁gate": 9862,
+ "▁Walk": 9863,
+ "▁somehow": 9864,
+ "tf": 9865,
+ "▁layout": 9866,
+ "umin": 9867,
+ "▁considering": 9868,
+ "▁premi": 9869,
+ "▁Mom": 9870,
+ "athan": 9871,
+ "Gen": 9872,
+ "▁planet": 9873,
+ "amples": 9874,
+ "▁MO": 9875,
+ "shop": 9876,
+ "▁premier": 9877,
+ "▁simpl": 9878,
+ "▁segu": 9879,
+ "LY": 9880,
+ "Sum": 9881,
+ "▁tables": 9882,
+ "ska": 9883,
+ "▁ž": 9884,
+ "pd": 9885,
+ "▁sous": 9886,
+ "▁conference": 9887,
+ "▁Dat": 9888,
+ "Scroll": 9889,
+ "▁standards": 9890,
+ "▁гру": 9891,
+ "esse": 9892,
+ "▁citizens": 9893,
+ "▁occurred": 9894,
+ "▁democr": 9895,
+ "▁elev": 9896,
+ "▁Sem": 9897,
+ "ensus": 9898,
+ "headers": 9899,
+ "▁Chris": 9900,
+ "imento": 9901,
+ "kom": 9902,
+ "Cor": 9903,
+ "MIN": 9904,
+ "usher": 9905,
+ "Database": 9906,
+ "▁formal": 9907,
+ "igne": 9908,
+ "▁organizations": 9909,
+ "▁Ire": 9910,
+ "Xml": 9911,
+ "из": 9912,
+ "▁pray": 9913,
+ "▁bomb": 9914,
+ "▁mand": 9915,
+ "erts": 9916,
+ "▁clock": 9917,
+ "▁buck": 9918,
+ "вали": 9919,
+ "ensch": 9920,
+ "▁volt": 9921,
+ "▁films": 9922,
+ "▁plants": 9923,
+ "inode": 9924,
+ "Boolean": 9925,
+ "▁restaurant": 9926,
+ "ían": 9927,
+ "▁debut": 9928,
+ "pages": 9929,
+ "▁wordt": 9930,
+ "▁Ба": 9931,
+ "▁greatest": 9932,
+ "(\"/": 9933,
+ "▁copyright": 9934,
+ "▁rit": 9935,
+ "sizeof": 9936,
+ "Trace": 9937,
+ "uent": 9938,
+ "тур": 9939,
+ "▁ko": 9940,
+ ":\\": 9941,
+ "▁bigger": 9942,
+ "▁perfectly": 9943,
+ "tenance": 9944,
+ "MASK": 9945,
+ "ré": 9946,
+ "▁ett": 9947,
+ "▁nose": 9948,
+ "▁craft": 9949,
+ "iteral": 9950,
+ "▁discussed": 9951,
+ "▁Jewish": 9952,
+ "Cap": 9953,
+ "▁Unless": 9954,
+ "▁Jackson": 9955,
+ "Attributes": 9956,
+ "▁lunch": 9957,
+ "öl": 9958,
+ "atr": 9959,
+ "▁paying": 9960,
+ "Parse": 9961,
+ "()\r": 9962,
+ "lad": 9963,
+ "▁rare": 9964,
+ "▁[];": 9965,
+ "stone": 9966,
+ "▁unc": 9967,
+ "▁defense": 9968,
+ "}+": 9969,
+ "▁Global": 9970,
+ "▁Soviet": 9971,
+ "▁Australian": 9972,
+ "▁gli": 9973,
+ "variant": 9974,
+ "▁Ron": 9975,
+ "▁loan": 9976,
+ "Step": 9977,
+ "member": 9978,
+ "Sch": 9979,
+ "▁Committee": 9980,
+ "▁spending": 9981,
+ "▁Tri": 9982,
+ "▁Journal": 9983,
+ "▁sugar": 9984,
+ "elly": 9985,
+ "HTML": 9986,
+ "▁advent": 9987,
+ "wing": 9988,
+ "▁Whether": 9989,
+ "oration": 9990,
+ "▁NE": 9991,
+ "iveness": 9992,
+ "▁hav": 9993,
+ "▁conscious": 9994,
+ "een": 9995,
+ "Symbol": 9996,
+ "▁ку": 9997,
+ "Logger": 9998,
+ "▁Little": 9999,
+ "widet": 10000,
+ "ocation": 10001,
+ "pin": 10002,
+ "▁symmet": 10003,
+ "▁AD": 10004,
+ "▁posts": 10005,
+ "shal": 10006,
+ "▁Conf": 10007,
+ "▁chose": 10008,
+ "mal": 10009,
+ "ulo": 10010,
+ "▁Method": 10011,
+ "▁missed": 10012,
+ "Remove": 10013,
+ "Auto": 10014,
+ "VALUE": 10015,
+ "thlet": 10016,
+ "▁Force": 10017,
+ "pf": 10018,
+ "▁Я": 10019,
+ "late": 10020,
+ "▁pul": 10021,
+ "Pop": 10022,
+ "▁advanced": 10023,
+ "aires": 10024,
+ "ressed": 10025,
+ "AME": 10026,
+ "bell": 10027,
+ "aching": 10028,
+ "ić": 10029,
+ "echo": 10030,
+ "HS": 10031,
+ "▁funny": 10032,
+ "рии": 10033,
+ "▁eer": 10034,
+ "▁veget": 10035,
+ "▁fourth": 10036,
+ "cf": 10037,
+ "transform": 10038,
+ "▁grown": 10039,
+ "▁McC": 10040,
+ "site": 10041,
+ "▁beneath": 10042,
+ "▁shell": 10043,
+ "xd": 10044,
+ "Play": 10045,
+ "short": 10046,
+ "Role": 10047,
+ "▁religion": 10048,
+ "inator": 10049,
+ "}": 10050,
+ "▁Eliz": 10051,
+ "Microsoft": 10052,
+ "▁vez": 10053,
+ "▁рабо": 10054,
+ "reich": 10055,
+ "vet": 10056,
+ "enum": 10057,
+ "▁welcome": 10058,
+ "nament": 10059,
+ "▁jan": 10060,
+ "▁cycle": 10061,
+ "▁acknow": 10062,
+ "▁wound": 10063,
+ "idi": 10064,
+ "▁possibility": 10065,
+ "annotation": 10066,
+ "▁technical": 10067,
+ "▁fold": 10068,
+ "eh": 10069,
+ "istence": 10070,
+ "▁reply": 10071,
+ "etes": 10072,
+ "▁decades": 10073,
+ "wan": 10074,
+ "▁кра": 10075,
+ "▁Lab": 10076,
+ "▁unf": 10077,
+ "▁imper": 10078,
+ "▁bug": 10079,
+ "▁Though": 10080,
+ "throws": 10081,
+ "Visible": 10082,
+ "prev": 10083,
+ "▁Ty": 10084,
+ "▁depending": 10085,
+ "▁policies": 10086,
+ "andy": 10087,
+ "▁Italian": 10088,
+ "uma": 10089,
+ "▁signs": 10090,
+ "▁Through": 10091,
+ "бы": 10092,
+ "bot": 10093,
+ "▁publish": 10094,
+ ")**": 10095,
+ "ATTR": 10096,
+ "iral": 10097,
+ "VT": 10098,
+ "▁recognized": 10099,
+ "▁Lind": 10100,
+ "ection": 10101,
+ "▁relatively": 10102,
+ "▁Ah": 10103,
+ "▁Dig": 10104,
+ "ць": 10105,
+ "icket": 10106,
+ "▁specifically": 10107,
+ "nost": 10108,
+ "▁grass": 10109,
+ "▁causes": 10110,
+ "тво": 10111,
+ "utter": 10112,
+ "▁Festival": 10113,
+ "greg": 10114,
+ "▁weapons": 10115,
+ "▁sir": 10116,
+ "▁Virginia": 10117,
+ "login": 10118,
+ "▁schedul": 10119,
+ "ського": 10120,
+ "▁losing": 10121,
+ "▁Europ": 10122,
+ "\"><": 10123,
+ "asp": 10124,
+ "ajo": 10125,
+ "exports": 10126,
+ "▁Node": 10127,
+ "▁jako": 10128,
+ "▁ya": 10129,
+ "▁successfully": 10130,
+ "▁friendly": 10131,
+ "buff": 10132,
+ "DEFAULT": 10133,
+ "▁pregn": 10134,
+ "Required": 10135,
+ "▁binary": 10136,
+ "isting": 10137,
+ "▁stared": 10138,
+ "▁circumstances": 10139,
+ "▁хо": 10140,
+ "rei": 10141,
+ "▁Го": 10142,
+ "Transform": 10143,
+ "cnt": 10144,
+ "▁Ext": 10145,
+ "report": 10146,
+ "VERSION": 10147,
+ "▁analy": 10148,
+ "▁Marg": 10149,
+ "▁alleg": 10150,
+ "builder": 10151,
+ "ToString": 10152,
+ "Layer": 10153,
+ "íst": 10154,
+ "Prop": 10155,
+ "▁Emp": 10156,
+ "}]": 10157,
+ "▁selling": 10158,
+ "▁queue": 10159,
+ "▁seriously": 10160,
+ "▁Lead": 10161,
+ "textit": 10162,
+ "testing": 10163,
+ "▁Пре": 10164,
+ "security": 10165,
+ "iał": 10166,
+ "ún": 10167,
+ "chip": 10168,
+ "▁candidate": 10169,
+ "▁minister": 10170,
+ "eria": 10171,
+ "▁Het": 10172,
+ "дин": 10173,
+ "▁Britain": 10174,
+ "▁barely": 10175,
+ "▁sty": 10176,
+ "▁Spanish": 10177,
+ "▁Ven": 10178,
+ "timer": 10179,
+ "ків": 10180,
+ "▁documents": 10181,
+ "('.": 10182,
+ "▁debug": 10183,
+ "▁contro": 10184,
+ "стоя": 10185,
+ "▁joy": 10186,
+ "Sn": 10187,
+ "Inv": 10188,
+ "▁protocol": 10189,
+ "▁faces": 10190,
+ "▁Despite": 10191,
+ "sed": 10192,
+ "Conf": 10193,
+ "ARG": 10194,
+ "▁evolution": 10195,
+ "▁tod": 10196,
+ "▁Promise": 10197,
+ "▁posted": 10198,
+ "Perm": 10199,
+ "bet": 10200,
+ "Ang": 10201,
+ "Just": 10202,
+ "▁rum": 10203,
+ "layer": 10204,
+ "▁behavi": 10205,
+ "ipping": 10206,
+ "▁dynam": 10207,
+ "▁scheme": 10208,
+ "▁proto": 10209,
+ ")/": 10210,
+ "Collections": 10211,
+ "riev": 10212,
+ "▁Click": 10213,
+ "▁uns": 10214,
+ "widetilde": 10215,
+ "▁remembered": 10216,
+ "гі": 10217,
+ "inates": 10218,
+ "▁incorpor": 10219,
+ "▁Description": 10220,
+ "▁prepare": 10221,
+ "▁Final": 10222,
+ "uation": 10223,
+ "▁Queen": 10224,
+ ">;": 10225,
+ "▁automatically": 10226,
+ "▁sharp": 10227,
+ "▁meat": 10228,
+ "ateur": 10229,
+ "astern": 10230,
+ "▁stuck": 10231,
+ "ASSERT": 10232,
+ "▁planned": 10233,
+ "dots": 10234,
+ "ookie": 10235,
+ "▁Histor": 10236,
+ "▁reviews": 10237,
+ "IMP": 10238,
+ "▁answered": 10239,
+ "Total": 10240,
+ "▁sau": 10241,
+ "▁Mexico": 10242,
+ "continue": 10243,
+ "▁Apple": 10244,
+ "likely": 10245,
+ "зва": 10246,
+ "users": 10247,
+ "▁identified": 10248,
+ "▁Lev": 10249,
+ "▁mol": 10250,
+ "▁Islam": 10251,
+ "▁committed": 10252,
+ "writ": 10253,
+ "бер": 10254,
+ "rift": 10255,
+ "▁interrupt": 10256,
+ "▁readonly": 10257,
+ "schema": 10258,
+ "Sm": 10259,
+ "Double": 10260,
+ "aza": 10261,
+ "▁Hal": 10262,
+ "Move": 10263,
+ "▁Series": 10264,
+ "inline": 10265,
+ "▁которы": 10266,
+ "soc": 10267,
+ "▁tent": 10268,
+ "▁amer": 10269,
+ "aki": 10270,
+ "▁lady": 10271,
+ "▁tired": 10272,
+ "ifi": 10273,
+ "▁même": 10274,
+ "ouver": 10275,
+ "▁aside": 10276,
+ "Did": 10277,
+ "',\r": 10278,
+ "▁bringing": 10279,
+ "Drawing": 10280,
+ "aro": 10281,
+ "▁Rh": 10282,
+ "▁Naz": 10283,
+ "esso": 10284,
+ "▁reaction": 10285,
+ "mitted": 10286,
+ "▁absolute": 10287,
+ "haust": 10288,
+ "(()": 10289,
+ "▁Task": 10290,
+ "ERS": 10291,
+ "▁^{": 10292,
+ "VD": 10293,
+ "▁tone": 10294,
+ "dist": 10295,
+ "vs": 10296,
+ "▁wheel": 10297,
+ "▁administration": 10298,
+ "▁interests": 10299,
+ "▁pointer": 10300,
+ "▁encounter": 10301,
+ "aver": 10302,
+ "▁nord": 10303,
+ "ket": 10304,
+ "▁beach": 10305,
+ "▁enjoyed": 10306,
+ "contains": 10307,
+ "▁append": 10308,
+ "Wait": 10309,
+ "▁squad": 10310,
+ "zel": 10311,
+ "▁medium": 10312,
+ "▁sending": 10313,
+ "▁Lady": 10314,
+ "ções": 10315,
+ "▁destination": 10316,
+ "nych": 10317,
+ "▁conflict": 10318,
+ "▁Ly": 10319,
+ "▁vul": 10320,
+ "▁basically": 10321,
+ "reated": 10322,
+ "black": 10323,
+ "ugins": 10324,
+ "▁calm": 10325,
+ "érie": 10326,
+ "har": 10327,
+ "лан": 10328,
+ "▁Се": 10329,
+ "watch": 10330,
+ "▁Put": 10331,
+ "▁dump": 10332,
+ "acher": 10333,
+ "scroll": 10334,
+ "▁claimed": 10335,
+ "▁Control": 10336,
+ "▁blind": 10337,
+ "enti": 10338,
+ "▁Keep": 10339,
+ "▁Development": 10340,
+ "images": 10341,
+ "▁tough": 10342,
+ "gebra": 10343,
+ "▁sept": 10344,
+ "hew": 10345,
+ "▁skill": 10346,
+ "▁Tay": 10347,
+ "▁któ": 10348,
+ "owner": 10349,
+ "pare": 10350,
+ "▁fee": 10351,
+ "▁continues": 10352,
+ "▁kan": 10353,
+ "bes": 10354,
+ "▁cha": 10355,
+ "ovo": 10356,
+ "▁Night": 10357,
+ "icture": 10358,
+ "shire": 10359,
+ "▁essay": 10360,
+ "▁suppose": 10361,
+ "etic": 10362,
+ "Art": 10363,
+ "acon": 10364,
+ "lla": 10365,
+ "words": 10366,
+ "▁comparison": 10367,
+ "▁BE": 10368,
+ "▁challenges": 10369,
+ "▁ol": 10370,
+ "citep": 10371,
+ "▁Foot": 10372,
+ "▁Such": 10373,
+ "▁papers": 10374,
+ "activ": 10375,
+ "quer": 10376,
+ "тя": 10377,
+ "▁То": 10378,
+ "ський": 10379,
+ "thur": 10380,
+ "done": 10381,
+ "▁shock": 10382,
+ "▁dedicated": 10383,
+ "▁correspond": 10384,
+ "Second": 10385,
+ "▁bull": 10386,
+ "life": 10387,
+ "indent": 10388,
+ "▁figures": 10389,
+ "▁Andrew": 10390,
+ "isp": 10391,
+ "▁favour": 10392,
+ "зда": 10393,
+ "▁Elect": 10394,
+ "Full": 10395,
+ "▁nearby": 10396,
+ "▁Register": 10397,
+ "Scale": 10398,
+ "ications": 10399,
+ "ин": 10400,
+ "▁AM": 10401,
+ "pair": 10402,
+ "▁perspective": 10403,
+ "▁nos": 10404,
+ "apa": 10405,
+ "ostał": 10406,
+ "▁Pers": 10407,
+ "icer": 10408,
+ "▁plastic": 10409,
+ "дов": 10410,
+ "ciples": 10411,
+ "zą": 10412,
+ "clos": 10413,
+ "▁уча": 10414,
+ "▁Á": 10415,
+ "plugin": 10416,
+ "▁angle": 10417,
+ "▁commission": 10418,
+ "▁funds": 10419,
+ "▁indu": 10420,
+ "▁drawn": 10421,
+ "ám": 10422,
+ "▁developing": 10423,
+ "▁segment": 10424,
+ "isme": 10425,
+ "scr": 10426,
+ "▁lies": 10427,
+ "▁IL": 10428,
+ "▁api": 10429,
+ "Extension": 10430,
+ "▁scal": 10431,
+ "install": 10432,
+ "▁Week": 10433,
+ "▁gentle": 10434,
+ "▁Canadian": 10435,
+ "▁dialog": 10436,
+ "▁articles": 10437,
+ "Theme": 10438,
+ "SM": 10439,
+ "▁Bul": 10440,
+ "▁leur": 10441,
+ "▁stom": 10442,
+ "Plugin": 10443,
+ "▁после": 10444,
+ "▁stead": 10445,
+ "▁ś": 10446,
+ "ipher": 10447,
+ "▁prze": 10448,
+ "▁draft": 10449,
+ "bottom": 10450,
+ "▁{};": 10451,
+ "▁stayed": 10452,
+ "feature": 10453,
+ "▁vot": 10454,
+ "▁fabric": 10455,
+ "ça": 10456,
+ "('#": 10457,
+ "rea": 10458,
+ "▁reput": 10459,
+ "▁Cir": 10460,
+ "▁AL": 10461,
+ "▁assertEquals": 10462,
+ "results": 10463,
+ "▁Cross": 10464,
+ "ursday": 10465,
+ "▁audio": 10466,
+ "▁gap": 10467,
+ "▁streets": 10468,
+ "▁scientific": 10469,
+ "platform": 10470,
+ "▁auss": 10471,
+ "▁Cro": 10472,
+ "▁partial": 10473,
+ "unc": 10474,
+ "▁choices": 10475,
+ "▁или": 10476,
+ "pred": 10477,
+ "▁heads": 10478,
+ "terday": 10479,
+ "▁Nick": 10480,
+ "▁weird": 10481,
+ "asant": 10482,
+ "▁represented": 10483,
+ "▁пи": 10484,
+ "DP": 10485,
+ "orders": 10486,
+ "clock": 10487,
+ "▁Ho": 10488,
+ "arters": 10489,
+ "Cmd": 10490,
+ "oga": 10491,
+ "Keys": 10492,
+ "Report": 10493,
+ "▁Vill": 10494,
+ "▁Mu": 10495,
+ "▁owned": 10496,
+ "SUCCESS": 10497,
+ "▁typeof": 10498,
+ "hdr": 10499,
+ "uable": 10500,
+ "▁neighborhood": 10501,
+ "▁AP": 10502,
+ "▁resulting": 10503,
+ "▁shadow": 10504,
+ "STRING": 10505,
+ "▁videos": 10506,
+ "лення": 10507,
+ "expect": 10508,
+ "▁Valley": 10509,
+ "▁goto": 10510,
+ "▁Sher": 10511,
+ "frastr": 10512,
+ "▁operating": 10513,
+ "▁это": 10514,
+ "▁Licensed": 10515,
+ "Variable": 10516,
+ "▁PR": 10517,
+ "▁Hans": 10518,
+ "clone": 10519,
+ "▁Gesch": 10520,
+ "▁Band": 10521,
+ "........": 10522,
+ "uing": 10523,
+ "▁hundreds": 10524,
+ "▁ок": 10525,
+ "▁emotional": 10526,
+ "▁Indust": 10527,
+ ")+": 10528,
+ "▁Egypt": 10529,
+ "▁franç": 10530,
+ "▁š": 10531,
+ "▁fasc": 10532,
+ "onto": 10533,
+ "▁Adam": 10534,
+ "▁laid": 10535,
+ "▁rig": 10536,
+ "▁detailed": 10537,
+ "▁implements": 10538,
+ "▁university": 10539,
+ "▁Hy": 10540,
+ "▁grid": 10541,
+ "▁regions": 10542,
+ "Stop": 10543,
+ "▁slot": 10544,
+ "▁angry": 10545,
+ "▁-=": 10546,
+ "▁waited": 10547,
+ "Vert": 10548,
+ "\":\"": 10549,
+ "▁elem": 10550,
+ "▁rég": 10551,
+ "owed": 10552,
+ "Member": 10553,
+ "▁ratio": 10554,
+ "isen": 10555,
+ "▁Lem": 10556,
+ "gery": 10557,
+ "▁cream": 10558,
+ "▁était": 10559,
+ "▁geb": 10560,
+ "unique": 10561,
+ "▁Deb": 10562,
+ "▁factory": 10563,
+ "że": 10564,
+ "dialog": 10565,
+ "▁Config": 10566,
+ "Sync": 10567,
+ "angers": 10568,
+ "▁governing": 10569,
+ "▁Hun": 10570,
+ "Space": 10571,
+ "▁jest": 10572,
+ "icious": 10573,
+ "▁emphas": 10574,
+ "umps": 10575,
+ "▁Esp": 10576,
+ "▁sul": 10577,
+ "▁historical": 10578,
+ "ija": 10579,
+ "▁lying": 10580,
+ "▁Steve": 10581,
+ "▁measures": 10582,
+ "osto": 10583,
+ "?”": 10584,
+ "▁pocket": 10585,
+ "▁Sat": 10586,
+ "▁pitch": 10587,
+ "▁natur": 10588,
+ "▁humans": 10589,
+ "▁Simon": 10590,
+ "adores": 10591,
+ "(\"\\": 10592,
+ "inking": 10593,
+ "▁expos": 10594,
+ "material": 10595,
+ "▁apparently": 10596,
+ "▁Camb": 10597,
+ "▁Box": 10598,
+ "▁spaces": 10599,
+ "exists": 10600,
+ "▁acting": 10601,
+ "ORY": 10602,
+ "зова": 10603,
+ "Good": 10604,
+ "ienne": 10605,
+ "▁Williams": 10606,
+ "▁fruit": 10607,
+ "iera": 10608,
+ "▁Lim": 10609,
+ "▁trait": 10610,
+ "▁artists": 10611,
+ "▁absor": 10612,
+ "rait": 10613,
+ "LOAD": 10614,
+ "▁movies": 10615,
+ "▁dynamic": 10616,
+ "asts": 10617,
+ "▁Integer": 10618,
+ "▁smoke": 10619,
+ "пі": 10620,
+ "angel": 10621,
+ ">(\"": 10622,
+ "▁instrument": 10623,
+ "▁fuel": 10624,
+ "ної": 10625,
+ "atalogue": 10626,
+ "▁serial": 10627,
+ "Files": 10628,
+ "▁bathroom": 10629,
+ "ilo": 10630,
+ "esto": 10631,
+ "▁pm": 10632,
+ "entials": 10633,
+ "▁Online": 10634,
+ "white": 10635,
+ "▁tips": 10636,
+ "▁capable": 10637,
+ "Fig": 10638,
+ "TV": 10639,
+ "▁он": 10640,
+ "ké": 10641,
+ "bitr": 10642,
+ "Mapping": 10643,
+ "▁tak": 10644,
+ "ющи": 10645,
+ "вля": 10646,
+ ")\",": 10647,
+ "▁Karl": 10648,
+ "▁Human": 10649,
+ "▁Pot": 10650,
+ "▁represents": 10651,
+ "▁consistent": 10652,
+ "_(": 10653,
+ "wen": 10654,
+ "▁Rose": 10655,
+ "law": 10656,
+ "▁FROM": 10657,
+ "▁begins": 10658,
+ "▁edit": 10659,
+ "▁mountain": 10660,
+ "▁chapter": 10661,
+ "▁wondered": 10662,
+ "▁industrial": 10663,
+ "▁Major": 10664,
+ "▁ges": 10665,
+ "▁directed": 10666,
+ "eros": 10667,
+ "▁Wild": 10668,
+ "liament": 10669,
+ "Book": 10670,
+ "username": 10671,
+ "hot": 10672,
+ "▁nam": 10673,
+ "▁league": 10674,
+ "bra": 10675,
+ "кон": 10676,
+ "▁Tal": 10677,
+ "▁Ва": 10678,
+ "▁exports": 10679,
+ "(@": 10680,
+ "▁sharing": 10681,
+ "▁Tro": 10682,
+ "ść": 10683,
+ "uesday": 10684,
+ "ylv": 10685,
+ "▁guitar": 10686,
+ "elen": 10687,
+ "Selection": 10688,
+ "▁confident": 10689,
+ "rypto": 10690,
+ "▁hors": 10691,
+ "editor": 10692,
+ "▁shoulders": 10693,
+ "getName": 10694,
+ "encing": 10695,
+ "SELECT": 10696,
+ "вши": 10697,
+ "▁kinds": 10698,
+ "▁Wel": 10699,
+ "▁purposes": 10700,
+ "Matrix": 10701,
+ "invalid": 10702,
+ "▁owners": 10703,
+ "▁Records": 10704,
+ "▁Process": 10705,
+ "▁chat": 10706,
+ "▁Dor": 10707,
+ "▁bin": 10708,
+ "redit": 10709,
+ "oire": 10710,
+ "▁Total": 10711,
+ "▁Family": 10712,
+ "ARY": 10713,
+ "▁bread": 10714,
+ "▁compre": 10715,
+ "▁shoes": 10716,
+ "▁raz": 10717,
+ "▁trace": 10718,
+ "nej": 10719,
+ "orted": 10720,
+ "hn": 10721,
+ "▁procedure": 10722,
+ "properties": 10723,
+ "plier": 10724,
+ "▁hero": 10725,
+ "panel": 10726,
+ "▁marked": 10727,
+ "▁worried": 10728,
+ "\\|": 10729,
+ "pts": 10730,
+ "▁Support": 10731,
+ "▁serving": 10732,
+ "Fail": 10733,
+ "▁disappoint": 10734,
+ "▁Scot": 10735,
+ "▁pleasure": 10736,
+ "▁judge": 10737,
+ "zeich": 10738,
+ "▁forever": 10739,
+ "▁Zeit": 10740,
+ "uous": 10741,
+ "inent": 10742,
+ "▁dw": 10743,
+ "▁waren": 10744,
+ "▁flash": 10745,
+ "▁troops": 10746,
+ "▁drugs": 10747,
+ "▁diam": 10748,
+ ".~": 10749,
+ "imp": 10750,
+ "inned": 10751,
+ "▁EV": 10752,
+ "Struct": 10753,
+ "▁justice": 10754,
+ "▁officials": 10755,
+ "ffff": 10756,
+ "▁Common": 10757,
+ "▁Cat": 10758,
+ "▁tomorrow": 10759,
+ "▁él": 10760,
+ "Texture": 10761,
+ "qpoint": 10762,
+ "▁Fried": 10763,
+ "▁Term": 10764,
+ "pgfqpoint": 10765,
+ "▁nem": 10766,
+ "norm": 10767,
+ "▁hardly": 10768,
+ "oda": 10769,
+ "zeta": 10770,
+ "emic": 10771,
+ "▁полу": 10772,
+ "▁loaded": 10773,
+ "kes": 10774,
+ "ció": 10775,
+ "▁fool": 10776,
+ "▁trick": 10777,
+ "▁dst": 10778,
+ "Find": 10779,
+ "▁все": 10780,
+ "}},": 10781,
+ "▁framework": 10782,
+ "▁merely": 10783,
+ "▁union": 10784,
+ "▁Edward": 10785,
+ "rif": 10786,
+ "Flag": 10787,
+ "▁crisis": 10788,
+ "▁finite": 10789,
+ "▁lol": 10790,
+ "▁Kim": 10791,
+ "ната": 10792,
+ "since": 10793,
+ "▁compat": 10794,
+ "▁pert": 10795,
+ "ibilities": 10796,
+ "▁también": 10797,
+ "ibli": 10798,
+ "▁teen": 10799,
+ "▁sympt": 10800,
+ "oral": 10801,
+ "ders": 10802,
+ "otte": 10803,
+ "при": 10804,
+ "▁Jane": 10805,
+ "▁originally": 10806,
+ "▁throat": 10807,
+ "mag": 10808,
+ "sup": 10809,
+ "uni": 10810,
+ "$$": 10811,
+ "▁Library": 10812,
+ "▁attacks": 10813,
+ "ingen": 10814,
+ "('/": 10815,
+ "▁hes": 10816,
+ "coin": 10817,
+ "ounce": 10818,
+ "▁Academy": 10819,
+ "MODULE": 10820,
+ "isms": 10821,
+ "▁Adv": 10822,
+ "▁Bol": 10823,
+ "▁incident": 10824,
+ ")^{": 10825,
+ "▁bij": 10826,
+ "▁Rome": 10827,
+ "▁Italy": 10828,
+ "events": 10829,
+ "▁Fern": 10830,
+ "▁ber": 10831,
+ "▁silent": 10832,
+ "▁pier": 10833,
+ "▁YO": 10834,
+ "▁plain": 10835,
+ "Bas": 10836,
+ "▁pill": 10837,
+ "rase": 10838,
+ "▁carrying": 10839,
+ "▁resp": 10840,
+ "ную": 10841,
+ "▁typical": 10842,
+ "Wrapper": 10843,
+ "▁gau": 10844,
+ "▁chemical": 10845,
+ "▁hal": 10846,
+ "throw": 10847,
+ "Cluster": 10848,
+ "▁Gab": 10849,
+ "▁Girl": 10850,
+ "quir": 10851,
+ "▁Arg": 10852,
+ "▁relief": 10853,
+ "▁Ве": 10854,
+ "dm": 10855,
+ "▁frustr": 10856,
+ "\\%": 10857,
+ "▁stores": 10858,
+ "▁bottle": 10859,
+ "▁Lew": 10860,
+ "two": 10861,
+ "stad": 10862,
+ "▁cheek": 10863,
+ "▁concerns": 10864,
+ "▁helpful": 10865,
+ "▁coverage": 10866,
+ "isi": 10867,
+ "ADD": 10868,
+ "async": 10869,
+ "▁approximately": 10870,
+ "iffer": 10871,
+ "hook": 10872,
+ "▁enum": 10873,
+ "ová": 10874,
+ "▁evil": 10875,
+ "▁constantly": 10876,
+ "apply": 10877,
+ "▁siè": 10878,
+ "▁practices": 10879,
+ "▁teachers": 10880,
+ "▁Sn": 10881,
+ "▁Awards": 10882,
+ "▁substant": 10883,
+ "▁$.": 10884,
+ "dk": 10885,
+ "▁mob": 10886,
+ "▁ingred": 10887,
+ "vere": 10888,
+ "Multi": 10889,
+ "пер": 10890,
+ "stal": 10891,
+ "yard": 10892,
+ "required": 10893,
+ "vement": 10894,
+ "▁intelligence": 10895,
+ "▁thinks": 10896,
+ "▁personally": 10897,
+ "▁trained": 10898,
+ "orney": 10899,
+ ")": 10900,
+ "gged": 10901,
+ "EINVAL": 10902,
+ "arna": 10903,
+ "▁Hamilton": 10904,
+ "merce": 10905,
+ "ekt": 10906,
+ "OF": 10907,
+ ")[": 10908,
+ "rug": 10909,
+ "ición": 10910,
+ "▁survey": 10911,
+ "nesday": 10912,
+ "▁pag": 10913,
+ "▁boundary": 10914,
+ "▁quantum": 10915,
+ "▁drawing": 10916,
+ "▁volunte": 10917,
+ "▁Word": 10918,
+ "sky": 10919,
+ "▁Greg": 10920,
+ "coll": 10921,
+ "hide": 10922,
+ "▁swim": 10923,
+ "▁revealed": 10924,
+ "adv": 10925,
+ "дя": 10926,
+ ".\");": 10927,
+ "▁explan": 10928,
+ "▁Current": 10929,
+ "▁gotten": 10930,
+ "▁falling": 10931,
+ "▁contained": 10932,
+ "UND": 10933,
+ "▁Should": 10934,
+ "▁killing": 10935,
+ "▁aspects": 10936,
+ "icted": 10937,
+ "▁Param": 10938,
+ "\",\r": 10939,
+ "TION": 10940,
+ "));\r": 10941,
+ "▁Iran": 10942,
+ "beit": 10943,
+ "▁Bu": 10944,
+ "▁[],": 10945,
+ "SSION": 10946,
+ "▁Mah": 10947,
+ "▁resolution": 10948,
+ "▁boss": 10949,
+ "lg": 10950,
+ "chor": 10951,
+ "▁Unter": 10952,
+ "▁debt": 10953,
+ "▁vid": 10954,
+ "gie": 10955,
+ "▁uno": 10956,
+ "CB": 10957,
+ "plom": 10958,
+ "LICENSE": 10959,
+ "▁Kenn": 10960,
+ "▁finns": 10961,
+ "ONG": 10962,
+ "▁somewhat": 10963,
+ "▁actor": 10964,
+ "▁Status": 10965,
+ "▁probability": 10966,
+ "fb": 10967,
+ "▁chart": 10968,
+ "▁stands": 10969,
+ "policy": 10970,
+ "▁onder": 10971,
+ "tabular": 10972,
+ "▁Ash": 10973,
+ "▁boost": 10974,
+ "▁desper": 10975,
+ "month": 10976,
+ "▁alert": 10977,
+ "▁suite": 10978,
+ "▁gén": 10979,
+ "▁vacc": 10980,
+ "▁Has": 10981,
+ "Mask": 10982,
+ "▁Thursday": 10983,
+ "▁proved": 10984,
+ "▁Nel": 10985,
+ "▁moral": 10986,
+ "▁ja": 10987,
+ "auer": 10988,
+ "codec": 10989,
+ "▁instant": 10990,
+ "amps": 10991,
+ "▁milk": 10992,
+ "WORD": 10993,
+ "▁Ö": 10994,
+ "Email": 10995,
+ "Elements": 10996,
+ "▁forma": 10997,
+ "Free": 10998,
+ "MAP": 10999,
+ "▁Ж": 11000,
+ "sym": 11001,
+ "▁ти": 11002,
+ "▁Econom": 11003,
+ "▁Vi": 11004,
+ "▁Columb": 11005,
+ "▁_,": 11006,
+ "oret": 11007,
+ "Sequ": 11008,
+ "plan": 11009,
+ "▁frequency": 11010,
+ "irement": 11011,
+ "▁assumed": 11012,
+ "▁Ca": 11013,
+ "▁Bit": 11014,
+ "▁коман": 11015,
+ "▁smell": 11016,
+ "Security": 11017,
+ "▁aqu": 11018,
+ "oor": 11019,
+ "price": 11020,
+ "inity": 11021,
+ "▁axis": 11022,
+ "release": 11023,
+ "▁resolve": 11024,
+ "▁tears": 11025,
+ "▁bother": 11026,
+ "▁Community": 11027,
+ "▁registered": 11028,
+ "▁revolution": 11029,
+ "?.": 11030,
+ "▁versions": 11031,
+ "%%%%": 11032,
+ "ydro": 11033,
+ "Success": 11034,
+ "▁Win": 11035,
+ "▁Boy": 11036,
+ "▁Dub": 11037,
+ "▁kw": 11038,
+ "▁noch": 11039,
+ "▁charges": 11040,
+ "arios": 11041,
+ "uar": 11042,
+ ";&": 11043,
+ "▁había": 11044,
+ "(`": 11045,
+ "▁tx": 11046,
+ "elve": 11047,
+ "▁años": 11048,
+ "▁math": 11049,
+ "▁Alf": 11050,
+ "▁Fund": 11051,
+ "▁manifest": 11052,
+ "▁attached": 11053,
+ "▁spiritual": 11054,
+ "▁Alexander": 11055,
+ "unes": 11056,
+ "▁seed": 11057,
+ "▁Но": 11058,
+ "▁magazine": 11059,
+ "▁eigen": 11060,
+ "▁обра": 11061,
+ "ea": 11062,
+ "▁PH": 11063,
+ "swing": 11064,
+ "▁Asia": 11065,
+ "ју": 11066,
+ "▁KIND": 11067,
+ "Identifier": 11068,
+ "once": 11069,
+ "▁alcohol": 11070,
+ "ції": 11071,
+ "styles": 11072,
+ "assertEqual": 11073,
+ "▁Ra": 11074,
+ "графи": 11075,
+ "▁millions": 11076,
+ "▁chunk": 11077,
+ "дер": 11078,
+ "Package": 11079,
+ "UST": 11080,
+ "▁Nothing": 11081,
+ "(\"#": 11082,
+ "▁Mid": 11083,
+ "▁нача": 11084,
+ "ły": 11085,
+ "AAAA": 11086,
+ "▁launched": 11087,
+ "▁wake": 11088,
+ "▁guests": 11089,
+ "▁differences": 11090,
+ "udi": 11091,
+ "▁aid": 11092,
+ "▁Sport": 11093,
+ "ulator": 11094,
+ "execute": 11095,
+ "plot": 11096,
+ "ching": 11097,
+ "▁Norm": 11098,
+ "tm": 11099,
+ "\\+": 11100,
+ "ARD": 11101,
+ "▁beer": 11102,
+ "▁під": 11103,
+ "IAL": 11104,
+ "storage": 11105,
+ "▁Anna": 11106,
+ "▁yards": 11107,
+ "▁technique": 11108,
+ "▁où": 11109,
+ "atten": 11110,
+ "UNT": 11111,
+ "don": 11112,
+ "фор": 11113,
+ "▁hoping": 11114,
+ "▁victory": 11115,
+ "itat": 11116,
+ "▁significantly": 11117,
+ "▁practical": 11118,
+ "ije": 11119,
+ "▁expansion": 11120,
+ "JS": 11121,
+ "ixels": 11122,
+ "USER": 11123,
+ "Shape": 11124,
+ "▁extent": 11125,
+ "lio": 11126,
+ "▁pued": 11127,
+ "olid": 11128,
+ "▁gam": 11129,
+ "▁sevent": 11130,
+ "▁Ga": 11131,
+ "anguages": 11132,
+ "(((": 11133,
+ "ъл": 11134,
+ "▁Exper": 11135,
+ "asty": 11136,
+ "rieg": 11137,
+ "gio": 11138,
+ "odo": 11139,
+ "▁colle": 11140,
+ "▁stored": 11141,
+ "▁Sche": 11142,
+ "istant": 11143,
+ "▁lip": 11144,
+ "BR": 11145,
+ "▁aug": 11146,
+ "▁Search": 11147,
+ ")=\\": 11148,
+ "▁Ur": 11149,
+ "▁sole": 11150,
+ "illo": 11151,
+ "▁mehr": 11152,
+ "kit": 11153,
+ "▁interior": 11154,
+ "LIST": 11155,
+ "adel": 11156,
+ "▁shopping": 11157,
+ "▁slä": 11158,
+ "Your": 11159,
+ "DITION": 11160,
+ "▁Http": 11161,
+ "raham": 11162,
+ "три": 11163,
+ "▁brings": 11164,
+ "Rev": 11165,
+ "▁propag": 11166,
+ "ityEngine": 11167,
+ "()),": 11168,
+ "▁ingår": 11169,
+ "▁Ireland": 11170,
+ "▁\"./": 11171,
+ "▁Harr": 11172,
+ "▁admin": 11173,
+ "eno": 11174,
+ "▁kr": 11175,
+ "▁está": 11176,
+ "▁props": 11177,
+ "tok": 11178,
+ "omorph": 11179,
+ "▁affected": 11180,
+ "Phone": 11181,
+ "▁degrees": 11182,
+ "some": 11183,
+ "▁nin": 11184,
+ "EVENT": 11185,
+ "▁interaction": 11186,
+ "▁Tuesday": 11187,
+ "iterator": 11188,
+ "▁Nob": 11189,
+ "▁scatter": 11190,
+ "ucket": 11191,
+ "complete": 11192,
+ "▁duty": 11193,
+ "▁answers": 11194,
+ "Progress": 11195,
+ "eed": 11196,
+ "рон": 11197,
+ "▁vie": 11198,
+ "▁depos": 11199,
+ "▁packet": 11200,
+ "▁tow": 11201,
+ "▁deleg": 11202,
+ "audio": 11203,
+ "▁vary": 11204,
+ "▁migr": 11205,
+ "фі": 11206,
+ "esa": 11207,
+ "Events": 11208,
+ "haus": 11209,
+ "▁Sav": 11210,
+ "▁Portug": 11211,
+ "▁сто": 11212,
+ "ilation": 11213,
+ "▁metadata": 11214,
+ "las": 11215,
+ "▁ai": 11216,
+ "▁anger": 11217,
+ "▁ham": 11218,
+ "▁Anal": 11219,
+ "▁frequently": 11220,
+ "▁FALSE": 11221,
+ "oche": 11222,
+ "rez": 11223,
+ "▁Viet": 11224,
+ "quis": 11225,
+ "▁charged": 11226,
+ "äs": 11227,
+ "▁Path": 11228,
+ "▁accurate": 11229,
+ "▁Plus": 11230,
+ "keit": 11231,
+ "▁Input": 11232,
+ "when": 11233,
+ "eras": 11234,
+ "▁воз": 11235,
+ "▁derived": 11236,
+ "aje": 11237,
+ "▁Had": 11238,
+ "uren": 11239,
+ "ór": 11240,
+ "}=\\": 11241,
+ "ureau": 11242,
+ "aland": 11243,
+ "Execution": 11244,
+ "eden": 11245,
+ "▁seeking": 11246,
+ "changed": 11247,
+ "▁trem": 11248,
+ "ску": 11249,
+ "▁Geme": 11250,
+ "inating": 11251,
+ "▁columns": 11252,
+ "EP": 11253,
+ "▁injury": 11254,
+ "endent": 11255,
+ "▁headed": 11256,
+ "ASE": 11257,
+ "▁Muslim": 11258,
+ "▁climate": 11259,
+ "▁fake": 11260,
+ "CMD": 11261,
+ "ји": 11262,
+ "▁Arts": 11263,
+ "fection": 11264,
+ "▁pit": 11265,
+ ">\\": 11266,
+ "anal": 11267,
+ "Section": 11268,
+ "plus": 11269,
+ "üt": 11270,
+ "▁embed": 11271,
+ "▁strings": 11272,
+ "Before": 11273,
+ "proc": 11274,
+ "▁спо": 11275,
+ "trl": 11276,
+ "vr": 11277,
+ "Background": 11278,
+ "logger": 11279,
+ "agraph": 11280,
+ "iest": 11281,
+ "▁goods": 11282,
+ "batch": 11283,
+ "▁optional": 11284,
+ "▁Taylor": 11285,
+ "▁recognize": 11286,
+ "walk": 11287,
+ "▁Hit": 11288,
+ "▁Elizabeth": 11289,
+ "}:": 11290,
+ "▁careful": 11291,
+ "краї": 11292,
+ "▁locations": 11293,
+ "▁structures": 11294,
+ "▁disk": 11295,
+ "▁ships": 11296,
+ "▁suo": 11297,
+ "▁sowie": 11298,
+ "▁Ess": 11299,
+ "▁Hash": 11300,
+ "▁reasonable": 11301,
+ "▁Moreover": 11302,
+ "▁formula": 11303,
+ "▁Centre": 11304,
+ "▁residents": 11305,
+ "RS": 11306,
+ "Ids": 11307,
+ "▁Know": 11308,
+ "▁trib": 11309,
+ "▁rés": 11310,
+ "▁stable": 11311,
+ "▁Would": 11312,
+ "▁breaking": 11313,
+ "▁meal": 11314,
+ "▁phen": 11315,
+ "▁fel": 11316,
+ "▁Fred": 11317,
+ "Author": 11318,
+ "▁capture": 11319,
+ "opts": 11320,
+ "▁everywhere": 11321,
+ "▁sque": 11322,
+ "▁moder": 11323,
+ "setup": 11324,
+ "▁Supp": 11325,
+ "▁whenever": 11326,
+ "{(": 11327,
+ "wart": 11328,
+ "▁toe": 11329,
+ "Prefix": 11330,
+ "hou": 11331,
+ "gage": 11332,
+ ">\"": 11333,
+ "▁frag": 11334,
+ "▁Theorem": 11335,
+ "memory": 11336,
+ "▁contents": 11337,
+ "docs": 11338,
+ "}'": 11339,
+ "▁Irish": 11340,
+ "Then": 11341,
+ "aats": 11342,
+ "Save": 11343,
+ "▁agency": 11344,
+ "▁име": 11345,
+ "дова": 11346,
+ "▁Function": 11347,
+ "NN": 11348,
+ "destroy": 11349,
+ "▁Message": 11350,
+ "▁cancel": 11351,
+ "▁superior": 11352,
+ "▁ec": 11353,
+ "▁literature": 11354,
+ "▁PART": 11355,
+ "Il": 11356,
+ "▁Cab": 11357,
+ "engine": 11358,
+ "▁basket": 11359,
+ "worth": 11360,
+ "▁Sel": 11361,
+ "fetch": 11362,
+ "▁Stadt": 11363,
+ "▁Ки": 11364,
+ "▁conj": 11365,
+ "▁seiner": 11366,
+ "▁confirmed": 11367,
+ "▁Argent": 11368,
+ "amar": 11369,
+ "pgfpath": 11370,
+ "▁struggle": 11371,
+ "Pattern": 11372,
+ "▁Middle": 11373,
+ "itan": 11374,
+ "▁moon": 11375,
+ "orough": 11376,
+ "▁Catholic": 11377,
+ "▁struck": 11378,
+ "]->": 11379,
+ "▁weapon": 11380,
+ "▁subst": 11381,
+ "▁instructions": 11382,
+ "▁occas": 11383,
+ "protected": 11384,
+ "▁Less": 11385,
+ "▁batch": 11386,
+ "▁contra": 11387,
+ "▁deck": 11388,
+ "▁ignored": 11389,
+ "▁refused": 11390,
+ "trigger": 11391,
+ "▁criminal": 11392,
+ "GA": 11393,
+ "olly": 11394,
+ "▁Bell": 11395,
+ "▁Ю": 11396,
+ "forward": 11397,
+ "▁prefix": 11398,
+ "▁immediate": 11399,
+ "▁assigned": 11400,
+ "▁elected": 11401,
+ "▁tonight": 11402,
+ "▁Dies": 11403,
+ "▁Beach": 11404,
+ "▁preced": 11405,
+ "ował": 11406,
+ "▁galax": 11407,
+ "▁logic": 11408,
+ "enza": 11409,
+ "▁Captain": 11410,
+ "▁Hay": 11411,
+ "▁facts": 11412,
+ "▁ни": 11413,
+ "té": 11414,
+ "▁sb": 11415,
+ "oped": 11416,
+ "▁combat": 11417,
+ "▁explore": 11418,
+ "▁(-": 11419,
+ "Loader": 11420,
+ "▁Wilson": 11421,
+ "▁locked": 11422,
+ ":": 11423,
+ "▁Od": 11424,
+ "▁Prote": 11425,
+ "▁disabled": 11426,
+ "▁hatte": 11427,
+ "▁shout": 11428,
+ "▁constructor": 11429,
+ "бі": 11430,
+ "▁tras": 11431,
+ "▁Father": 11432,
+ "▁adj": 11433,
+ "▁Carolina": 11434,
+ "▁Food": 11435,
+ "bad": 11436,
+ "atore": 11437,
+ "parameters": 11438,
+ "▁Full": 11439,
+ "[-": 11440,
+ "▁\"#": 11441,
+ "▁Try": 11442,
+ "ської": 11443,
+ "▁exhaust": 11444,
+ "▁scroll": 11445,
+ "_;": 11446,
+ "Who": 11447,
+ "▁delivered": 11448,
+ "▁referred": 11449,
+ "▁prospect": 11450,
+ "scan": 11451,
+ "▁modified": 11452,
+ "Generator": 11453,
+ "▁excess": 11454,
+ "▁kg": 11455,
+ "zet": 11456,
+ "icz": 11457,
+ "clipse": 11458,
+ "▁tank": 11459,
+ "▁guns": 11460,
+ "▁Ges": 11461,
+ "inton": 11462,
+ "▁Wednesday": 11463,
+ "▁mainly": 11464,
+ "parser": 11465,
+ "▁effectively": 11466,
+ "▁Ку": 11467,
+ "▁resident": 11468,
+ "▁Li": 11469,
+ "▁flying": 11470,
+ "▁mayor": 11471,
+ "üh": 11472,
+ "uta": 11473,
+ "▁colour": 11474,
+ "▁aircraft": 11475,
+ "terior": 11476,
+ "nr": 11477,
+ "▁keeps": 11478,
+ "fan": 11479,
+ "▁shirt": 11480,
+ "Compar": 11481,
+ "▁Eth": 11482,
+ "Mac": 11483,
+ "clean": 11484,
+ "slice": 11485,
+ "czy": 11486,
+ "▁gender": 11487,
+ "▁butter": 11488,
+ "AUT": 11489,
+ "▁Element": 11490,
+ "Fin": 11491,
+ "dma": 11492,
+ "sample": 11493,
+ "Registry": 11494,
+ "▁classic": 11495,
+ "▁drove": 11496,
+ "pb": 11497,
+ "defined": 11498,
+ "▁reward": 11499,
+ "yal": 11500,
+ "]),": 11501,
+ "▁BAS": 11502,
+ "▁hyper": 11503,
+ "▁Ни": 11504,
+ "▁).": 11505,
+ "Psi": 11506,
+ "▁entries": 11507,
+ "▁Kingdom": 11508,
+ "▁Song": 11509,
+ "▁prompt": 11510,
+ "centering": 11511,
+ "▁Holly": 11512,
+ "eman": 11513,
+ "▁painting": 11514,
+ "▁formation": 11515,
+ "▁Request": 11516,
+ "controller": 11517,
+ "Region": 11518,
+ "PY": 11519,
+ "idades": 11520,
+ "TL": 11521,
+ "▁disable": 11522,
+ "▁rein": 11523,
+ "rical": 11524,
+ "\"\r": 11525,
+ "%)": 11526,
+ "▁Sab": 11527,
+ "▁Without": 11528,
+ "Serv": 11529,
+ "▁Short": 11530,
+ "▁ю": 11531,
+ "▁resc": 11532,
+ "▁patterns": 11533,
+ "▁ArrayList": 11534,
+ "symbol": 11535,
+ "aco": 11536,
+ "▁Hom": 11537,
+ "help": 11538,
+ "▁hasta": 11539,
+ "▁installed": 11540,
+ "atie": 11541,
+ "▁visited": 11542,
+ "▁Бе": 11543,
+ "){\\": 11544,
+ "▁desde": 11545,
+ "JECT": 11546,
+ "▁drew": 11547,
+ "▁Stock": 11548,
+ "▁Cru": 11549,
+ "DEF": 11550,
+ "obby": 11551,
+ "izable": 11552,
+ "ogether": 11553,
+ "▁aber": 11554,
+ "▁dan": 11555,
+ "alis": 11556,
+ "tail": 11557,
+ "▁expressed": 11558,
+ "▁Access": 11559,
+ "Seg": 11560,
+ "▁Lib": 11561,
+ "▁supports": 11562,
+ "background": 11563,
+ "▁commune": 11564,
+ "called": 11565,
+ "▁printf": 11566,
+ "▁Prince": 11567,
+ "ните": 11568,
+ "depend": 11569,
+ "▁dels": 11570,
+ "neur": 11571,
+ "▁recommended": 11572,
+ "▁founded": 11573,
+ "▁markets": 11574,
+ "▁destroyed": 11575,
+ "▁abstract": 11576,
+ "▁serie": 11577,
+ "▁Dun": 11578,
+ "Term": 11579,
+ "▁portion": 11580,
+ "adapter": 11581,
+ "isset": 11582,
+ "чески": 11583,
+ "▁integer": 11584,
+ "▁returning": 11585,
+ "enties": 11586,
+ "▁Fair": 11587,
+ "▁USB": 11588,
+ "▁Price": 11589,
+ "igate": 11590,
+ "▁settled": 11591,
+ "({\\": 11592,
+ "nek": 11593,
+ "▁therm": 11594,
+ "▁cig": 11595,
+ "ány": 11596,
+ "▁investigation": 11597,
+ "ometer": 11598,
+ "SUP": 11599,
+ "Some": 11600,
+ "sing": 11601,
+ "Constant": 11602,
+ "▁retail": 11603,
+ "ży": 11604,
+ "▁drinking": 11605,
+ "▁Invest": 11606,
+ "SV": 11607,
+ "iginal": 11608,
+ "▁Bow": 11609,
+ "{{\\": 11610,
+ "▁assistance": 11611,
+ "▁intellect": 11612,
+ "INIT": 11613,
+ "aug": 11614,
+ "▁Leon": 11615,
+ "Sur": 11616,
+ "▁admit": 11617,
+ "▁Command": 11618,
+ "illes": 11619,
+ "rov": 11620,
+ "▁oh": 11621,
+ "▁não": 11622,
+ "▁matching": 11623,
+ "▁genu": 11624,
+ "▁Ox": 11625,
+ "тся": 11626,
+ "notation": 11627,
+ "GO": 11628,
+ "▁Nap": 11629,
+ "▁verify": 11630,
+ "▁aussi": 11631,
+ "DateTime": 11632,
+ "▁suitable": 11633,
+ "▁indicate": 11634,
+ "▁Live": 11635,
+ "Feature": 11636,
+ "▁tracks": 11637,
+ "▁hasn": 11638,
+ "▁Java": 11639,
+ "▁closely": 11640,
+ "▁Dad": 11641,
+ "ceive": 11642,
+ "▁Market": 11643,
+ "agy": 11644,
+ "▁\"-": 11645,
+ "awn": 11646,
+ "stell": 11647,
+ "pton": 11648,
+ "zeit": 11649,
+ "▁Vector": 11650,
+ "▁MAX": 11651,
+ "▁Federal": 11652,
+ "wall": 11653,
+ "▁Jen": 11654,
+ "delay": 11655,
+ "▁limits": 11656,
+ "▁Quest": 11657,
+ "Cam": 11658,
+ "▁Fel": 11659,
+ "writer": 11660,
+ "LP": 11661,
+ "▁moves": 11662,
+ "▁Execut": 11663,
+ "▁DB": 11664,
+ "oker": 11665,
+ "scribe": 11666,
+ "elijk": 11667,
+ "Constants": 11668,
+ "Addr": 11669,
+ "▁}}": 11670,
+ "▁channels": 11671,
+ "iy": 11672,
+ "riority": 11673,
+ "▁trading": 11674,
+ "▁facilities": 11675,
+ "▁Pack": 11676,
+ "▁sys": 11677,
+ "▁meta": 11678,
+ "▁estimate": 11679,
+ "▁Later": 11680,
+ "issue": 11681,
+ "▁Having": 11682,
+ "▁guest": 11683,
+ "▁nobody": 11684,
+ "depth": 11685,
+ "▁został": 11686,
+ "пера": 11687,
+ ")}\\": 11688,
+ "bg": 11689,
+ "▁Twitter": 11690,
+ "▁darkness": 11691,
+ "jpg": 11692,
+ "contr": 11693,
+ "kernel": 11694,
+ "]\\": 11695,
+ "▁extend": 11696,
+ "roc": 11697,
+ "NET": 11698,
+ "MSG": 11699,
+ "▁burst": 11700,
+ "▁repair": 11701,
+ "▁fetch": 11702,
+ "ieg": 11703,
+ "ús": 11704,
+ "Screen": 11705,
+ "blem": 11706,
+ "AppCompat": 11707,
+ "▁chap": 11708,
+ "ELD": 11709,
+ "▁Penn": 11710,
+ "▁promote": 11711,
+ "▁Ukr": 11712,
+ "arest": 11713,
+ "▁samples": 11714,
+ "▁Greek": 11715,
+ "▁constru": 11716,
+ "▁universe": 11717,
+ "elijke": 11718,
+ "▁preferred": 11719,
+ "▁Де": 11720,
+ "▁Ira": 11721,
+ "▁dow": 11722,
+ "agues": 11723,
+ "HERE": 11724,
+ "▁experts": 11725,
+ "Protocol": 11726,
+ "PIO": 11727,
+ "▁naz": 11728,
+ "▁Kh": 11729,
+ "hör": 11730,
+ "▁distingu": 11731,
+ "▁BY": 11732,
+ "▁seine": 11733,
+ "eping": 11734,
+ "▁fairly": 11735,
+ "▁Mean": 11736,
+ "ixer": 11737,
+ "insi": 11738,
+ "▁authors": 11739,
+ "**.": 11740,
+ "AI": 11741,
+ "▁edges": 11742,
+ "▁shooting": 11743,
+ "Admin": 11744,
+ "▁maps": 11745,
+ "chant": 11746,
+ "▁COVID": 11747,
+ "▁linked": 11748,
+ "▁ske": 11749,
+ "▁powers": 11750,
+ "ád": 11751,
+ "▁stomach": 11752,
+ "▁usage": 11753,
+ "▁defend": 11754,
+ "▁sustain": 11755,
+ "▁updates": 11756,
+ "▁assign": 11757,
+ "HL": 11758,
+ "▁Sea": 11759,
+ "▁discipl": 11760,
+ "Video": 11761,
+ "▁Chief": 11762,
+ "▁bunch": 11763,
+ "▁Obama": 11764,
+ "nis": 11765,
+ "vor": 11766,
+ "▁agents": 11767,
+ "cas": 11768,
+ "chter": 11769,
+ "▁glanced": 11770,
+ "supported": 11771,
+ "▁Consider": 11772,
+ "▁Everyone": 11773,
+ "▁lect": 11774,
+ "▁Stone": 11775,
+ "▁Jam": 11776,
+ "ogram": 11777,
+ "formance": 11778,
+ "▁\\\"": 11779,
+ "▁patch": 11780,
+ "▁vit": 11781,
+ "Power": 11782,
+ "▁harder": 11783,
+ "Anal": 11784,
+ "▁desired": 11785,
+ "▁jug": 11786,
+ "▁supporting": 11787,
+ "DU": 11788,
+ "]],": 11789,
+ "▁Administr": 11790,
+ "ucky": 11791,
+ "▁controller": 11792,
+ "▁issued": 11793,
+ "▁Sin": 11794,
+ "▁affili": 11795,
+ "▁partners": 11796,
+ "cdots": 11797,
+ "ctic": 11798,
+ "Car": 11799,
+ "▁NY": 11800,
+ "▁priority": 11801,
+ "original": 11802,
+ "Sql": 11803,
+ "▁declared": 11804,
+ "▁Hotel": 11805,
+ "▁browser": 11806,
+ "▁grande": 11807,
+ "}^\\": 11808,
+ "bow": 11809,
+ "▁accommod": 11810,
+ "Directory": 11811,
+ "▁suffering": 11812,
+ "▁logger": 11813,
+ "▁breakfast": 11814,
+ "uli": 11815,
+ "▁boot": 11816,
+ "▁contribution": 11817,
+ "NESS": 11818,
+ "▁Ten": 11819,
+ "semble": 11820,
+ "▁housing": 11821,
+ "Raw": 11822,
+ "ANCE": 11823,
+ "▁При": 11824,
+ "▁brit": 11825,
+ "essa": 11826,
+ "inson": 11827,
+ "▁Ball": 11828,
+ "entes": 11829,
+ "▁Bra": 11830,
+ "score": 11831,
+ "GER": 11832,
+ "route": 11833,
+ "apsed": 11834,
+ "рой": 11835,
+ "diff": 11836,
+ "▁broadcast": 11837,
+ "▁tar": 11838,
+ "▁delight": 11839,
+ ")?": 11840,
+ "chester": 11841,
+ "Platform": 11842,
+ "▁emergency": 11843,
+ "▁ces": 11844,
+ "nership": 11845,
+ "▁situations": 11846,
+ "▁familjen": 11847,
+ "▁Geb": 11848,
+ "enta": 11849,
+ "úblic": 11850,
+ "▁Place": 11851,
+ "ILL": 11852,
+ "▁march": 11853,
+ "▁fundamental": 11854,
+ "attributes": 11855,
+ "кти": 11856,
+ "▁Fu": 11857,
+ "FD": 11858,
+ "▁рас": 11859,
+ "▁academic": 11860,
+ "pres": 11861,
+ "▁rising": 11862,
+ "▁Braz": 11863,
+ "▁receiving": 11864,
+ "WARN": 11865,
+ "▁judg": 11866,
+ "▁necessarily": 11867,
+ "]=": 11868,
+ "▁deeply": 11869,
+ "▁gray": 11870,
+ "Headers": 11871,
+ "▁coal": 11872,
+ "\\{": 11873,
+ "Mut": 11874,
+ "bach": 11875,
+ "▁profit": 11876,
+ "вого": 11877,
+ "igs": 11878,
+ "ograp": 11879,
+ "\";\r": 11880,
+ "▁advoc": 11881,
+ "Generated": 11882,
+ "мери": 11883,
+ "▁Cond": 11884,
+ "▁agric": 11885,
+ "BASE": 11886,
+ "▁arrang": 11887,
+ "▁flowers": 11888,
+ "iw": 11889,
+ "▁];": 11890,
+ "▁вой": 11891,
+ "umerate": 11892,
+ "▁ihr": 11893,
+ "▁пар": 11894,
+ "▁mont": 11895,
+ "widehat": 11896,
+ "mg": 11897,
+ "▁btn": 11898,
+ "▁besk": 11899,
+ "▁acts": 11900,
+ "ós": 11901,
+ "~~~~": 11902,
+ "▁curve": 11903,
+ "language": 11904,
+ "▁TRUE": 11905,
+ "▁cleaning": 11906,
+ "Math": 11907,
+ "▁regional": 11908,
+ "▁estimated": 11909,
+ "arity": 11910,
+ "ierung": 11911,
+ "/{": 11912,
+ "jango": 11913,
+ "$_": 11914,
+ "▁threw": 11915,
+ "rq": 11916,
+ "cop": 11917,
+ "nergy": 11918,
+ "▁Account": 11919,
+ "pal": 11920,
+ "▁Nic": 11921,
+ "]))": 11922,
+ "▁awesome": 11923,
+ "▁Load": 11924,
+ "unnel": 11925,
+ "▁rows": 11926,
+ "▁foreach": 11927,
+ "▁Pod": 11928,
+ "▁EN": 11929,
+ "▁.=": 11930,
+ "uate": 11931,
+ "frastructure": 11932,
+ "▁Watch": 11933,
+ "Stand": 11934,
+ "▁routine": 11935,
+ "▁pic": 11936,
+ "helper": 11937,
+ "▁horses": 11938,
+ "▁requested": 11939,
+ "▁---": 11940,
+ "border": 11941,
+ "▁lifted": 11942,
+ "▁Ped": 11943,
+ "Import": 11944,
+ "ље": 11945,
+ "▁Ли": 11946,
+ "▁myst": 11947,
+ "THER": 11948,
+ "▁AC": 11949,
+ "Proxy": 11950,
+ "prov": 11951,
+ "▁Nik": 11952,
+ "hemat": 11953,
+ "ональ": 11954,
+ "▁\".": 11955,
+ "ului": 11956,
+ "▁improved": 11957,
+ "ieren": 11958,
+ "ocolate": 11959,
+ "Sche": 11960,
+ "unic": 11961,
+ "▁Professor": 11962,
+ "ieler": 11963,
+ "▁duration": 11964,
+ "▁timeout": 11965,
+ "hom": 11966,
+ "▁lux": 11967,
+ "▁trab": 11968,
+ "itary": 11969,
+ "ње": 11970,
+ "▁inspired": 11971,
+ "})\\": 11972,
+ "isely": 11973,
+ "ials": 11974,
+ "▁Vor": 11975,
+ "▁enhance": 11976,
+ "▁lucky": 11977,
+ "World": 11978,
+ "elo": 11979,
+ "ifiers": 11980,
+ "▁facing": 11981,
+ "▁appreciate": 11982,
+ "▁être": 11983,
+ "▁bench": 11984,
+ "atted": 11985,
+ "gence": 11986,
+ "course": 11987,
+ "▁tub": 11988,
+ "▁lors": 11989,
+ "▁mistake": 11990,
+ "nom": 11991,
+ "▁paus": 11992,
+ "▁\"\";": 11993,
+ "▁subs": 11994,
+ "▁stato": 11995,
+ "$)": 11996,
+ "▁gay": 11997,
+ "orry": 11998,
+ "▁vehicles": 11999,
+ "▁brill": 12000,
+ "may": 12001,
+ "resp": 12002,
+ "▁wore": 12003,
+ "ją": 12004,
+ "bp": 12005,
+ "onel": 12006,
+ "▁CR": 12007,
+ "▁diagn": 12008,
+ "mathsf": 12009,
+ "▁holiday": 12010,
+ "▁achieved": 12011,
+ "▁{'": 12012,
+ "▁Resource": 12013,
+ "▁hi": 12014,
+ "▁bra": 12015,
+ "▁CONDITION": 12016,
+ "ctr": 12017,
+ "▁Write": 12018,
+ "ishop": 12019,
+ "OLD": 12020,
+ "▁cpu": 12021,
+ "▁occurs": 12022,
+ "ół": 12023,
+ "straint": 12024,
+ "▁nuclear": 12025,
+ "Area": 12026,
+ "cluster": 12027,
+ "▁surrounding": 12028,
+ "▁Juan": 12029,
+ "▁prima": 12030,
+ "▁Southern": 12031,
+ "itty": 12032,
+ "▁Assembly": 12033,
+ "elem": 12034,
+ "adi": 12035,
+ "éral": 12036,
+ "▁Wat": 12037,
+ "▁Radio": 12038,
+ "▁gegen": 12039,
+ "▁Tony": 12040,
+ "pressed": 12041,
+ "▁Anne": 12042,
+ "▁NS": 12043,
+ "▁Pak": 12044,
+ "▁Civil": 12045,
+ "▁thrown": 12046,
+ "NONE": 12047,
+ "▁pump": 12048,
+ "▁solve": 12049,
+ "ENABLE": 12050,
+ "▁Phys": 12051,
+ "▁],": 12052,
+ "POSE": 12053,
+ "ktet": 12054,
+ "▁Fab": 12055,
+ "validate": 12056,
+ "Iterator": 12057,
+ "condition": 12058,
+ "redu": 12059,
+ "▁negoti": 12060,
+ "anno": 12061,
+ "▁sans": 12062,
+ "▁Ul": 12063,
+ "CHAR": 12064,
+ "▁edition": 12065,
+ "▁spectrum": 12066,
+ "orie": 12067,
+ "▁execution": 12068,
+ "Please": 12069,
+ "▁BO": 12070,
+ "URN": 12071,
+ "▁cow": 12072,
+ "стан": 12073,
+ "istribution": 12074,
+ "Domain": 12075,
+ "▁readers": 12076,
+ "▁consumer": 12077,
+ "▁styles": 12078,
+ "encode": 12079,
+ "▁Cy": 12080,
+ "Common": 12081,
+ "▁Prop": 12082,
+ "▁execute": 12083,
+ "▁eq": 12084,
+ "▁visitors": 12085,
+ "▁Amb": 12086,
+ "udad": 12087,
+ "qquad": 12088,
+ "▁Cert": 12089,
+ "▁trop": 12090,
+ "▁yesterday": 12091,
+ "tain": 12092,
+ "LD": 12093,
+ "atro": 12094,
+ "▁increases": 12095,
+ "▁Wars": 12096,
+ "ned": 12097,
+ "before": 12098,
+ "aupt": 12099,
+ "▁ERR": 12100,
+ "▁Ford": 12101,
+ "▁dalla": 12102,
+ "ULAR": 12103,
+ "▁strike": 12104,
+ "Arr": 12105,
+ "▁recovery": 12106,
+ "▁Response": 12107,
+ "▁strategies": 12108,
+ "▁ін": 12109,
+ "▁rear": 12110,
+ "▁adults": 12111,
+ "▁Не": 12112,
+ "windows": 12113,
+ "decl": 12114,
+ "olen": 12115,
+ "▁Jord": 12116,
+ "▁Kal": 12117,
+ "▁cui": 12118,
+ "▁Про": 12119,
+ "▁Sever": 12120,
+ "▁ale": 12121,
+ "▁peut": 12122,
+ "Stats": 12123,
+ "▁Ross": 12124,
+ "arten": 12125,
+ "shall": 12126,
+ "▁entertain": 12127,
+ "▁parking": 12128,
+ "нови": 12129,
+ "erre": 12130,
+ "▁funding": 12131,
+ "▁Cle": 12132,
+ "▁Ot": 12133,
+ "unst": 12134,
+ "assertEquals": 12135,
+ "▁cancell": 12136,
+ "TAG": 12137,
+ "▁Early": 12138,
+ "▁feedback": 12139,
+ "▁pand": 12140,
+ "yo": 12141,
+ "▁mirror": 12142,
+ "▁verb": 12143,
+ "▁highlight": 12144,
+ "erialize": 12145,
+ "▁grade": 12146,
+ "лась": 12147,
+ "▁Brook": 12148,
+ "▁LI": 12149,
+ "▁implies": 12150,
+ "▁enorm": 12151,
+ "ają": 12152,
+ "▁Wer": 12153,
+ "away": 12154,
+ "▁machines": 12155,
+ "▁dent": 12156,
+ "Idx": 12157,
+ "▁tid": 12158,
+ ")\"": 12159,
+ "▁mole": 12160,
+ "bold": 12161,
+ "CONT": 12162,
+ "▁ép": 12163,
+ "▁cutting": 12164,
+ "▁Neg": 12165,
+ "▁tong": 12166,
+ "▁networks": 12167,
+ "▁Fall": 12168,
+ "generated": 12169,
+ "▁Pri": 12170,
+ "UEST": 12171,
+ "▁Belg": 12172,
+ "▁sheet": 12173,
+ "кси": 12174,
+ "▁†": 12175,
+ "▁yeah": 12176,
+ "▁Victor": 12177,
+ "▁Rub": 12178,
+ "▁candidates": 12179,
+ "prés": 12180,
+ "▁EU": 12181,
+ "etr": 12182,
+ "▁rolled": 12183,
+ "▁Pas": 12184,
+ "▁Arthur": 12185,
+ "Arch": 12186,
+ "▁Mann": 12187,
+ "American": 12188,
+ "zes": 12189,
+ "inners": 12190,
+ "▁Auto": 12191,
+ "▁professor": 12192,
+ "▁);\r": 12193,
+ "▁addr": 12194,
+ "▁Medical": 12195,
+ "▁fired": 12196,
+ "▁Core": 12197,
+ "▁CONFIG": 12198,
+ "▁sql": 12199,
+ "▁Conserv": 12200,
+ "ichen": 12201,
+ "Vertex": 12202,
+ "▁HO": 12203,
+ "Yeah": 12204,
+ "Note": 12205,
+ "▁OK": 12206,
+ "mus": 12207,
+ "focus": 12208,
+ "aja": 12209,
+ "rá": 12210,
+ "▁hence": 12211,
+ "▁executive": 12212,
+ "▁liquid": 12213,
+ "uje": 12214,
+ "▁driven": 12215,
+ "igue": 12216,
+ "▁Wik": 12217,
+ "Rate": 12218,
+ "rand": 12219,
+ "Results": 12220,
+ "▁copies": 12221,
+ "▁tan": 12222,
+ "riteria": 12223,
+ "enen": 12224,
+ "}_\\": 12225,
+ "▁pobl": 12226,
+ "▁southern": 12227,
+ "eln": 12228,
+ "▁zwei": 12229,
+ "▁concrete": 12230,
+ "▁CONDITIONS": 12231,
+ "▁dreams": 12232,
+ "▁minim": 12233,
+ "▁employee": 12234,
+ "▁nap": 12235,
+ "▁suspect": 12236,
+ "Mouse": 12237,
+ "▁therapy": 12238,
+ "aval": 12239,
+ "▁Anth": 12240,
+ "START": 12241,
+ "sters": 12242,
+ "ishment": 12243,
+ "finite": 12244,
+ "WA": 12245,
+ "vy": 12246,
+ "▁mood": 12247,
+ "comfort": 12248,
+ "▁shr": 12249,
+ "▁decade": 12250,
+ "ября": 12251,
+ "▁'#": 12252,
+ "▁dot": 12253,
+ "▁hill": 12254,
+ "arry": 12255,
+ "catch": 12256,
+ "▁jQuery": 12257,
+ "▁corporate": 12258,
+ "▁BASIS": 12259,
+ "▁appointed": 12260,
+ "▁embar": 12261,
+ "ographie": 12262,
+ "▁pressed": 12263,
+ "▁champion": 12264,
+ "emit": 12265,
+ "▁Bed": 12266,
+ "вання": 12267,
+ "Gui": 12268,
+ "▁PUR": 12269,
+ "▁urban": 12270,
+ "▁sentence": 12271,
+ "bury": 12272,
+ "▁Video": 12273,
+ "▁regularly": 12274,
+ "vl": 12275,
+ "▁слу": 12276,
+ "ockey": 12277,
+ "evin": 12278,
+ "ultural": 12279,
+ "▁passage": 12280,
+ "▁состав": 12281,
+ "▁largely": 12282,
+ "orters": 12283,
+ "▁connections": 12284,
+ "▁surprising": 12285,
+ "bc": 12286,
+ "▁strongly": 12287,
+ "ansas": 12288,
+ "▁sist": 12289,
+ "▁extreme": 12290,
+ "whel": 12291,
+ "▁dealing": 12292,
+ "ographic": 12293,
+ "▁Republican": 12294,
+ "▁granted": 12295,
+ "▁CL": 12296,
+ "▁Hope": 12297,
+ "lessly": 12298,
+ "▁upload": 12299,
+ "▁-\\": 12300,
+ "нию": 12301,
+ "▁valuable": 12302,
+ "=[": 12303,
+ "Price": 12304,
+ "issance": 12305,
+ "iens": 12306,
+ "heit": 12307,
+ "▁suggests": 12308,
+ "сло": 12309,
+ "▁jur": 12310,
+ "}|": 12311,
+ "lp": 12312,
+ "▁invited": 12313,
+ "▁deriv": 12314,
+ "IMIT": 12315,
+ "rass": 12316,
+ "▁instruct": 12317,
+ "▁courses": 12318,
+ "äch": 12319,
+ "▁fifty": 12320,
+ "DEVICE": 12321,
+ "ASH": 12322,
+ "▁hip": 12323,
+ "Unknown": 12324,
+ "▁Catalogue": 12325,
+ "▁Roll": 12326,
+ "▁tensor": 12327,
+ "bec": 12328,
+ "été": 12329,
+ "Identity": 12330,
+ "&\\": 12331,
+ "▁Stephen": 12332,
+ "nodes": 12333,
+ "Dim": 12334,
+ "▁consists": 12335,
+ "▁normally": 12336,
+ "ubl": 12337,
+ "▁Police": 12338,
+ "▁Games": 12339,
+ "five": 12340,
+ "Have": 12341,
+ "▁padding": 12342,
+ "eres": 12343,
+ "anth": 12344,
+ "▁puts": 12345,
+ "uminate": 12346,
+ "ovie": 12347,
+ "▁Index": 12348,
+ "blue": 12349,
+ "Scal": 12350,
+ "▁giant": 12351,
+ "TF": 12352,
+ "pson": 12353,
+ "▁victim": 12354,
+ "serial": 12355,
+ "▁Sym": 12356,
+ "Single": 12357,
+ "▁md": 12358,
+ "▁attended": 12359,
+ "▁Stra": 12360,
+ "▁Dark": 12361,
+ ")|": 12362,
+ "▁span": 12363,
+ "▁maintenance": 12364,
+ "▁bind": 12365,
+ "Bean": 12366,
+ "ilarly": 12367,
+ "▁convent": 12368,
+ "▁José": 12369,
+ "udd": 12370,
+ "▁poly": 12371,
+ "▁idx": 12372,
+ "▁asks": 12373,
+ "▁enthus": 12374,
+ "▁suck": 12375,
+ "▁Cou": 12376,
+ "▁Corporation": 12377,
+ "usions": 12378,
+ "opher": 12379,
+ "▁symptoms": 12380,
+ "▁Johann": 12381,
+ "▁пу": 12382,
+ "▁html": 12383,
+ "▁ps": 12384,
+ "earing": 12385,
+ "gesch": 12386,
+ "▁Mother": 12387,
+ "RET": 12388,
+ "▁furniture": 12389,
+ "PF": 12390,
+ "▁Guard": 12391,
+ "pattern": 12392,
+ "▁lovely": 12393,
+ "alg": 12394,
+ "edly": 12395,
+ "sex": 12396,
+ "▁finds": 12397,
+ "Buf": 12398,
+ "▁над": 12399,
+ "▁км": 12400,
+ "▁Por": 12401,
+ "СР": 12402,
+ "Enter": 12403,
+ "▁esta": 12404,
+ "▁тре": 12405,
+ "▁\"*": 12406,
+ "▁Fox": 12407,
+ "▁cock": 12408,
+ "Bundle": 12409,
+ "▁puis": 12410,
+ "▁announce": 12411,
+ "▁guid": 12412,
+ "checked": 12413,
+ "icide": 12414,
+ "neg": 12415,
+ "▁Gil": 12416,
+ "schen": 12417,
+ "ologist": 12418,
+ "iso": 12419,
+ "groups": 12420,
+ "▁somebody": 12421,
+ "Day": 12422,
+ "tras": 12423,
+ "▁compact": 12424,
+ "▁organized": 12425,
+ "▁roles": 12426,
+ "▁hint": 12427,
+ "▁så": 12428,
+ "▁pays": 12429,
+ "▁Си": 12430,
+ "▁hoped": 12431,
+ "▁sail": 12432,
+ "▁Vers": 12433,
+ "▁embr": 12434,
+ "▁bot": 12435,
+ "▁exceed": 12436,
+ "BACK": 12437,
+ "▁gaze": 12438,
+ "▁spons": 12439,
+ "AST": 12440,
+ "▁torch": 12441,
+ "▁newspaper": 12442,
+ "▁Dist": 12443,
+ "▁bass": 12444,
+ "▁hanging": 12445,
+ "▁ears": 12446,
+ "ńsk": 12447,
+ "getValue": 12448,
+ "▁unus": 12449,
+ "▁Ele": 12450,
+ "services": 12451,
+ "▁dressed": 12452,
+ "lav": 12453,
+ "▁пла": 12454,
+ "Private": 12455,
+ "mic": 12456,
+ "▁parser": 12457,
+ "▁sections": 12458,
+ "▁fo": 12459,
+ "Errorf": 12460,
+ "inz": 12461,
+ "örd": 12462,
+ "▁metric": 12463,
+ "URI": 12464,
+ "▁vice": 12465,
+ "RED": 12466,
+ "▁nue": 12467,
+ "revs": 12468,
+ "▁collected": 12469,
+ "oose": 12470,
+ "▁mond": 12471,
+ "▁nas": 12472,
+ "▁Насе": 12473,
+ "▁å": 12474,
+ "Drop": 12475,
+ "▁abuse": 12476,
+ "▁sees": 12477,
+ "▁Hence": 12478,
+ "exec": 12479,
+ "}\\,": 12480,
+ "▁arbitr": 12481,
+ "▁Application": 12482,
+ "family": 12483,
+ "üd": 12484,
+ "▁magnetic": 12485,
+ "▁newly": 12486,
+ "▁reprodu": 12487,
+ "▁writers": 12488,
+ "▁headers": 12489,
+ "ší": 12490,
+ "рт": 12491,
+ "YPE": 12492,
+ "▁schema": 12493,
+ "▁Ce": 12494,
+ "▁Jews": 12495,
+ "▁Record": 12496,
+ "present": 12497,
+ "▁также": 12498,
+ "▁labels": 12499,
+ "Socket": 12500,
+ "▁equations": 12501,
+ "▁medicine": 12502,
+ "▁authorities": 12503,
+ "}`": 12504,
+ "стви": 12505,
+ "▁Corn": 12506,
+ "▁environmental": 12507,
+ "WARE": 12508,
+ "Mer": 12509,
+ "▁само": 12510,
+ "▁Technology": 12511,
+ "▁Saf": 12512,
+ "▁conn": 12513,
+ "▁Um": 12514,
+ "▁Pacific": 12515,
+ "тел": 12516,
+ "jan": 12517,
+ "▁uncertain": 12518,
+ "▁belief": 12519,
+ "counter": 12520,
+ "toBe": 12521,
+ "INS": 12522,
+ "weet": 12523,
+ "Light": 12524,
+ "primary": 12525,
+ "▁featured": 12526,
+ "▁touched": 12527,
+ "HTTP": 12528,
+ "▁tact": 12529,
+ "pository": 12530,
+ "▁eines": 12531,
+ "lass": 12532,
+ "ська": 12533,
+ "▁przez": 12534,
+ "▁fuer": 12535,
+ "▁exciting": 12536,
+ "▁Cub": 12537,
+ "agan": 12538,
+ "VO": 12539,
+ "▁'%": 12540,
+ "▁\\{": 12541,
+ "ubble": 12542,
+ "▁Fol": 12543,
+ "▁Kong": 12544,
+ "▁versch": 12545,
+ "FAIL": 12546,
+ "▁naar": 12547,
+ "ös": 12548,
+ "speed": 12549,
+ "▁territor": 12550,
+ "▁wrap": 12551,
+ "▁Jahre": 12552,
+ "lee": 12553,
+ "▁crossed": 12554,
+ "resolve": 12555,
+ "▁stim": 12556,
+ "Native": 12557,
+ "ursor": 12558,
+ "NotNull": 12559,
+ "▁Albert": 12560,
+ "▁signature": 12561,
+ "▁Ru": 12562,
+ "idas": 12563,
+ "▁decent": 12564,
+ "▁faced": 12565,
+ "▁лю": 12566,
+ "▁Spain": 12567,
+ "▁resistance": 12568,
+ "▁Brian": 12569,
+ "kwargs": 12570,
+ "▁interval": 12571,
+ "▁Ле": 12572,
+ "▁explo": 12573,
+ "▁semi": 12574,
+ "▁widely": 12575,
+ "dx": 12576,
+ "kov": 12577,
+ "▁Come": 12578,
+ "▁knife": 12579,
+ "Asp": 12580,
+ "uno": 12581,
+ "lineto": 12582,
+ "▁Bund": 12583,
+ "Cert": 12584,
+ "▁todo": 12585,
+ "tags": 12586,
+ "▁guarantee": 12587,
+ "▁vital": 12588,
+ "▁fought": 12589,
+ "▁Env": 12590,
+ "HD": 12591,
+ "Lower": 12592,
+ "Tx": 12593,
+ "▁Fa": 12594,
+ "▁anticip": 12595,
+ "Timer": 12596,
+ "mediate": 12597,
+ "▁proven": 12598,
+ "▁partir": 12599,
+ "AE": 12600,
+ "cursor": 12601,
+ "▁wooden": 12602,
+ "▁Contact": 12603,
+ "regs": 12604,
+ "▁provinc": 12605,
+ "▁DC": 12606,
+ "▁memories": 12607,
+ "▁ft": 12608,
+ "▁battery": 12609,
+ "utenant": 12610,
+ "Login": 12611,
+ "ountry": 12612,
+ "▁compens": 12613,
+ "operatorname": 12614,
+ "▁Jacob": 12615,
+ "zed": 12616,
+ "ADDR": 12617,
+ "▁quad": 12618,
+ "*).": 12619,
+ "▁coat": 12620,
+ "▁fir": 12621,
+ "▁Michel": 12622,
+ "▁Standard": 12623,
+ "rf": 12624,
+ "mel": 12625,
+ "▁coeff": 12626,
+ "▁Iraq": 12627,
+ "▁Given": 12628,
+ "нима": 12629,
+ "▁FIT": 12630,
+ "▁peu": 12631,
+ "▁ig": 12632,
+ "▁Case": 12633,
+ "mé": 12634,
+ "▁parallel": 12635,
+ "cio": 12636,
+ "kow": 12637,
+ "▁institutions": 12638,
+ "ícul": 12639,
+ "aban": 12640,
+ "UX": 12641,
+ "▁Sarah": 12642,
+ "▁més": 12643,
+ "▁atmos": 12644,
+ "▁släktet": 12645,
+ "▁brothers": 12646,
+ "▁wanting": 12647,
+ "aaaa": 12648,
+ "▁fest": 12649,
+ "=-": 12650,
+ "▁forty": 12651,
+ "▁creates": 12652,
+ "hh": 12653,
+ "▁Android": 12654,
+ "anches": 12655,
+ "BT": 12656,
+ "upload": 12657,
+ "xis": 12658,
+ "Hz": 12659,
+ "бор": 12660,
+ "RAY": 12661,
+ "ntil": 12662,
+ "▁leaned": 12663,
+ "unda": 12664,
+ "▁ultimately": 12665,
+ "▁tok": 12666,
+ "neh": 12667,
+ "▁lawyer": 12668,
+ "hend": 12669,
+ "▁Vin": 12670,
+ "▁facility": 12671,
+ "▁likes": 12672,
+ "ento": 12673,
+ "Nodes": 12674,
+ "▁entrance": 12675,
+ "atto": 12676,
+ "rett": 12677,
+ "accept": 12678,
+ "theme": 12679,
+ "тан": 12680,
+ "osi": 12681,
+ "▁{},": 12682,
+ "pgfpathlineto": 12683,
+ "good": 12684,
+ "slot": 12685,
+ "▁innoc": 12686,
+ "▁proport": 12687,
+ "▁arrive": 12688,
+ "ého": 12689,
+ "▁pairs": 12690,
+ "▁wrapped": 12691,
+ "▁unw": 12692,
+ "▁explos": 12693,
+ "▁gel": 12694,
+ "Will": 12695,
+ "▁Zealand": 12696,
+ "ías": 12697,
+ "▁Jr": 12698,
+ "▁Fra": 12699,
+ "▁legit": 12700,
+ "▁illegal": 12701,
+ "клю": 12702,
+ "▁tort": 12703,
+ "▁pron": 12704,
+ "Fi": 12705,
+ "▁forg": 12706,
+ "export": 12707,
+ "▁Children": 12708,
+ "▁Abs": 12709,
+ "▁Send": 12710,
+ "▁discount": 12711,
+ "▁poster": 12712,
+ "ented": 12713,
+ "anim": 12714,
+ "verb": 12715,
+ "sto": 12716,
+ "▁Bible": 12717,
+ "pending": 12718,
+ "▁Phot": 12719,
+ "strap": 12720,
+ "ieron": 12721,
+ "PG": 12722,
+ "cular": 12723,
+ "crit": 12724,
+ "urd": 12725,
+ "ENO": 12726,
+ "▁northern": 12727,
+ "▁naturally": 12728,
+ "<'": 12729,
+ "weg": 12730,
+ "▁drunk": 12731,
+ "▁Dal": 12732,
+ "▁mouse": 12733,
+ "▁continuous": 12734,
+ "▁initially": 12735,
+ "agu": 12736,
+ "мпи": 12737,
+ "ANT": 12738,
+ "Div": 12739,
+ "▁recording": 12740,
+ "Bind": 12741,
+ "▁correctly": 12742,
+ "initial": 12743,
+ "▁Rights": 12744,
+ "▁debate": 12745,
+ "WRITE": 12746,
+ "built": 12747,
+ "▁permit": 12748,
+ "▁professionals": 12749,
+ "cv": 12750,
+ "▁DI": 12751,
+ "▁handed": 12752,
+ "▁Cu": 12753,
+ "▁Hospital": 12754,
+ "▁beskrevs": 12755,
+ "ней": 12756,
+ "ност": 12757,
+ "▁anxiety": 12758,
+ "▁heavily": 12759,
+ "▁Var": 12760,
+ "▁dispos": 12761,
+ "+\"": 12762,
+ "▁Ever": 12763,
+ "izon": 12764,
+ "▁operators": 12765,
+ "nego": 12766,
+ "▁Bry": 12767,
+ "▁votes": 12768,
+ "izione": 12769,
+ "▁рай": 12770,
+ "▁feat": 12771,
+ "▁western": 12772,
+ "▁confront": 12773,
+ "▁stronger": 12774,
+ "▁фа": 12775,
+ "stre": 12776,
+ "▁Valid": 12777,
+ "▁nad": 12778,
+ "▁checking": 12779,
+ "▁birds": 12780,
+ "▁Northern": 12781,
+ "▁intention": 12782,
+ "uce": 12783,
+ "▁covers": 12784,
+ "▁wondering": 12785,
+ "▁Optional": 12786,
+ "protocol": 12787,
+ "▁aggress": 12788,
+ "——": 12789,
+ "Vec": 12790,
+ "▁dates": 12791,
+ "quot": 12792,
+ "▁bom": 12793,
+ "▁scan": 12794,
+ "▁Item": 12795,
+ "▁Navy": 12796,
+ "▁Gran": 12797,
+ "▁everybody": 12798,
+ "▁unexpected": 12799,
+ "▁divor": 12800,
+ "▁ease": 12801,
+ "umbled": 12802,
+ "^+": 12803,
+ "cuss": 12804,
+ "▁pale": 12805,
+ "▁Inga": 12806,
+ "▁Broad": 12807,
+ "▁Medic": 12808,
+ "▁Roy": 12809,
+ "▁Inn": 12810,
+ "▁pens": 12811,
+ "PN": 12812,
+ ".:": 12813,
+ "▁principle": 12814,
+ "▁letting": 12815,
+ "▁conducted": 12816,
+ "FALSE": 12817,
+ "▁OS": 12818,
+ "Focus": 12819,
+ "▁measured": 12820,
+ "▁Democratic": 12821,
+ "High": 12822,
+ "▁pré": 12823,
+ "ennes": 12824,
+ "▁indicates": 12825,
+ "▁ending": 12826,
+ "▁Small": 12827,
+ "▁": 26345,
+ "olent": 26346,
+ "▁этого": 26347,
+ "▁Generic": 26348,
+ "▁*/,": 26349,
+ "▁combinations": 26350,
+ "▁rejo": 26351,
+ "спубли": 26352,
+ "capacity": 26353,
+ "▁traces": 26354,
+ "▁opacity": 26355,
+ "▁Official": 26356,
+ "icion": 26357,
+ "▁emotionally": 26358,
+ "▁Joel": 26359,
+ "ському": 26360,
+ "▁legendary": 26361,
+ "▁pam": 26362,
+ "▁También": 26363,
+ ".<": 26364,
+ "iba": 26365,
+ "midt": 26366,
+ "бом": 26367,
+ "▁ensuite": 26368,
+ "Authorization": 26369,
+ "Pag": 26370,
+ "▁helmet": 26371,
+ "▁territo": 26372,
+ "secondary": 26373,
+ "▁segunda": 26374,
+ "▁Wire": 26375,
+ "recated": 26376,
+ "▁invoked": 26377,
+ "▁ValueError": 26378,
+ "▁фо": 26379,
+ "ALIGN": 26380,
+ "CURRENT": 26381,
+ "\\+\\_\\": 26382,
+ "▁compilation": 26383,
+ "ær": 26384,
+ "▁Palmar": 26385,
+ "▁influences": 26386,
+ "/:": 26387,
+ "Mix": 26388,
+ "NOP": 26389,
+ "econom": 26390,
+ "▁tucked": 26391,
+ "▁});\r": 26392,
+ "ANK": 26393,
+ "reject": 26394,
+ "▁pension": 26395,
+ "▁generates": 26396,
+ "чё": 26397,
+ "▁incap": 26398,
+ "▁clicked": 26399,
+ "▁fus": 26400,
+ "ourses": 26401,
+ "▁Easter": 26402,
+ "%;": 26403,
+ "zin": 26404,
+ "▁obligations": 26405,
+ "▁Tips": 26406,
+ "};\r": 26407,
+ ".\"_": 26408,
+ "▁BSD": 26409,
+ "ática": 26410,
+ "▁expose": 26411,
+ "Pars": 26412,
+ "▁Amanda": 26413,
+ "куп": 26414,
+ "▁guessed": 26415,
+ "dsi": 26416,
+ "▁Leip": 26417,
+ "Broad": 26418,
+ "▁Hughes": 26419,
+ "ié": 26420,
+ "▁Wahl": 26421,
+ "▁formerly": 26422,
+ "Relative": 26423,
+ "▁Yu": 26424,
+ "▁Mountains": 26425,
+ "▁Enum": 26426,
+ "▁strang": 26427,
+ "_-": 26428,
+ "recht": 26429,
+ "viv": 26430,
+ "pause": 26431,
+ "▁Londres": 26432,
+ "▁elbow": 26433,
+ "▁Hawaii": 26434,
+ "▁Casino": 26435,
+ "Threshold": 26436,
+ "Units": 26437,
+ "Include": 26438,
+ "ито": 26439,
+ "asury": 26440,
+ "▁steht": 26441,
+ "▁damned": 26442,
+ "▁packets": 26443,
+ "▁Werk": 26444,
+ "▁elevator": 26445,
+ "iedad": 26446,
+ "govern": 26447,
+ "▁CONTRACT": 26448,
+ "mals": 26449,
+ "▁remem": 26450,
+ "▁entonces": 26451,
+ "▁vas": 26452,
+ "▁sympathy": 26453,
+ "▁befindet": 26454,
+ "incing": 26455,
+ "DataSet": 26456,
+ "▁additionally": 26457,
+ "▁musician": 26458,
+ "шего": 26459,
+ "▁listop": 26460,
+ ">\")": 26461,
+ "Printf": 26462,
+ "▁Felix": 26463,
+ "▁carved": 26464,
+ "▁nicely": 26465,
+ "гом": 26466,
+ "chap": 26467,
+ "▁Nieder": 26468,
+ "▁Lav": 26469,
+ "▁modifications": 26470,
+ "moment": 26471,
+ "▁balcon": 26472,
+ "▁dependency": 26473,
+ "CKET": 26474,
+ "▁vanished": 26475,
+ "▁fighters": 26476,
+ "▁zunächst": 26477,
+ "ioctl": 26478,
+ "▁defens": 26479,
+ "▁Nem": 26480,
+ "Utility": 26481,
+ "▁curv": 26482,
+ "▁DAMAGES": 26483,
+ "▁Rogers": 26484,
+ "▁gratitude": 26485,
+ "▁Denmark": 26486,
+ "рая": 26487,
+ "grpc": 26488,
+ "▁juni": 26489,
+ "▁октября": 26490,
+ "▁immense": 26491,
+ "▁prevented": 26492,
+ "▁foam": 26493,
+ "▁Extra": 26494,
+ "aimed": 26495,
+ "▁Criteria": 26496,
+ "▁Simply": 26497,
+ "boxes": 26498,
+ "▁Legend": 26499,
+ "▁Players": 26500,
+ "▁Mercedes": 26501,
+ "▁Branch": 26502,
+ "TERN": 26503,
+ "omena": 26504,
+ "▁incorporate": 26505,
+ "conde": 26506,
+ "▁Estado": 26507,
+ "▁wasted": 26508,
+ "▁complaining": 26509,
+ "▁warriors": 26510,
+ "oter": 26511,
+ "▁этом": 26512,
+ "▁conten": 26513,
+ "▁machinery": 26514,
+ "▁technological": 26515,
+ "▁TD": 26516,
+ "▁gras": 26517,
+ "▁minimize": 26518,
+ "▁Door": 26519,
+ "▁bzw": 26520,
+ "▁prac": 26521,
+ "TREE": 26522,
+ "▁Wing": 26523,
+ "▁Transaction": 26524,
+ "▁MVT": 26525,
+ "▁Klein": 26526,
+ "commons": 26527,
+ "▁}{": 26528,
+ "▁Heritage": 26529,
+ "▁fade": 26530,
+ "рок": 26531,
+ "setValue": 26532,
+ "▁Wallace": 26533,
+ "MX": 26534,
+ "▁ACT": 26535,
+ "▁footage": 26536,
+ "▁entstand": 26537,
+ "arga": 26538,
+ "▁nails": 26539,
+ "▁capitalism": 26540,
+ "▁Garc": 26541,
+ "▁suspension": 26542,
+ "ilis": 26543,
+ "▁Mov": 26544,
+ "uffled": 26545,
+ "Arc": 26546,
+ "▁Beautiful": 26547,
+ "WAY": 26548,
+ "Parallel": 26549,
+ "XXXX": 26550,
+ "diag": 26551,
+ "▁DT": 26552,
+ "mq": 26553,
+ "TextView": 26554,
+ "MLE": 26555,
+ "ennen": 26556,
+ "▁infected": 26557,
+ "▁therapist": 26558,
+ "INGS": 26559,
+ "▁cidade": 26560,
+ "ън": 26561,
+ "▁pdf": 26562,
+ "▁bump": 26563,
+ "CTX": 26564,
+ "▁INCLUDING": 26565,
+ "▁Gef": 26566,
+ "ENTIAL": 26567,
+ "▁handy": 26568,
+ "▁temporal": 26569,
+ "AtA": 26570,
+ "ISH": 26571,
+ "▁Pattern": 26572,
+ "▁lan": 26573,
+ "ependant": 26574,
+ "▁shining": 26575,
+ "idy": 26576,
+ "▁NT": 26577,
+ "▁Fran": 26578,
+ "▁nurses": 26579,
+ "▁betray": 26580,
+ "▁sensible": 26581,
+ "▁апреля": 26582,
+ "▁'[": 26583,
+ "▁thirteen": 26584,
+ ")}_{": 26585,
+ "▁Noah": 26586,
+ "INSERT": 26587,
+ "istically": 26588,
+ "▁Appendix": 26589,
+ "▁recher": 26590,
+ "Receiver": 26591,
+ "▁dernier": 26592,
+ "лла": 26593,
+ "лиза": 26594,
+ "▁Partido": 26595,
+ "▁maximal": 26596,
+ "snap": 26597,
+ "▁часть": 26598,
+ "STOP": 26599,
+ "▁ultra": 26600,
+ "▁développ": 26601,
+ "▁tegen": 26602,
+ "▁Чи": 26603,
+ "LIB": 26604,
+ "▁baseline": 26605,
+ "reload": 26606,
+ "▁Arbitro": 26607,
+ "▁kall": 26608,
+ "capture": 26609,
+ "Arm": 26610,
+ "quin": 26611,
+ "impse": 26612,
+ "zas": 26613,
+ "▁Cand": 26614,
+ "▁brains": 26615,
+ "▁hostile": 26616,
+ "▁marble": 26617,
+ "oons": 26618,
+ "▁Loss": 26619,
+ "MetaData": 26620,
+ "▁República": 26621,
+ "▁andra": 26622,
+ "oden": 26623,
+ "▁documented": 26624,
+ "▁Moses": 26625,
+ "odd": 26626,
+ "▁wax": 26627,
+ "usch": 26628,
+ "▁diagnosed": 26629,
+ "inkle": 26630,
+ "▁Xbox": 26631,
+ "▁seventy": 26632,
+ "cias": 26633,
+ "▁noviembre": 26634,
+ "Compute": 26635,
+ "});\r": 26636,
+ "▁Philippe": 26637,
+ "▁För": 26638,
+ "Leave": 26639,
+ "▁sage": 26640,
+ "▁unpre": 26641,
+ "▁Fortunately": 26642,
+ "▁apost": 26643,
+ "entities": 26644,
+ "▁ellos": 26645,
+ "authorized": 26646,
+ "GBT": 26647,
+ "▁insist": 26648,
+ "▁inspire": 26649,
+ "Mass": 26650,
+ "▁rôle": 26651,
+ "fee": 26652,
+ "ipart": 26653,
+ "цер": 26654,
+ "unate": 26655,
+ "▁CNN": 26656,
+ ":}": 26657,
+ "▁unhappy": 26658,
+ "▁imported": 26659,
+ "HIGH": 26660,
+ "rings": 26661,
+ "▁Instance": 26662,
+ "Bay": 26663,
+ "agles": 26664,
+ "mee": 26665,
+ "bery": 26666,
+ "▁Stories": 26667,
+ "▁Chase": 26668,
+ "▁carriage": 26669,
+ "▁misunder": 26670,
+ "▁imagin": 26671,
+ "pw": 26672,
+ "▁Meter": 26673,
+ "▁crowds": 26674,
+ "▁Fame": 26675,
+ "skill": 26676,
+ "▁comed": 26677,
+ "▁ranch": 26678,
+ "▁lacking": 26679,
+ "▁submar": 26680,
+ "iante": 26681,
+ "▁lanz": 26682,
+ "▁служ": 26683,
+ "-----------": 26684,
+ "▁obten": 26685,
+ "▁downstairs": 26686,
+ "YN": 26687,
+ "rotation": 26688,
+ "▁Jesse": 26689,
+ "$(\"#": 26690,
+ "▁puls": 26691,
+ "irling": 26692,
+ "▁Schaus": 26693,
+ "▁deployed": 26694,
+ "▁{}\",": 26695,
+ "▁Marvel": 26696,
+ "ENUM": 26697,
+ "▁Mathemat": 26698,
+ "▁nn": 26699,
+ "compet": 26700,
+ "ków": 26701,
+ "bil": 26702,
+ "Which": 26703,
+ "isine": 26704,
+ "▁rude": 26705,
+ "▁niveau": 26706,
+ "▁área": 26707,
+ "▁près": 26708,
+ "atis": 26709,
+ "▁[...]": 26710,
+ "fur": 26711,
+ "omm": 26712,
+ "packed": 26713,
+ "мене": 26714,
+ "scriptstyle": 26715,
+ "▁Ath": 26716,
+ "▁desp": 26717,
+ "eltemperaturen": 26718,
+ "▁talents": 26719,
+ "ocy": 26720,
+ "▁raises": 26721,
+ "LIMIT": 26722,
+ "▁editorial": 26723,
+ "▁Animal": 26724,
+ "drive": 26725,
+ "▁работа": 26726,
+ "bss": 26727,
+ "▁Sev": 26728,
+ "epoch": 26729,
+ "▁RC": 26730,
+ "UNUSED": 26731,
+ "▁mandatory": 26732,
+ "(?:": 26733,
+ "▁Bin": 26734,
+ "▁synthetic": 26735,
+ "▁gown": 26736,
+ "▁Dob": 26737,
+ "kap": 26738,
+ "▁harmon": 26739,
+ "▁liberty": 26740,
+ "▁Rice": 26741,
+ "▁prayers": 26742,
+ "▁mise": 26743,
+ "▁confusing": 26744,
+ "▁leap": 26745,
+ "▁arrives": 26746,
+ "kamp": 26747,
+ "▁thats": 26748,
+ "ACC": 26749,
+ "▁Parameters": 26750,
+ "▁одно": 26751,
+ "▁Bio": 26752,
+ "density": 26753,
+ "▁glimpse": 26754,
+ "FORE": 26755,
+ "▁Listen": 26756,
+ "Prev": 26757,
+ "}\\,\\": 26758,
+ "куль": 26759,
+ "▁SEC": 26760,
+ "▁explored": 26761,
+ "▁meantime": 26762,
+ "AIL": 26763,
+ "▁WP": 26764,
+ "▁raison": 26765,
+ "▁existe": 26766,
+ "▁lesser": 26767,
+ "▁Validate": 26768,
+ "▁caution": 26769,
+ "usta": 26770,
+ "heading": 26771,
+ "EFF": 26772,
+ ".'\"": 26773,
+ "▁Gilbert": 26774,
+ "▁limitation": 26775,
+ "▁retour": 26776,
+ "▁Commonwealth": 26777,
+ "▁gewann": 26778,
+ "▁miserable": 26779,
+ "▁networking": 26780,
+ "▁ottobre": 26781,
+ "▁Dise": 26782,
+ "edges": 26783,
+ "▁sede": 26784,
+ "вича": 26785,
+ "uniform": 26786,
+ "▁деятель": 26787,
+ "iros": 26788,
+ "▁desen": 26789,
+ "▁parc": 26790,
+ "▁Rico": 26791,
+ "Ns": 26792,
+ "guid": 26793,
+ "orio": 26794,
+ "avelength": 26795,
+ "▁Gle": 26796,
+ "inceton": 26797,
+ "Amaz": 26798,
+ "Construct": 26799,
+ "▁mx": 26800,
+ "▁Vern": 26801,
+ "▁Generation": 26802,
+ "Jack": 26803,
+ "romag": 26804,
+ "▁viagra": 26805,
+ "▁Peg": 26806,
+ "▁Updated": 26807,
+ "▁overlap": 26808,
+ "EventArgs": 26809,
+ "кро": 26810,
+ "▁*«": 26811,
+ "▁questioned": 26812,
+ "South": 26813,
+ "notice": 26814,
+ "▁permanently": 26815,
+ "lst": 26816,
+ "ficie": 26817,
+ "▁quella": 26818,
+ "▁colleges": 26819,
+ "▁disappointment": 26820,
+ "▁Luft": 26821,
+ "imgur": 26822,
+ "▁transitions": 26823,
+ "▁seller": 26824,
+ "▁июня": 26825,
+ "▁Og": 26826,
+ "▁ADD": 26827,
+ "▁Pays": 26828,
+ "COMMAND": 26829,
+ "grades": 26830,
+ "▁febbra": 26831,
+ "▁Cyr": 26832,
+ "▁febbraio": 26833,
+ "eti": 26834,
+ "▁arom": 26835,
+ "▁Claude": 26836,
+ "▁UEFA": 26837,
+ "▁живе": 26838,
+ "▁Victorian": 26839,
+ "keeping": 26840,
+ "ên": 26841,
+ "▁FIXME": 26842,
+ "itime": 26843,
+ "chestr": 26844,
+ "▁Samsung": 26845,
+ "▁doctrine": 26846,
+ "▁pear": 26847,
+ "▁Mediterranean": 26848,
+ "▁Ya": 26849,
+ "▁vault": 26850,
+ "▁Historic": 26851,
+ "▁sedan": 26852,
+ "▁heated": 26853,
+ "▁política": 26854,
+ "Proof": 26855,
+ ":{": 26856,
+ "fem": 26857,
+ "▁Frankfurt": 26858,
+ "pectives": 26859,
+ "MG": 26860,
+ "▁Eye": 26861,
+ "dai": 26862,
+ "▁reserves": 26863,
+ "NER": 26864,
+ "▁tobacco": 26865,
+ "▁fragments": 26866,
+ "icc": 26867,
+ "▁booth": 26868,
+ "▁cruise": 26869,
+ "▁Testament": 26870,
+ "cola": 26871,
+ "▁Leop": 26872,
+ "▁noon": 26873,
+ "▁terrified": 26874,
+ "vb": 26875,
+ "intel": 26876,
+ "alie": 26877,
+ "▁verification": 26878,
+ "yster": 26879,
+ "ADER": 26880,
+ "chied": 26881,
+ "▁datasets": 26882,
+ "▁зі": 26883,
+ "▁miem": 26884,
+ "ulates": 26885,
+ "▁uuid": 26886,
+ "▁Pictures": 26887,
+ "▁Brend": 26888,
+ "Billboard": 26889,
+ "▁stern": 26890,
+ "▁denom": 26891,
+ "▁accidents": 26892,
+ "сня": 26893,
+ "▁packing": 26894,
+ "ција": 26895,
+ "iblical": 26896,
+ "▁Так": 26897,
+ "▁whisk": 26898,
+ "▁luego": 26899,
+ "▁rectangle": 26900,
+ "▁hooks": 26901,
+ "▁neglect": 26902,
+ "▁sober": 26903,
+ "proposition": 26904,
+ "Multiple": 26905,
+ ":\",": 26906,
+ "▁bapt": 26907,
+ "Parts": 26908,
+ "▁Selection": 26909,
+ "▁Alpha": 26910,
+ "weights": 26911,
+ "hall": 26912,
+ "соб": 26913,
+ "▁lur": 26914,
+ "▁época": 26915,
+ "▁rested": 26916,
+ "ambigu": 26917,
+ "▁tastes": 26918,
+ "amazonaws": 26919,
+ "▁confess": 26920,
+ "▁diciembre": 26921,
+ "implement": 26922,
+ "▁absorption": 26923,
+ "Hal": 26924,
+ "LEAN": 26925,
+ "▁Zach": 26926,
+ "▁freeze": 26927,
+ "LBL": 26928,
+ "STM": 26929,
+ "▁calc": 26930,
+ "={()": 26931,
+ "=*/": 26932,
+ "▁bt": 26933,
+ "Reb": 26934,
+ "▁Wien": 26935,
+ "anska": 26936,
+ "▁surn": 26937,
+ "iative": 26938,
+ "▁invån": 26939,
+ "CY": 26940,
+ "▁là": 26941,
+ "amba": 26942,
+ "leen": 26943,
+ "wahl": 26944,
+ "▁functioning": 26945,
+ "ția": 26946,
+ "getContext": 26947,
+ "gart": 26948,
+ "▁обе": 26949,
+ "Pen": 26950,
+ "vik": 26951,
+ "Slider": 26952,
+ "▁Accept": 26953,
+ "Gap": 26954,
+ "▁Jorge": 26955,
+ "SIG": 26956,
+ "▁вос": 26957,
+ "▁голо": 26958,
+ "▁periodo": 26959,
+ "шта": 26960,
+ "▁patches": 26961,
+ "кої": 26962,
+ "äre": 26963,
+ "engono": 26964,
+ "lista": 26965,
+ "horn": 26966,
+ "▁Complex": 26967,
+ "Sent": 26968,
+ "trfs": 26969,
+ "▁convex": 26970,
+ "Generation": 26971,
+ "▁місце": 26972,
+ "compress": 26973,
+ "▁Sax": 26974,
+ "▁uid": 26975,
+ "▁Lebens": 26976,
+ "Completion": 26977,
+ "\\|_{": 26978,
+ "insky": 26979,
+ "▁schon": 26980,
+ "▁masters": 26981,
+ "independ": 26982,
+ "neys": 26983,
+ "▁lied": 26984,
+ "▁aspir": 26985,
+ "чні": 26986,
+ "▁breakdown": 26987,
+ "▁Harm": 26988,
+ "▁designing": 26989,
+ "hf": 26990,
+ "▁Angela": 26991,
+ "▁confer": 26992,
+ "▁partido": 26993,
+ "▁interference": 26994,
+ "mao": 26995,
+ "▁absorbed": 26996,
+ "▁Vall": 26997,
+ "ErrorCode": 26998,
+ "▁Publishing": 26999,
+ "vano": 27000,
+ "BITS": 27001,
+ "▁deer": 27002,
+ "▁Campaign": 27003,
+ "▁graz": 27004,
+ "CHANGE": 27005,
+ "▁feder": 27006,
+ "iffe": 27007,
+ "handed": 27008,
+ "cq": 27009,
+ "umbing": 27010,
+ "▁unre": 27011,
+ "▁siendo": 27012,
+ "▁simpler": 27013,
+ "why": 27014,
+ "arettes": 27015,
+ "anst": 27016,
+ "▁hass": 27017,
+ "▁Enterprise": 27018,
+ "▁mois": 27019,
+ "▁Fo": 27020,
+ "▁участ": 27021,
+ "ffen": 27022,
+ "▁MODULE": 27023,
+ "▁activated": 27024,
+ "▁internacional": 27025,
+ "▁Mittel": 27026,
+ "degree": 27027,
+ "▁откры": 27028,
+ "▁&(": 27029,
+ "getProperty": 27030,
+ "isz": 27031,
+ "cedure": 27032,
+ "▁enters": 27033,
+ "▁Sally": 27034,
+ "▁Train": 27035,
+ "▁logged": 27036,
+ "▁Rav": 27037,
+ "▁Avoid": 27038,
+ "▁Kaiser": 27039,
+ "▁expend": 27040,
+ "aphor": 27041,
+ "▁brass": 27042,
+ "▁melod": 27043,
+ "▁attitudes": 27044,
+ "*\"": 27045,
+ "Wall": 27046,
+ "▁owe": 27047,
+ "▁bamb": 27048,
+ "shader": 27049,
+ "cester": 27050,
+ "▁PP": 27051,
+ "▁migrations": 27052,
+ "entric": 27053,
+ "▁Setup": 27054,
+ "▁Artist": 27055,
+ "hre": 27056,
+ "▁polite": 27057,
+ "ahan": 27058,
+ "▁luglio": 27059,
+ "▁predecess": 27060,
+ "▁SIG": 27061,
+ "тів": 27062,
+ "▁RF": 27063,
+ "▁Dry": 27064,
+ "▁maker": 27065,
+ "шим": 27066,
+ "▁Sounds": 27067,
+ "▁implementing": 27068,
+ "▁ah": 27069,
+ "▁gev": 27070,
+ "▁duplicate": 27071,
+ "▁Logan": 27072,
+ "▁Grade": 27073,
+ "DUCT": 27074,
+ "íses": 27075,
+ "ért": 27076,
+ "▁nonsense": 27077,
+ "backup": 27078,
+ "Attachment": 27079,
+ "▁ecc": 27080,
+ "▁Squadron": 27081,
+ "learn": 27082,
+ "deprecated": 27083,
+ "▁Aub": 27084,
+ "▁Gol": 27085,
+ "▁overl": 27086,
+ "SERVICE": 27087,
+ "▁beautifully": 27088,
+ "REL": 27089,
+ "▁Gian": 27090,
+ "▁Papa": 27091,
+ "respond": 27092,
+ "▁Caribbean": 27093,
+ "rn": 27094,
+ "▁худож": 27095,
+ "Cfg": 27096,
+ "rai": 27097,
+ "▁sniff": 27098,
+ "tto": 27099,
+ "ологи": 27100,
+ "▁rb": 27101,
+ "▁incidents": 27102,
+ "▁duck": 27103,
+ "▁PROVIDED": 27104,
+ "Sources": 27105,
+ "▁Chelsea": 27106,
+ "▁tek": 27107,
+ "▁налази": 27108,
+ "▁pilots": 27109,
+ "тки": 27110,
+ "▁traded": 27111,
+ "▁Beijing": 27112,
+ "▁Gregory": 27113,
+ "scalar": 27114,
+ "▁inclined": 27115,
+ "▁Kamp": 27116,
+ "▁Marian": 27117,
+ "▁fierce": 27118,
+ "▁theft": 27119,
+ "ющих": 27120,
+ "▁Into": 27121,
+ "constraint": 27122,
+ "parentNode": 27123,
+ "idental": 27124,
+ "▁gouvernement": 27125,
+ "▁SND": 27126,
+ "▁Ruby": 27127,
+ "▁monaster": 27128,
+ "Records": 27129,
+ "▁Kab": 27130,
+ "▁Universe": 27131,
+ "▁approximate": 27132,
+ "Water": 27133,
+ "▁Physical": 27134,
+ "appers": 27135,
+ "oubtedly": 27136,
+ "ложен": 27137,
+ "▁towel": 27138,
+ "▁siblings": 27139,
+ "eph": 27140,
+ "icios": 27141,
+ "рами": 27142,
+ "▁outrage": 27143,
+ "▁també": 27144,
+ "SRC": 27145,
+ "телем": 27146,
+ "Vi": 27147,
+ ".');": 27148,
+ "LM": 27149,
+ "▁mitt": 27150,
+ "▁weed": 27151,
+ "▁crops": 27152,
+ "iman": 27153,
+ "Claim": 27154,
+ "insula": 27155,
+ "▁(“": 27156,
+ "▁Changes": 27157,
+ "▁invånare": 27158,
+ "again": 27159,
+ "▁cnt": 27160,
+ "▁Gaz": 27161,
+ "▁austral": 27162,
+ "overlay": 27163,
+ "▁Mechan": 27164,
+ "▁slammed": 27165,
+ "▁trailing": 27166,
+ "▁Biography": 27167,
+ "▁appealing": 27168,
+ "IVER": 27169,
+ "▁Ave": 27170,
+ "▁Plot": 27171,
+ "voj": 27172,
+ "▁sung": 27173,
+ "▁unos": 27174,
+ "Effects": 27175,
+ "vv": 27176,
+ "cook": 27177,
+ "Buttons": 27178,
+ "▁transm": 27179,
+ "ierto": 27180,
+ "CONTEXT": 27181,
+ "▁dignity": 27182,
+ "aired": 27183,
+ "javax": 27184,
+ "▁Alberto": 27185,
+ "▁Recently": 27186,
+ "▁facial": 27187,
+ "mathop": 27188,
+ "ało": 27189,
+ "вид": 27190,
+ "cott": 27191,
+ "Variables": 27192,
+ "▁Ran": 27193,
+ "▁bunk": 27194,
+ "amiliar": 27195,
+ "CAST": 27196,
+ "▁frü": 27197,
+ "VED": 27198,
+ "▁NOTICE": 27199,
+ "▁turno": 27200,
+ "validator": 27201,
+ "▁Portuguese": 27202,
+ "▁questioning": 27203,
+ "}})": 27204,
+ "▁lear": 27205,
+ "Xamarin": 27206,
+ "▁disadv": 27207,
+ "encoded": 27208,
+ "▁Kot": 27209,
+ "rated": 27210,
+ "▁Theory": 27211,
+ "cius": 27212,
+ "▁Darwin": 27213,
+ "ђе": 27214,
+ "▁décl": 27215,
+ "▁область": 27216,
+ "рович": 27217,
+ "▁mobility": 27218,
+ "VF": 27219,
+ "▁хи": 27220,
+ "until": 27221,
+ "▁barriers": 27222,
+ "gif": 27223,
+ "▁Roh": 27224,
+ "▁aging": 27225,
+ "▁Widget": 27226,
+ "olk": 27227,
+ "▁farms": 27228,
+ "Checker": 27229,
+ "Introduction": 27230,
+ "смо": 27231,
+ "▁Russians": 27232,
+ "naments": 27233,
+ "▁Insert": 27234,
+ "▁Whenever": 27235,
+ "erset": 27236,
+ "itori": 27237,
+ "▁Dort": 27238,
+ "▁costume": 27239,
+ "▁mathematical": 27240,
+ "▁Bast": 27241,
+ "▁nominated": 27242,
+ "▁restoration": 27243,
+ "posal": 27244,
+ "▁unfortunate": 27245,
+ "Ps": 27246,
+ "LIN": 27247,
+ "▁intact": 27248,
+ "▁provoc": 27249,
+ "▁située": 27250,
+ "▁ноября": 27251,
+ "ermo": 27252,
+ "▁fisher": 27253,
+ "гля": 27254,
+ "▁conting": 27255,
+ "▁Doug": 27256,
+ "\"?": 27257,
+ "▁Eva": 27258,
+ "▁tops": 27259,
+ "▁Remote": 27260,
+ "▁artwork": 27261,
+ "▁artillery": 27262,
+ "quick": 27263,
+ "▁Arabia": 27264,
+ "▁SDValue": 27265,
+ "▁Dakota": 27266,
+ "iated": 27267,
+ "▁Optim": 27268,
+ "buttons": 27269,
+ "▁cottage": 27270,
+ "▁wherein": 27271,
+ "▁tutorial": 27272,
+ "▁Scre": 27273,
+ "▁sweep": 27274,
+ "▁Coffee": 27275,
+ "})}": 27276,
+ "▁музы": 27277,
+ "hostname": 27278,
+ "▁Temp": 27279,
+ "▁Fut": 27280,
+ "respect": 27281,
+ "ocz": 27282,
+ "▁predomin": 27283,
+ "Indicator": 27284,
+ "encial": 27285,
+ "UMENT": 27286,
+ "▁SHALL": 27287,
+ "▁commanded": 27288,
+ "▁withdrawal": 27289,
+ "iour": 27290,
+ "REGION": 27291,
+ "sprintf": 27292,
+ "▁вме": 27293,
+ "▁Payment": 27294,
+ "▁Anim": 27295,
+ "publish": 27296,
+ "▁seeks": 27297,
+ "ouw": 27298,
+ "▁GM": 27299,
+ "rugu": 27300,
+ "ustain": 27301,
+ "▁))": 27302,
+ "▁consulting": 27303,
+ "▁Dialog": 27304,
+ "▁Lars": 27305,
+ "▁critique": 27306,
+ "▁circulation": 27307,
+ "▁landsc": 27308,
+ "managed": 27309,
+ "▁Craft": 27310,
+ "▁herman": 27311,
+ "afi": 27312,
+ "amy": 27313,
+ "▁discour": 27314,
+ "<>(": 27315,
+ "▁Steph": 27316,
+ "▁tolerance": 27317,
+ "typename": 27318,
+ "ventions": 27319,
+ "ział": 27320,
+ "стов": 27321,
+ "▁sticking": 27322,
+ "ASC": 27323,
+ "ISO": 27324,
+ "▁Spencer": 27325,
+ "▁Didn": 27326,
+ "gomery": 27327,
+ "imiter": 27328,
+ "dru": 27329,
+ "Clause": 27330,
+ "▁slides": 27331,
+ "###": 27332,
+ "▁Sugar": 27333,
+ "HY": 27334,
+ "▁эти": 27335,
+ "▁Edwards": 27336,
+ "▁cents": 27337,
+ "oya": 27338,
+ "serts": 27339,
+ "▁Hass": 27340,
+ "▁ingen": 27341,
+ "стри": 27342,
+ "▁saddle": 27343,
+ "solid": 27344,
+ "▁champions": 27345,
+ "-)": 27346,
+ "▁Slov": 27347,
+ "▁shiny": 27348,
+ "▁*)&": 27349,
+ "▁Define": 27350,
+ "če": 27351,
+ "▁scrut": 27352,
+ "onden": 27353,
+ "'\",": 27354,
+ "uffs": 27355,
+ "▁olymp": 27356,
+ "idential": 27357,
+ "wand": 27358,
+ "▁annually": 27359,
+ "▁Arkansas": 27360,
+ "▁saint": 27361,
+ "▁gleich": 27362,
+ "▁perfection": 27363,
+ ")>": 27364,
+ "▁shorts": 27365,
+ "▁justified": 27366,
+ "peated": 27367,
+ "packages": 27368,
+ "driven": 27369,
+ "▁Liberty": 27370,
+ "▁stripped": 27371,
+ "шение": 27372,
+ "▁fünf": 27373,
+ "▁ecosystem": 27374,
+ "ixa": 27375,
+ "▁Fresh": 27376,
+ "vart": 27377,
+ "▁treats": 27378,
+ "▁stance": 27379,
+ "чёт": 27380,
+ "▁pity": 27381,
+ "adém": 27382,
+ "▁окон": 27383,
+ "▁Chand": 27384,
+ "rab": 27385,
+ "вший": 27386,
+ "inski": 27387,
+ "▁continually": 27388,
+ "▁Daddy": 27389,
+ "▁nightmare": 27390,
+ "icional": 27391,
+ "▁efect": 27392,
+ "ueblo": 27393,
+ "▁lanç": 27394,
+ "▁Collections": 27395,
+ "due": 27396,
+ "ampton": 27397,
+ "▁memcpy": 27398,
+ "▁**(": 27399,
+ "issent": 27400,
+ "▁Insp": 27401,
+ "▁Glasgow": 27402,
+ "▁furono": 27403,
+ "▁kindness": 27404,
+ "Bi": 27405,
+ "▁competed": 27406,
+ "▁oak": 27407,
+ "Large": 27408,
+ "▁disgu": 27409,
+ "▁kings": 27410,
+ "тами": 27411,
+ "▁stuffed": 27412,
+ "▁hilar": 27413,
+ "published": 27414,
+ "▁stressed": 27415,
+ "▁Peak": 27416,
+ "▁loader": 27417,
+ "Keyboard": 27418,
+ "▁reconstruction": 27419,
+ "▁vod": 27420,
+ "▁dun": 27421,
+ "▁understands": 27422,
+ "tenant": 27423,
+ "▁chaque": 27424,
+ "▁prejud": 27425,
+ "utat": 27426,
+ "▁uso": 27427,
+ "▁Heavy": 27428,
+ "▁cuatro": 27429,
+ "▁sidewalk": 27430,
+ "▁Bug": 27431,
+ "▁månaden": 27432,
+ "geo": 27433,
+ "▁united": 27434,
+ "▁Files": 27435,
+ "▁Аль": 27436,
+ "▁rugby": 27437,
+ "▁financing": 27438,
+ "▁comply": 27439,
+ "": 27440,
+ "▁rushing": 27441,
+ "▁fen": 27442,
+ "mong": 27443,
+ "▁spé": 27444,
+ "▁presenting": 27445,
+ "INCLUDING": 27446,
+ "ěl": 27447,
+ "zeichnung": 27448,
+ "Backup": 27449,
+ "▁petit": 27450,
+ "▁allerg": 27451,
+ "нут": 27452,
+ "▁worrying": 27453,
+ "▁mamm": 27454,
+ "▁operand": 27455,
+ ":%.*]]": 27456,
+ "▁realise": 27457,
+ "Commands": 27458,
+ "▁Bew": 27459,
+ "▁assumes": 27460,
+ "▁Covid": 27461,
+ "▁quand": 27462,
+ "tyard": 27463,
+ "▁Mono": 27464,
+ "linked": 27465,
+ "MARK": 27466,
+ "Esp": 27467,
+ "▁blessing": 27468,
+ "▁eyebrows": 27469,
+ "▁NV": 27470,
+ "▁стру": 27471,
+ "▁modeling": 27472,
+ "▁greeted": 27473,
+ "Workspace": 27474,
+ "▁pedest": 27475,
+ "▁неза": 27476,
+ "lemagne": 27477,
+ "Statistics": 27478,
+ "▁aument": 27479,
+ "▁speeds": 27480,
+ "▁syndrome": 27481,
+ "CONNECT": 27482,
+ "zahl": 27483,
+ "verso": 27484,
+ "ército": 27485,
+ "▁astronom": 27486,
+ "▁aprile": 27487,
+ "žen": 27488,
+ "веро": 27489,
+ "draft": 27490,
+ "▁gioc": 27491,
+ "▁comport": 27492,
+ "▁variance": 27493,
+ "▁realizing": 27494,
+ "EDIT": 27495,
+ "олові": 27496,
+ "▁estar": 27497,
+ "▁sost": 27498,
+ "NORMAL": 27499,
+ "▁ó": 27500,
+ "▁Andr": 27501,
+ "ATTRIB": 27502,
+ "▁rede": 27503,
+ "▁toes": 27504,
+ "▁advances": 27505,
+ "▁Against": 27506,
+ "TOM": 27507,
+ "rss": 27508,
+ "MMMM": 27509,
+ "▁newest": 27510,
+ "▁VER": 27511,
+ "▁phrases": 27512,
+ "anter": 27513,
+ "Launch": 27514,
+ "▁chr": 27515,
+ "▁manufactured": 27516,
+ "$),": 27517,
+ "rollment": 27518,
+ "eston": 27519,
+ "▁peint": 27520,
+ "”)": 27521,
+ "endet": 27522,
+ "▁Hair": 27523,
+ "ivalent": 27524,
+ "▁upright": 27525,
+ "gren": 27526,
+ "anked": 27527,
+ "wright": 27528,
+ "▁mast": 27529,
+ "▁onChange": 27530,
+ "▁debris": 27531,
+ "▁grap": 27532,
+ "etry": 27533,
+ "▁(__": 27534,
+ "▁Commerce": 27535,
+ "BOX": 27536,
+ "Tax": 27537,
+ "▁отри": 27538,
+ "▁prevention": 27539,
+ "▁Feel": 27540,
+ "▁exotic": 27541,
+ "▁Bark": 27542,
+ "▁Steam": 27543,
+ "fon": 27544,
+ "olin": 27545,
+ "▁eliminated": 27546,
+ "▁bc": 27547,
+ "▁Cycl": 27548,
+ "▁$(\"#": 27549,
+ "▁Parl": 27550,
+ "manuel": 27551,
+ "ospher": 27552,
+ "WF": 27553,
+ "Analy": 27554,
+ "▁navig": 27555,
+ "▁renown": 27556,
+ "Rx": 27557,
+ "▁Walt": 27558,
+ "uffed": 27559,
+ "▁foster": 27560,
+ "$:": 27561,
+ "shore": 27562,
+ "Connector": 27563,
+ "фика": 27564,
+ "▁realization": 27565,
+ "Li": 27566,
+ "ctxt": 27567,
+ "ahoo": 27568,
+ "▁miracle": 27569,
+ "▁ET": 27570,
+ "▁GPS": 27571,
+ "▁Observable": 27572,
+ "▁hf": 27573,
+ "▁magnificent": 27574,
+ "него": 27575,
+ "BIN": 27576,
+ "▁Dorf": 27577,
+ "ieck": 27578,
+ "vee": 27579,
+ "▁Craw": 27580,
+ "/#": 27581,
+ "▁pci": 27582,
+ "ippet": 27583,
+ "▁Hillary": 27584,
+ "▁gir": 27585,
+ "▁rand": 27586,
+ "▁laying": 27587,
+ "▁Different": 27588,
+ "boys": 27589,
+ "virt": 27590,
+ "▁encryption": 27591,
+ "ász": 27592,
+ "пор": 27593,
+ "▁smelled": 27594,
+ "▁suscept": 27595,
+ "cluded": 27596,
+ "▁Carn": 27597,
+ "igten": 27598,
+ "▁Chuck": 27599,
+ "▁Provinc": 27600,
+ "▁perí": 27601,
+ "▁Marshal": 27602,
+ "мож": 27603,
+ "gfx": 27604,
+ "oshi": 27605,
+ "▁WHE": 27606,
+ "▁relaxation": 27607,
+ ",.": 27608,
+ "were": 27609,
+ "▁varieties": 27610,
+ "▁Won": 27611,
+ "▁gaps": 27612,
+ "▁stole": 27613,
+ "igua": 27614,
+ "ющие": 27615,
+ "▁Hampshire": 27616,
+ "phrase": 27617,
+ "▁película": 27618,
+ "Processing": 27619,
+ "▁initialization": 27620,
+ "oustic": 27621,
+ "▁Josef": 27622,
+ "icating": 27623,
+ "▁goodness": 27624,
+ "TES": 27625,
+ "▁cope": 27626,
+ "▁ignorance": 27627,
+ "▁Brist": 27628,
+ "▁paras": 27629,
+ "▁accidentally": 27630,
+ "▁tand": 27631,
+ "ittest": 27632,
+ "▁ули": 27633,
+ "▁shipped": 27634,
+ "▁ост": 27635,
+ "elseif": 27636,
+ "▁usize": 27637,
+ "horizontal": 27638,
+ "▁Carr": 27639,
+ "▁precip": 27640,
+ "roz": 27641,
+ "pathetic": 27642,
+ "rived": 27643,
+ "rok": 27644,
+ "▁digging": 27645,
+ "мом": 27646,
+ "▁Mull": 27647,
+ "▁XIII": 27648,
+ "▁peas": 27649,
+ "▁foul": 27650,
+ "▁travels": 27651,
+ "▁Ng": 27652,
+ "▁составе": 27653,
+ "Mont": 27654,
+ "arde": 27655,
+ "▁Stefan": 27656,
+ "^^^^": 27657,
+ "▁Kiss": 27658,
+ "▁Ek": 27659,
+ "▁oktober": 27660,
+ "▁memorable": 27661,
+ "')).": 27662,
+ "▁Vision": 27663,
+ "▁Nina": 27664,
+ "▁Solar": 27665,
+ "▁highlighted": 27666,
+ "▁memo": 27667,
+ "meisterschaft": 27668,
+ "sidebar": 27669,
+ "SEE": 27670,
+ "▁Nevada": 27671,
+ "Da": 27672,
+ "▁drawer": 27673,
+ "astically": 27674,
+ "elde": 27675,
+ "scribed": 27676,
+ "▁priests": 27677,
+ "▁hommes": 27678,
+ "▁instructor": 27679,
+ "клад": 27680,
+ "▁spett": 27681,
+ "\\-": 27682,
+ "▁мира": 27683,
+ "▁Looks": 27684,
+ "▁sleeve": 27685,
+ "▁strongest": 27686,
+ "▁tête": 27687,
+ "▁Nicole": 27688,
+ "imper": 27689,
+ "нача": 27690,
+ "ipper": 27691,
+ "▁inwon": 27692,
+ "ilers": 27693,
+ "▁Deputy": 27694,
+ "oge": 27695,
+ "▁depressed": 27696,
+ "▁arte": 27697,
+ "▁combining": 27698,
+ "LAST": 27699,
+ "inted": 27700,
+ "▁Average": 27701,
+ "▁pollution": 27702,
+ "▁Phillips": 27703,
+ "▁WM": 27704,
+ "}}}\\": 27705,
+ "Added": 27706,
+ "▁peripher": 27707,
+ "Creation": 27708,
+ "▁italien": 27709,
+ "▁Choice": 27710,
+ "▁EXPRESS": 27711,
+ "▁Struct": 27712,
+ "ysz": 27713,
+ "Resize": 27714,
+ "ARGS": 27715,
+ "▁repo": 27716,
+ "▁чтобы": 27717,
+ "▁pref": 27718,
+ "▁earthqu": 27719,
+ "▁Мекси": 27720,
+ "▁Finale": 27721,
+ "▁hecho": 27722,
+ "requests": 27723,
+ "Cut": 27724,
+ "▁deserved": 27725,
+ "гово": 27726,
+ "▁Recent": 27727,
+ "▁дивизи": 27728,
+ "▁supportive": 27729,
+ "прави": 27730,
+ "▁irrelevant": 27731,
+ "'\r": 27732,
+ "▁ctrl": 27733,
+ "▁Deal": 27734,
+ "izada": 27735,
+ "uo": 27736,
+ "▁nort": 27737,
+ "geometry": 27738,
+ "▁Individual": 27739,
+ "ereg": 27740,
+ "▁приня": 27741,
+ "cref": 27742,
+ "══": 27743,
+ "▁comerc": 27744,
+ "=_": 27745,
+ "bund": 27746,
+ "тах": 27747,
+ "ilen": 27748,
+ "чита": 27749,
+ "▁corporation": 27750,
+ "esz": 27751,
+ "▁==>": 27752,
+ "ablish": 27753,
+ "Apr": 27754,
+ "▁ripped": 27755,
+ "Vars": 27756,
+ "stret": 27757,
+ "▁Francesco": 27758,
+ "NaN": 27759,
+ "▁anytime": 27760,
+ "▁automated": 27761,
+ "ostream": 27762,
+ "▁drawings": 27763,
+ "▁enhancement": 27764,
+ "okrat": 27765,
+ "▁Issue": 27766,
+ "вра": 27767,
+ "Currency": 27768,
+ "▁wyn": 27769,
+ "izarre": 27770,
+ "ético": 27771,
+ "multiple": 27772,
+ "▁Rate": 27773,
+ "▁Ich": 27774,
+ "▁Auss": 27775,
+ "▁Former": 27776,
+ "Curve": 27777,
+ "▁marvel": 27778,
+ "attro": 27779,
+ "▁сп": 27780,
+ "BOOL": 27781,
+ "сия": 27782,
+ "gold": 27783,
+ "▁Nintendo": 27784,
+ "▁Salvador": 27785,
+ "▁Solution": 27786,
+ "ADC": 27787,
+ "бора": 27788,
+ "▁Bennett": 27789,
+ "▁FR": 27790,
+ "▁pueden": 27791,
+ "patient": 27792,
+ "▁PG": 27793,
+ "▁Jin": 27794,
+ "▁crashed": 27795,
+ "▁denen": 27796,
+ "▁Sample": 27797,
+ "▁Quebec": 27798,
+ "itories": 27799,
+ "▁blinked": 27800,
+ "▁lion": 27801,
+ "▁voce": 27802,
+ "▁Impact": 27803,
+ "▁Mau": 27804,
+ "▁Nie": 27805,
+ "▁lob": 27806,
+ "▁две": 27807,
+ "orneys": 27808,
+ "▁coastal": 27809,
+ "▁sensors": 27810,
+ "▁XII": 27811,
+ "▁illusion": 27812,
+ "oji": 27813,
+ "▁INC": 27814,
+ "▁Duncan": 27815,
+ "yk": 27816,
+ "▁affecting": 27817,
+ "pul": 27818,
+ "▁Napoleon": 27819,
+ "▁акаде": 27820,
+ "▁compt": 27821,
+ "▁profitable": 27822,
+ "loe": 27823,
+ "▁deuxième": 27824,
+ "▁WC": 27825,
+ "▁viable": 27826,
+ "▁Drug": 27827,
+ "TextBox": 27828,
+ "▁luminos": 27829,
+ "auté": 27830,
+ "yc": 27831,
+ "ště": 27832,
+ "▁affiliates": 27833,
+ "ilda": 27834,
+ "conduct": 27835,
+ "▁ebenfalls": 27836,
+ "▁AMD": 27837,
+ "▁Monitor": 27838,
+ "▁Companies": 27839,
+ "▁corrected": 27840,
+ "äck": 27841,
+ "SYSTEM": 27842,
+ "otherapy": 27843,
+ "▁перед": 27844,
+ "▁blues": 27845,
+ "atisf": 27846,
+ "although": 27847,
+ "rost": 27848,
+ "SCAN": 27849,
+ "▁RAM": 27850,
+ "ціональ": 27851,
+ "▁vendors": 27852,
+ "▁customs": 27853,
+ "▁activate": 27854,
+ "▁blogs": 27855,
+ "▁brace": 27856,
+ "▁strat": 27857,
+ "anje": 27858,
+ "щё": 27859,
+ "▁tide": 27860,
+ "▁Brigade": 27861,
+ "getOperand": 27862,
+ "▁aliment": 27863,
+ "▁achievements": 27864,
+ "▁suspicion": 27865,
+ "▁touchdown": 27866,
+ "broad": 27867,
+ "iore": 27868,
+ "Comparison": 27869,
+ "▁mum": 27870,
+ "English": 27871,
+ "▁Picture": 27872,
+ "▁Mouse": 27873,
+ "amd": 27874,
+ "▁[`": 27875,
+ "▁denomin": 27876,
+ "▁Aleks": 27877,
+ "▁prevents": 27878,
+ "ób": 27879,
+ "fed": 27880,
+ "▁Pray": 27881,
+ "▁shine": 27882,
+ "▁clutch": 27883,
+ "mux": 27884,
+ "Appro": 27885,
+ "▁notably": 27886,
+ "chio": 27887,
+ "nage": 27888,
+ "HAS": 27889,
+ "▁')": 27890,
+ "▁Miche": 27891,
+ "tg": 27892,
+ "::~": 27893,
+ "▁amely": 27894,
+ "▁rodz": 27895,
+ "zs": 27896,
+ "trait": 27897,
+ "▁klass": 27898,
+ "fö": 27899,
+ "▁destac": 27900,
+ "▁Clara": 27901,
+ "frequency": 27902,
+ "▁Git": 27903,
+ "▁поль": 27904,
+ "▁frequencies": 27905,
+ "▁febrero": 27906,
+ "▁stumbled": 27907,
+ "кою": 27908,
+ "▁Names": 27909,
+ "▁Flight": 27910,
+ "▁prey": 27911,
+ "▁medio": 27912,
+ "▁VAR": 27913,
+ "▁Float": 27914,
+ "▁Ernest": 27915,
+ "▁Marcatori": 27916,
+ "oport": 27917,
+ "▁cancellation": 27918,
+ "▁Bryan": 27919,
+ "————": 27920,
+ "Luc": 27921,
+ "▁libre": 27922,
+ "▁título": 27923,
+ "*>": 27924,
+ "▁Sandy": 27925,
+ "▁Marina": 27926,
+ "Been": 27927,
+ "▁wal": 27928,
+ "▁Kultur": 27929,
+ "▁explode": 27930,
+ "▁limiting": 27931,
+ "▁presumably": 27932,
+ "▁pb": 27933,
+ "▁Merc": 27934,
+ "▁реки": 27935,
+ "learning": 27936,
+ "Catalog": 27937,
+ "▁Census": 27938,
+ "lte": 27939,
+ "▁NET": 27940,
+ "raising": 27941,
+ "ське": 27942,
+ "staff": 27943,
+ "▁Quinn": 27944,
+ "▁memorial": 27945,
+ "пня": 27946,
+ "▁cuenta": 27947,
+ "▁XI": 27948,
+ "lbl": 27949,
+ "▁varies": 27950,
+ "▁fluctuations": 27951,
+ "▁долж": 27952,
+ "▁особи": 27953,
+ "▁warehouse": 27954,
+ "However": 27955,
+ "▁corrections": 27956,
+ "dhd": 27957,
+ "▁fals": 27958,
+ "▁controversy": 27959,
+ "▁curse": 27960,
+ "▁télé": 27961,
+ "řed": 27962,
+ "▁AU": 27963,
+ "▁тор": 27964,
+ "▁crít": 27965,
+ "idan": 27966,
+ "iliary": 27967,
+ "▁Panel": 27968,
+ "cule": 27969,
+ "▁Poor": 27970,
+ "▁BA": 27971,
+ "▁ignorant": 27972,
+ "èmes": 27973,
+ "▁aesthetic": 27974,
+ "Linked": 27975,
+ "getInt": 27976,
+ "Unicode": 27977,
+ "[@": 27978,
+ "▁Zent": 27979,
+ "Manifest": 27980,
+ "▁vars": 27981,
+ "PB": 27982,
+ "▁ву": 27983,
+ "▁Describe": 27984,
+ "▁Anything": 27985,
+ "oirs": 27986,
+ "▁socks": 27987,
+ "▁imped": 27988,
+ "▁neue": 27989,
+ "▁dispers": 27990,
+ "Collect": 27991,
+ "filer": 27992,
+ "▁Frau": 27993,
+ "▁Hockey": 27994,
+ "▁teens": 27995,
+ "▁Roberto": 27996,
+ "lauf": 27997,
+ "вать": 27998,
+ "▁ско": 27999,
+ "isArray": 28000,
+ "▁teenager": 28001,
+ "Built": 28002,
+ "▁loudly": 28003,
+ "Capacity": 28004,
+ "▁adventures": 28005,
+ "▁Molly": 28006,
+ "recogn": 28007,
+ "bars": 28008,
+ "▁Lor": 28009,
+ "▁può": 28010,
+ "▁mong": 28011,
+ "inement": 28012,
+ "Assignment": 28013,
+ "▁diz": 28014,
+ "lessness": 28015,
+ "▁Halloween": 28016,
+ "▁bitmap": 28017,
+ "Rom": 28018,
+ "нар": 28019,
+ "▁rebel": 28020,
+ "▁radial": 28021,
+ "measure": 28022,
+ "nit": 28023,
+ "▁Assume": 28024,
+ "▁assignments": 28025,
+ "▁Isn": 28026,
+ "▁altre": 28027,
+ "ßer": 28028,
+ "наль": 28029,
+ "▁flies": 28030,
+ "▁droit": 28031,
+ "▁thickness": 28032,
+ "▁enjo": 28033,
+ "▁dwell": 28034,
+ "▁homosexual": 28035,
+ "▁eval": 28036,
+ "$_{": 28037,
+ "asia": 28038,
+ "▁philos": 28039,
+ "getCurrent": 28040,
+ "▁veterans": 28041,
+ "▁Berkeley": 28042,
+ "▁wildlife": 28043,
+ "Cop": 28044,
+ "vern": 28045,
+ "▁Ú": 28046,
+ "tos": 28047,
+ "▁Led": 28048,
+ "▁keywords": 28049,
+ "▁medications": 28050,
+ "neum": 28051,
+ "▁jamais": 28052,
+ "▁Buc": 28053,
+ "▁PD": 28054,
+ "▁Statement": 28055,
+ "▁PI": 28056,
+ "▁Jackie": 28057,
+ "▁ordin": 28058,
+ "▁kör": 28059,
+ "enze": 28060,
+ "▁utilized": 28061,
+ "áct": 28062,
+ "azed": 28063,
+ "▁severely": 28064,
+ "▁även": 28065,
+ "▁libro": 28066,
+ "▁Eu": 28067,
+ "äst": 28068,
+ "PART": 28069,
+ "▁Butler": 28070,
+ "▁puzzle": 28071,
+ "Fall": 28072,
+ "Country": 28073,
+ "pfn": 28074,
+ "▁україн": 28075,
+ "▁Orchestra": 28076,
+ "▁alto": 28077,
+ "▁ancora": 28078,
+ "▁decomposition": 28079,
+ "▁م": 28080,
+ "▁appetite": 28081,
+ "adu": 28082,
+ "▁THAT": 28083,
+ "▁comenz": 28084,
+ "mina": 28085,
+ "▁initiated": 28086,
+ "▁Tat": 28087,
+ "▁sometime": 28088,
+ "rek": 28089,
+ "bread": 28090,
+ "▁Statistics": 28091,
+ "▁Cob": 28092,
+ "Follow": 28093,
+ "▁geometric": 28094,
+ "шла": 28095,
+ "▁proceedings": 28096,
+ "Dlg": 28097,
+ "seven": 28098,
+ "▁[-": 28099,
+ "▁Buffalo": 28100,
+ "▁blacks": 28101,
+ "▁sov": 28102,
+ "▁custody": 28103,
+ "▁ras": 28104,
+ "▁tattoo": 28105,
+ "öffentlicht": 28106,
+ "Blo": 28107,
+ "Austral": 28108,
+ "▁recuper": 28109,
+ "лев": 28110,
+ "▁bem": 28111,
+ "▁thou": 28112,
+ "oriented": 28113,
+ "vir": 28114,
+ "▁colony": 28115,
+ "▁Stanford": 28116,
+ "Absolute": 28117,
+ "adrat": 28118,
+ "▁Situ": 28119,
+ "▁souvent": 28120,
+ "EXEC": 28121,
+ "▁mű": 28122,
+ "▁apartments": 28123,
+ "▁случа": 28124,
+ "▁ano": 28125,
+ "WINDO": 28126,
+ "acci": 28127,
+ "▁Lau": 28128,
+ "court": 28129,
+ "▁manifold": 28130,
+ "▁coalition": 28131,
+ "▁XIV": 28132,
+ "Attrib": 28133,
+ "ascade": 28134,
+ "▁wheat": 28135,
+ "▁strengths": 28136,
+ "FREE": 28137,
+ "EMPTY": 28138,
+ "▁hey": 28139,
+ "ascular": 28140,
+ "▁plasma": 28141,
+ "▁bob": 28142,
+ "Separator": 28143,
+ "=\"${": 28144,
+ "▁Zag": 28145,
+ "▁projet": 28146,
+ "▁smoothly": 28147,
+ "SEQU": 28148,
+ "analy": 28149,
+ "attachment": 28150,
+ "▁ES": 28151,
+ "▁popped": 28152,
+ "ős": 28153,
+ "tom": 28154,
+ "▁són": 28155,
+ "▁rott": 28156,
+ "Utilities": 28157,
+ "hadoop": 28158,
+ "▁sotto": 28159,
+ "autor": 28160,
+ "▁Georges": 28161,
+ "▁který": 28162,
+ "▁gruppo": 28163,
+ "▁когда": 28164,
+ "▁меда": 28165,
+ "▁instrumental": 28166,
+ "▁Writer": 28167,
+ "▁setTimeout": 28168,
+ "ikk": 28169,
+ "▁Dopo": 28170,
+ "]);\r": 28171,
+ "▁practicing": 28172,
+ "▁Ronald": 28173,
+ "▁уби": 28174,
+ "▁agrees": 28175,
+ "▁denoted": 28176,
+ "ismiss": 28177,
+ "▁interviewed": 28178,
+ "templates": 28179,
+ "ři": 28180,
+ "administr": 28181,
+ "▁Butter": 28182,
+ "▁XVII": 28183,
+ "▁positioned": 28184,
+ "▁Fourth": 28185,
+ "▁overwhelmed": 28186,
+ "▁Regular": 28187,
+ "▁reprezent": 28188,
+ "кономи": 28189,
+ "▁expects": 28190,
+ "Indices": 28191,
+ "▁marijuana": 28192,
+ "▁zaj": 28193,
+ "▁Bren": 28194,
+ "▁begg": 28195,
+ "▁nahm": 28196,
+ "▁interrog": 28197,
+ "тие": 28198,
+ "▁Bun": 28199,
+ "▁серед": 28200,
+ "▁shelves": 28201,
+ "▁которых": 28202,
+ "▁Frauen": 28203,
+ "▁Sergeant": 28204,
+ "▁успе": 28205,
+ "matched": 28206,
+ "▁donne": 28207,
+ "▁touches": 28208,
+ "abort": 28209,
+ "▁vale": 28210,
+ "▁institutional": 28211,
+ "▁Mons": 28212,
+ "▁ambitious": 28213,
+ "▁nonetheless": 28214,
+ "jd": 28215,
+ "пей": 28216,
+ "▁backpack": 28217,
+ "dao": 28218,
+ "вия": 28219,
+ "▁surroundings": 28220,
+ "|_{": 28221,
+ "▁gegründ": 28222,
+ "disp": 28223,
+ "▁moisture": 28224,
+ "▁wyd": 28225,
+ "▁traders": 28226,
+ "▁Erst": 28227,
+ "▁Galaxy": 28228,
+ "▁воло": 28229,
+ "▁Peru": 28230,
+ "▁priorities": 28231,
+ "▁pronounced": 28232,
+ "▁CBS": 28233,
+ "▁Palm": 28234,
+ "▁expans": 28235,
+ "▁energet": 28236,
+ "▁Condition": 28237,
+ "▁Sver": 28238,
+ "nested": 28239,
+ "▁февраля": 28240,
+ "hero": 28241,
+ "▁коло": 28242,
+ "▁Films": 28243,
+ "Bon": 28244,
+ "éal": 28245,
+ "ployed": 28246,
+ "trained": 28247,
+ "▁első": 28248,
+ "▁lust": 28249,
+ "atinum": 28250,
+ "oyle": 28251,
+ "▁Jet": 28252,
+ "ждения": 28253,
+ "▁surveys": 28254,
+ "bee": 28255,
+ "workers": 28256,
+ "records": 28257,
+ "calendar": 28258,
+ "bbing": 28259,
+ "regation": 28260,
+ "dashboard": 28261,
+ "King": 28262,
+ "▁vista": 28263,
+ "▁depicted": 28264,
+ "▁occurring": 28265,
+ "▁офи": 28266,
+ "▁sandwich": 28267,
+ "rcu": 28268,
+ "kern": 28269,
+ "▁minut": 28270,
+ "▁смер": 28271,
+ "▁td": 28272,
+ "solete": 28273,
+ "Complex": 28274,
+ "▁tunn": 28275,
+ "▁scarc": 28276,
+ "stead": 28277,
+ "▁Fail": 28278,
+ "▁Rs": 28279,
+ "▁trails": 28280,
+ "kem": 28281,
+ "▁Romans": 28282,
+ "ativity": 28283,
+ "Previous": 28284,
+ "▁depress": 28285,
+ "▁resigned": 28286,
+ "getDefault": 28287,
+ "▁Tibet": 28288,
+ "▁Franco": 28289,
+ "\")));": 28290,
+ "▁injection": 28291,
+ "removed": 28292,
+ "▁praised": 28293,
+ "▁Asc": 28294,
+ "erase": 28295,
+ "▁commissioned": 28296,
+ "MAIL": 28297,
+ "▁Boh": 28298,
+ "Poly": 28299,
+ "▁cinq": 28300,
+ "▁Above": 28301,
+ "▁Joshua": 28302,
+ "ZERO": 28303,
+ "▁summit": 28304,
+ "▁Urs": 28305,
+ "▁curl": 28306,
+ "▁visa": 28307,
+ "▁resur": 28308,
+ "={'": 28309,
+ "feat": 28310,
+ "▁absorb": 28311,
+ "▁planets": 28312,
+ "▁princess": 28313,
+ "▁Jahrhunderts": 28314,
+ "xp": 28315,
+ "▁NBC": 28316,
+ "▁коми": 28317,
+ "▁FUN": 28318,
+ "▁neuen": 28319,
+ "▁déjà": 28320,
+ "▁Oz": 28321,
+ "bben": 28322,
+ "VIDEO": 28323,
+ "▁ejempl": 28324,
+ "▁considers": 28325,
+ "atri": 28326,
+ "▁arrog": 28327,
+ "ioso": 28328,
+ "▁hace": 28329,
+ "▁contacted": 28330,
+ "▁unple": 28331,
+ "▁sponsored": 28332,
+ "▁trainer": 28333,
+ "sbi": 28334,
+ "▁занима": 28335,
+ "Criterion": 28336,
+ "ното": 28337,
+ "scheme": 28338,
+ "ennial": 28339,
+ "perform": 28340,
+ "▁fixing": 28341,
+ "▁постро": 28342,
+ "arb": 28343,
+ "EXIT": 28344,
+ "▁café": 28345,
+ "ituted": 28346,
+ "riages": 28347,
+ "Tur": 28348,
+ "▁haber": 28349,
+ "elasticsearch": 28350,
+ "▁ал": 28351,
+ "rh": 28352,
+ "▁voll": 28353,
+ "CLU": 28354,
+ "Mil": 28355,
+ "▁membres": 28356,
+ "▁remarked": 28357,
+ "вана": 28358,
+ "=\"_": 28359,
+ "Less": 28360,
+ "(\"\");": 28361,
+ "▁Yale": 28362,
+ "berries": 28363,
+ "▁releasing": 28364,
+ "▁imports": 28365,
+ "idea": 28366,
+ "▁(+": 28367,
+ "▁arqu": 28368,
+ "ificación": 28369,
+ "▁пара": 28370,
+ "▁Rangers": 28371,
+ "Mic": 28372,
+ "▁nederbörd": 28373,
+ "▁imaginary": 28374,
+ "▁specialists": 28375,
+ "▁hoof": 28376,
+ "Modules": 28377,
+ "▁sadly": 28378,
+ "ceil": 28379,
+ "TabIndex": 28380,
+ "ationale": 28381,
+ "▁Partner": 28382,
+ "tbody": 28383,
+ "▁leverage": 28384,
+ "DN": 28385,
+ "▁Prec": 28386,
+ "▁Sé": 28387,
+ "▁Mam": 28388,
+ "▁afin": 28389,
+ "isValid": 28390,
+ "Pse": 28391,
+ "▁сторо": 28392,
+ "▁chopped": 28393,
+ "▁Minor": 28394,
+ "▁dabei": 28395,
+ "David": 28396,
+ "ussia": 28397,
+ "▁деревня": 28398,
+ "▁Identity": 28399,
+ "▁LGBT": 28400,
+ "ције": 28401,
+ "▁Orts": 28402,
+ "▁parti": 28403,
+ "▁Bachelor": 28404,
+ "uga": 28405,
+ "▁OPT": 28406,
+ "▁Seth": 28407,
+ "▁LIABLE": 28408,
+ "▁inaugur": 28409,
+ "▁Shanghai": 28410,
+ "▁relaxing": 28411,
+ "циона": 28412,
+ "\"%": 28413,
+ "▁obey": 28414,
+ "▁Airlines": 28415,
+ "Links": 28416,
+ "▁Celt": 28417,
+ "▁Admin": 28418,
+ "agation": 28419,
+ "▁worries": 28420,
+ "INTE": 28421,
+ "arith": 28422,
+ "Fatalf": 28423,
+ "]])": 28424,
+ "colm": 28425,
+ "▁archae": 28426,
+ "▁brushed": 28427,
+ "▁tät": 28428,
+ "▁structured": 28429,
+ "тии": 28430,
+ "▁homem": 28431,
+ "[:,": 28432,
+ "▁navy": 28433,
+ "getKey": 28434,
+ "powered": 28435,
+ "▁sucked": 28436,
+ "▁zomb": 28437,
+ "issant": 28438,
+ "▁Might": 28439,
+ "▁Pull": 28440,
+ "rir": 28441,
+ "▁пі": 28442,
+ "▁seas": 28443,
+ "▁Wrest": 28444,
+ "▁tense": 28445,
+ "▁atm": 28446,
+ "▁havet": 28447,
+ "▁pierws": 28448,
+ "▁tragic": 28449,
+ "▁Diff": 28450,
+ "▁confidential": 28451,
+ "successful": 28452,
+ "ęż": 28453,
+ "▁Chain": 28454,
+ "▁Kenya": 28455,
+ "Choice": 28456,
+ "ocur": 28457,
+ "aniu": 28458,
+ "▁consultant": 28459,
+ "▁Advis": 28460,
+ "Lif": 28461,
+ "▁Lors": 28462,
+ "avorite": 28463,
+ "▁utilizing": 28464,
+ "▁vintage": 28465,
+ "Matcher": 28466,
+ "▁membre": 28467,
+ "▁Expect": 28468,
+ "▁tracing": 28469,
+ "nog": 28470,
+ "▁dej": 28471,
+ "▁уче": 28472,
+ "▁loops": 28473,
+ "▁onclick": 28474,
+ "▁GPU": 28475,
+ "▁Albums": 28476,
+ "▁Archives": 28477,
+ "вата": 28478,
+ "▁stove": 28479,
+ "шли": 28480,
+ "ancies": 28481,
+ "▁gemeente": 28482,
+ "mob": 28483,
+ "PDF": 28484,
+ "eso": 28485,
+ "▁vég": 28486,
+ "Resolve": 28487,
+ "▁teaches": 28488,
+ "ложе": 28489,
+ "▁ство": 28490,
+ "▁Одна": 28491,
+ "▁fid": 28492,
+ "Something": 28493,
+ "▁nebo": 28494,
+ "▁Valentine": 28495,
+ "rowning": 28496,
+ "▁але": 28497,
+ "awi": 28498,
+ "ishi": 28499,
+ "▁SPI": 28500,
+ "▁spel": 28501,
+ "▁біль": 28502,
+ "▁participant": 28503,
+ "▁Ned": 28504,
+ "▁Gast": 28505,
+ "▁blond": 28506,
+ "▁saves": 28507,
+ "colored": 28508,
+ "▁ACTION": 28509,
+ "▁Politiker": 28510,
+ "}$)": 28511,
+ "▁Dum": 28512,
+ "dentry": 28513,
+ "Student": 28514,
+ "▁~=": 28515,
+ "loads": 28516,
+ "▁Foster": 28517,
+ "一个": 28518,
+ "▁PK": 28519,
+ "▁SB": 28520,
+ "▁Hern": 28521,
+ "▁Exhib": 28522,
+ "Listeners": 28523,
+ "Sun": 28524,
+ "plac": 28525,
+ "▁Bever": 28526,
+ "▁incluy": 28527,
+ "▁dc": 28528,
+ "argc": 28529,
+ "▁ged": 28530,
+ "спа": 28531,
+ "▁Formula": 28532,
+ "▁сем": 28533,
+ "▁empt": 28534,
+ "unregister": 28535,
+ "▁Queensland": 28536,
+ "ández": 28537,
+ "otive": 28538,
+ "▁alley": 28539,
+ "▁Democrat": 28540,
+ "▁travail": 28541,
+ "▁$,": 28542,
+ "RP": 28543,
+ "рое": 28544,
+ "personal": 28545,
+ "▁période": 28546,
+ "HOME": 28547,
+ "omes": 28548,
+ "▁recognised": 28549,
+ "heng": 28550,
+ "▁Jung": 28551,
+ "▁Roland": 28552,
+ "▁convicted": 28553,
+ "Locked": 28554,
+ "▁mari": 28555,
+ "▁Luxem": 28556,
+ "referto": 28557,
+ "Deleted": 28558,
+ "intent": 28559,
+ "▁Staats": 28560,
+ "▁області": 28561,
+ "ит": 28562,
+ "▁саве": 28563,
+ "▁Protocol": 28564,
+ "ając": 28565,
+ "chk": 28566,
+ "TypeInfo": 28567,
+ "▁pkt": 28568,
+ "▁scandal": 28569,
+ "▁individually": 28570,
+ "FMT": 28571,
+ "▁nj": 28572,
+ "abile": 28573,
+ "▁Rivers": 28574,
+ "PROPERTY": 28575,
+ "VB": 28576,
+ "wort": 28577,
+ "▁splitting": 28578,
+ "achten": 28579,
+ "▁ARISING": 28580,
+ "▁sip": 28581,
+ "▁fres": 28582,
+ "▁groom": 28583,
+ "Hol": 28584,
+ "▁canon": 28585,
+ "▁abruptly": 28586,
+ "▁afterward": 28587,
+ "▁Running": 28588,
+ "▁ji": 28589,
+ "▁%,": 28590,
+ "▁Palestinian": 28591,
+ "RW": 28592,
+ "pgfscope": 28593,
+ "▁countryside": 28594,
+ "▁fortunate": 28595,
+ "▁cél": 28596,
+ "▁Pointer": 28597,
+ "ensors": 28598,
+ "rating": 28599,
+ "▁buffers": 28600,
+ "▁remot": 28601,
+ "▁PropTypes": 28602,
+ "▁Nah": 28603,
+ "altern": 28604,
+ "▁easiest": 28605,
+ "▁invas": 28606,
+ "▁clk": 28607,
+ "copyright": 28608,
+ "▁blanc": 28609,
+ "SAMP": 28610,
+ "▁Cohen": 28611,
+ "▁Shell": 28612,
+ "▁destroying": 28613,
+ "▁Zel": 28614,
+ "dater": 28615,
+ "čen": 28616,
+ "▁filing": 28617,
+ "▁integrate": 28618,
+ "xit": 28619,
+ "▁RET": 28620,
+ "lene": 28621,
+ "calls": 28622,
+ "▁slaughter": 28623,
+ "initialized": 28624,
+ "unches": 28625,
+ "▁Trace": 28626,
+ "efficient": 28627,
+ "▁Woods": 28628,
+ "▁longitud": 28629,
+ "GN": 28630,
+ "▁Kont": 28631,
+ "▁chunks": 28632,
+ "ách": 28633,
+ "▁unemployment": 28634,
+ "acom": 28635,
+ "▁slowed": 28636,
+ "▁outlined": 28637,
+ "xffff": 28638,
+ "▁ikke": 28639,
+ "▁workspace": 28640,
+ "Mc": 28641,
+ "▁kicking": 28642,
+ "▁embedding": 28643,
+ "chnitt": 28644,
+ "erten": 28645,
+ "▁Interior": 28646,
+ "▁Songs": 28647,
+ "mmc": 28648,
+ "▁analyzed": 28649,
+ "▁Coupe": 28650,
+ "▁favorites": 28651,
+ "▁tt": 28652,
+ "▁той": 28653,
+ "Routing": 28654,
+ "▁Silva": 28655,
+ "▁anderem": 28656,
+ "▁honom": 28657,
+ "▁использова": 28658,
+ ".\"]": 28659,
+ "▁Wu": 28660,
+ "legt": 28661,
+ "▁spoon": 28662,
+ "▁jap": 28663,
+ "▁Extension": 28664,
+ "erne": 28665,
+ "▁vagy": 28666,
+ "▁села": 28667,
+ "▁функ": 28668,
+ "▁analytics": 28669,
+ "▁sug": 28670,
+ "▁Async": 28671,
+ "▁peaks": 28672,
+ "▁Gym": 28673,
+ "▁lawsuit": 28674,
+ "<>": 28675,
+ "ialis": 28676,
+ "etric": 28677,
+ "faced": 28678,
+ "▁disrupt": 28679,
+ "▁få": 28680,
+ "Inputs": 28681,
+ "`);": 28682,
+ "▁Mend": 28683,
+ "gon": 28684,
+ "▁\",\"": 28685,
+ "▁nerves": 28686,
+ "▁doubts": 28687,
+ "sap": 28688,
+ "▁sow": 28689,
+ ",\\,\\": 28690,
+ "▁BS": 28691,
+ "▁Glad": 28692,
+ "▁aster": 28693,
+ "œuvre": 28694,
+ "▁Bangl": 28695,
+ "▁iPad": 28696,
+ "useppe": 28697,
+ "▁conducting": 28698,
+ "▁({\\": 28699,
+ "▁Harbor": 28700,
+ "psz": 28701,
+ "▁FIFA": 28702,
+ "_**": 28703,
+ "emor": 28704,
+ "▁": 28705,
+ "e": 28706,
+ "t": 28707,
+ "a": 28708,
+ "o": 28709,
+ "i": 28710,
+ "n": 28711,
+ "r": 28712,
+ "s": 28713,
+ "l": 28714,
+ "d": 28715,
+ "h": 28716,
+ "c": 28717,
+ "u": 28718,
+ "m": 28719,
+ "p": 28720,
+ "g": 28721,
+ "f": 28722,
+ ".": 28723,
+ "y": 28724,
+ ",": 28725,
+ "b": 28726,
+ "w": 28727,
+ "v": 28728,
+ "k": 28729,
+ "_": 28730,
+ ")": 28731,
+ "(": 28732,
+ "-": 28733,
+ "0": 28734,
+ "S": 28735,
+ "*": 28736,
+ "I": 28737,
+ "T": 28738,
+ "\"": 28739,
+ "1": 28740,
+ "A": 28741,
+ "'": 28742,
+ "C": 28743,
+ "x": 28744,
+ ";": 28745,
+ "=": 28746,
+ ":": 28747,
+ "/": 28748,
+ "E": 28749,
+ "2": 28750,
+ "{": 28751,
+ "}": 28752,
+ "P": 28753,
+ "R": 28754,
+ "M": 28755,
+ "\\": 28756,
+ "D": 28757,
+ "L": 28758,
+ "N": 28759,
+ "B": 28760,
+ "о": 28761,
+ "O": 28762,
+ "а": 28763,
+ "z": 28764,
+ "F": 28765,
+ "|": 28766,
+ ">": 28767,
+ "j": 28768,
+ "H": 28769,
+ "3": 28770,
+ "#": 28771,
+ "и": 28772,
+ "е": 28773,
+ "9": 28774,
+ "q": 28775,
+ "$": 28776,
+ "G": 28777,
+ "н": 28778,
+ "U": 28779,
+ "W": 28780,
+ "4": 28781,
+ "5": 28782,
+ "8": 28783,
+ "6": 28784,
+ "р": 28785,
+ "т": 28786,
+ "7": 28787,
+ "с": 28788,
+ "<": 28789,
+ "V": 28790,
+ "в": 28791,
+ "[": 28792,
+ "]": 28793,
+ "л": 28794,
+ "к": 28795,
+ "K": 28796,
+ "é": 28797,
+ "J": 28798,
+ "д": 28799,
+ "&": 28800,
+ "\r": 28801,
+ "Y": 28802,
+ "м": 28803,
+ "?": 28804,
+ "у": 28805,
+ "+": 28806,
+ "п": 28807,
+ "!": 28808,
+ "’": 28809,
+ "г": 28810,
+ "я": 28811,
+ "з": 28812,
+ "і": 28813,
+ "X": 28814,
+ "^": 28815,
+ "–": 28816,
+ "б": 28817,
+ "@": 28818,
+ "й": 28819,
+ "á": 28820,
+ "—": 28821,
+ "ь": 28822,
+ "%": 28823,
+ "Q": 28824,
+ "ó": 28825,
+ "ч": 28826,
+ "í": 28827,
+ "Z": 28828,
+ "ы": 28829,
+ "ä": 28830,
+ "х": 28831,
+ "`": 28832,
+ "ц": 28833,
+ "ö": 28834,
+ "“": 28835,
+ "ж": 28836,
+ "ü": 28837,
+ "”": 28838,
+ "à": 28839,
+ "è": 28840,
+ "ш": 28841,
+ "ю": 28842,
+ "ł": 28843,
+ "С": 28844,
+ "~": 28845,
+ "ф": 28846,
+ "П": 28847,
+ "»": 28848,
+ "В": 28849,
+ "«": 28850,
+ "å": 28851,
+ "К": 28852,
+ "щ": 28853,
+ "·": 28854,
+ "ј": 28855,
+ "М": 28856,
+ "ç": 28857,
+ "А": 28858,
+ "Н": 28859,
+ "Р": 28860,
+ "Б": 28861,
+ "č": 28862,
+ "ú": 28863,
+ "ę": 28864,
+ "ã": 28865,
+ "ą": 28866,
+ "ă": 28867,
+ "Д": 28868,
+ "ї": 28869,
+ "ъ": 28870,
+ "ě": 28871,
+ "Г": 28872,
+ "š": 28873,
+ "О": 28874,
+ "Т": 28875,
+ "ê": 28876,
+ "ñ": 28877,
+ "…": 28878,
+ "ž": 28879,
+ "ß": 28880,
+ "ё": 28881,
+ "ż": 28882,
+ "ř": 28883,
+ "ś": 28884,
+ "Л": 28885,
+ "ő": 28886,
+ "„": 28887,
+ "э": 28888,
+ "ý": 28889,
+ "У": 28890,
+ "â": 28891,
+ "И": 28892,
+ "є": 28893,
+ "‘": 28894,
+ "î": 28895,
+ "З": 28896,
+ "Ф": 28897,
+ "ò": 28898,
+ "•": 28899,
+ "ć": 28900,
+ "É": 28901,
+ "°": 28902,
+ "ș": 28903,
+ "Х": 28904,
+ "ț": 28905,
+ "ô": 28906,
+ "Е": 28907,
+ "ń": 28908,
+ "Ч": 28909,
+ "Ш": 28910,
+ "ø": 28911,
+ "ù": 28912,
+ "ů": 28913,
+ "的": 28914,
+ "ا": 28915,
+ "æ": 28916,
+ "њ": 28917,
+ "љ": 28918,
+ "ë": 28919,
+ "ï": 28920,
+ "Э": 28921,
+ "£": 28922,
+ "−": 28923,
+ ",": 28924,
+ "õ": 28925,
+ "ћ": 28926,
+ "": 28927,
+ "Ц": 28928,
+ "І": 28929,
+ "ā": 28930,
+ "ű": 28931,
+ "†": 28932,
+ "ل": 28933,
+ "ō": 28934,
+ "": 28935,
+ "º": 28936,
+ "Я": 28937,
+ "′": 28938,
+ "Á": 28939,
+ "Ö": 28940,
+ "²": 28941,
+ "Ж": 28942,
+ "ì": 28943,
+ "。": 28944,
+ "数": 28945,
+ "×": 28946,
+ "ر": 28947,
+ "α": 28948,
+ "́": 28949,
+ "Ю": 28950,
+ "û": 28951,
+ "œ": 28952,
+ "ı": 28953,
+ "م": 28954,
+ "ن": 28955,
+ "ª": 28956,
+ "ź": 28957,
+ "ο": 28958,
+ "″": 28959,
+ "€": 28960,
+ "Ü": 28961,
+ "و": 28962,
+ "用": 28963,
+ "À": 28964,
+ "Č": 28965,
+ "Š": 28966,
+ "ت": 28967,
+ "د": 28968,
+ "一": 28969,
+ "¿": 28970,
+ "是": 28971,
+ "ي": 28972,
+ "ђ": 28973,
+ "®": 28974,
+ "ی": 28975,
+ "ν": 28976,
+ "đ": 28977,
+ "τ": 28978,
+ "─": 28979,
+ "ι": 28980,
+ "ε": 28981,
+ "→": 28982,
+ "ب": 28983,
+ "Å": 28984,
+ "ū": 28985,
+ "№": 28986,
+ "ş": 28987,
+ "不": 28988,
+ "џ": 28989,
+ "ー": 28990,
+ "中": 28991,
+ "Î": 28992,
+ "の": 28993,
+ ":": 28994,
+ "个": 28995,
+ "Й": 28996,
+ "ρ": 28997,
+ "有": 28998,
+ "Ä": 28999,
+ " ": 29000,
+ "ī": 29001,
+ "©": 29002,
+ "为": 29003,
+ "ه": 29004,
+ "י": 29005,
+ "ו": 29006,
+ "时": 29007,
+ "س": 29008,
+ "Ś": 29009,
+ "在": 29010,
+ "件": 29011,
+ "取": 29012,
+ "ς": 29013,
+ "™": 29014,
+ "이": 29015,
+ "σ": 29016,
+ "μ": 29017,
+ "定": 29018,
+ "文": 29019,
+ "据": 29020,
+ "置": 29021,
+ "Ž": 29022,
+ "±": 29023,
+ "表": 29024,
+ "成": 29025,
+ "ň": 29026,
+ "λ": 29027,
+ "¡": 29028,
+ "È": 29029,
+ "π": 29030,
+ "字": 29031,
+ "│": 29032,
+ "Ј": 29033,
+ "回": 29034,
+ "Є": 29035,
+ "到": 29036,
+ "行": 29037,
+ "§": 29038,
+ "½": 29039,
+ "ع": 29040,
+ "、": 29041,
+ "Ł": 29042,
+ "다": 29043,
+ "ン": 29044,
+ "κ": 29045,
+ "名": 29046,
+ "ה": 29047,
+ "入": 29048,
+ "η": 29049,
+ "大": 29050,
+ "对": 29051,
+ "可": 29052,
+ "Â": 29053,
+ "上": 29054,
+ "█": 29055,
+ "新": 29056,
+ "ف": 29057,
+ "加": 29058,
+ "要": 29059,
+ "Ż": 29060,
+ "下": 29061,
+ "分": 29062,
+ "值": 29063,
+ "ת": 29064,
+ "出": 29065,
+ "类": 29066,
+ "请": 29067,
+ "": 29068,
+ "息": 29069,
+ "Ú": 29070,
+ "υ": 29071,
+ "获": 29072,
+ "示": 29073,
+ "以": 29074,
+ "ר": 29075,
+ "接": 29076,
+ "ל": 29077,
+ "を": 29078,
+ "存": 29079,
+ "信": 29080,
+ "设": 29081,
+ "方": 29082,
+ "ش": 29083,
+ "能": 29084,
+ "点": 29085,
+ "人": 29086,
+ "前": 29087,
+ "ğ": 29088,
+ "作": 29089,
+ "═": 29090,
+ "↘": 29091,
+ "ð": 29092,
+ "理": 29093,
+ "■": 29094,
+ "法": 29095,
+ "️": 29096,
+ "ˈ": 29097,
+ "果": 29098,
+ "发": 29099,
+ "ح": 29100,
+ "γ": 29101,
+ "ɵ": 29102,
+ "า": 29103,
+ "َ": 29104,
+ "了": 29105,
+ "户": 29106,
+ "Í": 29107,
+ "ə": 29108,
+ "ス": 29109,
+ "查": 29110,
+ "し": 29111,
+ "מ": 29112,
+ "单": 29113,
+ "ť": 29114,
+ "ق": 29115,
+ "る": 29116,
+ "间": 29117,
+ "如": 29118,
+ "本": 29119,
+ "后": 29120,
+ "ί": 29121,
+ "式": 29122,
+ "ト": 29123,
+ "Щ": 29124,
+ "Ó": 29125,
+ "す": 29126,
+ "א": 29127,
+ "生": 29128,
+ "动": 29129,
+ "ک": 29130,
+ "和": 29131,
+ "い": 29132,
+ "": 29133,
+ "ა": 29134,
+ "가": 29135,
+ "하": 29136,
+ "�": 29137,
+ "小": 29138,
+ "返": 29139,
+ "否": 29140,
+ "ة": 29141,
+ "日": 29142,
+ "로": 29143,
+ "标": 29144,
+ "码": 29145,
+ "地": 29146,
+ "位": 29147,
+ "에": 29148,
+ " ": 29149,
+ "列": 29150,
+ "수": 29151,
+ "β": 29152,
+ "除": 29153,
+ "使": 29154,
+ "ש": 29155,
+ "ج": 29156,
+ "イ": 29157,
+ "δ": 29158,
+ "自": 29159,
+ "于": 29160,
+ "지": 29161,
+ "当": 29162,
+ "所": 29163,
+ "기": 29164,
+ "ი": 29165,
+ "ב": 29166,
+ "ร": 29167,
+ "★": 29168,
+ "子": 29169,
+ "号": 29170,
+ "ك": 29171,
+ "参": 29172,
+ "型": 29173,
+ "に": 29174,
+ "는": 29175,
+ "这": 29176,
+ "开": 29177,
+ "น": 29178,
+ "会": 29179,
+ "器": 29180,
+ "面": 29181,
+ "ル": 29182,
+ "图": 29183,
+ "度": 29184,
+ ")": 29185,
+ "(": 29186,
+ "의": 29187,
+ "内": 29188,
+ "을": 29189,
+ "最": 29190,
+ "": 29191,
+ "化": 29192,
+ "建": 29193,
+ "니": 29194,
+ "量": 29195,
+ "😂": 29196,
+ "始": 29197,
+ "ē": 29198,
+ "خ": 29199,
+ "를": 29200,
+ "ά": 29201,
+ "过": 29202,
+ "³": 29203,
+ "´": 29204,
+ "组": 29205,
+ "功": 29206,
+ "": 29207,
+ "": 29208,
+ "区": 29209,
+ "ز": 29210,
+ "ґ": 29211,
+ "ό": 29212,
+ "ッ": 29213,
+ "ω": 29214,
+ "Ç": 29215,
+ "选": 29216,
+ "通": 29217,
+ "结": 29218,
+ "录": 29219,
+ "改": 29220,
+ "ク": 29221,
+ "目": 29222,
+ "指": 29223,
+ "务": 29224,
+ "๐": 29225,
+ "输": 29226,
+ "た": 29227,
+ "อ": 29228,
+ "关": 29229,
+ "で": 29230,
+ "调": 29231,
+ "ा": 29232,
+ "정": 29233,
+ "合": 29234,
+ "已": 29235,
+ "시": 29236,
+ "部": 29237,
+ "页": 29238,
+ "━": 29239,
+ "ː": 29240,
+ "ま": 29241,
+ "我": 29242,
+ "求": 29243,
+ "市": 29244,
+ "次": 29245,
+ "נ": 29246,
+ "实": 29247,
+ "将": 29248,
+ "重": 29249,
+ "更": 29250,
+ "制": 29251,
+ "符": 29252,
+ "配": 29253,
+ "象": 29254,
+ "θ": 29255,
+ "ก": 29256,
+ "て": 29257,
+ "进": 29258,
+ "需": 29259,
+ "Đ": 29260,
+ "性": 29261,
+ "认": 29262,
+ "来": 29263,
+ "题": 29264,
+ "程": 29265,
+ "模": 29266,
+ "!": 29267,
+ "失": 29268,
+ "口": 29269,
+ "な": 29270,
+ "έ": 29271,
+ "": 29272,
+ "空": 29273,
+ "": 29274,
+ "期": 29275,
+ "者": 29276,
+ "は": 29277,
+ "Ђ": 29278,
+ "提": 29279,
+ "ή": 29280,
+ "ラ": 29281,
+ "한": 29282,
+ "态": 29283,
+ "复": 29284,
+ "ง": 29285,
+ "ე": 29286,
+ "Ø": 29287,
+ "리": 29288,
+ "修": 29289,
+ "‚": 29290,
+ "得": 29291,
+ "多": 29292,
+ "格": 29293,
+ "자": 29294,
+ "ע": 29295,
+ "่": 29296,
+ "函": 29297,
+ "应": 29298,
+ "↗": 29299,
+ "्": 29300,
+ "เ": 29301,
+ "正": 29302,
+ "注": 29303,
+ "스": 29304,
+ "서": 29305,
+ "リ": 29306,
+ "φ": 29307,
+ "ص": 29308,
+ "が": 29309,
+ "则": 29310,
+ "消": 29311,
+ "节": 29312,
+ "序": 29313,
+ "代": 29314,
+ "사": 29315,
+ "と": 29316,
+ "ד": 29317,
+ "้": 29318,
+ "र": 29319,
+ "此": 29320,
+ "保": 29321,
+ "ア": 29322,
+ "ư": 29323,
+ "인": 29324,
+ "ė": 29325,
+ "处": 29326,
+ "删": 29327,
+ "ɛ": 29328,
+ "容": 29329,
+ "ط": 29330,
+ "": 29331,
+ "之": 29332,
+ "包": 29333,
+ "状": 29334,
+ "ド": 29335,
+ "İ": 29336,
+ "体": 29337,
+ "同": 29338,
+ "事": 29339,
+ "🙂": 29340,
+ "タ": 29341,
+ "χ": 29342,
+ "ʿ": 29343,
+ "Ș": 29344,
+ "主": 29345,
+ "品": 29346,
+ "ק": 29347,
+ "询": 29348,
+ "创": 29349,
+ "该": 29350,
+ " ": 29351,
+ "元": 29352,
+ "第": 29353,
+ "天": 29354,
+ "或": 29355,
+ "年": 29356,
+ "转": 29357,
+ "ח": 29358,
+ "传": 29359,
+ "ţ": 29360,
+ "路": 29361,
+ "例": 29362,
+ "机": 29363,
+ "Ã": 29364,
+ "ď": 29365,
+ "高": 29366,
+ "相": 29367,
+ "โ": 29368,
+ "片": 29369,
+ "―": 29370,
+ "操": 29371,
+ "ա": 29372,
+ "ม": 29373,
+ "全": 29374,
+ "无": 29375,
+ "月": 29376,
+ "称": 29377,
+ "ั": 29378,
+ "就": 29379,
+ "": 29380,
+ "明": 29381,
+ "计": 29382,
+ "你": 29383,
+ "败": 29384,
+ "密": 29385,
+ "解": 29386,
+ "れ": 29387,
+ "أ": 29388,
+ "变": 29389,
+ "段": 29390,
+ "条": 29391,
+ "默": 29392,
+ "●": 29393,
+ "ล": 29394,
+ "色": 29395,
+ "断": 29396,
+ "商": 29397,
+ "ם": 29398,
+ "か": 29399,
+ "里": 29400,
+ "系": 29401,
+ "编": 29402,
+ "错": 29403,
+ "트": 29404,
+ "只": 29405,
+ "县": 29406,
+ "ს": 29407,
+ "常": 29408,
+ "初": 29409,
+ "ɔ": 29410,
+ "Α": 29411,
+ "フ": 29412,
+ "►": 29413,
+ "等": 29414,
+ "일": 29415,
+ "・": 29416,
+ "Ō": 29417,
+ "情": 29418,
+ "现": 29419,
+ "Ř": 29420,
+ "ِ": 29421,
+ "さ": 29422,
+ "ạ": 29423,
+ "용": 29424,
+ "证": 29425,
+ "해": 29426,
+ "手": 29427,
+ "支": 29428,
+ "입": 29429,
+ "服": 29430,
+ "்": 29431,
+ "道": 29432,
+ "어": 29433,
+ "送": 29434,
+ "载": 29435,
+ "限": 29436,
+ "线": 29437,
+ "属": 29438,
+ "": 29439,
+ "他": 29440,
+ "放": 29441,
+ "记": 29442,
+ "公": 29443,
+ "没": 29444,
+ "添": 29445,
+ "显": 29446,
+ "บ": 29447,
+ "ย": 29448,
+ "რ": 29449,
+ "其": 29450,
+ "集": 29451,
+ "金": 29452,
+ "国": 29453,
+ "任": 29454,
+ "ە": 29455,
+ "话": 29456,
+ "并": 29457,
+ "被": 29458,
+ "ύ": 29459,
+ "都": 29460,
+ "گ": 29461,
+ "意": 29462,
+ "כ": 29463,
+ "经": 29464,
+ "성": 29465,
+ "看": 29466,
+ "פ": 29467,
+ "址": 29468,
+ "ס": 29469,
+ "드": 29470,
+ "交": 29471,
+ "¼": 29472,
+ "Џ": 29473,
+ "完": 29474,
+ "Δ": 29475,
+ "义": 29476,
+ "보": 29477,
+ "向": 29478,
+ "换": 29479,
+ "山": 29480,
+ "算": 29481,
+ "二": 29482,
+ "پ": 29483,
+ "⁄": 29484,
+ "判": 29485,
+ "级": 29486,
+ "工": 29487,
+ "ด": 29488,
+ "⠀": 29489,
+ "家": 29490,
+ "レ": 29491,
+ "三": 29492,
+ "原": 29493,
+ "】": 29494,
+ "长": 29495,
+ "া": 29496,
+ "管": 29497,
+ "ѝ": 29498,
+ "क": 29499,
+ "学": 29500,
+ "ロ": 29501,
+ "验": 29502,
+ "写": 29503,
+ "Œ": 29504,
+ "从": 29505,
+ "【": 29506,
+ "收": 29507,
+ "ả": 29508,
+ "未": 29509,
+ "登": 29510,
+ "고": 29511,
+ "源": 29512,
+ "每": 29513,
+ "µ": 29514,
+ "误": 29515,
+ "り": 29516,
+ "요": 29517,
+ "按": 29518,
+ "ว": 29519,
+ "权": 29520,
+ "根": 29521,
+ "プ": 29522,
+ "串": 29523,
+ "ส": 29524,
+ "›": 29525,
+ "제": 29526,
+ "シ": 29527,
+ "Ş": 29528,
+ "确": 29529,
+ "好": 29530,
+ "统": 29531,
+ "效": 29532,
+ "网": 29533,
+ "\u0001": 29534,
+ "物": 29535,
+ "아": 29536,
+ "也": 29537,
+ "은": 29538,
+ "ệ": 29539,
+ "न": 29540,
+ "项": 29541,
+ "资": 29542,
+ "こ": 29543,
+ "引": 29544,
+ "ジ": 29545,
+ "ค": 29546,
+ "版": 29547,
+ "ท": 29548,
+ "平": 29549,
+ "们": 29550,
+ "与": 29551,
+ "き": 29552,
+ "移": 29553,
+ "ि": 29554,
+ "素": 29555,
+ "执": 29556,
+ "주": 29557,
+ "‐": 29558,
+ "Ґ": 29559,
+ "ี": 29560,
+ "板": 29561,
+ "问": 29562,
+ "Ε": 29563,
+ "安": 29564,
+ "면": 29565,
+ "소": 29566,
+ "ต": 29567,
+ "ิ": 29568,
+ "持": 29569,
+ "습": 29570,
+ "Σ": 29571,
+ "ら": 29572,
+ "コ": 29573,
+ "心": 29574,
+ "Π": 29575,
+ "打": 29576,
+ "」": 29577,
+ "상": 29578,
+ "「": 29579,
+ "检": 29580,
+ "库": 29581,
+ "÷": 29582,
+ "으": 29583,
+ "测": 29584,
+ "ん": 29585,
+ "े": 29586,
+ "ُ": 29587,
+ "力": 29588,
+ "直": 29589,
+ "由": 29590,
+ "ى": 29591,
+ "试": 29592,
+ "必": 29593,
+ "端": 29594,
+ "ʻ": 29595,
+ "先": 29596,
+ "↑": 29597,
+ "命": 29598,
+ "도": 29599,
+ "전": 29600,
+ "ห": 29601,
+ "员": 29602,
+ "ɪ": 29603,
+ "있": 29604,
+ "比": 29605,
+ "ṣ": 29606,
+ "時": 29607,
+ "择": 29608,
+ "ذ": 29609,
+ "テ": 29610,
+ "": 29611,
+ "构": 29612,
+ "备": 29613,
+ "그": 29614,
+ "链": 29615,
+ "说": 29616,
+ "ლ": 29617,
+ "ן": 29618,
+ "签": 29619,
+ "う": 29620,
+ "غ": 29621,
+ "ế": 29622,
+ "ض": 29623,
+ "ḥ": 29624,
+ "启": 29625,
+ "력": 29626,
+ "ო": 29627,
+ "付": 29628,
+ "მ": 29629,
+ "索": 29630,
+ "特": 29631,
+ "ג": 29632,
+ "西": 29633,
+ "대": 29634,
+ "├": 29635,
+ "": 29636,
+ "": 29637,
+ "外": 29638,
+ "צ": 29639,
+ "头": 29640,
+ "连": 29641,
+ "流": 29642,
+ "◄": 29643,
+ "デ": 29644,
+ "カ": 29645,
+ "র": 29646,
+ "오": 29647,
+ "找": 29648,
+ "清": 29649,
+ "🤣": 29650,
+ "去": 29651,
+ "₹": 29652,
+ "경": 29653,
+ "グ": 29654,
+ "ْ": 29655,
+ "¢": 29656,
+ "因": 29657,
+ "": 29658,
+ "Κ": 29659,
+ "增": 29660,
+ "知": 29661,
+ "¶": 29662,
+ "像": 29663,
+ "♥": 29664,
+ "터": 29665,
+ "く": 29666,
+ "ậ": 29667,
+ "メ": 29668,
+ "Æ": 29669,
+ "省": 29670,
+ "स": 29671,
+ "म": 29672,
+ "❤": 29673,
+ "あ": 29674,
+ "样": 29675,
+ "起": 29676,
+ "台": 29677,
+ "读": 29678,
+ "角": 29679,
+ "南": 29680,
+ "整": 29681,
+ "订": 29682,
+ "\f": 29683,
+ "ט": 29684,
+ "マ": 29685,
+ "্": 29686,
+ "우": 29687,
+ "ն": 29688,
+ "您": 29689,
+ "ئ": 29690,
+ "基": 29691,
+ "水": 29692,
+ "생": 29693,
+ "‑": 29694,
+ "나": 29695,
+ "画": 29696,
+ "描": 29697,
+ "击": 29698,
+ "っ": 29699,
+ "라": 29700,
+ "ნ": 29701,
+ "ր": 29702,
+ "业": 29703,
+ "ბ": 29704,
+ "别": 29705,
+ "♦": 29706,
+ "ィ": 29707,
+ "त": 29708,
+ "给": 29709,
+ "문": 29710,
+ "形": 29711,
+ "控": 29712,
+ "然": 29713,
+ "동": 29714,
+ "Њ": 29715,
+ "": 29716,
+ "东": 29717,
+ "ป": 29718,
+ "州": 29719,
+ "排": 29720,
+ "세": 29721,
+ "装": 29722,
+ "할": 29723,
+ "Ć": 29724,
+ "∞": 29725,
+ "海": 29726,
+ "城": 29727,
+ "键": 29728,
+ "径": 29729,
+ "호": 29730,
+ "화": 29731,
+ "្": 29732,
+ "料": 29733,
+ "ơ": 29734,
+ "ी": 29735,
+ "ウ": 29736,
+ "具": 29737,
+ "ブ": 29738,
+ "块": 29739,
+ "再": 29740,
+ "ố": 29741,
+ "电": 29742,
+ ";": 29743,
+ "위": 29744,
+ "两": 29745,
+ "而": 29746,
+ "장": 29747,
+ "آ": 29748,
+ "Ț": 29749,
+ "バ": 29750,
+ "还": 29751,
+ "令": 29752,
+ "キ": 29753,
+ "ّ": 29754,
+ "값": 29755,
+ "번": 29756,
+ "만": 29757,
+ "总": 29758,
+ "ल": 29759,
+ "▲": 29760,
+ "异": 29761,
+ "光": 29762,
+ "客": 29763,
+ "非": 29764,
+ "ị": 29765,
+ "": 29766,
+ "þ": 29767,
+ "設": 29768,
+ "述": 29769,
+ "합": 29770,
+ "?": 29771,
+ "✔": 29772,
+ "导": 29773,
+ "ṇ": 29774,
+ "부": 29775,
+ "˙": 29776,
+ "Τ": 29777,
+ "も": 29778,
+ "구": 29779,
+ "镇": 29780,
+ "작": 29781,
+ "░": 29782,
+ "步": 29783,
+ "ộ": 29784,
+ "活": 29785,
+ "พ": 29786,
+ "←": 29787,
+ "ǎ": 29788,
+ "จ": 29789,
+ "束": 29790,
+ "ـ": 29791,
+ "": 29792,
+ "那": 29793,
+ "प": 29794,
+ "エ": 29795,
+ "志": 29796,
+ "么": 29797,
+ "运": 29798,
+ "北": 29799,
+ "超": 29800,
+ "་": 29801,
+ "布": 29802,
+ "ώ": 29803,
+ "͡": 29804,
+ "少": 29805,
+ "파": 29806,
+ "ʃ": 29807,
+ "ム": 29808,
+ "": 29809,
+ "卡": 29810,
+ "ন": 29811,
+ "Μ": 29812,
+ "ɑ": 29813,
+ "😉": 29814,
+ "辑": 29815,
+ "원": 29816,
+ "美": 29817,
+ "产": 29818,
+ "利": 29819,
+ "모": 29820,
+ "联": 29821,
+ "界": 29822,
+ "체": 29823,
+ "种": 29824,
+ "王": 29825,
+ "ľ": 29826,
+ "여": 29827,
+ "메": 29828,
+ "域": 29829,
+ "ვ": 29830,
+ "立": 29831,
+ "록": 29832,
+ "게": 29833,
+ "إ": 29834,
+ "ṭ": 29835,
+ "神": 29836,
+ "ո": 29837,
+ "音": 29838,
+ "☆": 29839,
+ "Ñ": 29840,
+ "조": 29841,
+ "動": 29842,
+ "缓": 29843,
+ "과": 29844,
+ "报": 29845,
+ "ʼ": 29846,
+ "ា": 29847,
+ "되": 29848,
+ "ե": 29849,
+ "视": 29850,
+ "ช": 29851,
+ "详": 29852,
+ "แ": 29853,
+ "¦": 29854,
+ "把": 29855,
+ "க": 29856,
+ "ি": 29857,
+ "출": 29858,
+ "비": 29859,
+ "边": 29860,
+ "框": 29861,
+ "व": 29862,
+ "サ": 29863,
+ "Ι": 29864,
+ "Ο": 29865,
+ "オ": 29866,
+ "¾": 29867,
+ "历": 29868,
+ "ŏ": 29869,
+ "门": 29870,
+ "ข": 29871,
+ "含": 29872,
+ "¬": 29873,
+ "周": 29874,
+ "填": 29875,
+ "待": 29876,
+ "ะ": 29877,
+ "დ": 29878,
+ "Ї": 29879,
+ "额": 29880,
+ "음": 29881,
+ "四": 29882,
+ "だ": 29883,
+ "회": 29884,
+ "止": 29885,
+ "率": 29886,
+ "环": 29887,
+ "パ": 29888,
+ "래": 29889,
+ "闭": 29890,
+ "̀": 29891,
+ "语": 29892,
+ "개": 29893,
+ "身": 29894,
+ "藏": 29895,
+ "य": 29896,
+ "된": 29897,
+ "即": 29898,
+ "拉": 29899,
+ "선": 29900,
+ "변": 29901,
+ "≥": 29902,
+ "ุ": 29903,
+ "些": 29904,
+ "🤷": 29905,
+ "せ": 29906,
+ "左": 29907,
+ "ợ": 29908,
+ "右": 29909,
+ "ể": 29910,
+ "내": 29911,
+ "ּ": 29912,
+ "ז": 29913,
+ "ে": 29914,
+ "告": 29915,
+ "ấ": 29916,
+ "白": 29917,
+ "账": 29918,
+ "费": 29919,
+ "江": 29920,
+ "み": 29921,
+ "‹": 29922,
+ "์": 29923,
+ "": 29924,
+ "造": 29925,
+ "但": 29926,
+ "十": 29927,
+ "它": 29928,
+ "ं": 29929,
+ "ŋ": 29930,
+ "ў": 29931,
+ "セ": 29932,
+ "女": 29933,
+ "⣿": 29934,
+ "ի": 29935,
+ "京": 29936,
+ "触": 29937,
+ "함": 29938,
+ "들": 29939,
+ "Ā": 29940,
+ "": 29941,
+ "石": 29942,
+ "よ": 29943,
+ "田": 29944,
+ "易": 29945,
+ "规": 29946,
+ "展": 29947,
+ "¯": 29948,
+ "做": 29949,
+ "星": 29950,
+ "უ": 29951,
+ "✓": 29952,
+ "თ": 29953,
+ "供": 29954,
+ "명": 29955,
+ "ξ": 29956,
+ "己": 29957,
+ "且": 29958,
+ "插": 29959,
+ "景": 29960,
+ "切": 29961,
+ "ไ": 29962,
+ "없": 29963,
+ "ョ": 29964,
+ "及": 29965,
+ "Ν": 29966,
+ "미": 29967,
+ "ث": 29968,
+ "데": 29969,
+ "价": 29970,
+ "乡": 29971,
+ "ह": 29972,
+ "チ": 29973,
+ "真": 29974,
+ "太": 29975,
+ "ู": 29976,
+ "ダ": 29977,
+ "局": 29978,
+ "♂": 29979,
+ "退": 29980,
+ "ு": 29981,
+ "ক": 29982,
+ "ி": 29983,
+ "何": 29984,
+ "😭": 29985,
+ "¥": 29986,
+ "": 29987,
+ "≈": 29988,
+ "司": 29989,
+ "层": 29990,
+ "실": 29991,
+ "站": 29992,
+ "首": 29993,
+ "款": 29994,
+ "រ": 29995,
+ "間": 29996,
+ "ָ": 29997,
+ "저": 29998,
+ "监": 29999,
+ "ァ": 30000,
+ "册": 30001,
+ "案": 30002,
+ "ो": 30003,
+ "反": 30004,
+ "听": 30005,
+ "族": 30006,
+ "析": 30007,
+ "ื": 30008,
+ "秒": 30009,
+ "공": 30010,
+ "": 30011,
+ "🚀": 30012,
+ "거": 30013,
+ "재": 30014,
+ "": 30015,
+ "場": 30016,
+ "广": 30017,
+ "播": 30018,
+ "║": 30019,
+ "⋅": 30020,
+ "技": 30021,
+ "贴": 30022,
+ "想": 30023,
+ "ʁ": 30024,
+ "ớ": 30025,
+ "ャ": 30026,
+ "중": 30027,
+ "》": 30028,
+ "速": 30029,
+ "频": 30030,
+ "队": 30031,
+ "ำ": 30032,
+ "け": 30033,
+ "ु": 30034,
+ "≤": 30035,
+ "↓": 30036,
+ "须": 30037,
+ "菜": 30038,
+ "̃": 30039,
+ "剪": 30040,
+ "버": 30041,
+ "ェ": 30042,
+ "Λ": 30043,
+ "细": 30044,
+ "選": 30045,
+ "द": 30046,
+ "¹": 30047,
+ "许": 30048,
+ "ầ": 30049,
+ "世": 30050,
+ "ュ": 30051,
+ "ء": 30052,
+ "‡": 30053,
+ "候": 30054,
+ "共": 30055,
+ "크": 30056,
+ "ธ": 30057,
+ "설": 30058,
+ "快": 30059,
+ "友": 30060,
+ "ְ": 30061,
+ "车": 30062,
+ "推": 30063,
+ "花": 30064,
+ "言": 30065,
+ "چ": 30066,
+ "至": 30067,
+ "開": 30068,
+ "校": 30069,
+ "個": 30070,
+ "村": 30071,
+ "つ": 30072,
+ "▌": 30073,
+ "ப": 30074,
+ "결": 30075,
+ "ņ": 30076,
+ "优": 30077,
+ "ន": 30078,
+ "达": 30079,
+ "核": 30080,
+ "ナ": 30081,
+ "场": 30082,
+ "影": 30083,
+ "🏻": 30084,
+ "钮": 30085,
+ "ظ": 30086,
+ "Þ": 30087,
+ "▼": 30088,
+ "お": 30089,
+ "份": 30090,
+ "微": 30091,
+ "ờ": 30092,
+ "识": 30093,
+ "행": 30094,
+ "《": 30095,
+ "ใ": 30096,
+ "ọ": 30097,
+ "预": 30098,
+ "ব": 30099,
+ "த": 30100,
+ "": 30101,
+ "ų": 30102,
+ "마": 30103,
+ "않": 30104,
+ "ɡ": 30105,
+ "계": 30106,
+ "연": 30107,
+ "五": 30108,
+ "Ź": 30109,
+ "め": 30110,
+ "很": 30111,
+ "간": 30112,
+ "無": 30113,
+ "ប": 30114,
+ "社": 30115,
+ "Ê": 30116,
+ "书": 30117,
+ "顶": 30118,
+ "ტ": 30119,
+ "才": 30120,
+ "云": 30121,
+ "└": 30122,
+ "ζ": 30123,
+ "،": 30124,
+ "搜": 30125,
+ "신": 30126,
+ "유": 30127,
+ "": 30128,
+ "✅": 30129,
+ "⭐": 30130,
+ "照": 30131,
+ "短": 30132,
+ "川": 30133,
+ "後": 30134,
+ "范": 30135,
+ "民": 30136,
+ "治": 30137,
+ "章": 30138,
+ "ề": 30139,
+ "바": 30140,
+ "ә": 30141,
+ "⚭": 30142,
+ "河": 30143,
+ "论": 30144,
+ "え": 30145,
+ "Ω": 30146,
+ "√": 30147,
+ "Ă": 30148,
+ "Γ": 30149,
+ "坐": 30150,
+ "적": 30151,
+ "停": 30152,
+ "추": 30153,
+ "受": 30154,
+ "♀": 30155,
+ "ʾ": 30156,
+ "树": 30157,
+ "林": 30158,
+ "치": 30159,
+ "fi": 30160,
+ "▒": 30161,
+ "张": 30162,
+ "着": 30163,
+ "访": 30164,
+ "考": 30165,
+ "教": 30166,
+ "ग": 30167,
+ "准": 30168,
+ "印": 30169,
+ "精": 30170,
+ "窗": 30171,
+ "宝": 30172,
+ "ち": 30173,
+ "围": 30174,
+ "ַ": 30175,
+ "致": 30176,
+ "モ": 30177,
+ "때": 30178,
+ "随": 30179,
+ "储": 30180,
+ "况": 30181,
+ "邮": 30182,
+ "武": 30183,
+ "⛔": 30184,
+ "维": 30185,
+ "ү": 30186,
+ "跳": 30187,
+ "ब": 30188,
+ "投": 30189,
+ "ủ": 30190,
+ "표": 30191,
+ "반": 30192,
+ "英": 30193,
+ "ʰ": 30194,
+ "👍": 30195,
+ "ज": 30196,
+ "带": 30197,
+ "為": 30198,
+ "续": 30199,
+ "ɨ": 30200,
+ "처": 30201,
+ "₂": 30202,
+ "클": 30203,
+ "群": 30204,
+ "현": 30205,
+ "风": 30206,
+ "购": 30207,
+ "ក": 30208,
+ "老": 30209,
+ "留": 30210,
+ "球": 30211,
+ "프": 30212,
+ "▄": 30213,
+ "史": 30214,
+ "Љ": 30215,
+ "⟩": 30216,
+ "분": 30217,
+ "გ": 30218,
+ "店": 30219,
+ "审": 30220,
+ "료": 30221,
+ "목": 30222,
+ "略": 30223,
+ "관": 30224,
+ "ִ": 30225,
+ "科": 30226,
+ "货": 30227,
+ "ம": 30228,
+ "络": 30229,
+ "阳": 30230,
+ "Ḥ": 30231,
+ "資": 30232,
+ "若": 30233,
+ "স": 30234,
+ "ہ": 30235,
+ "宽": 30236,
+ "见": 30237,
+ "ズ": 30238,
+ "游": 30239,
+ "방": 30240,
+ "ồ": 30241,
+ "ɾ": 30242,
+ "열": 30243,
+ "러": 30244,
+ "ך": 30245,
+ "\u001b": 30246,
+ "်": 30247,
+ "余": 30248,
+ "响": 30249,
+ "缩": 30250,
+ "ட": 30251,
+ "评": 30252,
+ "允": 30253,
+ "离": 30254,
+ "🤔": 30255,
+ "Ё": 30256,
+ "ʊ": 30257,
+ "黑": 30258,
+ "马": 30259,
+ "⟨": 30260,
+ "値": 30261,
+ "箱": 30262,
+ "야": 30263,
+ "ម": 30264,
+ "Ő": 30265,
+ "感": 30266,
+ "ツ": 30267,
+ "ụ": 30268,
+ "ポ": 30269,
+ "확": 30270,
+ "声": 30271,
+ "战": 30272,
+ "ѕ": 30273,
+ "変": 30274,
+ "와": 30275,
+ "父": 30276,
+ "ベ": 30277,
+ "助": 30278,
+ "업": 30279,
+ "ʲ": 30280,
+ "ÿ": 30281,
+ "充": 30282,
+ "强": 30283,
+ "博": 30284,
+ "ミ": 30285,
+ "销": 30286,
+ "당": 30287,
+ "記": 30288,
+ "什": 30289,
+ "匹": 30290,
+ "ւ": 30291,
+ "そ": 30292,
+ "코": 30293,
+ "ল": 30294,
+ "ŭ": 30295,
+ "午": 30296,
+ "ニ": 30297,
+ "\u0012": 30298,
+ "ʒ": 30299,
+ "შ": 30300,
+ "某": 30301,
+ "ォ": 30302,
+ "足": 30303,
+ "타": 30304,
+ "Ð": 30305,
+ "ხ": 30306,
+ "름": 30307,
+ "木": 30308,
+ "楼": 30309,
+ "최": 30310,
+ "红": 30311,
+ "¨": 30312,
+ "古": 30313,
+ "\u0006": 30314,
+ "단": 30315,
+ "今": 30316,
+ "ʔ": 30317,
+ "ट": 30318,
+ "ম": 30319,
+ "斯": 30320,
+ "語": 30321,
+ "Ÿ": 30322,
+ "🙄": 30323,
+ "牌": 30324,
+ "안": 30325,
+ "ស": 30326,
+ "颜": 30327,
+ "~": 30328,
+ "克": 30329,
+ "深": 30330,
+ "금": 30331,
+ "會": 30332,
+ "尔": 30333,
+ "释": 30334,
+ "批": 30335,
+ "산": 30336,
+ "野": 30337,
+ "防": 30338,
+ "Η": 30339,
+ "ө": 30340,
+ "ψ": 30341,
+ "ボ": 30342,
+ "": 30343,
+ "各": 30344,
+ "진": 30345,
+ "追": 30346,
+ "句": 30347,
+ "警": 30348,
+ "Φ": 30349,
+ "ѣ": 30350,
+ "ḍ": 30351,
+ "词": 30352,
+ "男": 30353,
+ "글": 30354,
+ "식": 30355,
+ "隐": 30356,
+ "복": 30357,
+ "盘": 30358,
+ "Ì": 30359,
+ "申": 30360,
+ "议": 30361,
+ "ザ": 30362,
+ "近": 30363,
+ "능": 30364,
+ "য": 30365,
+ "東": 30366,
+ "這": 30367,
+ "ர": 30368,
+ "距": 30369,
+ "院": 30370,
+ "德": 30371,
+ "ǐ": 30372,
+ "针": 30373,
+ "▀": 30374,
+ "↔": 30375,
+ "房": 30376,
+ "青": 30377,
+ "政": 30378,
+ "😅": 30379,
+ "递": 30380,
+ "প": 30381,
+ "波": 30382,
+ "ソ": 30383,
+ "绑": 30384,
+ "ビ": 30385,
+ "ễ": 30386,
+ "포": 30387,
+ "\u0010": 30388,
+ "ử": 30389,
+ "등": 30390,
+ "환": 30391,
+ "士": 30392,
+ "ত": 30393,
+ "Θ": 30394,
+ "초": 30395,
+ "境": 30396,
+ "差": 30397,
+ "采": 30398,
+ "디": 30399,
+ "ĩ": 30400,
+ "升": 30401,
+ "背": 30402,
+ "배": 30403,
+ "龙": 30404,
+ "街": 30405,
+ "್": 30406,
+ "ṛ": 30407,
+ "ু": 30408,
+ "弹": 30409,
+ "魔": 30410,
+ "객": 30411,
+ "‰": 30412,
+ "⌁": 30413,
+ "ἐ": 30414,
+ "禁": 30415,
+ "ผ": 30416,
+ "қ": 30417,
+ "島": 30418,
+ "ா": 30419,
+ "♭": 30420,
+ "百": 30421,
+ "ứ": 30422,
+ "ネ": 30423,
+ "专": 30424,
+ "來": 30425,
+ "刷": 30426,
+ "필": 30427,
+ "յ": 30428,
+ "ắ": 30429,
+ "华": 30430,
+ "Β": 30431,
+ "श": 30432,
+ "¸": 30433,
+ "屏": 30434,
+ "死": 30435,
+ "遍": 30436,
+ "검": 30437,
+ "Χ": 30438,
+ "것": 30439,
+ "八": 30440,
+ "览": 30441,
+ "택": 30442,
+ "唯": 30443,
+ "∙": 30444,
+ "¤": 30445,
+ "페": 30446,
+ "让": 30447,
+ "锁": 30448,
+ "무": 30449,
+ "思": 30450,
+ "隔": 30451,
+ "Ô": 30452,
+ "\u0013": 30453,
+ "ṃ": 30454,
+ "ワ": 30455,
+ "低": 30456,
+ "션": 30457,
+ "半": 30458,
+ "较": 30459,
+ "ត": 30460,
+ "享": 30461,
+ "积": 30462,
+ "": 30463,
+ "😊": 30464,
+ "典": 30465,
+ "ǔ": 30466,
+ "六": 30467,
+ "便": 30468,
+ "ɐ": 30469,
+ "简": 30470,
+ "继": 30471,
+ "仅": 30472,
+ "尾": 30473,
+ "": 30474,
+ "வ": 30475,
+ "կ": 30476,
+ "": 30477,
+ "영": 30478,
+ "火": 30479,
+ "湖": 30480,
+ "書": 30481,
+ "발": 30482,
+ "ハ": 30483,
+ "循": 30484,
+ "术": 30485,
+ "結": 30486,
+ "ļ": 30487,
+ "乐": 30488,
+ "滤": 30489,
+ "종": 30490,
+ "ถ": 30491,
+ "ὶ": 30492,
+ "满": 30493,
+ "╝": 30494,
+ "わ": 30495,
+ "ど": 30496,
+ "็": 30497,
+ "형": 30498,
+ "國": 30499,
+ "ự": 30500,
+ "線": 30501,
+ "블": 30502,
+ "封": 30503,
+ "確": 30504,
+ "依": 30505,
+ "ս": 30506,
+ "永": 30507,
+ "색": 30508,
+ "歌": 30509,
+ "數": 30510,
+ "福": 30511,
+ "삭": 30512,
+ "実": 30513,
+ "레": 30514,
+ "ſ": 30515,
+ "千": 30516,
+ "\u000e": 30517,
+ "母": 30518,
+ "더": 30519,
+ "임": 30520,
+ "տ": 30521,
+ "ے": 30522,
+ "几": 30523,
+ "双": 30524,
+ "노": 30525,
+ "ณ": 30526,
+ "掉": 30527,
+ "Ρ": 30528,
+ "ἀ": 30529,
+ "標": 30530,
+ "長": 30531,
+ "档": 30532,
+ "태": 30533,
+ "ペ": 30534,
+ "본": 30535,
+ "": 30536,
+ "底": 30537,
+ "终": 30538,
+ "請": 30539,
+ "კ": 30540,
+ "̯": 30541,
+ "예": 30542,
+ "▬": 30543,
+ "報": 30544,
+ "ピ": 30545,
+ "๏": 30546,
+ "暂": 30547,
+ "李": 30548,
+ "Υ": 30549,
+ "\u0005": 30550,
+ "\u0002": 30551,
+ "替": 30552,
+ "운": 30553,
+ "射": 30554,
+ "\u0018": 30555,
+ "매": 30556,
+ "\u0011": 30557,
+ "🏼": 30558,
+ "票": 30559,
+ "附": 30560,
+ "ノ": 30561,
+ "ũ": 30562,
+ "压": 30563,
+ "阿": 30564,
+ "Ò": 30565,
+ "테": 30566,
+ "∼": 30567,
+ "万": 30568,
+ "մ": 30569,
+ "후": 30570,
+ "普": 30571,
+ "截": 30572,
+ "속": 30573,
+ "括": 30574,
+ "😀": 30575,
+ "ை": 30576,
+ "▶": 30577,
+ "까": 30578,
+ "ট": 30579,
+ "曲": 30580,
+ "师": 30581,
+ "钱": 30582,
+ "栏": 30583,
+ "Ы": 30584,
+ "走": 30585,
+ "ữ": 30586,
+ "": 30587,
+ "归": 30588,
+ "점": 30589,
+ "🔥": 30590,
+ "었": 30591,
+ "連": 30592,
+ "私": 30593,
+ "청": 30594,
+ "刘": 30595,
+ "免": 30596,
+ "": 30597,
+ "奖": 30598,
+ "見": 30599,
+ "ֹ": 30600,
+ "☺": 30601,
+ "ケ": 30602,
+ "역": 30603,
+ "际": 30604,
+ "받": 30605,
+ "望": 30606,
+ "帝": 30607,
+ "减": 30608,
+ "두": 30609,
+ "领": 30610,
+ "": 30611,
+ "钟": 30612,
+ "ガ": 30613,
+ "架": 30614,
+ "든": 30615,
+ "ல": 30616,
+ "松": 30617,
+ "□": 30618,
+ "越": 30619,
+ "答": 30620,
+ "ɕ": 30621,
+ "ῦ": 30622,
+ "染": 30623,
+ "": 30624,
+ "质": 30625,
+ "顺": 30626,
+ "气": 30627,
+ "╗": 30628,
+ "計": 30629,
+ "ქ": 30630,
+ "亮": 30631,
+ "🤦": 30632,
+ "̂": 30633,
+ "ٹ": 30634,
+ "座": 30635,
+ "ˌ": 30636,
+ "均": 30637,
+ "\u000b": 30638,
+ "官": 30639,
+ "适": 30640,
+ "护": 30641,
+ "久": 30642,
+ "春": 30643,
+ "曹": 30644,
+ "皇": 30645,
+ "脚": 30646,
+ "池": 30647,
+ "延": 30648,
+ "키": 30649,
+ "품": 30650,
+ "現": 30651,
+ "檔": 30652,
+ "ば": 30653,
+ "ⴰ": 30654,
+ "希": 30655,
+ "玩": 30656,
+ "固": 30657,
+ "黄": 30658,
+ "": 30659,
+ "☽": 30660,
+ "银": 30661,
+ "\u0003": 30662,
+ "┃": 30663,
+ "👏": 30664,
+ "불": 30665,
+ "攻": 30666,
+ "へ": 30667,
+ "决": 30668,
+ "⊙": 30669,
+ "宁": 30670,
+ "च": 30671,
+ "機": 30672,
+ "義": 30673,
+ "ɲ": 30674,
+ "\u0015": 30675,
+ "했": 30676,
+ "ẩ": 30677,
+ "愛": 30678,
+ "矩": 30679,
+ "패": 30680,
+ "ặ": 30681,
+ "郎": 30682,
+ "Ь": 30683,
+ "绘": 30684,
+ "负": 30685,
+ "ổ": 30686,
+ "ய": 30687,
+ "汉": 30688,
+ "編": 30689,
+ "ێ": 30690,
+ "്": 30691,
+ "じ": 30692,
+ "카": 30693,
+ "似": 30694,
+ "ں": 30695,
+ "や": 30696,
+ "認": 30697,
+ "\u000f": 30698,
+ "過": 30699,
+ "통": 30700,
+ "▪": 30701,
+ "约": 30702,
+ "香": 30703,
+ "买": 30704,
+ "住": 30705,
+ "╚": 30706,
+ "😁": 30707,
+ "扩": 30708,
+ "静": 30709,
+ "려": 30710,
+ "학": 30711,
+ "钥": 30712,
+ "증": 30713,
+ "ỉ": 30714,
+ "她": 30715,
+ "食": 30716,
+ "往": 30717,
+ "點": 30718,
+ "偏": 30719,
+ "康": 30720,
+ "\u0014": 30721,
+ "į": 30722,
+ "준": 30723,
+ "\u0004": 30724,
+ "ฟ": 30725,
+ "♣": 30726,
+ "戏": 30727,
+ "ʂ": 30728,
+ "井": 30729,
+ "军": 30730,
+ "爱": 30731,
+ "ٱ": 30732,
+ "七": 30733,
+ "차": 30734,
+ "币": 30735,
+ "♠": 30736,
+ "哈": 30737,
+ "阅": 30738,
+ "介": 30739,
+ "观": 30740,
+ "區": 30741,
+ "˜": 30742,
+ "ً": 30743,
+ "又": 30744,
+ "冲": 30745,
+ "朝": 30746,
+ "姓": 30747,
+ "课": 30748,
+ "龍": 30749,
+ "각": 30750,
+ "∈": 30751,
+ "米": 30752,
+ "ƒ": 30753,
+ "喜": 30754,
+ "夜": 30755,
+ "团": 30756,
+ "⇒": 30757,
+ "远": 30758,
+ "\u001a": 30759,
+ "ὐ": 30760,
+ "承": 30761,
+ "ಿ": 30762,
+ "室": 30763,
+ "ʀ": 30764,
+ "ង": 30765,
+ "अ": 30766,
+ "罗": 30767,
+ "🙏": 30768,
+ "软": 30769,
+ "🟡": 30770,
+ "건": 30771,
+ "؟": 30772,
+ "း": 30773,
+ "ᴇ": 30774,
+ "ユ": 30775,
+ "토": 30776,
+ "策": 30777,
+ "̄": 30778,
+ "국": 30779,
+ "ֶ": 30780,
+ "协": 30781,
+ "营": 30782,
+ "関": 30783,
+ "吉": 30784,
+ "💀": 30785,
+ "奇": 30786,
+ "滚": 30787,
+ "轴": 30788,
+ "処": 30789,
+ "土": 30790,
+ "划": 30791,
+ "ड": 30792,
+ "临": 30793,
+ "ֵ": 30794,
+ "航": 30795,
+ "浏": 30796,
+ "ゴ": 30797,
+ "別": 30798,
+ "寺": 30799,
+ "於": 30800,
+ "進": 30801,
+ "ὸ": 30802,
+ "風": 30803,
+ "ன": 30804,
+ "班": 30805,
+ "◼": 30806,
+ "九": 30807,
+ "̥": 30808,
+ "號": 30809,
+ "류": 30810,
+ "础": 30811,
+ "般": 30812,
+ "︙": 30813,
+ "̈": 30814,
+ "番": 30815,
+ "✨": 30816,
+ "😎": 30817,
+ "ো": 30818,
+ "😍": 30819,
+ "單": 30820,
+ "帧": 30821,
+ "授": 30822,
+ "赋": 30823,
+ "巴": 30824,
+ "占": 30825,
+ "假": 30826,
+ "ṅ": 30827,
+ "透": 30828,
+ "項": 30829,
+ "ħ": 30830,
+ "馬": 30831,
+ "🟢": 30832,
+ "Ľ": 30833,
+ "լ": 30834,
+ "券": 30835,
+ "같": 30836,
+ "類": 30837,
+ "對": 30838,
+ "월": 30839,
+ "激": 30840,
+ "\u0017": 30841,
+ "戦": 30842,
+ "独": 30843,
+ "訊": 30844,
+ "ិ": 30845,
+ "套": 30846,
+ "ʷ": 30847,
+ "跟": 30848,
+ "ở": 30849,
+ "渲": 30850,
+ "顯": 30851,
+ "降": 30852,
+ "ာ": 30853,
+ "尼": 30854,
+ "血": 30855,
+ "언": 30856,
+ "牛": 30857,
+ "將": 30858,
+ "ศ": 30859,
+ "拍": 30860,
+ "刻": 30861,
+ "ზ": 30862,
+ "╔": 30863,
+ "藤": 30864,
+ "్": 30865,
+ "ῶ": 30866,
+ "🟠": 30867,
+ "良": 30868,
+ "김": 30869,
+ "দ": 30870,
+ "Ṣ": 30871,
+ "録": 30872,
+ "伊": 30873,
+ "落": 30874,
+ "雄": 30875,
+ "雪": 30876,
+ "映": 30877,
+ "著": 30878,
+ "른": 30879,
+ "ფ": 30880,
+ "対": 30881,
+ "智": 30882,
+ "译": 30883,
+ "┬": 30884,
+ "抽": 30885,
+ "ῖ": 30886,
+ "酒": 30887,
+ "Ћ": 30888,
+ "股": 30889,
+ "់": 30890,
+ "순": 30891,
+ "직": 30892,
+ "भ": 30893,
+ "谷": 30894,
+ "물": 30895,
+ "ǒ": 30896,
+ "⠄": 30897,
+ "热": 30898,
+ "終": 30899,
+ "夹": 30900,
+ "干": 30901,
+ "彩": 30902,
+ "敗": 30903,
+ "ќ": 30904,
+ "♯": 30905,
+ "̣": 30906,
+ "վ": 30907,
+ "轮": 30908,
+ "阵": 30909,
+ "夏": 30910,
+ "幕": 30911,
+ "吧": 30912,
+ "港": 30913,
+ "益": 30914,
+ "儿": 30915,
+ "액": 30916,
+ "售": 30917,
+ "兵": 30918,
+ "惠": 30919,
+ "欢": 30920,
+ "": 30921,
+ "零": 30922,
+ "學": 30923,
+ "": 30924,
+ "員": 30925,
+ "ỗ": 30926,
+ "玉": 30927,
+ "逻": 30928,
+ "᥀": 30929,
+ "吗": 30930,
+ "沒": 30931,
+ "≠": 30932,
+ "너": 30933,
+ "ச": 30934,
+ "\u0016": 30935,
+ "夫": 30936,
+ "წ": 30937,
+ "堂": 30938,
+ "電": 30939,
+ "≡": 30940,
+ "陆": 30941,
+ "져": 30942,
+ "研": 30943,
+ "荐": 30944,
+ "健": 30945,
+ "碼": 30946,
+ "练": 30947,
+ "検": 30948,
+ "송": 30949,
+ "ै": 30950,
+ "哪": 30951,
+ "圆": 30952,
+ "Ա": 30953,
+ "↩": 30954,
+ "托": 30955,
+ "̪": 30956,
+ "ू": 30957,
+ "缀": 30958,
+ "네": 30959,
+ "沙": 30960,
+ "兴": 30961,
+ "病": 30962,
+ "\u0007": 30963,
+ "ល": 30964,
+ "ừ": 30965,
+ "Ἀ": 30966,
+ "강": 30967,
+ "항": 30968,
+ "\u0019": 30969,
+ "換": 30970,
+ "温": 30971,
+ "帖": 30972,
+ "ទ": 30973,
+ "込": 30974,
+ "削": 30975,
+ "알": 30976,
+ "征": 30977,
+ "习": 30978,
+ "법": 30979,
+ "栈": 30980,
+ "绝": 30981,
+ "": 30982,
+ "ڕ": 30983,
+ "圖": 30984,
+ "苏": 30985,
+ "発": 30986,
+ "ု": 30987,
+ "町": 30988,
+ "互": 30989,
+ "়": 30990,
+ "ც": 30991,
+ "守": 30992,
+ "새": 30993,
+ "侧": 30994,
+ "草": 30995,
+ "ས": 30996,
+ "扫": 30997,
+ "‒": 30998,
+ "恢": 30999,
+ "ң": 31000,
+ "ण": 31001,
+ "ற": 31002,
+ "째": 31003,
+ "්": 31004,
+ "拟": 31005,
+ "派": 31006,
+ "🏽": 31007,
+ "呼": 31008,
+ "": 31009,
+ "演": 31010,
+ "究": 31011,
+ "교": 31012,
+ "ɣ": 31013,
+ "ए": 31014,
+ "ី": 31015,
+ "ף": 31016,
+ "富": 31017,
+ "駅": 31018,
+ "ず": 31019,
+ "♪": 31020,
+ "😆": 31021,
+ "접": 31022,
+ "ғ": 31023,
+ "▓": 31024,
+ "존": 31025,
+ "ಾ": 31026,
+ "旋": 31027,
+ "ゃ": 31028,
+ "补": 31029,
+ "ץ": 31030,
+ "門": 31031,
+ "ច": 31032,
+ "날": 31033,
+ "ภ": 31034,
+ "ག": 31035,
+ "傳": 31036,
+ "∆": 31037,
+ "": 31038,
+ "ׁ": 31039,
+ "缺": 31040,
+ "頭": 31041,
+ "怪": 31042,
+ "組": 31043,
+ "별": 31044,
+ "Ъ": 31045,
+ "發": 31046,
+ "雷": 31047,
+ "ರ": 31048,
+ "ซ": 31049,
+ "び": 31050,
+ "翻": 31051,
+ "ھ": 31052,
+ "პ": 31053,
+ "題": 31054,
+ "居": 31055,
+ "집": 31056,
+ "🌍": 31057,
+ "˚": 31058,
+ "避": 31059,
+ "줄": 31060,
+ "ុ": 31061,
+ "滑": 31062,
+ "故": 31063,
+ "ญ": 31064,
+ "〜": 31065,
+ "ನ": 31066,
+ "양": 31067,
+ "완": 31068,
+ "ள": 31069,
+ "倍": 31070,
+ "宗": 31071,
+ "択": 31072,
+ "브": 31073,
+ "ɴ": 31074,
+ "効": 31075,
+ "尺": 31076,
+ "視": 31077,
+ "ẽ": 31078,
+ "覆": 31079,
+ "ध": 31080,
+ "骨": 31081,
+ "달": 31082,
+ "ᴛ": 31083,
+ "蓝": 31084,
+ "關": 31085,
+ "額": 31086,
+ "Õ": 31087,
+ "∗": 31088,
+ "卷": 31089,
+ "갑": 31090,
+ "르": 31091,
+ "众": 31092,
+ "ᴀ": 31093,
+ "態": 31094,
+ "ٰ": 31095,
+ "暗": 31096,
+ "君": 31097,
+ "錯": 31098,
+ "ɒ": 31099,
+ "យ": 31100,
+ "ḫ": 31101,
+ "ῆ": 31102,
+ "亚": 31103,
+ "♡": 31104,
+ "割": 31105,
+ "鼠": 31106,
+ "̶": 31107,
+ "Ë": 31108,
+ "読": 31109,
+ "격": 31110,
+ "ゲ": 31111,
+ "眼": 31112,
+ "Ý": 31113,
+ "ژ": 31114,
+ "雨": 31115,
+ "宮": 31116,
+ "쪽": 31117,
+ "ष": 31118,
+ "複": 31119,
+ "剩": 31120,
+ "早": 31121,
+ "杂": 31122,
+ "焦": 31123,
+ "贝": 31124,
+ "突": 31125,
+ "워": 31126,
+ "另": 31127,
+ "摄": 31128,
+ "\b": 31129,
+ "": 31130,
+ "府": 31131,
+ "외": 31132,
+ "盖": 31133,
+ "\u001c": 31134,
+ "ษ": 31135,
+ "佛": 31136,
+ "概": 31137,
+ "與": 31138,
+ "經": 31139,
+ "-": 31140,
+ "һ": 31141,
+ "問": 31142,
+ "ು": 31143,
+ "ἰ": 31144,
+ "話": 31145,
+ "倒": 31146,
+ "葛": 31147,
+ "べ": 31148,
+ "ろ": 31149,
+ "\u001e": 31150,
+ "।": 31151,
+ "ေ": 31152,
+ "ᴏ": 31153,
+ "训": 31154,
+ "體": 31155,
+ "👌": 31156,
+ "內": 31157,
+ "က": 31158,
+ "企": 31159,
+ "약": 31160,
+ "찾": 31161,
+ "ོ": 31162,
+ "破": 31163,
+ "輸": 31164,
+ "림": 31165,
+ "塔": 31166,
+ "턴": 31167,
+ "杀": 31168,
+ "』": 31169,
+ "味": 31170,
+ "浮": 31171,
+ "┆": 31172,
+ "ġ": 31173,
+ "郡": 31174,
+ "┐": 31175,
+ "『": 31176,
+ "阶": 31177,
+ "雅": 31178,
+ "┈": 31179,
+ "园": 31180,
+ ".": 31181,
+ "吃": 31182,
+ "남": 31183,
+ " ": 31184,
+ "ར": 31185,
+ "帮": 31186,
+ "毛": 31187,
+ "耗": 31188,
+ "举": 31189,
+ "ర": 31190,
+ "拿": 31191,
+ "밀": 31192,
+ "ご": 31193,
+ "够": 31194,
+ "礼": 31195,
+ "ព": 31196,
+ "ね": 31197,
+ "": 31198,
+ "兰": 31199,
+ "❌": 31200,
+ "折": 31201,
+ "십": 31202,
+ "💎": 31203,
+ "業": 31204,
+ "诸": 31205,
+ "孙": 31206,
+ "བ": 31207,
+ "😳": 31208,
+ "種": 31209,
+ "Ï": 31210,
+ "ึ": 31211,
+ "": 31212,
+ "医": 31213,
+ "拼": 31214,
+ "↵": 31215,
+ "⅓": 31216,
+ "\u001f": 31217,
+ "မ": 31218,
+ "叫": 31219,
+ "জ": 31220,
+ "予": 31221,
+ "寸": 31222,
+ "梅": 31223,
+ "醒": 31224,
+ "津": 31225,
+ "န": 31226,
+ "ి": 31227,
+ "厂": 31228,
+ "屋": 31229,
+ "ख": 31230,
+ "師": 31231,
+ "👀": 31232,
+ "ỏ": 31233,
+ "ヤ": 31234,
+ "ὰ": 31235,
+ "\u001d": 31236,
+ "◆": 31237,
+ "ដ": 31238,
+ "材": 31239,
+ "ホ": 31240,
+ "張": 31241,
+ "洞": 31242,
+ "餐": 31243,
+ "천": 31244,
+ "হ": 31245,
+ "達": 31246,
+ "們": 31247,
+ "斗": 31248,
+ "横": 31249,
+ "백": 31250,
+ "ំ": 31251,
+ "ۆ": 31252,
+ "말": 31253,
+ "গ": 31254,
+ "佳": 31255,
+ "랜": 31256,
+ "仁": 31257,
+ "陈": 31258,
+ "飞": 31259,
+ "极": 31260,
+ "": 31261,
+ "및": 31262,
+ "仓": 31263,
+ "⬛": 31264,
+ "昌": 31265,
+ "錢": 31266,
+ "殊": 31267,
+ "┴": 31268,
+ "○": 31269,
+ "길": 31270,
+ "泉": 31271,
+ "甲": 31272,
+ "활": 31273,
+ "ひ": 31274,
+ "শ": 31275,
+ "ን": 31276,
+ "Ť": 31277,
+ "ღ": 31278,
+ "皮": 31279,
+ "強": 31280,
+ "赛": 31281,
+ "ా": 31282,
+ "預": 31283,
+ "င": 31284,
+ "튼": 31285,
+ "플": 31286,
+ "ყ": 31287,
+ "⋆": 31288,
+ "ք": 31289,
+ "ા": 31290,
+ "尚": 31291,
+ "또": 31292,
+ "բ": 31293,
+ "┌": 31294,
+ "節": 31295,
+ "森": 31296,
+ "आ": 31297,
+ "办": 31298,
+ "園": 31299,
+ "牙": 31300,
+ "庆": 31301,
+ "隆": 31302,
+ "😔": 31303,
+ "叉": 31304,
+ "գ": 31305,
+ "피": 31306,
+ "ギ": 31307,
+ "啊": 31308,
+ "続": 31309,
+ "灵": 31310,
+ "ヒ": 31311,
+ "忽": 31312,
+ "ʌ": 31313,
+ "량": 31314,
+ "油": 31315,
+ "讯": 31316,
+ "ⵉ": 31317,
+ "릭": 31318,
+ "刚": 31319,
+ "氏": 31320,
+ "ိ": 31321,
+ "Ī": 31322,
+ "誤": 31323,
+ "齐": 31324,
+ "末": 31325,
+ "🙌": 31326,
+ "̞": 31327,
+ "圈": 31328,
+ "念": 31329,
+ "숫": 31330,
+ "毫": 31331,
+ "當": 31332,
+ "規": 31333,
+ "판": 31334,
+ "ు": 31335,
+ "旧": 31336,
+ "卖": 31337,
+ "ฉ": 31338,
+ "幸": 31339,
+ "署": 31340,
+ "근": 31341,
+ "ই": 31342,
+ "岛": 31343,
+ "դ": 31344,
+ "觉": 31345,
+ "害": 31346,
+ "毕": 31347,
+ "ฐ": 31348,
+ "威": 31349,
+ "育": 31350,
+ "呢": 31351,
+ "峰": 31352,
+ "职": 31353,
+ "陽": 31354,
+ "ි": 31355,
+ "亞": 31356,
+ "ұ": 31357,
+ "₃": 31358,
+ "따": 31359,
+ "施": 31360,
+ "泰": 31361,
+ "載": 31362,
+ "
": 31363,
+ "笑": 31364,
+ "華": 31365,
+ "迎": 31366,
+ "됩": 31367,
+ "豆": 31368,
+ "嘉": 31369,
+ "🤡": 31370,
+ "ĕ": 31371,
+ "庄": 31372,
+ "級": 31373,
+ "Ψ": 31374,
+ "ི": 31375,
+ "気": 31376,
+ "责": 31377,
+ "հ": 31378,
+ "អ": 31379,
+ "乱": 31380,
+ "休": 31381,
+ "約": 31382,
+ "ฆ": 31383,
+ "∑": 31384,
+ "察": 31385,
+ "온": 31386,
+ "😬": 31387,
+ "ড": 31388,
+ "乘": 31389,
+ "람": 31390,
+ "इ": 31391,
+ "Ά": 31392,
+ "ந": 31393,
+ "ើ": 31394,
+ "亲": 31395,
+ "េ": 31396,
+ "委": 31397,
+ "赤": 31398,
+ "됨": 31399,
+ "勝": 31400,
+ "怎": 31401,
+ "감": 31402,
+ "宋": 31403,
+ "調": 31404,
+ "짜": 31405,
+ "ী": 31406,
+ "难": 31407,
+ "못": 31408,
+ "티": 31409,
+ "備": 31410,
+ "塞": 31411,
+ "វ": 31412,
+ "险": 31413,
+ "旅": 31414,
+ "虚": 31415,
+ "↳": 31416,
+ "笔": 31417,
+ "馆": 31418,
+ "Қ": 31419,
+ "⚡": 31420,
+ "ೆ": 31421,
+ "※": 31422,
+ "唐": 31423,
+ "律": 31424,
+ "稍": 31425,
+ "散": 31426,
+ "ર": 31427,
+ "ヴ": 31428,
+ "副": 31429,
+ "尽": 31430,
+ "挂": 31431,
+ "県": 31432,
+ "⚠": 31433,
+ "洋": 31434,
+ "鬼": 31435,
+ "암": 31436,
+ "孩": 31437,
+ "℃": 31438,
+ "並": 31439,
+ "ց": 31440,
+ "ូ": 31441,
+ "ℓ": 31442,
+ "ⵏ": 31443,
+ "扣": 31444,
+ "铁": 31445,
+ "闻": 31446,
+ "ˆ": 31447,
+ "戳": 31448,
+ "む": 31449,
+ "秀": 31450,
+ "細": 31451,
+ "ပ": 31452,
+ "御": 31453,
+ "拖": 31454,
+ "좌": 31455,
+ "ؤ": 31456,
+ "绍": 31457,
+ "ỹ": 31458,
+ "참": 31459,
+ "향": 31460,
+ "Ď": 31461,
+ "끝": 31462,
+ "민": 31463,
+ "ძ": 31464,
+ "贵": 31465,
+ "纪": 31466,
+ "秋": 31467,
+ "ಕ": 31468,
+ "ӏ": 31469,
+ "網": 31470,
+ "铺": 31471,
+ "恋": 31472,
+ "fl": 31473,
+ "兼": 31474,
+ "羽": 31475,
+ "창": 31476,
+ "啟": 31477,
+ "弟": 31478,
+ "년": 31479,
+ "慢": 31480,
+ "효": 31481,
+ "許": 31482,
+ "硬": 31483,
+ "잘": 31484,
+ "템": 31485,
+ "્": 31486,
+ "න": 31487,
+ "術": 31488,
+ "ڈ": 31489,
+ "溪": 31490,
+ "": 31491,
+ "暴": 31492,
+ "混": 31493,
+ "夢": 31494,
+ "랑": 31495,
+ "আ": 31496,
+ "還": 31497,
+ "探": 31498,
+ "祖": 31499,
+ "织": 31500,
+ "軍": 31501,
+ "թ": 31502,
+ "務": 31503,
+ "艺": 31504,
+ "ད": 31505,
+ "ት": 31506,
+ "ṁ": 31507,
+ "應": 31508,
+ "擇": 31509,
+ "🥰": 31510,
+ "ķ": 31511,
+ "渡": 31512,
+ "葉": 31513,
+ "령": 31514,
+ "決": 31515,
+ "刀": 31516,
+ "從": 31517,
+ "變": 31518,
+ "올": 31519,
+ "💪": 31520,
+ "灣": 31521,
+ "ር": 31522,
+ "평": 31523,
+ "衣": 31524,
+ "😄": 31525,
+ "ി": 31526,
+ "ჩ": 31527,
+ "ὁ": 31528,
+ "ほ": 31529,
+ "Û": 31530,
+ "চ": 31531,
+ "ර": 31532,
+ "製": 31533,
+ "隊": 31534,
+ "₱": 31535,
+ "纳": 31536,
+ "赖": 31537,
+ "农": 31538,
+ "桥": 31539,
+ "ỳ": 31540,
+ "🏾": 31541,
+ "阻": 31542,
+ "ជ": 31543,
+ "秘": 31544,
+ "박": 31545,
+ "伤": 31546,
+ "稿": 31547,
+ "ం": 31548,
+ "拦": 31549,
+ "넣": 31550,
+ "💕": 31551,
+ "₁": 31552,
+ "宿": 31553,
+ "錄": 31554,
+ "镜": 31555,
+ "채": 31556,
+ "Ə": 31557,
+ "ང": 31558,
+ "⇔": 31559,
+ "☼": 31560,
+ "ུ": 31561,
+ "党": 31562,
+ "급": 31563,
+ "洲": 31564,
+ "ղ": 31565,
+ "說": 31566,
+ "ĭ": 31567,
+ "尝": 31568,
+ "담": 31569,
+ "फ": 31570,
+ "哥": 31571,
+ "圣": 31572,
+ "萨": 31573,
+ "😏": 31574,
+ "ʏ": 31575,
+ "ெ": 31576,
+ "丁": 31577,
+ "虎": 31578,
+ "권": 31579,
+ "善": 31580,
+ "岩": 31581,
+ "커": 31582,
+ "◦": 31583,
+ "抛": 31584,
+ "석": 31585,
+ "Έ": 31586,
+ "宣": 31587,
+ "拳": 31588,
+ "팅": 31589,
+ "枚": 31590,
+ "洛": 31591,
+ "証": 31592,
+ "陵": 31593,
+ "佐": 31594,
+ "館": 31595,
+ "누": 31596,
+ "돌": 31597,
+ "₄": 31598,
+ "稱": 31599,
+ "聊": 31600,
+ "車": 31601,
+ "루": 31602,
+ "״": 31603,
+ "ಠ": 31604,
+ "庫": 31605,
+ "མ": 31606,
+ "統": 31607,
+ "련": 31608,
+ "़": 31609,
+ "ṯ": 31610,
+ "ക": 31611,
+ "旗": 31612,
+ "励": 31613,
+ "紀": 31614,
+ "忠": 31615,
+ "າ": 31616,
+ "杨": 31617,
+ "丹": 31618,
+ "Ù": 31619,
+ "ฝ": 31620,
+ "却": 31621,
+ "舞": 31622,
+ "轉": 31623,
+ "တ": 31624,
+ "丽": 31625,
+ "借": 31626,
+ "ා": 31627,
+ "ょ": 31628,
+ "옵": 31629,
+ "편": 31630,
+ "蒙": 31631,
+ "衡": 31632,
+ "ʋ": 31633,
+ "叶": 31634,
+ "̇": 31635,
+ "⬜": 31636,
+ "🇺": 31637,
+ "Հ": 31638,
+ "谢": 31639,
+ "Ą": 31640,
+ "ே": 31641,
+ "ằ": 31642,
+ "既": 31643,
+ "济": 31644,
+ "≯": 31645,
+ "準": 31646,
+ "답": 31647,
+ "ಲ": 31648,
+ "残": 31649,
+ "虑": 31650,
+ "̆": 31651,
+ "┘": 31652,
+ "急": 31653,
+ "招": 31654,
+ "막": 31655,
+ "≮": 31656,
+ "產": 31657,
+ "Ṭ": 31658,
+ "😢": 31659,
+ "垂": 31660,
+ "親": 31661,
+ "ģ": 31662,
+ "־": 31663,
+ "猫": 31664,
+ "ʟ": 31665,
+ "☃": 31666,
+ "✪": 31667,
+ "刪": 31668,
+ "胡": 31669,
+ "☉": 31670,
+ "晚": 31671,
+ "군": 31672,
+ "승": 31673,
+ "న": 31674,
+ "ὴ": 31675,
+ "曾": 31676,
+ "論": 31677,
+ "ɯ": 31678,
+ "త": 31679,
+ "戰": 31680,
+ "鱼": 31681,
+ "ǧ": 31682,
+ "寶": 31683,
+ "특": 31684,
+ "💯": 31685,
+ "崎": 31686,
+ "甘": 31687,
+ "該": 31688,
+ "링": 31689,
+ "😡": 31690,
+ "उ": 31691,
+ "ែ": 31692,
+ "頁": 31693,
+ "큰": 31694,
+ "➤": 31695,
+ "총": 31696,
+ "💰": 31697,
+ "∂": 31698,
+ "毁": 31699,
+ "聖": 31700,
+ "麻": 31701,
+ "ʐ": 31702,
+ "敏": 31703,
+ "運": 31704,
+ "될": 31705,
+ "쓰": 31706,
+ "ಸ": 31707,
+ "စ": 31708,
+ "✦": 31709,
+ "젝": 31710,
+ "復": 31711,
+ "寻": 31712,
+ "茶": 31713,
+ "ਾ": 31714,
+ "竹": 31715,
+ "遇": 31716,
+ "順": 31717,
+ "며": 31718,
+ "累": 31719,
+ "ĝ": 31720,
+ "ˇ": 31721,
+ "覧": 31722,
+ "এ": 31723,
+ "株": 31724,
+ "취": 31725,
+ "ስ": 31726,
+ "争": 31727,
+ "势": 31728,
+ "宇": 31729,
+ "橋": 31730,
+ "Ӏ": 31731,
+ "堆": 31732,
+ "ⵙ": 31733,
+ "丶": 31734,
+ "棋": 31735,
+ "肉": 31736,
+ "የ": 31737,
+ "": 31738,
+ "❶": 31739,
+ "季": 31740,
+ "ል": 31741,
+ "殿": 31742,
+ "優": 31743,
+ "試": 31744,
+ "첫": 31745,
+ "Ό": 31746,
+ "戶": 31747,
+ "ண": 31748,
+ "羅": 31749,
+ "桃": 31750,
+ "립": 31751,
+ "浪": 31752,
+ "脑": 31753,
+ "😛": 31754,
+ "弃": 31755,
+ "炮": 31756,
+ "轻": 31757,
+ "울": 31758,
+ "": 31759,
+ "ヘ": 31760,
+ "奥": 31761,
+ "💜": 31762,
+ "忘": 31763,
+ "遠": 31764,
+ "飛": 31765,
+ "魏": 31766,
+ "Ē": 31767,
+ "汇": 31768,
+ "央": 31769,
+ "逆": 31770,
+ "露": 31771,
+ "須": 31772,
+ "ѐ": 31773,
+ "ḷ": 31774,
+ "ದ": 31775,
+ "✭": 31776,
+ "寄": 31777,
+ "盟": 31778,
+ "财": 31779,
+ "際": 31780,
+ "ἔ": 31781,
+ "ǫ": 31782,
+ "थ": 31783,
+ "ാ": 31784,
+ "宫": 31785,
+ "巨": 31786,
+ "途": 31787,
+ "ʹ": 31788,
+ "ಗ": 31789,
+ "帐": 31790,
+ "": 31791,
+ "拒": 31792,
+ "药": 31793,
+ "🙃": 31794,
+ "ŕ": 31795,
+ "亡": 31796,
+ "壁": 31797,
+ "ም": 31798,
+ "參": 31799,
+ "😩": 31800,
+ "շ": 31801,
+ "ವ": 31802,
+ "ណ": 31803,
+ "丰": 31804,
+ "獲": 31805,
+ "莉": 31806,
+ "좋": 31807,
+ "ရ": 31808,
+ "₦": 31809,
+ "겠": 31810,
+ "👉": 31811,
+ "吴": 31812,
+ "岡": 31813,
+ "诉": 31814,
+ "읽": 31815,
+ "🥺": 31816,
+ "爆": 31817,
+ "🇸": 31818,
+ "ভ": 31819,
+ "迭": 31820,
+ "엔": 31821,
+ "ἄ": 31822,
+ "捷": 31823,
+ "納": 31824,
+ "邀": 31825,
+ "ಯ": 31826,
+ "爾": 31827,
+ "船": 31828,
+ "赞": 31829,
+ "胜": 31830,
+ "므": 31831,
+ "သ": 31832,
+ "構": 31833,
+ "磁": 31834,
+ "冰": 31835,
+ "딩": 31836,
+ "ે": 31837,
+ "媒": 31838,
+ "繁": 31839,
+ "☠": 31840,
+ "❒": 31841,
+ "仪": 31842,
+ "렬": 31843,
+ "昭": 31844,
+ "珠": 31845,
+ "離": 31846,
+ "ན": 31847,
+ "ల": 31848,
+ "ತ": 31849,
+ "拷": 31850,
+ "粉": 31851,
+ "벤": 31852,
+ "⇽": 31853,
+ "乌": 31854,
+ "拥": 31855,
+ "ҳ": 31856,
+ "ය": 31857,
+ "ེ": 31858,
+ "仙": 31859,
+ "塊": 31860,
+ "幅": 31861,
+ "🎉": 31862,
+ "Մ": 31863,
+ "跨": 31864,
+ "ٔ": 31865,
+ "恩": 31866,
+ "损": 31867,
+ "养": 31868,
+ "奈": 31869,
+ "ǀ": 31870,
+ "严": 31871,
+ "卫": 31872,
+ "迟": 31873,
+ "様": 31874,
+ "裡": 31875,
+ "난": 31876,
+ "았": 31877,
+ "͜": 31878,
+ "Ζ": 31879,
+ "ਰ": 31880,
+ "պ": 31881,
+ "ং": 31882,
+ "丢": 31883,
+ "伝": 31884,
+ "컨": 31885,
+ "ව": 31886,
+ "ြ": 31887,
+ "冷": 31888,
+ "遗": 31889,
+ "銀": 31890,
+ "̌": 31891,
+ "ᴜ": 31892,
+ "瑞": 31893,
+ "ฌ": 31894,
+ "❍": 31895,
+ "ふ": 31896,
+ "聚": 31897,
+ "碎": 31898,
+ "衛": 31899,
+ "অ": 31900,
+ "ញ": 31901,
+ "퍼": 31902,
+ "Ս": 31903,
+ "ນ": 31904,
+ "ẓ": 31905,
+ "✌": 31906,
+ "孝": 31907,
+ "陳": 31908,
+ "히": 31909,
+ "ක": 31910,
+ "黒": 31911,
+ "💖": 31912,
+ "ḩ": 31913,
+ "応": 31914,
+ "饰": 31915,
+ "∪": 31916,
+ "宜": 31917,
+ "樂": 31918,
+ "則": 31919,
+ "勇": 31920,
+ "徐": 31921,
+ "ⵓ": 31922,
+ "權": 31923,
+ "鲁": 31924,
+ "‟": 31925,
+ "庭": 31926,
+ "苗": 31927,
+ "🔴": 31928,
+ "闲": 31929,
+ "독": 31930,
+ "ɹ": 31931,
+ "ҽ": 31932,
+ "ថ": 31933,
+ "宏": 31934,
+ "尊": 31935,
+ "總": 31936,
+ "裝": 31937,
+ "ම": 31938,
+ "▸": 31939,
+ "測": 31940,
+ "ಮ": 31941,
+ "አ": 31942,
+ "轩": 31943,
+ "兄": 31944,
+ "剑": 31945,
+ "ન": 31946,
+ "朱": 31947,
+ "ǝ": 31948,
+ "Ḩ": 31949,
+ "担": 31950,
+ "灰": 31951,
+ "讲": 31952,
+ "롤": 31953,
+ "︎": 31954,
+ "😤": 31955,
+ "ោ": 31956,
+ "애": 31957,
+ "였": 31958,
+ "질": 31959,
+ "振": 31960,
+ "灯": 31961,
+ "ĉ": 31962,
+ "ස": 31963,
+ "閉": 31964,
+ "램": 31965,
+ "ಂ": 31966,
+ "げ": 31967,
+ "̧": 31968,
+ "狂": 31969,
+ "融": 31970,
+ "仍": 31971,
+ "實": 31972,
+ "楽": 31973,
+ "範": 31974,
+ "ٌ": 31975,
+ "వ": 31976,
+ "嵌": 31977,
+ "摩": 31978,
+ "袁": 31979,
+ "ষ": 31980,
+ "乎": 31981,
+ "규": 31982,
+ "岗": 31983,
+ "糊": 31984,
+ "క": 31985,
+ "雲": 31986,
+ "심": 31987,
+ "ई": 31988,
+ "འ": 31989,
+ "ἡ": 31990,
+ "丝": 31991,
+ "Ħ": 31992,
+ "ٍ": 31993,
+ "ٓ": 31994,
+ "အ": 31995,
+ "執": 31996,
+ "벨": 31997,
+ "ゼ": 31998,
+ "梦": 31999
+ },
+ "merges": [
+ [
+ "▁",
+ "t"
+ ],
+ [
+ "i",
+ "n"
+ ],
+ [
+ "e",
+ "r"
+ ],
+ [
+ "▁",
+ "a"
+ ],
+ [
+ "h",
+ "e"
+ ],
+ [
+ "o",
+ "n"
+ ],
+ [
+ "r",
+ "e"
+ ],
+ [
+ "▁",
+ "s"
+ ],
+ [
+ "e",
+ "n"
+ ],
+ [
+ "a",
+ "t"
+ ],
+ [
+ "o",
+ "r"
+ ],
+ [
+ "▁t",
+ "he"
+ ],
+ [
+ "▁th",
+ "e"
+ ],
+ [
+ "▁",
+ "the"
+ ],
+ [
+ "e",
+ "s"
+ ],
+ [
+ "▁",
+ "w"
+ ],
+ [
+ "a",
+ "n"
+ ],
+ [
+ "▁",
+ "c"
+ ],
+ [
+ "i",
+ "s"
+ ],
+ [
+ "i",
+ "t"
+ ],
+ [
+ "o",
+ "u"
+ ],
+ [
+ "▁",
+ "d"
+ ],
+ [
+ "a",
+ "l"
+ ],
+ [
+ "a",
+ "r"
+ ],
+ [
+ "▁",
+ "p"
+ ],
+ [
+ "▁",
+ "f"
+ ],
+ [
+ "e",
+ "d"
+ ],
+ [
+ "▁",
+ "b"
+ ],
+ [
+ "in",
+ "g"
+ ],
+ [
+ "i",
+ "ng"
+ ],
+ [
+ "▁",
+ "o"
+ ],
+ [
+ "▁",
+ "m"
+ ],
+ [
+ "l",
+ "e"
+ ],
+ [
+ "n",
+ "d"
+ ],
+ [
+ "a",
+ "s"
+ ],
+ [
+ "i",
+ "c"
+ ],
+ [
+ "▁",
+ "h"
+ ],
+ [
+ "io",
+ "n"
+ ],
+ [
+ "i",
+ "on"
+ ],
+ [
+ "▁i",
+ "n"
+ ],
+ [
+ "▁",
+ "in"
+ ],
+ [
+ "▁t",
+ "o"
+ ],
+ [
+ "▁",
+ "to"
+ ],
+ [
+ "e",
+ "t"
+ ],
+ [
+ "o",
+ "m"
+ ],
+ [
+ "e",
+ "l"
+ ],
+ [
+ "▁o",
+ "f"
+ ],
+ [
+ "▁",
+ "of"
+ ],
+ [
+ "s",
+ "t"
+ ],
+ [
+ "▁a",
+ "nd"
+ ],
+ [
+ "▁an",
+ "d"
+ ],
+ [
+ "▁",
+ "and"
+ ],
+ [
+ "▁",
+ "l"
+ ],
+ [
+ "▁t",
+ "h"
+ ],
+ [
+ "▁",
+ "th"
+ ],
+ [
+ "▁",
+ "n"
+ ],
+ [
+ "en",
+ "t"
+ ],
+ [
+ "e",
+ "nt"
+ ],
+ [
+ "i",
+ "l"
+ ],
+ [
+ "c",
+ "t"
+ ],
+ [
+ "r",
+ "o"
+ ],
+ [
+ "▁r",
+ "e"
+ ],
+ [
+ "▁",
+ "re"
+ ],
+ [
+ "i",
+ "d"
+ ],
+ [
+ "a",
+ "m"
+ ],
+ [
+ "▁",
+ "I"
+ ],
+ [
+ "a",
+ "d"
+ ],
+ [
+ "▁",
+ "e"
+ ],
+ [
+ "▁",
+ "S"
+ ],
+ [
+ "▁",
+ "g"
+ ],
+ [
+ "▁",
+ "T"
+ ],
+ [
+ "i",
+ "m"
+ ],
+ [
+ "o",
+ "t"
+ ],
+ [
+ "a",
+ "c"
+ ],
+ [
+ "u",
+ "r"
+ ],
+ [
+ "▁",
+ "("
+ ],
+ [
+ "i",
+ "g"
+ ],
+ [
+ "▁",
+ "="
+ ],
+ [
+ "o",
+ "l"
+ ],
+ [
+ "u",
+ "t"
+ ],
+ [
+ "▁",
+ "A"
+ ],
+ [
+ "s",
+ "e"
+ ],
+ [
+ "▁",
+ "u"
+ ],
+ [
+ "v",
+ "e"
+ ],
+ [
+ "▁",
+ "C"
+ ],
+ [
+ "i",
+ "f"
+ ],
+ [
+ "o",
+ "w"
+ ],
+ [
+ "▁",
+ "y"
+ ],
+ [
+ "c",
+ "h"
+ ],
+ [
+ "a",
+ "y"
+ ],
+ [
+ "▁d",
+ "e"
+ ],
+ [
+ "▁",
+ "de"
+ ],
+ [
+ "▁s",
+ "t"
+ ],
+ [
+ "▁",
+ "st"
+ ],
+ [
+ "▁",
+ "|"
+ ],
+ [
+ "ve",
+ "r"
+ ],
+ [
+ "v",
+ "er"
+ ],
+ [
+ ")",
+ ";"
+ ],
+ [
+ "▁",
+ "\""
+ ],
+ [
+ "l",
+ "y"
+ ],
+ [
+ "▁b",
+ "e"
+ ],
+ [
+ "▁",
+ "be"
+ ],
+ [
+ "*",
+ "*"
+ ],
+ [
+ "▁i",
+ "s"
+ ],
+ [
+ "▁",
+ "is"
+ ],
+ [
+ "o",
+ "d"
+ ],
+ [
+ "▁",
+ "M"
+ ],
+ [
+ "at",
+ "ion"
+ ],
+ [
+ "ati",
+ "on"
+ ],
+ [
+ "atio",
+ "n"
+ ],
+ [
+ "u",
+ "l"
+ ],
+ [
+ "▁f",
+ "or"
+ ],
+ [
+ "▁fo",
+ "r"
+ ],
+ [
+ "▁",
+ "for"
+ ],
+ [
+ "▁o",
+ "n"
+ ],
+ [
+ "▁",
+ "on"
+ ],
+ [
+ "a",
+ "g"
+ ],
+ [
+ "c",
+ "e"
+ ],
+ [
+ "te",
+ "r"
+ ],
+ [
+ "t",
+ "er"
+ ],
+ [
+ "i",
+ "r"
+ ],
+ [
+ "t",
+ "h"
+ ],
+ [
+ "▁",
+ "v"
+ ],
+ [
+ "q",
+ "u"
+ ],
+ [
+ "▁",
+ "B"
+ ],
+ [
+ "e",
+ "m"
+ ],
+ [
+ "▁",
+ "P"
+ ],
+ [
+ "▁y",
+ "ou"
+ ],
+ [
+ "▁yo",
+ "u"
+ ],
+ [
+ "▁",
+ "you"
+ ],
+ [
+ "▁t",
+ "hat"
+ ],
+ [
+ "▁th",
+ "at"
+ ],
+ [
+ "▁",
+ "that"
+ ],
+ [
+ "u",
+ "n"
+ ],
+ [
+ "▁",
+ "{"
+ ],
+ [
+ "it",
+ "h"
+ ],
+ [
+ "i",
+ "th"
+ ],
+ [
+ "r",
+ "i"
+ ],
+ [
+ "es",
+ "t"
+ ],
+ [
+ "e",
+ "st"
+ ],
+ [
+ "a",
+ "b"
+ ],
+ [
+ "-",
+ "-"
+ ],
+ [
+ "a",
+ "p"
+ ],
+ [
+ "▁i",
+ "t"
+ ],
+ [
+ "▁",
+ "it"
+ ],
+ [
+ "▁c",
+ "on"
+ ],
+ [
+ "▁co",
+ "n"
+ ],
+ [
+ "▁",
+ "con"
+ ],
+ [
+ "at",
+ "e"
+ ],
+ [
+ "a",
+ "te"
+ ],
+ [
+ "u",
+ "s"
+ ],
+ [
+ "▁",
+ "H"
+ ],
+ [
+ "u",
+ "m"
+ ],
+ [
+ "▁",
+ "D"
+ ],
+ [
+ "o",
+ "s"
+ ],
+ [
+ "p",
+ "e"
+ ],
+ [
+ "▁",
+ "-"
+ ],
+ [
+ "▁w",
+ "h"
+ ],
+ [
+ "▁",
+ "wh"
+ ],
+ [
+ "▁a",
+ "l"
+ ],
+ [
+ "▁",
+ "al"
+ ],
+ [
+ "▁a",
+ "s"
+ ],
+ [
+ "▁",
+ "as"
+ ],
+ [
+ "an",
+ "d"
+ ],
+ [
+ "a",
+ "nd"
+ ],
+ [
+ "is",
+ "t"
+ ],
+ [
+ "i",
+ "st"
+ ],
+ [
+ "▁",
+ "L"
+ ],
+ [
+ "▁",
+ "W"
+ ],
+ [
+ "▁w",
+ "ith"
+ ],
+ [
+ "▁",
+ "with"
+ ],
+ [
+ "▁a",
+ "n"
+ ],
+ [
+ "▁",
+ "an"
+ ],
+ [
+ "er",
+ "e"
+ ],
+ [
+ "e",
+ "re"
+ ],
+ [
+ "▁",
+ "*"
+ ],
+ [
+ "▁",
+ "R"
+ ],
+ [
+ "▁h",
+ "e"
+ ],
+ [
+ "▁",
+ "he"
+ ],
+ [
+ "▁",
+ "F"
+ ],
+ [
+ "o",
+ "c"
+ ],
+ [
+ "▁w",
+ "as"
+ ],
+ [
+ "▁wa",
+ "s"
+ ],
+ [
+ "▁",
+ "was"
+ ],
+ [
+ "er",
+ "s"
+ ],
+ [
+ "e",
+ "rs"
+ ],
+ [
+ "k",
+ "e"
+ ],
+ [
+ "ou",
+ "t"
+ ],
+ [
+ "o",
+ "ut"
+ ],
+ [
+ "h",
+ "t"
+ ],
+ [
+ "▁",
+ "r"
+ ],
+ [
+ "es",
+ "s"
+ ],
+ [
+ "e",
+ "ss"
+ ],
+ [
+ "o",
+ "p"
+ ],
+ [
+ "re",
+ "s"
+ ],
+ [
+ "r",
+ "es"
+ ],
+ [
+ "i",
+ "e"
+ ],
+ [
+ "▁",
+ "E"
+ ],
+ [
+ "▁",
+ "\\"
+ ],
+ [
+ "▁T",
+ "he"
+ ],
+ [
+ "▁Th",
+ "e"
+ ],
+ [
+ "▁",
+ "The"
+ ],
+ [
+ "en",
+ "d"
+ ],
+ [
+ "e",
+ "nd"
+ ],
+ [
+ "l",
+ "d"
+ ],
+ [
+ "▁",
+ "N"
+ ],
+ [
+ "or",
+ "t"
+ ],
+ [
+ "o",
+ "rt"
+ ],
+ [
+ "▁",
+ "G"
+ ],
+ [
+ "/",
+ "/"
+ ],
+ [
+ "▁",
+ "#"
+ ],
+ [
+ "ou",
+ "r"
+ ],
+ [
+ "o",
+ "ur"
+ ],
+ [
+ "t",
+ "e"
+ ],
+ [
+ "il",
+ "l"
+ ],
+ [
+ "i",
+ "ll"
+ ],
+ [
+ "ai",
+ "n"
+ ],
+ [
+ "a",
+ "in"
+ ],
+ [
+ "▁s",
+ "e"
+ ],
+ [
+ "▁",
+ "se"
+ ],
+ [
+ "▁",
+ "$"
+ ],
+ [
+ "▁p",
+ "ro"
+ ],
+ [
+ "▁pr",
+ "o"
+ ],
+ [
+ "▁",
+ "pro"
+ ],
+ [
+ "or",
+ "e"
+ ],
+ [
+ "o",
+ "re"
+ ],
+ [
+ "▁c",
+ "om"
+ ],
+ [
+ "▁co",
+ "m"
+ ],
+ [
+ "▁",
+ "com"
+ ],
+ [
+ "am",
+ "e"
+ ],
+ [
+ "a",
+ "me"
+ ],
+ [
+ "t",
+ "r"
+ ],
+ [
+ "▁n",
+ "e"
+ ],
+ [
+ "▁",
+ "ne"
+ ],
+ [
+ "ro",
+ "m"
+ ],
+ [
+ "r",
+ "om"
+ ],
+ [
+ "u",
+ "b"
+ ],
+ [
+ "▁a",
+ "t"
+ ],
+ [
+ "▁",
+ "at"
+ ],
+ [
+ "▁e",
+ "x"
+ ],
+ [
+ "▁",
+ "ex"
+ ],
+ [
+ "an",
+ "t"
+ ],
+ [
+ "a",
+ "nt"
+ ],
+ [
+ "u",
+ "e"
+ ],
+ [
+ "▁o",
+ "r"
+ ],
+ [
+ "▁",
+ "or"
+ ],
+ [
+ "▁",
+ "}"
+ ],
+ [
+ "ar",
+ "t"
+ ],
+ [
+ "a",
+ "rt"
+ ],
+ [
+ "ct",
+ "ion"
+ ],
+ [
+ "▁",
+ "k"
+ ],
+ [
+ "p",
+ "t"
+ ],
+ [
+ "n",
+ "t"
+ ],
+ [
+ "i",
+ "v"
+ ],
+ [
+ "d",
+ "e"
+ ],
+ [
+ "▁",
+ "O"
+ ],
+ [
+ "p",
+ "l"
+ ],
+ [
+ "ur",
+ "n"
+ ],
+ [
+ "u",
+ "rn"
+ ],
+ [
+ "ig",
+ "ht"
+ ],
+ [
+ "igh",
+ "t"
+ ],
+ [
+ "i",
+ "ght"
+ ],
+ [
+ "al",
+ "l"
+ ],
+ [
+ "a",
+ "ll"
+ ],
+ [
+ "▁t",
+ "his"
+ ],
+ [
+ "▁th",
+ "is"
+ ],
+ [
+ "▁",
+ "this"
+ ],
+ [
+ "se",
+ "r"
+ ],
+ [
+ "s",
+ "er"
+ ],
+ [
+ "av",
+ "e"
+ ],
+ [
+ "a",
+ "ve"
+ ],
+ [
+ "▁n",
+ "ot"
+ ],
+ [
+ "▁no",
+ "t"
+ ],
+ [
+ "▁",
+ "not"
+ ],
+ [
+ "▁a",
+ "re"
+ ],
+ [
+ "▁ar",
+ "e"
+ ],
+ [
+ "▁",
+ "are"
+ ],
+ [
+ "▁",
+ "j"
+ ],
+ [
+ "▁l",
+ "e"
+ ],
+ [
+ "▁",
+ "le"
+ ],
+ [
+ "i",
+ "z"
+ ],
+ [
+ "▁",
+ "'"
+ ],
+ [
+ "ag",
+ "e"
+ ],
+ [
+ "a",
+ "ge"
+ ],
+ [
+ "me",
+ "nt"
+ ],
+ [
+ "men",
+ "t"
+ ],
+ [
+ "m",
+ "ent"
+ ],
+ [
+ "▁t",
+ "r"
+ ],
+ [
+ "▁",
+ "tr"
+ ],
+ [
+ "ac",
+ "k"
+ ],
+ [
+ "a",
+ "ck"
+ ],
+ [
+ "us",
+ "t"
+ ],
+ [
+ "u",
+ "st"
+ ],
+ [
+ "(",
+ ")"
+ ],
+ [
+ "-",
+ ">"
+ ],
+ [
+ "it",
+ "y"
+ ],
+ [
+ "i",
+ "ty"
+ ],
+ [
+ "in",
+ "e"
+ ],
+ [
+ "i",
+ "ne"
+ ],
+ [
+ "ou",
+ "ld"
+ ],
+ [
+ "oul",
+ "d"
+ ],
+ [
+ "o",
+ "uld"
+ ],
+ [
+ "▁",
+ "J"
+ ],
+ [
+ "o",
+ "g"
+ ],
+ [
+ "▁f",
+ "rom"
+ ],
+ [
+ "▁fr",
+ "om"
+ ],
+ [
+ "▁fro",
+ "m"
+ ],
+ [
+ "▁",
+ "from"
+ ],
+ [
+ "▁w",
+ "e"
+ ],
+ [
+ "▁",
+ "we"
+ ],
+ [
+ "el",
+ "l"
+ ],
+ [
+ "e",
+ "ll"
+ ],
+ [
+ "▁s",
+ "h"
+ ],
+ [
+ "▁",
+ "sh"
+ ],
+ [
+ "▁e",
+ "n"
+ ],
+ [
+ "▁",
+ "en"
+ ],
+ [
+ "ur",
+ "e"
+ ],
+ [
+ "u",
+ "re"
+ ],
+ [
+ "por",
+ "t"
+ ],
+ [
+ "po",
+ "rt"
+ ],
+ [
+ "p",
+ "ort"
+ ],
+ [
+ "▁c",
+ "h"
+ ],
+ [
+ "▁",
+ "ch"
+ ],
+ [
+ "n",
+ "e"
+ ],
+ [
+ "▁b",
+ "y"
+ ],
+ [
+ "▁",
+ "by"
+ ],
+ [
+ "pe",
+ "r"
+ ],
+ [
+ "p",
+ "er"
+ ],
+ [
+ "ar",
+ "d"
+ ],
+ [
+ "a",
+ "rd"
+ ],
+ [
+ "as",
+ "s"
+ ],
+ [
+ "a",
+ "ss"
+ ],
+ [
+ "g",
+ "e"
+ ],
+ [
+ "a",
+ "k"
+ ],
+ [
+ "ar",
+ "e"
+ ],
+ [
+ "a",
+ "re"
+ ],
+ [
+ "o",
+ "k"
+ ],
+ [
+ "a",
+ "v"
+ ],
+ [
+ "iv",
+ "e"
+ ],
+ [
+ "i",
+ "ve"
+ ],
+ [
+ "f",
+ "f"
+ ],
+ [
+ "ie",
+ "s"
+ ],
+ [
+ "i",
+ "es"
+ ],
+ [
+ "at",
+ "h"
+ ],
+ [
+ "a",
+ "th"
+ ],
+ [
+ "tu",
+ "rn"
+ ],
+ [
+ "t",
+ "urn"
+ ],
+ [
+ "▁",
+ "U"
+ ],
+ [
+ "in",
+ "t"
+ ],
+ [
+ "i",
+ "nt"
+ ],
+ [
+ "--",
+ "--"
+ ],
+ [
+ "---",
+ "-"
+ ],
+ [
+ "-",
+ "---"
+ ],
+ [
+ "▁i",
+ "m"
+ ],
+ [
+ "▁",
+ "im"
+ ],
+ [
+ "os",
+ "t"
+ ],
+ [
+ "o",
+ "st"
+ ],
+ [
+ "ia",
+ "l"
+ ],
+ [
+ "i",
+ "al"
+ ],
+ [
+ "▁h",
+ "ave"
+ ],
+ [
+ "▁ha",
+ "ve"
+ ],
+ [
+ "▁hav",
+ "e"
+ ],
+ [
+ "▁",
+ "have"
+ ],
+ [
+ "in",
+ "d"
+ ],
+ [
+ "i",
+ "nd"
+ ],
+ [
+ "i",
+ "p"
+ ],
+ [
+ "an",
+ "s"
+ ],
+ [
+ "a",
+ "ns"
+ ],
+ [
+ "x",
+ "t"
+ ],
+ [
+ "▁d",
+ "o"
+ ],
+ [
+ "▁",
+ "do"
+ ],
+ [
+ "c",
+ "l"
+ ],
+ [
+ "▁i",
+ "f"
+ ],
+ [
+ "▁",
+ "if"
+ ],
+ [
+ "co",
+ "n"
+ ],
+ [
+ "c",
+ "on"
+ ],
+ [
+ "i",
+ "a"
+ ],
+ [
+ "▁h",
+ "is"
+ ],
+ [
+ "▁hi",
+ "s"
+ ],
+ [
+ "▁",
+ "his"
+ ],
+ [
+ "ul",
+ "t"
+ ],
+ [
+ "u",
+ "lt"
+ ],
+ [
+ "ro",
+ "u"
+ ],
+ [
+ "r",
+ "ou"
+ ],
+ [
+ "▁s",
+ "u"
+ ],
+ [
+ "▁",
+ "su"
+ ],
+ [
+ "r",
+ "a"
+ ],
+ [
+ "▁u",
+ "n"
+ ],
+ [
+ "▁",
+ "un"
+ ],
+ [
+ "ab",
+ "le"
+ ],
+ [
+ "abl",
+ "e"
+ ],
+ [
+ "a",
+ "ble"
+ ],
+ [
+ "▁",
+ "<"
+ ],
+ [
+ "▁",
+ "K"
+ ],
+ [
+ "om",
+ "e"
+ ],
+ [
+ "o",
+ "me"
+ ],
+ [
+ "▁q",
+ "u"
+ ],
+ [
+ "▁",
+ "qu"
+ ],
+ [
+ "ge",
+ "t"
+ ],
+ [
+ "g",
+ "et"
+ ],
+ [
+ "▁m",
+ "e"
+ ],
+ [
+ "▁",
+ "me"
+ ],
+ [
+ "as",
+ "t"
+ ],
+ [
+ "a",
+ "st"
+ ],
+ [
+ "ec",
+ "t"
+ ],
+ [
+ "e",
+ "ct"
+ ],
+ [
+ "▁#",
+ "#"
+ ],
+ [
+ "▁",
+ "##"
+ ],
+ [
+ "t",
+ "o"
+ ],
+ [
+ "▁c",
+ "l"
+ ],
+ [
+ "▁",
+ "cl"
+ ],
+ [
+ "▁a",
+ "b"
+ ],
+ [
+ "▁",
+ "ab"
+ ],
+ [
+ "ic",
+ "e"
+ ],
+ [
+ "i",
+ "ce"
+ ],
+ [
+ "ir",
+ "e"
+ ],
+ [
+ "i",
+ "re"
+ ],
+ [
+ "be",
+ "r"
+ ],
+ [
+ "b",
+ "er"
+ ],
+ [
+ "on",
+ "e"
+ ],
+ [
+ "o",
+ "ne"
+ ],
+ [
+ "ic",
+ "h"
+ ],
+ [
+ "i",
+ "ch"
+ ],
+ [
+ "he",
+ "n"
+ ],
+ [
+ "h",
+ "en"
+ ],
+ [
+ "▁c",
+ "an"
+ ],
+ [
+ "▁ca",
+ "n"
+ ],
+ [
+ "▁",
+ "can"
+ ],
+ [
+ "▁T",
+ "h"
+ ],
+ [
+ "▁",
+ "Th"
+ ],
+ [
+ "▁l",
+ "a"
+ ],
+ [
+ "▁",
+ "la"
+ ],
+ [
+ "▁a",
+ "ll"
+ ],
+ [
+ "▁al",
+ "l"
+ ],
+ [
+ "▁",
+ "all"
+ ],
+ [
+ "im",
+ "e"
+ ],
+ [
+ "i",
+ "me"
+ ],
+ [
+ "il",
+ "e"
+ ],
+ [
+ "i",
+ "le"
+ ],
+ [
+ "id",
+ "e"
+ ],
+ [
+ "i",
+ "de"
+ ],
+ [
+ "\"",
+ ","
+ ],
+ [
+ "▁p",
+ "l"
+ ],
+ [
+ "▁",
+ "pl"
+ ],
+ [
+ "▁",
+ "V"
+ ],
+ [
+ "r",
+ "u"
+ ],
+ [
+ "or",
+ "m"
+ ],
+ [
+ "o",
+ "rm"
+ ],
+ [
+ "▁h",
+ "ad"
+ ],
+ [
+ "▁ha",
+ "d"
+ ],
+ [
+ "▁",
+ "had"
+ ],
+ [
+ "u",
+ "d"
+ ],
+ [
+ "as",
+ "e"
+ ],
+ [
+ "a",
+ "se"
+ ],
+ [
+ "or",
+ "d"
+ ],
+ [
+ "o",
+ "rd"
+ ],
+ [
+ ")",
+ ","
+ ],
+ [
+ "▁h",
+ "er"
+ ],
+ [
+ "▁he",
+ "r"
+ ],
+ [
+ "▁",
+ "her"
+ ],
+ [
+ "▁I",
+ "n"
+ ],
+ [
+ "▁",
+ "In"
+ ],
+ [
+ "ac",
+ "e"
+ ],
+ [
+ "a",
+ "ce"
+ ],
+ [
+ "▁b",
+ "ut"
+ ],
+ [
+ "▁bu",
+ "t"
+ ],
+ [
+ "▁",
+ "but"
+ ],
+ [
+ "at",
+ "a"
+ ],
+ [
+ "a",
+ "ta"
+ ],
+ [
+ ":",
+ ":"
+ ],
+ [
+ "**",
+ "**"
+ ],
+ [
+ "***",
+ "*"
+ ],
+ [
+ "*",
+ "***"
+ ],
+ [
+ "on",
+ "g"
+ ],
+ [
+ "o",
+ "ng"
+ ],
+ [
+ "▁",
+ "&"
+ ],
+ [
+ ".",
+ "."
+ ],
+ [
+ "it",
+ "e"
+ ],
+ [
+ "i",
+ "te"
+ ],
+ [
+ "yp",
+ "e"
+ ],
+ [
+ "y",
+ "pe"
+ ],
+ [
+ "ac",
+ "t"
+ ],
+ [
+ "a",
+ "ct"
+ ],
+ [
+ "od",
+ "e"
+ ],
+ [
+ "o",
+ "de"
+ ],
+ [
+ "▁y",
+ "our"
+ ],
+ [
+ "▁you",
+ "r"
+ ],
+ [
+ "▁yo",
+ "ur"
+ ],
+ [
+ "▁",
+ "your"
+ ],
+ [
+ "▁o",
+ "ut"
+ ],
+ [
+ "▁ou",
+ "t"
+ ],
+ [
+ "▁",
+ "out"
+ ],
+ [
+ "▁g",
+ "o"
+ ],
+ [
+ "▁",
+ "go"
+ ],
+ [
+ "li",
+ "c"
+ ],
+ [
+ "l",
+ "ic"
+ ],
+ [
+ "al",
+ "ly"
+ ],
+ [
+ "all",
+ "y"
+ ],
+ [
+ "▁s",
+ "o"
+ ],
+ [
+ "▁",
+ "so"
+ ],
+ [
+ "or",
+ "k"
+ ],
+ [
+ "a",
+ "u"
+ ],
+ [
+ "▁u",
+ "p"
+ ],
+ [
+ "▁",
+ "up"
+ ],
+ [
+ "▁",
+ "_"
+ ],
+ [
+ "l",
+ "l"
+ ],
+ [
+ "=",
+ "="
+ ],
+ [
+ "▁m",
+ "y"
+ ],
+ [
+ "▁",
+ "my"
+ ],
+ [
+ "p",
+ "p"
+ ],
+ [
+ "c",
+ "c"
+ ],
+ [
+ "▁/",
+ "/"
+ ],
+ [
+ "▁",
+ "//"
+ ],
+ [
+ "▁the",
+ "y"
+ ],
+ [
+ "▁th",
+ "ey"
+ ],
+ [
+ "▁",
+ "they"
+ ],
+ [
+ "g",
+ "h"
+ ],
+ [
+ "▁u",
+ "s"
+ ],
+ [
+ "▁",
+ "us"
+ ],
+ [
+ "i",
+ "b"
+ ],
+ [
+ "ion",
+ "s"
+ ],
+ [
+ "io",
+ "ns"
+ ],
+ [
+ "i",
+ "ons"
+ ],
+ [
+ "ac",
+ "h"
+ ],
+ [
+ "a",
+ "ch"
+ ],
+ [
+ "en",
+ "s"
+ ],
+ [
+ "e",
+ "ns"
+ ],
+ [
+ "▁a",
+ "r"
+ ],
+ [
+ "▁",
+ "ar"
+ ],
+ [
+ "o",
+ "b"
+ ],
+ [
+ "el",
+ "f"
+ ],
+ [
+ "oo",
+ "k"
+ ],
+ [
+ "o",
+ "ok"
+ ],
+ [
+ "at",
+ "ed"
+ ],
+ [
+ "ate",
+ "d"
+ ],
+ [
+ "a",
+ "ted"
+ ],
+ [
+ "an",
+ "g"
+ ],
+ [
+ "a",
+ "ng"
+ ],
+ [
+ "ig",
+ "n"
+ ],
+ [
+ "i",
+ "gn"
+ ],
+ [
+ "▁re",
+ "turn"
+ ],
+ [
+ "▁r",
+ "eturn"
+ ],
+ [
+ "▁ret",
+ "urn"
+ ],
+ [
+ "▁",
+ "return"
+ ],
+ [
+ "▁re",
+ "s"
+ ],
+ [
+ "▁r",
+ "es"
+ ],
+ [
+ "▁",
+ "res"
+ ],
+ [
+ "c",
+ "k"
+ ],
+ [
+ "ou",
+ "s"
+ ],
+ [
+ "o",
+ "us"
+ ],
+ [
+ "с",
+ "т"
+ ],
+ [
+ ")",
+ "."
+ ],
+ [
+ "▁",
+ "п"
+ ],
+ [
+ ".",
+ "\""
+ ],
+ [
+ "н",
+ "а"
+ ],
+ [
+ "▁",
+ "i"
+ ],
+ [
+ "ai",
+ "l"
+ ],
+ [
+ "a",
+ "il"
+ ],
+ [
+ "e",
+ "p"
+ ],
+ [
+ "▁a",
+ "d"
+ ],
+ [
+ "▁",
+ "ad"
+ ],
+ [
+ "an",
+ "ce"
+ ],
+ [
+ "anc",
+ "e"
+ ],
+ [
+ "(",
+ "\""
+ ],
+ [
+ "▁*",
+ "*"
+ ],
+ [
+ "▁",
+ "**"
+ ],
+ [
+ "th",
+ "er"
+ ],
+ [
+ "the",
+ "r"
+ ],
+ [
+ "t",
+ "her"
+ ],
+ [
+ "ak",
+ "e"
+ ],
+ [
+ "a",
+ "ke"
+ ],
+ [
+ "▁w",
+ "ill"
+ ],
+ [
+ "▁",
+ "will"
+ ],
+ [
+ "▁c",
+ "omp"
+ ],
+ [
+ "▁com",
+ "p"
+ ],
+ [
+ "▁co",
+ "mp"
+ ],
+ [
+ "▁",
+ "comp"
+ ],
+ [
+ "▁o",
+ "ne"
+ ],
+ [
+ "▁on",
+ "e"
+ ],
+ [
+ "▁",
+ "one"
+ ],
+ [
+ "▁g",
+ "et"
+ ],
+ [
+ "▁ge",
+ "t"
+ ],
+ [
+ "▁",
+ "get"
+ ],
+ [
+ "o",
+ "v"
+ ],
+ [
+ "▁",
+ "Y"
+ ],
+ [
+ "ar",
+ "y"
+ ],
+ [
+ "a",
+ "ry"
+ ],
+ [
+ "oc",
+ "k"
+ ],
+ [
+ "o",
+ "ck"
+ ],
+ [
+ "▁s",
+ "he"
+ ],
+ [
+ "▁sh",
+ "e"
+ ],
+ [
+ "▁",
+ "she"
+ ],
+ [
+ "ch",
+ "e"
+ ],
+ [
+ "c",
+ "he"
+ ],
+ [
+ "f",
+ "t"
+ ],
+ [
+ "▁n",
+ "ew"
+ ],
+ [
+ "▁ne",
+ "w"
+ ],
+ [
+ "▁",
+ "new"
+ ],
+ [
+ "▁d",
+ "es"
+ ],
+ [
+ "▁de",
+ "s"
+ ],
+ [
+ "▁",
+ "des"
+ ],
+ [
+ "▁l",
+ "i"
+ ],
+ [
+ "▁",
+ "li"
+ ],
+ [
+ "en",
+ "ce"
+ ],
+ [
+ "enc",
+ "e"
+ ],
+ [
+ "▁s",
+ "a"
+ ],
+ [
+ "▁",
+ "sa"
+ ],
+ [
+ "re",
+ "ss"
+ ],
+ [
+ "res",
+ "s"
+ ],
+ [
+ "r",
+ "ess"
+ ],
+ [
+ "▁e",
+ "l"
+ ],
+ [
+ "▁",
+ "el"
+ ],
+ [
+ "▁u",
+ "nd"
+ ],
+ [
+ "▁un",
+ "d"
+ ],
+ [
+ "▁",
+ "und"
+ ],
+ [
+ "e",
+ "g"
+ ],
+ [
+ "fe",
+ "r"
+ ],
+ [
+ "f",
+ "er"
+ ],
+ [
+ "r",
+ "y"
+ ],
+ [
+ "ea",
+ "r"
+ ],
+ [
+ "e",
+ "ar"
+ ],
+ [
+ "os",
+ "e"
+ ],
+ [
+ "o",
+ "se"
+ ],
+ [
+ "ve",
+ "ry"
+ ],
+ [
+ "ver",
+ "y"
+ ],
+ [
+ "v",
+ "ery"
+ ],
+ [
+ "'",
+ ","
+ ],
+ [
+ "▁",
+ "+"
+ ],
+ [
+ "▁",
+ "в"
+ ],
+ [
+ "▁H",
+ "e"
+ ],
+ [
+ "▁",
+ "He"
+ ],
+ [
+ "ub",
+ "lic"
+ ],
+ [
+ "ubl",
+ "ic"
+ ],
+ [
+ "u",
+ "blic"
+ ],
+ [
+ "▁the",
+ "ir"
+ ],
+ [
+ "iz",
+ "e"
+ ],
+ [
+ "i",
+ "ze"
+ ],
+ [
+ "▁w",
+ "ere"
+ ],
+ [
+ "▁we",
+ "re"
+ ],
+ [
+ "▁wer",
+ "e"
+ ],
+ [
+ "▁",
+ "were"
+ ],
+ [
+ "in",
+ "k"
+ ],
+ [
+ "ow",
+ "n"
+ ],
+ [
+ "o",
+ "wn"
+ ],
+ [
+ "I",
+ "n"
+ ],
+ [
+ "{",
+ "\\"
+ ],
+ [
+ "▁h",
+ "as"
+ ],
+ [
+ "▁ha",
+ "s"
+ ],
+ [
+ "▁",
+ "has"
+ ],
+ [
+ "▁p",
+ "er"
+ ],
+ [
+ "▁pe",
+ "r"
+ ],
+ [
+ "▁",
+ "per"
+ ],
+ [
+ "▁I",
+ "t"
+ ],
+ [
+ "▁",
+ "It"
+ ],
+ [
+ "▁S",
+ "t"
+ ],
+ [
+ "▁",
+ "St"
+ ],
+ [
+ "he",
+ "r"
+ ],
+ [
+ "h",
+ "er"
+ ],
+ [
+ "je",
+ "ct"
+ ],
+ [
+ "j",
+ "ect"
+ ],
+ [
+ "р",
+ "а"
+ ],
+ [
+ "il",
+ "d"
+ ],
+ [
+ "i",
+ "ld"
+ ],
+ [
+ "s",
+ "o"
+ ],
+ [
+ "▁s",
+ "p"
+ ],
+ [
+ "▁",
+ "sp"
+ ],
+ [
+ "н",
+ "и"
+ ],
+ [
+ "d",
+ "u"
+ ],
+ [
+ "ro",
+ "w"
+ ],
+ [
+ "r",
+ "ow"
+ ],
+ [
+ "al",
+ "ue"
+ ],
+ [
+ "alu",
+ "e"
+ ],
+ [
+ "se",
+ "t"
+ ],
+ [
+ "s",
+ "et"
+ ],
+ [
+ "fo",
+ "rm"
+ ],
+ [
+ "for",
+ "m"
+ ],
+ [
+ "f",
+ "orm"
+ ],
+ [
+ "co",
+ "m"
+ ],
+ [
+ "c",
+ "om"
+ ],
+ [
+ "▁m",
+ "an"
+ ],
+ [
+ "▁ma",
+ "n"
+ ],
+ [
+ "▁",
+ "man"
+ ],
+ [
+ "on",
+ "t"
+ ],
+ [
+ "o",
+ "nt"
+ ],
+ [
+ "ul",
+ "l"
+ ],
+ [
+ "u",
+ "ll"
+ ],
+ [
+ "▁c",
+ "ont"
+ ],
+ [
+ "▁con",
+ "t"
+ ],
+ [
+ "▁co",
+ "nt"
+ ],
+ [
+ "▁",
+ "cont"
+ ],
+ [
+ "▁m",
+ "ore"
+ ],
+ [
+ "▁mor",
+ "e"
+ ],
+ [
+ "▁mo",
+ "re"
+ ],
+ [
+ "▁",
+ "more"
+ ],
+ [
+ "ic",
+ "k"
+ ],
+ [
+ "i",
+ "ck"
+ ],
+ [
+ "▁w",
+ "ould"
+ ],
+ [
+ "▁wo",
+ "uld"
+ ],
+ [
+ "▁e",
+ "v"
+ ],
+ [
+ "▁",
+ "ev"
+ ],
+ [
+ "▁ab",
+ "out"
+ ],
+ [
+ "▁",
+ "about"
+ ],
+ [
+ "it",
+ "ion"
+ ],
+ [
+ "iti",
+ "on"
+ ],
+ [
+ "▁",
+ "z"
+ ],
+ [
+ "ou",
+ "nd"
+ ],
+ [
+ "oun",
+ "d"
+ ],
+ [
+ "o",
+ "und"
+ ],
+ [
+ "re",
+ "e"
+ ],
+ [
+ "r",
+ "ee"
+ ],
+ [
+ "▁C",
+ "h"
+ ],
+ [
+ "▁",
+ "Ch"
+ ],
+ [
+ "▁wh",
+ "ich"
+ ],
+ [
+ "▁",
+ "which"
+ ],
+ [
+ "i",
+ "o"
+ ],
+ [
+ "()",
+ ";"
+ ],
+ [
+ "(",
+ ");"
+ ],
+ [
+ "▁w",
+ "ho"
+ ],
+ [
+ "▁wh",
+ "o"
+ ],
+ [
+ "▁",
+ "who"
+ ],
+ [
+ "er",
+ "r"
+ ],
+ [
+ "e",
+ "rr"
+ ],
+ [
+ "or",
+ "y"
+ ],
+ [
+ "o",
+ "ry"
+ ],
+ [
+ "ou",
+ "nt"
+ ],
+ [
+ "oun",
+ "t"
+ ],
+ [
+ "o",
+ "unt"
+ ],
+ [
+ "at",
+ "ions"
+ ],
+ [
+ "ation",
+ "s"
+ ],
+ [
+ "ati",
+ "ons"
+ ],
+ [
+ "atio",
+ "ns"
+ ],
+ [
+ "▁",
+ "с"
+ ],
+ [
+ "ri",
+ "ng"
+ ],
+ [
+ "rin",
+ "g"
+ ],
+ [
+ "r",
+ "ing"
+ ],
+ [
+ "<",
+ "/"
+ ],
+ [
+ "▁f",
+ "e"
+ ],
+ [
+ "▁",
+ "fe"
+ ],
+ [
+ "к",
+ "о"
+ ],
+ [
+ "н",
+ "о"
+ ],
+ [
+ "▁d",
+ "is"
+ ],
+ [
+ "▁di",
+ "s"
+ ],
+ [
+ "▁",
+ "dis"
+ ],
+ [
+ "m",
+ "a"
+ ],
+ [
+ "▁t",
+ "hem"
+ ],
+ [
+ "▁the",
+ "m"
+ ],
+ [
+ "▁th",
+ "em"
+ ],
+ [
+ "▁a",
+ "ny"
+ ],
+ [
+ "▁an",
+ "y"
+ ],
+ [
+ "▁",
+ "any"
+ ],
+ [
+ "▁n",
+ "o"
+ ],
+ [
+ "▁",
+ "no"
+ ],
+ [
+ "--",
+ "------"
+ ],
+ [
+ "----",
+ "----"
+ ],
+ [
+ "---",
+ "-----"
+ ],
+ [
+ "-----",
+ "---"
+ ],
+ [
+ "------",
+ "--"
+ ],
+ [
+ "-------",
+ "-"
+ ],
+ [
+ "-",
+ "-------"
+ ],
+ [
+ "▁p",
+ "re"
+ ],
+ [
+ "▁pr",
+ "e"
+ ],
+ [
+ "▁",
+ "pre"
+ ],
+ [
+ "▁t",
+ "e"
+ ],
+ [
+ "▁",
+ "te"
+ ],
+ [
+ "▁r",
+ "o"
+ ],
+ [
+ "▁",
+ "ro"
+ ],
+ [
+ "▁h",
+ "im"
+ ],
+ [
+ "▁hi",
+ "m"
+ ],
+ [
+ "▁",
+ "him"
+ ],
+ [
+ "▁",
+ ":"
+ ],
+ [
+ "u",
+ "p"
+ ],
+ [
+ "▁in",
+ "t"
+ ],
+ [
+ "▁i",
+ "nt"
+ ],
+ [
+ "▁",
+ "int"
+ ],
+ [
+ "▁a",
+ "g"
+ ],
+ [
+ "▁",
+ "ag"
+ ],
+ [
+ "S",
+ "t"
+ ],
+ [
+ "ar",
+ "k"
+ ],
+ [
+ "e",
+ "x"
+ ],
+ [
+ "p",
+ "h"
+ ],
+ [
+ "ie",
+ "nt"
+ ],
+ [
+ "ien",
+ "t"
+ ],
+ [
+ "i",
+ "ent"
+ ],
+ [
+ "el",
+ "y"
+ ],
+ [
+ "e",
+ "ly"
+ ],
+ [
+ "▁p",
+ "r"
+ ],
+ [
+ "▁",
+ "pr"
+ ],
+ [
+ "E",
+ "R"
+ ],
+ [
+ "▁im",
+ "port"
+ ],
+ [
+ "▁imp",
+ "ort"
+ ],
+ [
+ "▁",
+ "import"
+ ],
+ [
+ "▁t",
+ "ime"
+ ],
+ [
+ "▁tim",
+ "e"
+ ],
+ [
+ "▁ti",
+ "me"
+ ],
+ [
+ "▁",
+ "time"
+ ],
+ [
+ "р",
+ "о"
+ ],
+ [
+ "pr",
+ "o"
+ ],
+ [
+ "p",
+ "ro"
+ ],
+ [
+ "Us",
+ "er"
+ ],
+ [
+ "Use",
+ "r"
+ ],
+ [
+ "U",
+ "ser"
+ ],
+ [
+ "l",
+ "o"
+ ],
+ [
+ "▁",
+ "/"
+ ],
+ [
+ "▁",
+ "["
+ ],
+ [
+ "or",
+ "s"
+ ],
+ [
+ "o",
+ "rs"
+ ],
+ [
+ "=",
+ "\""
+ ],
+ [
+ "▁t",
+ "here"
+ ],
+ [
+ "▁the",
+ "re"
+ ],
+ [
+ "▁th",
+ "ere"
+ ],
+ [
+ "▁ther",
+ "e"
+ ],
+ [
+ "▁",
+ "there"
+ ],
+ [
+ "▁l",
+ "ike"
+ ],
+ [
+ "▁li",
+ "ke"
+ ],
+ [
+ "▁lik",
+ "e"
+ ],
+ [
+ "▁",
+ "like"
+ ],
+ [
+ "ol",
+ "d"
+ ],
+ [
+ "o",
+ "ld"
+ ],
+ [
+ "▁w",
+ "hen"
+ ],
+ [
+ "▁wh",
+ "en"
+ ],
+ [
+ "▁whe",
+ "n"
+ ],
+ [
+ "▁",
+ "when"
+ ],
+ [
+ "ve",
+ "rs"
+ ],
+ [
+ "ver",
+ "s"
+ ],
+ [
+ "v",
+ "ers"
+ ],
+ [
+ "▁s",
+ "ome"
+ ],
+ [
+ "▁so",
+ "me"
+ ],
+ [
+ "▁som",
+ "e"
+ ],
+ [
+ "▁",
+ "some"
+ ],
+ [
+ "in",
+ "gs"
+ ],
+ [
+ "ing",
+ "s"
+ ],
+ [
+ ")",
+ ")"
+ ],
+ [
+ "▁p",
+ "art"
+ ],
+ [
+ "▁par",
+ "t"
+ ],
+ [
+ "▁pa",
+ "rt"
+ ],
+ [
+ "▁",
+ "part"
+ ],
+ [
+ "ic",
+ "al"
+ ],
+ [
+ "ica",
+ "l"
+ ],
+ [
+ "i",
+ "cal"
+ ],
+ [
+ "▁f",
+ "un"
+ ],
+ [
+ "▁fu",
+ "n"
+ ],
+ [
+ "▁",
+ "fun"
+ ],
+ [
+ "▁k",
+ "n"
+ ],
+ [
+ "▁",
+ "kn"
+ ],
+ [
+ "ay",
+ "s"
+ ],
+ [
+ "a",
+ "ys"
+ ],
+ [
+ "ie",
+ "r"
+ ],
+ [
+ "i",
+ "er"
+ ],
+ [
+ "▁b",
+ "een"
+ ],
+ [
+ "▁be",
+ "en"
+ ],
+ [
+ "ov",
+ "e"
+ ],
+ [
+ "o",
+ "ve"
+ ],
+ [
+ "▁s",
+ "c"
+ ],
+ [
+ "▁",
+ "sc"
+ ],
+ [
+ "ia",
+ "n"
+ ],
+ [
+ "i",
+ "an"
+ ],
+ [
+ "▁o",
+ "ver"
+ ],
+ [
+ "▁ov",
+ "er"
+ ],
+ [
+ "▁",
+ "over"
+ ],
+ [
+ "ie",
+ "l"
+ ],
+ [
+ "i",
+ "el"
+ ],
+ [
+ "▁p",
+ "e"
+ ],
+ [
+ "▁",
+ "pe"
+ ],
+ [
+ "ri",
+ "b"
+ ],
+ [
+ "r",
+ "ib"
+ ],
+ [
+ "pu",
+ "t"
+ ],
+ [
+ "p",
+ "ut"
+ ],
+ [
+ "e",
+ "c"
+ ],
+ [
+ "et",
+ "h"
+ ],
+ [
+ "e",
+ "th"
+ ],
+ [
+ "ar",
+ "am"
+ ],
+ [
+ "ara",
+ "m"
+ ],
+ [
+ "a",
+ "ram"
+ ],
+ [
+ "ap",
+ "p"
+ ],
+ [
+ "a",
+ "pp"
+ ],
+ [
+ "▁",
+ "–"
+ ],
+ [
+ "▁s",
+ "tat"
+ ],
+ [
+ "▁st",
+ "at"
+ ],
+ [
+ "▁sta",
+ "t"
+ ],
+ [
+ "▁",
+ "stat"
+ ],
+ [
+ "po",
+ "n"
+ ],
+ [
+ "p",
+ "on"
+ ],
+ [
+ "▁w",
+ "hat"
+ ],
+ [
+ "▁wh",
+ "at"
+ ],
+ [
+ "▁",
+ "what"
+ ],
+ [
+ "pt",
+ "ion"
+ ],
+ [
+ "w",
+ "e"
+ ],
+ [
+ "ad",
+ "e"
+ ],
+ [
+ "a",
+ "de"
+ ],
+ [
+ "▁w",
+ "ork"
+ ],
+ [
+ "▁wor",
+ "k"
+ ],
+ [
+ "▁",
+ "work"
+ ],
+ [
+ "te",
+ "xt"
+ ],
+ [
+ "tex",
+ "t"
+ ],
+ [
+ "t",
+ "ext"
+ ],
+ [
+ "▁s",
+ "aid"
+ ],
+ [
+ "▁sa",
+ "id"
+ ],
+ [
+ "▁#",
+ "##"
+ ],
+ [
+ "▁##",
+ "#"
+ ],
+ [
+ "▁",
+ "###"
+ ],
+ [
+ "I",
+ "N"
+ ],
+ [
+ "▁j",
+ "ust"
+ ],
+ [
+ "▁ju",
+ "st"
+ ],
+ [
+ "▁",
+ "just"
+ ],
+ [
+ "ir",
+ "st"
+ ],
+ [
+ "irs",
+ "t"
+ ],
+ [
+ "▁in",
+ "to"
+ ],
+ [
+ "▁int",
+ "o"
+ ],
+ [
+ "▁",
+ "into"
+ ],
+ [
+ "▁con",
+ "st"
+ ],
+ [
+ "▁cons",
+ "t"
+ ],
+ [
+ "▁",
+ "const"
+ ],
+ [
+ "our",
+ "ce"
+ ],
+ [
+ "t",
+ "t"
+ ],
+ [
+ "p",
+ "s"
+ ],
+ [
+ "p",
+ "r"
+ ],
+ [
+ "er",
+ "v"
+ ],
+ [
+ "e",
+ "rv"
+ ],
+ [
+ "it",
+ "t"
+ ],
+ [
+ "i",
+ "tt"
+ ],
+ [
+ "u",
+ "g"
+ ],
+ [
+ "_",
+ "{"
+ ],
+ [
+ "en",
+ "ts"
+ ],
+ [
+ "ent",
+ "s"
+ ],
+ [
+ "is",
+ "h"
+ ],
+ [
+ "i",
+ "sh"
+ ],
+ [
+ "en",
+ "er"
+ ],
+ [
+ "ene",
+ "r"
+ ],
+ [
+ "e",
+ "ner"
+ ],
+ [
+ "▁in",
+ "ter"
+ ],
+ [
+ "▁int",
+ "er"
+ ],
+ [
+ "▁inte",
+ "r"
+ ],
+ [
+ "▁",
+ "inter"
+ ],
+ [
+ "pl",
+ "e"
+ ],
+ [
+ "p",
+ "le"
+ ],
+ [
+ "ol",
+ "l"
+ ],
+ [
+ "o",
+ "ll"
+ ],
+ [
+ "me",
+ "r"
+ ],
+ [
+ "m",
+ "er"
+ ],
+ [
+ "at",
+ "er"
+ ],
+ [
+ "ate",
+ "r"
+ ],
+ [
+ "a",
+ "ter"
+ ],
+ [
+ "oo",
+ "l"
+ ],
+ [
+ "o",
+ "ol"
+ ],
+ [
+ "e",
+ "f"
+ ],
+ [
+ "▁p",
+ "ublic"
+ ],
+ [
+ "▁pub",
+ "lic"
+ ],
+ [
+ "▁pu",
+ "blic"
+ ],
+ [
+ "▁publi",
+ "c"
+ ],
+ [
+ "▁",
+ "public"
+ ],
+ [
+ "▁o",
+ "ther"
+ ],
+ [
+ "▁ot",
+ "her"
+ ],
+ [
+ "▁",
+ "other"
+ ],
+ [
+ "р",
+ "е"
+ ],
+ [
+ "▁d",
+ "ef"
+ ],
+ [
+ "▁de",
+ "f"
+ ],
+ [
+ "▁",
+ "def"
+ ],
+ [
+ "▁",
+ "@"
+ ],
+ [
+ "г",
+ "о"
+ ],
+ [
+ "oin",
+ "t"
+ ],
+ [
+ "oi",
+ "nt"
+ ],
+ [
+ "o",
+ "int"
+ ],
+ [
+ "▁o",
+ "ff"
+ ],
+ [
+ "▁of",
+ "f"
+ ],
+ [
+ "▁",
+ "off"
+ ],
+ [
+ "oi",
+ "d"
+ ],
+ [
+ "o",
+ "id"
+ ],
+ [
+ "re",
+ "turn"
+ ],
+ [
+ "ret",
+ "urn"
+ ],
+ [
+ "r",
+ "eturn"
+ ],
+ [
+ "▁s",
+ "et"
+ ],
+ [
+ "▁se",
+ "t"
+ ],
+ [
+ "▁",
+ "set"
+ ],
+ [
+ "w",
+ "o"
+ ],
+ [
+ "ft",
+ "er"
+ ],
+ [
+ "fte",
+ "r"
+ ],
+ [
+ "f",
+ "ter"
+ ],
+ [
+ "s",
+ "h"
+ ],
+ [
+ "**",
+ "******"
+ ],
+ [
+ "****",
+ "****"
+ ],
+ [
+ "******",
+ "**"
+ ],
+ [
+ "▁o",
+ "ur"
+ ],
+ [
+ "▁ou",
+ "r"
+ ],
+ [
+ "▁",
+ "our"
+ ],
+ [
+ "ri",
+ "v"
+ ],
+ [
+ "r",
+ "iv"
+ ],
+ [
+ "is",
+ "s"
+ ],
+ [
+ "i",
+ "ss"
+ ],
+ [
+ "▁W",
+ "e"
+ ],
+ [
+ "▁",
+ "We"
+ ],
+ [
+ "n",
+ "g"
+ ],
+ [
+ "▁o",
+ "b"
+ ],
+ [
+ "▁",
+ "ob"
+ ],
+ [
+ "s",
+ "s"
+ ],
+ [
+ "g",
+ "r"
+ ],
+ [
+ "▁t",
+ "han"
+ ],
+ [
+ "▁th",
+ "an"
+ ],
+ [
+ "▁",
+ "than"
+ ],
+ [
+ "pe",
+ "ct"
+ ],
+ [
+ "pec",
+ "t"
+ ],
+ [
+ "p",
+ "ect"
+ ],
+ [
+ "ie",
+ "d"
+ ],
+ [
+ "i",
+ "ed"
+ ],
+ [
+ "s",
+ "c"
+ ],
+ [
+ "ie",
+ "w"
+ ],
+ [
+ "i",
+ "ew"
+ ],
+ [
+ "de",
+ "r"
+ ],
+ [
+ "d",
+ "er"
+ ],
+ [
+ "ys",
+ "t"
+ ],
+ [
+ "y",
+ "st"
+ ],
+ [
+ "e",
+ "v"
+ ],
+ [
+ "▁c",
+ "ould"
+ ],
+ [
+ "▁co",
+ "uld"
+ ],
+ [
+ "▁cou",
+ "ld"
+ ],
+ [
+ "▁",
+ "could"
+ ],
+ [
+ "an",
+ "n"
+ ],
+ [
+ "a",
+ "nn"
+ ],
+ [
+ "en",
+ "c"
+ ],
+ [
+ "e",
+ "nc"
+ ],
+ [
+ "O",
+ "N"
+ ],
+ [
+ "i",
+ "x"
+ ],
+ [
+ "an",
+ "c"
+ ],
+ [
+ "a",
+ "nc"
+ ],
+ [
+ "▁al",
+ "so"
+ ],
+ [
+ "▁als",
+ "o"
+ ],
+ [
+ "▁",
+ "also"
+ ],
+ [
+ "re",
+ "at"
+ ],
+ [
+ "rea",
+ "t"
+ ],
+ [
+ "▁a",
+ "m"
+ ],
+ [
+ "▁",
+ "am"
+ ],
+ [
+ "▁b",
+ "ec"
+ ],
+ [
+ "▁be",
+ "c"
+ ],
+ [
+ "▁",
+ "bec"
+ ],
+ [
+ "▁",
+ "и"
+ ],
+ [
+ "ua",
+ "l"
+ ],
+ [
+ "u",
+ "al"
+ ],
+ [
+ "pe",
+ "c"
+ ],
+ [
+ "p",
+ "ec"
+ ],
+ [
+ "▁",
+ "."
+ ],
+ [
+ "▁b",
+ "l"
+ ],
+ [
+ "▁",
+ "bl"
+ ],
+ [
+ "le",
+ "ct"
+ ],
+ [
+ "l",
+ "ect"
+ ],
+ [
+ "op",
+ "le"
+ ],
+ [
+ "opl",
+ "e"
+ ],
+ [
+ "o",
+ "ple"
+ ],
+ [
+ "y",
+ "s"
+ ],
+ [
+ "▁g",
+ "r"
+ ],
+ [
+ "▁",
+ "gr"
+ ],
+ [
+ "ic",
+ "t"
+ ],
+ [
+ "i",
+ "ct"
+ ],
+ [
+ "i",
+ "k"
+ ],
+ [
+ "tr",
+ "ing"
+ ],
+ [
+ "tri",
+ "ng"
+ ],
+ [
+ "t",
+ "ring"
+ ],
+ [
+ "▁T",
+ "his"
+ ],
+ [
+ "▁Th",
+ "is"
+ ],
+ [
+ "▁",
+ "This"
+ ],
+ [
+ "▁b",
+ "ack"
+ ],
+ [
+ "▁ba",
+ "ck"
+ ],
+ [
+ "▁",
+ "back"
+ ],
+ [
+ "▁",
+ "о"
+ ],
+ [
+ "▁f",
+ "in"
+ ],
+ [
+ "▁fi",
+ "n"
+ ],
+ [
+ "▁",
+ "fin"
+ ],
+ [
+ "at",
+ "ch"
+ ],
+ [
+ "Co",
+ "n"
+ ],
+ [
+ "C",
+ "on"
+ ],
+ [
+ "(",
+ "'"
+ ],
+ [
+ "er",
+ "m"
+ ],
+ [
+ "e",
+ "rm"
+ ],
+ [
+ "▁=",
+ "="
+ ],
+ [
+ "▁",
+ "=="
+ ],
+ [
+ "_",
+ "_"
+ ],
+ [
+ "na",
+ "me"
+ ],
+ [
+ "nam",
+ "e"
+ ],
+ [
+ "n",
+ "ame"
+ ],
+ [
+ ",",
+ "\""
+ ],
+ [
+ "▁d",
+ "id"
+ ],
+ [
+ "▁di",
+ "d"
+ ],
+ [
+ "▁",
+ "did"
+ ],
+ [
+ "is",
+ "e"
+ ],
+ [
+ "i",
+ "se"
+ ],
+ [
+ "▁on",
+ "ly"
+ ],
+ [
+ "▁",
+ "only"
+ ],
+ [
+ "ru",
+ "ct"
+ ],
+ [
+ "r",
+ "uct"
+ ],
+ [
+ "le",
+ "s"
+ ],
+ [
+ "l",
+ "es"
+ ],
+ [
+ "▁t",
+ "hen"
+ ],
+ [
+ "▁the",
+ "n"
+ ],
+ [
+ "▁th",
+ "en"
+ ],
+ [
+ "▁",
+ "then"
+ ],
+ [
+ "au",
+ "se"
+ ],
+ [
+ "aus",
+ "e"
+ ],
+ [
+ "a",
+ "use"
+ ],
+ [
+ "в",
+ "а"
+ ],
+ [
+ "▁it",
+ "s"
+ ],
+ [
+ "▁i",
+ "ts"
+ ],
+ [
+ "▁",
+ "its"
+ ],
+ [
+ "ri",
+ "t"
+ ],
+ [
+ "r",
+ "it"
+ ],
+ [
+ "▁k",
+ "now"
+ ],
+ [
+ "▁kn",
+ "ow"
+ ],
+ [
+ "▁",
+ "know"
+ ],
+ [
+ "ie",
+ "ld"
+ ],
+ [
+ "iel",
+ "d"
+ ],
+ [
+ "i",
+ "eld"
+ ],
+ [
+ "▁c",
+ "lass"
+ ],
+ [
+ "▁cl",
+ "ass"
+ ],
+ [
+ "▁clas",
+ "s"
+ ],
+ [
+ "▁",
+ "class"
+ ],
+ [
+ "▁",
+ ">"
+ ],
+ [
+ "▁e",
+ "m"
+ ],
+ [
+ "▁",
+ "em"
+ ],
+ [
+ "▁$",
+ "\\"
+ ],
+ [
+ "▁",
+ "$\\"
+ ],
+ [
+ "▁y",
+ "ear"
+ ],
+ [
+ "▁ye",
+ "ar"
+ ],
+ [
+ "▁",
+ "year"
+ ],
+ [
+ "w",
+ "n"
+ ],
+ [
+ "}",
+ ","
+ ],
+ [
+ "▁d",
+ "el"
+ ],
+ [
+ "▁de",
+ "l"
+ ],
+ [
+ "▁",
+ "del"
+ ],
+ [
+ "al",
+ "e"
+ ],
+ [
+ "a",
+ "le"
+ ],
+ [
+ "t",
+ "y"
+ ],
+ [
+ "fi",
+ "g"
+ ],
+ [
+ "f",
+ "ig"
+ ],
+ [
+ "s",
+ "p"
+ ],
+ [
+ "he",
+ "d"
+ ],
+ [
+ "h",
+ "ed"
+ ],
+ [
+ "ro",
+ "und"
+ ],
+ [
+ "rou",
+ "nd"
+ ],
+ [
+ "r",
+ "ound"
+ ],
+ [
+ "e",
+ "w"
+ ],
+ [
+ "▁d",
+ "i"
+ ],
+ [
+ "▁",
+ "di"
+ ],
+ [
+ "▁d",
+ "er"
+ ],
+ [
+ "▁de",
+ "r"
+ ],
+ [
+ "▁",
+ "der"
+ ],
+ [
+ "р",
+ "и"
+ ],
+ [
+ "re",
+ "d"
+ ],
+ [
+ "r",
+ "ed"
+ ],
+ [
+ "th",
+ "is"
+ ],
+ [
+ "t",
+ "his"
+ ],
+ [
+ "le",
+ "t"
+ ],
+ [
+ "l",
+ "et"
+ ],
+ [
+ "R",
+ "E"
+ ],
+ [
+ "a",
+ "x"
+ ],
+ [
+ "f",
+ "r"
+ ],
+ [
+ "ess",
+ "age"
+ ],
+ [
+ "essa",
+ "ge"
+ ],
+ [
+ "ou",
+ "gh"
+ ],
+ [
+ "o",
+ "ugh"
+ ],
+ [
+ "▁c",
+ "omm"
+ ],
+ [
+ "▁com",
+ "m"
+ ],
+ [
+ "▁co",
+ "mm"
+ ],
+ [
+ "▁",
+ "comm"
+ ],
+ [
+ "f",
+ "o"
+ ],
+ [
+ "uc",
+ "h"
+ ],
+ [
+ "u",
+ "ch"
+ ],
+ [
+ "o",
+ "y"
+ ],
+ [
+ "▁pe",
+ "ople"
+ ],
+ [
+ "▁",
+ "people"
+ ],
+ [
+ "yst",
+ "em"
+ ],
+ [
+ "ys",
+ "tem"
+ ],
+ [
+ "▁f",
+ "irst"
+ ],
+ [
+ "▁fir",
+ "st"
+ ],
+ [
+ "▁",
+ "first"
+ ],
+ [
+ "▁f",
+ "unction"
+ ],
+ [
+ "▁fun",
+ "ction"
+ ],
+ [
+ "▁",
+ "function"
+ ],
+ [
+ "an",
+ "ge"
+ ],
+ [
+ "ang",
+ "e"
+ ],
+ [
+ "▁h",
+ "ow"
+ ],
+ [
+ "▁ho",
+ "w"
+ ],
+ [
+ "▁",
+ "how"
+ ],
+ [
+ "▁e",
+ "t"
+ ],
+ [
+ "▁",
+ "et"
+ ],
+ [
+ "a",
+ "h"
+ ],
+ [
+ "▁l",
+ "ook"
+ ],
+ [
+ "▁lo",
+ "ok"
+ ],
+ [
+ "▁",
+ "look"
+ ],
+ [
+ "т",
+ "о"
+ ],
+ [
+ "un",
+ "d"
+ ],
+ [
+ "u",
+ "nd"
+ ],
+ [
+ "▁u",
+ "nder"
+ ],
+ [
+ "▁un",
+ "der"
+ ],
+ [
+ "▁und",
+ "er"
+ ],
+ [
+ "▁",
+ "under"
+ ],
+ [
+ "к",
+ "а"
+ ],
+ [
+ "▁",
+ "!"
+ ],
+ [
+ "ra",
+ "y"
+ ],
+ [
+ "r",
+ "ay"
+ ],
+ [
+ "S",
+ "T"
+ ],
+ [
+ "if",
+ "ic"
+ ],
+ [
+ "ifi",
+ "c"
+ ],
+ [
+ "i",
+ "fic"
+ ],
+ [
+ "л",
+ "и"
+ ],
+ [
+ "re",
+ "ad"
+ ],
+ [
+ "rea",
+ "d"
+ ],
+ [
+ "r",
+ "ead"
+ ],
+ [
+ "▁b",
+ "et"
+ ],
+ [
+ "▁be",
+ "t"
+ ],
+ [
+ "▁",
+ "bet"
+ ],
+ [
+ "io",
+ "us"
+ ],
+ [
+ "i",
+ "ous"
+ ],
+ [
+ "ar",
+ "g"
+ ],
+ [
+ "a",
+ "rg"
+ ],
+ [
+ "▁n",
+ "eed"
+ ],
+ [
+ "▁ne",
+ "ed"
+ ],
+ [
+ "▁",
+ "need"
+ ],
+ [
+ "ma",
+ "th"
+ ],
+ [
+ "mat",
+ "h"
+ ],
+ [
+ "m",
+ "ath"
+ ],
+ [
+ "▁н",
+ "а"
+ ],
+ [
+ "▁",
+ "на"
+ ],
+ [
+ "er",
+ "t"
+ ],
+ [
+ "e",
+ "rt"
+ ],
+ [
+ "▁o",
+ "p"
+ ],
+ [
+ "▁",
+ "op"
+ ],
+ [
+ "▁a",
+ "cc"
+ ],
+ [
+ "▁ac",
+ "c"
+ ],
+ [
+ "▁",
+ "acc"
+ ],
+ [
+ "Pr",
+ "o"
+ ],
+ [
+ "P",
+ "ro"
+ ],
+ [
+ "▁e",
+ "st"
+ ],
+ [
+ "▁es",
+ "t"
+ ],
+ [
+ "▁",
+ "est"
+ ],
+ [
+ "▁U",
+ "n"
+ ],
+ [
+ "▁",
+ "Un"
+ ],
+ [
+ "▁e",
+ "nt"
+ ],
+ [
+ "▁en",
+ "t"
+ ],
+ [
+ "▁",
+ "ent"
+ ],
+ [
+ "▁re",
+ "c"
+ ],
+ [
+ "▁r",
+ "ec"
+ ],
+ [
+ "▁",
+ "rec"
+ ],
+ [
+ "▁u",
+ "se"
+ ],
+ [
+ "▁us",
+ "e"
+ ],
+ [
+ "▁",
+ "use"
+ ],
+ [
+ "е",
+ "н"
+ ],
+ [
+ "▁p",
+ "ar"
+ ],
+ [
+ "▁pa",
+ "r"
+ ],
+ [
+ "▁",
+ "par"
+ ],
+ [
+ "a",
+ "z"
+ ],
+ [
+ "▁",
+ "д"
+ ],
+ [
+ "▁W",
+ "h"
+ ],
+ [
+ "▁",
+ "Wh"
+ ],
+ [
+ "sel",
+ "f"
+ ],
+ [
+ "s",
+ "elf"
+ ],
+ [
+ "▁k",
+ "e"
+ ],
+ [
+ "▁",
+ "ke"
+ ],
+ [
+ "т",
+ "а"
+ ],
+ [
+ "▁w",
+ "ant"
+ ],
+ [
+ "▁wa",
+ "nt"
+ ],
+ [
+ "▁",
+ "want"
+ ],
+ [
+ "▁e",
+ "nd"
+ ],
+ [
+ "▁en",
+ "d"
+ ],
+ [
+ "▁",
+ "end"
+ ],
+ [
+ "▁d",
+ "on"
+ ],
+ [
+ "▁do",
+ "n"
+ ],
+ [
+ "▁",
+ "don"
+ ],
+ [
+ "e",
+ "k"
+ ],
+ [
+ "re",
+ "n"
+ ],
+ [
+ "r",
+ "en"
+ ],
+ [
+ "Na",
+ "me"
+ ],
+ [
+ "N",
+ "ame"
+ ],
+ [
+ "▁=",
+ ">"
+ ],
+ [
+ "▁",
+ "=>"
+ ],
+ [
+ "▁a",
+ "pp"
+ ],
+ [
+ "▁ap",
+ "p"
+ ],
+ [
+ "▁",
+ "app"
+ ],
+ [
+ "▁qu",
+ "e"
+ ],
+ [
+ "▁q",
+ "ue"
+ ],
+ [
+ "▁",
+ "que"
+ ],
+ [
+ "ig",
+ "h"
+ ],
+ [
+ "i",
+ "gh"
+ ],
+ [
+ "▁b",
+ "u"
+ ],
+ [
+ "▁",
+ "bu"
+ ],
+ [
+ "eq",
+ "u"
+ ],
+ [
+ "e",
+ "qu"
+ ],
+ [
+ "ve",
+ "l"
+ ],
+ [
+ "v",
+ "el"
+ ],
+ [
+ "▁a",
+ "ct"
+ ],
+ [
+ "▁ac",
+ "t"
+ ],
+ [
+ "▁",
+ "act"
+ ],
+ [
+ "cr",
+ "e"
+ ],
+ [
+ "c",
+ "re"
+ ],
+ [
+ "A",
+ "T"
+ ],
+ [
+ "▁v",
+ "ar"
+ ],
+ [
+ "▁va",
+ "r"
+ ],
+ [
+ "▁",
+ "var"
+ ],
+ [
+ "ce",
+ "ss"
+ ],
+ [
+ "ces",
+ "s"
+ ],
+ [
+ "c",
+ "ess"
+ ],
+ [
+ "==",
+ "=="
+ ],
+ [
+ "===",
+ "="
+ ],
+ [
+ "=",
+ "==="
+ ],
+ [
+ "E",
+ "x"
+ ],
+ [
+ "▁a",
+ "dd"
+ ],
+ [
+ "▁ad",
+ "d"
+ ],
+ [
+ "▁",
+ "add"
+ ],
+ [
+ "▁m",
+ "od"
+ ],
+ [
+ "▁mo",
+ "d"
+ ],
+ [
+ "▁",
+ "mod"
+ ],
+ [
+ "un",
+ "g"
+ ],
+ [
+ "u",
+ "ng"
+ ],
+ [
+ "▁w",
+ "here"
+ ],
+ [
+ "▁wh",
+ "ere"
+ ],
+ [
+ "▁whe",
+ "re"
+ ],
+ [
+ "▁",
+ "where"
+ ],
+ [
+ "ni",
+ "ng"
+ ],
+ [
+ "n",
+ "ing"
+ ],
+ [
+ "▁f",
+ "l"
+ ],
+ [
+ "▁",
+ "fl"
+ ],
+ [
+ "al",
+ "s"
+ ],
+ [
+ "a",
+ "ls"
+ ],
+ [
+ "ter",
+ "n"
+ ],
+ [
+ "te",
+ "rn"
+ ],
+ [
+ "t",
+ "ern"
+ ],
+ [
+ "}",
+ "}"
+ ],
+ [
+ "▁A",
+ "l"
+ ],
+ [
+ "▁",
+ "Al"
+ ],
+ [
+ "▁p",
+ "os"
+ ],
+ [
+ "▁po",
+ "s"
+ ],
+ [
+ "▁",
+ "pos"
+ ],
+ [
+ "an",
+ "k"
+ ],
+ [
+ "▁a",
+ "p"
+ ],
+ [
+ "▁",
+ "ap"
+ ],
+ [
+ "en",
+ "g"
+ ],
+ [
+ "e",
+ "ng"
+ ],
+ [
+ "▁",
+ "“"
+ ],
+ [
+ "bl",
+ "e"
+ ],
+ [
+ "b",
+ "le"
+ ],
+ [
+ "▁re",
+ "g"
+ ],
+ [
+ "▁r",
+ "eg"
+ ],
+ [
+ "▁",
+ "reg"
+ ],
+ [
+ "^",
+ "{"
+ ],
+ [
+ "▁S",
+ "he"
+ ],
+ [
+ "▁Sh",
+ "e"
+ ],
+ [
+ "▁",
+ "She"
+ ],
+ [
+ "▁*",
+ "/"
+ ],
+ [
+ "▁",
+ "*/"
+ ],
+ [
+ "ud",
+ "e"
+ ],
+ [
+ "u",
+ "de"
+ ],
+ [
+ "ad",
+ "d"
+ ],
+ [
+ "a",
+ "dd"
+ ],
+ [
+ "▁t",
+ "wo"
+ ],
+ [
+ "▁tw",
+ "o"
+ ],
+ [
+ "▁",
+ "two"
+ ],
+ [
+ "▁c",
+ "ol"
+ ],
+ [
+ "▁co",
+ "l"
+ ],
+ [
+ "▁",
+ "col"
+ ],
+ [
+ "▁s",
+ "m"
+ ],
+ [
+ "▁",
+ "sm"
+ ],
+ [
+ "ai",
+ "r"
+ ],
+ [
+ "a",
+ "ir"
+ ],
+ [
+ "▁m",
+ "ay"
+ ],
+ [
+ "▁ma",
+ "y"
+ ],
+ [
+ "▁",
+ "may"
+ ],
+ [
+ "fo",
+ "re"
+ ],
+ [
+ "for",
+ "e"
+ ],
+ [
+ "f",
+ "ore"
+ ],
+ [
+ "▁Y",
+ "ou"
+ ],
+ [
+ "▁",
+ "You"
+ ],
+ [
+ "ro",
+ "ugh"
+ ],
+ [
+ "rou",
+ "gh"
+ ],
+ [
+ "r",
+ "ough"
+ ],
+ [
+ "▁c",
+ "he"
+ ],
+ [
+ "▁ch",
+ "e"
+ ],
+ [
+ "▁",
+ "che"
+ ],
+ [
+ "▁a",
+ "tt"
+ ],
+ [
+ "▁at",
+ "t"
+ ],
+ [
+ "▁",
+ "att"
+ ],
+ [
+ "ot",
+ "h"
+ ],
+ [
+ "o",
+ "th"
+ ],
+ [
+ "л",
+ "а"
+ ],
+ [
+ "▁c",
+ "o"
+ ],
+ [
+ "▁",
+ "co"
+ ],
+ [
+ "at",
+ "es"
+ ],
+ [
+ "ate",
+ "s"
+ ],
+ [
+ "a",
+ "tes"
+ ],
+ [
+ "▁re",
+ "m"
+ ],
+ [
+ "▁r",
+ "em"
+ ],
+ [
+ "▁",
+ "rem"
+ ],
+ [
+ "oo",
+ "d"
+ ],
+ [
+ "o",
+ "od"
+ ],
+ [
+ "Ty",
+ "pe"
+ ],
+ [
+ "Typ",
+ "e"
+ ],
+ [
+ "T",
+ "ype"
+ ],
+ [
+ "le",
+ "d"
+ ],
+ [
+ "l",
+ "ed"
+ ],
+ [
+ "fu",
+ "l"
+ ],
+ [
+ "f",
+ "ul"
+ ],
+ [
+ "▁s",
+ "elf"
+ ],
+ [
+ "▁sel",
+ "f"
+ ],
+ [
+ "▁",
+ "self"
+ ],
+ [
+ "o",
+ "f"
+ ],
+ [
+ "▁A",
+ "r"
+ ],
+ [
+ "▁",
+ "Ar"
+ ],
+ [
+ "qu",
+ "e"
+ ],
+ [
+ "q",
+ "ue"
+ ],
+ [
+ "▁e",
+ "very"
+ ],
+ [
+ "▁ev",
+ "ery"
+ ],
+ [
+ "▁ever",
+ "y"
+ ],
+ [
+ "▁",
+ "every"
+ ],
+ [
+ "re",
+ "f"
+ ],
+ [
+ "r",
+ "ef"
+ ],
+ [
+ "Th",
+ "e"
+ ],
+ [
+ "T",
+ "he"
+ ],
+ [
+ "▁A",
+ "nd"
+ ],
+ [
+ "▁An",
+ "d"
+ ],
+ [
+ "▁",
+ "And"
+ ],
+ [
+ "▁re",
+ "l"
+ ],
+ [
+ "▁r",
+ "el"
+ ],
+ [
+ "▁",
+ "rel"
+ ],
+ [
+ "O",
+ "R"
+ ],
+ [
+ "I",
+ "d"
+ ],
+ [
+ "▁e",
+ "ven"
+ ],
+ [
+ "▁ev",
+ "en"
+ ],
+ [
+ "▁",
+ "even"
+ ],
+ [
+ "E",
+ "N"
+ ],
+ [
+ "▁h",
+ "and"
+ ],
+ [
+ "▁ha",
+ "nd"
+ ],
+ [
+ "▁han",
+ "d"
+ ],
+ [
+ "▁",
+ "hand"
+ ],
+ [
+ "ai",
+ "t"
+ ],
+ [
+ "a",
+ "it"
+ ],
+ [
+ "▁sh",
+ "ould"
+ ],
+ [
+ "▁",
+ "should"
+ ],
+ [
+ "▁a",
+ "fter"
+ ],
+ [
+ "▁af",
+ "ter"
+ ],
+ [
+ "▁",
+ "after"
+ ],
+ [
+ "▁d",
+ "if"
+ ],
+ [
+ "▁di",
+ "f"
+ ],
+ [
+ "gh",
+ "t"
+ ],
+ [
+ "g",
+ "ht"
+ ],
+ [
+ "if",
+ "e"
+ ],
+ [
+ "i",
+ "fe"
+ ],
+ [
+ "at",
+ "or"
+ ],
+ [
+ "ato",
+ "r"
+ ],
+ [
+ "a",
+ "tor"
+ ],
+ [
+ "as",
+ "h"
+ ],
+ [
+ "a",
+ "sh"
+ ],
+ [
+ "ri",
+ "but"
+ ],
+ [
+ "rib",
+ "ut"
+ ],
+ [
+ "ribu",
+ "t"
+ ],
+ [
+ "um",
+ "ber"
+ ],
+ [
+ "umb",
+ "er"
+ ],
+ [
+ "u",
+ "mber"
+ ],
+ [
+ "▁s",
+ "ee"
+ ],
+ [
+ "▁se",
+ "e"
+ ],
+ [
+ "▁",
+ "see"
+ ],
+ [
+ "m",
+ "s"
+ ],
+ [
+ "▁c",
+ "all"
+ ],
+ [
+ "▁cal",
+ "l"
+ ],
+ [
+ "▁ca",
+ "ll"
+ ],
+ [
+ "▁",
+ "call"
+ ],
+ [
+ "y",
+ "n"
+ ],
+ [
+ "d",
+ "d"
+ ],
+ [
+ "▁e",
+ "s"
+ ],
+ [
+ "▁",
+ "es"
+ ],
+ [
+ "▁m",
+ "ake"
+ ],
+ [
+ "▁ma",
+ "ke"
+ ],
+ [
+ "▁",
+ "make"
+ ],
+ [
+ "ot",
+ "her"
+ ],
+ [
+ "oth",
+ "er"
+ ],
+ [
+ "othe",
+ "r"
+ ],
+ [
+ "o",
+ "ther"
+ ],
+ [
+ "▁",
+ "—"
+ ],
+ [
+ "\")",
+ ";"
+ ],
+ [
+ "\"",
+ ");"
+ ],
+ [
+ "st",
+ "r"
+ ],
+ [
+ "s",
+ "tr"
+ ],
+ [
+ "▁l",
+ "ong"
+ ],
+ [
+ "▁lo",
+ "ng"
+ ],
+ [
+ "▁lon",
+ "g"
+ ],
+ [
+ "▁",
+ "long"
+ ],
+ [
+ "le",
+ "ment"
+ ],
+ [
+ "lem",
+ "ent"
+ ],
+ [
+ "l",
+ "ement"
+ ],
+ [
+ "▁w",
+ "or"
+ ],
+ [
+ "▁wo",
+ "r"
+ ],
+ [
+ "▁",
+ "wor"
+ ],
+ [
+ "it",
+ "s"
+ ],
+ [
+ "i",
+ "ts"
+ ],
+ [
+ "▁I",
+ "f"
+ ],
+ [
+ "▁",
+ "If"
+ ],
+ [
+ "al",
+ "se"
+ ],
+ [
+ "als",
+ "e"
+ ],
+ [
+ "л",
+ "ь"
+ ],
+ [
+ "wa",
+ "rd"
+ ],
+ [
+ "war",
+ "d"
+ ],
+ [
+ "w",
+ "ard"
+ ],
+ [
+ "▁п",
+ "о"
+ ],
+ [
+ "▁",
+ "по"
+ ],
+ [
+ "va",
+ "l"
+ ],
+ [
+ "v",
+ "al"
+ ],
+ [
+ "on",
+ "s"
+ ],
+ [
+ "o",
+ "ns"
+ ],
+ [
+ "▁",
+ "Z"
+ ],
+ [
+ "▁n",
+ "ow"
+ ],
+ [
+ "▁no",
+ "w"
+ ],
+ [
+ "▁",
+ "now"
+ ],
+ [
+ "da",
+ "ta"
+ ],
+ [
+ "dat",
+ "a"
+ ],
+ [
+ "d",
+ "ata"
+ ],
+ [
+ "am",
+ "p"
+ ],
+ [
+ "a",
+ "mp"
+ ],
+ [
+ "en",
+ "se"
+ ],
+ [
+ "ens",
+ "e"
+ ],
+ [
+ "▁th",
+ "rough"
+ ],
+ [
+ "▁thr",
+ "ough"
+ ],
+ [
+ "▁thro",
+ "ugh"
+ ],
+ [
+ "▁",
+ "through"
+ ],
+ [
+ "▁d",
+ "own"
+ ],
+ [
+ "▁do",
+ "wn"
+ ],
+ [
+ "▁dow",
+ "n"
+ ],
+ [
+ "▁",
+ "down"
+ ],
+ [
+ "at",
+ "t"
+ ],
+ [
+ "a",
+ "tt"
+ ],
+ [
+ "▁st",
+ "atic"
+ ],
+ [
+ "▁stat",
+ "ic"
+ ],
+ [
+ "▁",
+ "static"
+ ],
+ [
+ "ic",
+ "s"
+ ],
+ [
+ "i",
+ "cs"
+ ],
+ [
+ "#",
+ "#"
+ ],
+ [
+ "po",
+ "s"
+ ],
+ [
+ "p",
+ "os"
+ ],
+ [
+ "▁v",
+ "oid"
+ ],
+ [
+ "▁vo",
+ "id"
+ ],
+ [
+ "▁",
+ "void"
+ ],
+ [
+ "a",
+ "w"
+ ],
+ [
+ "ou",
+ "n"
+ ],
+ [
+ "o",
+ "un"
+ ],
+ [
+ "▁w",
+ "ay"
+ ],
+ [
+ "▁wa",
+ "y"
+ ],
+ [
+ "▁",
+ "way"
+ ],
+ [
+ "ib",
+ "le"
+ ],
+ [
+ "i",
+ "ble"
+ ],
+ [
+ "ve",
+ "nt"
+ ],
+ [
+ "ven",
+ "t"
+ ],
+ [
+ "v",
+ "ent"
+ ],
+ [
+ "ow",
+ "er"
+ ],
+ [
+ "owe",
+ "r"
+ ],
+ [
+ "o",
+ "wer"
+ ],
+ [
+ "▁th",
+ "ink"
+ ],
+ [
+ "▁thin",
+ "k"
+ ],
+ [
+ "▁",
+ "think"
+ ],
+ [
+ "t",
+ "s"
+ ],
+ [
+ "*",
+ "/"
+ ],
+ [
+ "▁a",
+ "gain"
+ ],
+ [
+ "▁ag",
+ "ain"
+ ],
+ [
+ "▁",
+ "again"
+ ],
+ [
+ "at",
+ "ing"
+ ],
+ [
+ "ati",
+ "ng"
+ ],
+ [
+ "atin",
+ "g"
+ ],
+ [
+ "a",
+ "ting"
+ ],
+ [
+ "т",
+ "е"
+ ],
+ [
+ "ne",
+ "r"
+ ],
+ [
+ "n",
+ "er"
+ ],
+ [
+ "▁m",
+ "ost"
+ ],
+ [
+ "▁mo",
+ "st"
+ ],
+ [
+ "▁mos",
+ "t"
+ ],
+ [
+ "▁",
+ "most"
+ ],
+ [
+ "li",
+ "ne"
+ ],
+ [
+ "lin",
+ "e"
+ ],
+ [
+ "l",
+ "ine"
+ ],
+ [
+ "y",
+ "m"
+ ],
+ [
+ "▁s",
+ "ub"
+ ],
+ [
+ "▁su",
+ "b"
+ ],
+ [
+ "▁",
+ "sub"
+ ],
+ [
+ "er",
+ "son"
+ ],
+ [
+ "ers",
+ "on"
+ ],
+ [
+ "▁re",
+ "qu"
+ ],
+ [
+ "▁r",
+ "equ"
+ ],
+ [
+ "▁req",
+ "u"
+ ],
+ [
+ "▁",
+ "requ"
+ ],
+ [
+ "A",
+ "L"
+ ],
+ [
+ "A",
+ "R"
+ ],
+ [
+ "ab",
+ "el"
+ ],
+ [
+ "abe",
+ "l"
+ ],
+ [
+ "a",
+ "bel"
+ ],
+ [
+ "on",
+ "d"
+ ],
+ [
+ "o",
+ "nd"
+ ],
+ [
+ "))",
+ ";"
+ ],
+ [
+ ")",
+ ");"
+ ],
+ [
+ "▁S",
+ "e"
+ ],
+ [
+ "▁",
+ "Se"
+ ],
+ [
+ "▁B",
+ "ut"
+ ],
+ [
+ "▁Bu",
+ "t"
+ ],
+ [
+ "▁",
+ "But"
+ ],
+ [
+ "al",
+ "k"
+ ],
+ [
+ "▁A",
+ "n"
+ ],
+ [
+ "▁",
+ "An"
+ ],
+ [
+ "ne",
+ "w"
+ ],
+ [
+ "n",
+ "ew"
+ ],
+ [
+ "▁b",
+ "ecause"
+ ],
+ [
+ "▁bec",
+ "ause"
+ ],
+ [
+ "▁",
+ "because"
+ ],
+ [
+ "ge",
+ "r"
+ ],
+ [
+ "g",
+ "er"
+ ],
+ [
+ "ul",
+ "ar"
+ ],
+ [
+ "ula",
+ "r"
+ ],
+ [
+ "u",
+ "lar"
+ ],
+ [
+ "ro",
+ "up"
+ ],
+ [
+ "rou",
+ "p"
+ ],
+ [
+ "r",
+ "oup"
+ ],
+ [
+ "t",
+ "a"
+ ],
+ [
+ "..",
+ "."
+ ],
+ [
+ ".",
+ ".."
+ ],
+ [
+ "▁c",
+ "ons"
+ ],
+ [
+ "▁con",
+ "s"
+ ],
+ [
+ "▁co",
+ "ns"
+ ],
+ [
+ "▁",
+ "cons"
+ ],
+ [
+ "▁r",
+ "ight"
+ ],
+ [
+ "▁ri",
+ "ght"
+ ],
+ [
+ "▁rig",
+ "ht"
+ ],
+ [
+ "▁",
+ "right"
+ ],
+ [
+ "▁f",
+ "r"
+ ],
+ [
+ "▁",
+ "fr"
+ ],
+ [
+ "b",
+ "e"
+ ],
+ [
+ "il",
+ "y"
+ ],
+ [
+ "i",
+ "ly"
+ ],
+ [
+ "к",
+ "и"
+ ],
+ [
+ "▁p",
+ "h"
+ ],
+ [
+ "▁",
+ "ph"
+ ],
+ [
+ "ea",
+ "d"
+ ],
+ [
+ "e",
+ "ad"
+ ],
+ [
+ "?",
+ "\""
+ ],
+ [
+ "▁g",
+ "u"
+ ],
+ [
+ "▁",
+ "gu"
+ ],
+ [
+ "▁el",
+ "se"
+ ],
+ [
+ "▁els",
+ "e"
+ ],
+ [
+ "▁",
+ "else"
+ ],
+ [
+ "▁s",
+ "om"
+ ],
+ [
+ "▁so",
+ "m"
+ ],
+ [
+ "▁",
+ "som"
+ ],
+ [
+ "re",
+ "nt"
+ ],
+ [
+ "ren",
+ "t"
+ ],
+ [
+ "r",
+ "ent"
+ ],
+ [
+ "c",
+ "o"
+ ],
+ [
+ "em",
+ "ent"
+ ],
+ [
+ "eme",
+ "nt"
+ ],
+ [
+ "emen",
+ "t"
+ ],
+ [
+ "e",
+ "ment"
+ ],
+ [
+ "▁s",
+ "tr"
+ ],
+ [
+ "▁st",
+ "r"
+ ],
+ [
+ "▁",
+ "str"
+ ],
+ [
+ "au",
+ "lt"
+ ],
+ [
+ "aul",
+ "t"
+ ],
+ [
+ "a",
+ "ult"
+ ],
+ [
+ "▁",
+ "з"
+ ],
+ [
+ "л",
+ "о"
+ ],
+ [
+ "se",
+ "rt"
+ ],
+ [
+ "ser",
+ "t"
+ ],
+ [
+ "s",
+ "ert"
+ ],
+ [
+ "va",
+ "r"
+ ],
+ [
+ "v",
+ "ar"
+ ],
+ [
+ "ty",
+ "pe"
+ ],
+ [
+ "typ",
+ "e"
+ ],
+ [
+ "t",
+ "ype"
+ ],
+ [
+ "▁C",
+ "om"
+ ],
+ [
+ "▁Co",
+ "m"
+ ],
+ [
+ "▁",
+ "Com"
+ ],
+ [
+ "л",
+ "е"
+ ],
+ [
+ "in",
+ "s"
+ ],
+ [
+ "i",
+ "ns"
+ ],
+ [
+ "m",
+ "e"
+ ],
+ [
+ "wa",
+ "y"
+ ],
+ [
+ "w",
+ "ay"
+ ],
+ [
+ "id",
+ "ent"
+ ],
+ [
+ "ide",
+ "nt"
+ ],
+ [
+ "iden",
+ "t"
+ ],
+ [
+ "▁p",
+ "rov"
+ ],
+ [
+ "▁pro",
+ "v"
+ ],
+ [
+ "▁pr",
+ "ov"
+ ],
+ [
+ "▁",
+ "prov"
+ ],
+ [
+ "▁",
+ "м"
+ ],
+ [
+ "▁tr",
+ "ue"
+ ],
+ [
+ "▁",
+ "true"
+ ],
+ [
+ "▁P",
+ "ro"
+ ],
+ [
+ "▁Pr",
+ "o"
+ ],
+ [
+ "▁",
+ "Pro"
+ ],
+ [
+ "f",
+ "l"
+ ],
+ [
+ "▁s",
+ "l"
+ ],
+ [
+ "▁",
+ "sl"
+ ],
+ [
+ "▁A",
+ "s"
+ ],
+ [
+ "▁",
+ "As"
+ ],
+ [
+ "}",
+ "\\"
+ ],
+ [
+ "I",
+ "D"
+ ],
+ [
+ "ue",
+ "s"
+ ],
+ [
+ "u",
+ "es"
+ ],
+ [
+ "▁in",
+ "st"
+ ],
+ [
+ "▁ins",
+ "t"
+ ],
+ [
+ "▁",
+ "inst"
+ ],
+ [
+ "▁n",
+ "ame"
+ ],
+ [
+ "▁na",
+ "me"
+ ],
+ [
+ "▁nam",
+ "e"
+ ],
+ [
+ "▁",
+ "name"
+ ],
+ [
+ "o",
+ "x"
+ ],
+ [
+ "▁",
+ ")"
+ ],
+ [
+ "l",
+ "i"
+ ],
+ [
+ "am",
+ "es"
+ ],
+ [
+ "ame",
+ "s"
+ ],
+ [
+ "a",
+ "mes"
+ ],
+ [
+ "Re",
+ "s"
+ ],
+ [
+ "R",
+ "es"
+ ],
+ [
+ "▁s",
+ "ur"
+ ],
+ [
+ "▁su",
+ "r"
+ ],
+ [
+ "▁",
+ "sur"
+ ],
+ [
+ "par",
+ "am"
+ ],
+ [
+ "pa",
+ "ram"
+ ],
+ [
+ "para",
+ "m"
+ ],
+ [
+ "p",
+ "aram"
+ ],
+ [
+ "▁st",
+ "art"
+ ],
+ [
+ "▁star",
+ "t"
+ ],
+ [
+ "▁sta",
+ "rt"
+ ],
+ [
+ "▁",
+ "start"
+ ],
+ [
+ "a",
+ "j"
+ ],
+ [
+ "S",
+ "E"
+ ],
+ [
+ "as",
+ "k"
+ ],
+ [
+ "a",
+ "sk"
+ ],
+ [
+ "I",
+ "T"
+ ],
+ [
+ "St",
+ "ring"
+ ],
+ [
+ "Str",
+ "ing"
+ ],
+ [
+ "S",
+ "tring"
+ ],
+ [
+ "▁a",
+ "ss"
+ ],
+ [
+ "▁as",
+ "s"
+ ],
+ [
+ "▁",
+ "ass"
+ ],
+ [
+ "▁p",
+ "lay"
+ ],
+ [
+ "▁pl",
+ "ay"
+ ],
+ [
+ "▁",
+ "play"
+ ],
+ [
+ "ti",
+ "ng"
+ ],
+ [
+ "t",
+ "ing"
+ ],
+ [
+ "to",
+ "n"
+ ],
+ [
+ "t",
+ "on"
+ ],
+ [
+ "▁b",
+ "efore"
+ ],
+ [
+ "▁be",
+ "fore"
+ ],
+ [
+ "▁bef",
+ "ore"
+ ],
+ [
+ "▁",
+ "before"
+ ],
+ [
+ "▁p",
+ "ol"
+ ],
+ [
+ "▁po",
+ "l"
+ ],
+ [
+ "▁",
+ "pol"
+ ],
+ [
+ "ar",
+ "ch"
+ ],
+ [
+ "arc",
+ "h"
+ ],
+ [
+ "▁w",
+ "ell"
+ ],
+ [
+ "▁we",
+ "ll"
+ ],
+ [
+ "▁wel",
+ "l"
+ ],
+ [
+ "▁",
+ "well"
+ ],
+ [
+ "Co",
+ "m"
+ ],
+ [
+ "C",
+ "om"
+ ],
+ [
+ "an",
+ "y"
+ ],
+ [
+ "a",
+ "ny"
+ ],
+ [
+ "ol",
+ "og"
+ ],
+ [
+ "olo",
+ "g"
+ ],
+ [
+ "o",
+ "log"
+ ],
+ [
+ "▁e",
+ "rr"
+ ],
+ [
+ "▁er",
+ "r"
+ ],
+ [
+ "▁",
+ "err"
+ ],
+ [
+ "▁the",
+ "se"
+ ],
+ [
+ "▁th",
+ "ese"
+ ],
+ [
+ "ar",
+ "s"
+ ],
+ [
+ "a",
+ "rs"
+ ],
+ [
+ "e",
+ "b"
+ ],
+ [
+ "▁b",
+ "r"
+ ],
+ [
+ "▁",
+ "br"
+ ],
+ [
+ "▁in",
+ "cl"
+ ],
+ [
+ "▁inc",
+ "l"
+ ],
+ [
+ "▁",
+ "incl"
+ ],
+ [
+ "▁h",
+ "el"
+ ],
+ [
+ "▁he",
+ "l"
+ ],
+ [
+ "▁",
+ "hel"
+ ],
+ [
+ "er",
+ "n"
+ ],
+ [
+ "e",
+ "rn"
+ ],
+ [
+ "od",
+ "y"
+ ],
+ [
+ "o",
+ "dy"
+ ],
+ [
+ "в",
+ "о"
+ ],
+ [
+ "▁in",
+ "d"
+ ],
+ [
+ "▁i",
+ "nd"
+ ],
+ [
+ "▁",
+ "ind"
+ ],
+ [
+ "--",
+ "--------------"
+ ],
+ [
+ "----",
+ "------------"
+ ],
+ [
+ "--------",
+ "--------"
+ ],
+ [
+ "---",
+ "-------------"
+ ],
+ [
+ "------------",
+ "----"
+ ],
+ [
+ "-----",
+ "-----------"
+ ],
+ [
+ "----------",
+ "------"
+ ],
+ [
+ "------",
+ "----------"
+ ],
+ [
+ "-------------",
+ "---"
+ ],
+ [
+ "--------------",
+ "--"
+ ],
+ [
+ "---------",
+ "-------"
+ ],
+ [
+ "-------",
+ "---------"
+ ],
+ [
+ "-----------",
+ "-----"
+ ],
+ [
+ "▁d",
+ "ata"
+ ],
+ [
+ "▁da",
+ "ta"
+ ],
+ [
+ "▁dat",
+ "a"
+ ],
+ [
+ "▁",
+ "data"
+ ],
+ [
+ "▁g",
+ "ood"
+ ],
+ [
+ "▁go",
+ "od"
+ ],
+ [
+ "▁",
+ "good"
+ ],
+ [
+ "L",
+ "E"
+ ],
+ [
+ "]",
+ ","
+ ],
+ [
+ "▁a",
+ "v"
+ ],
+ [
+ "▁",
+ "av"
+ ],
+ [
+ "▁a",
+ "c"
+ ],
+ [
+ "▁",
+ "ac"
+ ],
+ [
+ "id",
+ "er"
+ ],
+ [
+ "ide",
+ "r"
+ ],
+ [
+ "i",
+ "der"
+ ],
+ [
+ "н",
+ "е"
+ ],
+ [
+ "▁",
+ "Q"
+ ],
+ [
+ "▁m",
+ "in"
+ ],
+ [
+ "▁mi",
+ "n"
+ ],
+ [
+ "▁",
+ "min"
+ ],
+ [
+ "▁m",
+ "uch"
+ ],
+ [
+ "▁mu",
+ "ch"
+ ],
+ [
+ "c",
+ "i"
+ ],
+ [
+ "el",
+ "s"
+ ],
+ [
+ "e",
+ "ls"
+ ],
+ [
+ "▁c",
+ "ur"
+ ],
+ [
+ "▁cu",
+ "r"
+ ],
+ [
+ "▁",
+ "cur"
+ ],
+ [
+ "▁v",
+ "alue"
+ ],
+ [
+ "▁val",
+ "ue"
+ ],
+ [
+ "▁",
+ "value"
+ ],
+ [
+ "er",
+ "y"
+ ],
+ [
+ "e",
+ "ry"
+ ],
+ [
+ "u",
+ "f"
+ ],
+ [
+ "▁l",
+ "oc"
+ ],
+ [
+ "▁lo",
+ "c"
+ ],
+ [
+ "▁",
+ "loc"
+ ],
+ [
+ "re",
+ "ak"
+ ],
+ [
+ "rea",
+ "k"
+ ],
+ [
+ "at",
+ "ive"
+ ],
+ [
+ "ati",
+ "ve"
+ ],
+ [
+ "ativ",
+ "e"
+ ],
+ [
+ "im",
+ "es"
+ ],
+ [
+ "ime",
+ "s"
+ ],
+ [
+ "i",
+ "mes"
+ ],
+ [
+ "C",
+ "l"
+ ],
+ [
+ "▁",
+ ","
+ ],
+ [
+ "▁s",
+ "er"
+ ],
+ [
+ "▁se",
+ "r"
+ ],
+ [
+ "▁",
+ "ser"
+ ],
+ [
+ "▁d",
+ "ie"
+ ],
+ [
+ "▁di",
+ "e"
+ ],
+ [
+ "▁",
+ "die"
+ ],
+ [
+ "▁tr",
+ "ans"
+ ],
+ [
+ "▁tra",
+ "ns"
+ ],
+ [
+ "▁",
+ "trans"
+ ],
+ [
+ "▁res",
+ "ult"
+ ],
+ [
+ "▁",
+ "result"
+ ],
+ [
+ "ex",
+ "t"
+ ],
+ [
+ "e",
+ "xt"
+ ],
+ [
+ "▁a",
+ "ut"
+ ],
+ [
+ "▁au",
+ "t"
+ ],
+ [
+ "▁",
+ "aut"
+ ],
+ [
+ "la",
+ "nd"
+ ],
+ [
+ "lan",
+ "d"
+ ],
+ [
+ "l",
+ "and"
+ ],
+ [
+ "▁&",
+ "&"
+ ],
+ [
+ "▁",
+ "&&"
+ ],
+ [
+ "C",
+ "h"
+ ],
+ [
+ "te",
+ "n"
+ ],
+ [
+ "t",
+ "en"
+ ],
+ [
+ "}",
+ "$"
+ ],
+ [
+ "▁t",
+ "ype"
+ ],
+ [
+ "▁typ",
+ "e"
+ ],
+ [
+ "▁ty",
+ "pe"
+ ],
+ [
+ "▁",
+ "type"
+ ],
+ [
+ "con",
+ "d"
+ ],
+ [
+ "co",
+ "nd"
+ ],
+ [
+ "c",
+ "ond"
+ ],
+ [
+ "ic",
+ "es"
+ ],
+ [
+ "ice",
+ "s"
+ ],
+ [
+ "i",
+ "ces"
+ ],
+ [
+ "▁v",
+ "ery"
+ ],
+ [
+ "▁ver",
+ "y"
+ ],
+ [
+ "▁ve",
+ "ry"
+ ],
+ [
+ "▁",
+ "very"
+ ],
+ [
+ "▁o",
+ "wn"
+ ],
+ [
+ "▁",
+ "own"
+ ],
+ [
+ "▁f",
+ "il"
+ ],
+ [
+ "▁fi",
+ "l"
+ ],
+ [
+ "▁",
+ "fil"
+ ],
+ [
+ "it",
+ "ies"
+ ],
+ [
+ "iti",
+ "es"
+ ],
+ [
+ "i",
+ "ties"
+ ],
+ [
+ "▁p",
+ "rodu"
+ ],
+ [
+ "▁pro",
+ "du"
+ ],
+ [
+ "▁prod",
+ "u"
+ ],
+ [
+ "▁",
+ "produ"
+ ],
+ [
+ "▁re",
+ "ad"
+ ],
+ [
+ "▁r",
+ "ead"
+ ],
+ [
+ "▁",
+ "read"
+ ],
+ [
+ "▁f",
+ "orm"
+ ],
+ [
+ "▁for",
+ "m"
+ ],
+ [
+ "▁fo",
+ "rm"
+ ],
+ [
+ "▁",
+ "form"
+ ],
+ [
+ "▁c",
+ "ase"
+ ],
+ [
+ "▁cas",
+ "e"
+ ],
+ [
+ "▁ca",
+ "se"
+ ],
+ [
+ "▁",
+ "case"
+ ],
+ [
+ "at",
+ "her"
+ ],
+ [
+ "ath",
+ "er"
+ ],
+ [
+ "a",
+ "ther"
+ ],
+ [
+ "т",
+ "и"
+ ],
+ [
+ "д",
+ "а"
+ ],
+ [
+ "е",
+ "р"
+ ],
+ [
+ "T",
+ "h"
+ ],
+ [
+ "au",
+ "t"
+ ],
+ [
+ "a",
+ "ut"
+ ],
+ [
+ "▁s",
+ "pec"
+ ],
+ [
+ "▁sp",
+ "ec"
+ ],
+ [
+ "▁spe",
+ "c"
+ ],
+ [
+ "▁",
+ "spec"
+ ],
+ [
+ "i",
+ "j"
+ ],
+ [
+ "b",
+ "l"
+ ],
+ [
+ "il",
+ "ity"
+ ],
+ [
+ "ili",
+ "ty"
+ ],
+ [
+ "▁",
+ "é"
+ ],
+ [
+ "▁e",
+ "r"
+ ],
+ [
+ "▁",
+ "er"
+ ],
+ [
+ "▁d",
+ "oes"
+ ],
+ [
+ "▁do",
+ "es"
+ ],
+ [
+ "▁",
+ "does"
+ ],
+ [
+ "▁h",
+ "ere"
+ ],
+ [
+ "▁he",
+ "re"
+ ],
+ [
+ "▁her",
+ "e"
+ ],
+ [
+ "▁",
+ "here"
+ ],
+ [
+ "th",
+ "e"
+ ],
+ [
+ "t",
+ "he"
+ ],
+ [
+ "ur",
+ "es"
+ ],
+ [
+ "ure",
+ "s"
+ ],
+ [
+ "u",
+ "res"
+ ],
+ [
+ "▁",
+ "%"
+ ],
+ [
+ "mi",
+ "n"
+ ],
+ [
+ "m",
+ "in"
+ ],
+ [
+ "▁n",
+ "ull"
+ ],
+ [
+ "▁nu",
+ "ll"
+ ],
+ [
+ "▁",
+ "null"
+ ],
+ [
+ "ra",
+ "p"
+ ],
+ [
+ "r",
+ "ap"
+ ],
+ [
+ "\"",
+ ")"
+ ],
+ [
+ "r",
+ "r"
+ ],
+ [
+ "Li",
+ "st"
+ ],
+ [
+ "L",
+ "ist"
+ ],
+ [
+ "ri",
+ "ght"
+ ],
+ [
+ "rig",
+ "ht"
+ ],
+ [
+ "r",
+ "ight"
+ ],
+ [
+ "▁U",
+ "ser"
+ ],
+ [
+ "▁Us",
+ "er"
+ ],
+ [
+ "▁Use",
+ "r"
+ ],
+ [
+ "▁",
+ "User"
+ ],
+ [
+ "U",
+ "L"
+ ],
+ [
+ "at",
+ "ional"
+ ],
+ [
+ "ation",
+ "al"
+ ],
+ [
+ "ati",
+ "onal"
+ ],
+ [
+ "atio",
+ "nal"
+ ],
+ [
+ "▁b",
+ "eing"
+ ],
+ [
+ "▁be",
+ "ing"
+ ],
+ [
+ "▁bei",
+ "ng"
+ ],
+ [
+ "▁",
+ "being"
+ ],
+ [
+ "A",
+ "N"
+ ],
+ [
+ "s",
+ "k"
+ ],
+ [
+ "▁c",
+ "ar"
+ ],
+ [
+ "▁ca",
+ "r"
+ ],
+ [
+ "▁",
+ "car"
+ ],
+ [
+ "ol",
+ "e"
+ ],
+ [
+ "o",
+ "le"
+ ],
+ [
+ "▁d",
+ "ist"
+ ],
+ [
+ "▁dis",
+ "t"
+ ],
+ [
+ "▁di",
+ "st"
+ ],
+ [
+ "▁",
+ "dist"
+ ],
+ [
+ "pl",
+ "ic"
+ ],
+ [
+ "p",
+ "lic"
+ ],
+ [
+ "ol",
+ "low"
+ ],
+ [
+ "oll",
+ "ow"
+ ],
+ [
+ "▁p",
+ "res"
+ ],
+ [
+ "▁pre",
+ "s"
+ ],
+ [
+ "▁pr",
+ "es"
+ ],
+ [
+ "▁",
+ "pres"
+ ],
+ [
+ "▁s",
+ "uch"
+ ],
+ [
+ "▁su",
+ "ch"
+ ],
+ [
+ "▁suc",
+ "h"
+ ],
+ [
+ "▁",
+ "such"
+ ],
+ [
+ "re",
+ "am"
+ ],
+ [
+ "rea",
+ "m"
+ ],
+ [
+ "in",
+ "ce"
+ ],
+ [
+ "inc",
+ "e"
+ ],
+ [
+ "ga",
+ "n"
+ ],
+ [
+ "g",
+ "an"
+ ],
+ [
+ "▁F",
+ "or"
+ ],
+ [
+ "▁Fo",
+ "r"
+ ],
+ [
+ "▁",
+ "For"
+ ],
+ [
+ "\"",
+ ":"
+ ],
+ [
+ "so",
+ "n"
+ ],
+ [
+ "s",
+ "on"
+ ],
+ [
+ "riv",
+ "ate"
+ ],
+ [
+ "▁y",
+ "ears"
+ ],
+ [
+ "▁year",
+ "s"
+ ],
+ [
+ "▁ye",
+ "ars"
+ ],
+ [
+ "▁s",
+ "erv"
+ ],
+ [
+ "▁se",
+ "rv"
+ ],
+ [
+ "▁ser",
+ "v"
+ ],
+ [
+ "▁",
+ "serv"
+ ],
+ [
+ "▁m",
+ "ade"
+ ],
+ [
+ "▁ma",
+ "de"
+ ],
+ [
+ "▁mad",
+ "e"
+ ],
+ [
+ "▁",
+ "made"
+ ],
+ [
+ "de",
+ "f"
+ ],
+ [
+ "d",
+ "ef"
+ ],
+ [
+ ";",
+ "\r"
+ ],
+ [
+ "▁g",
+ "l"
+ ],
+ [
+ "▁",
+ "gl"
+ ],
+ [
+ "▁b",
+ "el"
+ ],
+ [
+ "▁be",
+ "l"
+ ],
+ [
+ "▁",
+ "bel"
+ ],
+ [
+ "▁l",
+ "ist"
+ ],
+ [
+ "▁li",
+ "st"
+ ],
+ [
+ "▁",
+ "list"
+ ],
+ [
+ "▁c",
+ "or"
+ ],
+ [
+ "▁co",
+ "r"
+ ],
+ [
+ "▁",
+ "cor"
+ ],
+ [
+ "▁d",
+ "et"
+ ],
+ [
+ "▁de",
+ "t"
+ ],
+ [
+ "▁",
+ "det"
+ ],
+ [
+ "ce",
+ "ption"
+ ],
+ [
+ "cept",
+ "ion"
+ ],
+ [
+ "eg",
+ "in"
+ ],
+ [
+ "e",
+ "gin"
+ ],
+ [
+ "▁",
+ "б"
+ ],
+ [
+ "▁c",
+ "har"
+ ],
+ [
+ "▁ch",
+ "ar"
+ ],
+ [
+ "▁cha",
+ "r"
+ ],
+ [
+ "▁",
+ "char"
+ ],
+ [
+ "tr",
+ "ans"
+ ],
+ [
+ "tra",
+ "ns"
+ ],
+ [
+ "▁f",
+ "am"
+ ],
+ [
+ "▁fa",
+ "m"
+ ],
+ [
+ "▁!",
+ "="
+ ],
+ [
+ "▁",
+ "!="
+ ],
+ [
+ "ou",
+ "se"
+ ],
+ [
+ "ous",
+ "e"
+ ],
+ [
+ "o",
+ "use"
+ ],
+ [
+ "▁d",
+ "ec"
+ ],
+ [
+ "▁de",
+ "c"
+ ],
+ [
+ "▁",
+ "dec"
+ ],
+ [
+ "ic",
+ "a"
+ ],
+ [
+ "i",
+ "ca"
+ ],
+ [
+ "▁m",
+ "any"
+ ],
+ [
+ "▁man",
+ "y"
+ ],
+ [
+ "▁ma",
+ "ny"
+ ],
+ [
+ "▁",
+ "many"
+ ],
+ [
+ "ak",
+ "ing"
+ ],
+ [
+ "aki",
+ "ng"
+ ],
+ [
+ "a",
+ "king"
+ ],
+ [
+ "▁",
+ "à"
+ ],
+ [
+ "▁s",
+ "im"
+ ],
+ [
+ "▁si",
+ "m"
+ ],
+ [
+ "▁",
+ "sim"
+ ],
+ [
+ "ag",
+ "es"
+ ],
+ [
+ "age",
+ "s"
+ ],
+ [
+ "a",
+ "ges"
+ ],
+ [
+ "uf",
+ "f"
+ ],
+ [
+ "u",
+ "ff"
+ ],
+ [
+ "as",
+ "ed"
+ ],
+ [
+ "ase",
+ "d"
+ ],
+ [
+ "a",
+ "sed"
+ ],
+ [
+ "ma",
+ "n"
+ ],
+ [
+ "m",
+ "an"
+ ],
+ [
+ "▁S",
+ "h"
+ ],
+ [
+ "▁",
+ "Sh"
+ ],
+ [
+ "ie",
+ "t"
+ ],
+ [
+ "i",
+ "et"
+ ],
+ [
+ "ir",
+ "ect"
+ ],
+ [
+ "ire",
+ "ct"
+ ],
+ [
+ "i",
+ "rect"
+ ],
+ [
+ "▁R",
+ "e"
+ ],
+ [
+ "▁",
+ "Re"
+ ],
+ [
+ "▁d",
+ "iffer"
+ ],
+ [
+ "▁dif",
+ "fer"
+ ],
+ [
+ "▁diff",
+ "er"
+ ],
+ [
+ "▁f",
+ "ind"
+ ],
+ [
+ "▁fin",
+ "d"
+ ],
+ [
+ "▁fi",
+ "nd"
+ ],
+ [
+ "▁",
+ "find"
+ ],
+ [
+ "eth",
+ "od"
+ ],
+ [
+ "▁",
+ "\r"
+ ],
+ [
+ "in",
+ "es"
+ ],
+ [
+ "ine",
+ "s"
+ ],
+ [
+ "i",
+ "nes"
+ ],
+ [
+ "▁in",
+ "v"
+ ],
+ [
+ "▁i",
+ "nv"
+ ],
+ [
+ "▁",
+ "inv"
+ ],
+ [
+ "▁p",
+ "oint"
+ ],
+ [
+ "▁po",
+ "int"
+ ],
+ [
+ "▁poi",
+ "nt"
+ ],
+ [
+ "▁",
+ "point"
+ ],
+ [
+ "▁The",
+ "y"
+ ],
+ [
+ "▁Th",
+ "ey"
+ ],
+ [
+ "▁",
+ "They"
+ ],
+ [
+ "▁u",
+ "sed"
+ ],
+ [
+ "▁us",
+ "ed"
+ ],
+ [
+ "▁use",
+ "d"
+ ],
+ [
+ "▁",
+ "used"
+ ],
+ [
+ "ct",
+ "ions"
+ ],
+ [
+ "ction",
+ "s"
+ ],
+ [
+ "▁st",
+ "ill"
+ ],
+ [
+ "i",
+ "ó"
+ ],
+ [
+ "in",
+ "ed"
+ ],
+ [
+ "ine",
+ "d"
+ ],
+ [
+ "i",
+ "ned"
+ ],
+ [
+ "▁wh",
+ "ile"
+ ],
+ [
+ "▁",
+ "while"
+ ],
+ [
+ "I",
+ "t"
+ ],
+ [
+ "em",
+ "ber"
+ ],
+ [
+ "emb",
+ "er"
+ ],
+ [
+ "e",
+ "mber"
+ ],
+ [
+ "▁s",
+ "ay"
+ ],
+ [
+ "▁sa",
+ "y"
+ ],
+ [
+ "▁",
+ "say"
+ ],
+ [
+ "▁he",
+ "lp"
+ ],
+ [
+ "▁hel",
+ "p"
+ ],
+ [
+ "▁",
+ "help"
+ ],
+ [
+ "▁c",
+ "re"
+ ],
+ [
+ "▁cr",
+ "e"
+ ],
+ [
+ "▁",
+ "cre"
+ ],
+ [
+ "▁",
+ "x"
+ ],
+ [
+ "▁T",
+ "r"
+ ],
+ [
+ "▁",
+ "Tr"
+ ],
+ [
+ "um",
+ "ent"
+ ],
+ [
+ "ume",
+ "nt"
+ ],
+ [
+ "umen",
+ "t"
+ ],
+ [
+ "u",
+ "ment"
+ ],
+ [
+ "▁s",
+ "k"
+ ],
+ [
+ "▁",
+ "sk"
+ ],
+ [
+ "ou",
+ "ght"
+ ],
+ [
+ "ough",
+ "t"
+ ],
+ [
+ "ual",
+ "ly"
+ ],
+ [
+ "u",
+ "ally"
+ ],
+ [
+ "m",
+ "essage"
+ ],
+ [
+ "▁C",
+ "on"
+ ],
+ [
+ "▁Co",
+ "n"
+ ],
+ [
+ "▁",
+ "Con"
+ ],
+ [
+ "▁m",
+ "on"
+ ],
+ [
+ "▁mo",
+ "n"
+ ],
+ [
+ "▁",
+ "mon"
+ ],
+ [
+ "ar",
+ "ed"
+ ],
+ [
+ "are",
+ "d"
+ ],
+ [
+ "a",
+ "red"
+ ],
+ [
+ "wor",
+ "k"
+ ],
+ [
+ "w",
+ "ork"
+ ],
+ [
+ ")",
+ ":"
+ ],
+ [
+ "is",
+ "ter"
+ ],
+ [
+ "ist",
+ "er"
+ ],
+ [
+ "iste",
+ "r"
+ ],
+ [
+ "i",
+ "ster"
+ ],
+ [
+ "ar",
+ "n"
+ ],
+ [
+ "a",
+ "rn"
+ ],
+ [
+ "iz",
+ "ed"
+ ],
+ [
+ "ize",
+ "d"
+ ],
+ [
+ "i",
+ "zed"
+ ],
+ [
+ "Dat",
+ "a"
+ ],
+ [
+ "Da",
+ "ta"
+ ],
+ [
+ "D",
+ "ata"
+ ],
+ [
+ "or",
+ "n"
+ ],
+ [
+ "o",
+ "rn"
+ ],
+ [
+ "▁h",
+ "ead"
+ ],
+ [
+ "▁he",
+ "ad"
+ ],
+ [
+ "▁",
+ "head"
+ ],
+ [
+ "D",
+ "E"
+ ],
+ [
+ "▁L",
+ "e"
+ ],
+ [
+ "▁",
+ "Le"
+ ],
+ [
+ "▁p",
+ "erson"
+ ],
+ [
+ "▁per",
+ "son"
+ ],
+ [
+ "▁pers",
+ "on"
+ ],
+ [
+ "▁",
+ "person"
+ ],
+ [
+ "ment",
+ "s"
+ ],
+ [
+ "men",
+ "ts"
+ ],
+ [
+ "m",
+ "ents"
+ ],
+ [
+ "eng",
+ "th"
+ ],
+ [
+ "e",
+ "ngth"
+ ],
+ [
+ "▁f",
+ "alse"
+ ],
+ [
+ "▁fal",
+ "se"
+ ],
+ [
+ "▁fals",
+ "e"
+ ],
+ [
+ "▁",
+ "false"
+ ],
+ [
+ "▁m",
+ "ed"
+ ],
+ [
+ "▁me",
+ "d"
+ ],
+ [
+ "▁",
+ "med"
+ ],
+ [
+ "▁D",
+ "e"
+ ],
+ [
+ "▁",
+ "De"
+ ],
+ [
+ "ac",
+ "he"
+ ],
+ [
+ "ach",
+ "e"
+ ],
+ [
+ "a",
+ "che"
+ ],
+ [
+ "it",
+ "ed"
+ ],
+ [
+ "ite",
+ "d"
+ ],
+ [
+ "i",
+ "ted"
+ ],
+ [
+ "▁l",
+ "et"
+ ],
+ [
+ "▁le",
+ "t"
+ ],
+ [
+ "▁",
+ "let"
+ ],
+ [
+ "▁s",
+ "how"
+ ],
+ [
+ "▁sh",
+ "ow"
+ ],
+ [
+ "▁",
+ "show"
+ ],
+ [
+ "▁s",
+ "ame"
+ ],
+ [
+ "▁sa",
+ "me"
+ ],
+ [
+ "▁sam",
+ "e"
+ ],
+ [
+ "▁",
+ "same"
+ ],
+ [
+ "us",
+ "s"
+ ],
+ [
+ "u",
+ "ss"
+ ],
+ [
+ "▁g",
+ "ener"
+ ],
+ [
+ "▁gen",
+ "er"
+ ],
+ [
+ "▁ge",
+ "ner"
+ ],
+ [
+ "▁gene",
+ "r"
+ ],
+ [
+ "▁",
+ "gener"
+ ],
+ [
+ "▁",
+ "у"
+ ],
+ [
+ "cu",
+ "r"
+ ],
+ [
+ "c",
+ "ur"
+ ],
+ [
+ "▁re",
+ "al"
+ ],
+ [
+ "▁",
+ "real"
+ ],
+ [
+ "ce",
+ "d"
+ ],
+ [
+ "c",
+ "ed"
+ ],
+ [
+ "\"",
+ ">"
+ ],
+ [
+ "st",
+ "ruct"
+ ],
+ [
+ "str",
+ "uct"
+ ],
+ [
+ "stru",
+ "ct"
+ ],
+ [
+ "be",
+ "gin"
+ ],
+ [
+ "b",
+ "egin"
+ ],
+ [
+ "ce",
+ "pt"
+ ],
+ [
+ "cep",
+ "t"
+ ],
+ [
+ "▁b",
+ "o"
+ ],
+ [
+ "▁",
+ "bo"
+ ],
+ [
+ "ir",
+ "ed"
+ ],
+ [
+ "ire",
+ "d"
+ ],
+ [
+ "i",
+ "red"
+ ],
+ [
+ "▁F",
+ "r"
+ ],
+ [
+ "▁",
+ "Fr"
+ ],
+ [
+ "▁st",
+ "ud"
+ ],
+ [
+ "▁",
+ "stud"
+ ],
+ [
+ "de",
+ "v"
+ ],
+ [
+ "d",
+ "ev"
+ ],
+ [
+ "A",
+ "r"
+ ],
+ [
+ "(",
+ "\\"
+ ],
+ [
+ "▁C",
+ "l"
+ ],
+ [
+ "▁",
+ "Cl"
+ ],
+ [
+ "we",
+ "en"
+ ],
+ [
+ "w",
+ "een"
+ ],
+ [
+ "▁t",
+ "oo"
+ ],
+ [
+ "▁to",
+ "o"
+ ],
+ [
+ "▁",
+ "too"
+ ],
+ [
+ "▁t",
+ "est"
+ ],
+ [
+ "▁te",
+ "st"
+ ],
+ [
+ "▁",
+ "test"
+ ],
+ [
+ "▁d",
+ "ay"
+ ],
+ [
+ "▁da",
+ "y"
+ ],
+ [
+ "▁",
+ "day"
+ ],
+ [
+ "o",
+ "h"
+ ],
+ [
+ "▁f",
+ "ollow"
+ ],
+ [
+ "▁fol",
+ "low"
+ ],
+ [
+ "▁",
+ "follow"
+ ],
+ [
+ "at",
+ "ure"
+ ],
+ [
+ "atur",
+ "e"
+ ],
+ [
+ "atu",
+ "re"
+ ],
+ [
+ "z",
+ "e"
+ ],
+ [
+ "ie",
+ "n"
+ ],
+ [
+ "i",
+ "en"
+ ],
+ [
+ "re",
+ "g"
+ ],
+ [
+ "r",
+ "eg"
+ ],
+ [
+ "ce",
+ "s"
+ ],
+ [
+ "c",
+ "es"
+ ],
+ [
+ "ur",
+ "ing"
+ ],
+ [
+ "uri",
+ "ng"
+ ],
+ [
+ "u",
+ "ring"
+ ],
+ [
+ "am",
+ "b"
+ ],
+ [
+ "a",
+ "mb"
+ ],
+ [
+ "in",
+ "a"
+ ],
+ [
+ "i",
+ "na"
+ ],
+ [
+ "cr",
+ "i"
+ ],
+ [
+ "c",
+ "ri"
+ ],
+ [
+ "▁e",
+ "d"
+ ],
+ [
+ "▁",
+ "ed"
+ ],
+ [
+ "S",
+ "S"
+ ],
+ [
+ "uc",
+ "k"
+ ],
+ [
+ "u",
+ "ck"
+ ],
+ [
+ "▁/",
+ "*"
+ ],
+ [
+ "▁",
+ "/*"
+ ],
+ [
+ "C",
+ "T"
+ ],
+ [
+ "▁T",
+ "here"
+ ],
+ [
+ "▁The",
+ "re"
+ ],
+ [
+ "▁Th",
+ "ere"
+ ],
+ [
+ "▁Ther",
+ "e"
+ ],
+ [
+ "▁",
+ "There"
+ ],
+ [
+ "▁t",
+ "ake"
+ ],
+ [
+ "▁tak",
+ "e"
+ ],
+ [
+ "▁ta",
+ "ke"
+ ],
+ [
+ "▁",
+ "take"
+ ],
+ [
+ "pa",
+ "r"
+ ],
+ [
+ "p",
+ "ar"
+ ],
+ [
+ "ul",
+ "e"
+ ],
+ [
+ "u",
+ "le"
+ ],
+ [
+ "ca",
+ "l"
+ ],
+ [
+ "c",
+ "al"
+ ],
+ [
+ "fo",
+ "r"
+ ],
+ [
+ "f",
+ "or"
+ ],
+ [
+ "**",
+ "**************"
+ ],
+ [
+ "****",
+ "************"
+ ],
+ [
+ "********",
+ "********"
+ ],
+ [
+ "************",
+ "****"
+ ],
+ [
+ "**************",
+ "**"
+ ],
+ [
+ "s",
+ "ource"
+ ],
+ [
+ "▁th",
+ "ose"
+ ],
+ [
+ "co",
+ "l"
+ ],
+ [
+ "c",
+ "ol"
+ ],
+ [
+ "▁e",
+ "ff"
+ ],
+ [
+ "▁",
+ "eff"
+ ],
+ [
+ "mo",
+ "d"
+ ],
+ [
+ "m",
+ "od"
+ ],
+ [
+ "con",
+ "t"
+ ],
+ [
+ "co",
+ "nt"
+ ],
+ [
+ "c",
+ "ont"
+ ],
+ [
+ "}",
+ "{"
+ ],
+ [
+ "▁a",
+ "round"
+ ],
+ [
+ "▁ar",
+ "ound"
+ ],
+ [
+ "▁",
+ "around"
+ ],
+ [
+ "pr",
+ "ess"
+ ],
+ [
+ "pre",
+ "ss"
+ ],
+ [
+ "pres",
+ "s"
+ ],
+ [
+ "p",
+ "ress"
+ ],
+ [
+ "b",
+ "y"
+ ],
+ [
+ "▁go",
+ "ing"
+ ],
+ [
+ "▁",
+ "going"
+ ],
+ [
+ "pon",
+ "se"
+ ],
+ [
+ "pons",
+ "e"
+ ],
+ [
+ "▁",
+ "С"
+ ],
+ [
+ "▁l",
+ "ine"
+ ],
+ [
+ "▁li",
+ "ne"
+ ],
+ [
+ "▁lin",
+ "e"
+ ],
+ [
+ "▁",
+ "line"
+ ],
+ [
+ "da",
+ "te"
+ ],
+ [
+ "dat",
+ "e"
+ ],
+ [
+ "d",
+ "ate"
+ ],
+ [
+ "co",
+ "de"
+ ],
+ [
+ "cod",
+ "e"
+ ],
+ [
+ "c",
+ "ode"
+ ],
+ [
+ "[",
+ "'"
+ ],
+ [
+ "▁l",
+ "ife"
+ ],
+ [
+ "▁li",
+ "fe"
+ ],
+ [
+ "▁lif",
+ "e"
+ ],
+ [
+ "▁",
+ "life"
+ ],
+ [
+ "as",
+ "on"
+ ],
+ [
+ "a",
+ "son"
+ ],
+ [
+ "▁u",
+ "sing"
+ ],
+ [
+ "▁us",
+ "ing"
+ ],
+ [
+ "▁",
+ "using"
+ ],
+ [
+ "▁v",
+ "al"
+ ],
+ [
+ "▁va",
+ "l"
+ ],
+ [
+ "▁",
+ "val"
+ ],
+ [
+ "▁d",
+ "u"
+ ],
+ [
+ "▁",
+ "du"
+ ],
+ [
+ "y",
+ "p"
+ ],
+ [
+ "▁O",
+ "n"
+ ],
+ [
+ "▁",
+ "On"
+ ],
+ [
+ "▁f",
+ "ound"
+ ],
+ [
+ "▁fo",
+ "und"
+ ],
+ [
+ "▁fou",
+ "nd"
+ ],
+ [
+ "▁",
+ "found"
+ ],
+ [
+ "ol",
+ "ut"
+ ],
+ [
+ "olu",
+ "t"
+ ],
+ [
+ "'",
+ "]"
+ ],
+ [
+ "ar",
+ "ent"
+ ],
+ [
+ "are",
+ "nt"
+ ],
+ [
+ "aren",
+ "t"
+ ],
+ [
+ "a",
+ "rent"
+ ],
+ [
+ "▁s",
+ "tring"
+ ],
+ [
+ "▁st",
+ "ring"
+ ],
+ [
+ "▁str",
+ "ing"
+ ],
+ [
+ "▁stri",
+ "ng"
+ ],
+ [
+ "▁",
+ "string"
+ ],
+ [
+ "▁m",
+ "et"
+ ],
+ [
+ "▁me",
+ "t"
+ ],
+ [
+ "▁",
+ "met"
+ ],
+ [
+ "▁w",
+ "r"
+ ],
+ [
+ "▁",
+ "wr"
+ ],
+ [
+ "us",
+ "h"
+ ],
+ [
+ "u",
+ "sh"
+ ],
+ [
+ "st",
+ "ring"
+ ],
+ [
+ "str",
+ "ing"
+ ],
+ [
+ "stri",
+ "ng"
+ ],
+ [
+ "s",
+ "tring"
+ ],
+ [
+ "si",
+ "ze"
+ ],
+ [
+ "s",
+ "ize"
+ ],
+ [
+ "▁v",
+ "er"
+ ],
+ [
+ "▁ve",
+ "r"
+ ],
+ [
+ "▁",
+ "ver"
+ ],
+ [
+ "▁e",
+ "ach"
+ ],
+ [
+ "▁",
+ "each"
+ ],
+ [
+ "val",
+ "ue"
+ ],
+ [
+ "v",
+ "alue"
+ ],
+ [
+ "▁l",
+ "ast"
+ ],
+ [
+ "▁la",
+ "st"
+ ],
+ [
+ "▁las",
+ "t"
+ ],
+ [
+ "▁",
+ "last"
+ ],
+ [
+ "▁g",
+ "ot"
+ ],
+ [
+ "▁go",
+ "t"
+ ],
+ [
+ "▁",
+ "got"
+ ],
+ [
+ "ve",
+ "n"
+ ],
+ [
+ "v",
+ "en"
+ ],
+ [
+ "ba",
+ "ck"
+ ],
+ [
+ "b",
+ "ack"
+ ],
+ [
+ "Se",
+ "t"
+ ],
+ [
+ "S",
+ "et"
+ ],
+ [
+ "e",
+ "y"
+ ],
+ [
+ "ro",
+ "l"
+ ],
+ [
+ "r",
+ "ol"
+ ],
+ [
+ "▁c",
+ "r"
+ ],
+ [
+ "▁",
+ "cr"
+ ],
+ [
+ "th",
+ "ing"
+ ],
+ [
+ "t",
+ "hing"
+ ],
+ [
+ "re",
+ "t"
+ ],
+ [
+ "r",
+ "et"
+ ],
+ [
+ "é",
+ "s"
+ ],
+ [
+ "is",
+ "m"
+ ],
+ [
+ "i",
+ "sm"
+ ],
+ [
+ "▁bet",
+ "ween"
+ ],
+ [
+ "▁",
+ "between"
+ ],
+ [
+ "O",
+ "b"
+ ],
+ [
+ "et",
+ "hing"
+ ],
+ [
+ "eth",
+ "ing"
+ ],
+ [
+ "e",
+ "thing"
+ ],
+ [
+ "m",
+ "p"
+ ],
+ [
+ "▁l",
+ "o"
+ ],
+ [
+ "▁",
+ "lo"
+ ],
+ [
+ "at",
+ "s"
+ ],
+ [
+ "a",
+ "ts"
+ ],
+ [
+ "▁N",
+ "ew"
+ ],
+ [
+ "▁Ne",
+ "w"
+ ],
+ [
+ "▁",
+ "New"
+ ],
+ [
+ "в",
+ "и"
+ ],
+ [
+ "ad",
+ "o"
+ ],
+ [
+ "a",
+ "do"
+ ],
+ [
+ "de",
+ "x"
+ ],
+ [
+ "d",
+ "ex"
+ ],
+ [
+ "д",
+ "и"
+ ],
+ [
+ "▁p",
+ "ass"
+ ],
+ [
+ "▁pas",
+ "s"
+ ],
+ [
+ "▁pa",
+ "ss"
+ ],
+ [
+ "▁",
+ "pass"
+ ],
+ [
+ "w",
+ "h"
+ ],
+ [
+ "▁d",
+ "en"
+ ],
+ [
+ "▁de",
+ "n"
+ ],
+ [
+ "▁",
+ "den"
+ ],
+ [
+ "Ge",
+ "t"
+ ],
+ [
+ "G",
+ "et"
+ ],
+ [
+ "ap",
+ "t"
+ ],
+ [
+ "a",
+ "pt"
+ ],
+ [
+ "▁a",
+ "sk"
+ ],
+ [
+ "▁as",
+ "k"
+ ],
+ [
+ "▁",
+ "ask"
+ ],
+ [
+ "▁s",
+ "up"
+ ],
+ [
+ "▁su",
+ "p"
+ ],
+ [
+ "▁",
+ "sup"
+ ],
+ [
+ "Val",
+ "ue"
+ ],
+ [
+ "V",
+ "alue"
+ ],
+ [
+ "н",
+ "ы"
+ ],
+ [
+ "▁t",
+ "ry"
+ ],
+ [
+ "▁tr",
+ "y"
+ ],
+ [
+ "▁",
+ "try"
+ ],
+ [
+ "lat",
+ "ion"
+ ],
+ [
+ "l",
+ "ation"
+ ],
+ [
+ "da",
+ "y"
+ ],
+ [
+ "d",
+ "ay"
+ ],
+ [
+ "ne",
+ "ss"
+ ],
+ [
+ "nes",
+ "s"
+ ],
+ [
+ "n",
+ "ess"
+ ],
+ [
+ "et",
+ "s"
+ ],
+ [
+ "e",
+ "ts"
+ ],
+ [
+ "▁ex",
+ "per"
+ ],
+ [
+ "▁exp",
+ "er"
+ ],
+ [
+ "▁",
+ "exper"
+ ],
+ [
+ "T",
+ "r"
+ ],
+ [
+ "▁M",
+ "ar"
+ ],
+ [
+ "▁Ma",
+ "r"
+ ],
+ [
+ "▁",
+ "Mar"
+ ],
+ [
+ "se",
+ "rv"
+ ],
+ [
+ "ser",
+ "v"
+ ],
+ [
+ "s",
+ "erv"
+ ],
+ [
+ "b",
+ "r"
+ ],
+ [
+ "▁n",
+ "umber"
+ ],
+ [
+ "▁num",
+ "ber"
+ ],
+ [
+ "▁nu",
+ "mber"
+ ],
+ [
+ "▁",
+ "number"
+ ],
+ [
+ "in",
+ "al"
+ ],
+ [
+ "ina",
+ "l"
+ ],
+ [
+ "i",
+ "nal"
+ ],
+ [
+ "ce",
+ "nt"
+ ],
+ [
+ "cen",
+ "t"
+ ],
+ [
+ "c",
+ "ent"
+ ],
+ [
+ "/",
+ "*"
+ ],
+ [
+ "no",
+ "t"
+ ],
+ [
+ "n",
+ "ot"
+ ],
+ [
+ "ion",
+ "al"
+ ],
+ [
+ "io",
+ "nal"
+ ],
+ [
+ "iona",
+ "l"
+ ],
+ [
+ "i",
+ "onal"
+ ],
+ [
+ "▁f",
+ "inal"
+ ],
+ [
+ "▁fin",
+ "al"
+ ],
+ [
+ "▁fi",
+ "nal"
+ ],
+ [
+ "▁",
+ "final"
+ ],
+ [
+ "'",
+ ")"
+ ],
+ [
+ "▁r",
+ "un"
+ ],
+ [
+ "▁ru",
+ "n"
+ ],
+ [
+ "▁",
+ "run"
+ ],
+ [
+ "ov",
+ "er"
+ ],
+ [
+ "ove",
+ "r"
+ ],
+ [
+ "o",
+ "ver"
+ ],
+ [
+ "▁n",
+ "ever"
+ ],
+ [
+ "▁ne",
+ "ver"
+ ],
+ [
+ "▁",
+ "never"
+ ],
+ [
+ "u",
+ "c"
+ ],
+ [
+ "▁h",
+ "igh"
+ ],
+ [
+ "▁hig",
+ "h"
+ ],
+ [
+ "▁hi",
+ "gh"
+ ],
+ [
+ "▁",
+ "high"
+ ],
+ [
+ "yl",
+ "e"
+ ],
+ [
+ "y",
+ "le"
+ ],
+ [
+ "▁in",
+ "s"
+ ],
+ [
+ "▁i",
+ "ns"
+ ],
+ [
+ "▁",
+ "ins"
+ ],
+ [
+ "▁b",
+ "est"
+ ],
+ [
+ "▁be",
+ "st"
+ ],
+ [
+ "▁bes",
+ "t"
+ ],
+ [
+ "▁",
+ "best"
+ ],
+ [
+ "it",
+ "tle"
+ ],
+ [
+ "itt",
+ "le"
+ ],
+ [
+ "ri",
+ "c"
+ ],
+ [
+ "r",
+ "ic"
+ ],
+ [
+ "▁s",
+ "ign"
+ ],
+ [
+ "▁si",
+ "gn"
+ ],
+ [
+ "▁sig",
+ "n"
+ ],
+ [
+ "▁",
+ "sign"
+ ],
+ [
+ "▁d",
+ "em"
+ ],
+ [
+ "▁de",
+ "m"
+ ],
+ [
+ "▁",
+ "dem"
+ ],
+ [
+ "in",
+ "ess"
+ ],
+ [
+ "ine",
+ "ss"
+ ],
+ [
+ "ines",
+ "s"
+ ],
+ [
+ "i",
+ "ness"
+ ],
+ [
+ "g",
+ "y"
+ ],
+ [
+ "▁w",
+ "ar"
+ ],
+ [
+ "▁wa",
+ "r"
+ ],
+ [
+ "▁",
+ "war"
+ ],
+ [
+ "is",
+ "hed"
+ ],
+ [
+ "ish",
+ "ed"
+ ],
+ [
+ "▁g",
+ "iv"
+ ],
+ [
+ "▁gi",
+ "v"
+ ],
+ [
+ "ke",
+ "y"
+ ],
+ [
+ "k",
+ "ey"
+ ],
+ [
+ "▁",
+ "X"
+ ],
+ [
+ "(",
+ "$"
+ ],
+ [
+ "▁ch",
+ "ild"
+ ],
+ [
+ "▁chi",
+ "ld"
+ ],
+ [
+ "▁",
+ "child"
+ ],
+ [
+ "le",
+ "ss"
+ ],
+ [
+ "les",
+ "s"
+ ],
+ [
+ "l",
+ "ess"
+ ],
+ [
+ "way",
+ "s"
+ ],
+ [
+ "wa",
+ "ys"
+ ],
+ [
+ "w",
+ "ays"
+ ],
+ [
+ "in",
+ "cl"
+ ],
+ [
+ "inc",
+ "l"
+ ],
+ [
+ "ro",
+ "p"
+ ],
+ [
+ "r",
+ "op"
+ ],
+ [
+ "ra",
+ "w"
+ ],
+ [
+ "r",
+ "aw"
+ ],
+ [
+ ":",
+ "//"
+ ],
+ [
+ "▁",
+ "«"
+ ],
+ [
+ "n",
+ "o"
+ ],
+ [
+ "ind",
+ "ow"
+ ],
+ [
+ "indo",
+ "w"
+ ],
+ [
+ "f",
+ "e"
+ ],
+ [
+ "ri",
+ "end"
+ ],
+ [
+ "rie",
+ "nd"
+ ],
+ [
+ "rien",
+ "d"
+ ],
+ [
+ "▁l",
+ "es"
+ ],
+ [
+ "▁le",
+ "s"
+ ],
+ [
+ "▁",
+ "les"
+ ],
+ [
+ "▁l",
+ "os"
+ ],
+ [
+ "▁lo",
+ "s"
+ ],
+ [
+ "▁",
+ "los"
+ ],
+ [
+ "fil",
+ "e"
+ ],
+ [
+ "fi",
+ "le"
+ ],
+ [
+ "f",
+ "ile"
+ ],
+ [
+ "form",
+ "ation"
+ ],
+ [
+ "format",
+ "ion"
+ ],
+ [
+ "cc",
+ "ess"
+ ],
+ [
+ "c",
+ "cess"
+ ],
+ [
+ "▁",
+ "В"
+ ],
+ [
+ "n",
+ "a"
+ ],
+ [
+ "▁i",
+ "l"
+ ],
+ [
+ "▁",
+ "il"
+ ],
+ [
+ "is",
+ "ion"
+ ],
+ [
+ "isi",
+ "on"
+ ],
+ [
+ "le",
+ "r"
+ ],
+ [
+ "l",
+ "er"
+ ],
+ [
+ "▁a",
+ "rt"
+ ],
+ [
+ "▁ar",
+ "t"
+ ],
+ [
+ "▁",
+ "art"
+ ],
+ [
+ "Con",
+ "t"
+ ],
+ [
+ "Co",
+ "nt"
+ ],
+ [
+ "C",
+ "ont"
+ ],
+ [
+ "▁w",
+ "orld"
+ ],
+ [
+ "▁wor",
+ "ld"
+ ],
+ [
+ "▁",
+ "world"
+ ],
+ [
+ "▁t",
+ "urn"
+ ],
+ [
+ "▁tu",
+ "rn"
+ ],
+ [
+ "▁tur",
+ "n"
+ ],
+ [
+ "▁",
+ "turn"
+ ],
+ [
+ "▁re",
+ "ally"
+ ],
+ [
+ "▁real",
+ "ly"
+ ],
+ [
+ "▁E",
+ "x"
+ ],
+ [
+ "▁",
+ "Ex"
+ ],
+ [
+ "м",
+ "а"
+ ],
+ [
+ "▁",
+ "П"
+ ],
+ [
+ "ter",
+ "s"
+ ],
+ [
+ "te",
+ "rs"
+ ],
+ [
+ "t",
+ "ers"
+ ],
+ [
+ "ar",
+ "get"
+ ],
+ [
+ "arg",
+ "et"
+ ],
+ [
+ "arge",
+ "t"
+ ],
+ [
+ "Er",
+ "r"
+ ],
+ [
+ "E",
+ "rr"
+ ],
+ [
+ "▁h",
+ "app"
+ ],
+ [
+ "▁ha",
+ "pp"
+ ],
+ [
+ "ti",
+ "me"
+ ],
+ [
+ "tim",
+ "e"
+ ],
+ [
+ "t",
+ "ime"
+ ],
+ [
+ "▁S",
+ "o"
+ ],
+ [
+ "▁",
+ "So"
+ ],
+ [
+ "di",
+ "v"
+ ],
+ [
+ "d",
+ "iv"
+ ],
+ [
+ "▁did",
+ "n"
+ ],
+ [
+ "▁di",
+ "dn"
+ ],
+ [
+ "ad",
+ "a"
+ ],
+ [
+ "a",
+ "da"
+ ],
+ [
+ "oo",
+ "t"
+ ],
+ [
+ "o",
+ "ot"
+ ],
+ [
+ "}",
+ ")"
+ ],
+ [
+ "▁s",
+ "ch"
+ ],
+ [
+ "▁sc",
+ "h"
+ ],
+ [
+ "▁",
+ "sch"
+ ],
+ [
+ "▁c",
+ "le"
+ ],
+ [
+ "▁cl",
+ "e"
+ ],
+ [
+ "▁",
+ "cle"
+ ],
+ [
+ "▁some",
+ "thing"
+ ],
+ [
+ "▁som",
+ "ething"
+ ],
+ [
+ "▁somet",
+ "hing"
+ ],
+ [
+ "▁",
+ "something"
+ ],
+ [
+ "()",
+ "."
+ ],
+ [
+ "(",
+ ")."
+ ],
+ [
+ "▁c",
+ "our"
+ ],
+ [
+ "▁co",
+ "ur"
+ ],
+ [
+ "▁cou",
+ "r"
+ ],
+ [
+ "ev",
+ "er"
+ ],
+ [
+ "eve",
+ "r"
+ ],
+ [
+ "e",
+ "ver"
+ ],
+ [
+ "an",
+ "ts"
+ ],
+ [
+ "ant",
+ "s"
+ ],
+ [
+ "▁",
+ "?"
+ ],
+ [
+ "T",
+ "o"
+ ],
+ [
+ "▁",
+ "`"
+ ],
+ [
+ "tr",
+ "y"
+ ],
+ [
+ "t",
+ "ry"
+ ],
+ [
+ "u",
+ "x"
+ ],
+ [
+ "ai",
+ "s"
+ ],
+ [
+ "a",
+ "is"
+ ],
+ [
+ "ro",
+ "ss"
+ ],
+ [
+ "ros",
+ "s"
+ ],
+ [
+ "r",
+ "oss"
+ ],
+ [
+ "hi",
+ "p"
+ ],
+ [
+ "h",
+ "ip"
+ ],
+ [
+ "▁re",
+ "p"
+ ],
+ [
+ "▁r",
+ "ep"
+ ],
+ [
+ "▁",
+ "rep"
+ ],
+ [
+ "la",
+ "bel"
+ ],
+ [
+ "lab",
+ "el"
+ ],
+ [
+ "l",
+ "abel"
+ ],
+ [
+ "▁b",
+ "oth"
+ ],
+ [
+ "▁bo",
+ "th"
+ ],
+ [
+ "▁bot",
+ "h"
+ ],
+ [
+ "▁",
+ "both"
+ ],
+ [
+ "*",
+ ","
+ ],
+ [
+ "ot",
+ "t"
+ ],
+ [
+ "o",
+ "tt"
+ ],
+ [
+ "м",
+ "и"
+ ],
+ [
+ "an",
+ "e"
+ ],
+ [
+ "a",
+ "ne"
+ ],
+ [
+ "▁o",
+ "pen"
+ ],
+ [
+ "▁op",
+ "en"
+ ],
+ [
+ "▁",
+ "open"
+ ],
+ [
+ "w",
+ "w"
+ ],
+ [
+ "▁c",
+ "ome"
+ ],
+ [
+ "▁com",
+ "e"
+ ],
+ [
+ "▁co",
+ "me"
+ ],
+ [
+ "▁",
+ "come"
+ ],
+ [
+ "▁e",
+ "xt"
+ ],
+ [
+ "▁ex",
+ "t"
+ ],
+ [
+ "▁",
+ "ext"
+ ],
+ [
+ "re",
+ "m"
+ ],
+ [
+ "r",
+ "em"
+ ],
+ [
+ "_{",
+ "\\"
+ ],
+ [
+ "_",
+ "{\\"
+ ],
+ [
+ "▁o",
+ "ld"
+ ],
+ [
+ "▁ol",
+ "d"
+ ],
+ [
+ "▁",
+ "old"
+ ],
+ [
+ "ch",
+ "ed"
+ ],
+ [
+ "che",
+ "d"
+ ],
+ [
+ "c",
+ "hed"
+ ],
+ [
+ ".",
+ "_"
+ ],
+ [
+ "M",
+ "E"
+ ],
+ [
+ "if",
+ "y"
+ ],
+ [
+ "i",
+ "fy"
+ ],
+ [
+ "g",
+ "g"
+ ],
+ [
+ "Co",
+ "l"
+ ],
+ [
+ "C",
+ "ol"
+ ],
+ [
+ "vi",
+ "ew"
+ ],
+ [
+ "v",
+ "iew"
+ ],
+ [
+ "▁b",
+ "us"
+ ],
+ [
+ "▁bu",
+ "s"
+ ],
+ [
+ "▁",
+ "bus"
+ ],
+ [
+ "▁m",
+ "ust"
+ ],
+ [
+ "▁mus",
+ "t"
+ ],
+ [
+ "▁mu",
+ "st"
+ ],
+ [
+ "▁",
+ "must"
+ ],
+ [
+ "▁d",
+ "ifferent"
+ ],
+ [
+ "▁differ",
+ "ent"
+ ],
+ [
+ "lo",
+ "g"
+ ],
+ [
+ "l",
+ "og"
+ ],
+ [
+ "is",
+ "ts"
+ ],
+ [
+ "ist",
+ "s"
+ ],
+ [
+ "i",
+ "sts"
+ ],
+ [
+ "ro",
+ "ll"
+ ],
+ [
+ "rol",
+ "l"
+ ],
+ [
+ "r",
+ "oll"
+ ],
+ [
+ "a",
+ "i"
+ ],
+ [
+ "▁з",
+ "а"
+ ],
+ [
+ "▁",
+ "за"
+ ],
+ [
+ "▁s",
+ "ystem"
+ ],
+ [
+ "▁sys",
+ "tem"
+ ],
+ [
+ "▁syst",
+ "em"
+ ],
+ [
+ "▁",
+ "system"
+ ],
+ [
+ "iv",
+ "ers"
+ ],
+ [
+ "ive",
+ "rs"
+ ],
+ [
+ "iver",
+ "s"
+ ],
+ [
+ "i",
+ "vers"
+ ],
+ [
+ "at",
+ "us"
+ ],
+ [
+ "atu",
+ "s"
+ ],
+ [
+ "ot",
+ "e"
+ ],
+ [
+ "o",
+ "te"
+ ],
+ [
+ "me",
+ "d"
+ ],
+ [
+ "m",
+ "ed"
+ ],
+ [
+ "]",
+ "."
+ ],
+ [
+ "ak",
+ "es"
+ ],
+ [
+ "ake",
+ "s"
+ ],
+ [
+ "a",
+ "kes"
+ ],
+ [
+ "R",
+ "O"
+ ],
+ [
+ "▁c",
+ "ent"
+ ],
+ [
+ "▁ce",
+ "nt"
+ ],
+ [
+ "▁",
+ "cent"
+ ],
+ [
+ "gr",
+ "am"
+ ],
+ [
+ "gra",
+ "m"
+ ],
+ [
+ "g",
+ "ram"
+ ],
+ [
+ "▁p",
+ "rivate"
+ ],
+ [
+ "▁priv",
+ "ate"
+ ],
+ [
+ "▁",
+ "private"
+ ],
+ [
+ "▁g",
+ "reat"
+ ],
+ [
+ "▁gre",
+ "at"
+ ],
+ [
+ "\"",
+ ";"
+ ],
+ [
+ "op",
+ "y"
+ ],
+ [
+ "o",
+ "py"
+ ],
+ [
+ "▁fe",
+ "el"
+ ],
+ [
+ "▁fee",
+ "l"
+ ],
+ [
+ "▁H",
+ "ow"
+ ],
+ [
+ "▁Ho",
+ "w"
+ ],
+ [
+ "▁",
+ "How"
+ ],
+ [
+ "//",
+ "//"
+ ],
+ [
+ "///",
+ "/"
+ ],
+ [
+ "/",
+ "///"
+ ],
+ [
+ "I",
+ "C"
+ ],
+ [
+ "▁d",
+ "r"
+ ],
+ [
+ "▁",
+ "dr"
+ ],
+ [
+ "ain",
+ "s"
+ ],
+ [
+ "ai",
+ "ns"
+ ],
+ [
+ "a",
+ "ins"
+ ],
+ [
+ "lo",
+ "ck"
+ ],
+ [
+ "loc",
+ "k"
+ ],
+ [
+ "l",
+ "ock"
+ ],
+ [
+ "E",
+ "n"
+ ],
+ [
+ "▁S",
+ "ch"
+ ],
+ [
+ "▁Sc",
+ "h"
+ ],
+ [
+ "▁",
+ "Sch"
+ ],
+ [
+ "▁m",
+ "at"
+ ],
+ [
+ "▁ma",
+ "t"
+ ],
+ [
+ "▁",
+ "mat"
+ ],
+ [
+ "▁h",
+ "ome"
+ ],
+ [
+ "▁hom",
+ "e"
+ ],
+ [
+ "▁ho",
+ "me"
+ ],
+ [
+ "▁",
+ "home"
+ ],
+ [
+ "per",
+ "ty"
+ ],
+ [
+ "pert",
+ "y"
+ ],
+ [
+ "te",
+ "st"
+ ],
+ [
+ "tes",
+ "t"
+ ],
+ [
+ "t",
+ "est"
+ ],
+ [
+ "lo",
+ "c"
+ ],
+ [
+ "l",
+ "oc"
+ ],
+ [
+ "▁w",
+ "om"
+ ],
+ [
+ "▁wo",
+ "m"
+ ],
+ [
+ "s",
+ "w"
+ ],
+ [
+ "ar",
+ "ly"
+ ],
+ [
+ "arl",
+ "y"
+ ],
+ [
+ "▁E",
+ "n"
+ ],
+ [
+ "▁",
+ "En"
+ ],
+ [
+ "▁к",
+ "о"
+ ],
+ [
+ "▁",
+ "ко"
+ ],
+ [
+ "de",
+ "n"
+ ],
+ [
+ "d",
+ "en"
+ ],
+ [
+ "ст",
+ "а"
+ ],
+ [
+ "с",
+ "та"
+ ],
+ [
+ "▁",
+ "а"
+ ],
+ [
+ "et",
+ "er"
+ ],
+ [
+ "ete",
+ "r"
+ ],
+ [
+ "e",
+ "ter"
+ ],
+ [
+ "▁incl",
+ "ud"
+ ],
+ [
+ "▁inclu",
+ "d"
+ ],
+ [
+ "UL",
+ "L"
+ ],
+ [
+ "U",
+ "LL"
+ ],
+ [
+ "▁m",
+ "em"
+ ],
+ [
+ "▁me",
+ "m"
+ ],
+ [
+ "▁",
+ "mem"
+ ],
+ [
+ "▁p",
+ "o"
+ ],
+ [
+ "▁",
+ "po"
+ ],
+ [
+ "▁l",
+ "ittle"
+ ],
+ [
+ "▁lit",
+ "tle"
+ ],
+ [
+ "▁litt",
+ "le"
+ ],
+ [
+ "▁a",
+ "rg"
+ ],
+ [
+ "▁ar",
+ "g"
+ ],
+ [
+ "▁",
+ "arg"
+ ],
+ [
+ "▁}",
+ ","
+ ],
+ [
+ "▁",
+ "},"
+ ],
+ [
+ "in",
+ "clude"
+ ],
+ [
+ "incl",
+ "ude"
+ ],
+ [
+ "et",
+ "a"
+ ],
+ [
+ "e",
+ "ta"
+ ],
+ [
+ "▁p",
+ "lace"
+ ],
+ [
+ "▁pl",
+ "ace"
+ ],
+ [
+ "▁plac",
+ "e"
+ ],
+ [
+ "▁",
+ "place"
+ ],
+ [
+ "id",
+ "th"
+ ],
+ [
+ "us",
+ "tom"
+ ],
+ [
+ "ust",
+ "om"
+ ],
+ [
+ "▁|",
+ "|"
+ ],
+ [
+ "▁",
+ "||"
+ ],
+ [
+ "▁t",
+ "em"
+ ],
+ [
+ "▁te",
+ "m"
+ ],
+ [
+ "▁",
+ "tem"
+ ],
+ [
+ "ri",
+ "ed"
+ ],
+ [
+ "rie",
+ "d"
+ ],
+ [
+ "r",
+ "ied"
+ ],
+ [
+ "▁f",
+ "act"
+ ],
+ [
+ "▁fac",
+ "t"
+ ],
+ [
+ "▁fa",
+ "ct"
+ ],
+ [
+ "▁",
+ "fact"
+ ],
+ [
+ "ien",
+ "ce"
+ ],
+ [
+ "i",
+ "ence"
+ ],
+ [
+ "▁P",
+ "l"
+ ],
+ [
+ "▁",
+ "Pl"
+ ],
+ [
+ "op",
+ "t"
+ ],
+ [
+ "o",
+ "pt"
+ ],
+ [
+ "el",
+ "e"
+ ],
+ [
+ "e",
+ "le"
+ ],
+ [
+ "g",
+ "o"
+ ],
+ [
+ "A",
+ "C"
+ ],
+ [
+ "in",
+ "ter"
+ ],
+ [
+ "int",
+ "er"
+ ],
+ [
+ "inte",
+ "r"
+ ],
+ [
+ "====",
+ "===="
+ ],
+ [
+ "()",
+ ","
+ ],
+ [
+ "(",
+ "),"
+ ],
+ [
+ "ot",
+ "s"
+ ],
+ [
+ "o",
+ "ts"
+ ],
+ [
+ "ra",
+ "l"
+ ],
+ [
+ "r",
+ "al"
+ ],
+ [
+ "iqu",
+ "e"
+ ],
+ [
+ "iq",
+ "ue"
+ ],
+ [
+ "i",
+ "que"
+ ],
+ [
+ "av",
+ "ing"
+ ],
+ [
+ "avi",
+ "ng"
+ ],
+ [
+ "a",
+ "ving"
+ ],
+ [
+ "m",
+ "l"
+ ],
+ [
+ "▁th",
+ "ought"
+ ],
+ [
+ "▁though",
+ "t"
+ ],
+ [
+ "▁thou",
+ "ght"
+ ],
+ [
+ "fr",
+ "ac"
+ ],
+ [
+ "f",
+ "rac"
+ ],
+ [
+ "▁c",
+ "are"
+ ],
+ [
+ "▁car",
+ "e"
+ ],
+ [
+ "▁ca",
+ "re"
+ ],
+ [
+ "▁",
+ "care"
+ ],
+ [
+ "()",
+ ");"
+ ],
+ [
+ "())",
+ ";"
+ ],
+ [
+ "(",
+ "));"
+ ],
+ [
+ "▁p",
+ "ut"
+ ],
+ [
+ "▁pu",
+ "t"
+ ],
+ [
+ "▁",
+ "put"
+ ],
+ [
+ "▁m",
+ "ight"
+ ],
+ [
+ "▁mi",
+ "ght"
+ ],
+ [
+ "▁mig",
+ "ht"
+ ],
+ [
+ "▁A",
+ "mer"
+ ],
+ [
+ "▁Am",
+ "er"
+ ],
+ [
+ "▁",
+ "Amer"
+ ],
+ [
+ "▁(",
+ "!"
+ ],
+ [
+ "▁",
+ "(!"
+ ],
+ [
+ "am",
+ "ple"
+ ],
+ [
+ "amp",
+ "le"
+ ],
+ [
+ "al",
+ "th"
+ ],
+ [
+ "alt",
+ "h"
+ ],
+ [
+ "▁f",
+ "ew"
+ ],
+ [
+ "▁fe",
+ "w"
+ ],
+ [
+ "▁st",
+ "ate"
+ ],
+ [
+ "▁stat",
+ "e"
+ ],
+ [
+ "▁sta",
+ "te"
+ ],
+ [
+ "▁",
+ "state"
+ ],
+ [
+ "su",
+ "b"
+ ],
+ [
+ "s",
+ "ub"
+ ],
+ [
+ "▁O",
+ "r"
+ ],
+ [
+ "▁",
+ "Or"
+ ],
+ [
+ "]",
+ ";"
+ ],
+ [
+ "▁s",
+ "ize"
+ ],
+ [
+ "▁si",
+ "ze"
+ ],
+ [
+ "▁",
+ "size"
+ ],
+ [
+ "▁S",
+ "p"
+ ],
+ [
+ "▁",
+ "Sp"
+ ],
+ [
+ "▁with",
+ "out"
+ ],
+ [
+ "▁",
+ "without"
+ ],
+ [
+ "▁p",
+ "oss"
+ ],
+ [
+ "▁pos",
+ "s"
+ ],
+ [
+ "▁po",
+ "ss"
+ ],
+ [
+ "▁",
+ "poss"
+ ],
+ [
+ "e",
+ "q"
+ ],
+ [
+ "pl",
+ "ay"
+ ],
+ [
+ "p",
+ "lay"
+ ],
+ [
+ "▁ex",
+ "pect"
+ ],
+ [
+ "▁exp",
+ "ect"
+ ],
+ [
+ "▁",
+ "expect"
+ ],
+ [
+ "▁se",
+ "cond"
+ ],
+ [
+ "▁sec",
+ "ond"
+ ],
+ [
+ "▁",
+ "second"
+ ],
+ [
+ "▁S",
+ "tring"
+ ],
+ [
+ "▁St",
+ "ring"
+ ],
+ [
+ "▁Str",
+ "ing"
+ ],
+ [
+ "▁",
+ "String"
+ ],
+ [
+ "ui",
+ "ld"
+ ],
+ [
+ "u",
+ "ild"
+ ],
+ [
+ "▁n",
+ "ext"
+ ],
+ [
+ "▁ne",
+ "xt"
+ ],
+ [
+ "▁",
+ "next"
+ ],
+ [
+ "+",
+ "+"
+ ],
+ [
+ "re",
+ "qu"
+ ],
+ [
+ "req",
+ "u"
+ ],
+ [
+ "r",
+ "equ"
+ ],
+ [
+ "▁A",
+ "ll"
+ ],
+ [
+ "▁Al",
+ "l"
+ ],
+ [
+ "▁",
+ "All"
+ ],
+ [
+ "▁m",
+ "en"
+ ],
+ [
+ "▁me",
+ "n"
+ ],
+ [
+ "▁",
+ "men"
+ ],
+ [
+ "▁W",
+ "hen"
+ ],
+ [
+ "▁Wh",
+ "en"
+ ],
+ [
+ "▁Whe",
+ "n"
+ ],
+ [
+ "▁",
+ "When"
+ ],
+ [
+ "it",
+ "er"
+ ],
+ [
+ "ite",
+ "r"
+ ],
+ [
+ "i",
+ "ter"
+ ],
+ [
+ "am",
+ "ent"
+ ],
+ [
+ "ame",
+ "nt"
+ ],
+ [
+ "amen",
+ "t"
+ ],
+ [
+ "a",
+ "ment"
+ ],
+ [
+ "ne",
+ "t"
+ ],
+ [
+ "n",
+ "et"
+ ],
+ [
+ "▁",
+ "К"
+ ],
+ [
+ "ro",
+ "n"
+ ],
+ [
+ "r",
+ "on"
+ ],
+ [
+ "ain",
+ "t"
+ ],
+ [
+ "ai",
+ "nt"
+ ],
+ [
+ "a",
+ "int"
+ ],
+ [
+ "▁I",
+ "s"
+ ],
+ [
+ "▁",
+ "Is"
+ ],
+ [
+ "в",
+ "е"
+ ],
+ [
+ "pe",
+ "nd"
+ ],
+ [
+ "pen",
+ "d"
+ ],
+ [
+ "p",
+ "end"
+ ],
+ [
+ "trans",
+ "lation"
+ ],
+ [
+ "transl",
+ "ation"
+ ],
+ [
+ "▁г",
+ "о"
+ ],
+ [
+ "▁",
+ "го"
+ ],
+ [
+ "ч",
+ "е"
+ ],
+ [
+ "▁v",
+ "an"
+ ],
+ [
+ "▁va",
+ "n"
+ ],
+ [
+ "▁",
+ "van"
+ ],
+ [
+ "▁an",
+ "other"
+ ],
+ [
+ "▁ano",
+ "ther"
+ ],
+ [
+ "▁re",
+ "t"
+ ],
+ [
+ "▁r",
+ "et"
+ ],
+ [
+ "▁",
+ "ret"
+ ],
+ [
+ "▁L",
+ "a"
+ ],
+ [
+ "▁",
+ "La"
+ ],
+ [
+ "Mo",
+ "d"
+ ],
+ [
+ "M",
+ "od"
+ ],
+ [
+ "IO",
+ "N"
+ ],
+ [
+ "I",
+ "ON"
+ ],
+ [
+ "li",
+ "st"
+ ],
+ [
+ "l",
+ "ist"
+ ],
+ [
+ "▁p",
+ "ost"
+ ],
+ [
+ "▁pos",
+ "t"
+ ],
+ [
+ "▁po",
+ "st"
+ ],
+ [
+ "▁",
+ "post"
+ ],
+ [
+ "d",
+ "a"
+ ],
+ [
+ "wa",
+ "re"
+ ],
+ [
+ "war",
+ "e"
+ ],
+ [
+ "w",
+ "are"
+ ],
+ [
+ "▁w",
+ "ord"
+ ],
+ [
+ "▁wor",
+ "d"
+ ],
+ [
+ "▁wo",
+ "rd"
+ ],
+ [
+ "▁",
+ "word"
+ ],
+ [
+ "Err",
+ "or"
+ ],
+ [
+ "Er",
+ "ror"
+ ],
+ [
+ "▁se",
+ "em"
+ ],
+ [
+ "▁see",
+ "m"
+ ],
+ [
+ "▁cont",
+ "in"
+ ],
+ [
+ "▁",
+ "contin"
+ ],
+ [
+ "at",
+ "ic"
+ ],
+ [
+ "ati",
+ "c"
+ ],
+ [
+ "▁th",
+ "ree"
+ ],
+ [
+ "▁thr",
+ "ee"
+ ],
+ [
+ "▁",
+ "three"
+ ],
+ [
+ "Ob",
+ "ject"
+ ],
+ [
+ "Obj",
+ "ect"
+ ],
+ [
+ "▁part",
+ "ic"
+ ],
+ [
+ "▁parti",
+ "c"
+ ],
+ [
+ "$",
+ "."
+ ],
+ [
+ "▁m",
+ "ark"
+ ],
+ [
+ "▁mar",
+ "k"
+ ],
+ [
+ "▁",
+ "mark"
+ ],
+ [
+ "▁v",
+ "is"
+ ],
+ [
+ "▁vi",
+ "s"
+ ],
+ [
+ "▁",
+ "vis"
+ ],
+ [
+ "r",
+ "c"
+ ],
+ [
+ "▁s",
+ "w"
+ ],
+ [
+ "▁",
+ "sw"
+ ],
+ [
+ "pt",
+ "ions"
+ ],
+ [
+ "ption",
+ "s"
+ ],
+ [
+ "▁b",
+ "reak"
+ ],
+ [
+ "▁bre",
+ "ak"
+ ],
+ [
+ "▁",
+ "break"
+ ],
+ [
+ "▁th",
+ "ings"
+ ],
+ [
+ "▁thing",
+ "s"
+ ],
+ [
+ "▁thin",
+ "gs"
+ ],
+ [
+ "ut",
+ "e"
+ ],
+ [
+ "u",
+ "te"
+ ],
+ [
+ "u",
+ "i"
+ ],
+ [
+ "▁T",
+ "hat"
+ ],
+ [
+ "▁Th",
+ "at"
+ ],
+ [
+ "▁",
+ "That"
+ ],
+ [
+ "ur",
+ "s"
+ ],
+ [
+ "u",
+ "rs"
+ ],
+ [
+ "g",
+ "l"
+ ],
+ [
+ "р",
+ "у"
+ ],
+ [
+ "▁f",
+ "ile"
+ ],
+ [
+ "▁fil",
+ "e"
+ ],
+ [
+ "▁fi",
+ "le"
+ ],
+ [
+ "▁",
+ "file"
+ ],
+ [
+ "us",
+ "e"
+ ],
+ [
+ "u",
+ "se"
+ ],
+ [
+ "ig",
+ "ned"
+ ],
+ [
+ "ign",
+ "ed"
+ ],
+ [
+ "igne",
+ "d"
+ ],
+ [
+ "par",
+ "t"
+ ],
+ [
+ "pa",
+ "rt"
+ ],
+ [
+ "p",
+ "art"
+ ],
+ [
+ "U",
+ "n"
+ ],
+ [
+ "▁e",
+ "qu"
+ ],
+ [
+ "▁eq",
+ "u"
+ ],
+ [
+ "▁",
+ "equ"
+ ],
+ [
+ "(",
+ "&"
+ ],
+ [
+ "▁l",
+ "ead"
+ ],
+ [
+ "▁le",
+ "ad"
+ ],
+ [
+ "r",
+ "m"
+ ],
+ [
+ "ain",
+ "ed"
+ ],
+ [
+ "ai",
+ "ned"
+ ],
+ [
+ "aine",
+ "d"
+ ],
+ [
+ "a",
+ "ined"
+ ],
+ [
+ "▁B",
+ "e"
+ ],
+ [
+ "▁",
+ "Be"
+ ],
+ [
+ "pat",
+ "h"
+ ],
+ [
+ "pa",
+ "th"
+ ],
+ [
+ "p",
+ "ath"
+ ],
+ [
+ "▁sm",
+ "all"
+ ],
+ [
+ "▁",
+ "small"
+ ],
+ [
+ "ag",
+ "er"
+ ],
+ [
+ "age",
+ "r"
+ ],
+ [
+ "a",
+ "ger"
+ ],
+ [
+ "▁al",
+ "ways"
+ ],
+ [
+ "▁",
+ "always"
+ ],
+ [
+ "▁E",
+ "l"
+ ],
+ [
+ "▁",
+ "El"
+ ],
+ [
+ "▁or",
+ "der"
+ ],
+ [
+ "▁ord",
+ "er"
+ ],
+ [
+ "▁",
+ "order"
+ ],
+ [
+ "▁e",
+ "y"
+ ],
+ [
+ "▁",
+ "ey"
+ ],
+ [
+ "▁w",
+ "on"
+ ],
+ [
+ "▁wo",
+ "n"
+ ],
+ [
+ "▁",
+ "won"
+ ],
+ [
+ "ap",
+ "e"
+ ],
+ [
+ "a",
+ "pe"
+ ],
+ [
+ "▁l",
+ "eft"
+ ],
+ [
+ "▁le",
+ "ft"
+ ],
+ [
+ "▁",
+ "left"
+ ],
+ [
+ "av",
+ "a"
+ ],
+ [
+ "a",
+ "va"
+ ],
+ [
+ "it",
+ "em"
+ ],
+ [
+ "ite",
+ "m"
+ ],
+ [
+ "i",
+ "tem"
+ ],
+ [
+ "ho",
+ "r"
+ ],
+ [
+ "h",
+ "or"
+ ],
+ [
+ "▁a",
+ "way"
+ ],
+ [
+ "▁aw",
+ "ay"
+ ],
+ [
+ "▁",
+ "away"
+ ],
+ [
+ "b",
+ "b"
+ ],
+ [
+ "fu",
+ "n"
+ ],
+ [
+ "f",
+ "un"
+ ],
+ [
+ "▁I",
+ "nd"
+ ],
+ [
+ "▁In",
+ "d"
+ ],
+ [
+ "▁",
+ "Ind"
+ ],
+ [
+ "m",
+ "b"
+ ],
+ [
+ "▁st",
+ "ruct"
+ ],
+ [
+ "▁str",
+ "uct"
+ ],
+ [
+ "▁stru",
+ "ct"
+ ],
+ [
+ "▁",
+ "struct"
+ ],
+ [
+ "▁pro",
+ "cess"
+ ],
+ [
+ "▁proc",
+ "ess"
+ ],
+ [
+ "▁proces",
+ "s"
+ ],
+ [
+ "▁",
+ "process"
+ ],
+ [
+ "▁s",
+ "upport"
+ ],
+ [
+ "▁sup",
+ "port"
+ ],
+ [
+ "▁supp",
+ "ort"
+ ],
+ [
+ "▁",
+ "support"
+ ],
+ [
+ ");",
+ "\r"
+ ],
+ [
+ ")",
+ ";\r"
+ ],
+ [
+ "ió",
+ "n"
+ ],
+ [
+ "i",
+ "ón"
+ ],
+ [
+ "L",
+ "O"
+ ],
+ [
+ "▁o",
+ "per"
+ ],
+ [
+ "▁op",
+ "er"
+ ],
+ [
+ "▁",
+ "oper"
+ ],
+ [
+ "U",
+ "T"
+ ],
+ [
+ "▁",
+ "·"
+ ],
+ [
+ "P",
+ "E"
+ ],
+ [
+ "lo",
+ "ad"
+ ],
+ [
+ "l",
+ "oad"
+ ],
+ [
+ "of",
+ "f"
+ ],
+ [
+ "o",
+ "ff"
+ ],
+ [
+ "▁N",
+ "o"
+ ],
+ [
+ "▁",
+ "No"
+ ],
+ [
+ "iv",
+ "es"
+ ],
+ [
+ "ive",
+ "s"
+ ],
+ [
+ "i",
+ "ves"
+ ],
+ [
+ "ic",
+ "an"
+ ],
+ [
+ "ica",
+ "n"
+ ],
+ [
+ "i",
+ "can"
+ ],
+ [
+ "▁v",
+ "e"
+ ],
+ [
+ "▁",
+ "ve"
+ ],
+ [
+ "act",
+ "ion"
+ ],
+ [
+ "a",
+ "ction"
+ ],
+ [
+ "'",
+ ";"
+ ],
+ [
+ "▁v",
+ "o"
+ ],
+ [
+ "▁",
+ "vo"
+ ],
+ [
+ "$",
+ ","
+ ],
+ [
+ "▁G",
+ "r"
+ ],
+ [
+ "▁",
+ "Gr"
+ ],
+ [
+ "pr",
+ "e"
+ ],
+ [
+ "p",
+ "re"
+ ],
+ [
+ "n",
+ "y"
+ ],
+ [
+ "ain",
+ "ing"
+ ],
+ [
+ "ai",
+ "ning"
+ ],
+ [
+ "a",
+ "ining"
+ ],
+ [
+ "io",
+ "r"
+ ],
+ [
+ "i",
+ "or"
+ ],
+ [
+ "in",
+ "it"
+ ],
+ [
+ "ini",
+ "t"
+ ],
+ [
+ "i",
+ "nit"
+ ],
+ [
+ "le",
+ "ction"
+ ],
+ [
+ "lect",
+ "ion"
+ ],
+ [
+ "l",
+ "ection"
+ ],
+ [
+ "ar",
+ "m"
+ ],
+ [
+ "a",
+ "rm"
+ ],
+ [
+ "um",
+ "n"
+ ],
+ [
+ "u",
+ "mn"
+ ],
+ [
+ "ag",
+ "s"
+ ],
+ [
+ "a",
+ "gs"
+ ],
+ [
+ "ц",
+ "и"
+ ],
+ [
+ "ск",
+ "о"
+ ],
+ [
+ "с",
+ "ко"
+ ],
+ [
+ "vers",
+ "ion"
+ ],
+ [
+ "v",
+ "ersion"
+ ],
+ [
+ "▁T",
+ "o"
+ ],
+ [
+ "▁",
+ "To"
+ ],
+ [
+ "▁re",
+ "f"
+ ],
+ [
+ "▁r",
+ "ef"
+ ],
+ [
+ "▁",
+ "ref"
+ ],
+ [
+ "st",
+ "and"
+ ],
+ [
+ "sta",
+ "nd"
+ ],
+ [
+ "stan",
+ "d"
+ ],
+ [
+ "▁A",
+ "t"
+ ],
+ [
+ "▁",
+ "At"
+ ],
+ [
+ "if",
+ "t"
+ ],
+ [
+ "i",
+ "ft"
+ ],
+ [
+ "▁e",
+ "in"
+ ],
+ [
+ "fa",
+ "ce"
+ ],
+ [
+ "fac",
+ "e"
+ ],
+ [
+ "f",
+ "ace"
+ ],
+ [
+ "b",
+ "o"
+ ],
+ [
+ "if",
+ "ied"
+ ],
+ [
+ "ifi",
+ "ed"
+ ],
+ [
+ "ve",
+ "d"
+ ],
+ [
+ "v",
+ "ed"
+ ],
+ [
+ "su",
+ "m"
+ ],
+ [
+ "s",
+ "um"
+ ],
+ [
+ "un",
+ "e"
+ ],
+ [
+ "u",
+ "ne"
+ ],
+ [
+ "it",
+ "al"
+ ],
+ [
+ "ita",
+ "l"
+ ],
+ [
+ "i",
+ "tal"
+ ],
+ [
+ "um",
+ "p"
+ ],
+ [
+ "u",
+ "mp"
+ ],
+ [
+ "com",
+ "m"
+ ],
+ [
+ "co",
+ "mm"
+ ],
+ [
+ "c",
+ "omm"
+ ],
+ [
+ "▁m",
+ "ov"
+ ],
+ [
+ "▁mo",
+ "v"
+ ],
+ [
+ "▁",
+ "mov"
+ ],
+ [
+ "el",
+ "t"
+ ],
+ [
+ "e",
+ "lt"
+ ],
+ [
+ "▁v",
+ "on"
+ ],
+ [
+ "▁vo",
+ "n"
+ ],
+ [
+ "vel",
+ "op"
+ ],
+ [
+ "ct",
+ "or"
+ ],
+ [
+ "c",
+ "tor"
+ ],
+ [
+ "he",
+ "ad"
+ ],
+ [
+ "h",
+ "ead"
+ ],
+ [
+ "cl",
+ "e"
+ ],
+ [
+ "c",
+ "le"
+ ],
+ [
+ "▁b",
+ "uild"
+ ],
+ [
+ "▁bu",
+ "ild"
+ ],
+ [
+ "▁",
+ "build"
+ ],
+ [
+ "in",
+ "c"
+ ],
+ [
+ "i",
+ "nc"
+ ],
+ [
+ ".",
+ "'"
+ ],
+ [
+ "b",
+ "s"
+ ],
+ [
+ "in",
+ "fo"
+ ],
+ [
+ "inf",
+ "o"
+ ],
+ [
+ "ch",
+ "n"
+ ],
+ [
+ "c",
+ "hn"
+ ],
+ [
+ "▁we",
+ "ek"
+ ],
+ [
+ "▁",
+ "week"
+ ],
+ [
+ "▁b",
+ "ook"
+ ],
+ [
+ "▁bo",
+ "ok"
+ ],
+ [
+ "▁",
+ "book"
+ ],
+ [
+ "H",
+ "E"
+ ],
+ [
+ "ba",
+ "r"
+ ],
+ [
+ "b",
+ "ar"
+ ],
+ [
+ "ic",
+ "ense"
+ ],
+ [
+ "▁W",
+ "hat"
+ ],
+ [
+ "▁Wh",
+ "at"
+ ],
+ [
+ "▁",
+ "What"
+ ],
+ [
+ "▁qu",
+ "est"
+ ],
+ [
+ "▁que",
+ "st"
+ ],
+ [
+ "▁q",
+ "uest"
+ ],
+ [
+ "▁",
+ "quest"
+ ],
+ [
+ "ur",
+ "ch"
+ ],
+ [
+ "at",
+ "o"
+ ],
+ [
+ "a",
+ "to"
+ ],
+ [
+ "le",
+ "ft"
+ ],
+ [
+ "l",
+ "eft"
+ ],
+ [
+ "▁m",
+ "ar"
+ ],
+ [
+ "▁ma",
+ "r"
+ ],
+ [
+ "▁",
+ "mar"
+ ],
+ [
+ "▁t",
+ "op"
+ ],
+ [
+ "▁to",
+ "p"
+ ],
+ [
+ "▁",
+ "top"
+ ],
+ [
+ "F",
+ "F"
+ ],
+ [
+ "▁f",
+ "riend"
+ ],
+ [
+ "▁",
+ "friend"
+ ],
+ [
+ "▁b",
+ "eh"
+ ],
+ [
+ "▁be",
+ "h"
+ ],
+ [
+ "▁f",
+ "ield"
+ ],
+ [
+ "▁fi",
+ "eld"
+ ],
+ [
+ "▁",
+ "field"
+ ],
+ [
+ "▁again",
+ "st"
+ ],
+ [
+ "ra",
+ "ct"
+ ],
+ [
+ "rac",
+ "t"
+ ],
+ [
+ "r",
+ "act"
+ ],
+ [
+ "iz",
+ "ation"
+ ],
+ [
+ "us",
+ "er"
+ ],
+ [
+ "use",
+ "r"
+ ],
+ [
+ "u",
+ "ser"
+ ],
+ [
+ "ch",
+ "en"
+ ],
+ [
+ "che",
+ "n"
+ ],
+ [
+ "c",
+ "hen"
+ ],
+ [
+ "▁ke",
+ "ep"
+ ],
+ [
+ "▁",
+ "keep"
+ ],
+ [
+ "A",
+ "D"
+ ],
+ [
+ "it",
+ "or"
+ ],
+ [
+ "ito",
+ "r"
+ ],
+ [
+ "i",
+ "tor"
+ ],
+ [
+ "▁n",
+ "on"
+ ],
+ [
+ "▁no",
+ "n"
+ ],
+ [
+ "▁",
+ "non"
+ ],
+ [
+ "ir",
+ "d"
+ ],
+ [
+ "i",
+ "rd"
+ ],
+ [
+ "op",
+ "e"
+ ],
+ [
+ "o",
+ "pe"
+ ],
+ [
+ "▁re",
+ "st"
+ ],
+ [
+ "▁r",
+ "est"
+ ],
+ [
+ "▁res",
+ "t"
+ ],
+ [
+ "▁",
+ "rest"
+ ],
+ [
+ "▁d",
+ "ev"
+ ],
+ [
+ "▁de",
+ "v"
+ ],
+ [
+ "▁",
+ "dev"
+ ],
+ [
+ "▁_",
+ "_"
+ ],
+ [
+ "▁",
+ "__"
+ ],
+ [
+ "▁u",
+ "na"
+ ],
+ [
+ "▁un",
+ "a"
+ ],
+ [
+ "▁",
+ "una"
+ ],
+ [
+ "▁t",
+ "erm"
+ ],
+ [
+ "▁te",
+ "rm"
+ ],
+ [
+ "▁ter",
+ "m"
+ ],
+ [
+ "▁",
+ "term"
+ ],
+ [
+ "I",
+ "S"
+ ],
+ [
+ "▁p",
+ "op"
+ ],
+ [
+ "▁po",
+ "p"
+ ],
+ [
+ "▁",
+ "pop"
+ ],
+ [
+ "ri",
+ "st"
+ ],
+ [
+ "ris",
+ "t"
+ ],
+ [
+ "r",
+ "ist"
+ ],
+ [
+ "▁s",
+ "ince"
+ ],
+ [
+ "▁sin",
+ "ce"
+ ],
+ [
+ "▁sinc",
+ "e"
+ ],
+ [
+ "▁",
+ "since"
+ ],
+ [
+ "ve",
+ "s"
+ ],
+ [
+ "v",
+ "es"
+ ],
+ [
+ "▁h",
+ "ard"
+ ],
+ [
+ "▁ha",
+ "rd"
+ ],
+ [
+ "▁har",
+ "d"
+ ],
+ [
+ "▁",
+ "hard"
+ ],
+ [
+ "p",
+ "i"
+ ],
+ [
+ "ut",
+ "il"
+ ],
+ [
+ "uti",
+ "l"
+ ],
+ [
+ "u",
+ "til"
+ ],
+ [
+ "▁s",
+ "oc"
+ ],
+ [
+ "▁so",
+ "c"
+ ],
+ [
+ "▁",
+ "soc"
+ ],
+ [
+ "en",
+ "e"
+ ],
+ [
+ "e",
+ "ne"
+ ],
+ [
+ "Ex",
+ "ception"
+ ],
+ [
+ "▁l",
+ "ocal"
+ ],
+ [
+ "▁loc",
+ "al"
+ ],
+ [
+ "▁lo",
+ "cal"
+ ],
+ [
+ "▁",
+ "local"
+ ],
+ [
+ "▁d",
+ "irect"
+ ],
+ [
+ "▁di",
+ "rect"
+ ],
+ [
+ "▁dire",
+ "ct"
+ ],
+ [
+ "▁dir",
+ "ect"
+ ],
+ [
+ "▁",
+ "direct"
+ ],
+ [
+ "▁s",
+ "ure"
+ ],
+ [
+ "▁su",
+ "re"
+ ],
+ [
+ "▁sur",
+ "e"
+ ],
+ [
+ "▁",
+ "sure"
+ ],
+ [
+ "▁b",
+ "ro"
+ ],
+ [
+ "▁br",
+ "o"
+ ],
+ [
+ "▁",
+ "bro"
+ ],
+ [
+ "▁d",
+ "a"
+ ],
+ [
+ "▁",
+ "da"
+ ],
+ [
+ "▁<",
+ "/"
+ ],
+ [
+ "▁",
+ ""
+ ],
+ [
+ "▁cur",
+ "rent"
+ ],
+ [
+ "▁curr",
+ "ent"
+ ],
+ [
+ "▁",
+ "current"
+ ],
+ [
+ "'",
+ ":"
+ ],
+ [
+ "W",
+ "h"
+ ],
+ [
+ "▁in",
+ "formation"
+ ],
+ [
+ "▁inform",
+ "ation"
+ ],
+ [
+ "▁",
+ "information"
+ ],
+ [
+ "▁i",
+ "de"
+ ],
+ [
+ "▁id",
+ "e"
+ ],
+ [
+ "▁",
+ "ide"
+ ],
+ [
+ "▁bet",
+ "ter"
+ ],
+ [
+ "Te",
+ "xt"
+ ],
+ [
+ "Tex",
+ "t"
+ ],
+ [
+ "T",
+ "ext"
+ ],
+ [
+ "ra",
+ "ph"
+ ],
+ [
+ "rap",
+ "h"
+ ],
+ [
+ "r",
+ "aph"
+ ],
+ [
+ "▁st",
+ "and"
+ ],
+ [
+ "▁stan",
+ "d"
+ ],
+ [
+ "▁sta",
+ "nd"
+ ],
+ [
+ "▁",
+ "stand"
+ ],
+ [
+ "▁c",
+ "heck"
+ ],
+ [
+ "▁che",
+ "ck"
+ ],
+ [
+ "▁",
+ "check"
+ ],
+ [
+ "▁",
+ "к"
+ ],
+ [
+ "▁n",
+ "a"
+ ],
+ [
+ "▁",
+ "na"
+ ],
+ [
+ "(",
+ "("
+ ],
+ [
+ "ou",
+ "th"
+ ],
+ [
+ "out",
+ "h"
+ ],
+ [
+ "o",
+ "uth"
+ ],
+ [
+ "ap",
+ "s"
+ ],
+ [
+ "a",
+ "ps"
+ ],
+ [
+ "▁u",
+ "nt"
+ ],
+ [
+ "▁un",
+ "t"
+ ],
+ [
+ "▁",
+ "unt"
+ ],
+ [
+ "b",
+ "f"
+ ],
+ [
+ "▁con",
+ "f"
+ ],
+ [
+ "▁co",
+ "nf"
+ ],
+ [
+ "▁",
+ "conf"
+ ],
+ [
+ "▁s",
+ "pe"
+ ],
+ [
+ "▁sp",
+ "e"
+ ],
+ [
+ "▁",
+ "spe"
+ ],
+ [
+ "it",
+ "le"
+ ],
+ [
+ "i",
+ "tle"
+ ],
+ [
+ "▁C",
+ "ol"
+ ],
+ [
+ "▁Co",
+ "l"
+ ],
+ [
+ "▁",
+ "Col"
+ ],
+ [
+ "cl",
+ "ass"
+ ],
+ [
+ "c",
+ "lass"
+ ],
+ [
+ "ur",
+ "al"
+ ],
+ [
+ "ura",
+ "l"
+ ],
+ [
+ "u",
+ "ral"
+ ],
+ [
+ "ber",
+ "s"
+ ],
+ [
+ "be",
+ "rs"
+ ],
+ [
+ "b",
+ "ers"
+ ],
+ [
+ "M",
+ "A"
+ ],
+ [
+ "ess",
+ "ion"
+ ],
+ [
+ "▁",
+ "М"
+ ],
+ [
+ "In",
+ "fo"
+ ],
+ [
+ "Inf",
+ "o"
+ ],
+ [
+ "▁B",
+ "r"
+ ],
+ [
+ "▁",
+ "Br"
+ ],
+ [
+ "▁e",
+ "as"
+ ],
+ [
+ "erv",
+ "ice"
+ ],
+ [
+ "au",
+ "s"
+ ],
+ [
+ "a",
+ "us"
+ ],
+ [
+ "ar",
+ "i"
+ ],
+ [
+ "a",
+ "ri"
+ ],
+ [
+ "п",
+ "о"
+ ],
+ [
+ "▁c",
+ "oun"
+ ],
+ [
+ "▁co",
+ "un"
+ ],
+ [
+ "▁cou",
+ "n"
+ ],
+ [
+ "д",
+ "е"
+ ],
+ [
+ "()",
+ ")"
+ ],
+ [
+ "(",
+ "))"
+ ],
+ [
+ "li",
+ "ng"
+ ],
+ [
+ "lin",
+ "g"
+ ],
+ [
+ "l",
+ "ing"
+ ],
+ [
+ "E",
+ "D"
+ ],
+ [
+ "ab",
+ "ly"
+ ],
+ [
+ "abl",
+ "y"
+ ],
+ [
+ "▁p",
+ "at"
+ ],
+ [
+ "▁pa",
+ "t"
+ ],
+ [
+ "▁",
+ "pat"
+ ],
+ [
+ "or",
+ "g"
+ ],
+ [
+ "o",
+ "rg"
+ ],
+ [
+ "▁i",
+ "d"
+ ],
+ [
+ "▁",
+ "id"
+ ],
+ [
+ "▁",
+ "г"
+ ],
+ [
+ "▁t",
+ "ell"
+ ],
+ [
+ "▁te",
+ "ll"
+ ],
+ [
+ "▁tel",
+ "l"
+ ],
+ [
+ "le",
+ "x"
+ ],
+ [
+ "l",
+ "ex"
+ ],
+ [
+ "▁al",
+ "low"
+ ],
+ [
+ "▁all",
+ "ow"
+ ],
+ [
+ "▁",
+ "allow"
+ ],
+ [
+ "re",
+ "en"
+ ],
+ [
+ "ree",
+ "n"
+ ],
+ [
+ "r",
+ "een"
+ ],
+ [
+ "m",
+ "y"
+ ],
+ [
+ "▁cons",
+ "ider"
+ ],
+ [
+ "▁consid",
+ "er"
+ ],
+ [
+ "▁te",
+ "am"
+ ],
+ [
+ "▁tea",
+ "m"
+ ],
+ [
+ "▁",
+ "team"
+ ],
+ [
+ "le",
+ "ase"
+ ],
+ [
+ "ht",
+ "t"
+ ],
+ [
+ "h",
+ "tt"
+ ],
+ [
+ "▁P",
+ "r"
+ ],
+ [
+ "▁",
+ "Pr"
+ ],
+ [
+ "/*",
+ "*"
+ ],
+ [
+ "/",
+ "**"
+ ],
+ [
+ "▁s",
+ "ing"
+ ],
+ [
+ "▁si",
+ "ng"
+ ],
+ [
+ "▁sin",
+ "g"
+ ],
+ [
+ "▁",
+ "sing"
+ ],
+ [
+ "Re",
+ "qu"
+ ],
+ [
+ "Req",
+ "u"
+ ],
+ [
+ "R",
+ "equ"
+ ],
+ [
+ "R",
+ "e"
+ ],
+ [
+ "id",
+ "es"
+ ],
+ [
+ "ide",
+ "s"
+ ],
+ [
+ "i",
+ "des"
+ ],
+ [
+ "ch",
+ "es"
+ ],
+ [
+ "che",
+ "s"
+ ],
+ [
+ "c",
+ "hes"
+ ],
+ [
+ "▁ob",
+ "ject"
+ ],
+ [
+ "▁obj",
+ "ect"
+ ],
+ [
+ "▁",
+ "object"
+ ],
+ [
+ "ial",
+ "ly"
+ ],
+ [
+ "i",
+ "ally"
+ ],
+ [
+ "B",
+ "y"
+ ],
+ [
+ "с",
+ "я"
+ ],
+ [
+ "id",
+ "ed"
+ ],
+ [
+ "ide",
+ "d"
+ ],
+ [
+ "i",
+ "ded"
+ ],
+ [
+ "▁f",
+ "ree"
+ ],
+ [
+ "▁fr",
+ "ee"
+ ],
+ [
+ "▁fre",
+ "e"
+ ],
+ [
+ "▁",
+ "free"
+ ],
+ [
+ "▁pro",
+ "ble"
+ ],
+ [
+ "▁prob",
+ "le"
+ ],
+ [
+ "ci",
+ "te"
+ ],
+ [
+ "cit",
+ "e"
+ ],
+ [
+ "c",
+ "ite"
+ ],
+ [
+ "▁)",
+ ";"
+ ],
+ [
+ "▁",
+ ");"
+ ],
+ [
+ "iss",
+ "ion"
+ ],
+ [
+ "▁d",
+ "uring"
+ ],
+ [
+ "▁du",
+ "ring"
+ ],
+ [
+ "▁dur",
+ "ing"
+ ],
+ [
+ "▁-",
+ "-"
+ ],
+ [
+ "▁",
+ "--"
+ ],
+ [
+ "it",
+ "her"
+ ],
+ [
+ "ith",
+ "er"
+ ],
+ [
+ "i",
+ "ther"
+ ],
+ [
+ "л",
+ "я"
+ ],
+ [
+ "▁l",
+ "eg"
+ ],
+ [
+ "▁le",
+ "g"
+ ],
+ [
+ "▁",
+ "leg"
+ ],
+ [
+ "▁s",
+ "it"
+ ],
+ [
+ "▁si",
+ "t"
+ ],
+ [
+ "ic",
+ "ally"
+ ],
+ [
+ "ical",
+ "ly"
+ ],
+ [
+ "▁k",
+ "ey"
+ ],
+ [
+ "▁ke",
+ "y"
+ ],
+ [
+ "▁",
+ "key"
+ ],
+ [
+ "le",
+ "g"
+ ],
+ [
+ "l",
+ "eg"
+ ],
+ [
+ "tr",
+ "a"
+ ],
+ [
+ "t",
+ "ra"
+ ],
+ [
+ "▁m",
+ "om"
+ ],
+ [
+ "▁mo",
+ "m"
+ ],
+ [
+ "▁ex",
+ "pl"
+ ],
+ [
+ "▁exp",
+ "l"
+ ],
+ [
+ "▁",
+ "expl"
+ ],
+ [
+ "▁de",
+ "velop"
+ ],
+ [
+ "▁",
+ "develop"
+ ],
+ [
+ "▁e",
+ "vent"
+ ],
+ [
+ "▁ev",
+ "ent"
+ ],
+ [
+ "▁even",
+ "t"
+ ],
+ [
+ "▁",
+ "event"
+ ],
+ [
+ "▁N",
+ "ULL"
+ ],
+ [
+ "▁",
+ "NULL"
+ ],
+ [
+ "oh",
+ "n"
+ ],
+ [
+ "o",
+ "hn"
+ ],
+ [
+ "▁//",
+ "/"
+ ],
+ [
+ "▁/",
+ "//"
+ ],
+ [
+ "▁",
+ "///"
+ ],
+ [
+ "▁bus",
+ "iness"
+ ],
+ [
+ "▁",
+ "business"
+ ],
+ [
+ "ч",
+ "а"
+ ],
+ [
+ "▁pro",
+ "f"
+ ],
+ [
+ "▁pr",
+ "of"
+ ],
+ [
+ "▁",
+ "prof"
+ ],
+ [
+ "er",
+ "ror"
+ ],
+ [
+ "err",
+ "or"
+ ],
+ [
+ "▁p",
+ "or"
+ ],
+ [
+ "▁po",
+ "r"
+ ],
+ [
+ "▁",
+ "por"
+ ],
+ [
+ "▁com",
+ "mun"
+ ],
+ [
+ "▁comm",
+ "un"
+ ],
+ [
+ "▁",
+ "commun"
+ ],
+ [
+ "In",
+ "d"
+ ],
+ [
+ "I",
+ "nd"
+ ],
+ [
+ "iu",
+ "m"
+ ],
+ [
+ "i",
+ "um"
+ ],
+ [
+ "Te",
+ "st"
+ ],
+ [
+ "T",
+ "est"
+ ],
+ [
+ "▁A",
+ "d"
+ ],
+ [
+ "▁",
+ "Ad"
+ ],
+ [
+ "ou",
+ "ble"
+ ],
+ [
+ "▁s",
+ "on"
+ ],
+ [
+ "▁so",
+ "n"
+ ],
+ [
+ "▁",
+ "son"
+ ],
+ [
+ "ri",
+ "te"
+ ],
+ [
+ "rit",
+ "e"
+ ],
+ [
+ "r",
+ "ite"
+ ],
+ [
+ "re",
+ "ady"
+ ],
+ [
+ "read",
+ "y"
+ ],
+ [
+ "rea",
+ "dy"
+ ],
+ [
+ "▁{",
+ "\r"
+ ],
+ [
+ "▁",
+ "{\r"
+ ],
+ [
+ "▁t",
+ "hing"
+ ],
+ [
+ "▁th",
+ "ing"
+ ],
+ [
+ "▁thin",
+ "g"
+ ],
+ [
+ "▁",
+ "thing"
+ ],
+ [
+ "н",
+ "я"
+ ],
+ [
+ "▁P",
+ "h"
+ ],
+ [
+ "▁",
+ "Ph"
+ ],
+ [
+ "pe",
+ "d"
+ ],
+ [
+ "p",
+ "ed"
+ ],
+ [
+ "с",
+ "ь"
+ ],
+ [
+ "iv",
+ "ed"
+ ],
+ [
+ "ive",
+ "d"
+ ],
+ [
+ "i",
+ "ved"
+ ],
+ [
+ "Y",
+ "ou"
+ ],
+ [
+ "ar",
+ "l"
+ ],
+ [
+ "a",
+ "rl"
+ ],
+ [
+ "con",
+ "st"
+ ],
+ [
+ "cons",
+ "t"
+ ],
+ [
+ "..",
+ "/"
+ ],
+ [
+ ".",
+ "./"
+ ],
+ [
+ "S",
+ "e"
+ ],
+ [
+ "S",
+ "h"
+ ],
+ [
+ "▁p",
+ "ower"
+ ],
+ [
+ "▁po",
+ "wer"
+ ],
+ [
+ "▁pow",
+ "er"
+ ],
+ [
+ "▁",
+ "power"
+ ],
+ [
+ "rib",
+ "ute"
+ ],
+ [
+ "ribut",
+ "e"
+ ],
+ [
+ "ribu",
+ "te"
+ ],
+ [
+ "▁M",
+ "y"
+ ],
+ [
+ "▁",
+ "My"
+ ],
+ [
+ "▁t",
+ "alk"
+ ],
+ [
+ "▁tal",
+ "k"
+ ],
+ [
+ "▁",
+ "talk"
+ ],
+ [
+ "it",
+ "ch"
+ ],
+ [
+ "▁c",
+ "alled"
+ ],
+ [
+ "▁call",
+ "ed"
+ ],
+ [
+ "▁cal",
+ "led"
+ ],
+ [
+ "▁",
+ "called"
+ ],
+ [
+ "▁c",
+ "ame"
+ ],
+ [
+ "▁cam",
+ "e"
+ ],
+ [
+ "▁ca",
+ "me"
+ ],
+ [
+ "▁be",
+ "lie"
+ ],
+ [
+ "▁bel",
+ "ie"
+ ],
+ [
+ "U",
+ "R"
+ ],
+ [
+ "Ad",
+ "d"
+ ],
+ [
+ "A",
+ "dd"
+ ],
+ [
+ "▁R",
+ "es"
+ ],
+ [
+ "▁Re",
+ "s"
+ ],
+ [
+ "▁",
+ "Res"
+ ],
+ [
+ "as",
+ "ter"
+ ],
+ [
+ "ast",
+ "er"
+ ],
+ [
+ "aste",
+ "r"
+ ],
+ [
+ "a",
+ "ster"
+ ],
+ [
+ "el",
+ "la"
+ ],
+ [
+ "ell",
+ "a"
+ ],
+ [
+ "e",
+ "lla"
+ ],
+ [
+ "ob",
+ "al"
+ ],
+ [
+ "oba",
+ "l"
+ ],
+ [
+ "o",
+ "bal"
+ ],
+ [
+ "▁u",
+ "ntil"
+ ],
+ [
+ "▁un",
+ "til"
+ ],
+ [
+ "▁unt",
+ "il"
+ ],
+ [
+ "▁",
+ "until"
+ ],
+ [
+ "▁h",
+ "um"
+ ],
+ [
+ "▁",
+ "hum"
+ ],
+ [
+ "C",
+ "O"
+ ],
+ [
+ "at",
+ "ely"
+ ],
+ [
+ "ate",
+ "ly"
+ ],
+ [
+ "atel",
+ "y"
+ ],
+ [
+ "##",
+ "##"
+ ],
+ [
+ "###",
+ "#"
+ ],
+ [
+ "#",
+ "###"
+ ],
+ [
+ "pu",
+ "blic"
+ ],
+ [
+ "pub",
+ "lic"
+ ],
+ [
+ "p",
+ "ublic"
+ ],
+ [
+ "[",
+ "]"
+ ],
+ [
+ "▁r",
+ "oom"
+ ],
+ [
+ "▁ro",
+ "om"
+ ],
+ [
+ "▁",
+ "room"
+ ],
+ [
+ "le",
+ "n"
+ ],
+ [
+ "l",
+ "en"
+ ],
+ [
+ "▁f",
+ "amily"
+ ],
+ [
+ "▁fam",
+ "ily"
+ ],
+ [
+ "▁famil",
+ "y"
+ ],
+ [
+ "▁",
+ "family"
+ ],
+ [
+ "po",
+ "r"
+ ],
+ [
+ "p",
+ "or"
+ ],
+ [
+ "▁pro",
+ "gram"
+ ],
+ [
+ "▁pr",
+ "ogram"
+ ],
+ [
+ "▁",
+ "program"
+ ],
+ [
+ "▁h",
+ "ist"
+ ],
+ [
+ "▁his",
+ "t"
+ ],
+ [
+ "▁hi",
+ "st"
+ ],
+ [
+ "▁",
+ "hist"
+ ],
+ [
+ "▁m",
+ "us"
+ ],
+ [
+ "▁mu",
+ "s"
+ ],
+ [
+ "▁",
+ "mus"
+ ],
+ [
+ "ar",
+ "ge"
+ ],
+ [
+ "arg",
+ "e"
+ ],
+ [
+ "on",
+ "ey"
+ ],
+ [
+ "one",
+ "y"
+ ],
+ [
+ "o",
+ "ney"
+ ],
+ [
+ "I",
+ "m"
+ ],
+ [
+ "el",
+ "se"
+ ],
+ [
+ "els",
+ "e"
+ ],
+ [
+ "ail",
+ "s"
+ ],
+ [
+ "ai",
+ "ls"
+ ],
+ [
+ "a",
+ "ils"
+ ],
+ [
+ "a",
+ "f"
+ ],
+ [
+ "▁l",
+ "ove"
+ ],
+ [
+ "▁lo",
+ "ve"
+ ],
+ [
+ "▁lov",
+ "e"
+ ],
+ [
+ "▁",
+ "love"
+ ],
+ [
+ "ä",
+ "r"
+ ],
+ [
+ "as",
+ "es"
+ ],
+ [
+ "ase",
+ "s"
+ ],
+ [
+ "a",
+ "ses"
+ ],
+ [
+ "ph",
+ "a"
+ ],
+ [
+ "p",
+ "ha"
+ ],
+ [
+ "ou",
+ "rs"
+ ],
+ [
+ "our",
+ "s"
+ ],
+ [
+ "o",
+ "urs"
+ ],
+ [
+ "di",
+ "s"
+ ],
+ [
+ "d",
+ "is"
+ ],
+ [
+ "ma",
+ "p"
+ ],
+ [
+ "m",
+ "ap"
+ ],
+ [
+ "iv",
+ "er"
+ ],
+ [
+ "ive",
+ "r"
+ ],
+ [
+ "i",
+ "ver"
+ ],
+ [
+ "ö",
+ "r"
+ ],
+ [
+ "▁B",
+ "l"
+ ],
+ [
+ "▁",
+ "Bl"
+ ],
+ [
+ "at",
+ "eg"
+ ],
+ [
+ "ate",
+ "g"
+ ],
+ [
+ "st",
+ "ate"
+ ],
+ [
+ "stat",
+ "e"
+ ],
+ [
+ "sta",
+ "te"
+ ],
+ [
+ "St",
+ "ate"
+ ],
+ [
+ "Stat",
+ "e"
+ ],
+ [
+ "er",
+ "tain"
+ ],
+ [
+ "ert",
+ "ain"
+ ],
+ [
+ "erta",
+ "in"
+ ],
+ [
+ "▁e",
+ "ffect"
+ ],
+ [
+ "▁eff",
+ "ect"
+ ],
+ [
+ "▁",
+ "effect"
+ ],
+ [
+ "pr",
+ "int"
+ ],
+ [
+ "▁b",
+ "ig"
+ ],
+ [
+ "▁bi",
+ "g"
+ ],
+ [
+ "▁",
+ "big"
+ ],
+ [
+ "in",
+ "dex"
+ ],
+ [
+ "ind",
+ "ex"
+ ],
+ [
+ "inde",
+ "x"
+ ],
+ [
+ "▁p",
+ "ub"
+ ],
+ [
+ "▁pu",
+ "b"
+ ],
+ [
+ "▁",
+ "pub"
+ ],
+ [
+ "ve",
+ "rt"
+ ],
+ [
+ "ver",
+ "t"
+ ],
+ [
+ "v",
+ "ert"
+ ],
+ [
+ "er",
+ "o"
+ ],
+ [
+ "e",
+ "ro"
+ ],
+ [
+ "m",
+ "d"
+ ],
+ [
+ "▁m",
+ "ethod"
+ ],
+ [
+ "▁meth",
+ "od"
+ ],
+ [
+ "▁",
+ "method"
+ ],
+ [
+ "▁g",
+ "ame"
+ ],
+ [
+ "▁gam",
+ "e"
+ ],
+ [
+ "▁ga",
+ "me"
+ ],
+ [
+ "▁",
+ "game"
+ ],
+ [
+ "ri",
+ "es"
+ ],
+ [
+ "rie",
+ "s"
+ ],
+ [
+ "r",
+ "ies"
+ ],
+ [
+ "le",
+ "te"
+ ],
+ [
+ "let",
+ "e"
+ ],
+ [
+ "l",
+ "ete"
+ ],
+ [
+ "It",
+ "em"
+ ],
+ [
+ "I",
+ "tem"
+ ],
+ [
+ "IN",
+ "G"
+ ],
+ [
+ "I",
+ "NG"
+ ],
+ [
+ "re",
+ "sent"
+ ],
+ [
+ "res",
+ "ent"
+ ],
+ [
+ "al",
+ "ity"
+ ],
+ [
+ "ali",
+ "ty"
+ ],
+ [
+ "pt",
+ "y"
+ ],
+ [
+ "p",
+ "ty"
+ ],
+ [
+ "le",
+ "y"
+ ],
+ [
+ "l",
+ "ey"
+ ],
+ [
+ "oc",
+ "ument"
+ ],
+ [
+ "▁b",
+ "eg"
+ ],
+ [
+ "▁be",
+ "g"
+ ],
+ [
+ "T",
+ "R"
+ ],
+ [
+ "}",
+ "."
+ ],
+ [
+ "▁sch",
+ "ool"
+ ],
+ [
+ "▁",
+ "school"
+ ],
+ [
+ "he",
+ "s"
+ ],
+ [
+ "h",
+ "es"
+ ],
+ [
+ "д",
+ "о"
+ ],
+ [
+ "▁l",
+ "ot"
+ ],
+ [
+ "▁lo",
+ "t"
+ ],
+ [
+ "▁",
+ "lot"
+ ],
+ [
+ "▁t",
+ "ook"
+ ],
+ [
+ "▁to",
+ "ok"
+ ],
+ [
+ "▁too",
+ "k"
+ ],
+ [
+ "▁a",
+ "dv"
+ ],
+ [
+ "▁ad",
+ "v"
+ ],
+ [
+ "▁",
+ "adv"
+ ],
+ [
+ "▁c",
+ "ap"
+ ],
+ [
+ "▁ca",
+ "p"
+ ],
+ [
+ "▁",
+ "cap"
+ ],
+ [
+ "M",
+ "P"
+ ],
+ [
+ "un",
+ "k"
+ ],
+ [
+ "▁l",
+ "ight"
+ ],
+ [
+ "▁li",
+ "ght"
+ ],
+ [
+ "▁lig",
+ "ht"
+ ],
+ [
+ "▁",
+ "light"
+ ],
+ [
+ "▁l",
+ "ater"
+ ],
+ [
+ "▁la",
+ "ter"
+ ],
+ [
+ "▁late",
+ "r"
+ ],
+ [
+ "▁lat",
+ "er"
+ ],
+ [
+ ".",
+ ","
+ ],
+ [
+ "Ke",
+ "y"
+ ],
+ [
+ "K",
+ "ey"
+ ],
+ [
+ "it",
+ "ions"
+ ],
+ [
+ "ition",
+ "s"
+ ],
+ [
+ "iti",
+ "ons"
+ ],
+ [
+ "▁en",
+ "ough"
+ ],
+ [
+ "▁/",
+ "**"
+ ],
+ [
+ "▁/*",
+ "*"
+ ],
+ [
+ "▁",
+ "/**"
+ ],
+ [
+ "▁w",
+ "ent"
+ ],
+ [
+ "▁we",
+ "nt"
+ ],
+ [
+ "▁wen",
+ "t"
+ ],
+ [
+ "ã",
+ "o"
+ ],
+ [
+ "▁th",
+ "ough"
+ ],
+ [
+ "▁thou",
+ "gh"
+ ],
+ [
+ "▁",
+ "though"
+ ],
+ [
+ "▁g",
+ "roup"
+ ],
+ [
+ "▁gr",
+ "oup"
+ ],
+ [
+ "▁gro",
+ "up"
+ ],
+ [
+ "▁",
+ "group"
+ ],
+ [
+ "▁me",
+ "an"
+ ],
+ [
+ "▁",
+ "mean"
+ ],
+ [
+ "ск",
+ "и"
+ ],
+ [
+ "с",
+ "ки"
+ ],
+ [
+ "A",
+ "P"
+ ],
+ [
+ "▁n",
+ "um"
+ ],
+ [
+ "▁nu",
+ "m"
+ ],
+ [
+ "▁",
+ "num"
+ ],
+ [
+ "▁c",
+ "ond"
+ ],
+ [
+ "▁con",
+ "d"
+ ],
+ [
+ "▁co",
+ "nd"
+ ],
+ [
+ "▁",
+ "cond"
+ ],
+ [
+ "н",
+ "і"
+ ],
+ [
+ "▁g",
+ "iven"
+ ],
+ [
+ "▁giv",
+ "en"
+ ],
+ [
+ "▁give",
+ "n"
+ ],
+ [
+ "▁gi",
+ "ven"
+ ],
+ [
+ "▁w",
+ "hy"
+ ],
+ [
+ "▁wh",
+ "y"
+ ],
+ [
+ "▁",
+ "why"
+ ],
+ [
+ "▁re",
+ "ce"
+ ],
+ [
+ "▁rec",
+ "e"
+ ],
+ [
+ "▁s",
+ "ide"
+ ],
+ [
+ "▁si",
+ "de"
+ ],
+ [
+ "▁sid",
+ "e"
+ ],
+ [
+ "▁",
+ "side"
+ ],
+ [
+ "▁f",
+ "ar"
+ ],
+ [
+ "▁fa",
+ "r"
+ ],
+ [
+ "▁",
+ "far"
+ ],
+ [
+ "Con",
+ "text"
+ ],
+ [
+ "Cont",
+ "ext"
+ ],
+ [
+ "м",
+ "е"
+ ],
+ [
+ "▁l",
+ "og"
+ ],
+ [
+ "▁lo",
+ "g"
+ ],
+ [
+ "▁",
+ "log"
+ ],
+ [
+ "Vi",
+ "ew"
+ ],
+ [
+ "V",
+ "iew"
+ ],
+ [
+ "▁<",
+ "<"
+ ],
+ [
+ "▁",
+ "<<"
+ ],
+ [
+ "fi",
+ "l"
+ ],
+ [
+ "f",
+ "il"
+ ],
+ [
+ "ac",
+ "es"
+ ],
+ [
+ "ace",
+ "s"
+ ],
+ [
+ "a",
+ "ces"
+ ],
+ [
+ "en",
+ "cy"
+ ],
+ [
+ "enc",
+ "y"
+ ],
+ [
+ "oa",
+ "d"
+ ],
+ [
+ "o",
+ "ad"
+ ],
+ [
+ "er",
+ "ed"
+ ],
+ [
+ "ere",
+ "d"
+ ],
+ [
+ "e",
+ "red"
+ ],
+ [
+ "▁pro",
+ "duct"
+ ],
+ [
+ "▁produ",
+ "ct"
+ ],
+ [
+ "▁prod",
+ "uct"
+ ],
+ [
+ "▁",
+ "product"
+ ],
+ [
+ "E",
+ "T"
+ ],
+ [
+ "▁p",
+ "aram"
+ ],
+ [
+ "▁par",
+ "am"
+ ],
+ [
+ "▁para",
+ "m"
+ ],
+ [
+ "▁pa",
+ "ram"
+ ],
+ [
+ "▁",
+ "param"
+ ],
+ [
+ "▁p",
+ "rote"
+ ],
+ [
+ "▁pro",
+ "te"
+ ],
+ [
+ "▁pr",
+ "ote"
+ ],
+ [
+ "▁prot",
+ "e"
+ ],
+ [
+ "▁",
+ "prote"
+ ],
+ [
+ "te",
+ "s"
+ ],
+ [
+ "t",
+ "es"
+ ],
+ [
+ "Tim",
+ "e"
+ ],
+ [
+ "T",
+ "ime"
+ ],
+ [
+ "j",
+ "e"
+ ],
+ [
+ "ol",
+ "ution"
+ ],
+ [
+ "olut",
+ "ion"
+ ],
+ [
+ "▁р",
+ "а"
+ ],
+ [
+ "▁",
+ "ра"
+ ],
+ [
+ "▁mon",
+ "th"
+ ],
+ [
+ "▁mont",
+ "h"
+ ],
+ [
+ "▁",
+ "month"
+ ],
+ [
+ "fer",
+ "ence"
+ ],
+ [
+ "fe",
+ "rence"
+ ],
+ [
+ "▁a",
+ "ppe"
+ ],
+ [
+ "▁app",
+ "e"
+ ],
+ [
+ "▁ap",
+ "pe"
+ ],
+ [
+ "▁",
+ "appe"
+ ],
+ [
+ "▁f",
+ "ace"
+ ],
+ [
+ "▁fac",
+ "e"
+ ],
+ [
+ "▁fa",
+ "ce"
+ ],
+ [
+ "▁",
+ "face"
+ ],
+ [
+ "en",
+ "ed"
+ ],
+ [
+ "ene",
+ "d"
+ ],
+ [
+ "e",
+ "ned"
+ ],
+ [
+ "tr",
+ "act"
+ ],
+ [
+ "tra",
+ "ct"
+ ],
+ [
+ "t",
+ "ract"
+ ],
+ [
+ "▁l",
+ "ess"
+ ],
+ [
+ "▁le",
+ "ss"
+ ],
+ [
+ "▁les",
+ "s"
+ ],
+ [
+ "▁",
+ "less"
+ ],
+ [
+ "A",
+ "S"
+ ],
+ [
+ "é",
+ "e"
+ ],
+ [
+ "▁g",
+ "ive"
+ ],
+ [
+ "▁giv",
+ "e"
+ ],
+ [
+ "▁gi",
+ "ve"
+ ],
+ [
+ "▁k",
+ "ind"
+ ],
+ [
+ "▁ki",
+ "nd"
+ ],
+ [
+ "▁kin",
+ "d"
+ ],
+ [
+ "▁",
+ "kind"
+ ],
+ [
+ "▁c",
+ "ount"
+ ],
+ [
+ "▁co",
+ "unt"
+ ],
+ [
+ "▁coun",
+ "t"
+ ],
+ [
+ "▁cou",
+ "nt"
+ ],
+ [
+ "▁",
+ "count"
+ ],
+ [
+ "co",
+ "unt"
+ ],
+ [
+ "cou",
+ "nt"
+ ],
+ [
+ "c",
+ "ount"
+ ],
+ [
+ "▁s",
+ "top"
+ ],
+ [
+ "▁st",
+ "op"
+ ],
+ [
+ "▁sto",
+ "p"
+ ],
+ [
+ "▁",
+ "stop"
+ ],
+ [
+ "▁g",
+ "over"
+ ],
+ [
+ "▁go",
+ "ver"
+ ],
+ [
+ "k",
+ "a"
+ ],
+ [
+ "▁err",
+ "or"
+ ],
+ [
+ "▁er",
+ "ror"
+ ],
+ [
+ "▁",
+ "error"
+ ],
+ [
+ "en",
+ "ces"
+ ],
+ [
+ "ence",
+ "s"
+ ],
+ [
+ "enc",
+ "es"
+ ],
+ [
+ "▁m",
+ "il"
+ ],
+ [
+ "▁mi",
+ "l"
+ ],
+ [
+ "▁",
+ "mil"
+ ],
+ [
+ "al",
+ "f"
+ ],
+ [
+ "yn",
+ "c"
+ ],
+ [
+ "y",
+ "nc"
+ ],
+ [
+ "vi",
+ "ous"
+ ],
+ [
+ "v",
+ "ious"
+ ],
+ [
+ "h",
+ "o"
+ ],
+ [
+ "▁n",
+ "ight"
+ ],
+ [
+ "▁ni",
+ "ght"
+ ],
+ [
+ "▁",
+ "night"
+ ],
+ [
+ "er",
+ "a"
+ ],
+ [
+ "e",
+ "ra"
+ ],
+ [
+ "▁п",
+ "ро"
+ ],
+ [
+ "▁пр",
+ "о"
+ ],
+ [
+ "▁",
+ "про"
+ ],
+ [
+ "▁s",
+ "ol"
+ ],
+ [
+ "▁so",
+ "l"
+ ],
+ [
+ "▁",
+ "sol"
+ ],
+ [
+ "me",
+ "n"
+ ],
+ [
+ "m",
+ "en"
+ ],
+ [
+ "▁w",
+ "ater"
+ ],
+ [
+ "▁wat",
+ "er"
+ ],
+ [
+ "▁wa",
+ "ter"
+ ],
+ [
+ "▁",
+ "water"
+ ],
+ [
+ "er",
+ "ing"
+ ],
+ [
+ "eri",
+ "ng"
+ ],
+ [
+ "e",
+ "ring"
+ ],
+ [
+ "▁l",
+ "im"
+ ],
+ [
+ "▁li",
+ "m"
+ ],
+ [
+ "▁",
+ "lim"
+ ],
+ [
+ "Par",
+ "am"
+ ],
+ [
+ "P",
+ "aram"
+ ],
+ [
+ "▁h",
+ "ouse"
+ ],
+ [
+ "▁hous",
+ "e"
+ ],
+ [
+ "▁ho",
+ "use"
+ ],
+ [
+ "▁",
+ "house"
+ ],
+ [
+ "▁S",
+ "ystem"
+ ],
+ [
+ "▁",
+ "System"
+ ],
+ [
+ "▁p",
+ "ay"
+ ],
+ [
+ "▁pa",
+ "y"
+ ],
+ [
+ "▁",
+ "pay"
+ ],
+ [
+ "▁:",
+ "="
+ ],
+ [
+ "ur",
+ "o"
+ ],
+ [
+ "u",
+ "ro"
+ ],
+ [
+ "oc",
+ "i"
+ ],
+ [
+ "o",
+ "ci"
+ ],
+ [
+ "z",
+ "y"
+ ],
+ [
+ "▁al",
+ "ready"
+ ],
+ [
+ ",",
+ "\\"
+ ],
+ [
+ "le",
+ "ngth"
+ ],
+ [
+ "l",
+ "ength"
+ ],
+ [
+ "▁s",
+ "i"
+ ],
+ [
+ "▁",
+ "si"
+ ],
+ [
+ "▁inter",
+ "est"
+ ],
+ [
+ "▁inte",
+ "rest"
+ ],
+ [
+ "▁",
+ "interest"
+ ],
+ [
+ "af",
+ "f"
+ ],
+ [
+ "a",
+ "ff"
+ ],
+ [
+ "ct",
+ "ed"
+ ],
+ [
+ "c",
+ "ted"
+ ],
+ [
+ "ent",
+ "ion"
+ ],
+ [
+ "enti",
+ "on"
+ ],
+ [
+ "▁д",
+ "о"
+ ],
+ [
+ "▁",
+ "до"
+ ],
+ [
+ "um",
+ "e"
+ ],
+ [
+ "u",
+ "me"
+ ],
+ [
+ "▁app",
+ "ro"
+ ],
+ [
+ "▁ap",
+ "pro"
+ ],
+ [
+ "▁",
+ "appro"
+ ],
+ [
+ "br",
+ "e"
+ ],
+ [
+ "b",
+ "re"
+ ],
+ [
+ "I",
+ "G"
+ ],
+ [
+ "▁th",
+ "row"
+ ],
+ [
+ "▁thr",
+ "ow"
+ ],
+ [
+ "▁thro",
+ "w"
+ ],
+ [
+ "▁",
+ "throw"
+ ],
+ [
+ "math",
+ "cal"
+ ],
+ [
+ "ir",
+ "l"
+ ],
+ [
+ "i",
+ "rl"
+ ],
+ [
+ "▁p",
+ "rom"
+ ],
+ [
+ "▁pro",
+ "m"
+ ],
+ [
+ "▁pr",
+ "om"
+ ],
+ [
+ "▁",
+ "prom"
+ ],
+ [
+ "os",
+ "s"
+ ],
+ [
+ "o",
+ "ss"
+ ],
+ [
+ "▁re",
+ "quest"
+ ],
+ [
+ "▁requ",
+ "est"
+ ],
+ [
+ "▁req",
+ "uest"
+ ],
+ [
+ "▁",
+ "request"
+ ],
+ [
+ "equ",
+ "ation"
+ ],
+ [
+ "eq",
+ "uation"
+ ],
+ [
+ "ol",
+ "ogy"
+ ],
+ [
+ "olog",
+ "y"
+ ],
+ [
+ "olo",
+ "gy"
+ ],
+ [
+ "mi",
+ "t"
+ ],
+ [
+ "m",
+ "it"
+ ],
+ [
+ "▁p",
+ "ack"
+ ],
+ [
+ "▁pa",
+ "ck"
+ ],
+ [
+ "▁pac",
+ "k"
+ ],
+ [
+ "▁",
+ "pack"
+ ],
+ [
+ "in",
+ "o"
+ ],
+ [
+ "i",
+ "no"
+ ],
+ [
+ "ar",
+ "ray"
+ ],
+ [
+ "arr",
+ "ay"
+ ],
+ [
+ "z",
+ "a"
+ ],
+ [
+ "ti",
+ "l"
+ ],
+ [
+ "t",
+ "il"
+ ],
+ [
+ "U",
+ "N"
+ ],
+ [
+ "▁p",
+ "resent"
+ ],
+ [
+ "▁pre",
+ "sent"
+ ],
+ [
+ "▁pres",
+ "ent"
+ ],
+ [
+ "▁",
+ "present"
+ ],
+ [
+ "▁or",
+ "gan"
+ ],
+ [
+ "▁org",
+ "an"
+ ],
+ [
+ "▁",
+ "organ"
+ ],
+ [
+ "Fil",
+ "e"
+ ],
+ [
+ "Fi",
+ "le"
+ ],
+ [
+ "F",
+ "ile"
+ ],
+ [
+ "▁o",
+ "rig"
+ ],
+ [
+ "▁or",
+ "ig"
+ ],
+ [
+ "▁",
+ "orig"
+ ],
+ [
+ "▁f",
+ "ull"
+ ],
+ [
+ "▁ful",
+ "l"
+ ],
+ [
+ "▁fu",
+ "ll"
+ ],
+ [
+ "▁",
+ "full"
+ ],
+ [
+ "is",
+ "tr"
+ ],
+ [
+ "ist",
+ "r"
+ ],
+ [
+ "i",
+ "str"
+ ],
+ [
+ "▁f",
+ "lo"
+ ],
+ [
+ "▁fl",
+ "o"
+ ],
+ [
+ "h",
+ "r"
+ ],
+ [
+ "▁as",
+ "sert"
+ ],
+ [
+ "▁ass",
+ "ert"
+ ],
+ [
+ "▁",
+ "assert"
+ ],
+ [
+ "ar",
+ "ds"
+ ],
+ [
+ "ard",
+ "s"
+ ],
+ [
+ "ur",
+ "l"
+ ],
+ [
+ "u",
+ "rl"
+ ],
+ [
+ "en",
+ "n"
+ ],
+ [
+ "e",
+ "nn"
+ ],
+ [
+ "s",
+ "l"
+ ],
+ [
+ "▁",
+ "А"
+ ],
+ [
+ "▁c",
+ "ho"
+ ],
+ [
+ "▁ch",
+ "o"
+ ],
+ [
+ "▁",
+ "cho"
+ ],
+ [
+ "▁l",
+ "evel"
+ ],
+ [
+ "▁le",
+ "vel"
+ ],
+ [
+ "▁lev",
+ "el"
+ ],
+ [
+ "▁",
+ "level"
+ ],
+ [
+ "O",
+ "T"
+ ],
+ [
+ "wo",
+ "rd"
+ ],
+ [
+ "wor",
+ "d"
+ ],
+ [
+ "w",
+ "ord"
+ ],
+ [
+ "▁b",
+ "ody"
+ ],
+ [
+ "▁bo",
+ "dy"
+ ],
+ [
+ "▁bod",
+ "y"
+ ],
+ [
+ "▁",
+ "body"
+ ],
+ [
+ "▁u",
+ "ser"
+ ],
+ [
+ "▁us",
+ "er"
+ ],
+ [
+ "▁use",
+ "r"
+ ],
+ [
+ "▁",
+ "user"
+ ],
+ [
+ "í",
+ "a"
+ ],
+ [
+ "Q",
+ "u"
+ ],
+ [
+ "▁m",
+ "ain"
+ ],
+ [
+ "▁ma",
+ "in"
+ ],
+ [
+ "▁mai",
+ "n"
+ ],
+ [
+ "▁",
+ "main"
+ ],
+ [
+ "A",
+ "B"
+ ],
+ [
+ "pl",
+ "oy"
+ ],
+ [
+ "plo",
+ "y"
+ ],
+ [
+ "Ev",
+ "ent"
+ ],
+ [
+ "Even",
+ "t"
+ ],
+ [
+ "E",
+ "vent"
+ ],
+ [
+ "▁s",
+ "uper"
+ ],
+ [
+ "▁su",
+ "per"
+ ],
+ [
+ "▁sup",
+ "er"
+ ],
+ [
+ "▁",
+ "super"
+ ],
+ [
+ "ok",
+ "en"
+ ],
+ [
+ "oke",
+ "n"
+ ],
+ [
+ "o",
+ "ken"
+ ],
+ [
+ "▁",
+ "Н"
+ ],
+ [
+ "A",
+ "s"
+ ],
+ [
+ "th",
+ "ers"
+ ],
+ [
+ "ther",
+ "s"
+ ],
+ [
+ "the",
+ "rs"
+ ],
+ [
+ "м",
+ "о"
+ ],
+ [
+ "к",
+ "у"
+ ],
+ [
+ "▁d",
+ "ays"
+ ],
+ [
+ "▁day",
+ "s"
+ ],
+ [
+ "▁da",
+ "ys"
+ ],
+ [
+ "▁",
+ "days"
+ ],
+ [
+ "▁d",
+ "one"
+ ],
+ [
+ "▁do",
+ "ne"
+ ],
+ [
+ "▁don",
+ "e"
+ ],
+ [
+ "▁",
+ "done"
+ ],
+ [
+ "▁v",
+ "iew"
+ ],
+ [
+ "▁vi",
+ "ew"
+ ],
+ [
+ "▁vie",
+ "w"
+ ],
+ [
+ "▁",
+ "view"
+ ],
+ [
+ "si",
+ "de"
+ ],
+ [
+ "sid",
+ "e"
+ ],
+ [
+ "s",
+ "ide"
+ ],
+ [
+ "с",
+ "и"
+ ],
+ [
+ "')",
+ ";"
+ ],
+ [
+ "'",
+ ");"
+ ],
+ [
+ "▁v",
+ "ol"
+ ],
+ [
+ "▁vo",
+ "l"
+ ],
+ [
+ "▁",
+ "vol"
+ ],
+ [
+ "▁t",
+ "ot"
+ ],
+ [
+ "▁to",
+ "t"
+ ],
+ [
+ "▁",
+ "tot"
+ ],
+ [
+ "ca",
+ "se"
+ ],
+ [
+ "cas",
+ "e"
+ ],
+ [
+ "c",
+ "ase"
+ ],
+ [
+ "▁a",
+ "ff"
+ ],
+ [
+ "▁af",
+ "f"
+ ],
+ [
+ "▁",
+ "aff"
+ ],
+ [
+ "Requ",
+ "est"
+ ],
+ [
+ "Re",
+ "quest"
+ ],
+ [
+ "Req",
+ "uest"
+ ],
+ [
+ "▁M",
+ "an"
+ ],
+ [
+ "▁Ma",
+ "n"
+ ],
+ [
+ "▁",
+ "Man"
+ ],
+ [
+ "\\",
+ "\\"
+ ],
+ [
+ "▁J",
+ "ohn"
+ ],
+ [
+ "▁Jo",
+ "hn"
+ ],
+ [
+ "▁Joh",
+ "n"
+ ],
+ [
+ "▁",
+ "John"
+ ],
+ [
+ "▁",
+ "Б"
+ ],
+ [
+ "or",
+ "th"
+ ],
+ [
+ "ort",
+ "h"
+ ],
+ [
+ "▁j",
+ "e"
+ ],
+ [
+ "▁",
+ "je"
+ ],
+ [
+ "▁u",
+ "ne"
+ ],
+ [
+ "▁un",
+ "e"
+ ],
+ [
+ "▁",
+ "une"
+ ],
+ [
+ "l",
+ "a"
+ ],
+ [
+ "[",
+ "\""
+ ],
+ [
+ "fi",
+ "eld"
+ ],
+ [
+ "f",
+ "ield"
+ ],
+ [
+ "▁U",
+ "S"
+ ],
+ [
+ "▁",
+ "US"
+ ],
+ [
+ "ic",
+ "o"
+ ],
+ [
+ "i",
+ "co"
+ ],
+ [
+ "▁per",
+ "form"
+ ],
+ [
+ "▁perf",
+ "orm"
+ ],
+ [
+ "▁",
+ "perform"
+ ],
+ [
+ "ail",
+ "able"
+ ],
+ [
+ "Con",
+ "fig"
+ ],
+ [
+ "Conf",
+ "ig"
+ ],
+ [
+ "O",
+ "r"
+ ],
+ [
+ "▁mod",
+ "el"
+ ],
+ [
+ "▁mo",
+ "del"
+ ],
+ [
+ "▁mode",
+ "l"
+ ],
+ [
+ "▁",
+ "model"
+ ],
+ [
+ "al",
+ "es"
+ ],
+ [
+ "ale",
+ "s"
+ ],
+ [
+ "a",
+ "les"
+ ],
+ [
+ "▁c",
+ "reate"
+ ],
+ [
+ "▁cre",
+ "ate"
+ ],
+ [
+ "▁creat",
+ "e"
+ ],
+ [
+ "▁",
+ "create"
+ ],
+ [
+ "▁a",
+ "nn"
+ ],
+ [
+ "▁an",
+ "n"
+ ],
+ [
+ "▁",
+ "ann"
+ ],
+ [
+ "an",
+ "ces"
+ ],
+ [
+ "ance",
+ "s"
+ ],
+ [
+ "anc",
+ "es"
+ ],
+ [
+ "I",
+ "L"
+ ],
+ [
+ "in",
+ "ation"
+ ],
+ [
+ "▁I",
+ "m"
+ ],
+ [
+ "▁",
+ "Im"
+ ],
+ [
+ "an",
+ "te"
+ ],
+ [
+ "ant",
+ "e"
+ ],
+ [
+ "a",
+ "nte"
+ ],
+ [
+ "an",
+ "a"
+ ],
+ [
+ "a",
+ "na"
+ ],
+ [
+ "а",
+ "н"
+ ],
+ [
+ "▁t",
+ "old"
+ ],
+ [
+ "▁to",
+ "ld"
+ ],
+ [
+ "con",
+ "fig"
+ ],
+ [
+ "conf",
+ "ig"
+ ],
+ [
+ "\"",
+ "]"
+ ],
+ [
+ "me",
+ "t"
+ ],
+ [
+ "m",
+ "et"
+ ],
+ [
+ "l",
+ "t"
+ ],
+ [
+ "▁t",
+ "ext"
+ ],
+ [
+ "▁te",
+ "xt"
+ ],
+ [
+ "▁tex",
+ "t"
+ ],
+ [
+ "▁",
+ "text"
+ ],
+ [
+ "▁M",
+ "ay"
+ ],
+ [
+ "▁Ma",
+ "y"
+ ],
+ [
+ "▁",
+ "May"
+ ],
+ [
+ "▁o",
+ "rg"
+ ],
+ [
+ "▁or",
+ "g"
+ ],
+ [
+ "▁",
+ "org"
+ ],
+ [
+ "▁p",
+ "ort"
+ ],
+ [
+ "▁po",
+ "rt"
+ ],
+ [
+ "▁por",
+ "t"
+ ],
+ [
+ "▁",
+ "port"
+ ],
+ [
+ "P",
+ "l"
+ ],
+ [
+ "ent",
+ "ly"
+ ],
+ [
+ "▁d",
+ "oor"
+ ],
+ [
+ "▁do",
+ "or"
+ ],
+ [
+ "▁",
+ "door"
+ ],
+ [
+ "U",
+ "S"
+ ],
+ [
+ "▁(",
+ "*"
+ ],
+ [
+ "▁",
+ "(*"
+ ],
+ [
+ "k",
+ "t"
+ ],
+ [
+ "E",
+ "S"
+ ],
+ [
+ "ent",
+ "ial"
+ ],
+ [
+ "enti",
+ "al"
+ ],
+ [
+ "▁is",
+ "s"
+ ],
+ [
+ "▁i",
+ "ss"
+ ],
+ [
+ "▁",
+ "iss"
+ ],
+ [
+ "▁in",
+ "c"
+ ],
+ [
+ "▁i",
+ "nc"
+ ],
+ [
+ "▁",
+ "inc"
+ ],
+ [
+ "No",
+ "de"
+ ],
+ [
+ "N",
+ "ode"
+ ],
+ [
+ "iv",
+ "ely"
+ ],
+ [
+ "ive",
+ "ly"
+ ],
+ [
+ "ivel",
+ "y"
+ ],
+ [
+ "▁as",
+ "ked"
+ ],
+ [
+ "▁ask",
+ "ed"
+ ],
+ [
+ "ir",
+ "t"
+ ],
+ [
+ "i",
+ "rt"
+ ],
+ [
+ "▁T",
+ "e"
+ ],
+ [
+ "▁",
+ "Te"
+ ],
+ [
+ "▁re",
+ "port"
+ ],
+ [
+ "▁rep",
+ "ort"
+ ],
+ [
+ "▁repo",
+ "rt"
+ ],
+ [
+ "▁",
+ "report"
+ ],
+ [
+ "▁c",
+ "hang"
+ ],
+ [
+ "▁ch",
+ "ang"
+ ],
+ [
+ "▁cha",
+ "ng"
+ ],
+ [
+ "ст",
+ "и"
+ ],
+ [
+ "с",
+ "ти"
+ ],
+ [
+ "▁a",
+ "long"
+ ],
+ [
+ "▁al",
+ "ong"
+ ],
+ [
+ "▁ch",
+ "ange"
+ ],
+ [
+ "▁chang",
+ "e"
+ ],
+ [
+ "▁",
+ "change"
+ ],
+ [
+ "Si",
+ "ze"
+ ],
+ [
+ "S",
+ "ize"
+ ],
+ [
+ "▁e",
+ "ver"
+ ],
+ [
+ "▁ev",
+ "er"
+ ],
+ [
+ "▁",
+ "ever"
+ ],
+ [
+ "▁o",
+ "cc"
+ ],
+ [
+ "▁oc",
+ "c"
+ ],
+ [
+ "▁",
+ "occ"
+ ],
+ [
+ "ur",
+ "y"
+ ],
+ [
+ "u",
+ "ry"
+ ],
+ [
+ "▁m",
+ "ind"
+ ],
+ [
+ "▁min",
+ "d"
+ ],
+ [
+ "▁mi",
+ "nd"
+ ],
+ [
+ "▁",
+ "mind"
+ ],
+ [
+ "or",
+ "der"
+ ],
+ [
+ "ord",
+ "er"
+ ],
+ [
+ "po",
+ "int"
+ ],
+ [
+ "p",
+ "oint"
+ ],
+ [
+ "ст",
+ "о"
+ ],
+ [
+ "с",
+ "то"
+ ],
+ [
+ "▁w",
+ "he"
+ ],
+ [
+ "▁wh",
+ "e"
+ ],
+ [
+ "▁",
+ "whe"
+ ],
+ [
+ "▁import",
+ "ant"
+ ],
+ [
+ "▁",
+ "important"
+ ],
+ [
+ "de",
+ "s"
+ ],
+ [
+ "d",
+ "es"
+ ],
+ [
+ "▁N",
+ "ot"
+ ],
+ [
+ "▁No",
+ "t"
+ ],
+ [
+ "▁",
+ "Not"
+ ],
+ [
+ "▁w",
+ "rit"
+ ],
+ [
+ "▁wr",
+ "it"
+ ],
+ [
+ "▁",
+ "writ"
+ ],
+ [
+ "▁e",
+ "yes"
+ ],
+ [
+ "▁ey",
+ "es"
+ ],
+ [
+ "▁eye",
+ "s"
+ ],
+ [
+ "▁d",
+ "esc"
+ ],
+ [
+ "▁de",
+ "sc"
+ ],
+ [
+ "▁des",
+ "c"
+ ],
+ [
+ "▁",
+ "desc"
+ ],
+ [
+ "mo",
+ "st"
+ ],
+ [
+ "mos",
+ "t"
+ ],
+ [
+ "m",
+ "ost"
+ ],
+ [
+ "k",
+ "s"
+ ],
+ [
+ "▁b",
+ "it"
+ ],
+ [
+ "▁bi",
+ "t"
+ ],
+ [
+ "▁",
+ "bit"
+ ],
+ [
+ "▁su",
+ "ccess"
+ ],
+ [
+ "▁suc",
+ "cess"
+ ],
+ [
+ "▁succ",
+ "ess"
+ ],
+ [
+ "▁",
+ "success"
+ ],
+ [
+ "т",
+ "ь"
+ ],
+ [
+ "б",
+ "о"
+ ],
+ [
+ "co",
+ "re"
+ ],
+ [
+ "cor",
+ "e"
+ ],
+ [
+ "c",
+ "ore"
+ ],
+ [
+ "}",
+ "("
+ ],
+ [
+ "▁ar",
+ "ray"
+ ],
+ [
+ "▁arr",
+ "ay"
+ ],
+ [
+ "▁",
+ "array"
+ ],
+ [
+ "li",
+ "n"
+ ],
+ [
+ "l",
+ "in"
+ ],
+ [
+ "li",
+ "sh"
+ ],
+ [
+ "l",
+ "ish"
+ ],
+ [
+ "▁follow",
+ "ing"
+ ],
+ [
+ "Fi",
+ "eld"
+ ],
+ [
+ "F",
+ "ield"
+ ],
+ [
+ "id",
+ "s"
+ ],
+ [
+ "i",
+ "ds"
+ ],
+ [
+ "hi",
+ "ng"
+ ],
+ [
+ "hin",
+ "g"
+ ],
+ [
+ "h",
+ "ing"
+ ],
+ [
+ "▁c",
+ "al"
+ ],
+ [
+ "▁ca",
+ "l"
+ ],
+ [
+ "▁",
+ "cal"
+ ],
+ [
+ "I",
+ "s"
+ ],
+ [
+ "ar",
+ "ing"
+ ],
+ [
+ "ari",
+ "ng"
+ ],
+ [
+ "arin",
+ "g"
+ ],
+ [
+ "a",
+ "ring"
+ ],
+ [
+ "le",
+ "v"
+ ],
+ [
+ "l",
+ "ev"
+ ],
+ [
+ "al",
+ "t"
+ ],
+ [
+ "a",
+ "lt"
+ ],
+ [
+ "C",
+ "H"
+ ],
+ [
+ "▁d",
+ "é"
+ ],
+ [
+ "al",
+ "pha"
+ ],
+ [
+ "alph",
+ "a"
+ ],
+ [
+ "▁f",
+ "our"
+ ],
+ [
+ "▁fo",
+ "ur"
+ ],
+ [
+ "▁fou",
+ "r"
+ ],
+ [
+ "▁",
+ "four"
+ ],
+ [
+ "▁l",
+ "aw"
+ ],
+ [
+ "▁la",
+ "w"
+ ],
+ [
+ "▁",
+ "law"
+ ],
+ [
+ "▁с",
+ "е"
+ ],
+ [
+ "▁",
+ "се"
+ ],
+ [
+ "ir",
+ "on"
+ ],
+ [
+ "iro",
+ "n"
+ ],
+ [
+ "i",
+ "ron"
+ ],
+ [
+ "▁d",
+ "isc"
+ ],
+ [
+ "▁dis",
+ "c"
+ ],
+ [
+ "▁di",
+ "sc"
+ ],
+ [
+ "с",
+ "е"
+ ],
+ [
+ "ke",
+ "n"
+ ],
+ [
+ "k",
+ "en"
+ ],
+ [
+ "no",
+ "de"
+ ],
+ [
+ "nod",
+ "e"
+ ],
+ [
+ "n",
+ "ode"
+ ],
+ [
+ "▁P",
+ "ar"
+ ],
+ [
+ "▁Pa",
+ "r"
+ ],
+ [
+ "▁",
+ "Par"
+ ],
+ [
+ "▁E",
+ "ng"
+ ],
+ [
+ "▁En",
+ "g"
+ ],
+ [
+ "▁",
+ "Eng"
+ ],
+ [
+ "▁m",
+ "ove"
+ ],
+ [
+ "▁mov",
+ "e"
+ ],
+ [
+ "▁mo",
+ "ve"
+ ],
+ [
+ "▁",
+ "move"
+ ],
+ [
+ "▁L",
+ "icense"
+ ],
+ [
+ "▁Lic",
+ "ense"
+ ],
+ [
+ "▁",
+ "License"
+ ],
+ [
+ "cu",
+ "l"
+ ],
+ [
+ "c",
+ "ul"
+ ],
+ [
+ "ion",
+ "e"
+ ],
+ [
+ "io",
+ "ne"
+ ],
+ [
+ "i",
+ "one"
+ ],
+ [
+ ")",
+ "$"
+ ],
+ [
+ "▁t",
+ "w"
+ ],
+ [
+ "▁",
+ "tw"
+ ],
+ [
+ "W",
+ "e"
+ ],
+ [
+ "se",
+ "l"
+ ],
+ [
+ "s",
+ "el"
+ ],
+ [
+ "▁W",
+ "ith"
+ ],
+ [
+ "▁Wi",
+ "th"
+ ],
+ [
+ "▁",
+ "With"
+ ],
+ [
+ "▁on",
+ "ce"
+ ],
+ [
+ "▁",
+ "once"
+ ],
+ [
+ "Serv",
+ "ice"
+ ],
+ [
+ "S",
+ "ervice"
+ ],
+ [
+ "bo",
+ "l"
+ ],
+ [
+ "b",
+ "ol"
+ ],
+ [
+ "ur",
+ "ed"
+ ],
+ [
+ "ure",
+ "d"
+ ],
+ [
+ "u",
+ "red"
+ ],
+ [
+ "id",
+ "a"
+ ],
+ [
+ "i",
+ "da"
+ ],
+ [
+ "▁Q",
+ "u"
+ ],
+ [
+ "▁",
+ "Qu"
+ ],
+ [
+ "▁g",
+ "row"
+ ],
+ [
+ "▁gr",
+ "ow"
+ ],
+ [
+ "▁gro",
+ "w"
+ ],
+ [
+ "▁",
+ "grow"
+ ],
+ [
+ "▁c",
+ "onne"
+ ],
+ [
+ "▁con",
+ "ne"
+ ],
+ [
+ "▁conn",
+ "e"
+ ],
+ [
+ "▁",
+ "conne"
+ ],
+ [
+ "E",
+ "X"
+ ],
+ [
+ "▁h",
+ "tt"
+ ],
+ [
+ "▁",
+ "htt"
+ ],
+ [
+ "▁}",
+ ";"
+ ],
+ [
+ "▁",
+ "};"
+ ],
+ [
+ "▁w",
+ "alk"
+ ],
+ [
+ "▁wal",
+ "k"
+ ],
+ [
+ "▁",
+ "walk"
+ ],
+ [
+ "▁in",
+ "it"
+ ],
+ [
+ "▁i",
+ "nit"
+ ],
+ [
+ "▁",
+ "init"
+ ],
+ [
+ "na",
+ "l"
+ ],
+ [
+ "n",
+ "al"
+ ],
+ [
+ "en",
+ "der"
+ ],
+ [
+ "end",
+ "er"
+ ],
+ [
+ "ende",
+ "r"
+ ],
+ [
+ "e",
+ "nder"
+ ],
+ [
+ "cri",
+ "ption"
+ ],
+ [
+ "cript",
+ "ion"
+ ],
+ [
+ "mb",
+ "er"
+ ],
+ [
+ "m",
+ "ber"
+ ],
+ [
+ "le",
+ "cted"
+ ],
+ [
+ "lect",
+ "ed"
+ ],
+ [
+ "p",
+ "o"
+ ],
+ [
+ "▁n",
+ "il"
+ ],
+ [
+ "▁ni",
+ "l"
+ ],
+ [
+ "▁",
+ "nil"
+ ],
+ [
+ "▁p",
+ "rob"
+ ],
+ [
+ "▁pro",
+ "b"
+ ],
+ [
+ "▁pr",
+ "ob"
+ ],
+ [
+ "▁",
+ "prob"
+ ],
+ [
+ "ч",
+ "и"
+ ],
+ [
+ "▁S",
+ "te"
+ ],
+ [
+ "▁St",
+ "e"
+ ],
+ [
+ "▁",
+ "Ste"
+ ],
+ [
+ "is",
+ "on"
+ ],
+ [
+ "iso",
+ "n"
+ ],
+ [
+ "i",
+ "son"
+ ],
+ [
+ "an",
+ "ds"
+ ],
+ [
+ "and",
+ "s"
+ ],
+ [
+ "os",
+ "ed"
+ ],
+ [
+ "ose",
+ "d"
+ ],
+ [
+ "o",
+ "sed"
+ ],
+ [
+ "ж",
+ "е"
+ ],
+ [
+ "▁H",
+ "is"
+ ],
+ [
+ "▁Hi",
+ "s"
+ ],
+ [
+ "▁",
+ "His"
+ ],
+ [
+ "ü",
+ "r"
+ ],
+ [
+ "Ma",
+ "n"
+ ],
+ [
+ "M",
+ "an"
+ ],
+ [
+ "El",
+ "ement"
+ ],
+ [
+ "Elem",
+ "ent"
+ ],
+ [
+ "E",
+ "lement"
+ ],
+ [
+ "▁a",
+ "ble"
+ ],
+ [
+ "▁ab",
+ "le"
+ ],
+ [
+ "▁",
+ "able"
+ ],
+ [
+ "In",
+ "dex"
+ ],
+ [
+ "Ind",
+ "ex"
+ ],
+ [
+ "se",
+ "arch"
+ ],
+ [
+ "s",
+ "earch"
+ ],
+ [
+ "▁m",
+ "ag"
+ ],
+ [
+ "▁ma",
+ "g"
+ ],
+ [
+ "▁",
+ "mag"
+ ],
+ [
+ "а",
+ "р"
+ ],
+ [
+ "▁c",
+ "ourse"
+ ],
+ [
+ "▁cour",
+ "se"
+ ],
+ [
+ "▁cours",
+ "e"
+ ],
+ [
+ "▁",
+ "course"
+ ],
+ [
+ "▁C",
+ "ar"
+ ],
+ [
+ "▁Ca",
+ "r"
+ ],
+ [
+ "▁",
+ "Car"
+ ],
+ [
+ "▁e",
+ "xp"
+ ],
+ [
+ "▁ex",
+ "p"
+ ],
+ [
+ "▁",
+ "exp"
+ ],
+ [
+ "ap",
+ "h"
+ ],
+ [
+ "a",
+ "ph"
+ ],
+ [
+ "▁m",
+ "it"
+ ],
+ [
+ "▁mi",
+ "t"
+ ],
+ [
+ "▁",
+ "mit"
+ ],
+ [
+ "▁does",
+ "n"
+ ],
+ [
+ "▁def",
+ "ault"
+ ],
+ [
+ "▁",
+ "default"
+ ],
+ [
+ "/",
+ ">"
+ ],
+ [
+ "ai",
+ "m"
+ ],
+ [
+ "a",
+ "im"
+ ],
+ [
+ "▁s",
+ "ervice"
+ ],
+ [
+ "▁serv",
+ "ice"
+ ],
+ [
+ "▁",
+ "service"
+ ],
+ [
+ "▁with",
+ "in"
+ ],
+ [
+ "an",
+ "gu"
+ ],
+ [
+ "ang",
+ "u"
+ ],
+ [
+ "▁",
+ "Д"
+ ],
+ [
+ "uf",
+ "fer"
+ ],
+ [
+ "uff",
+ "er"
+ ],
+ [
+ "A",
+ "G"
+ ],
+ [
+ "▁D",
+ "o"
+ ],
+ [
+ "▁",
+ "Do"
+ ],
+ [
+ "▁in",
+ "cre"
+ ],
+ [
+ "▁inc",
+ "re"
+ ],
+ [
+ "▁under",
+ "stand"
+ ],
+ [
+ "}",
+ "^"
+ ],
+ [
+ "▁look",
+ "ed"
+ ],
+ [
+ "▁lo",
+ "oked"
+ ],
+ [
+ "ge",
+ "n"
+ ],
+ [
+ "g",
+ "en"
+ ],
+ [
+ "ail",
+ "ed"
+ ],
+ [
+ "ai",
+ "led"
+ ],
+ [
+ "a",
+ "iled"
+ ],
+ [
+ "▁",
+ "е"
+ ],
+ [
+ "ay",
+ "er"
+ ],
+ [
+ "aye",
+ "r"
+ ],
+ [
+ "a",
+ "yer"
+ ],
+ [
+ "▁O",
+ "ne"
+ ],
+ [
+ "▁On",
+ "e"
+ ],
+ [
+ "▁",
+ "One"
+ ],
+ [
+ "▁b",
+ "as"
+ ],
+ [
+ "▁ba",
+ "s"
+ ],
+ [
+ "▁",
+ "bas"
+ ],
+ [
+ "▁j",
+ "ob"
+ ],
+ [
+ "▁jo",
+ "b"
+ ],
+ [
+ "▁",
+ "job"
+ ],
+ [
+ "m",
+ "u"
+ ],
+ [
+ "bu",
+ "t"
+ ],
+ [
+ "b",
+ "ut"
+ ],
+ [
+ "el",
+ "ta"
+ ],
+ [
+ "elt",
+ "a"
+ ],
+ [
+ "▁Ch",
+ "rist"
+ ],
+ [
+ "▁Chris",
+ "t"
+ ],
+ [
+ "▁",
+ "Christ"
+ ],
+ [
+ "ur",
+ "ation"
+ ],
+ [
+ "▁re",
+ "cord"
+ ],
+ [
+ "▁rec",
+ "ord"
+ ],
+ [
+ "▁",
+ "record"
+ ],
+ [
+ "▁Un",
+ "ivers"
+ ],
+ [
+ "▁",
+ "Univers"
+ ],
+ [
+ "iv",
+ "id"
+ ],
+ [
+ "ivi",
+ "d"
+ ],
+ [
+ "i",
+ "vid"
+ ],
+ [
+ "val",
+ "id"
+ ],
+ [
+ "▁",
+ "Р"
+ ],
+ [
+ "▁h",
+ "old"
+ ],
+ [
+ "▁hol",
+ "d"
+ ],
+ [
+ "▁ho",
+ "ld"
+ ],
+ [
+ "▁",
+ "hold"
+ ],
+ [
+ "▁t",
+ "able"
+ ],
+ [
+ "▁tab",
+ "le"
+ ],
+ [
+ "▁ta",
+ "ble"
+ ],
+ [
+ "▁",
+ "table"
+ ],
+ [
+ "on",
+ "es"
+ ],
+ [
+ "one",
+ "s"
+ ],
+ [
+ "o",
+ "nes"
+ ],
+ [
+ "lin",
+ "k"
+ ],
+ [
+ "l",
+ "ink"
+ ],
+ [
+ "▁G",
+ "e"
+ ],
+ [
+ "▁",
+ "Ge"
+ ],
+ [
+ "▁of",
+ "fer"
+ ],
+ [
+ "▁off",
+ "er"
+ ],
+ [
+ "st",
+ "er"
+ ],
+ [
+ "ste",
+ "r"
+ ],
+ [
+ "s",
+ "ter"
+ ],
+ [
+ "For",
+ "m"
+ ],
+ [
+ "F",
+ "orm"
+ ],
+ [
+ "=",
+ "{"
+ ],
+ [
+ "▁н",
+ "е"
+ ],
+ [
+ "▁",
+ "не"
+ ],
+ [
+ "st",
+ "ance"
+ ],
+ [
+ "stan",
+ "ce"
+ ],
+ [
+ "▁g",
+ "overn"
+ ],
+ [
+ "▁go",
+ "vern"
+ ],
+ [
+ "▁gover",
+ "n"
+ ],
+ [
+ "▁",
+ "govern"
+ ],
+ [
+ "▁te",
+ "chn"
+ ],
+ [
+ "▁tech",
+ "n"
+ ],
+ [
+ "▁",
+ "techn"
+ ],
+ [
+ "▁p",
+ "rim"
+ ],
+ [
+ "▁pr",
+ "im"
+ ],
+ [
+ "▁pri",
+ "m"
+ ],
+ [
+ "▁",
+ "prim"
+ ],
+ [
+ "*",
+ "."
+ ],
+ [
+ "ch",
+ "o"
+ ],
+ [
+ "c",
+ "ho"
+ ],
+ [
+ "ma",
+ "x"
+ ],
+ [
+ "m",
+ "ax"
+ ],
+ [
+ "▁f",
+ "ore"
+ ],
+ [
+ "▁for",
+ "e"
+ ],
+ [
+ "▁fo",
+ "re"
+ ],
+ [
+ "▁",
+ "fore"
+ ],
+ [
+ "▁C",
+ "an"
+ ],
+ [
+ "▁Ca",
+ "n"
+ ],
+ [
+ "▁",
+ "Can"
+ ],
+ [
+ "▁pol",
+ "it"
+ ],
+ [
+ "▁po",
+ "lit"
+ ],
+ [
+ "▁",
+ "polit"
+ ],
+ [
+ "or",
+ "ies"
+ ],
+ [
+ "ori",
+ "es"
+ ],
+ [
+ "orie",
+ "s"
+ ],
+ [
+ "o",
+ "ries"
+ ],
+ [
+ "▁t",
+ "imes"
+ ],
+ [
+ "▁time",
+ "s"
+ ],
+ [
+ "▁tim",
+ "es"
+ ],
+ [
+ "▁ti",
+ "mes"
+ ],
+ [
+ "▁",
+ "times"
+ ],
+ [
+ "▁d",
+ "ans"
+ ],
+ [
+ "▁da",
+ "ns"
+ ],
+ [
+ "▁dan",
+ "s"
+ ],
+ [
+ "▁a",
+ "ir"
+ ],
+ [
+ "▁ai",
+ "r"
+ ],
+ [
+ "▁",
+ "air"
+ ],
+ [
+ "▁any",
+ "thing"
+ ],
+ [
+ "▁s",
+ "ever"
+ ],
+ [
+ "▁se",
+ "ver"
+ ],
+ [
+ "ac",
+ "y"
+ ],
+ [
+ "a",
+ "cy"
+ ],
+ [
+ "}",
+ "_"
+ ],
+ [
+ "H",
+ "e"
+ ],
+ [
+ "▁l",
+ "east"
+ ],
+ [
+ "▁le",
+ "ast"
+ ],
+ [
+ "ip",
+ "s"
+ ],
+ [
+ "i",
+ "ps"
+ ],
+ [
+ "EN",
+ "T"
+ ],
+ [
+ "E",
+ "NT"
+ ],
+ [
+ "d",
+ "o"
+ ],
+ [
+ "▁о",
+ "т"
+ ],
+ [
+ "▁",
+ "от"
+ ],
+ [
+ "▁c",
+ "ost"
+ ],
+ [
+ "▁co",
+ "st"
+ ],
+ [
+ "▁cos",
+ "t"
+ ],
+ [
+ "▁",
+ "cost"
+ ],
+ [
+ ".",
+ "”"
+ ],
+ [
+ "▁child",
+ "ren"
+ ],
+ [
+ "▁",
+ "children"
+ ],
+ [
+ "ab",
+ "ility"
+ ],
+ [
+ "abil",
+ "ity"
+ ],
+ [
+ "Bu",
+ "t"
+ ],
+ [
+ "B",
+ "ut"
+ ],
+ [
+ "▁p",
+ "ath"
+ ],
+ [
+ "▁pat",
+ "h"
+ ],
+ [
+ "▁pa",
+ "th"
+ ],
+ [
+ "▁",
+ "path"
+ ],
+ [
+ "res",
+ "ult"
+ ],
+ [
+ "ac",
+ "ter"
+ ],
+ [
+ "act",
+ "er"
+ ],
+ [
+ "▁e",
+ "lement"
+ ],
+ [
+ "▁el",
+ "ement"
+ ],
+ [
+ "▁ele",
+ "ment"
+ ],
+ [
+ "▁elem",
+ "ent"
+ ],
+ [
+ "▁",
+ "element"
+ ],
+ [
+ "e",
+ "e"
+ ],
+ [
+ "▁w",
+ "ait"
+ ],
+ [
+ "▁wa",
+ "it"
+ ],
+ [
+ "▁",
+ "wait"
+ ],
+ [
+ "▁m",
+ "oney"
+ ],
+ [
+ "▁mon",
+ "ey"
+ ],
+ [
+ "▁mo",
+ "ney"
+ ],
+ [
+ "Ma",
+ "p"
+ ],
+ [
+ "M",
+ "ap"
+ ],
+ [
+ "t",
+ "d"
+ ],
+ [
+ "oi",
+ "n"
+ ],
+ [
+ "o",
+ "in"
+ ],
+ [
+ "iv",
+ "ing"
+ ],
+ [
+ "ivi",
+ "ng"
+ ],
+ [
+ "i",
+ "ving"
+ ],
+ [
+ "ic",
+ "ht"
+ ],
+ [
+ "ich",
+ "t"
+ ],
+ [
+ "i",
+ "cht"
+ ],
+ [
+ "ic",
+ "y"
+ ],
+ [
+ "i",
+ "cy"
+ ],
+ [
+ "sc",
+ "h"
+ ],
+ [
+ "s",
+ "ch"
+ ],
+ [
+ "st",
+ "e"
+ ],
+ [
+ "s",
+ "te"
+ ],
+ [
+ "д",
+ "у"
+ ],
+ [
+ "or",
+ "ed"
+ ],
+ [
+ "ore",
+ "d"
+ ],
+ [
+ "o",
+ "red"
+ ],
+ [
+ "ou",
+ "d"
+ ],
+ [
+ "o",
+ "ud"
+ ],
+ [
+ "il",
+ "le"
+ ],
+ [
+ "ill",
+ "e"
+ ],
+ [
+ "i",
+ "lle"
+ ],
+ [
+ "is",
+ "ed"
+ ],
+ [
+ "ise",
+ "d"
+ ],
+ [
+ "i",
+ "sed"
+ ],
+ [
+ "pl",
+ "ication"
+ ],
+ [
+ "plic",
+ "ation"
+ ],
+ [
+ "▁c",
+ "ustom"
+ ],
+ [
+ "▁cust",
+ "om"
+ ],
+ [
+ "▁",
+ "custom"
+ ],
+ [
+ "▁h",
+ "aving"
+ ],
+ [
+ "▁ha",
+ "ving"
+ ],
+ [
+ "▁hav",
+ "ing"
+ ],
+ [
+ "pon",
+ "ent"
+ ],
+ [
+ "po",
+ "nent"
+ ],
+ [
+ "▁B",
+ "y"
+ ],
+ [
+ "▁",
+ "By"
+ ],
+ [
+ "ul",
+ "es"
+ ],
+ [
+ "ule",
+ "s"
+ ],
+ [
+ "u",
+ "les"
+ ],
+ [
+ "ue",
+ "d"
+ ],
+ [
+ "u",
+ "ed"
+ ],
+ [
+ "at",
+ "ter"
+ ],
+ [
+ "att",
+ "er"
+ ],
+ [
+ "atte",
+ "r"
+ ],
+ [
+ "An",
+ "d"
+ ],
+ [
+ "A",
+ "nd"
+ ],
+ [
+ "it",
+ "ive"
+ ],
+ [
+ "iti",
+ "ve"
+ ],
+ [
+ "De",
+ "f"
+ ],
+ [
+ "D",
+ "ef"
+ ],
+ [
+ "▁m",
+ "oment"
+ ],
+ [
+ "▁mom",
+ "ent"
+ ],
+ [
+ "▁mo",
+ "ment"
+ ],
+ [
+ "▁",
+ "moment"
+ ],
+ [
+ "at",
+ "erial"
+ ],
+ [
+ "ate",
+ "rial"
+ ],
+ [
+ "ater",
+ "ial"
+ ],
+ [
+ "Cl",
+ "ass"
+ ],
+ [
+ "C",
+ "lass"
+ ],
+ [
+ "og",
+ "raph"
+ ],
+ [
+ "ograp",
+ "h"
+ ],
+ [
+ "o",
+ "graph"
+ ],
+ [
+ "ik",
+ "e"
+ ],
+ [
+ "i",
+ "ke"
+ ],
+ [
+ "▁l",
+ "arge"
+ ],
+ [
+ "▁larg",
+ "e"
+ ],
+ [
+ "▁",
+ "large"
+ ],
+ [
+ "▁#",
+ "###"
+ ],
+ [
+ "▁##",
+ "##"
+ ],
+ [
+ "▁###",
+ "#"
+ ],
+ [
+ "▁",
+ "####"
+ ],
+ [
+ "▁e",
+ "ither"
+ ],
+ [
+ "du",
+ "ct"
+ ],
+ [
+ "duc",
+ "t"
+ ],
+ [
+ "d",
+ "uct"
+ ],
+ [
+ "▁T",
+ "hen"
+ ],
+ [
+ "▁The",
+ "n"
+ ],
+ [
+ "▁Th",
+ "en"
+ ],
+ [
+ "▁",
+ "Then"
+ ],
+ [
+ "▁G",
+ "u"
+ ],
+ [
+ "▁",
+ "Gu"
+ ],
+ [
+ "ole",
+ "an"
+ ],
+ [
+ "o",
+ "lean"
+ ],
+ [
+ "pe",
+ "rt"
+ ],
+ [
+ "per",
+ "t"
+ ],
+ [
+ "p",
+ "ert"
+ ],
+ [
+ "▁G",
+ "et"
+ ],
+ [
+ "▁Ge",
+ "t"
+ ],
+ [
+ "▁",
+ "Get"
+ ],
+ [
+ "▁A",
+ "b"
+ ],
+ [
+ "▁",
+ "Ab"
+ ],
+ [
+ "▁sh",
+ "ort"
+ ],
+ [
+ "▁",
+ "short"
+ ],
+ [
+ "O",
+ "n"
+ ],
+ [
+ "im",
+ "ent"
+ ],
+ [
+ "ime",
+ "nt"
+ ],
+ [
+ "imen",
+ "t"
+ ],
+ [
+ "i",
+ "ment"
+ ],
+ [
+ "▁pro",
+ "ject"
+ ],
+ [
+ "▁",
+ "project"
+ ],
+ [
+ "cri",
+ "pt"
+ ],
+ [
+ "cr",
+ "ipt"
+ ],
+ [
+ "c",
+ "ript"
+ ],
+ [
+ "▁incl",
+ "uding"
+ ],
+ [
+ "▁includ",
+ "ing"
+ ],
+ [
+ "▁inclu",
+ "ding"
+ ],
+ [
+ "▁",
+ "including"
+ ],
+ [
+ "ни",
+ "я"
+ ],
+ [
+ "▁m",
+ "aking"
+ ],
+ [
+ "▁ma",
+ "king"
+ ],
+ [
+ "▁",
+ "making"
+ ],
+ [
+ "▁some",
+ "one"
+ ],
+ [
+ "▁F",
+ "l"
+ ],
+ [
+ "▁",
+ "Fl"
+ ],
+ [
+ "▁s",
+ "at"
+ ],
+ [
+ "▁sa",
+ "t"
+ ],
+ [
+ "▁",
+ "sat"
+ ],
+ [
+ "▁comp",
+ "any"
+ ],
+ [
+ "▁compan",
+ "y"
+ ],
+ [
+ "▁",
+ "company"
+ ],
+ [
+ "oc",
+ "us"
+ ],
+ [
+ "p",
+ "u"
+ ],
+ [
+ "▁G",
+ "od"
+ ],
+ [
+ "▁Go",
+ "d"
+ ],
+ [
+ "▁",
+ "God"
+ ],
+ [
+ "if",
+ "ication"
+ ],
+ [
+ "ific",
+ "ation"
+ ],
+ [
+ "N",
+ "o"
+ ],
+ [
+ "▁s",
+ "n"
+ ],
+ [
+ "▁",
+ "sn"
+ ],
+ [
+ "an",
+ "o"
+ ],
+ [
+ "a",
+ "no"
+ ],
+ [
+ "g",
+ "a"
+ ],
+ [
+ "▁a",
+ "u"
+ ],
+ [
+ "▁",
+ "au"
+ ],
+ [
+ "▁c",
+ "ou"
+ ],
+ [
+ "▁co",
+ "u"
+ ],
+ [
+ "▁",
+ "cou"
+ ],
+ [
+ "á",
+ "s"
+ ],
+ [
+ "en",
+ "ded"
+ ],
+ [
+ "end",
+ "ed"
+ ],
+ [
+ "ende",
+ "d"
+ ],
+ [
+ "т",
+ "у"
+ ],
+ [
+ "ob",
+ "er"
+ ],
+ [
+ "obe",
+ "r"
+ ],
+ [
+ "o",
+ "ber"
+ ],
+ [
+ "▁n",
+ "othing"
+ ],
+ [
+ "▁not",
+ "hing"
+ ],
+ [
+ "▁no",
+ "thing"
+ ],
+ [
+ "▁n",
+ "et"
+ ],
+ [
+ "▁ne",
+ "t"
+ ],
+ [
+ "▁",
+ "net"
+ ],
+ [
+ "▁p",
+ "ot"
+ ],
+ [
+ "▁po",
+ "t"
+ ],
+ [
+ "▁",
+ "pot"
+ ],
+ [
+ "▁t",
+ "yp"
+ ],
+ [
+ "▁ty",
+ "p"
+ ],
+ [
+ "▁",
+ "typ"
+ ],
+ [
+ "▁it",
+ "em"
+ ],
+ [
+ "▁i",
+ "tem"
+ ],
+ [
+ "▁",
+ "item"
+ ],
+ [
+ "re",
+ "w"
+ ],
+ [
+ "r",
+ "ew"
+ ],
+ [
+ "At",
+ "t"
+ ],
+ [
+ "A",
+ "tt"
+ ],
+ [
+ "▁you",
+ "ng"
+ ],
+ [
+ "▁yo",
+ "ung"
+ ],
+ [
+ "}",
+ "\r"
+ ],
+ [
+ "nd",
+ "er"
+ ],
+ [
+ "nde",
+ "r"
+ ],
+ [
+ "n",
+ "der"
+ ],
+ [
+ "st",
+ "art"
+ ],
+ [
+ "sta",
+ "rt"
+ ],
+ [
+ "star",
+ "t"
+ ],
+ [
+ "▁S",
+ "c"
+ ],
+ [
+ "▁",
+ "Sc"
+ ],
+ [
+ "*",
+ ")"
+ ],
+ [
+ "▁e",
+ "nc"
+ ],
+ [
+ "▁en",
+ "c"
+ ],
+ [
+ "▁",
+ "enc"
+ ],
+ [
+ "▁w",
+ "omen"
+ ],
+ [
+ "▁wom",
+ "en"
+ ],
+ [
+ "▁wo",
+ "men"
+ ],
+ [
+ "▁look",
+ "ing"
+ ],
+ [
+ "▁lo",
+ "oking"
+ ],
+ [
+ "▁",
+ "looking"
+ ],
+ [
+ "▁р",
+ "о"
+ ],
+ [
+ "▁",
+ "ро"
+ ],
+ [
+ "▁he",
+ "alth"
+ ],
+ [
+ "▁heal",
+ "th"
+ ],
+ [
+ "▁",
+ "health"
+ ],
+ [
+ "Pat",
+ "h"
+ ],
+ [
+ "P",
+ "ath"
+ ],
+ [
+ "▁A",
+ "fter"
+ ],
+ [
+ "▁Af",
+ "ter"
+ ],
+ [
+ "▁",
+ "After"
+ ],
+ [
+ "▁m",
+ "ult"
+ ],
+ [
+ "▁mu",
+ "lt"
+ ],
+ [
+ "▁mul",
+ "t"
+ ],
+ [
+ "▁",
+ "mult"
+ ],
+ [
+ "▁{",
+ "\\"
+ ],
+ [
+ "▁",
+ "{\\"
+ ],
+ [
+ "▁l",
+ "and"
+ ],
+ [
+ "▁la",
+ "nd"
+ ],
+ [
+ "▁lan",
+ "d"
+ ],
+ [
+ "▁",
+ "land"
+ ],
+ [
+ "or",
+ "ld"
+ ],
+ [
+ "▁D",
+ "es"
+ ],
+ [
+ "▁De",
+ "s"
+ ],
+ [
+ "▁",
+ "Des"
+ ],
+ [
+ "▁e",
+ "ng"
+ ],
+ [
+ "▁en",
+ "g"
+ ],
+ [
+ "▁",
+ "eng"
+ ],
+ [
+ "in",
+ "put"
+ ],
+ [
+ "▁P",
+ "ol"
+ ],
+ [
+ "▁Po",
+ "l"
+ ],
+ [
+ "▁",
+ "Pol"
+ ],
+ [
+ "\"",
+ "\""
+ ],
+ [
+ "Co",
+ "de"
+ ],
+ [
+ "C",
+ "ode"
+ ],
+ [
+ "▁s",
+ "upp"
+ ],
+ [
+ "▁su",
+ "pp"
+ ],
+ [
+ "▁sup",
+ "p"
+ ],
+ [
+ "▁",
+ "supp"
+ ],
+ [
+ "ain",
+ "er"
+ ],
+ [
+ "ai",
+ "ner"
+ ],
+ [
+ "aine",
+ "r"
+ ],
+ [
+ "a",
+ "iner"
+ ],
+ [
+ "he",
+ "ck"
+ ],
+ [
+ "▁m",
+ "or"
+ ],
+ [
+ "▁mo",
+ "r"
+ ],
+ [
+ "▁",
+ "mor"
+ ],
+ [
+ "▁m",
+ "ill"
+ ],
+ [
+ "▁mil",
+ "l"
+ ],
+ [
+ "▁mi",
+ "ll"
+ ],
+ [
+ "▁",
+ "mill"
+ ],
+ [
+ "▁a",
+ "w"
+ ],
+ [
+ "▁",
+ "aw"
+ ],
+ [
+ "f",
+ "s"
+ ],
+ [
+ "▁do",
+ "ing"
+ ],
+ [
+ "ting",
+ "s"
+ ],
+ [
+ "t",
+ "ings"
+ ],
+ [
+ "ad",
+ "es"
+ ],
+ [
+ "ade",
+ "s"
+ ],
+ [
+ "a",
+ "des"
+ ],
+ [
+ "▁to",
+ "get"
+ ],
+ [
+ "▁c",
+ "ertain"
+ ],
+ [
+ "▁cert",
+ "ain"
+ ],
+ [
+ "▁cer",
+ "tain"
+ ],
+ [
+ "▁t",
+ "ogether"
+ ],
+ [
+ "▁toget",
+ "her"
+ ],
+ [
+ "C",
+ "E"
+ ],
+ [
+ "ide",
+ "o"
+ ],
+ [
+ "▁Amer",
+ "ican"
+ ],
+ [
+ "▁America",
+ "n"
+ ],
+ [
+ "▁",
+ "American"
+ ],
+ [
+ "on",
+ "y"
+ ],
+ [
+ "o",
+ "ny"
+ ],
+ [
+ "id",
+ "d"
+ ],
+ [
+ "i",
+ "dd"
+ ],
+ [
+ "I",
+ "I"
+ ],
+ [
+ "ge",
+ "d"
+ ],
+ [
+ "g",
+ "ed"
+ ],
+ [
+ "ab",
+ "les"
+ ],
+ [
+ "able",
+ "s"
+ ],
+ [
+ "abl",
+ "es"
+ ],
+ [
+ "a",
+ "bles"
+ ],
+ [
+ "▁ide",
+ "nt"
+ ],
+ [
+ "▁id",
+ "ent"
+ ],
+ [
+ "▁",
+ "ident"
+ ],
+ [
+ "io",
+ "d"
+ ],
+ [
+ "i",
+ "od"
+ ],
+ [
+ "▁p",
+ "arent"
+ ],
+ [
+ "▁par",
+ "ent"
+ ],
+ [
+ "▁pa",
+ "rent"
+ ],
+ [
+ "▁pare",
+ "nt"
+ ],
+ [
+ "▁",
+ "parent"
+ ],
+ [
+ "F",
+ "or"
+ ],
+ [
+ "amb",
+ "da"
+ ],
+ [
+ "an",
+ "do"
+ ],
+ [
+ "and",
+ "o"
+ ],
+ [
+ "=",
+ "\\"
+ ],
+ [
+ "ag",
+ "ed"
+ ],
+ [
+ "age",
+ "d"
+ ],
+ [
+ "a",
+ "ged"
+ ],
+ [
+ "en",
+ "ding"
+ ],
+ [
+ "end",
+ "ing"
+ ],
+ [
+ "In",
+ "t"
+ ],
+ [
+ "I",
+ "nt"
+ ],
+ [
+ "▁poss",
+ "ible"
+ ],
+ [
+ "▁",
+ "possible"
+ ],
+ [
+ "▁с",
+ "о"
+ ],
+ [
+ "▁",
+ "со"
+ ],
+ [
+ "iv",
+ "ity"
+ ],
+ [
+ "ivi",
+ "ty"
+ ],
+ [
+ "nu",
+ "m"
+ ],
+ [
+ "n",
+ "um"
+ ],
+ [
+ "r",
+ "t"
+ ],
+ [
+ "aj",
+ "or"
+ ],
+ [
+ "ajo",
+ "r"
+ ],
+ [
+ "a",
+ "jor"
+ ],
+ [
+ "cre",
+ "ate"
+ ],
+ [
+ "creat",
+ "e"
+ ],
+ [
+ "c",
+ "reate"
+ ],
+ [
+ "ri",
+ "de"
+ ],
+ [
+ "rid",
+ "e"
+ ],
+ [
+ "r",
+ "ide"
+ ],
+ [
+ "▁k",
+ "new"
+ ],
+ [
+ "▁kn",
+ "ew"
+ ],
+ [
+ "▁kne",
+ "w"
+ ],
+ [
+ "bi",
+ "t"
+ ],
+ [
+ "b",
+ "it"
+ ],
+ [
+ "it",
+ "ional"
+ ],
+ [
+ "ition",
+ "al"
+ ],
+ [
+ "iti",
+ "onal"
+ ],
+ [
+ "▁l",
+ "ik"
+ ],
+ [
+ "▁li",
+ "k"
+ ],
+ [
+ "▁",
+ "lik"
+ ],
+ [
+ "▁H",
+ "er"
+ ],
+ [
+ "▁He",
+ "r"
+ ],
+ [
+ "▁",
+ "Her"
+ ],
+ [
+ "ens",
+ "ion"
+ ],
+ [
+ "\"",
+ "."
+ ],
+ [
+ "ot",
+ "o"
+ ],
+ [
+ "o",
+ "to"
+ ],
+ [
+ "▁ex",
+ "ist"
+ ],
+ [
+ "▁",
+ "exist"
+ ],
+ [
+ "ak",
+ "en"
+ ],
+ [
+ "ake",
+ "n"
+ ],
+ [
+ "a",
+ "ken"
+ ],
+ [
+ "▁act",
+ "ually"
+ ],
+ [
+ "▁actual",
+ "ly"
+ ],
+ [
+ "c",
+ "a"
+ ],
+ [
+ "▁",
+ "Г"
+ ],
+ [
+ "х",
+ "о"
+ ],
+ [
+ "in",
+ "n"
+ ],
+ [
+ "i",
+ "nn"
+ ],
+ [
+ "Al",
+ "l"
+ ],
+ [
+ "A",
+ "ll"
+ ],
+ [
+ "bu",
+ "f"
+ ],
+ [
+ "b",
+ "uf"
+ ],
+ [
+ "▁M",
+ "e"
+ ],
+ [
+ "▁",
+ "Me"
+ ],
+ [
+ "▁s",
+ "een"
+ ],
+ [
+ "▁se",
+ "en"
+ ],
+ [
+ "▁see",
+ "n"
+ ],
+ [
+ "▁",
+ "seen"
+ ],
+ [
+ "op",
+ "s"
+ ],
+ [
+ "o",
+ "ps"
+ ],
+ [
+ "No",
+ "t"
+ ],
+ [
+ "N",
+ "ot"
+ ],
+ [
+ "▁cont",
+ "rol"
+ ],
+ [
+ "▁contr",
+ "ol"
+ ],
+ [
+ "▁contro",
+ "l"
+ ],
+ [
+ "▁",
+ "control"
+ ],
+ [
+ "▁res",
+ "pon"
+ ],
+ [
+ "▁resp",
+ "on"
+ ],
+ [
+ "▁",
+ "respon"
+ ],
+ [
+ "}",
+ ";"
+ ],
+ [
+ "il",
+ "t"
+ ],
+ [
+ "i",
+ "lt"
+ ],
+ [
+ "is",
+ "k"
+ ],
+ [
+ "i",
+ "sk"
+ ],
+ [
+ "▁b",
+ "ad"
+ ],
+ [
+ "▁ba",
+ "d"
+ ],
+ [
+ "▁",
+ "bad"
+ ],
+ [
+ "▁o",
+ "ften"
+ ],
+ [
+ "▁of",
+ "ten"
+ ],
+ [
+ "▁p",
+ "ast"
+ ],
+ [
+ "▁pas",
+ "t"
+ ],
+ [
+ "▁pa",
+ "st"
+ ],
+ [
+ "ap",
+ "er"
+ ],
+ [
+ "ape",
+ "r"
+ ],
+ [
+ "a",
+ "per"
+ ],
+ [
+ "▁re",
+ "ason"
+ ],
+ [
+ "▁",
+ "reason"
+ ],
+ [
+ "et",
+ "ers"
+ ],
+ [
+ "eter",
+ "s"
+ ],
+ [
+ "ete",
+ "rs"
+ ],
+ [
+ "e",
+ "ters"
+ ],
+ [
+ "▁w",
+ "anted"
+ ],
+ [
+ "▁want",
+ "ed"
+ ],
+ [
+ "ur",
+ "a"
+ ],
+ [
+ "u",
+ "ra"
+ ],
+ [
+ "ta",
+ "ble"
+ ],
+ [
+ "tab",
+ "le"
+ ],
+ [
+ "t",
+ "able"
+ ],
+ [
+ "or",
+ "mal"
+ ],
+ [
+ "orm",
+ "al"
+ ],
+ [
+ "wid",
+ "th"
+ ],
+ [
+ "w",
+ "idth"
+ ],
+ [
+ "г",
+ "а"
+ ],
+ [
+ "pt",
+ "r"
+ ],
+ [
+ "p",
+ "tr"
+ ],
+ [
+ "▁d",
+ "est"
+ ],
+ [
+ "▁de",
+ "st"
+ ],
+ [
+ "▁des",
+ "t"
+ ],
+ [
+ "▁",
+ "dest"
+ ],
+ [
+ "▁de",
+ "sign"
+ ],
+ [
+ "▁des",
+ "ign"
+ ],
+ [
+ "▁",
+ "design"
+ ],
+ [
+ "▁s",
+ "ound"
+ ],
+ [
+ "▁so",
+ "und"
+ ],
+ [
+ "▁sou",
+ "nd"
+ ],
+ [
+ "▁",
+ "sound"
+ ],
+ [
+ "▁p",
+ "lan"
+ ],
+ [
+ "▁pl",
+ "an"
+ ],
+ [
+ "▁",
+ "plan"
+ ],
+ [
+ "▁b",
+ "ase"
+ ],
+ [
+ "▁bas",
+ "e"
+ ],
+ [
+ "▁ba",
+ "se"
+ ],
+ [
+ "▁",
+ "base"
+ ],
+ [
+ "ha",
+ "nd"
+ ],
+ [
+ "han",
+ "d"
+ ],
+ [
+ "h",
+ "and"
+ ],
+ [
+ "g",
+ "s"
+ ],
+ [
+ "▁s",
+ "ays"
+ ],
+ [
+ "▁sa",
+ "ys"
+ ],
+ [
+ "▁say",
+ "s"
+ ],
+ [
+ "fun",
+ "ction"
+ ],
+ [
+ "f",
+ "unction"
+ ],
+ [
+ "▁t",
+ "ri"
+ ],
+ [
+ "▁tr",
+ "i"
+ ],
+ [
+ "▁",
+ "tri"
+ ],
+ [
+ "m",
+ "t"
+ ],
+ [
+ "▁in",
+ "vest"
+ ],
+ [
+ "▁inv",
+ "est"
+ ],
+ [
+ "▁av",
+ "ailable"
+ ],
+ [
+ "▁",
+ "available"
+ ],
+ [
+ "ay",
+ "out"
+ ],
+ [
+ "a",
+ "yout"
+ ],
+ [
+ "▁o",
+ "ch"
+ ],
+ [
+ "▁oc",
+ "h"
+ ],
+ [
+ "▁",
+ "och"
+ ],
+ [
+ "▁l",
+ "as"
+ ],
+ [
+ "▁la",
+ "s"
+ ],
+ [
+ "▁",
+ "las"
+ ],
+ [
+ "il",
+ "led"
+ ],
+ [
+ "ill",
+ "ed"
+ ],
+ [
+ "ille",
+ "d"
+ ],
+ [
+ "V",
+ "al"
+ ],
+ [
+ "▁",
+ "ф"
+ ],
+ [
+ "ie",
+ "ty"
+ ],
+ [
+ "iet",
+ "y"
+ ],
+ [
+ "i",
+ "ety"
+ ],
+ [
+ "mo",
+ "n"
+ ],
+ [
+ "m",
+ "on"
+ ],
+ [
+ "Ha",
+ "nd"
+ ],
+ [
+ "H",
+ "and"
+ ],
+ [
+ "F",
+ "r"
+ ],
+ [
+ "ia",
+ "m"
+ ],
+ [
+ "i",
+ "am"
+ ],
+ [
+ "pa",
+ "ce"
+ ],
+ [
+ "p",
+ "ace"
+ ],
+ [
+ "▁O",
+ "b"
+ ],
+ [
+ "▁",
+ "Ob"
+ ],
+ [
+ "▁p",
+ "ara"
+ ],
+ [
+ "▁par",
+ "a"
+ ],
+ [
+ "▁pa",
+ "ra"
+ ],
+ [
+ "▁",
+ "para"
+ ],
+ [
+ "▁me",
+ "et"
+ ],
+ [
+ "▁s",
+ "um"
+ ],
+ [
+ "▁su",
+ "m"
+ ],
+ [
+ "▁",
+ "sum"
+ ],
+ [
+ "M",
+ "essage"
+ ],
+ [
+ "ic",
+ "i"
+ ],
+ [
+ "i",
+ "ci"
+ ],
+ [
+ "▁k",
+ "nown"
+ ],
+ [
+ "▁kn",
+ "own"
+ ],
+ [
+ "▁know",
+ "n"
+ ],
+ [
+ "▁",
+ "known"
+ ],
+ [
+ "▁g",
+ "en"
+ ],
+ [
+ "▁ge",
+ "n"
+ ],
+ [
+ "▁",
+ "gen"
+ ],
+ [
+ "am",
+ "ma"
+ ],
+ [
+ "amm",
+ "a"
+ ],
+ [
+ "a",
+ "mma"
+ ],
+ [
+ "ar",
+ "r"
+ ],
+ [
+ "a",
+ "rr"
+ ],
+ [
+ "▁t",
+ "re"
+ ],
+ [
+ "▁tr",
+ "e"
+ ],
+ [
+ "▁",
+ "tre"
+ ],
+ [
+ "ok",
+ "e"
+ ],
+ [
+ "o",
+ "ke"
+ ],
+ [
+ "ut",
+ "h"
+ ],
+ [
+ "u",
+ "th"
+ ],
+ [
+ "~",
+ "\\"
+ ],
+ [
+ "▁exper",
+ "ience"
+ ],
+ [
+ "▁experi",
+ "ence"
+ ],
+ [
+ "ic",
+ "le"
+ ],
+ [
+ "icl",
+ "e"
+ ],
+ [
+ "i",
+ "cle"
+ ],
+ [
+ "▁I",
+ "l"
+ ],
+ [
+ "▁",
+ "Il"
+ ],
+ [
+ "▁s",
+ "ent"
+ ],
+ [
+ "▁se",
+ "nt"
+ ],
+ [
+ "▁sen",
+ "t"
+ ],
+ [
+ "▁",
+ "sent"
+ ],
+ [
+ "▁o",
+ "thers"
+ ],
+ [
+ "▁other",
+ "s"
+ ],
+ [
+ "▁",
+ "others"
+ ],
+ [
+ "▁s",
+ "oft"
+ ],
+ [
+ "▁so",
+ "ft"
+ ],
+ [
+ "▁",
+ "soft"
+ ],
+ [
+ "I",
+ "P"
+ ],
+ [
+ "▁m",
+ "ax"
+ ],
+ [
+ "▁ma",
+ "x"
+ ],
+ [
+ "▁",
+ "max"
+ ],
+ [
+ "ba",
+ "ll"
+ ],
+ [
+ "bal",
+ "l"
+ ],
+ [
+ "b",
+ "all"
+ ],
+ [
+ "▁mark",
+ "et"
+ ],
+ [
+ "▁mar",
+ "ket"
+ ],
+ [
+ "▁",
+ "market"
+ ],
+ [
+ "▁p",
+ "our"
+ ],
+ [
+ "▁po",
+ "ur"
+ ],
+ [
+ "▁pou",
+ "r"
+ ],
+ [
+ "pr",
+ "ession"
+ ],
+ [
+ "press",
+ "ion"
+ ],
+ [
+ "p",
+ "ression"
+ ],
+ [
+ "ep",
+ "s"
+ ],
+ [
+ "e",
+ "ps"
+ ],
+ [
+ "▁s",
+ "aw"
+ ],
+ [
+ "▁sa",
+ "w"
+ ],
+ [
+ "▁a",
+ "cross"
+ ],
+ [
+ "▁ac",
+ "ross"
+ ],
+ [
+ "▁S",
+ "u"
+ ],
+ [
+ "▁",
+ "Su"
+ ],
+ [
+ "O",
+ "ver"
+ ],
+ [
+ "ни",
+ "е"
+ ],
+ [
+ "ul",
+ "ation"
+ ],
+ [
+ "u",
+ "lation"
+ ],
+ [
+ "▁R",
+ "eg"
+ ],
+ [
+ "▁Re",
+ "g"
+ ],
+ [
+ "▁",
+ "Reg"
+ ],
+ [
+ "▁+",
+ "="
+ ],
+ [
+ "▁",
+ "+="
+ ],
+ [
+ "bo",
+ "dy"
+ ],
+ [
+ "b",
+ "ody"
+ ],
+ [
+ ")",
+ "\\"
+ ],
+ [
+ "▁pr",
+ "int"
+ ],
+ [
+ "▁pri",
+ "nt"
+ ],
+ [
+ "▁prin",
+ "t"
+ ],
+ [
+ "▁",
+ "print"
+ ],
+ [
+ "▁п",
+ "ри"
+ ],
+ [
+ "▁пр",
+ "и"
+ ],
+ [
+ "▁",
+ "при"
+ ],
+ [
+ "d",
+ "b"
+ ],
+ [
+ "our",
+ "ces"
+ ],
+ [
+ "ource",
+ "s"
+ ],
+ [
+ "ward",
+ "s"
+ ],
+ [
+ "war",
+ "ds"
+ ],
+ [
+ "w",
+ "ards"
+ ],
+ [
+ "▁bl",
+ "ack"
+ ],
+ [
+ "▁",
+ "black"
+ ],
+ [
+ "с",
+ "о"
+ ],
+ [
+ "il",
+ "i"
+ ],
+ [
+ "i",
+ "li"
+ ],
+ [
+ "▁E",
+ "d"
+ ],
+ [
+ "▁",
+ "Ed"
+ ],
+ [
+ "▁com",
+ "plet"
+ ],
+ [
+ "▁comp",
+ "let"
+ ],
+ [
+ "▁compl",
+ "et"
+ ],
+ [
+ "▁s",
+ "ingle"
+ ],
+ [
+ "▁sing",
+ "le"
+ ],
+ [
+ "▁sin",
+ "gle"
+ ],
+ [
+ "▁",
+ "single"
+ ],
+ [
+ "▁I",
+ "N"
+ ],
+ [
+ "▁",
+ "IN"
+ ],
+ [
+ "ac",
+ "hed"
+ ],
+ [
+ "ach",
+ "ed"
+ ],
+ [
+ "ache",
+ "d"
+ ],
+ [
+ "a",
+ "ched"
+ ],
+ [
+ "b",
+ "t"
+ ],
+ [
+ "▁c",
+ "ode"
+ ],
+ [
+ "▁co",
+ "de"
+ ],
+ [
+ "▁cod",
+ "e"
+ ],
+ [
+ "▁",
+ "code"
+ ],
+ [
+ "▁b",
+ "ool"
+ ],
+ [
+ "▁bo",
+ "ol"
+ ],
+ [
+ "▁",
+ "bool"
+ ],
+ [
+ "▁a",
+ "rea"
+ ],
+ [
+ "▁are",
+ "a"
+ ],
+ [
+ "▁ar",
+ "ea"
+ ],
+ [
+ "▁",
+ "area"
+ ],
+ [
+ "▁re",
+ "quire"
+ ],
+ [
+ "▁requ",
+ "ire"
+ ],
+ [
+ "▁",
+ "require"
+ ],
+ [
+ "▁pro",
+ "blem"
+ ],
+ [
+ "▁proble",
+ "m"
+ ],
+ [
+ "▁prob",
+ "lem"
+ ],
+ [
+ "ac",
+ "ed"
+ ],
+ [
+ "ace",
+ "d"
+ ],
+ [
+ "a",
+ "ced"
+ ],
+ [
+ "Eq",
+ "u"
+ ],
+ [
+ "E",
+ "qu"
+ ],
+ [
+ "▁con",
+ "fig"
+ ],
+ [
+ "▁conf",
+ "ig"
+ ],
+ [
+ "▁",
+ "config"
+ ],
+ [
+ "ve",
+ "c"
+ ],
+ [
+ "v",
+ "ec"
+ ],
+ [
+ "ne",
+ "y"
+ ],
+ [
+ "n",
+ "ey"
+ ],
+ [
+ "c",
+ "y"
+ ],
+ [
+ "A",
+ "l"
+ ],
+ [
+ "▁acc",
+ "ount"
+ ],
+ [
+ "▁ac",
+ "count"
+ ],
+ [
+ "▁",
+ "account"
+ ],
+ [
+ "ym",
+ "bol"
+ ],
+ [
+ "▁s",
+ "te"
+ ],
+ [
+ "▁st",
+ "e"
+ ],
+ [
+ "▁",
+ "ste"
+ ],
+ [
+ "ge",
+ "s"
+ ],
+ [
+ "g",
+ "es"
+ ],
+ [
+ "Ar",
+ "ray"
+ ],
+ [
+ "Arr",
+ "ay"
+ ],
+ [
+ "em",
+ "pl"
+ ],
+ [
+ "emp",
+ "l"
+ ],
+ [
+ "con",
+ "text"
+ ],
+ [
+ "cont",
+ "ext"
+ ],
+ [
+ "De",
+ "s"
+ ],
+ [
+ "D",
+ "es"
+ ],
+ [
+ "Res",
+ "ult"
+ ],
+ [
+ "ec",
+ "ut"
+ ],
+ [
+ "e",
+ "cut"
+ ],
+ [
+ "▁t",
+ "arget"
+ ],
+ [
+ "▁tar",
+ "get"
+ ],
+ [
+ "▁",
+ "target"
+ ],
+ [
+ "▁get",
+ "ting"
+ ],
+ [
+ "\"",
+ "/>"
+ ],
+ [
+ "og",
+ "le"
+ ],
+ [
+ "o",
+ "gle"
+ ],
+ [
+ "▁him",
+ "self"
+ ],
+ [
+ "▁was",
+ "n"
+ ],
+ [
+ "▁wa",
+ "sn"
+ ],
+ [
+ "▁b",
+ "lock"
+ ],
+ [
+ "▁bl",
+ "ock"
+ ],
+ [
+ "▁blo",
+ "ck"
+ ],
+ [
+ "▁",
+ "block"
+ ],
+ [
+ "▁a",
+ "nt"
+ ],
+ [
+ "▁an",
+ "t"
+ ],
+ [
+ "▁",
+ "ant"
+ ],
+ [
+ "▁Y",
+ "ork"
+ ],
+ [
+ "▁be",
+ "come"
+ ],
+ [
+ "▁bec",
+ "ome"
+ ],
+ [
+ "if",
+ "f"
+ ],
+ [
+ "i",
+ "ff"
+ ],
+ [
+ "port",
+ "s"
+ ],
+ [
+ "por",
+ "ts"
+ ],
+ [
+ "p",
+ "orts"
+ ],
+ [
+ "re",
+ "ate"
+ ],
+ [
+ "reat",
+ "e"
+ ],
+ [
+ "rea",
+ "te"
+ ],
+ [
+ "=",
+ "'"
+ ],
+ [
+ "c",
+ "d"
+ ],
+ [
+ "loc",
+ "ation"
+ ],
+ [
+ "l",
+ "ocation"
+ ],
+ [
+ "е",
+ "т"
+ ],
+ [
+ "▁a",
+ "ccess"
+ ],
+ [
+ "▁acc",
+ "ess"
+ ],
+ [
+ "▁ac",
+ "cess"
+ ],
+ [
+ "▁",
+ "access"
+ ],
+ [
+ "gr",
+ "ess"
+ ],
+ [
+ "gre",
+ "ss"
+ ],
+ [
+ "gres",
+ "s"
+ ],
+ [
+ "g",
+ "ress"
+ ],
+ [
+ "ro",
+ "s"
+ ],
+ [
+ "r",
+ "os"
+ ],
+ [
+ "U",
+ "p"
+ ],
+ [
+ "▁work",
+ "ing"
+ ],
+ [
+ "▁wor",
+ "king"
+ ],
+ [
+ "▁",
+ "working"
+ ],
+ [
+ "▁A",
+ "m"
+ ],
+ [
+ "▁",
+ "Am"
+ ],
+ [
+ "iq",
+ "u"
+ ],
+ [
+ "i",
+ "qu"
+ ],
+ [
+ "ce",
+ "r"
+ ],
+ [
+ "c",
+ "er"
+ ],
+ [
+ "▁(",
+ "("
+ ],
+ [
+ "▁",
+ "(("
+ ],
+ [
+ "▁P",
+ "er"
+ ],
+ [
+ "▁Pe",
+ "r"
+ ],
+ [
+ "▁",
+ "Per"
+ ],
+ [
+ "▁f",
+ "unc"
+ ],
+ [
+ "▁fun",
+ "c"
+ ],
+ [
+ "▁fu",
+ "nc"
+ ],
+ [
+ "▁",
+ "func"
+ ],
+ [
+ "▁g",
+ "irl"
+ ],
+ [
+ "▁gi",
+ "rl"
+ ],
+ [
+ "▁gir",
+ "l"
+ ],
+ [
+ "▁",
+ "girl"
+ ],
+ [
+ "▁ab",
+ "ove"
+ ],
+ [
+ "pe",
+ "n"
+ ],
+ [
+ "p",
+ "en"
+ ],
+ [
+ "п",
+ "и"
+ ],
+ [
+ "id",
+ "o"
+ ],
+ [
+ "i",
+ "do"
+ ],
+ [
+ "▁v",
+ "ersion"
+ ],
+ [
+ "▁vers",
+ "ion"
+ ],
+ [
+ "▁",
+ "version"
+ ],
+ [
+ "T",
+ "Y"
+ ],
+ [
+ "▁",
+ ";"
+ ],
+ [
+ "ma",
+ "ry"
+ ],
+ [
+ "mar",
+ "y"
+ ],
+ [
+ "m",
+ "ary"
+ ],
+ [
+ "ab",
+ "led"
+ ],
+ [
+ "able",
+ "d"
+ ],
+ [
+ "abl",
+ "ed"
+ ],
+ [
+ "a",
+ "bled"
+ ],
+ [
+ "an",
+ "nel"
+ ],
+ [
+ "ann",
+ "el"
+ ],
+ [
+ "anne",
+ "l"
+ ],
+ [
+ "▁ex",
+ "ample"
+ ],
+ [
+ "▁exam",
+ "ple"
+ ],
+ [
+ "▁",
+ "example"
+ ],
+ [
+ "▁con",
+ "text"
+ ],
+ [
+ "▁cont",
+ "ext"
+ ],
+ [
+ "▁",
+ "context"
+ ],
+ [
+ "O",
+ "P"
+ ],
+ [
+ "▁re",
+ "d"
+ ],
+ [
+ "▁r",
+ "ed"
+ ],
+ [
+ "▁",
+ "red"
+ ],
+ [
+ "▁c",
+ "ir"
+ ],
+ [
+ "▁ci",
+ "r"
+ ],
+ [
+ "▁",
+ "cir"
+ ],
+ [
+ "s",
+ "m"
+ ],
+ [
+ "Lo",
+ "g"
+ ],
+ [
+ "L",
+ "og"
+ ],
+ [
+ "▁s",
+ "pace"
+ ],
+ [
+ "▁sp",
+ "ace"
+ ],
+ [
+ "▁",
+ "space"
+ ],
+ [
+ "▁f",
+ "ut"
+ ],
+ [
+ "▁fu",
+ "t"
+ ],
+ [
+ "▁G",
+ "ener"
+ ],
+ [
+ "▁Ge",
+ "ner"
+ ],
+ [
+ "▁Gen",
+ "er"
+ ],
+ [
+ "▁Gene",
+ "r"
+ ],
+ [
+ "▁",
+ "Gener"
+ ],
+ [
+ "il",
+ "ls"
+ ],
+ [
+ "ill",
+ "s"
+ ],
+ [
+ "▁d",
+ "ri"
+ ],
+ [
+ "▁dr",
+ "i"
+ ],
+ [
+ "_",
+ "."
+ ],
+ [
+ "▁f",
+ "elt"
+ ],
+ [
+ "▁fe",
+ "lt"
+ ],
+ [
+ "▁fel",
+ "t"
+ ],
+ [
+ "▁o",
+ "ffic"
+ ],
+ [
+ "▁of",
+ "fic"
+ ],
+ [
+ "▁off",
+ "ic"
+ ],
+ [
+ "▁=",
+ "=="
+ ],
+ [
+ "▁==",
+ "="
+ ],
+ [
+ "▁",
+ "==="
+ ],
+ [
+ "i",
+ "i"
+ ],
+ [
+ "▁start",
+ "ed"
+ ],
+ [
+ "▁star",
+ "ted"
+ ],
+ [
+ "▁",
+ "Т"
+ ],
+ [
+ "▁}",
+ ");"
+ ],
+ [
+ "▁})",
+ ";"
+ ],
+ [
+ "▁",
+ "});"
+ ],
+ [
+ "j",
+ "s"
+ ],
+ [
+ "▁fr",
+ "ont"
+ ],
+ [
+ "▁fro",
+ "nt"
+ ],
+ [
+ "▁",
+ "front"
+ ],
+ [
+ "▁al",
+ "most"
+ ],
+ [
+ "ir",
+ "m"
+ ],
+ [
+ "i",
+ "rm"
+ ],
+ [
+ "!",
+ "\""
+ ],
+ [
+ "sign",
+ "ed"
+ ],
+ [
+ "sig",
+ "ned"
+ ],
+ [
+ "s",
+ "igned"
+ ],
+ [
+ "▁y",
+ "et"
+ ],
+ [
+ "▁ye",
+ "t"
+ ],
+ [
+ "▁t",
+ "rad"
+ ],
+ [
+ "▁tr",
+ "ad"
+ ],
+ [
+ "▁tra",
+ "d"
+ ],
+ [
+ "ient",
+ "s"
+ ],
+ [
+ "ien",
+ "ts"
+ ],
+ [
+ "i",
+ "ents"
+ ],
+ [
+ "am",
+ "a"
+ ],
+ [
+ "a",
+ "ma"
+ ],
+ [
+ "▁in",
+ "put"
+ ],
+ [
+ "▁",
+ "input"
+ ],
+ [
+ "li",
+ "m"
+ ],
+ [
+ "l",
+ "im"
+ ],
+ [
+ "п",
+ "а"
+ ],
+ [
+ "▁к",
+ "а"
+ ],
+ [
+ "▁",
+ "ка"
+ ],
+ [
+ "▁c",
+ "amp"
+ ],
+ [
+ "▁cam",
+ "p"
+ ],
+ [
+ "▁ca",
+ "mp"
+ ],
+ [
+ "▁",
+ "camp"
+ ],
+ [
+ "ib",
+ "r"
+ ],
+ [
+ "i",
+ "br"
+ ],
+ [
+ "fe",
+ "ct"
+ ],
+ [
+ "f",
+ "ect"
+ ],
+ [
+ "un",
+ "t"
+ ],
+ [
+ "u",
+ "nt"
+ ],
+ [
+ "▁h",
+ "alf"
+ ],
+ [
+ "▁hal",
+ "f"
+ ],
+ [
+ "▁",
+ "half"
+ ],
+ [
+ "▁c",
+ "over"
+ ],
+ [
+ "▁co",
+ "ver"
+ ],
+ [
+ "▁cov",
+ "er"
+ ],
+ [
+ "▁",
+ "cover"
+ ],
+ [
+ "angu",
+ "age"
+ ],
+ [
+ "▁b",
+ "en"
+ ],
+ [
+ "▁be",
+ "n"
+ ],
+ [
+ "▁",
+ "ben"
+ ],
+ [
+ "h",
+ "a"
+ ],
+ [
+ "▁d",
+ "iff"
+ ],
+ [
+ "▁di",
+ "ff"
+ ],
+ [
+ "▁dif",
+ "f"
+ ],
+ [
+ "▁",
+ "diff"
+ ],
+ [
+ "_",
+ "\\"
+ ],
+ [
+ "▁о",
+ "б"
+ ],
+ [
+ "▁",
+ "об"
+ ],
+ [
+ "]",
+ ")"
+ ],
+ [
+ "od",
+ "es"
+ ],
+ [
+ "ode",
+ "s"
+ ],
+ [
+ "o",
+ "des"
+ ],
+ [
+ "he",
+ "l"
+ ],
+ [
+ "h",
+ "el"
+ ],
+ [
+ "io",
+ "s"
+ ],
+ [
+ "i",
+ "os"
+ ],
+ [
+ "▁",
+ "О"
+ ],
+ [
+ "▁m",
+ "ot"
+ ],
+ [
+ "▁mo",
+ "t"
+ ],
+ [
+ "▁",
+ "mot"
+ ],
+ [
+ "▁s",
+ "ocial"
+ ],
+ [
+ "▁so",
+ "cial"
+ ],
+ [
+ "▁soc",
+ "ial"
+ ],
+ [
+ "▁soci",
+ "al"
+ ],
+ [
+ "▁",
+ "social"
+ ],
+ [
+ "////",
+ "////"
+ ],
+ [
+ "▁s",
+ "tre"
+ ],
+ [
+ "▁st",
+ "re"
+ ],
+ [
+ "▁str",
+ "e"
+ ],
+ [
+ "▁",
+ "stre"
+ ],
+ [
+ "gr",
+ "ound"
+ ],
+ [
+ "gro",
+ "und"
+ ],
+ [
+ "g",
+ "round"
+ ],
+ [
+ "і",
+ "в"
+ ],
+ [
+ "ob",
+ "ject"
+ ],
+ [
+ "obj",
+ "ect"
+ ],
+ [
+ "pl",
+ "es"
+ ],
+ [
+ "ple",
+ "s"
+ ],
+ [
+ "p",
+ "les"
+ ],
+ [
+ "re",
+ "ed"
+ ],
+ [
+ "ree",
+ "d"
+ ],
+ [
+ "r",
+ "eed"
+ ],
+ [
+ "▁e",
+ "en"
+ ],
+ [
+ "▁",
+ "een"
+ ],
+ [
+ "▁b",
+ "ased"
+ ],
+ [
+ "▁bas",
+ "ed"
+ ],
+ [
+ "▁base",
+ "d"
+ ],
+ [
+ "▁ba",
+ "sed"
+ ],
+ [
+ "▁",
+ "based"
+ ],
+ [
+ "▁r",
+ "ange"
+ ],
+ [
+ "▁ran",
+ "ge"
+ ],
+ [
+ "▁rang",
+ "e"
+ ],
+ [
+ "▁",
+ "range"
+ ],
+ [
+ "A",
+ "n"
+ ],
+ [
+ "ur",
+ "g"
+ ],
+ [
+ "u",
+ "rg"
+ ],
+ [
+ "▁le",
+ "arn"
+ ],
+ [
+ "▁lear",
+ "n"
+ ],
+ [
+ "▁",
+ "learn"
+ ],
+ [
+ "▁e",
+ "xc"
+ ],
+ [
+ "▁ex",
+ "c"
+ ],
+ [
+ "▁",
+ "exc"
+ ],
+ [
+ "▁im",
+ "p"
+ ],
+ [
+ "▁i",
+ "mp"
+ ],
+ [
+ "▁",
+ "imp"
+ ],
+ [
+ "▁me",
+ "ans"
+ ],
+ [
+ "▁mean",
+ "s"
+ ],
+ [
+ "▁w",
+ "ur"
+ ],
+ [
+ "en",
+ "ds"
+ ],
+ [
+ "end",
+ "s"
+ ],
+ [
+ "vo",
+ "id"
+ ],
+ [
+ "v",
+ "oid"
+ ],
+ [
+ "▁s",
+ "td"
+ ],
+ [
+ "▁st",
+ "d"
+ ],
+ [
+ "▁",
+ "std"
+ ],
+ [
+ "▁part",
+ "icular"
+ ],
+ [
+ "▁partic",
+ "ular"
+ ],
+ [
+ "▁particul",
+ "ar"
+ ],
+ [
+ "▁parti",
+ "cular"
+ ],
+ [
+ "j",
+ "a"
+ ],
+ [
+ "▁s",
+ "ource"
+ ],
+ [
+ "▁sour",
+ "ce"
+ ],
+ [
+ "▁",
+ "source"
+ ],
+ [
+ "def",
+ "ault"
+ ],
+ [
+ "p",
+ "y"
+ ],
+ [
+ "▁a",
+ "ls"
+ ],
+ [
+ "▁al",
+ "s"
+ ],
+ [
+ "▁",
+ "als"
+ ],
+ [
+ "sc",
+ "ri"
+ ],
+ [
+ "scr",
+ "i"
+ ],
+ [
+ "s",
+ "cri"
+ ],
+ [
+ "st",
+ "atus"
+ ],
+ [
+ "stat",
+ "us"
+ ],
+ [
+ "▁st",
+ "ory"
+ ],
+ [
+ "▁stor",
+ "y"
+ ],
+ [
+ "▁sto",
+ "ry"
+ ],
+ [
+ "▁",
+ "story"
+ ],
+ [
+ "▁b",
+ "egin"
+ ],
+ [
+ "▁be",
+ "gin"
+ ],
+ [
+ "▁beg",
+ "in"
+ ],
+ [
+ "▁",
+ "begin"
+ ],
+ [
+ "▁pos",
+ "ition"
+ ],
+ [
+ "▁posit",
+ "ion"
+ ],
+ [
+ "▁",
+ "position"
+ ],
+ [
+ "▁spec",
+ "ial"
+ ],
+ [
+ "▁spe",
+ "cial"
+ ],
+ [
+ "▁",
+ "special"
+ ],
+ [
+ "ph",
+ "p"
+ ],
+ [
+ "p",
+ "hp"
+ ],
+ [
+ "▁b",
+ "ar"
+ ],
+ [
+ "▁ba",
+ "r"
+ ],
+ [
+ "▁",
+ "bar"
+ ],
+ [
+ "▁p",
+ "ract"
+ ],
+ [
+ "▁pr",
+ "act"
+ ],
+ [
+ "▁pra",
+ "ct"
+ ],
+ [
+ "▁prac",
+ "t"
+ ],
+ [
+ "cal",
+ "l"
+ ],
+ [
+ "ca",
+ "ll"
+ ],
+ [
+ "c",
+ "all"
+ ],
+ [
+ "▁d",
+ "as"
+ ],
+ [
+ "▁da",
+ "s"
+ ],
+ [
+ "▁",
+ "das"
+ ],
+ [
+ "▁r",
+ "ad"
+ ],
+ [
+ "▁ra",
+ "d"
+ ],
+ [
+ "▁",
+ "rad"
+ ],
+ [
+ "▁cl",
+ "ose"
+ ],
+ [
+ "▁clos",
+ "e"
+ ],
+ [
+ "▁clo",
+ "se"
+ ],
+ [
+ "▁",
+ "close"
+ ],
+ [
+ "ww",
+ "w"
+ ],
+ [
+ "w",
+ "ww"
+ ],
+ [
+ "ер",
+ "е"
+ ],
+ [
+ "е",
+ "ре"
+ ],
+ [
+ "g",
+ "u"
+ ],
+ [
+ "▁E",
+ "r"
+ ],
+ [
+ "▁",
+ "Er"
+ ],
+ [
+ "▁d",
+ "om"
+ ],
+ [
+ "▁do",
+ "m"
+ ],
+ [
+ "▁",
+ "dom"
+ ],
+ [
+ "A",
+ "M"
+ ],
+ [
+ "▁b",
+ "ed"
+ ],
+ [
+ "▁be",
+ "d"
+ ],
+ [
+ "▁",
+ "bed"
+ ],
+ [
+ "▁sever",
+ "al"
+ ],
+ [
+ "au",
+ "l"
+ ],
+ [
+ "a",
+ "ul"
+ ],
+ [
+ "bo",
+ "x"
+ ],
+ [
+ "b",
+ "ox"
+ ],
+ [
+ "▁l",
+ "ow"
+ ],
+ [
+ "▁lo",
+ "w"
+ ],
+ [
+ "▁",
+ "low"
+ ],
+ [
+ "pa",
+ "ck"
+ ],
+ [
+ "p",
+ "ack"
+ ],
+ [
+ "Re",
+ "g"
+ ],
+ [
+ "R",
+ "eg"
+ ],
+ [
+ "O",
+ "f"
+ ],
+ [
+ "at",
+ "ures"
+ ],
+ [
+ "ature",
+ "s"
+ ],
+ [
+ "atur",
+ "es"
+ ],
+ [
+ "atu",
+ "res"
+ ],
+ [
+ "é",
+ "n"
+ ],
+ [
+ "ed",
+ "er"
+ ],
+ [
+ "ede",
+ "r"
+ ],
+ [
+ "e",
+ "der"
+ ],
+ [
+ "uild",
+ "er"
+ ],
+ [
+ "ca",
+ "st"
+ ],
+ [
+ "cas",
+ "t"
+ ],
+ [
+ "c",
+ "ast"
+ ],
+ [
+ "con",
+ "om"
+ ],
+ [
+ "co",
+ "nom"
+ ],
+ [
+ "c",
+ "onom"
+ ],
+ [
+ "ra",
+ "ft"
+ ],
+ [
+ "raf",
+ "t"
+ ],
+ [
+ "r",
+ "aft"
+ ],
+ [
+ "▁m",
+ "akes"
+ ],
+ [
+ "▁make",
+ "s"
+ ],
+ [
+ "▁ma",
+ "kes"
+ ],
+ [
+ "Lo",
+ "c"
+ ],
+ [
+ "L",
+ "oc"
+ ],
+ [
+ "ht",
+ "tp"
+ ],
+ [
+ "htt",
+ "p"
+ ],
+ [
+ "h",
+ "ttp"
+ ],
+ [
+ "▁a",
+ "bs"
+ ],
+ [
+ "▁ab",
+ "s"
+ ],
+ [
+ "▁",
+ "abs"
+ ],
+ [
+ "re",
+ "sh"
+ ],
+ [
+ "res",
+ "h"
+ ],
+ [
+ "r",
+ "esh"
+ ],
+ [
+ "▁W",
+ "ill"
+ ],
+ [
+ "▁Wil",
+ "l"
+ ],
+ [
+ "▁Wi",
+ "ll"
+ ],
+ [
+ "▁",
+ "Will"
+ ],
+ [
+ "bre",
+ "ak"
+ ],
+ [
+ "b",
+ "reak"
+ ],
+ [
+ "▁o",
+ "ptions"
+ ],
+ [
+ "▁opt",
+ "ions"
+ ],
+ [
+ "▁option",
+ "s"
+ ],
+ [
+ "▁",
+ "options"
+ ],
+ [
+ "fo",
+ "rt"
+ ],
+ [
+ "for",
+ "t"
+ ],
+ [
+ "f",
+ "ort"
+ ],
+ [
+ "▁и",
+ "з"
+ ],
+ [
+ "▁",
+ "из"
+ ],
+ [
+ "▁a",
+ "nal"
+ ],
+ [
+ "▁an",
+ "al"
+ ],
+ [
+ "▁",
+ "anal"
+ ],
+ [
+ "▁e",
+ "nv"
+ ],
+ [
+ "▁en",
+ "v"
+ ],
+ [
+ "▁",
+ "env"
+ ],
+ [
+ "(",
+ "{"
+ ],
+ [
+ "ev",
+ "ent"
+ ],
+ [
+ "even",
+ "t"
+ ],
+ [
+ "eve",
+ "nt"
+ ],
+ [
+ "e",
+ "vent"
+ ],
+ [
+ "▁p",
+ "age"
+ ],
+ [
+ "▁pa",
+ "ge"
+ ],
+ [
+ "▁pag",
+ "e"
+ ],
+ [
+ "▁",
+ "page"
+ ],
+ [
+ "ter",
+ "nal"
+ ],
+ [
+ "tern",
+ "al"
+ ],
+ [
+ "▁d",
+ "istribut"
+ ],
+ [
+ "▁dist",
+ "ribut"
+ ],
+ [
+ "▁f",
+ "ood"
+ ],
+ [
+ "▁fo",
+ "od"
+ ],
+ [
+ "▁foo",
+ "d"
+ ],
+ [
+ "▁",
+ "food"
+ ],
+ [
+ "che",
+ "ck"
+ ],
+ [
+ "c",
+ "heck"
+ ],
+ [
+ "C",
+ "K"
+ ],
+ [
+ "▁в",
+ "о"
+ ],
+ [
+ "▁",
+ "во"
+ ],
+ [
+ "as",
+ "sert"
+ ],
+ [
+ "ass",
+ "ert"
+ ],
+ [
+ "asse",
+ "rt"
+ ],
+ [
+ "á",
+ "n"
+ ],
+ [
+ "ba",
+ "se"
+ ],
+ [
+ "bas",
+ "e"
+ ],
+ [
+ "b",
+ "ase"
+ ],
+ [
+ "▁w",
+ "hole"
+ ],
+ [
+ "▁wh",
+ "ole"
+ ],
+ [
+ "▁who",
+ "le"
+ ],
+ [
+ "ac",
+ "ión"
+ ],
+ [
+ "ació",
+ "n"
+ ],
+ [
+ "aci",
+ "ón"
+ ],
+ [
+ "a",
+ "ción"
+ ],
+ [
+ "O",
+ "D"
+ ],
+ [
+ "▁turn",
+ "ed"
+ ],
+ [
+ "▁tur",
+ "ned"
+ ],
+ [
+ "ig",
+ "ma"
+ ],
+ [
+ "▁res",
+ "ponse"
+ ],
+ [
+ "▁respon",
+ "se"
+ ],
+ [
+ "▁respons",
+ "e"
+ ],
+ [
+ "▁",
+ "response"
+ ],
+ [
+ "▁Univers",
+ "ity"
+ ],
+ [
+ "▁d",
+ "iv"
+ ],
+ [
+ "▁di",
+ "v"
+ ],
+ [
+ "▁",
+ "div"
+ ],
+ [
+ "ap",
+ "ter"
+ ],
+ [
+ "apt",
+ "er"
+ ],
+ [
+ "▁result",
+ "s"
+ ],
+ [
+ "▁",
+ "results"
+ ],
+ [
+ "▁re",
+ "present"
+ ],
+ [
+ "▁rep",
+ "resent"
+ ],
+ [
+ "▁every",
+ "thing"
+ ],
+ [
+ "▁C",
+ "ent"
+ ],
+ [
+ "▁Ce",
+ "nt"
+ ],
+ [
+ "▁",
+ "Cent"
+ ],
+ [
+ "ut",
+ "es"
+ ],
+ [
+ "ute",
+ "s"
+ ],
+ [
+ "u",
+ "tes"
+ ],
+ [
+ "ri",
+ "x"
+ ],
+ [
+ "r",
+ "ix"
+ ],
+ [
+ "▁S",
+ "ome"
+ ],
+ [
+ "▁So",
+ "me"
+ ],
+ [
+ "▁Som",
+ "e"
+ ],
+ [
+ "▁",
+ "Some"
+ ],
+ [
+ "▁be",
+ "hind"
+ ],
+ [
+ "▁beh",
+ "ind"
+ ],
+ [
+ "▁c",
+ "reat"
+ ],
+ [
+ "▁cre",
+ "at"
+ ],
+ [
+ "▁",
+ "creat"
+ ],
+ [
+ "pl",
+ "ace"
+ ],
+ [
+ "plac",
+ "e"
+ ],
+ [
+ "p",
+ "lace"
+ ],
+ [
+ "s",
+ "u"
+ ],
+ [
+ "▁P",
+ "art"
+ ],
+ [
+ "▁Par",
+ "t"
+ ],
+ [
+ "▁Pa",
+ "rt"
+ ],
+ [
+ "▁",
+ "Part"
+ ],
+ [
+ "um",
+ "b"
+ ],
+ [
+ "u",
+ "mb"
+ ],
+ [
+ "math",
+ "bb"
+ ],
+ [
+ "pi",
+ "ng"
+ ],
+ [
+ "pin",
+ "g"
+ ],
+ [
+ "p",
+ "ing"
+ ],
+ [
+ "▁m",
+ "atch"
+ ],
+ [
+ "▁mat",
+ "ch"
+ ],
+ [
+ "▁",
+ "match"
+ ],
+ [
+ "O",
+ "ut"
+ ],
+ [
+ "do",
+ "m"
+ ],
+ [
+ "d",
+ "om"
+ ],
+ [
+ "▁s",
+ "itu"
+ ],
+ [
+ "▁sit",
+ "u"
+ ],
+ [
+ "▁si",
+ "tu"
+ ],
+ [
+ "d",
+ "r"
+ ],
+ [
+ "ar",
+ "a"
+ ],
+ [
+ "a",
+ "ra"
+ ],
+ [
+ "▁w",
+ "indow"
+ ],
+ [
+ "▁wind",
+ "ow"
+ ],
+ [
+ "▁",
+ "window"
+ ],
+ [
+ "n",
+ "s"
+ ],
+ [
+ "lish",
+ "ed"
+ ],
+ [
+ "l",
+ "ished"
+ ],
+ [
+ "▁V",
+ "er"
+ ],
+ [
+ "▁Ve",
+ "r"
+ ],
+ [
+ "▁",
+ "Ver"
+ ],
+ [
+ "▁m",
+ "essage"
+ ],
+ [
+ "▁mess",
+ "age"
+ ],
+ [
+ "▁",
+ "message"
+ ],
+ [
+ "▁E",
+ "m"
+ ],
+ [
+ "▁",
+ "Em"
+ ],
+ [
+ "▁h",
+ "uman"
+ ],
+ [
+ "▁hum",
+ "an"
+ ],
+ [
+ "▁",
+ "human"
+ ],
+ [
+ "per",
+ "ties"
+ ],
+ [
+ "pert",
+ "ies"
+ ],
+ [
+ "л",
+ "у"
+ ],
+ [
+ "le",
+ "m"
+ ],
+ [
+ "l",
+ "em"
+ ],
+ [
+ "OR",
+ "T"
+ ],
+ [
+ "O",
+ "RT"
+ ],
+ [
+ "▁e",
+ "arly"
+ ],
+ [
+ "▁ear",
+ "ly"
+ ],
+ [
+ "▁qu",
+ "ick"
+ ],
+ [
+ "▁qui",
+ "ck"
+ ],
+ [
+ "▁",
+ "quick"
+ ],
+ [
+ "▁т",
+ "а"
+ ],
+ [
+ "▁",
+ "та"
+ ],
+ [
+ "ro",
+ "id"
+ ],
+ [
+ "r",
+ "oid"
+ ],
+ [
+ "▁c",
+ "ountry"
+ ],
+ [
+ "▁coun",
+ "try"
+ ],
+ [
+ "▁count",
+ "ry"
+ ],
+ [
+ "▁countr",
+ "y"
+ ],
+ [
+ "▁",
+ "country"
+ ],
+ [
+ "▁d",
+ "ue"
+ ],
+ [
+ "▁du",
+ "e"
+ ],
+ [
+ "▁",
+ "due"
+ ],
+ [
+ "▁D",
+ "ie"
+ ],
+ [
+ "▁Di",
+ "e"
+ ],
+ [
+ "▁",
+ "Die"
+ ],
+ [
+ "▁t",
+ "rying"
+ ],
+ [
+ "▁tr",
+ "ying"
+ ],
+ [
+ "▁try",
+ "ing"
+ ],
+ [
+ "▁l",
+ "ive"
+ ],
+ [
+ "▁li",
+ "ve"
+ ],
+ [
+ "▁liv",
+ "e"
+ ],
+ [
+ "▁",
+ "live"
+ ],
+ [
+ "▁p",
+ "ress"
+ ],
+ [
+ "▁pre",
+ "ss"
+ ],
+ [
+ "▁pr",
+ "ess"
+ ],
+ [
+ "▁pres",
+ "s"
+ ],
+ [
+ "▁",
+ "press"
+ ],
+ [
+ "IN",
+ "T"
+ ],
+ [
+ "I",
+ "NT"
+ ],
+ [
+ "W",
+ "ith"
+ ],
+ [
+ "ov",
+ "ed"
+ ],
+ [
+ "ove",
+ "d"
+ ],
+ [
+ "o",
+ "ved"
+ ],
+ [
+ "▁spec",
+ "ific"
+ ],
+ [
+ "▁",
+ "specific"
+ ],
+ [
+ "▁f",
+ "all"
+ ],
+ [
+ "▁fa",
+ "ll"
+ ],
+ [
+ "▁fal",
+ "l"
+ ],
+ [
+ "▁",
+ "fall"
+ ],
+ [
+ "u",
+ "k"
+ ],
+ [
+ "y",
+ "l"
+ ],
+ [
+ "▁gener",
+ "al"
+ ],
+ [
+ "▁gen",
+ "eral"
+ ],
+ [
+ "▁gene",
+ "ral"
+ ],
+ [
+ "▁",
+ "general"
+ ],
+ [
+ "м",
+ "у"
+ ],
+ [
+ "н",
+ "у"
+ ],
+ [
+ "▁n",
+ "ames"
+ ],
+ [
+ "▁name",
+ "s"
+ ],
+ [
+ "▁na",
+ "mes"
+ ],
+ [
+ "▁nam",
+ "es"
+ ],
+ [
+ "▁",
+ "names"
+ ],
+ [
+ "wh",
+ "ere"
+ ],
+ [
+ "whe",
+ "re"
+ ],
+ [
+ "w",
+ "here"
+ ],
+ [
+ "▁The",
+ "se"
+ ],
+ [
+ "▁Th",
+ "ese"
+ ],
+ [
+ "▁",
+ "These"
+ ],
+ [
+ "▁s",
+ "il"
+ ],
+ [
+ "▁si",
+ "l"
+ ],
+ [
+ "▁",
+ "sil"
+ ],
+ [
+ "é",
+ "t"
+ ],
+ [
+ "▁e",
+ "ner"
+ ],
+ [
+ "▁en",
+ "er"
+ ],
+ [
+ "▁",
+ "ener"
+ ],
+ [
+ "▁N",
+ "ow"
+ ],
+ [
+ "▁No",
+ "w"
+ ],
+ [
+ "▁",
+ "Now"
+ ],
+ [
+ "▁add",
+ "ress"
+ ],
+ [
+ "▁addr",
+ "ess"
+ ],
+ [
+ "▁",
+ "address"
+ ],
+ [
+ "Res",
+ "ponse"
+ ],
+ [
+ "▁M",
+ "r"
+ ],
+ [
+ "▁",
+ "Mr"
+ ],
+ [
+ "▁an",
+ "sw"
+ ],
+ [
+ "▁ans",
+ "w"
+ ],
+ [
+ "▁fil",
+ "m"
+ ],
+ [
+ "▁fi",
+ "lm"
+ ],
+ [
+ "▁",
+ "film"
+ ],
+ [
+ "▁str",
+ "ong"
+ ],
+ [
+ "▁stro",
+ "ng"
+ ],
+ [
+ "▁",
+ "strong"
+ ],
+ [
+ "▁b",
+ "ring"
+ ],
+ [
+ "▁br",
+ "ing"
+ ],
+ [
+ "▁Un",
+ "ited"
+ ],
+ [
+ "▁Unit",
+ "ed"
+ ],
+ [
+ "▁g",
+ "e"
+ ],
+ [
+ "▁",
+ "ge"
+ ],
+ [
+ "▁w",
+ "oman"
+ ],
+ [
+ "▁wom",
+ "an"
+ ],
+ [
+ "▁wo",
+ "man"
+ ],
+ [
+ "▁",
+ "woman"
+ ],
+ [
+ "Ne",
+ "w"
+ ],
+ [
+ "N",
+ "ew"
+ ],
+ [
+ "et",
+ "t"
+ ],
+ [
+ "e",
+ "tt"
+ ],
+ [
+ ".",
+ ")"
+ ],
+ [
+ "en",
+ "ame"
+ ],
+ [
+ "ena",
+ "me"
+ ],
+ [
+ "e",
+ "name"
+ ],
+ [
+ "▁A",
+ "N"
+ ],
+ [
+ "▁",
+ "AN"
+ ],
+ [
+ "▁de",
+ "scrib"
+ ],
+ [
+ "▁desc",
+ "rib"
+ ],
+ [
+ "з",
+ "а"
+ ],
+ [
+ "is",
+ "ing"
+ ],
+ [
+ "isi",
+ "ng"
+ ],
+ [
+ "i",
+ "sing"
+ ],
+ [
+ "E",
+ "L"
+ ],
+ [
+ "q",
+ "l"
+ ],
+ [
+ "▁f",
+ "ur"
+ ],
+ [
+ "▁fu",
+ "r"
+ ],
+ [
+ "▁",
+ "fur"
+ ],
+ [
+ "y",
+ "ing"
+ ],
+ [
+ "▁C",
+ "al"
+ ],
+ [
+ "▁Ca",
+ "l"
+ ],
+ [
+ "▁",
+ "Cal"
+ ],
+ [
+ "▁D",
+ "r"
+ ],
+ [
+ "▁",
+ "Dr"
+ ],
+ [
+ "ER",
+ "R"
+ ],
+ [
+ "E",
+ "RR"
+ ],
+ [
+ "▁\\",
+ "\\"
+ ],
+ [
+ "▁",
+ "\\\\"
+ ],
+ [
+ "an",
+ "gle"
+ ],
+ [
+ "ang",
+ "le"
+ ],
+ [
+ "ur",
+ "ope"
+ ],
+ [
+ "uro",
+ "pe"
+ ],
+ [
+ "urop",
+ "e"
+ ],
+ [
+ "▁c",
+ "ity"
+ ],
+ [
+ "▁cit",
+ "y"
+ ],
+ [
+ "▁ci",
+ "ty"
+ ],
+ [
+ "▁",
+ "city"
+ ],
+ [
+ "▁in",
+ "dex"
+ ],
+ [
+ "▁ind",
+ "ex"
+ ],
+ [
+ "▁inde",
+ "x"
+ ],
+ [
+ "▁",
+ "index"
+ ],
+ [
+ "▁a",
+ "ction"
+ ],
+ [
+ "▁act",
+ "ion"
+ ],
+ [
+ "▁",
+ "action"
+ ],
+ [
+ "▁How",
+ "ever"
+ ],
+ [
+ "▁",
+ "However"
+ ],
+ [
+ "▁f",
+ "ig"
+ ],
+ [
+ "▁fi",
+ "g"
+ ],
+ [
+ "▁",
+ "fig"
+ ],
+ [
+ "ia",
+ "s"
+ ],
+ [
+ "i",
+ "as"
+ ],
+ [
+ "▁quest",
+ "ion"
+ ],
+ [
+ "▁",
+ "question"
+ ],
+ [
+ "▁J",
+ "an"
+ ],
+ [
+ "▁Ja",
+ "n"
+ ],
+ [
+ "▁",
+ "Jan"
+ ],
+ [
+ "▁M",
+ "ed"
+ ],
+ [
+ "▁Me",
+ "d"
+ ],
+ [
+ "▁",
+ "Med"
+ ],
+ [
+ "▁C",
+ "ont"
+ ],
+ [
+ "▁Con",
+ "t"
+ ],
+ [
+ "▁Co",
+ "nt"
+ ],
+ [
+ "▁",
+ "Cont"
+ ],
+ [
+ "am",
+ "ed"
+ ],
+ [
+ "ame",
+ "d"
+ ],
+ [
+ "a",
+ "med"
+ ],
+ [
+ "Cal",
+ "l"
+ ],
+ [
+ "C",
+ "all"
+ ],
+ [
+ "pl",
+ "ied"
+ ],
+ [
+ "tt",
+ "y"
+ ],
+ [
+ "t",
+ "ty"
+ ],
+ [
+ "▁ind",
+ "ivid"
+ ],
+ [
+ "pa",
+ "ge"
+ ],
+ [
+ "pag",
+ "e"
+ ],
+ [
+ "p",
+ "age"
+ ],
+ [
+ "▁c",
+ "omb"
+ ],
+ [
+ "▁com",
+ "b"
+ ],
+ [
+ "▁co",
+ "mb"
+ ],
+ [
+ "▁",
+ "comb"
+ ],
+ [
+ "se",
+ "ction"
+ ],
+ [
+ "sect",
+ "ion"
+ ],
+ [
+ "s",
+ "ection"
+ ],
+ [
+ "▁C",
+ "omm"
+ ],
+ [
+ "▁Com",
+ "m"
+ ],
+ [
+ "▁Co",
+ "mm"
+ ],
+ [
+ "▁",
+ "Comm"
+ ],
+ [
+ "ue",
+ "l"
+ ],
+ [
+ "u",
+ "el"
+ ],
+ [
+ "▁h",
+ "et"
+ ],
+ [
+ "▁he",
+ "t"
+ ],
+ [
+ "▁",
+ "het"
+ ],
+ [
+ "▁B",
+ "ar"
+ ],
+ [
+ "▁Ba",
+ "r"
+ ],
+ [
+ "▁",
+ "Bar"
+ ],
+ [
+ "ag",
+ "ement"
+ ],
+ [
+ "age",
+ "ment"
+ ],
+ [
+ "agem",
+ "ent"
+ ],
+ [
+ "fi",
+ "n"
+ ],
+ [
+ "f",
+ "in"
+ ],
+ [
+ "▁m",
+ "ajor"
+ ],
+ [
+ "▁ma",
+ "jor"
+ ],
+ [
+ "▁maj",
+ "or"
+ ],
+ [
+ "▁",
+ "major"
+ ],
+ [
+ "op",
+ "er"
+ ],
+ [
+ "ope",
+ "r"
+ ],
+ [
+ "o",
+ "per"
+ ],
+ [
+ "ap",
+ "i"
+ ],
+ [
+ "a",
+ "pi"
+ ],
+ [
+ "ro",
+ "om"
+ ],
+ [
+ "r",
+ "oom"
+ ],
+ [
+ "▁",
+ "„"
+ ],
+ [
+ "▁h",
+ "ab"
+ ],
+ [
+ "▁ha",
+ "b"
+ ],
+ [
+ "▁",
+ "hab"
+ ],
+ [
+ "з",
+ "и"
+ ],
+ [
+ "▁a",
+ "uf"
+ ],
+ [
+ "▁au",
+ "f"
+ ],
+ [
+ "▁",
+ "auf"
+ ],
+ [
+ "cur",
+ "rent"
+ ],
+ [
+ "curr",
+ "ent"
+ ],
+ [
+ "n",
+ "i"
+ ],
+ [
+ "▁in",
+ "clude"
+ ],
+ [
+ "▁incl",
+ "ude"
+ ],
+ [
+ "▁includ",
+ "e"
+ ],
+ [
+ "▁inclu",
+ "de"
+ ],
+ [
+ "▁",
+ "include"
+ ],
+ [
+ "▁qu",
+ "i"
+ ],
+ [
+ "▁q",
+ "ui"
+ ],
+ [
+ "v",
+ "a"
+ ],
+ [
+ "U",
+ "E"
+ ],
+ [
+ "▁ide",
+ "a"
+ ],
+ [
+ "▁id",
+ "ea"
+ ],
+ [
+ "▁",
+ "idea"
+ ],
+ [
+ ",",
+ "'"
+ ],
+ [
+ "▁requ",
+ "ired"
+ ],
+ [
+ "▁require",
+ "d"
+ ],
+ [
+ "▁",
+ "required"
+ ],
+ [
+ "▁he",
+ "art"
+ ],
+ [
+ "▁hear",
+ "t"
+ ],
+ [
+ "▁",
+ "heart"
+ ],
+ [
+ "ib",
+ "ility"
+ ],
+ [
+ "ibil",
+ "ity"
+ ],
+ [
+ "ict",
+ "ion"
+ ],
+ [
+ "i",
+ "ction"
+ ],
+ [
+ "Mod",
+ "el"
+ ],
+ [
+ "Mode",
+ "l"
+ ],
+ [
+ "Mo",
+ "del"
+ ],
+ [
+ "wr",
+ "ite"
+ ],
+ [
+ "writ",
+ "e"
+ ],
+ [
+ "w",
+ "rite"
+ ],
+ [
+ "▁cont",
+ "ent"
+ ],
+ [
+ "▁conten",
+ "t"
+ ],
+ [
+ "▁",
+ "content"
+ ],
+ [
+ "▁w",
+ "er"
+ ],
+ [
+ "▁we",
+ "r"
+ ],
+ [
+ "▁",
+ "wer"
+ ],
+ [
+ "▁h",
+ "ands"
+ ],
+ [
+ "▁hand",
+ "s"
+ ],
+ [
+ "▁han",
+ "ds"
+ ],
+ [
+ "ze",
+ "n"
+ ],
+ [
+ "z",
+ "en"
+ ],
+ [
+ "ch",
+ "ar"
+ ],
+ [
+ "cha",
+ "r"
+ ],
+ [
+ "c",
+ "har"
+ ],
+ [
+ "}^",
+ "{"
+ ],
+ [
+ "}",
+ "^{"
+ ],
+ [
+ "▁m",
+ "ass"
+ ],
+ [
+ "▁ma",
+ "ss"
+ ],
+ [
+ "▁mas",
+ "s"
+ ],
+ [
+ "▁",
+ "mass"
+ ],
+ [
+ "pl",
+ "y"
+ ],
+ [
+ "p",
+ "ly"
+ ],
+ [
+ "▁n",
+ "at"
+ ],
+ [
+ "▁na",
+ "t"
+ ],
+ [
+ "▁",
+ "nat"
+ ],
+ [
+ "re",
+ "l"
+ ],
+ [
+ "r",
+ "el"
+ ],
+ [
+ "▁d",
+ "at"
+ ],
+ [
+ "▁da",
+ "t"
+ ],
+ [
+ "▁",
+ "dat"
+ ],
+ [
+ "====",
+ "============"
+ ],
+ [
+ "========",
+ "========"
+ ],
+ [
+ "============",
+ "===="
+ ],
+ [
+ "im",
+ "al"
+ ],
+ [
+ "ima",
+ "l"
+ ],
+ [
+ "i",
+ "mal"
+ ],
+ [
+ "▁pro",
+ "bably"
+ ],
+ [
+ "▁prob",
+ "ably"
+ ],
+ [
+ "un",
+ "ch"
+ ],
+ [
+ "unc",
+ "h"
+ ],
+ [
+ "▁m",
+ "er"
+ ],
+ [
+ "▁me",
+ "r"
+ ],
+ [
+ "▁",
+ "mer"
+ ],
+ [
+ "il",
+ "ar"
+ ],
+ [
+ "ila",
+ "r"
+ ],
+ [
+ "i",
+ "lar"
+ ],
+ [
+ "ir",
+ "es"
+ ],
+ [
+ "ire",
+ "s"
+ ],
+ [
+ "i",
+ "res"
+ ],
+ [
+ "▁w",
+ "atch"
+ ],
+ [
+ "▁wat",
+ "ch"
+ ],
+ [
+ "▁",
+ "watch"
+ ],
+ [
+ "S",
+ "I"
+ ],
+ [
+ "▁c",
+ "ult"
+ ],
+ [
+ "▁cu",
+ "lt"
+ ],
+ [
+ "▁cul",
+ "t"
+ ],
+ [
+ "▁m",
+ "other"
+ ],
+ [
+ "▁mot",
+ "her"
+ ],
+ [
+ "▁mo",
+ "ther"
+ ],
+ [
+ "▁",
+ "mother"
+ ],
+ [
+ "▁govern",
+ "ment"
+ ],
+ [
+ "or",
+ "ding"
+ ],
+ [
+ "ord",
+ "ing"
+ ],
+ [
+ "▁(",
+ ")"
+ ],
+ [
+ "▁",
+ "()"
+ ],
+ [
+ "▁p",
+ "ri"
+ ],
+ [
+ "▁pr",
+ "i"
+ ],
+ [
+ "▁l",
+ "ink"
+ ],
+ [
+ "▁lin",
+ "k"
+ ],
+ [
+ "▁",
+ "link"
+ ],
+ [
+ "gr",
+ "oup"
+ ],
+ [
+ "gro",
+ "up"
+ ],
+ [
+ "g",
+ "roup"
+ ],
+ [
+ "O",
+ "L"
+ ],
+ [
+ "▁n",
+ "ear"
+ ],
+ [
+ "▁ne",
+ "ar"
+ ],
+ [
+ "▁S",
+ "er"
+ ],
+ [
+ "▁Se",
+ "r"
+ ],
+ [
+ "▁",
+ "Ser"
+ ],
+ [
+ "Se",
+ "r"
+ ],
+ [
+ "S",
+ "er"
+ ],
+ [
+ "it",
+ "o"
+ ],
+ [
+ "i",
+ "to"
+ ],
+ [
+ "▁value",
+ "s"
+ ],
+ [
+ "▁val",
+ "ues"
+ ],
+ [
+ "▁",
+ "values"
+ ],
+ [
+ "▁j",
+ "ava"
+ ],
+ [
+ "▁ja",
+ "va"
+ ],
+ [
+ "▁",
+ "java"
+ ],
+ [
+ "ful",
+ "ly"
+ ],
+ [
+ "full",
+ "y"
+ ],
+ [
+ "f",
+ "ully"
+ ],
+ [
+ "Co",
+ "unt"
+ ],
+ [
+ "C",
+ "ount"
+ ],
+ [
+ "++",
+ ")"
+ ],
+ [
+ "▁v",
+ "i"
+ ],
+ [
+ "▁",
+ "vi"
+ ],
+ [
+ "▁wh",
+ "ite"
+ ],
+ [
+ "▁",
+ "white"
+ ],
+ [
+ "ma",
+ "t"
+ ],
+ [
+ "m",
+ "at"
+ ],
+ [
+ "ct",
+ "x"
+ ],
+ [
+ "c",
+ "tx"
+ ],
+ [
+ "▁con",
+ "c"
+ ],
+ [
+ "▁co",
+ "nc"
+ ],
+ [
+ "▁",
+ "conc"
+ ],
+ [
+ "▁st",
+ "ay"
+ ],
+ [
+ "▁sta",
+ "y"
+ ],
+ [
+ "gi",
+ "ng"
+ ],
+ [
+ "gin",
+ "g"
+ ],
+ [
+ "g",
+ "ing"
+ ],
+ [
+ "▁c",
+ "lear"
+ ],
+ [
+ "▁cl",
+ "ear"
+ ],
+ [
+ "▁cle",
+ "ar"
+ ],
+ [
+ "▁",
+ "clear"
+ ],
+ [
+ "▁c",
+ "opy"
+ ],
+ [
+ "▁co",
+ "py"
+ ],
+ [
+ "▁cop",
+ "y"
+ ],
+ [
+ "▁",
+ "copy"
+ ],
+ [
+ "sel",
+ "ves"
+ ],
+ [
+ "▁prov",
+ "ide"
+ ],
+ [
+ "▁w",
+ "ords"
+ ],
+ [
+ "▁wor",
+ "ds"
+ ],
+ [
+ "▁word",
+ "s"
+ ],
+ [
+ "▁",
+ "words"
+ ],
+ [
+ "com",
+ "p"
+ ],
+ [
+ "co",
+ "mp"
+ ],
+ [
+ "c",
+ "omp"
+ ],
+ [
+ "ar",
+ "gs"
+ ],
+ [
+ "arg",
+ "s"
+ ],
+ [
+ "▁p",
+ "ick"
+ ],
+ [
+ "▁pi",
+ "ck"
+ ],
+ [
+ "▁pic",
+ "k"
+ ],
+ [
+ "▁",
+ "pick"
+ ],
+ [
+ "ul",
+ "y"
+ ],
+ [
+ "u",
+ "ly"
+ ],
+ [
+ "▁v",
+ "ari"
+ ],
+ [
+ "▁var",
+ "i"
+ ],
+ [
+ "▁va",
+ "ri"
+ ],
+ [
+ "▁",
+ "vari"
+ ],
+ [
+ "▁bel",
+ "ieve"
+ ],
+ [
+ "▁belie",
+ "ve"
+ ],
+ [
+ "▁C",
+ "o"
+ ],
+ [
+ "▁",
+ "Co"
+ ],
+ [
+ "Pro",
+ "perty"
+ ],
+ [
+ "Gr",
+ "oup"
+ ],
+ [
+ "G",
+ "roup"
+ ],
+ [
+ "▁t",
+ "en"
+ ],
+ [
+ "▁te",
+ "n"
+ ],
+ [
+ "▁",
+ "ten"
+ ],
+ [
+ "is",
+ "chen"
+ ],
+ [
+ "isch",
+ "en"
+ ],
+ [
+ "ische",
+ "n"
+ ],
+ [
+ "isc",
+ "hen"
+ ],
+ [
+ "i",
+ "schen"
+ ],
+ [
+ "et",
+ "urn"
+ ],
+ [
+ "e",
+ "turn"
+ ],
+ [
+ "iv",
+ "al"
+ ],
+ [
+ "iva",
+ "l"
+ ],
+ [
+ "i",
+ "val"
+ ],
+ [
+ "Sys",
+ "tem"
+ ],
+ [
+ "S",
+ "ystem"
+ ],
+ [
+ "C",
+ "L"
+ ],
+ [
+ "be",
+ "d"
+ ],
+ [
+ "b",
+ "ed"
+ ],
+ [
+ "▁t",
+ "otal"
+ ],
+ [
+ "▁to",
+ "tal"
+ ],
+ [
+ "▁tot",
+ "al"
+ ],
+ [
+ "▁",
+ "total"
+ ],
+ [
+ "▁is",
+ "t"
+ ],
+ [
+ "▁i",
+ "st"
+ ],
+ [
+ "▁",
+ "ist"
+ ],
+ [
+ "In",
+ "put"
+ ],
+ [
+ "um",
+ "ents"
+ ],
+ [
+ "ument",
+ "s"
+ ],
+ [
+ "umen",
+ "ts"
+ ],
+ [
+ "u",
+ "ments"
+ ],
+ [
+ "Man",
+ "ager"
+ ],
+ [
+ "ш",
+ "и"
+ ],
+ [
+ "▁w",
+ "in"
+ ],
+ [
+ "▁",
+ "win"
+ ],
+ [
+ "le",
+ "ep"
+ ],
+ [
+ "lee",
+ "p"
+ ],
+ [
+ "P",
+ "I"
+ ],
+ [
+ "но",
+ "го"
+ ],
+ [
+ "н",
+ "ого"
+ ],
+ [
+ "ru",
+ "ction"
+ ],
+ [
+ "ruct",
+ "ion"
+ ],
+ [
+ "r",
+ "uction"
+ ],
+ [
+ "▁in",
+ "te"
+ ],
+ [
+ "▁i",
+ "nte"
+ ],
+ [
+ "▁int",
+ "e"
+ ],
+ [
+ "▁",
+ "inte"
+ ],
+ [
+ "Ap",
+ "p"
+ ],
+ [
+ "A",
+ "pp"
+ ],
+ [
+ "av",
+ "or"
+ ],
+ [
+ "avo",
+ "r"
+ ],
+ [
+ "a",
+ "vor"
+ ],
+ [
+ "▁re",
+ "spect"
+ ],
+ [
+ "▁res",
+ "pect"
+ ],
+ [
+ "▁resp",
+ "ect"
+ ],
+ [
+ "▁",
+ "respect"
+ ],
+ [
+ "at",
+ "ors"
+ ],
+ [
+ "ator",
+ "s"
+ ],
+ [
+ "ato",
+ "rs"
+ ],
+ [
+ "▁c",
+ "omo"
+ ],
+ [
+ "▁com",
+ "o"
+ ],
+ [
+ "▁co",
+ "mo"
+ ],
+ [
+ "▁c",
+ "ut"
+ ],
+ [
+ "▁cu",
+ "t"
+ ],
+ [
+ "▁",
+ "cut"
+ ],
+ [
+ "F",
+ "A"
+ ],
+ [
+ "▁s",
+ "us"
+ ],
+ [
+ "▁su",
+ "s"
+ ],
+ [
+ "▁A",
+ "pp"
+ ],
+ [
+ "▁Ap",
+ "p"
+ ],
+ [
+ "▁",
+ "App"
+ ],
+ [
+ "re",
+ "ct"
+ ],
+ [
+ "rec",
+ "t"
+ ],
+ [
+ "r",
+ "ect"
+ ],
+ [
+ "F",
+ "I"
+ ],
+ [
+ "▁be",
+ "gan"
+ ],
+ [
+ "▁beg",
+ "an"
+ ],
+ [
+ "op",
+ "h"
+ ],
+ [
+ "o",
+ "ph"
+ ],
+ [
+ "▁s",
+ "ort"
+ ],
+ [
+ "▁so",
+ "rt"
+ ],
+ [
+ "▁sor",
+ "t"
+ ],
+ [
+ "▁",
+ "sort"
+ ],
+ [
+ "th",
+ "ough"
+ ],
+ [
+ "ј",
+ "е"
+ ],
+ [
+ "ic",
+ "ro"
+ ],
+ [
+ "i",
+ "cro"
+ ],
+ [
+ "Tr",
+ "ans"
+ ],
+ [
+ "Tra",
+ "ns"
+ ],
+ [
+ "л",
+ "і"
+ ],
+ [
+ "▁In",
+ "st"
+ ],
+ [
+ "▁Ins",
+ "t"
+ ],
+ [
+ "▁",
+ "Inst"
+ ],
+ [
+ "re",
+ "quest"
+ ],
+ [
+ "requ",
+ "est"
+ ],
+ [
+ "req",
+ "uest"
+ ],
+ [
+ "о",
+ "р"
+ ],
+ [
+ "▁rel",
+ "ations"
+ ],
+ [
+ "▁relation",
+ "s"
+ ],
+ [
+ "-",
+ "\\"
+ ],
+ [
+ "St",
+ "atus"
+ ],
+ [
+ "Stat",
+ "us"
+ ],
+ [
+ "ж",
+ "и"
+ ],
+ [
+ "▁f",
+ "ather"
+ ],
+ [
+ "▁fa",
+ "ther"
+ ],
+ [
+ "▁fat",
+ "her"
+ ],
+ [
+ "▁",
+ "father"
+ ],
+ [
+ "c",
+ "s"
+ ],
+ [
+ "▁s",
+ "ex"
+ ],
+ [
+ "▁se",
+ "x"
+ ],
+ [
+ "▁",
+ "sex"
+ ],
+ [
+ "is",
+ "ch"
+ ],
+ [
+ "isc",
+ "h"
+ ],
+ [
+ "i",
+ "sch"
+ ],
+ [
+ "v",
+ "o"
+ ],
+ [
+ "}_",
+ "{"
+ ],
+ [
+ "}",
+ "_{"
+ ],
+ [
+ "ave",
+ "n"
+ ],
+ [
+ "av",
+ "en"
+ ],
+ [
+ "a",
+ "ven"
+ ],
+ [
+ "▁N",
+ "e"
+ ],
+ [
+ "▁",
+ "Ne"
+ ],
+ [
+ "AT",
+ "E"
+ ],
+ [
+ "A",
+ "TE"
+ ],
+ [
+ "it",
+ "ten"
+ ],
+ [
+ "itt",
+ "en"
+ ],
+ [
+ "itte",
+ "n"
+ ],
+ [
+ "▁e",
+ "ss"
+ ],
+ [
+ "▁es",
+ "s"
+ ],
+ [
+ "▁",
+ "ess"
+ ],
+ [
+ "T",
+ "H"
+ ],
+ [
+ "ight",
+ "s"
+ ],
+ [
+ "igh",
+ "ts"
+ ],
+ [
+ "▁h",
+ "om"
+ ],
+ [
+ "▁ho",
+ "m"
+ ],
+ [
+ "▁",
+ "hom"
+ ],
+ [
+ "▁t",
+ "oday"
+ ],
+ [
+ "▁to",
+ "day"
+ ],
+ [
+ "▁tod",
+ "ay"
+ ],
+ [
+ "▁toda",
+ "y"
+ ],
+ [
+ "▁z",
+ "u"
+ ],
+ [
+ "▁",
+ "zu"
+ ],
+ [
+ "it",
+ "a"
+ ],
+ [
+ "i",
+ "ta"
+ ],
+ [
+ "▁is",
+ "n"
+ ],
+ [
+ "▁i",
+ "sn"
+ ],
+ [
+ "▁o",
+ "pt"
+ ],
+ [
+ "▁op",
+ "t"
+ ],
+ [
+ "▁",
+ "opt"
+ ],
+ [
+ "og",
+ "n"
+ ],
+ [
+ "o",
+ "gn"
+ ],
+ [
+ "é",
+ "r"
+ ],
+ [
+ "▁wh",
+ "ether"
+ ],
+ [
+ "▁whe",
+ "ther"
+ ],
+ [
+ "ix",
+ "ed"
+ ],
+ [
+ "ph",
+ "i"
+ ],
+ [
+ "p",
+ "hi"
+ ],
+ [
+ "id",
+ "ence"
+ ],
+ [
+ "iden",
+ "ce"
+ ],
+ [
+ "al",
+ "d"
+ ],
+ [
+ "a",
+ "ld"
+ ],
+ [
+ "Cl",
+ "ient"
+ ],
+ [
+ "A",
+ "t"
+ ],
+ [
+ "▁de",
+ "ath"
+ ],
+ [
+ "▁L",
+ "et"
+ ],
+ [
+ "▁Le",
+ "t"
+ ],
+ [
+ "▁",
+ "Let"
+ ],
+ [
+ "iu",
+ "s"
+ ],
+ [
+ "i",
+ "us"
+ ],
+ [
+ "г",
+ "и"
+ ],
+ [
+ "▁р",
+ "е"
+ ],
+ [
+ "▁",
+ "ре"
+ ],
+ [
+ "be",
+ "n"
+ ],
+ [
+ "b",
+ "en"
+ ],
+ [
+ ")",
+ "\r"
+ ],
+ [
+ "b",
+ "a"
+ ],
+ [
+ "><",
+ "/"
+ ],
+ [
+ ">",
+ ""
+ ],
+ [
+ "ave",
+ "l"
+ ],
+ [
+ "av",
+ "el"
+ ],
+ [
+ "a",
+ "vel"
+ ],
+ [
+ "▁m",
+ "iss"
+ ],
+ [
+ "▁mis",
+ "s"
+ ],
+ [
+ "▁mi",
+ "ss"
+ ],
+ [
+ "▁",
+ "miss"
+ ],
+ [
+ "▁n",
+ "ode"
+ ],
+ [
+ "▁no",
+ "de"
+ ],
+ [
+ "▁nod",
+ "e"
+ ],
+ [
+ "▁",
+ "node"
+ ],
+ [
+ "▁(",
+ "$"
+ ],
+ [
+ "▁",
+ "($"
+ ],
+ [
+ "▁col",
+ "or"
+ ],
+ [
+ "▁co",
+ "lor"
+ ],
+ [
+ "▁",
+ "color"
+ ],
+ [
+ "▁o",
+ "bt"
+ ],
+ [
+ "▁ob",
+ "t"
+ ],
+ [
+ "to",
+ "t"
+ ],
+ [
+ "t",
+ "ot"
+ ],
+ [
+ "▁п",
+ "ре"
+ ],
+ [
+ "▁пр",
+ "е"
+ ],
+ [
+ "▁",
+ "пре"
+ ],
+ [
+ "CO",
+ "N"
+ ],
+ [
+ "C",
+ "ON"
+ ],
+ [
+ "et",
+ "te"
+ ],
+ [
+ "ett",
+ "e"
+ ],
+ [
+ "▁G",
+ "o"
+ ],
+ [
+ "▁",
+ "Go"
+ ],
+ [
+ "F",
+ "l"
+ ],
+ [
+ "▁D",
+ "on"
+ ],
+ [
+ "▁Do",
+ "n"
+ ],
+ [
+ "▁",
+ "Don"
+ ],
+ [
+ "▁c",
+ "rit"
+ ],
+ [
+ "▁cr",
+ "it"
+ ],
+ [
+ "▁cri",
+ "t"
+ ],
+ [
+ "▁",
+ "crit"
+ ],
+ [
+ "▁r",
+ "i"
+ ],
+ [
+ "▁",
+ "ri"
+ ],
+ [
+ "pos",
+ "t"
+ ],
+ [
+ "po",
+ "st"
+ ],
+ [
+ "p",
+ "ost"
+ ],
+ [
+ "▁-",
+ ">"
+ ],
+ [
+ "▁",
+ "->"
+ ],
+ [
+ "▁J",
+ "ust"
+ ],
+ [
+ "▁Ju",
+ "st"
+ ],
+ [
+ "▁",
+ "Just"
+ ],
+ [
+ "Wh",
+ "at"
+ ],
+ [
+ "W",
+ "hat"
+ ],
+ [
+ "at",
+ "al"
+ ],
+ [
+ "ata",
+ "l"
+ ],
+ [
+ "a",
+ "tal"
+ ],
+ [
+ "▁M",
+ "in"
+ ],
+ [
+ "▁Mi",
+ "n"
+ ],
+ [
+ "▁",
+ "Min"
+ ],
+ [
+ "▁C",
+ "or"
+ ],
+ [
+ "▁Co",
+ "r"
+ ],
+ [
+ "▁",
+ "Cor"
+ ],
+ [
+ "▁d",
+ "ark"
+ ],
+ [
+ "▁dar",
+ "k"
+ ],
+ [
+ "▁",
+ "dark"
+ ],
+ [
+ "r",
+ "l"
+ ],
+ [
+ "▁l",
+ "arg"
+ ],
+ [
+ "▁la",
+ "rg"
+ ],
+ [
+ "▁",
+ "larg"
+ ],
+ [
+ "di",
+ "ng"
+ ],
+ [
+ "d",
+ "ing"
+ ],
+ [
+ "ó",
+ "n"
+ ],
+ [
+ "ou",
+ "ch"
+ ],
+ [
+ "o",
+ "uch"
+ ],
+ [
+ "▁u",
+ "m"
+ ],
+ [
+ "▁",
+ "um"
+ ],
+ [
+ "▁e",
+ "lect"
+ ],
+ [
+ "▁el",
+ "ect"
+ ],
+ [
+ "▁ele",
+ "ct"
+ ],
+ [
+ "▁",
+ "elect"
+ ],
+ [
+ "▁d",
+ "am"
+ ],
+ [
+ "▁da",
+ "m"
+ ],
+ [
+ "▁",
+ "dam"
+ ],
+ [
+ "▁ne",
+ "eds"
+ ],
+ [
+ "▁need",
+ "s"
+ ],
+ [
+ "▁m",
+ "atter"
+ ],
+ [
+ "▁mat",
+ "ter"
+ ],
+ [
+ "▁matt",
+ "er"
+ ],
+ [
+ "▁r",
+ "ather"
+ ],
+ [
+ "▁rat",
+ "her"
+ ],
+ [
+ "▁ra",
+ "ther"
+ ],
+ [
+ "fr",
+ "om"
+ ],
+ [
+ "f",
+ "rom"
+ ],
+ [
+ "ra",
+ "m"
+ ],
+ [
+ "r",
+ "am"
+ ],
+ [
+ "▁",
+ "і"
+ ],
+ [
+ "▁t",
+ "aken"
+ ],
+ [
+ "▁take",
+ "n"
+ ],
+ [
+ "▁tak",
+ "en"
+ ],
+ [
+ "▁ta",
+ "ken"
+ ],
+ [
+ "▁de",
+ "al"
+ ],
+ [
+ "▁per",
+ "iod"
+ ],
+ [
+ "▁",
+ "period"
+ ],
+ [
+ "▁M",
+ "on"
+ ],
+ [
+ "▁Mo",
+ "n"
+ ],
+ [
+ "▁",
+ "Mon"
+ ],
+ [
+ "▁",
+ "Л"
+ ],
+ [
+ "▁A",
+ "ug"
+ ],
+ [
+ "▁Au",
+ "g"
+ ],
+ [
+ "▁",
+ "Aug"
+ ],
+ [
+ "ru",
+ "n"
+ ],
+ [
+ "r",
+ "un"
+ ],
+ [
+ "m",
+ "m"
+ ],
+ [
+ "el",
+ "le"
+ ],
+ [
+ "ell",
+ "e"
+ ],
+ [
+ "e",
+ "lle"
+ ],
+ [
+ "▁ex",
+ "port"
+ ],
+ [
+ "▁exp",
+ "ort"
+ ],
+ [
+ "▁",
+ "export"
+ ],
+ [
+ "S",
+ "c"
+ ],
+ [
+ "vi",
+ "s"
+ ],
+ [
+ "v",
+ "is"
+ ],
+ [
+ "ab",
+ "or"
+ ],
+ [
+ "a",
+ "bor"
+ ],
+ [
+ "▁aut",
+ "hor"
+ ],
+ [
+ "▁auth",
+ "or"
+ ],
+ [
+ "▁",
+ "author"
+ ],
+ [
+ "è",
+ "re"
+ ],
+ [
+ "▁re",
+ "member"
+ ],
+ [
+ "▁rem",
+ "ember"
+ ],
+ [
+ "▁remem",
+ "ber"
+ ],
+ [
+ "▁re",
+ "du"
+ ],
+ [
+ "▁r",
+ "edu"
+ ],
+ [
+ "▁red",
+ "u"
+ ],
+ [
+ "▁",
+ "redu"
+ ],
+ [
+ "▁L",
+ "ist"
+ ],
+ [
+ "▁Li",
+ "st"
+ ],
+ [
+ "▁Lis",
+ "t"
+ ],
+ [
+ "▁",
+ "List"
+ ],
+ [
+ "▁f",
+ "ocus"
+ ],
+ [
+ "▁",
+ "focus"
+ ],
+ [
+ "▁char",
+ "acter"
+ ],
+ [
+ "▁",
+ "character"
+ ],
+ [
+ "Tab",
+ "le"
+ ],
+ [
+ "T",
+ "able"
+ ],
+ [
+ "▁individ",
+ "ual"
+ ],
+ [
+ "▁need",
+ "ed"
+ ],
+ [
+ "bu",
+ "m"
+ ],
+ [
+ "b",
+ "um"
+ ],
+ [
+ "▁st",
+ "yle"
+ ],
+ [
+ "▁sty",
+ "le"
+ ],
+ [
+ "▁",
+ "style"
+ ],
+ [
+ "in",
+ "ary"
+ ],
+ [
+ "ina",
+ "ry"
+ ],
+ [
+ "inar",
+ "y"
+ ],
+ [
+ "ers",
+ "ion"
+ ],
+ [
+ "ou",
+ "te"
+ ],
+ [
+ "out",
+ "e"
+ ],
+ [
+ "o",
+ "ute"
+ ],
+ [
+ "▁P",
+ "e"
+ ],
+ [
+ "▁",
+ "Pe"
+ ],
+ [
+ "▁h",
+ "on"
+ ],
+ [
+ "▁ho",
+ "n"
+ ],
+ [
+ "▁",
+ "hon"
+ ],
+ [
+ "mu",
+ "t"
+ ],
+ [
+ "m",
+ "ut"
+ ],
+ [
+ "se",
+ "e"
+ ],
+ [
+ "s",
+ "ee"
+ ],
+ [
+ "▁bec",
+ "ame"
+ ],
+ [
+ "▁d",
+ "ire"
+ ],
+ [
+ "▁di",
+ "re"
+ ],
+ [
+ "▁dir",
+ "e"
+ ],
+ [
+ "▁",
+ "dire"
+ ],
+ [
+ "▁d",
+ "ocument"
+ ],
+ [
+ "▁doc",
+ "ument"
+ ],
+ [
+ "▁",
+ "document"
+ ],
+ [
+ "se",
+ "c"
+ ],
+ [
+ "s",
+ "ec"
+ ],
+ [
+ "en",
+ "ing"
+ ],
+ [
+ "eni",
+ "ng"
+ ],
+ [
+ "e",
+ "ning"
+ ],
+ [
+ "▁vis",
+ "it"
+ ],
+ [
+ "▁",
+ "visit"
+ ],
+ [
+ "▁f",
+ "ac"
+ ],
+ [
+ "▁fa",
+ "c"
+ ],
+ [
+ "▁",
+ "fac"
+ ],
+ [
+ "t",
+ "x"
+ ],
+ [
+ "do",
+ "wn"
+ ],
+ [
+ "d",
+ "own"
+ ],
+ [
+ "pl",
+ "it"
+ ],
+ [
+ "p",
+ "lit"
+ ],
+ [
+ "▁ph",
+ "ys"
+ ],
+ [
+ "▁",
+ "phys"
+ ],
+ [
+ "it",
+ "ting"
+ ],
+ [
+ "itt",
+ "ing"
+ ],
+ [
+ "jo",
+ "y"
+ ],
+ [
+ "j",
+ "oy"
+ ],
+ [
+ "▁h",
+ "ig"
+ ],
+ [
+ "▁hi",
+ "g"
+ ],
+ [
+ "Th",
+ "is"
+ ],
+ [
+ "T",
+ "his"
+ ],
+ [
+ "A",
+ "d"
+ ],
+ [
+ "▁B",
+ "rit"
+ ],
+ [
+ "▁Br",
+ "it"
+ ],
+ [
+ "▁em",
+ "ploy"
+ ],
+ [
+ "▁r",
+ "é"
+ ],
+ [
+ "▁",
+ "ré"
+ ],
+ [
+ "▁",
+ "т"
+ ],
+ [
+ "l",
+ "ambda"
+ ],
+ [
+ "▁im",
+ "pro"
+ ],
+ [
+ "▁imp",
+ "ro"
+ ],
+ [
+ "▁B",
+ "o"
+ ],
+ [
+ "▁",
+ "Bo"
+ ],
+ [
+ "id",
+ "ing"
+ ],
+ [
+ "idi",
+ "ng"
+ ],
+ [
+ "i",
+ "ding"
+ ],
+ [
+ "▁on",
+ "line"
+ ],
+ [
+ "▁",
+ "online"
+ ],
+ [
+ "me",
+ "m"
+ ],
+ [
+ "m",
+ "em"
+ ],
+ [
+ "at",
+ "form"
+ ],
+ [
+ "▁W",
+ "ar"
+ ],
+ [
+ "▁Wa",
+ "r"
+ ],
+ [
+ "▁",
+ "War"
+ ],
+ [
+ "▁c",
+ "as"
+ ],
+ [
+ "▁ca",
+ "s"
+ ],
+ [
+ "▁",
+ "cas"
+ ],
+ [
+ "as",
+ "ure"
+ ],
+ [
+ "a",
+ "sure"
+ ],
+ [
+ "▁p",
+ "ur"
+ ],
+ [
+ "▁pu",
+ "r"
+ ],
+ [
+ "▁",
+ "pur"
+ ],
+ [
+ "me",
+ "di"
+ ],
+ [
+ "med",
+ "i"
+ ],
+ [
+ "m",
+ "edi"
+ ],
+ [
+ "Di",
+ "s"
+ ],
+ [
+ "D",
+ "is"
+ ],
+ [
+ "▁G",
+ "erm"
+ ],
+ [
+ "▁Ge",
+ "rm"
+ ],
+ [
+ "▁Ger",
+ "m"
+ ],
+ [
+ "p",
+ "c"
+ ],
+ [
+ "с",
+ "а"
+ ],
+ [
+ "▁friend",
+ "s"
+ ],
+ [
+ "▁M",
+ "c"
+ ],
+ [
+ "▁",
+ "Mc"
+ ],
+ [
+ "D",
+ "I"
+ ],
+ [
+ "▁pl",
+ "us"
+ ],
+ [
+ "▁",
+ "plus"
+ ],
+ [
+ "▁S",
+ "et"
+ ],
+ [
+ "▁Se",
+ "t"
+ ],
+ [
+ "▁",
+ "Set"
+ ],
+ [
+ "idd",
+ "le"
+ ],
+ [
+ "it",
+ "ut"
+ ],
+ [
+ "itu",
+ "t"
+ ],
+ [
+ "▁de",
+ "pend"
+ ],
+ [
+ "▁dep",
+ "end"
+ ],
+ [
+ "▁",
+ "depend"
+ ],
+ [
+ "re",
+ "st"
+ ],
+ [
+ "res",
+ "t"
+ ],
+ [
+ "r",
+ "est"
+ ],
+ [
+ "▁J",
+ "e"
+ ],
+ [
+ "▁",
+ "Je"
+ ],
+ [
+ "▁h",
+ "or"
+ ],
+ [
+ "▁ho",
+ "r"
+ ],
+ [
+ "▁",
+ "hor"
+ ],
+ [
+ "▁ent",
+ "ire"
+ ],
+ [
+ "Qu",
+ "ery"
+ ],
+ [
+ "Que",
+ "ry"
+ ],
+ [
+ "▁re",
+ "fer"
+ ],
+ [
+ "▁ref",
+ "er"
+ ],
+ [
+ "▁",
+ "refer"
+ ],
+ [
+ "▁h",
+ "ot"
+ ],
+ [
+ "▁ho",
+ "t"
+ ],
+ [
+ "▁",
+ "hot"
+ ],
+ [
+ "▁A",
+ "ust"
+ ],
+ [
+ "▁Aus",
+ "t"
+ ],
+ [
+ "▁Au",
+ "st"
+ ],
+ [
+ "▁com",
+ "mon"
+ ],
+ [
+ "▁comm",
+ "on"
+ ],
+ [
+ "▁",
+ "common"
+ ],
+ [
+ "ц",
+ "і"
+ ],
+ [
+ "▁p",
+ "ull"
+ ],
+ [
+ "▁pu",
+ "ll"
+ ],
+ [
+ "▁pul",
+ "l"
+ ],
+ [
+ "▁",
+ "pull"
+ ],
+ [
+ "▁A",
+ "dd"
+ ],
+ [
+ "▁Ad",
+ "d"
+ ],
+ [
+ "▁",
+ "Add"
+ ],
+ [
+ "▁se",
+ "ason"
+ ],
+ [
+ "▁sea",
+ "son"
+ ],
+ [
+ "▁seas",
+ "on"
+ ],
+ [
+ "▁",
+ "season"
+ ],
+ [
+ "▁in",
+ "vol"
+ ],
+ [
+ "▁inv",
+ "ol"
+ ],
+ [
+ "▁W",
+ "orld"
+ ],
+ [
+ "▁Wor",
+ "ld"
+ ],
+ [
+ "▁",
+ "World"
+ ],
+ [
+ "cl",
+ "ient"
+ ],
+ [
+ "cli",
+ "ent"
+ ],
+ [
+ "no",
+ "w"
+ ],
+ [
+ "n",
+ "ow"
+ ],
+ [
+ "tr",
+ "ue"
+ ],
+ [
+ "ap",
+ "pend"
+ ],
+ [
+ "app",
+ "end"
+ ],
+ [
+ "appe",
+ "nd"
+ ],
+ [
+ "appen",
+ "d"
+ ],
+ [
+ "it",
+ "ted"
+ ],
+ [
+ "itt",
+ "ed"
+ ],
+ [
+ "itte",
+ "d"
+ ],
+ [
+ "em",
+ "pt"
+ ],
+ [
+ "emp",
+ "t"
+ ],
+ [
+ ")",
+ "{"
+ ],
+ [
+ "//",
+ "/"
+ ],
+ [
+ "/",
+ "//"
+ ],
+ [
+ "▁p",
+ "rop"
+ ],
+ [
+ "▁pro",
+ "p"
+ ],
+ [
+ "▁pr",
+ "op"
+ ],
+ [
+ "▁",
+ "prop"
+ ],
+ [
+ "im",
+ "ate"
+ ],
+ [
+ "ima",
+ "te"
+ ],
+ [
+ "imat",
+ "e"
+ ],
+ [
+ "i",
+ "mate"
+ ],
+ [
+ "S",
+ "C"
+ ],
+ [
+ "▁h",
+ "ours"
+ ],
+ [
+ "▁hour",
+ "s"
+ ],
+ [
+ "▁ho",
+ "urs"
+ ],
+ [
+ "▁h",
+ "ope"
+ ],
+ [
+ "▁hop",
+ "e"
+ ],
+ [
+ "▁ho",
+ "pe"
+ ],
+ [
+ "an",
+ "dom"
+ ],
+ [
+ "and",
+ "om"
+ ],
+ [
+ "ando",
+ "m"
+ ],
+ [
+ "і",
+ "д"
+ ],
+ [
+ "ist",
+ "ic"
+ ],
+ [
+ "isti",
+ "c"
+ ],
+ [
+ "▁pro",
+ "perty"
+ ],
+ [
+ "▁proper",
+ "ty"
+ ],
+ [
+ "▁",
+ "property"
+ ],
+ [
+ "s",
+ "g"
+ ],
+ [
+ ">",
+ "("
+ ],
+ [
+ "▁w",
+ "rite"
+ ],
+ [
+ "▁wr",
+ "ite"
+ ],
+ [
+ "▁writ",
+ "e"
+ ],
+ [
+ "▁",
+ "write"
+ ],
+ [
+ "mar",
+ "k"
+ ],
+ [
+ "m",
+ "ark"
+ ],
+ [
+ "fin",
+ "d"
+ ],
+ [
+ "fi",
+ "nd"
+ ],
+ [
+ "f",
+ "ind"
+ ],
+ [
+ "▁person",
+ "al"
+ ],
+ [
+ "▁pers",
+ "onal"
+ ],
+ [
+ "▁persona",
+ "l"
+ ],
+ [
+ "▁",
+ "personal"
+ ],
+ [
+ "]",
+ "["
+ ],
+ [
+ "ro",
+ "wn"
+ ],
+ [
+ "row",
+ "n"
+ ],
+ [
+ "r",
+ "own"
+ ],
+ [
+ "P",
+ "h"
+ ],
+ [
+ "▁f",
+ "oot"
+ ],
+ [
+ "▁fo",
+ "ot"
+ ],
+ [
+ "▁foo",
+ "t"
+ ],
+ [
+ "▁",
+ "foot"
+ ],
+ [
+ "▁re",
+ "search"
+ ],
+ [
+ "▁res",
+ "earch"
+ ],
+ [
+ "iron",
+ "ment"
+ ],
+ [
+ "▁n",
+ "om"
+ ],
+ [
+ "▁no",
+ "m"
+ ],
+ [
+ "▁",
+ "nom"
+ ],
+ [
+ "▁in",
+ "stance"
+ ],
+ [
+ "▁inst",
+ "ance"
+ ],
+ [
+ "▁",
+ "instance"
+ ],
+ [
+ "▁h",
+ "eld"
+ ],
+ [
+ "▁he",
+ "ld"
+ ],
+ [
+ "▁hel",
+ "d"
+ ],
+ [
+ "▁",
+ "held"
+ ],
+ [
+ "D",
+ "e"
+ ],
+ [
+ "▁mem",
+ "bers"
+ ],
+ [
+ "▁member",
+ "s"
+ ],
+ [
+ "▁",
+ "members"
+ ],
+ [
+ "▁f",
+ "ire"
+ ],
+ [
+ "▁fi",
+ "re"
+ ],
+ [
+ "▁fir",
+ "e"
+ ],
+ [
+ "▁",
+ "fire"
+ ],
+ [
+ "▁hist",
+ "ory"
+ ],
+ [
+ "▁histor",
+ "y"
+ ],
+ [
+ "▁hi",
+ "story"
+ ],
+ [
+ "▁",
+ "history"
+ ],
+ [
+ "▁m",
+ "ap"
+ ],
+ [
+ "▁ma",
+ "p"
+ ],
+ [
+ "▁",
+ "map"
+ ],
+ [
+ "▁dis",
+ "cuss"
+ ],
+ [
+ "▁disc",
+ "uss"
+ ],
+ [
+ "▁e",
+ "spec"
+ ],
+ [
+ "▁es",
+ "pec"
+ ],
+ [
+ "▁esp",
+ "ec"
+ ],
+ [
+ "▁",
+ "espec"
+ ],
+ [
+ "▁t",
+ "aking"
+ ],
+ [
+ "▁tak",
+ "ing"
+ ],
+ [
+ "▁ta",
+ "king"
+ ],
+ [
+ "▁s",
+ "ervices"
+ ],
+ [
+ "▁serv",
+ "ices"
+ ],
+ [
+ "▁service",
+ "s"
+ ],
+ [
+ "▁",
+ "services"
+ ],
+ [
+ "▁ind",
+ "ust"
+ ],
+ [
+ "▁indu",
+ "st"
+ ],
+ [
+ "▁",
+ "indust"
+ ],
+ [
+ "ig",
+ "en"
+ ],
+ [
+ "ige",
+ "n"
+ ],
+ [
+ "i",
+ "gen"
+ ],
+ [
+ "▁A",
+ "ss"
+ ],
+ [
+ "▁As",
+ "s"
+ ],
+ [
+ "▁",
+ "Ass"
+ ],
+ [
+ "▁e",
+ "xpected"
+ ],
+ [
+ "▁ex",
+ "pected"
+ ],
+ [
+ "▁expect",
+ "ed"
+ ],
+ [
+ "▁",
+ "expected"
+ ],
+ [
+ "▁wur",
+ "de"
+ ],
+ [
+ "di",
+ "r"
+ ],
+ [
+ "d",
+ "ir"
+ ],
+ [
+ "▁a",
+ "mong"
+ ],
+ [
+ "▁am",
+ "ong"
+ ],
+ [
+ "▁s",
+ "ugg"
+ ],
+ [
+ "▁su",
+ "gg"
+ ],
+ [
+ "▁sug",
+ "g"
+ ],
+ [
+ "re",
+ "c"
+ ],
+ [
+ "r",
+ "ec"
+ ],
+ [
+ "In",
+ "ter"
+ ],
+ [
+ "Int",
+ "er"
+ ],
+ [
+ "bl",
+ "ock"
+ ],
+ [
+ "blo",
+ "ck"
+ ],
+ [
+ "b",
+ "lock"
+ ],
+ [
+ "▁R",
+ "ep"
+ ],
+ [
+ "▁Re",
+ "p"
+ ],
+ [
+ "▁",
+ "Rep"
+ ],
+ [
+ "▁p",
+ "ain"
+ ],
+ [
+ "▁pa",
+ "in"
+ ],
+ [
+ "▁f",
+ "ive"
+ ],
+ [
+ "▁fi",
+ "ve"
+ ],
+ [
+ "▁",
+ "five"
+ ],
+ [
+ "▁f",
+ "und"
+ ],
+ [
+ "▁fun",
+ "d"
+ ],
+ [
+ "▁fu",
+ "nd"
+ ],
+ [
+ "▁",
+ "fund"
+ ],
+ [
+ "ri",
+ "d"
+ ],
+ [
+ "r",
+ "id"
+ ],
+ [
+ "ar",
+ "row"
+ ],
+ [
+ "arr",
+ "ow"
+ ],
+ [
+ "▁t",
+ "reat"
+ ],
+ [
+ "▁tre",
+ "at"
+ ],
+ [
+ "▁he",
+ "ard"
+ ],
+ [
+ "▁hear",
+ "d"
+ ],
+ [
+ "▁de",
+ "term"
+ ],
+ [
+ "▁det",
+ "erm"
+ ],
+ [
+ "▁deter",
+ "m"
+ ],
+ [
+ "ic",
+ "ult"
+ ],
+ [
+ "▁s",
+ "ense"
+ ],
+ [
+ "▁sens",
+ "e"
+ ],
+ [
+ "▁sen",
+ "se"
+ ],
+ [
+ "es",
+ "e"
+ ],
+ [
+ "e",
+ "se"
+ ],
+ [
+ "F",
+ "un"
+ ],
+ [
+ "▁month",
+ "s"
+ ],
+ [
+ "▁mont",
+ "hs"
+ ],
+ [
+ "js",
+ "on"
+ ],
+ [
+ "j",
+ "son"
+ ],
+ [
+ ",",
+ "”"
+ ],
+ [
+ "T",
+ "I"
+ ],
+ [
+ "or",
+ "age"
+ ],
+ [
+ "ora",
+ "ge"
+ ],
+ [
+ "o",
+ "rage"
+ ],
+ [
+ "▁",
+ "У"
+ ],
+ [
+ "▁every",
+ "one"
+ ],
+ [
+ "▁c",
+ "los"
+ ],
+ [
+ "▁cl",
+ "os"
+ ],
+ [
+ "▁clo",
+ "s"
+ ],
+ [
+ "▁",
+ "clos"
+ ],
+ [
+ "ie",
+ "rs"
+ ],
+ [
+ "ier",
+ "s"
+ ],
+ [
+ "i",
+ "ers"
+ ],
+ [
+ "air",
+ "s"
+ ],
+ [
+ "ai",
+ "rs"
+ ],
+ [
+ "a",
+ "irs"
+ ],
+ [
+ "def",
+ "ine"
+ ],
+ [
+ "I",
+ "f"
+ ],
+ [
+ "os",
+ "p"
+ ],
+ [
+ "o",
+ "sp"
+ ],
+ [
+ "▁w",
+ "onder"
+ ],
+ [
+ "▁won",
+ "der"
+ ],
+ [
+ "▁wo",
+ "nder"
+ ],
+ [
+ "N",
+ "A"
+ ],
+ [
+ "qu",
+ "ery"
+ ],
+ [
+ "que",
+ "ry"
+ ],
+ [
+ "quer",
+ "y"
+ ],
+ [
+ "p",
+ "g"
+ ],
+ [
+ "it",
+ "es"
+ ],
+ [
+ "ite",
+ "s"
+ ],
+ [
+ "i",
+ "tes"
+ ],
+ [
+ "▁m",
+ "aterial"
+ ],
+ [
+ "▁mat",
+ "erial"
+ ],
+ [
+ "▁mate",
+ "rial"
+ ],
+ [
+ "▁mater",
+ "ial"
+ ],
+ [
+ "▁",
+ "material"
+ ],
+ [
+ "y",
+ "d"
+ ],
+ [
+ "Re",
+ "ad"
+ ],
+ [
+ "R",
+ "ead"
+ ],
+ [
+ "ht",
+ "ml"
+ ],
+ [
+ "h",
+ "tml"
+ ],
+ [
+ "T",
+ "E"
+ ],
+ [
+ "P",
+ "r"
+ ],
+ [
+ "^{",
+ "\\"
+ ],
+ [
+ "^",
+ "{\\"
+ ],
+ [
+ "▁g",
+ "ave"
+ ],
+ [
+ "▁ga",
+ "ve"
+ ],
+ [
+ "▁I",
+ "S"
+ ],
+ [
+ "▁",
+ "IS"
+ ],
+ [
+ "▁s",
+ "uggest"
+ ],
+ [
+ "▁sugg",
+ "est"
+ ],
+ [
+ "▁sug",
+ "gest"
+ ],
+ [
+ "Over",
+ "ride"
+ ],
+ [
+ "ro",
+ "du"
+ ],
+ [
+ "rod",
+ "u"
+ ],
+ [
+ "Fr",
+ "om"
+ ],
+ [
+ "F",
+ "rom"
+ ],
+ [
+ "▁E",
+ "urope"
+ ],
+ [
+ "▁Europ",
+ "e"
+ ],
+ [
+ "▁Euro",
+ "pe"
+ ],
+ [
+ "▁",
+ "Europe"
+ ],
+ [
+ "P",
+ "O"
+ ],
+ [
+ "▁s",
+ "oon"
+ ],
+ [
+ "▁so",
+ "on"
+ ],
+ [
+ "ho",
+ "st"
+ ],
+ [
+ "hos",
+ "t"
+ ],
+ [
+ "h",
+ "ost"
+ ],
+ [
+ "▁B",
+ "er"
+ ],
+ [
+ "▁Be",
+ "r"
+ ],
+ [
+ "▁",
+ "Ber"
+ ],
+ [
+ "..",
+ ".."
+ ],
+ [
+ "...",
+ "."
+ ],
+ [
+ ".",
+ "..."
+ ],
+ [
+ "▁H",
+ "ar"
+ ],
+ [
+ "▁Ha",
+ "r"
+ ],
+ [
+ "▁",
+ "Har"
+ ],
+ [
+ "▁e",
+ "nergy"
+ ],
+ [
+ "▁ener",
+ "gy"
+ ],
+ [
+ "▁energ",
+ "y"
+ ],
+ [
+ "▁",
+ "energy"
+ ],
+ [
+ ">",
+ "<"
+ ],
+ [
+ "ave",
+ "s"
+ ],
+ [
+ "av",
+ "es"
+ ],
+ [
+ "a",
+ "ves"
+ ],
+ [
+ "▁e",
+ "asy"
+ ],
+ [
+ "▁eas",
+ "y"
+ ],
+ [
+ "▁b",
+ "re"
+ ],
+ [
+ "▁br",
+ "e"
+ ],
+ [
+ "▁",
+ "bre"
+ ],
+ [
+ "fr",
+ "ame"
+ ],
+ [
+ "▁g",
+ "round"
+ ],
+ [
+ "▁gr",
+ "ound"
+ ],
+ [
+ "▁gro",
+ "und"
+ ],
+ [
+ "▁",
+ "ground"
+ ],
+ [
+ "wi",
+ "th"
+ ],
+ [
+ "w",
+ "ith"
+ ],
+ [
+ "▁in",
+ "side"
+ ],
+ [
+ "▁ins",
+ "ide"
+ ],
+ [
+ "ie",
+ "f"
+ ],
+ [
+ "i",
+ "ef"
+ ],
+ [
+ "▁m",
+ "o"
+ ],
+ [
+ "▁",
+ "mo"
+ ],
+ [
+ "p",
+ "m"
+ ],
+ [
+ "pa",
+ "n"
+ ],
+ [
+ "p",
+ "an"
+ ],
+ [
+ "ig",
+ "r"
+ ],
+ [
+ "i",
+ "gr"
+ ],
+ [
+ "▁o",
+ "m"
+ ],
+ [
+ "▁",
+ "om"
+ ],
+ [
+ "ne",
+ "xt"
+ ],
+ [
+ "nex",
+ "t"
+ ],
+ [
+ "n",
+ "ext"
+ ],
+ [
+ "om",
+ "et"
+ ],
+ [
+ "ome",
+ "t"
+ ],
+ [
+ "o",
+ "met"
+ ],
+ [
+ "▁st",
+ "atus"
+ ],
+ [
+ "▁stat",
+ "us"
+ ],
+ [
+ "▁",
+ "status"
+ ],
+ [
+ "▁}",
+ "\r"
+ ],
+ [
+ "▁",
+ "}\r"
+ ],
+ [
+ "▁mus",
+ "ic"
+ ],
+ [
+ "or",
+ "a"
+ ],
+ [
+ "o",
+ "ra"
+ ],
+ [
+ "il",
+ "es"
+ ],
+ [
+ "ile",
+ "s"
+ ],
+ [
+ "i",
+ "les"
+ ],
+ [
+ "k",
+ "i"
+ ],
+ [
+ "▁e",
+ "sc"
+ ],
+ [
+ "▁es",
+ "c"
+ ],
+ [
+ "▁",
+ "esc"
+ ],
+ [
+ "▁b",
+ "es"
+ ],
+ [
+ "▁be",
+ "s"
+ ],
+ [
+ "▁",
+ "bes"
+ ],
+ [
+ "▁D",
+ "is"
+ ],
+ [
+ "▁Di",
+ "s"
+ ],
+ [
+ "▁",
+ "Dis"
+ ],
+ [
+ "▁h",
+ "ost"
+ ],
+ [
+ "▁ho",
+ "st"
+ ],
+ [
+ "▁",
+ "host"
+ ],
+ [
+ "▁c",
+ "omes"
+ ],
+ [
+ "▁com",
+ "es"
+ ],
+ [
+ "▁co",
+ "mes"
+ ],
+ [
+ "▁come",
+ "s"
+ ],
+ [
+ "▁",
+ "comes"
+ ],
+ [
+ "us",
+ "ed"
+ ],
+ [
+ "use",
+ "d"
+ ],
+ [
+ "u",
+ "sed"
+ ],
+ [
+ "▁f",
+ "uture"
+ ],
+ [
+ "▁fut",
+ "ure"
+ ],
+ [
+ "▁",
+ "future"
+ ],
+ [
+ "lic",
+ "k"
+ ],
+ [
+ "li",
+ "ck"
+ ],
+ [
+ "l",
+ "ick"
+ ],
+ [
+ "ai",
+ "d"
+ ],
+ [
+ "a",
+ "id"
+ ],
+ [
+ "▁com",
+ "pet"
+ ],
+ [
+ "▁comp",
+ "et"
+ ],
+ [
+ "▁",
+ "compet"
+ ],
+ [
+ "▁v",
+ "oice"
+ ],
+ [
+ "▁vo",
+ "ice"
+ ],
+ [
+ "▁",
+ "voice"
+ ],
+ [
+ "▁l",
+ "oad"
+ ],
+ [
+ "▁lo",
+ "ad"
+ ],
+ [
+ "▁",
+ "load"
+ ],
+ [
+ "ev",
+ "el"
+ ],
+ [
+ "eve",
+ "l"
+ ],
+ [
+ "e",
+ "vel"
+ ],
+ [
+ "▁n",
+ "eg"
+ ],
+ [
+ "▁ne",
+ "g"
+ ],
+ [
+ "▁",
+ "neg"
+ ],
+ [
+ "▁com",
+ "mand"
+ ],
+ [
+ "▁comm",
+ "and"
+ ],
+ [
+ "▁",
+ "command"
+ ],
+ [
+ "▁f",
+ "ür"
+ ],
+ [
+ "▁p",
+ "ie"
+ ],
+ [
+ "▁pi",
+ "e"
+ ],
+ [
+ "▁",
+ "pie"
+ ],
+ [
+ "▁qu",
+ "ite"
+ ],
+ [
+ "▁qui",
+ "te"
+ ],
+ [
+ "▁quit",
+ "e"
+ ],
+ [
+ "▁b",
+ "lo"
+ ],
+ [
+ "▁bl",
+ "o"
+ ],
+ [
+ "▁",
+ "blo"
+ ],
+ [
+ "ag",
+ "n"
+ ],
+ [
+ "a",
+ "gn"
+ ],
+ [
+ "il",
+ "on"
+ ],
+ [
+ "ilo",
+ "n"
+ ],
+ [
+ "i",
+ "lon"
+ ],
+ [
+ "▁cl",
+ "aim"
+ ],
+ [
+ "▁",
+ "claim"
+ ],
+ [
+ "▁t",
+ "each"
+ ],
+ [
+ "▁te",
+ "ach"
+ ],
+ [
+ "▁tea",
+ "ch"
+ ],
+ [
+ "▁pre",
+ "vious"
+ ],
+ [
+ "▁prev",
+ "ious"
+ ],
+ [
+ "▁",
+ "previous"
+ ],
+ [
+ "▁s",
+ "ite"
+ ],
+ [
+ "▁sit",
+ "e"
+ ],
+ [
+ "▁si",
+ "te"
+ ],
+ [
+ "▁",
+ "site"
+ ],
+ [
+ "co",
+ "lor"
+ ],
+ [
+ "col",
+ "or"
+ ],
+ [
+ "colo",
+ "r"
+ ],
+ [
+ "at",
+ "tr"
+ ],
+ [
+ "att",
+ "r"
+ ],
+ [
+ "▁ac",
+ "cept"
+ ],
+ [
+ "▁",
+ "accept"
+ ],
+ [
+ "▁ex",
+ "act"
+ ],
+ [
+ ")",
+ "}"
+ ],
+ [
+ "af",
+ "t"
+ ],
+ [
+ "a",
+ "ft"
+ ],
+ [
+ "rol",
+ "ler"
+ ],
+ [
+ "roll",
+ "er"
+ ],
+ [
+ "о",
+ "н"
+ ],
+ [
+ "o",
+ "o"
+ ],
+ [
+ "Dat",
+ "e"
+ ],
+ [
+ "Da",
+ "te"
+ ],
+ [
+ "D",
+ "ate"
+ ],
+ [
+ "▁o",
+ "u"
+ ],
+ [
+ "▁",
+ "ou"
+ ],
+ [
+ "s",
+ "y"
+ ],
+ [
+ "▁pre",
+ "tty"
+ ],
+ [
+ "▁pret",
+ "ty"
+ ],
+ [
+ "▁im",
+ "age"
+ ],
+ [
+ "▁imag",
+ "e"
+ ],
+ [
+ "▁",
+ "image"
+ ],
+ [
+ "B",
+ "U"
+ ],
+ [
+ "▁term",
+ "s"
+ ],
+ [
+ "▁ter",
+ "ms"
+ ],
+ [
+ "▁s",
+ "earch"
+ ],
+ [
+ "▁se",
+ "arch"
+ ],
+ [
+ "▁sear",
+ "ch"
+ ],
+ [
+ "▁",
+ "search"
+ ],
+ [
+ "▁",
+ "è"
+ ],
+ [
+ "▁V",
+ "al"
+ ],
+ [
+ "▁Va",
+ "l"
+ ],
+ [
+ "▁",
+ "Val"
+ ],
+ [
+ "▁",
+ "‘"
+ ],
+ [
+ "▁D",
+ "av"
+ ],
+ [
+ "▁Da",
+ "v"
+ ],
+ [
+ "M",
+ "S"
+ ],
+ [
+ "sr",
+ "c"
+ ],
+ [
+ "s",
+ "rc"
+ ],
+ [
+ "ma",
+ "r"
+ ],
+ [
+ "m",
+ "ar"
+ ],
+ [
+ "in",
+ "cip"
+ ],
+ [
+ "inc",
+ "ip"
+ ],
+ [
+ "▁could",
+ "n"
+ ],
+ [
+ "ad",
+ "os"
+ ],
+ [
+ "ado",
+ "s"
+ ],
+ [
+ "▁d",
+ "ro"
+ ],
+ [
+ "▁dr",
+ "o"
+ ],
+ [
+ "▁",
+ "dro"
+ ],
+ [
+ "be",
+ "ta"
+ ],
+ [
+ "bet",
+ "a"
+ ],
+ [
+ "b",
+ "eta"
+ ],
+ [
+ "im",
+ "um"
+ ],
+ [
+ "▁min",
+ "utes"
+ ],
+ [
+ "▁minute",
+ "s"
+ ],
+ [
+ "▁minut",
+ "es"
+ ],
+ [
+ "▁g",
+ "rand"
+ ],
+ [
+ "▁gr",
+ "and"
+ ],
+ [
+ "▁gran",
+ "d"
+ ],
+ [
+ "▁gra",
+ "nd"
+ ],
+ [
+ "▁",
+ "grand"
+ ],
+ [
+ "▁",
+ "»"
+ ],
+ [
+ "▁O",
+ "ur"
+ ],
+ [
+ "▁",
+ "Our"
+ ],
+ [
+ "St",
+ "r"
+ ],
+ [
+ "S",
+ "tr"
+ ],
+ [
+ "VE",
+ "R"
+ ],
+ [
+ "V",
+ "ER"
+ ],
+ [
+ "ma",
+ "z"
+ ],
+ [
+ "m",
+ "az"
+ ],
+ [
+ "▁or",
+ "iginal"
+ ],
+ [
+ "▁orig",
+ "inal"
+ ],
+ [
+ "▁origin",
+ "al"
+ ],
+ [
+ "▁",
+ "original"
+ ],
+ [
+ "in",
+ "i"
+ ],
+ [
+ "i",
+ "ni"
+ ],
+ [
+ "▁c",
+ "oll"
+ ],
+ [
+ "▁col",
+ "l"
+ ],
+ [
+ "▁co",
+ "ll"
+ ],
+ [
+ "▁",
+ "coll"
+ ],
+ [
+ "lo",
+ "at"
+ ],
+ [
+ "▁o",
+ "s"
+ ],
+ [
+ "▁",
+ "os"
+ ],
+ [
+ "})",
+ ";"
+ ],
+ [
+ "}",
+ ");"
+ ],
+ [
+ "sum",
+ "mary"
+ ],
+ [
+ "▁w",
+ "all"
+ ],
+ [
+ "▁wa",
+ "ll"
+ ],
+ [
+ "▁wal",
+ "l"
+ ],
+ [
+ "▁",
+ "wall"
+ ],
+ [
+ "Col",
+ "or"
+ ],
+ [
+ "Co",
+ "lor"
+ ],
+ [
+ "▁v",
+ "ers"
+ ],
+ [
+ "▁ver",
+ "s"
+ ],
+ [
+ "▁ve",
+ "rs"
+ ],
+ [
+ "▁",
+ "vers"
+ ],
+ [
+ "▁d",
+ "ella"
+ ],
+ [
+ "▁de",
+ "lla"
+ ],
+ [
+ "▁del",
+ "la"
+ ],
+ [
+ "▁dell",
+ "a"
+ ],
+ [
+ "▁\"",
+ "\"\""
+ ],
+ [
+ "▁\"\"",
+ "\""
+ ],
+ [
+ "▁",
+ "\"\"\""
+ ],
+ [
+ "math",
+ "bf"
+ ],
+ [
+ "ze",
+ "r"
+ ],
+ [
+ "z",
+ "er"
+ ],
+ [
+ "au",
+ "r"
+ ],
+ [
+ "a",
+ "ur"
+ ],
+ [
+ "▁tr",
+ "ack"
+ ],
+ [
+ "▁tra",
+ "ck"
+ ],
+ [
+ "▁",
+ "track"
+ ],
+ [
+ "▁ass",
+ "oci"
+ ],
+ [
+ "▁",
+ "associ"
+ ],
+ [
+ "▁s",
+ "uff"
+ ],
+ [
+ "▁su",
+ "ff"
+ ],
+ [
+ "▁in",
+ "de"
+ ],
+ [
+ "▁i",
+ "nde"
+ ],
+ [
+ "▁ind",
+ "e"
+ ],
+ [
+ "▁",
+ "inde"
+ ],
+ [
+ "ag",
+ "ue"
+ ],
+ [
+ "agu",
+ "e"
+ ],
+ [
+ "a",
+ "gue"
+ ],
+ [
+ "▁A",
+ "pr"
+ ],
+ [
+ "▁Ap",
+ "r"
+ ],
+ [
+ "▁",
+ "Apr"
+ ],
+ [
+ "L",
+ "e"
+ ],
+ [
+ "ro",
+ "ups"
+ ],
+ [
+ "rou",
+ "ps"
+ ],
+ [
+ "roup",
+ "s"
+ ],
+ [
+ "bo",
+ "ard"
+ ],
+ [
+ "b",
+ "oard"
+ ],
+ [
+ "▁att",
+ "ack"
+ ],
+ [
+ "▁s",
+ "eries"
+ ],
+ [
+ "▁se",
+ "ries"
+ ],
+ [
+ "▁ser",
+ "ies"
+ ],
+ [
+ "▁serie",
+ "s"
+ ],
+ [
+ "▁",
+ "series"
+ ],
+ [
+ "▁in",
+ "stead"
+ ],
+ [
+ "▁inst",
+ "ead"
+ ],
+ [
+ "ha",
+ "m"
+ ],
+ [
+ "h",
+ "am"
+ ],
+ [
+ "bo",
+ "ok"
+ ],
+ [
+ "b",
+ "ook"
+ ],
+ [
+ "▁s",
+ "ix"
+ ],
+ [
+ "▁si",
+ "x"
+ ],
+ [
+ "▁",
+ "six"
+ ],
+ [
+ "▁R",
+ "ec"
+ ],
+ [
+ "▁Re",
+ "c"
+ ],
+ [
+ "▁",
+ "Rec"
+ ],
+ [
+ "▁c",
+ "oming"
+ ],
+ [
+ "▁com",
+ "ing"
+ ],
+ [
+ "▁co",
+ "ming"
+ ],
+ [
+ "▁",
+ "coming"
+ ],
+ [
+ "ur",
+ "t"
+ ],
+ [
+ "u",
+ "rt"
+ ],
+ [
+ "▁gl",
+ "obal"
+ ],
+ [
+ "▁glob",
+ "al"
+ ],
+ [
+ "▁glo",
+ "bal"
+ ],
+ [
+ "▁",
+ "global"
+ ],
+ [
+ "▁ne",
+ "cess"
+ ],
+ [
+ "▁neces",
+ "s"
+ ],
+ [
+ "▁",
+ "necess"
+ ],
+ [
+ "le",
+ "ge"
+ ],
+ [
+ "leg",
+ "e"
+ ],
+ [
+ "Po",
+ "s"
+ ],
+ [
+ "P",
+ "os"
+ ],
+ [
+ "▁le",
+ "ave"
+ ],
+ [
+ "▁",
+ "leave"
+ ],
+ [
+ "▁p",
+ "od"
+ ],
+ [
+ "▁po",
+ "d"
+ ],
+ [
+ "▁",
+ "pod"
+ ],
+ [
+ "ateg",
+ "ory"
+ ],
+ [
+ "ategor",
+ "y"
+ ],
+ [
+ "u",
+ "z"
+ ],
+ [
+ "▁de",
+ "ep"
+ ],
+ [
+ "▁",
+ "deep"
+ ],
+ [
+ "▁k",
+ "m"
+ ],
+ [
+ "▁",
+ "km"
+ ],
+ [
+ "▁out",
+ "side"
+ ],
+ [
+ "▁outs",
+ "ide"
+ ],
+ [
+ "ha",
+ "s"
+ ],
+ [
+ "h",
+ "as"
+ ],
+ [
+ "opt",
+ "ions"
+ ],
+ [
+ "option",
+ "s"
+ ],
+ [
+ "o",
+ "ptions"
+ ],
+ [
+ "▁S",
+ "m"
+ ],
+ [
+ "▁",
+ "Sm"
+ ],
+ [
+ "Su",
+ "b"
+ ],
+ [
+ "S",
+ "ub"
+ ],
+ [
+ "ro",
+ "ws"
+ ],
+ [
+ "row",
+ "s"
+ ],
+ [
+ "r",
+ "ows"
+ ],
+ [
+ "▁в",
+ "и"
+ ],
+ [
+ "▁",
+ "ви"
+ ],
+ [
+ "▁St",
+ "ates"
+ ],
+ [
+ "▁State",
+ "s"
+ ],
+ [
+ "▁Stat",
+ "es"
+ ],
+ [
+ "▁Sta",
+ "tes"
+ ],
+ [
+ "▁",
+ "States"
+ ],
+ [
+ "▁wr",
+ "ong"
+ ],
+ [
+ "▁how",
+ "ever"
+ ],
+ [
+ "▁s",
+ "em"
+ ],
+ [
+ "▁se",
+ "m"
+ ],
+ [
+ "▁",
+ "sem"
+ ],
+ [
+ "▁c",
+ "atch"
+ ],
+ [
+ "▁cat",
+ "ch"
+ ],
+ [
+ "▁",
+ "catch"
+ ],
+ [
+ "\")",
+ ","
+ ],
+ [
+ "\"",
+ "),"
+ ],
+ [
+ "mod",
+ "el"
+ ],
+ [
+ "mode",
+ "l"
+ ],
+ [
+ "mo",
+ "del"
+ ],
+ [
+ "▁h",
+ "ttp"
+ ],
+ [
+ "▁htt",
+ "p"
+ ],
+ [
+ "▁",
+ "http"
+ ],
+ [
+ "▁o",
+ "ption"
+ ],
+ [
+ "▁opt",
+ "ion"
+ ],
+ [
+ "▁",
+ "option"
+ ],
+ [
+ "ri",
+ "e"
+ ],
+ [
+ "r",
+ "ie"
+ ],
+ [
+ "▁с",
+ "та"
+ ],
+ [
+ "▁ст",
+ "а"
+ ],
+ [
+ "▁",
+ "ста"
+ ],
+ [
+ "▁ä",
+ "r"
+ ],
+ [
+ "▁",
+ "är"
+ ],
+ [
+ "▁en",
+ "joy"
+ ],
+ [
+ "▁enjo",
+ "y"
+ ],
+ [
+ "n",
+ "u"
+ ],
+ [
+ "▁p",
+ "as"
+ ],
+ [
+ "▁pa",
+ "s"
+ ],
+ [
+ "▁",
+ "pas"
+ ],
+ [
+ "▁a",
+ "mount"
+ ],
+ [
+ "▁am",
+ "ount"
+ ],
+ [
+ "▁",
+ "amount"
+ ],
+ [
+ "▁res",
+ "pons"
+ ],
+ [
+ "▁respon",
+ "s"
+ ],
+ [
+ "▁resp",
+ "ons"
+ ],
+ [
+ "▁",
+ "respons"
+ ],
+ [
+ "▁In",
+ "tern"
+ ],
+ [
+ "▁Inter",
+ "n"
+ ],
+ [
+ "▁Int",
+ "ern"
+ ],
+ [
+ "▁",
+ "Intern"
+ ],
+ [
+ "▁my",
+ "self"
+ ],
+ [
+ "▁o",
+ "pp"
+ ],
+ [
+ "▁op",
+ "p"
+ ],
+ [
+ "▁",
+ "opp"
+ ],
+ [
+ "▁S",
+ "im"
+ ],
+ [
+ "▁Si",
+ "m"
+ ],
+ [
+ "▁",
+ "Sim"
+ ],
+ [
+ "▁s",
+ "ens"
+ ],
+ [
+ "▁se",
+ "ns"
+ ],
+ [
+ "▁sen",
+ "s"
+ ],
+ [
+ "E",
+ "d"
+ ],
+ [
+ "▁(",
+ "\\"
+ ],
+ [
+ "▁",
+ "(\\"
+ ],
+ [
+ "▁stud",
+ "ents"
+ ],
+ [
+ "▁student",
+ "s"
+ ],
+ [
+ "но",
+ "в"
+ ],
+ [
+ "н",
+ "ов"
+ ],
+ [
+ "▁point",
+ "s"
+ ],
+ [
+ "▁",
+ "points"
+ ],
+ [
+ "ar",
+ "ning"
+ ],
+ [
+ "arn",
+ "ing"
+ ],
+ [
+ "U",
+ "P"
+ ],
+ [
+ "el",
+ "ling"
+ ],
+ [
+ "ell",
+ "ing"
+ ],
+ [
+ "elli",
+ "ng"
+ ],
+ [
+ "▁c",
+ "annot"
+ ],
+ [
+ "▁can",
+ "not"
+ ],
+ [
+ "B",
+ "e"
+ ],
+ [
+ "▁l",
+ "ength"
+ ],
+ [
+ "▁le",
+ "ngth"
+ ],
+ [
+ "▁",
+ "length"
+ ],
+ [
+ "nu",
+ "ll"
+ ],
+ [
+ "n",
+ "ull"
+ ],
+ [
+ "ui",
+ "nt"
+ ],
+ [
+ "u",
+ "int"
+ ],
+ [
+ "wi",
+ "se"
+ ],
+ [
+ "w",
+ "ise"
+ ],
+ [
+ "▁d",
+ "ouble"
+ ],
+ [
+ "▁dou",
+ "ble"
+ ],
+ [
+ "▁doub",
+ "le"
+ ],
+ [
+ "▁",
+ "double"
+ ],
+ [
+ "ig",
+ "e"
+ ],
+ [
+ "i",
+ "ge"
+ ],
+ [
+ "is",
+ "ta"
+ ],
+ [
+ "ist",
+ "a"
+ ],
+ [
+ "i",
+ "sta"
+ ],
+ [
+ "▁est",
+ "ab"
+ ],
+ [
+ "▁es",
+ "tab"
+ ],
+ [
+ "▁esta",
+ "b"
+ ],
+ [
+ "an",
+ "ch"
+ ],
+ [
+ "anc",
+ "h"
+ ],
+ [
+ "▁a",
+ "go"
+ ],
+ [
+ "▁ag",
+ "o"
+ ],
+ [
+ "▁",
+ "ago"
+ ],
+ [
+ "▁b",
+ "ound"
+ ],
+ [
+ "▁bo",
+ "und"
+ ],
+ [
+ "▁bou",
+ "nd"
+ ],
+ [
+ "▁",
+ "bound"
+ ],
+ [
+ "▁f",
+ "a"
+ ],
+ [
+ "▁",
+ "fa"
+ ],
+ [
+ "▁c",
+ "lean"
+ ],
+ [
+ "▁cle",
+ "an"
+ ],
+ [
+ "▁",
+ "clean"
+ ],
+ [
+ "▁sim",
+ "ple"
+ ],
+ [
+ "▁simpl",
+ "e"
+ ],
+ [
+ "▁",
+ "simple"
+ ],
+ [
+ "m",
+ "i"
+ ],
+ [
+ "####",
+ "####"
+ ],
+ [
+ "if",
+ "ier"
+ ],
+ [
+ "ifi",
+ "er"
+ ],
+ [
+ "▁Gener",
+ "al"
+ ],
+ [
+ "▁Gen",
+ "eral"
+ ],
+ [
+ "▁Gene",
+ "ral"
+ ],
+ [
+ "▁",
+ "General"
+ ],
+ [
+ "▁se",
+ "emed"
+ ],
+ [
+ "▁see",
+ "med"
+ ],
+ [
+ "▁seem",
+ "ed"
+ ],
+ [
+ "en",
+ "a"
+ ],
+ [
+ "e",
+ "na"
+ ],
+ [
+ "▁a",
+ "ge"
+ ],
+ [
+ "▁ag",
+ "e"
+ ],
+ [
+ "▁",
+ "age"
+ ],
+ [
+ "но",
+ "й"
+ ],
+ [
+ "end",
+ "if"
+ ],
+ [
+ "A",
+ "A"
+ ],
+ [
+ "▁c",
+ "aus"
+ ],
+ [
+ "▁ca",
+ "us"
+ ],
+ [
+ "▁e",
+ "duc"
+ ],
+ [
+ "▁ed",
+ "uc"
+ ],
+ [
+ "▁",
+ "educ"
+ ],
+ [
+ "▁c",
+ "ell"
+ ],
+ [
+ "▁ce",
+ "ll"
+ ],
+ [
+ "▁cel",
+ "l"
+ ],
+ [
+ "▁",
+ "cell"
+ ],
+ [
+ "Ge",
+ "ner"
+ ],
+ [
+ "Gen",
+ "er"
+ ],
+ [
+ "G",
+ "ener"
+ ],
+ [
+ "sp",
+ "ace"
+ ],
+ [
+ "s",
+ "pace"
+ ],
+ [
+ "▁Y",
+ "our"
+ ],
+ [
+ "▁You",
+ "r"
+ ],
+ [
+ "▁",
+ "Your"
+ ],
+ [
+ "▁be",
+ "aut"
+ ],
+ [
+ "g",
+ "t"
+ ],
+ [
+ "▁l",
+ "imit"
+ ],
+ [
+ "▁li",
+ "mit"
+ ],
+ [
+ "▁lim",
+ "it"
+ ],
+ [
+ "▁",
+ "limit"
+ ],
+ [
+ "▁d",
+ "ate"
+ ],
+ [
+ "▁da",
+ "te"
+ ],
+ [
+ "▁dat",
+ "e"
+ ],
+ [
+ "▁",
+ "date"
+ ],
+ [
+ "Ut",
+ "il"
+ ],
+ [
+ "U",
+ "til"
+ ],
+ [
+ "▁N",
+ "ational"
+ ],
+ [
+ "▁Nat",
+ "ional"
+ ],
+ [
+ "▁Nation",
+ "al"
+ ],
+ [
+ "▁",
+ "National"
+ ],
+ [
+ "ow",
+ "s"
+ ],
+ [
+ "o",
+ "ws"
+ ],
+ [
+ "pa",
+ "t"
+ ],
+ [
+ "p",
+ "at"
+ ],
+ [
+ "qu",
+ "ad"
+ ],
+ [
+ "▁o",
+ "k"
+ ],
+ [
+ "▁",
+ "ok"
+ ],
+ [
+ "▁",
+ "И"
+ ],
+ [
+ "ar",
+ "th"
+ ],
+ [
+ "art",
+ "h"
+ ],
+ [
+ "ha",
+ "t"
+ ],
+ [
+ "h",
+ "at"
+ ],
+ [
+ "▁comm",
+ "unity"
+ ],
+ [
+ "▁commun",
+ "ity"
+ ],
+ [
+ "ou",
+ "l"
+ ],
+ [
+ "o",
+ "ul"
+ ],
+ [
+ "▁e",
+ "conom"
+ ],
+ [
+ "▁ec",
+ "onom"
+ ],
+ [
+ "▁",
+ "econom"
+ ],
+ [
+ "Com",
+ "ponent"
+ ],
+ [
+ "bo",
+ "r"
+ ],
+ [
+ "b",
+ "or"
+ ],
+ [
+ "us",
+ "ion"
+ ],
+ [
+ "▁be",
+ "low"
+ ],
+ [
+ "▁bel",
+ "ow"
+ ],
+ [
+ "ear",
+ "ch"
+ ],
+ [
+ "e",
+ "arch"
+ ],
+ [
+ "or",
+ "es"
+ ],
+ [
+ "ore",
+ "s"
+ ],
+ [
+ "o",
+ "res"
+ ],
+ [
+ "ba",
+ "n"
+ ],
+ [
+ "b",
+ "an"
+ ],
+ [
+ "▁Aug",
+ "ust"
+ ],
+ [
+ "▁fur",
+ "ther"
+ ],
+ [
+ "sig",
+ "ma"
+ ],
+ [
+ "s",
+ "igma"
+ ],
+ [
+ "▁h",
+ "a"
+ ],
+ [
+ "▁",
+ "ha"
+ ],
+ [
+ "j",
+ "i"
+ ],
+ [
+ "▁com",
+ "put"
+ ],
+ [
+ "▁comp",
+ "ut"
+ ],
+ [
+ "▁",
+ "comput"
+ ],
+ [
+ "г",
+ "ра"
+ ],
+ [
+ "▁N",
+ "one"
+ ],
+ [
+ "▁No",
+ "ne"
+ ],
+ [
+ "▁Non",
+ "e"
+ ],
+ [
+ "▁",
+ "None"
+ ],
+ [
+ "▁t",
+ "er"
+ ],
+ [
+ "▁te",
+ "r"
+ ],
+ [
+ "▁",
+ "ter"
+ ],
+ [
+ "▁any",
+ "one"
+ ],
+ [
+ "▁t",
+ "ask"
+ ],
+ [
+ "▁ta",
+ "sk"
+ ],
+ [
+ "▁",
+ "task"
+ ],
+ [
+ "en",
+ "te"
+ ],
+ [
+ "ent",
+ "e"
+ ],
+ [
+ "e",
+ "nte"
+ ],
+ [
+ "pos",
+ "ition"
+ ],
+ [
+ "pp",
+ "ed"
+ ],
+ [
+ "ppe",
+ "d"
+ ],
+ [
+ "p",
+ "ped"
+ ],
+ [
+ "▁a",
+ "us"
+ ],
+ [
+ "▁au",
+ "s"
+ ],
+ [
+ "▁",
+ "aus"
+ ],
+ [
+ "Att",
+ "ribute"
+ ],
+ [
+ "Attrib",
+ "ute"
+ ],
+ [
+ "re",
+ "q"
+ ],
+ [
+ "r",
+ "eq"
+ ],
+ [
+ "ad",
+ "dr"
+ ],
+ [
+ "add",
+ "r"
+ ],
+ [
+ "li",
+ "ght"
+ ],
+ [
+ "lig",
+ "ht"
+ ],
+ [
+ "l",
+ "ight"
+ ],
+ [
+ "ш",
+ "е"
+ ],
+ [
+ "▁a",
+ "rm"
+ ],
+ [
+ "▁ar",
+ "m"
+ ],
+ [
+ "▁",
+ "arm"
+ ],
+ [
+ "co",
+ "ver"
+ ],
+ [
+ "cov",
+ "er"
+ ],
+ [
+ "c",
+ "over"
+ ],
+ [
+ "up",
+ "port"
+ ],
+ [
+ "upp",
+ "ort"
+ ],
+ [
+ "▁G",
+ "l"
+ ],
+ [
+ "▁",
+ "Gl"
+ ],
+ [
+ "▁S",
+ "an"
+ ],
+ [
+ "▁Sa",
+ "n"
+ ],
+ [
+ "▁",
+ "San"
+ ],
+ [
+ "▁wr",
+ "iting"
+ ],
+ [
+ "▁writ",
+ "ing"
+ ],
+ [
+ "▁",
+ "writing"
+ ],
+ [
+ "▁l",
+ "ost"
+ ],
+ [
+ "▁lo",
+ "st"
+ ],
+ [
+ "▁los",
+ "t"
+ ],
+ [
+ "▁M",
+ "ark"
+ ],
+ [
+ "▁Mar",
+ "k"
+ ],
+ [
+ "▁",
+ "Mark"
+ ],
+ [
+ "▁g",
+ "re"
+ ],
+ [
+ "▁gr",
+ "e"
+ ],
+ [
+ "▁",
+ "gre"
+ ],
+ [
+ "TY",
+ "PE"
+ ],
+ [
+ "T",
+ "YPE"
+ ],
+ [
+ "▁S",
+ "outh"
+ ],
+ [
+ "▁So",
+ "uth"
+ ],
+ [
+ "▁Sou",
+ "th"
+ ],
+ [
+ "▁Sout",
+ "h"
+ ],
+ [
+ "▁",
+ "South"
+ ],
+ [
+ "▁per",
+ "fect"
+ ],
+ [
+ "▁perf",
+ "ect"
+ ],
+ [
+ "▁pack",
+ "age"
+ ],
+ [
+ "▁",
+ "package"
+ ],
+ [
+ "▁in",
+ "fl"
+ ],
+ [
+ "▁inf",
+ "l"
+ ],
+ [
+ "▁",
+ "infl"
+ ],
+ [
+ "ha",
+ "ps"
+ ],
+ [
+ "h",
+ "aps"
+ ],
+ [
+ "▁A",
+ "ng"
+ ],
+ [
+ "▁An",
+ "g"
+ ],
+ [
+ "▁",
+ "Ang"
+ ],
+ [
+ "res",
+ "pon"
+ ],
+ [
+ "resp",
+ "on"
+ ],
+ [
+ "ri",
+ "s"
+ ],
+ [
+ "r",
+ "is"
+ ],
+ [
+ "pt",
+ "ember"
+ ],
+ [
+ "pte",
+ "mber"
+ ],
+ [
+ "▁build",
+ "ing"
+ ],
+ [
+ "▁",
+ "building"
+ ],
+ [
+ "VA",
+ "L"
+ ],
+ [
+ "V",
+ "AL"
+ ],
+ [
+ "fr",
+ "ee"
+ ],
+ [
+ "fre",
+ "e"
+ ],
+ [
+ "f",
+ "ree"
+ ],
+ [
+ "▁c",
+ "e"
+ ],
+ [
+ "▁",
+ "ce"
+ ],
+ [
+ "H",
+ "T"
+ ],
+ [
+ "▁F",
+ "rom"
+ ],
+ [
+ "▁Fr",
+ "om"
+ ],
+ [
+ "▁Fro",
+ "m"
+ ],
+ [
+ "▁",
+ "From"
+ ],
+ [
+ "d",
+ "s"
+ ],
+ [
+ "ro",
+ "y"
+ ],
+ [
+ "r",
+ "oy"
+ ],
+ [
+ "ach",
+ "ine"
+ ],
+ [
+ "achi",
+ "ne"
+ ],
+ [
+ "no",
+ "wn"
+ ],
+ [
+ "now",
+ "n"
+ ],
+ [
+ "n",
+ "own"
+ ],
+ [
+ "▁sa",
+ "ying"
+ ],
+ [
+ "▁say",
+ "ing"
+ ],
+ [
+ "▁б",
+ "ы"
+ ],
+ [
+ "▁",
+ "бы"
+ ],
+ [
+ "o",
+ "e"
+ ],
+ [
+ "Re",
+ "f"
+ ],
+ [
+ "R",
+ "ef"
+ ],
+ [
+ "▁net",
+ "work"
+ ],
+ [
+ "▁",
+ "network"
+ ],
+ [
+ "par",
+ "ent"
+ ],
+ [
+ "pa",
+ "rent"
+ ],
+ [
+ "pare",
+ "nt"
+ ],
+ [
+ "paren",
+ "t"
+ ],
+ [
+ "p",
+ "arent"
+ ],
+ [
+ "ug",
+ "e"
+ ],
+ [
+ "u",
+ "ge"
+ ],
+ [
+ "▁sim",
+ "ilar"
+ ],
+ [
+ ">",
+ "\r"
+ ],
+ [
+ "Build",
+ "er"
+ ],
+ [
+ "B",
+ "uilder"
+ ],
+ [
+ "▁l",
+ "iving"
+ ],
+ [
+ "▁li",
+ "ving"
+ ],
+ [
+ "▁liv",
+ "ing"
+ ],
+ [
+ "▁contin",
+ "ue"
+ ],
+ [
+ "▁continu",
+ "e"
+ ],
+ [
+ "▁",
+ "continue"
+ ],
+ [
+ "an",
+ "ger"
+ ],
+ [
+ "ang",
+ "er"
+ ],
+ [
+ "ange",
+ "r"
+ ],
+ [
+ "▁R",
+ "ed"
+ ],
+ [
+ "▁Re",
+ "d"
+ ],
+ [
+ "▁",
+ "Red"
+ ],
+ [
+ "▁h",
+ "air"
+ ],
+ [
+ "▁ha",
+ "ir"
+ ],
+ [
+ "an",
+ "ced"
+ ],
+ [
+ "ance",
+ "d"
+ ],
+ [
+ "anc",
+ "ed"
+ ],
+ [
+ "ia",
+ "ns"
+ ],
+ [
+ "ian",
+ "s"
+ ],
+ [
+ "i",
+ "ans"
+ ],
+ [
+ "▁d",
+ "ead"
+ ],
+ [
+ "▁de",
+ "ad"
+ ],
+ [
+ "▁",
+ "dead"
+ ],
+ [
+ "▁bo",
+ "olean"
+ ],
+ [
+ "▁",
+ "boolean"
+ ],
+ [
+ "ic",
+ "ation"
+ ],
+ [
+ "▁д",
+ "е"
+ ],
+ [
+ "▁",
+ "де"
+ ],
+ [
+ "▁cl",
+ "ient"
+ ],
+ [
+ "▁",
+ "client"
+ ],
+ [
+ "uc",
+ "t"
+ ],
+ [
+ "u",
+ "ct"
+ ],
+ [
+ "▁",
+ "•"
+ ],
+ [
+ "S",
+ "P"
+ ],
+ [
+ "ol",
+ "der"
+ ],
+ [
+ "old",
+ "er"
+ ],
+ [
+ "п",
+ "е"
+ ],
+ [
+ "ud",
+ "io"
+ ],
+ [
+ "udi",
+ "o"
+ ],
+ [
+ "▁d",
+ "eg"
+ ],
+ [
+ "▁de",
+ "g"
+ ],
+ [
+ "▁",
+ "deg"
+ ],
+ [
+ "as",
+ "ing"
+ ],
+ [
+ "asi",
+ "ng"
+ ],
+ [
+ "a",
+ "sing"
+ ],
+ [
+ "▁st",
+ "ep"
+ ],
+ [
+ "▁ste",
+ "p"
+ ],
+ [
+ "▁",
+ "step"
+ ],
+ [
+ "▁p",
+ "ers"
+ ],
+ [
+ "▁per",
+ "s"
+ ],
+ [
+ "▁pe",
+ "rs"
+ ],
+ [
+ "▁",
+ "pers"
+ ],
+ [
+ "ç",
+ "ão"
+ ],
+ [
+ "ob",
+ "j"
+ ],
+ [
+ "o",
+ "z"
+ ],
+ [
+ "ul",
+ "a"
+ ],
+ [
+ "u",
+ "la"
+ ],
+ [
+ "▁r",
+ "ound"
+ ],
+ [
+ "▁ro",
+ "und"
+ ],
+ [
+ "▁rou",
+ "nd"
+ ],
+ [
+ "▁",
+ "round"
+ ],
+ [
+ "▁u",
+ "pon"
+ ],
+ [
+ "▁up",
+ "on"
+ ],
+ [
+ "▁re",
+ "source"
+ ],
+ [
+ "▁res",
+ "ource"
+ ],
+ [
+ "▁",
+ "resource"
+ ],
+ [
+ "▁val",
+ "id"
+ ],
+ [
+ "▁",
+ "valid"
+ ],
+ [
+ "▁I",
+ "I"
+ ],
+ [
+ "▁",
+ "II"
+ ],
+ [
+ "bu",
+ "g"
+ ],
+ [
+ "b",
+ "ug"
+ ],
+ [
+ "st",
+ "d"
+ ],
+ [
+ "s",
+ "td"
+ ],
+ [
+ "▁a",
+ "ng"
+ ],
+ [
+ "▁an",
+ "g"
+ ],
+ [
+ "▁",
+ "ang"
+ ],
+ [
+ "sp",
+ "an"
+ ],
+ [
+ "s",
+ "pan"
+ ],
+ [
+ "po",
+ "l"
+ ],
+ [
+ "p",
+ "ol"
+ ],
+ [
+ "ial",
+ "og"
+ ],
+ [
+ "ia",
+ "log"
+ ],
+ [
+ "▁p",
+ "hot"
+ ],
+ [
+ "▁ph",
+ "ot"
+ ],
+ [
+ "?",
+ "'"
+ ],
+ [
+ "D",
+ "B"
+ ],
+ [
+ "▁F",
+ "in"
+ ],
+ [
+ "▁Fi",
+ "n"
+ ],
+ [
+ "▁",
+ "Fin"
+ ],
+ [
+ "V",
+ "E"
+ ],
+ [
+ "E",
+ "m"
+ ],
+ [
+ "▁c",
+ "am"
+ ],
+ [
+ "▁ca",
+ "m"
+ ],
+ [
+ "▁",
+ "cam"
+ ],
+ [
+ "tar",
+ "get"
+ ],
+ [
+ "t",
+ "arget"
+ ],
+ [
+ "pe",
+ "cted"
+ ],
+ [
+ "pect",
+ "ed"
+ ],
+ [
+ "pec",
+ "ted"
+ ],
+ [
+ "He",
+ "l"
+ ],
+ [
+ "H",
+ "el"
+ ],
+ [
+ "▁u",
+ "t"
+ ],
+ [
+ "▁",
+ "ut"
+ ],
+ [
+ "▁T",
+ "est"
+ ],
+ [
+ "▁Te",
+ "st"
+ ],
+ [
+ "▁Tes",
+ "t"
+ ],
+ [
+ "▁",
+ "Test"
+ ],
+ [
+ "▁t",
+ "own"
+ ],
+ [
+ "▁to",
+ "wn"
+ ],
+ [
+ "▁tow",
+ "n"
+ ],
+ [
+ "▁",
+ "town"
+ ],
+ [
+ "al",
+ "ign"
+ ],
+ [
+ "ali",
+ "gn"
+ ],
+ [
+ "▁we",
+ "bs"
+ ],
+ [
+ "▁web",
+ "s"
+ ],
+ [
+ "in",
+ "ner"
+ ],
+ [
+ "inn",
+ "er"
+ ],
+ [
+ "au",
+ "gh"
+ ],
+ [
+ "aug",
+ "h"
+ ],
+ [
+ "a",
+ "ugh"
+ ],
+ [
+ "▁ex",
+ "cept"
+ ],
+ [
+ "▁",
+ "except"
+ ],
+ [
+ "▁init",
+ "ial"
+ ],
+ [
+ "▁initi",
+ "al"
+ ],
+ [
+ "▁",
+ "initial"
+ ],
+ [
+ "en",
+ "ty"
+ ],
+ [
+ "ent",
+ "y"
+ ],
+ [
+ "lic",
+ "h"
+ ],
+ [
+ "li",
+ "ch"
+ ],
+ [
+ "l",
+ "ich"
+ ],
+ [
+ "▁A",
+ "ut"
+ ],
+ [
+ "▁Au",
+ "t"
+ ],
+ [
+ "▁",
+ "Aut"
+ ],
+ [
+ "to",
+ "p"
+ ],
+ [
+ "t",
+ "op"
+ ],
+ [
+ "▁f",
+ "ail"
+ ],
+ [
+ "▁fa",
+ "il"
+ ],
+ [
+ "▁",
+ "fail"
+ ],
+ [
+ "on",
+ "a"
+ ],
+ [
+ "o",
+ "na"
+ ],
+ [
+ "▁ben",
+ "ef"
+ ],
+ [
+ "an",
+ "ks"
+ ],
+ [
+ "ank",
+ "s"
+ ],
+ [
+ "is",
+ "che"
+ ],
+ [
+ "isch",
+ "e"
+ ],
+ [
+ "isc",
+ "he"
+ ],
+ [
+ "i",
+ "sche"
+ ],
+ [
+ ".",
+ "*"
+ ],
+ [
+ "▁sign",
+ "ific"
+ ],
+ [
+ "▁cont",
+ "act"
+ ],
+ [
+ "▁",
+ "contact"
+ ],
+ [
+ "Re",
+ "c"
+ ],
+ [
+ "R",
+ "ec"
+ ],
+ [
+ "ar",
+ "io"
+ ],
+ [
+ "ari",
+ "o"
+ ],
+ [
+ "a",
+ "rio"
+ ],
+ [
+ "ot",
+ "tom"
+ ],
+ [
+ "ott",
+ "om"
+ ],
+ [
+ "otto",
+ "m"
+ ],
+ [
+ "▁rel",
+ "ationship"
+ ],
+ [
+ "▁relations",
+ "hip"
+ ],
+ [
+ "▁relation",
+ "ship"
+ ],
+ [
+ "])",
+ ";"
+ ],
+ [
+ "]",
+ ");"
+ ],
+ [
+ "▁Н",
+ "а"
+ ],
+ [
+ "▁",
+ "На"
+ ],
+ [
+ "He",
+ "ad"
+ ],
+ [
+ "H",
+ "ead"
+ ],
+ [
+ "form",
+ "at"
+ ],
+ [
+ "for",
+ "mat"
+ ],
+ [
+ "▁é",
+ "t"
+ ],
+ [
+ "▁",
+ "ét"
+ ],
+ [
+ "▁M",
+ "ore"
+ ],
+ [
+ "▁Mor",
+ "e"
+ ],
+ [
+ "▁Mo",
+ "re"
+ ],
+ [
+ "▁",
+ "More"
+ ],
+ [
+ "act",
+ "ory"
+ ],
+ [
+ "actor",
+ "y"
+ ],
+ [
+ "port",
+ "un"
+ ],
+ [
+ "+",
+ "\\"
+ ],
+ [
+ "▁sim",
+ "ply"
+ ],
+ [
+ "▁simpl",
+ "y"
+ ],
+ [
+ "▁e",
+ "p"
+ ],
+ [
+ "▁",
+ "ep"
+ ],
+ [
+ "▁R",
+ "uss"
+ ],
+ [
+ "▁Ru",
+ "ss"
+ ],
+ [
+ "▁Rus",
+ "s"
+ ],
+ [
+ "n",
+ "í"
+ ],
+ [
+ "u",
+ "a"
+ ],
+ [
+ "er",
+ "c"
+ ],
+ [
+ "e",
+ "rc"
+ ],
+ [
+ "▁long",
+ "er"
+ ],
+ [
+ "▁lon",
+ "ger"
+ ],
+ [
+ "in",
+ "ition"
+ ],
+ [
+ "init",
+ "ion"
+ ],
+ [
+ "ect",
+ "or"
+ ],
+ [
+ "ec",
+ "tor"
+ ],
+ [
+ "e",
+ "ctor"
+ ],
+ [
+ "apt",
+ "ion"
+ ],
+ [
+ "a",
+ "ption"
+ ],
+ [
+ "▁prof",
+ "ess"
+ ],
+ [
+ "▁profes",
+ "s"
+ ],
+ [
+ "▁M",
+ "us"
+ ],
+ [
+ "▁Mu",
+ "s"
+ ],
+ [
+ "▁",
+ "Mus"
+ ],
+ [
+ "il",
+ "ities"
+ ],
+ [
+ "ili",
+ "ties"
+ ],
+ [
+ "è",
+ "s"
+ ],
+ [
+ "▁A",
+ "ct"
+ ],
+ [
+ "▁Ac",
+ "t"
+ ],
+ [
+ "▁",
+ "Act"
+ ],
+ [
+ "off",
+ "set"
+ ],
+ [
+ "offs",
+ "et"
+ ],
+ [
+ "▁i",
+ "ll"
+ ],
+ [
+ "▁il",
+ "l"
+ ],
+ [
+ "▁",
+ "ill"
+ ],
+ [
+ "ba",
+ "nd"
+ ],
+ [
+ "ban",
+ "d"
+ ],
+ [
+ "b",
+ "and"
+ ],
+ [
+ "▁A",
+ "g"
+ ],
+ [
+ "▁",
+ "Ag"
+ ],
+ [
+ "▁П",
+ "о"
+ ],
+ [
+ "▁",
+ "По"
+ ],
+ [
+ "б",
+ "и"
+ ],
+ [
+ "cont",
+ "ent"
+ ],
+ [
+ "ic",
+ "on"
+ ],
+ [
+ "ico",
+ "n"
+ ],
+ [
+ "i",
+ "con"
+ ],
+ [
+ "▁work",
+ "s"
+ ],
+ [
+ "▁wor",
+ "ks"
+ ],
+ [
+ "▁",
+ "works"
+ ],
+ [
+ "yn",
+ "am"
+ ],
+ [
+ "yna",
+ "m"
+ ],
+ [
+ "y",
+ "nam"
+ ],
+ [
+ "pl",
+ "ement"
+ ],
+ [
+ "ple",
+ "ment"
+ ],
+ [
+ "p",
+ "lement"
+ ],
+ [
+ "Res",
+ "ource"
+ ],
+ [
+ "Re",
+ "source"
+ ],
+ [
+ "Act",
+ "ion"
+ ],
+ [
+ "A",
+ "ction"
+ ],
+ [
+ "▁diff",
+ "icult"
+ ],
+ [
+ "▁W",
+ "est"
+ ],
+ [
+ "▁We",
+ "st"
+ ],
+ [
+ "▁Wes",
+ "t"
+ ],
+ [
+ "▁",
+ "West"
+ ],
+ [
+ "▁v",
+ "ideo"
+ ],
+ [
+ "▁vide",
+ "o"
+ ],
+ [
+ "▁",
+ "video"
+ ],
+ [
+ "▁T",
+ "HE"
+ ],
+ [
+ "▁TH",
+ "E"
+ ],
+ [
+ "▁",
+ "THE"
+ ],
+ [
+ "▁de",
+ "cl"
+ ],
+ [
+ "▁dec",
+ "l"
+ ],
+ [
+ "▁",
+ "decl"
+ ],
+ [
+ "on",
+ "don"
+ ],
+ [
+ "ond",
+ "on"
+ ],
+ [
+ "ondo",
+ "n"
+ ],
+ [
+ "de",
+ "d"
+ ],
+ [
+ "d",
+ "ed"
+ ],
+ [
+ "}{",
+ "\\"
+ ],
+ [
+ "}",
+ "{\\"
+ ],
+ [
+ "oc",
+ "r"
+ ],
+ [
+ "o",
+ "cr"
+ ],
+ [
+ "▁C",
+ "ity"
+ ],
+ [
+ "▁Cit",
+ "y"
+ ],
+ [
+ "▁Ci",
+ "ty"
+ ],
+ [
+ "▁",
+ "City"
+ ],
+ [
+ "▁",
+ "я"
+ ],
+ [
+ "ue",
+ "r"
+ ],
+ [
+ "u",
+ "er"
+ ],
+ [
+ "c",
+ "z"
+ ],
+ [
+ "▁im",
+ "ag"
+ ],
+ [
+ "▁i",
+ "mag"
+ ],
+ [
+ "▁",
+ "imag"
+ ],
+ [
+ "c",
+ "r"
+ ],
+ [
+ "et",
+ "e"
+ ],
+ [
+ "e",
+ "te"
+ ],
+ [
+ "id",
+ "get"
+ ],
+ [
+ "idge",
+ "t"
+ ],
+ [
+ "▁M",
+ "od"
+ ],
+ [
+ "▁Mo",
+ "d"
+ ],
+ [
+ "▁",
+ "Mod"
+ ],
+ [
+ "▁for",
+ "ward"
+ ],
+ [
+ "▁",
+ "forward"
+ ],
+ [
+ "▁p",
+ "ict"
+ ],
+ [
+ "▁pi",
+ "ct"
+ ],
+ [
+ "▁pic",
+ "t"
+ ],
+ [
+ "or",
+ "ge"
+ ],
+ [
+ "org",
+ "e"
+ ],
+ [
+ "▁sub",
+ "ject"
+ ],
+ [
+ "▁",
+ "subject"
+ ],
+ [
+ "up",
+ "date"
+ ],
+ [
+ "at",
+ "tle"
+ ],
+ [
+ "att",
+ "le"
+ ],
+ [
+ "s",
+ "a"
+ ],
+ [
+ "▁A",
+ "nt"
+ ],
+ [
+ "▁An",
+ "t"
+ ],
+ [
+ "▁",
+ "Ant"
+ ],
+ [
+ "▁r",
+ "unning"
+ ],
+ [
+ "▁run",
+ "ning"
+ ],
+ [
+ "▁",
+ "running"
+ ],
+ [
+ "▁s",
+ "al"
+ ],
+ [
+ "▁sa",
+ "l"
+ ],
+ [
+ "▁",
+ "sal"
+ ],
+ [
+ "con",
+ "ne"
+ ],
+ [
+ "conn",
+ "e"
+ ],
+ [
+ "c",
+ "onne"
+ ],
+ [
+ "▁out",
+ "put"
+ ],
+ [
+ "▁",
+ "output"
+ ],
+ [
+ "ad",
+ "ata"
+ ],
+ [
+ "ada",
+ "ta"
+ ],
+ [
+ "a",
+ "data"
+ ],
+ [
+ "M",
+ "L"
+ ],
+ [
+ "Che",
+ "ck"
+ ],
+ [
+ "C",
+ "heck"
+ ],
+ [
+ "led",
+ "ge"
+ ],
+ [
+ "l",
+ "edge"
+ ],
+ [
+ "▁p",
+ "aper"
+ ],
+ [
+ "▁pa",
+ "per"
+ ],
+ [
+ "▁pap",
+ "er"
+ ],
+ [
+ "▁",
+ "paper"
+ ],
+ [
+ "param",
+ "s"
+ ],
+ [
+ "par",
+ "ams"
+ ],
+ [
+ "para",
+ "ms"
+ ],
+ [
+ "av",
+ "y"
+ ],
+ [
+ "a",
+ "vy"
+ ],
+ [
+ "▁a",
+ "f"
+ ],
+ [
+ "▁",
+ "af"
+ ],
+ [
+ "▁e",
+ "ine"
+ ],
+ [
+ "▁ein",
+ "e"
+ ],
+ [
+ "▁j",
+ "our"
+ ],
+ [
+ "▁jo",
+ "ur"
+ ],
+ [
+ "▁jou",
+ "r"
+ ],
+ [
+ "▁",
+ "jour"
+ ],
+ [
+ "A",
+ "Y"
+ ],
+ [
+ "▁it",
+ "self"
+ ],
+ [
+ "▁its",
+ "elf"
+ ],
+ [
+ "▁S",
+ "tr"
+ ],
+ [
+ "▁St",
+ "r"
+ ],
+ [
+ "▁",
+ "Str"
+ ],
+ [
+ "st",
+ "yle"
+ ],
+ [
+ "sty",
+ "le"
+ ],
+ [
+ "Th",
+ "at"
+ ],
+ [
+ "T",
+ "hat"
+ ],
+ [
+ "▁m",
+ "illion"
+ ],
+ [
+ "▁mill",
+ "ion"
+ ],
+ [
+ "▁l",
+ "anguage"
+ ],
+ [
+ "▁",
+ "language"
+ ],
+ [
+ "O",
+ "S"
+ ],
+ [
+ "vi",
+ "ng"
+ ],
+ [
+ "vin",
+ "g"
+ ],
+ [
+ "v",
+ "ing"
+ ],
+ [
+ "▁м",
+ "а"
+ ],
+ [
+ "▁",
+ "ма"
+ ],
+ [
+ "▁т",
+ "о"
+ ],
+ [
+ "▁",
+ "то"
+ ],
+ [
+ ")",
+ "("
+ ],
+ [
+ "▁b",
+ "uy"
+ ],
+ [
+ "▁bu",
+ "y"
+ ],
+ [
+ ".",
+ "/"
+ ],
+ [
+ "▁.",
+ ".."
+ ],
+ [
+ "▁..",
+ "."
+ ],
+ [
+ "▁",
+ "..."
+ ],
+ [
+ "▁t",
+ "ried"
+ ],
+ [
+ "▁tr",
+ "ied"
+ ],
+ [
+ "▁tri",
+ "ed"
+ ],
+ [
+ "▁com",
+ "pl"
+ ],
+ [
+ "▁comp",
+ "l"
+ ],
+ [
+ "▁act",
+ "iv"
+ ],
+ [
+ "▁",
+ "activ"
+ ],
+ [
+ "ap",
+ "ped"
+ ],
+ [
+ "app",
+ "ed"
+ ],
+ [
+ "appe",
+ "d"
+ ],
+ [
+ "a",
+ "pped"
+ ],
+ [
+ "But",
+ "ton"
+ ],
+ [
+ "B",
+ "utton"
+ ],
+ [
+ "To",
+ "ken"
+ ],
+ [
+ "Tok",
+ "en"
+ ],
+ [
+ "T",
+ "oken"
+ ],
+ [
+ "▁prov",
+ "ided"
+ ],
+ [
+ "▁provide",
+ "d"
+ ],
+ [
+ "ib",
+ "er"
+ ],
+ [
+ "ibe",
+ "r"
+ ],
+ [
+ "i",
+ "ber"
+ ],
+ [
+ "▁c",
+ "reated"
+ ],
+ [
+ "▁cre",
+ "ated"
+ ],
+ [
+ "▁create",
+ "d"
+ ],
+ [
+ "▁creat",
+ "ed"
+ ],
+ [
+ "▁",
+ "created"
+ ],
+ [
+ "cur",
+ "ity"
+ ],
+ [
+ "c",
+ "urity"
+ ],
+ [
+ "En",
+ "d"
+ ],
+ [
+ "E",
+ "nd"
+ ],
+ [
+ "a",
+ "ł"
+ ],
+ [
+ "us",
+ "ter"
+ ],
+ [
+ "ust",
+ "er"
+ ],
+ [
+ "u",
+ "ster"
+ ],
+ [
+ "iz",
+ "ing"
+ ],
+ [
+ "izi",
+ "ng"
+ ],
+ [
+ "i",
+ "zing"
+ ],
+ [
+ "om",
+ "b"
+ ],
+ [
+ "o",
+ "mb"
+ ],
+ [
+ "▁s",
+ "ich"
+ ],
+ [
+ "▁si",
+ "ch"
+ ],
+ [
+ "▁com",
+ "pon"
+ ],
+ [
+ "▁comp",
+ "on"
+ ],
+ [
+ "▁S",
+ "ee"
+ ],
+ [
+ "▁Se",
+ "e"
+ ],
+ [
+ "▁",
+ "See"
+ ],
+ [
+ "▁u",
+ "int"
+ ],
+ [
+ "▁ui",
+ "nt"
+ ],
+ [
+ "▁",
+ "uint"
+ ],
+ [
+ "▁l",
+ "abel"
+ ],
+ [
+ "▁la",
+ "bel"
+ ],
+ [
+ "▁lab",
+ "el"
+ ],
+ [
+ "▁",
+ "label"
+ ],
+ [
+ "vo",
+ "l"
+ ],
+ [
+ "v",
+ "ol"
+ ],
+ [
+ "ó",
+ "w"
+ ],
+ [
+ "oc",
+ "ol"
+ ],
+ [
+ "oco",
+ "l"
+ ],
+ [
+ "o",
+ "col"
+ ],
+ [
+ "▁re",
+ "ceived"
+ ],
+ [
+ "▁rece",
+ "ived"
+ ],
+ [
+ "▁receive",
+ "d"
+ ],
+ [
+ "▁in",
+ "tern"
+ ],
+ [
+ "▁int",
+ "ern"
+ ],
+ [
+ "▁inter",
+ "n"
+ ],
+ [
+ "▁inte",
+ "rn"
+ ],
+ [
+ "▁",
+ "intern"
+ ],
+ [
+ "ц",
+ "е"
+ ],
+ [
+ "R",
+ "un"
+ ],
+ [
+ "▁r",
+ "oad"
+ ],
+ [
+ "▁ro",
+ "ad"
+ ],
+ [
+ "▁",
+ "road"
+ ],
+ [
+ "▁O",
+ "ct"
+ ],
+ [
+ "▁",
+ "Oct"
+ ],
+ [
+ "▁C",
+ "omp"
+ ],
+ [
+ "▁Com",
+ "p"
+ ],
+ [
+ "▁Co",
+ "mp"
+ ],
+ [
+ "▁",
+ "Comp"
+ ],
+ [
+ "▁stud",
+ "y"
+ ],
+ [
+ "▁т",
+ "е"
+ ],
+ [
+ "▁",
+ "те"
+ ],
+ [
+ "Ac",
+ "t"
+ ],
+ [
+ "A",
+ "ct"
+ ],
+ [
+ "▁t",
+ "our"
+ ],
+ [
+ "▁to",
+ "ur"
+ ],
+ [
+ "▁tou",
+ "r"
+ ],
+ [
+ "▁St",
+ "ate"
+ ],
+ [
+ "▁Stat",
+ "e"
+ ],
+ [
+ "▁Sta",
+ "te"
+ ],
+ [
+ "▁",
+ "State"
+ ],
+ [
+ "▁ad",
+ "ded"
+ ],
+ [
+ "▁add",
+ "ed"
+ ],
+ [
+ "▁",
+ "added"
+ ],
+ [
+ "htt",
+ "ps"
+ ],
+ [
+ "http",
+ "s"
+ ],
+ [
+ "st",
+ "ream"
+ ],
+ [
+ "stre",
+ "am"
+ ],
+ [
+ "▁l",
+ "ower"
+ ],
+ [
+ "▁lo",
+ "wer"
+ ],
+ [
+ "▁low",
+ "er"
+ ],
+ [
+ "▁",
+ "lower"
+ ],
+ [
+ "▁b",
+ "ox"
+ ],
+ [
+ "▁bo",
+ "x"
+ ],
+ [
+ "▁",
+ "box"
+ ],
+ [
+ "▁S",
+ "k"
+ ],
+ [
+ "▁",
+ "Sk"
+ ],
+ [
+ "▁them",
+ "selves"
+ ],
+ [
+ "▁c",
+ "ross"
+ ],
+ [
+ "▁cr",
+ "oss"
+ ],
+ [
+ "▁cro",
+ "ss"
+ ],
+ [
+ "▁",
+ "cross"
+ ],
+ [
+ "▁e",
+ "cho"
+ ],
+ [
+ "▁ec",
+ "ho"
+ ],
+ [
+ "▁",
+ "echo"
+ ],
+ [
+ "▁dev",
+ "ice"
+ ],
+ [
+ "▁",
+ "device"
+ ],
+ [
+ "pos",
+ "e"
+ ],
+ [
+ "po",
+ "se"
+ ],
+ [
+ "p",
+ "ose"
+ ],
+ [
+ "▁g",
+ "ames"
+ ],
+ [
+ "▁game",
+ "s"
+ ],
+ [
+ "▁gam",
+ "es"
+ ],
+ [
+ "▁ga",
+ "mes"
+ ],
+ [
+ "P",
+ "L"
+ ],
+ [
+ "W",
+ "indow"
+ ],
+ [
+ "is",
+ "es"
+ ],
+ [
+ "ise",
+ "s"
+ ],
+ [
+ "i",
+ "ses"
+ ],
+ [
+ "ti",
+ "tle"
+ ],
+ [
+ "tit",
+ "le"
+ ],
+ [
+ "t",
+ "itle"
+ ],
+ [
+ "St",
+ "ream"
+ ],
+ [
+ "z",
+ "t"
+ ],
+ [
+ "▁S",
+ "w"
+ ],
+ [
+ "▁",
+ "Sw"
+ ],
+ [
+ "▁r",
+ "ole"
+ ],
+ [
+ "▁ro",
+ "le"
+ ],
+ [
+ "▁",
+ "role"
+ ],
+ [
+ "ia",
+ "nt"
+ ],
+ [
+ "ian",
+ "t"
+ ],
+ [
+ "i",
+ "ant"
+ ],
+ [
+ "k",
+ "u"
+ ],
+ [
+ "se",
+ "qu"
+ ],
+ [
+ "seq",
+ "u"
+ ],
+ [
+ "s",
+ "equ"
+ ],
+ [
+ "▁l",
+ "ate"
+ ],
+ [
+ "▁la",
+ "te"
+ ],
+ [
+ "▁lat",
+ "e"
+ ],
+ [
+ "▁",
+ "late"
+ ],
+ [
+ "▁s",
+ "old"
+ ],
+ [
+ "▁so",
+ "ld"
+ ],
+ [
+ "▁sol",
+ "d"
+ ],
+ [
+ "р",
+ "я"
+ ],
+ [
+ "Com",
+ "m"
+ ],
+ [
+ "Co",
+ "mm"
+ ],
+ [
+ "C",
+ "omm"
+ ],
+ [
+ "▁en",
+ "tre"
+ ],
+ [
+ "▁ent",
+ "re"
+ ],
+ [
+ "▁entr",
+ "e"
+ ],
+ [
+ "▁",
+ "entre"
+ ],
+ [
+ "▁d",
+ "og"
+ ],
+ [
+ "▁do",
+ "g"
+ ],
+ [
+ "▁",
+ "dog"
+ ],
+ [
+ "dev",
+ "ice"
+ ],
+ [
+ "P",
+ "ar"
+ ],
+ [
+ "▁like",
+ "ly"
+ ],
+ [
+ "▁lik",
+ "ely"
+ ],
+ [
+ "▁",
+ "likely"
+ ],
+ [
+ "^{",
+ "-"
+ ],
+ [
+ "^",
+ "{-"
+ ],
+ [
+ "▁l",
+ "en"
+ ],
+ [
+ "▁le",
+ "n"
+ ],
+ [
+ "▁",
+ "len"
+ ],
+ [
+ "▁P",
+ "aul"
+ ],
+ [
+ "▁Pa",
+ "ul"
+ ],
+ [
+ "▁",
+ "Paul"
+ ],
+ [
+ "▁t",
+ "ool"
+ ],
+ [
+ "▁to",
+ "ol"
+ ],
+ [
+ "▁too",
+ "l"
+ ],
+ [
+ "▁",
+ "tool"
+ ],
+ [
+ "Of",
+ "f"
+ ],
+ [
+ "O",
+ "ff"
+ ],
+ [
+ "▁f",
+ "amil"
+ ],
+ [
+ "▁fam",
+ "il"
+ ],
+ [
+ "▁fa",
+ "mil"
+ ],
+ [
+ "▁d",
+ "raw"
+ ],
+ [
+ "▁dr",
+ "aw"
+ ],
+ [
+ "▁",
+ "draw"
+ ],
+ [
+ "ap",
+ "ping"
+ ],
+ [
+ "app",
+ "ing"
+ ],
+ [
+ "a",
+ "pping"
+ ],
+ [
+ "▁ev",
+ "ents"
+ ],
+ [
+ "▁even",
+ "ts"
+ ],
+ [
+ "▁event",
+ "s"
+ ],
+ [
+ "▁",
+ "events"
+ ],
+ [
+ "cre",
+ "t"
+ ],
+ [
+ "cr",
+ "et"
+ ],
+ [
+ "c",
+ "ret"
+ ],
+ [
+ "rou",
+ "ght"
+ ],
+ [
+ "rough",
+ "t"
+ ],
+ [
+ "r",
+ "ought"
+ ],
+ [
+ "Cont",
+ "ent"
+ ],
+ [
+ "▁soft",
+ "ware"
+ ],
+ [
+ "ri",
+ "a"
+ ],
+ [
+ "r",
+ "ia"
+ ],
+ [
+ "ms",
+ "g"
+ ],
+ [
+ "m",
+ "sg"
+ ],
+ [
+ "ga",
+ "mma"
+ ],
+ [
+ "g",
+ "amma"
+ ],
+ [
+ "▁h",
+ "ear"
+ ],
+ [
+ "▁he",
+ "ar"
+ ],
+ [
+ "Op",
+ "er"
+ ],
+ [
+ "O",
+ "per"
+ ],
+ [
+ "▁your",
+ "self"
+ ],
+ [
+ "▁yours",
+ "elf"
+ ],
+ [
+ "▁l",
+ "iter"
+ ],
+ [
+ "▁li",
+ "ter"
+ ],
+ [
+ "▁lit",
+ "er"
+ ],
+ [
+ "▁",
+ "liter"
+ ],
+ [
+ "em",
+ "p"
+ ],
+ [
+ "e",
+ "mp"
+ ],
+ [
+ "▁se",
+ "par"
+ ],
+ [
+ "▁sep",
+ "ar"
+ ],
+ [
+ "▁",
+ "separ"
+ ],
+ [
+ "▁",
+ "З"
+ ],
+ [
+ "▁t",
+ "itle"
+ ],
+ [
+ "▁tit",
+ "le"
+ ],
+ [
+ "▁ti",
+ "tle"
+ ],
+ [
+ "▁",
+ "title"
+ ],
+ [
+ "M",
+ "ethod"
+ ],
+ [
+ "math",
+ "rm"
+ ],
+ [
+ "▁s",
+ "low"
+ ],
+ [
+ "▁sl",
+ "ow"
+ ],
+ [
+ "▁R",
+ "om"
+ ],
+ [
+ "▁Ro",
+ "m"
+ ],
+ [
+ "▁",
+ "Rom"
+ ],
+ [
+ "!",
+ "!"
+ ],
+ [
+ "▁t",
+ "ax"
+ ],
+ [
+ "▁ta",
+ "x"
+ ],
+ [
+ "▁",
+ "tax"
+ ],
+ [
+ "ск",
+ "а"
+ ],
+ [
+ "с",
+ "ка"
+ ],
+ [
+ "empl",
+ "ate"
+ ],
+ [
+ "emp",
+ "late"
+ ],
+ [
+ "o",
+ "i"
+ ],
+ [
+ "▁A",
+ "rt"
+ ],
+ [
+ "▁Ar",
+ "t"
+ ],
+ [
+ "▁",
+ "Art"
+ ],
+ [
+ "f",
+ "alse"
+ ],
+ [
+ "ast",
+ "ic"
+ ],
+ [
+ "ст",
+ "ь"
+ ],
+ [
+ "с",
+ "ть"
+ ],
+ [
+ "oc",
+ "ket"
+ ],
+ [
+ "ock",
+ "et"
+ ],
+ [
+ "▁e",
+ "ns"
+ ],
+ [
+ "▁en",
+ "s"
+ ],
+ [
+ "▁",
+ "ens"
+ ],
+ [
+ "T",
+ "O"
+ ],
+ [
+ "am",
+ "ente"
+ ],
+ [
+ "ame",
+ "nte"
+ ],
+ [
+ "ament",
+ "e"
+ ],
+ [
+ "amen",
+ "te"
+ ],
+ [
+ "a",
+ "mente"
+ ],
+ [
+ "lo",
+ "cal"
+ ],
+ [
+ "loc",
+ "al"
+ ],
+ [
+ "l",
+ "ocal"
+ ],
+ [
+ "ch",
+ "ie"
+ ],
+ [
+ "chi",
+ "e"
+ ],
+ [
+ "▁p",
+ "an"
+ ],
+ [
+ "▁pa",
+ "n"
+ ],
+ [
+ "▁",
+ "pan"
+ ],
+ [
+ "ни",
+ "й"
+ ],
+ [
+ "ch",
+ "ema"
+ ],
+ [
+ "che",
+ "ma"
+ ],
+ [
+ "chem",
+ "a"
+ ],
+ [
+ "▁N",
+ "orth"
+ ],
+ [
+ "▁Nor",
+ "th"
+ ],
+ [
+ "▁Nort",
+ "h"
+ ],
+ [
+ "з",
+ "о"
+ ],
+ [
+ "▁>",
+ "="
+ ],
+ [
+ "▁",
+ ">="
+ ],
+ [
+ "A",
+ "ut"
+ ],
+ [
+ "▁d",
+ "ig"
+ ],
+ [
+ "▁di",
+ "g"
+ ],
+ [
+ "▁",
+ "dig"
+ ],
+ [
+ "▁se",
+ "ems"
+ ],
+ [
+ "▁see",
+ "ms"
+ ],
+ [
+ "▁seem",
+ "s"
+ ],
+ [
+ "▁mor",
+ "ning"
+ ],
+ [
+ "so",
+ "le"
+ ],
+ [
+ "sol",
+ "e"
+ ],
+ [
+ "s",
+ "ole"
+ ],
+ [
+ "um",
+ "er"
+ ],
+ [
+ "ume",
+ "r"
+ ],
+ [
+ "u",
+ "mer"
+ ],
+ [
+ "del",
+ "ta"
+ ],
+ [
+ "d",
+ "elta"
+ ],
+ [
+ "it",
+ "é"
+ ],
+ [
+ "i",
+ "té"
+ ],
+ [
+ "ab",
+ "ase"
+ ],
+ [
+ "aba",
+ "se"
+ ],
+ [
+ "a",
+ "base"
+ ],
+ [
+ "ra",
+ "f"
+ ],
+ [
+ "r",
+ "af"
+ ],
+ [
+ "▁ob",
+ "serv"
+ ],
+ [
+ "▁obs",
+ "erv"
+ ],
+ [
+ "▁",
+ "observ"
+ ],
+ [
+ "▁E",
+ "st"
+ ],
+ [
+ "▁Es",
+ "t"
+ ],
+ [
+ "▁",
+ "Est"
+ ],
+ [
+ "▁s",
+ "eg"
+ ],
+ [
+ "▁se",
+ "g"
+ ],
+ [
+ "▁",
+ "seg"
+ ],
+ [
+ "▁[",
+ "]"
+ ],
+ [
+ "▁",
+ "[]"
+ ],
+ [
+ "▁P",
+ "res"
+ ],
+ [
+ "▁Pr",
+ "es"
+ ],
+ [
+ "▁Pre",
+ "s"
+ ],
+ [
+ "▁",
+ "Pres"
+ ],
+ [
+ "if",
+ "ul"
+ ],
+ [
+ "i",
+ "ful"
+ ],
+ [
+ "pu",
+ "sh"
+ ],
+ [
+ "pus",
+ "h"
+ ],
+ [
+ "p",
+ "ush"
+ ],
+ [
+ "▁O",
+ "ff"
+ ],
+ [
+ "▁Of",
+ "f"
+ ],
+ [
+ "▁",
+ "Off"
+ ],
+ [
+ "ip",
+ "e"
+ ],
+ [
+ "i",
+ "pe"
+ ],
+ [
+ "at",
+ "i"
+ ],
+ [
+ "a",
+ "ti"
+ ],
+ [
+ "▁d",
+ "im"
+ ],
+ [
+ "▁di",
+ "m"
+ ],
+ [
+ "▁",
+ "dim"
+ ],
+ [
+ "ce",
+ "ed"
+ ],
+ [
+ "c",
+ "eed"
+ ],
+ [
+ "En",
+ "t"
+ ],
+ [
+ "E",
+ "nt"
+ ],
+ [
+ "__",
+ "__"
+ ],
+ [
+ "___",
+ "_"
+ ],
+ [
+ "_",
+ "___"
+ ],
+ [
+ "en",
+ "try"
+ ],
+ [
+ "ent",
+ "ry"
+ ],
+ [
+ "entr",
+ "y"
+ ],
+ [
+ "▁f",
+ "ight"
+ ],
+ [
+ "▁fig",
+ "ht"
+ ],
+ [
+ "▁fi",
+ "ght"
+ ],
+ [
+ "▁c",
+ "red"
+ ],
+ [
+ "▁cre",
+ "d"
+ ],
+ [
+ "▁cr",
+ "ed"
+ ],
+ [
+ "▁",
+ "cred"
+ ],
+ [
+ "▁O",
+ "R"
+ ],
+ [
+ "▁",
+ "OR"
+ ],
+ [
+ "▁D",
+ "ep"
+ ],
+ [
+ "▁De",
+ "p"
+ ],
+ [
+ "▁",
+ "Dep"
+ ],
+ [
+ "$",
+ "{"
+ ],
+ [
+ "ле",
+ "н"
+ ],
+ [
+ "л",
+ "ен"
+ ],
+ [
+ "Creat",
+ "e"
+ ],
+ [
+ "C",
+ "reate"
+ ],
+ [
+ "▁Apr",
+ "il"
+ ],
+ [
+ "▁Ap",
+ "ril"
+ ],
+ [
+ "min",
+ "istr"
+ ],
+ [
+ "F",
+ "L"
+ ],
+ [
+ "▁A",
+ "p"
+ ],
+ [
+ "▁",
+ "Ap"
+ ],
+ [
+ "▁H",
+ "ere"
+ ],
+ [
+ "▁He",
+ "re"
+ ],
+ [
+ "▁Her",
+ "e"
+ ],
+ [
+ "▁",
+ "Here"
+ ],
+ [
+ "priv",
+ "ate"
+ ],
+ [
+ "p",
+ "rivate"
+ ],
+ [
+ "In",
+ "stance"
+ ],
+ [
+ "Inst",
+ "ance"
+ ],
+ [
+ "ie",
+ "m"
+ ],
+ [
+ "i",
+ "em"
+ ],
+ [
+ "▁off",
+ "ice"
+ ],
+ [
+ "▁offic",
+ "e"
+ ],
+ [
+ "▁th",
+ "ird"
+ ],
+ [
+ "▁",
+ "third"
+ ],
+ [
+ "▁up",
+ "date"
+ ],
+ [
+ "▁",
+ "update"
+ ],
+ [
+ "Lin",
+ "e"
+ ],
+ [
+ "Li",
+ "ne"
+ ],
+ [
+ "L",
+ "ine"
+ ],
+ [
+ "ta",
+ "g"
+ ],
+ [
+ "t",
+ "ag"
+ ],
+ [
+ "▁e",
+ "specially"
+ ],
+ [
+ "▁espec",
+ "ially"
+ ],
+ [
+ "▁especial",
+ "ly"
+ ],
+ [
+ "▁",
+ "especially"
+ ],
+ [
+ "▁го",
+ "да"
+ ],
+ [
+ "▁год",
+ "а"
+ ],
+ [
+ "▁c",
+ "u"
+ ],
+ [
+ "▁",
+ "cu"
+ ],
+ [
+ "▁k",
+ "ill"
+ ],
+ [
+ "▁kil",
+ "l"
+ ],
+ [
+ "▁ki",
+ "ll"
+ ],
+ [
+ "▁",
+ "kill"
+ ],
+ [
+ "au",
+ "ght"
+ ],
+ [
+ "augh",
+ "t"
+ ],
+ [
+ "aug",
+ "ht"
+ ],
+ [
+ "▁s",
+ "we"
+ ],
+ [
+ "▁sw",
+ "e"
+ ],
+ [
+ "Option",
+ "s"
+ ],
+ [
+ "Opt",
+ "ions"
+ ],
+ [
+ "O",
+ "ptions"
+ ],
+ [
+ "I",
+ "M"
+ ],
+ [
+ "C",
+ "C"
+ ],
+ [
+ "▁com",
+ "pan"
+ ],
+ [
+ "▁comp",
+ "an"
+ ],
+ [
+ "ju",
+ "st"
+ ],
+ [
+ "j",
+ "ust"
+ ],
+ [
+ "▁Wh",
+ "ile"
+ ],
+ [
+ "▁",
+ "While"
+ ],
+ [
+ "iz",
+ "er"
+ ],
+ [
+ "ize",
+ "r"
+ ],
+ [
+ "i",
+ "zer"
+ ],
+ [
+ "▁м",
+ "о"
+ ],
+ [
+ "▁",
+ "мо"
+ ],
+ [
+ "к",
+ "е"
+ ],
+ [
+ "▁a",
+ "uto"
+ ],
+ [
+ "▁aut",
+ "o"
+ ],
+ [
+ "▁au",
+ "to"
+ ],
+ [
+ "▁",
+ "auto"
+ ],
+ [
+ "▁b",
+ "and"
+ ],
+ [
+ "▁ban",
+ "d"
+ ],
+ [
+ "▁ba",
+ "nd"
+ ],
+ [
+ "▁",
+ "band"
+ ],
+ [
+ "ме",
+ "н"
+ ],
+ [
+ "м",
+ "ен"
+ ],
+ [
+ "ique",
+ "s"
+ ],
+ [
+ "iqu",
+ "es"
+ ],
+ [
+ "iq",
+ "ues"
+ ],
+ [
+ "i",
+ "ques"
+ ],
+ [
+ "▁p",
+ "le"
+ ],
+ [
+ "▁pl",
+ "e"
+ ],
+ [
+ "▁",
+ "ple"
+ ],
+ [
+ "N",
+ "O"
+ ],
+ [
+ "▁O",
+ "F"
+ ],
+ [
+ "▁",
+ "OF"
+ ],
+ [
+ "▁s",
+ "ong"
+ ],
+ [
+ "▁so",
+ "ng"
+ ],
+ [
+ "▁son",
+ "g"
+ ],
+ [
+ "▁A",
+ "cc"
+ ],
+ [
+ "▁Ac",
+ "c"
+ ],
+ [
+ "▁",
+ "Acc"
+ ],
+ [
+ "EX",
+ "T"
+ ],
+ [
+ "E",
+ "XT"
+ ],
+ [
+ "en",
+ "sor"
+ ],
+ [
+ "ens",
+ "or"
+ ],
+ [
+ "enso",
+ "r"
+ ],
+ [
+ "in",
+ "ing"
+ ],
+ [
+ "ini",
+ "ng"
+ ],
+ [
+ "i",
+ "ning"
+ ],
+ [
+ "▁l",
+ "at"
+ ],
+ [
+ "▁la",
+ "t"
+ ],
+ [
+ "▁",
+ "lat"
+ ],
+ [
+ "bi",
+ "g"
+ ],
+ [
+ "b",
+ "ig"
+ ],
+ [
+ "▁K",
+ "ing"
+ ],
+ [
+ "▁Ki",
+ "ng"
+ ],
+ [
+ "▁Kin",
+ "g"
+ ],
+ [
+ "▁",
+ "King"
+ ],
+ [
+ "oc",
+ "h"
+ ],
+ [
+ "o",
+ "ch"
+ ],
+ [
+ "s",
+ "i"
+ ],
+ [
+ "▁H",
+ "ist"
+ ],
+ [
+ "▁His",
+ "t"
+ ],
+ [
+ "▁Hi",
+ "st"
+ ],
+ [
+ "▁",
+ "Hist"
+ ],
+ [
+ "▁qu",
+ "ality"
+ ],
+ [
+ "▁qual",
+ "ity"
+ ],
+ [
+ "▁",
+ "quality"
+ ],
+ [
+ "mod",
+ "e"
+ ],
+ [
+ "mo",
+ "de"
+ ],
+ [
+ "m",
+ "ode"
+ ],
+ [
+ "▁op",
+ "portun"
+ ],
+ [
+ "▁would",
+ "n"
+ ],
+ [
+ ":*",
+ "*"
+ ],
+ [
+ ":",
+ "**"
+ ],
+ [
+ "out",
+ "put"
+ ],
+ [
+ "▁fe",
+ "et"
+ ],
+ [
+ "▁fee",
+ "t"
+ ],
+ [
+ "▁m",
+ "is"
+ ],
+ [
+ "▁mi",
+ "s"
+ ],
+ [
+ "d",
+ "f"
+ ],
+ [
+ "ag",
+ "ing"
+ ],
+ [
+ "agi",
+ "ng"
+ ],
+ [
+ "a",
+ "ging"
+ ],
+ [
+ "▁м",
+ "е"
+ ],
+ [
+ "▁",
+ "ме"
+ ],
+ [
+ "▁t",
+ "ro"
+ ],
+ [
+ "▁tr",
+ "o"
+ ],
+ [
+ "▁d",
+ "efined"
+ ],
+ [
+ "▁def",
+ "ined"
+ ],
+ [
+ "▁define",
+ "d"
+ ],
+ [
+ "▁defin",
+ "ed"
+ ],
+ [
+ "▁",
+ "defined"
+ ],
+ [
+ "▁re",
+ "view"
+ ],
+ [
+ "▁rev",
+ "iew"
+ ],
+ [
+ "▁",
+ "review"
+ ],
+ [
+ "▁F",
+ "il"
+ ],
+ [
+ "▁Fi",
+ "l"
+ ],
+ [
+ "▁",
+ "Fil"
+ ],
+ [
+ ">",
+ ">"
+ ],
+ [
+ "▁pr",
+ "incip"
+ ],
+ [
+ "▁prin",
+ "cip"
+ ],
+ [
+ "Bas",
+ "e"
+ ],
+ [
+ "B",
+ "ase"
+ ],
+ [
+ "di",
+ "ct"
+ ],
+ [
+ "d",
+ "ict"
+ ],
+ [
+ "ve",
+ "rage"
+ ],
+ [
+ "ver",
+ "age"
+ ],
+ [
+ "ic",
+ "ient"
+ ],
+ [
+ "ici",
+ "ent"
+ ],
+ [
+ "I",
+ "F"
+ ],
+ [
+ "▁h",
+ "it"
+ ],
+ [
+ "▁hi",
+ "t"
+ ],
+ [
+ "▁",
+ "hit"
+ ],
+ [
+ "Pag",
+ "e"
+ ],
+ [
+ "P",
+ "age"
+ ],
+ [
+ "▁p",
+ "erm"
+ ],
+ [
+ "▁per",
+ "m"
+ ],
+ [
+ "▁pe",
+ "rm"
+ ],
+ [
+ "▁",
+ "perm"
+ ],
+ [
+ "ce",
+ "l"
+ ],
+ [
+ "c",
+ "el"
+ ],
+ [
+ "í",
+ "t"
+ ],
+ [
+ "▁ex",
+ "press"
+ ],
+ [
+ "▁exp",
+ "ress"
+ ],
+ [
+ "▁expr",
+ "ess"
+ ],
+ [
+ "▁",
+ "express"
+ ],
+ [
+ "▁ind",
+ "ic"
+ ],
+ [
+ "▁Se",
+ "ptember"
+ ],
+ [
+ "▁Sept",
+ "ember"
+ ],
+ [
+ "im",
+ "age"
+ ],
+ [
+ "ima",
+ "ge"
+ ],
+ [
+ "imag",
+ "e"
+ ],
+ [
+ "▁product",
+ "s"
+ ],
+ [
+ "▁",
+ "products"
+ ],
+ [
+ "▁m",
+ "edia"
+ ],
+ [
+ "▁med",
+ "ia"
+ ],
+ [
+ "▁medi",
+ "a"
+ ],
+ [
+ "▁",
+ "media"
+ ],
+ [
+ "ch",
+ "ange"
+ ],
+ [
+ "chan",
+ "ge"
+ ],
+ [
+ "ig",
+ "ger"
+ ],
+ [
+ "igg",
+ "er"
+ ],
+ [
+ "▁s",
+ "end"
+ ],
+ [
+ "▁se",
+ "nd"
+ ],
+ [
+ "▁sen",
+ "d"
+ ],
+ [
+ "▁",
+ "send"
+ ],
+ [
+ "la",
+ "st"
+ ],
+ [
+ "las",
+ "t"
+ ],
+ [
+ "l",
+ "ast"
+ ],
+ [
+ "min",
+ "g"
+ ],
+ [
+ "mi",
+ "ng"
+ ],
+ [
+ "m",
+ "ing"
+ ],
+ [
+ "p",
+ "a"
+ ],
+ [
+ "ua",
+ "ry"
+ ],
+ [
+ "uar",
+ "y"
+ ],
+ [
+ "u",
+ "ary"
+ ],
+ [
+ "▁spe",
+ "ak"
+ ],
+ [
+ "ны",
+ "й"
+ ],
+ [
+ "щ",
+ "е"
+ ],
+ [
+ "ys",
+ "is"
+ ],
+ [
+ "y",
+ "sis"
+ ],
+ [
+ "ly",
+ "ing"
+ ],
+ [
+ "l",
+ "ying"
+ ],
+ [
+ "▁",
+ "ч"
+ ],
+ [
+ "li",
+ "ke"
+ ],
+ [
+ "lik",
+ "e"
+ ],
+ [
+ "l",
+ "ike"
+ ],
+ [
+ "р",
+ "ы"
+ ],
+ [
+ "в",
+ "і"
+ ],
+ [
+ "▁M",
+ "ich"
+ ],
+ [
+ "▁Mic",
+ "h"
+ ],
+ [
+ "▁Mi",
+ "ch"
+ ],
+ [
+ "M",
+ "O"
+ ],
+ [
+ "▁J",
+ "ah"
+ ],
+ [
+ "▁Ja",
+ "h"
+ ],
+ [
+ "ens",
+ "ive"
+ ],
+ [
+ "▁sh",
+ "are"
+ ],
+ [
+ "▁shar",
+ "e"
+ ],
+ [
+ "▁sha",
+ "re"
+ ],
+ [
+ "▁",
+ "share"
+ ],
+ [
+ "▁develop",
+ "ment"
+ ],
+ [
+ "C",
+ "P"
+ ],
+ [
+ "sp",
+ "ec"
+ ],
+ [
+ "spe",
+ "c"
+ ],
+ [
+ "s",
+ "pec"
+ ],
+ [
+ "▁f",
+ "ast"
+ ],
+ [
+ "▁fa",
+ "st"
+ ],
+ [
+ "▁",
+ "fast"
+ ],
+ [
+ "he",
+ "t"
+ ],
+ [
+ "h",
+ "et"
+ ],
+ [
+ "H",
+ "O"
+ ],
+ [
+ "▁part",
+ "icip"
+ ],
+ [
+ "▁partic",
+ "ip"
+ ],
+ [
+ "▁parti",
+ "cip"
+ ],
+ [
+ "Bl",
+ "ock"
+ ],
+ [
+ "Blo",
+ "ck"
+ ],
+ [
+ "B",
+ "lock"
+ ],
+ [
+ "▁vi",
+ "ol"
+ ],
+ [
+ "▁fr",
+ "ame"
+ ],
+ [
+ "▁fra",
+ "me"
+ ],
+ [
+ "▁fram",
+ "e"
+ ],
+ [
+ "▁",
+ "frame"
+ ],
+ [
+ "▁qu",
+ "al"
+ ],
+ [
+ "▁q",
+ "ual"
+ ],
+ [
+ "▁",
+ "qual"
+ ],
+ [
+ "tr",
+ "e"
+ ],
+ [
+ "t",
+ "re"
+ ],
+ [
+ "▁",
+ "Ф"
+ ],
+ [
+ "▁to",
+ "ward"
+ ],
+ [
+ "▁tow",
+ "ard"
+ ],
+ [
+ "f",
+ "g"
+ ],
+ [
+ "Bo",
+ "x"
+ ],
+ [
+ "B",
+ "ox"
+ ],
+ [
+ "Col",
+ "umn"
+ ],
+ [
+ "▁mil",
+ "it"
+ ],
+ [
+ "▁mi",
+ "lit"
+ ],
+ [
+ "▁M",
+ "arch"
+ ],
+ [
+ "▁Mar",
+ "ch"
+ ],
+ [
+ "▁Marc",
+ "h"
+ ],
+ [
+ "▁var",
+ "ious"
+ ],
+ [
+ "▁vari",
+ "ous"
+ ],
+ [
+ "pa",
+ "ss"
+ ],
+ [
+ "pas",
+ "s"
+ ],
+ [
+ "p",
+ "ass"
+ ],
+ [
+ "▁P",
+ "ark"
+ ],
+ [
+ "▁Par",
+ "k"
+ ],
+ [
+ "▁B",
+ "en"
+ ],
+ [
+ "▁Be",
+ "n"
+ ],
+ [
+ "▁",
+ "Ben"
+ ],
+ [
+ "Fr",
+ "ame"
+ ],
+ [
+ "▁n",
+ "ormal"
+ ],
+ [
+ "▁nor",
+ "mal"
+ ],
+ [
+ "▁norm",
+ "al"
+ ],
+ [
+ "▁",
+ "normal"
+ ],
+ [
+ "op",
+ "en"
+ ],
+ [
+ "ope",
+ "n"
+ ],
+ [
+ "o",
+ "pen"
+ ],
+ [
+ "p",
+ "x"
+ ],
+ [
+ "▁ph",
+ "one"
+ ],
+ [
+ "▁",
+ "phone"
+ ],
+ [
+ "▁E",
+ "ven"
+ ],
+ [
+ "▁Ev",
+ "en"
+ ],
+ [
+ "▁Eve",
+ "n"
+ ],
+ [
+ "▁",
+ "Even"
+ ],
+ [
+ "▁m",
+ "a"
+ ],
+ [
+ "▁",
+ "ma"
+ ],
+ [
+ "ibr",
+ "ary"
+ ],
+ [
+ "St",
+ "art"
+ ],
+ [
+ "Star",
+ "t"
+ ],
+ [
+ "id",
+ "den"
+ ],
+ [
+ "idd",
+ "en"
+ ],
+ [
+ "rh",
+ "o"
+ ],
+ [
+ "r",
+ "ho"
+ ],
+ [
+ "gr",
+ "aph"
+ ],
+ [
+ "gra",
+ "ph"
+ ],
+ [
+ "g",
+ "raph"
+ ],
+ [
+ "ac",
+ "ing"
+ ],
+ [
+ "aci",
+ "ng"
+ ],
+ [
+ "a",
+ "cing"
+ ],
+ [
+ "'",
+ "."
+ ],
+ [
+ "ar",
+ "ter"
+ ],
+ [
+ "art",
+ "er"
+ ],
+ [
+ "arte",
+ "r"
+ ],
+ [
+ "me",
+ "s"
+ ],
+ [
+ "m",
+ "es"
+ ],
+ [
+ "in",
+ "st"
+ ],
+ [
+ "ins",
+ "t"
+ ],
+ [
+ "▁i",
+ "r"
+ ],
+ [
+ "▁",
+ "ir"
+ ],
+ [
+ "act",
+ "ive"
+ ],
+ [
+ "activ",
+ "e"
+ ],
+ [
+ "▁f",
+ "em"
+ ],
+ [
+ "▁fe",
+ "m"
+ ],
+ [
+ "▁",
+ "fem"
+ ],
+ [
+ "▁m",
+ "oved"
+ ],
+ [
+ "▁mov",
+ "ed"
+ ],
+ [
+ "▁move",
+ "d"
+ ],
+ [
+ "▁mo",
+ "ved"
+ ],
+ [
+ "▁st",
+ "ore"
+ ],
+ [
+ "▁stor",
+ "e"
+ ],
+ [
+ "▁sto",
+ "re"
+ ],
+ [
+ "▁",
+ "store"
+ ],
+ [
+ "▁p",
+ "rice"
+ ],
+ [
+ "▁pr",
+ "ice"
+ ],
+ [
+ "▁pri",
+ "ce"
+ ],
+ [
+ "▁",
+ "price"
+ ],
+ [
+ "\")",
+ "."
+ ],
+ [
+ "\"",
+ ")."
+ ],
+ [
+ "ber",
+ "g"
+ ],
+ [
+ "be",
+ "rg"
+ ],
+ [
+ "b",
+ "erg"
+ ],
+ [
+ "▁n",
+ "ov"
+ ],
+ [
+ "▁no",
+ "v"
+ ],
+ [
+ "▁",
+ "nov"
+ ],
+ [
+ "▁c",
+ "ard"
+ ],
+ [
+ "▁car",
+ "d"
+ ],
+ [
+ "▁ca",
+ "rd"
+ ],
+ [
+ "▁",
+ "card"
+ ],
+ [
+ "el",
+ "low"
+ ],
+ [
+ "ell",
+ "ow"
+ ],
+ [
+ "ello",
+ "w"
+ ],
+ [
+ "▁part",
+ "y"
+ ],
+ [
+ "▁par",
+ "ty"
+ ],
+ [
+ "▁",
+ "party"
+ ],
+ [
+ "▁M",
+ "or"
+ ],
+ [
+ "▁Mo",
+ "r"
+ ],
+ [
+ "ae",
+ "l"
+ ],
+ [
+ "a",
+ "el"
+ ],
+ [
+ "▁per",
+ "cent"
+ ],
+ [
+ "▁",
+ "percent"
+ ],
+ [
+ "▁tr",
+ "aining"
+ ],
+ [
+ "▁tra",
+ "ining"
+ ],
+ [
+ "▁train",
+ "ing"
+ ],
+ [
+ "▁",
+ "training"
+ ],
+ [
+ "▁in",
+ "g"
+ ],
+ [
+ "▁i",
+ "ng"
+ ],
+ [
+ "▁",
+ "ing"
+ ],
+ [
+ "im",
+ "er"
+ ],
+ [
+ "ime",
+ "r"
+ ],
+ [
+ "i",
+ "mer"
+ ],
+ [
+ "▁S",
+ "am"
+ ],
+ [
+ "▁Sa",
+ "m"
+ ],
+ [
+ "▁",
+ "Sam"
+ ],
+ [
+ "Def",
+ "ault"
+ ],
+ [
+ "▁f",
+ "uck"
+ ],
+ [
+ "▁fu",
+ "ck"
+ ],
+ [
+ "▁com",
+ "plete"
+ ],
+ [
+ "▁comp",
+ "lete"
+ ],
+ [
+ "▁complet",
+ "e"
+ ],
+ [
+ "▁compl",
+ "ete"
+ ],
+ [
+ "▁",
+ "complete"
+ ],
+ [
+ "ui",
+ "d"
+ ],
+ [
+ "u",
+ "id"
+ ],
+ [
+ "▁det",
+ "ails"
+ ],
+ [
+ "▁detail",
+ "s"
+ ],
+ [
+ "▁",
+ "details"
+ ],
+ [
+ "▁l",
+ "ed"
+ ],
+ [
+ "▁le",
+ "d"
+ ],
+ [
+ "▁",
+ "led"
+ ],
+ [
+ "Po",
+ "int"
+ ],
+ [
+ "P",
+ "oint"
+ ],
+ [
+ "▁C",
+ "ount"
+ ],
+ [
+ "▁Co",
+ "unt"
+ ],
+ [
+ "▁Coun",
+ "t"
+ ],
+ [
+ "▁Cou",
+ "nt"
+ ],
+ [
+ "▁",
+ "Count"
+ ],
+ [
+ "▁reg",
+ "ard"
+ ],
+ [
+ "z",
+ "o"
+ ],
+ [
+ "▁B",
+ "ro"
+ ],
+ [
+ "▁Br",
+ "o"
+ ],
+ [
+ "▁",
+ "Bro"
+ ],
+ [
+ "▁rec",
+ "ogn"
+ ],
+ [
+ "▁",
+ "recogn"
+ ],
+ [
+ "▁H",
+ "ol"
+ ],
+ [
+ "▁Ho",
+ "l"
+ ],
+ [
+ "▁",
+ "Hol"
+ ],
+ [
+ "U",
+ "M"
+ ],
+ [
+ "el",
+ "ement"
+ ],
+ [
+ "ele",
+ "ment"
+ ],
+ [
+ "elem",
+ "ent"
+ ],
+ [
+ "e",
+ "lement"
+ ],
+ [
+ "Mod",
+ "e"
+ ],
+ [
+ "Mo",
+ "de"
+ ],
+ [
+ "M",
+ "ode"
+ ],
+ [
+ "▁ex",
+ "am"
+ ],
+ [
+ "▁E",
+ "X"
+ ],
+ [
+ "▁",
+ "EX"
+ ],
+ [
+ "Im",
+ "age"
+ ],
+ [
+ "ver",
+ "se"
+ ],
+ [
+ "vers",
+ "e"
+ ],
+ [
+ "ri",
+ "ter"
+ ],
+ [
+ "rit",
+ "er"
+ ],
+ [
+ "rite",
+ "r"
+ ],
+ [
+ "r",
+ "iter"
+ ],
+ [
+ "so",
+ "ft"
+ ],
+ [
+ "s",
+ "oft"
+ ],
+ [
+ "▁int",
+ "rodu"
+ ],
+ [
+ "▁intro",
+ "du"
+ ],
+ [
+ "▁sur",
+ "pr"
+ ],
+ [
+ "Buf",
+ "fer"
+ ],
+ [
+ "Buff",
+ "er"
+ ],
+ [
+ "B",
+ "uffer"
+ ],
+ [
+ "le",
+ "ctor"
+ ],
+ [
+ "lect",
+ "or"
+ ],
+ [
+ "l",
+ "ector"
+ ],
+ [
+ "ar",
+ "en"
+ ],
+ [
+ "are",
+ "n"
+ ],
+ [
+ "a",
+ "ren"
+ ],
+ [
+ "an",
+ "ged"
+ ],
+ [
+ "ang",
+ "ed"
+ ],
+ [
+ "ange",
+ "d"
+ ],
+ [
+ "▁P",
+ "at"
+ ],
+ [
+ "▁Pa",
+ "t"
+ ],
+ [
+ "▁",
+ "Pat"
+ ],
+ [
+ "▁P",
+ "al"
+ ],
+ [
+ "▁Pa",
+ "l"
+ ],
+ [
+ "▁",
+ "Pal"
+ ],
+ [
+ "▁con",
+ "tr"
+ ],
+ [
+ "▁cont",
+ "r"
+ ],
+ [
+ "▁",
+ "contr"
+ ],
+ [
+ "Hand",
+ "ler"
+ ],
+ [
+ "Handle",
+ "r"
+ ],
+ [
+ "▁fe",
+ "atures"
+ ],
+ [
+ "▁feature",
+ "s"
+ ],
+ [
+ "▁feat",
+ "ures"
+ ],
+ [
+ "▁",
+ "features"
+ ],
+ [
+ "ip",
+ "le"
+ ],
+ [
+ "i",
+ "ple"
+ ],
+ [
+ "▁C",
+ "ON"
+ ],
+ [
+ "▁CO",
+ "N"
+ ],
+ [
+ "▁",
+ "CON"
+ ],
+ [
+ "Fi",
+ "l"
+ ],
+ [
+ "F",
+ "il"
+ ],
+ [
+ "▁P",
+ "ort"
+ ],
+ [
+ "▁Po",
+ "rt"
+ ],
+ [
+ "▁Por",
+ "t"
+ ],
+ [
+ "▁",
+ "Port"
+ ],
+ [
+ "▁th",
+ "inking"
+ ],
+ [
+ "▁think",
+ "ing"
+ ],
+ [
+ "▁thin",
+ "king"
+ ],
+ [
+ "do",
+ "c"
+ ],
+ [
+ "d",
+ "oc"
+ ],
+ [
+ "we",
+ "r"
+ ],
+ [
+ "w",
+ "er"
+ ],
+ [
+ "▁work",
+ "ed"
+ ],
+ [
+ "▁wor",
+ "ked"
+ ],
+ [
+ "P",
+ "C"
+ ],
+ [
+ "c",
+ "m"
+ ],
+ [
+ "da",
+ "t"
+ ],
+ [
+ "d",
+ "at"
+ ],
+ [
+ "PR",
+ "O"
+ ],
+ [
+ "P",
+ "RO"
+ ],
+ [
+ "▁E",
+ "very"
+ ],
+ [
+ "▁Ev",
+ "ery"
+ ],
+ [
+ "▁Ever",
+ "y"
+ ],
+ [
+ "▁Eve",
+ "ry"
+ ],
+ [
+ "▁",
+ "Every"
+ ],
+ [
+ "▁e",
+ "ra"
+ ],
+ [
+ "▁er",
+ "a"
+ ],
+ [
+ "▁",
+ "era"
+ ],
+ [
+ "▁F",
+ "irst"
+ ],
+ [
+ "▁",
+ "First"
+ ],
+ [
+ "g",
+ "n"
+ ],
+ [
+ "▁im",
+ "medi"
+ ],
+ [
+ "▁imm",
+ "edi"
+ ],
+ [
+ "ov",
+ "ember"
+ ],
+ [
+ "ove",
+ "mber"
+ ],
+ [
+ "ap",
+ "an"
+ ],
+ [
+ "apa",
+ "n"
+ ],
+ [
+ "a",
+ "pan"
+ ],
+ [
+ "▁ex",
+ "tra"
+ ],
+ [
+ "▁ext",
+ "ra"
+ ],
+ [
+ "▁extr",
+ "a"
+ ],
+ [
+ "▁",
+ "extra"
+ ],
+ [
+ "▁s",
+ "ection"
+ ],
+ [
+ "▁se",
+ "ction"
+ ],
+ [
+ "▁sect",
+ "ion"
+ ],
+ [
+ "▁",
+ "section"
+ ],
+ [
+ "▁J",
+ "une"
+ ],
+ [
+ "▁Jun",
+ "e"
+ ],
+ [
+ "▁Ju",
+ "ne"
+ ],
+ [
+ "▁v",
+ "ia"
+ ],
+ [
+ "▁vi",
+ "a"
+ ],
+ [
+ "▁",
+ "via"
+ ],
+ [
+ "▁g",
+ "one"
+ ],
+ [
+ "▁go",
+ "ne"
+ ],
+ [
+ "com",
+ "e"
+ ],
+ [
+ "co",
+ "me"
+ ],
+ [
+ "c",
+ "ome"
+ ],
+ [
+ "▁s",
+ "tri"
+ ],
+ [
+ "▁st",
+ "ri"
+ ],
+ [
+ "▁str",
+ "i"
+ ],
+ [
+ "▁",
+ "stri"
+ ],
+ [
+ "^",
+ "\\"
+ ],
+ [
+ "ant",
+ "ly"
+ ],
+ [
+ "▁ar",
+ "ch"
+ ],
+ [
+ "▁arc",
+ "h"
+ ],
+ [
+ "▁",
+ "arch"
+ ],
+ [
+ "S",
+ "ource"
+ ],
+ [
+ "▁con",
+ "v"
+ ],
+ [
+ "▁co",
+ "nv"
+ ],
+ [
+ "▁",
+ "conv"
+ ],
+ [
+ "▁L",
+ "ondon"
+ ],
+ [
+ "▁Lond",
+ "on"
+ ],
+ [
+ "▁",
+ "London"
+ ],
+ [
+ "Num",
+ "ber"
+ ],
+ [
+ "N",
+ "umber"
+ ],
+ [
+ "▁quest",
+ "ions"
+ ],
+ [
+ "▁question",
+ "s"
+ ],
+ [
+ "an",
+ "did"
+ ],
+ [
+ "and",
+ "id"
+ ],
+ [
+ "▁play",
+ "ed"
+ ],
+ [
+ "en",
+ "v"
+ ],
+ [
+ "e",
+ "nv"
+ ],
+ [
+ "▁Sch",
+ "ool"
+ ],
+ [
+ "▁nat",
+ "ural"
+ ],
+ [
+ "▁natur",
+ "al"
+ ],
+ [
+ "▁",
+ "natural"
+ ],
+ [
+ "ca",
+ "n"
+ ],
+ [
+ "c",
+ "an"
+ ],
+ [
+ "▁ne",
+ "ws"
+ ],
+ [
+ "▁new",
+ "s"
+ ],
+ [
+ "▁",
+ "news"
+ ],
+ [
+ "D",
+ "R"
+ ],
+ [
+ "▁c",
+ "hall"
+ ],
+ [
+ "▁ch",
+ "all"
+ ],
+ [
+ "▁cha",
+ "ll"
+ ],
+ [
+ "▁S",
+ "oc"
+ ],
+ [
+ "▁So",
+ "c"
+ ],
+ [
+ "▁",
+ "э"
+ ],
+ [
+ "▁att",
+ "empt"
+ ],
+ [
+ "*",
+ "}"
+ ],
+ [
+ "N",
+ "ull"
+ ],
+ [
+ "ro",
+ "te"
+ ],
+ [
+ "rot",
+ "e"
+ ],
+ [
+ "r",
+ "ote"
+ ],
+ [
+ "▁b",
+ "i"
+ ],
+ [
+ "▁",
+ "bi"
+ ],
+ [
+ "▁wr",
+ "itten"
+ ],
+ [
+ "▁writ",
+ "ten"
+ ],
+ [
+ "▁",
+ "written"
+ ],
+ [
+ "▁bl",
+ "ood"
+ ],
+ [
+ "▁blo",
+ "od"
+ ],
+ [
+ "▁happ",
+ "ened"
+ ],
+ [
+ "▁happen",
+ "ed"
+ ],
+ [
+ "▁c",
+ "ause"
+ ],
+ [
+ "▁caus",
+ "e"
+ ],
+ [
+ "▁ca",
+ "use"
+ ],
+ [
+ "as",
+ "hing"
+ ],
+ [
+ "ash",
+ "ing"
+ ],
+ [
+ "ashi",
+ "ng"
+ ],
+ [
+ "▁Will",
+ "iam"
+ ],
+ [
+ "ad",
+ "em"
+ ],
+ [
+ "ade",
+ "m"
+ ],
+ [
+ "a",
+ "dem"
+ ],
+ [
+ "▁b",
+ "rought"
+ ],
+ [
+ "▁br",
+ "ought"
+ ],
+ [
+ "▁dis",
+ "play"
+ ],
+ [
+ "▁displ",
+ "ay"
+ ],
+ [
+ "▁disp",
+ "lay"
+ ],
+ [
+ "▁",
+ "display"
+ ],
+ [
+ "im",
+ "a"
+ ],
+ [
+ "i",
+ "ma"
+ ],
+ [
+ "▁fin",
+ "ally"
+ ],
+ [
+ "▁final",
+ "ly"
+ ],
+ [
+ "ta",
+ "b"
+ ],
+ [
+ "t",
+ "ab"
+ ],
+ [
+ "▁return",
+ "ed"
+ ],
+ [
+ "ны",
+ "х"
+ ],
+ [
+ "ni",
+ "e"
+ ],
+ [
+ "n",
+ "ie"
+ ],
+ [
+ "▁",
+ "q"
+ ],
+ [
+ "▁h",
+ "ers"
+ ],
+ [
+ "▁he",
+ "rs"
+ ],
+ [
+ "▁her",
+ "s"
+ ],
+ [
+ "▁P",
+ "re"
+ ],
+ [
+ "▁Pr",
+ "e"
+ ],
+ [
+ "▁",
+ "Pre"
+ ],
+ [
+ "▁d",
+ "ou"
+ ],
+ [
+ "▁do",
+ "u"
+ ],
+ [
+ "buf",
+ "fer"
+ ],
+ [
+ "buff",
+ "er"
+ ],
+ [
+ "b",
+ "uffer"
+ ],
+ [
+ "▁eff",
+ "ort"
+ ],
+ [
+ "ain",
+ "e"
+ ],
+ [
+ "ai",
+ "ne"
+ ],
+ [
+ "a",
+ "ine"
+ ],
+ [
+ "x",
+ "y"
+ ],
+ [
+ "▁his",
+ "tor"
+ ],
+ [
+ "▁hist",
+ "or"
+ ],
+ [
+ "en",
+ "u"
+ ],
+ [
+ "e",
+ "nu"
+ ],
+ [
+ "▁ar",
+ "riv"
+ ],
+ [
+ "▁arr",
+ "iv"
+ ],
+ [
+ "▁D",
+ "em"
+ ],
+ [
+ "▁De",
+ "m"
+ ],
+ [
+ "▁",
+ "Dem"
+ ],
+ [
+ "▁f",
+ "avor"
+ ],
+ [
+ "▁fa",
+ "vor"
+ ],
+ [
+ "▁fav",
+ "or"
+ ],
+ [
+ "▁hand",
+ "le"
+ ],
+ [
+ "▁",
+ "handle"
+ ],
+ [
+ "SE",
+ "T"
+ ],
+ [
+ "S",
+ "ET"
+ ],
+ [
+ "▁P",
+ "ublic"
+ ],
+ [
+ "▁Pub",
+ "lic"
+ ],
+ [
+ "▁Pu",
+ "blic"
+ ],
+ [
+ "▁",
+ "Public"
+ ],
+ [
+ "ru",
+ "pt"
+ ],
+ [
+ "rup",
+ "t"
+ ],
+ [
+ "r",
+ "upt"
+ ],
+ [
+ "▁u",
+ "r"
+ ],
+ [
+ "▁",
+ "ur"
+ ],
+ [
+ "▁for",
+ "ce"
+ ],
+ [
+ "▁",
+ "force"
+ ],
+ [
+ "▁é",
+ "s"
+ ],
+ [
+ "▁",
+ "és"
+ ],
+ [
+ "ub",
+ "e"
+ ],
+ [
+ "u",
+ "be"
+ ],
+ [
+ "Pr",
+ "e"
+ ],
+ [
+ "P",
+ "re"
+ ],
+ [
+ "р",
+ "і"
+ ],
+ [
+ "in",
+ "y"
+ ],
+ [
+ "i",
+ "ny"
+ ],
+ [
+ "th",
+ "eta"
+ ],
+ [
+ "the",
+ "ta"
+ ],
+ [
+ "is",
+ "f"
+ ],
+ [
+ "i",
+ "sf"
+ ],
+ [
+ "▁n",
+ "ational"
+ ],
+ [
+ "▁nat",
+ "ional"
+ ],
+ [
+ "▁nation",
+ "al"
+ ],
+ [
+ "Equ",
+ "al"
+ ],
+ [
+ "Eq",
+ "ual"
+ ],
+ [
+ "E",
+ "qual"
+ ],
+ [
+ "ren",
+ "ch"
+ ],
+ [
+ "▁w",
+ "ife"
+ ],
+ [
+ "▁c",
+ "apt"
+ ],
+ [
+ "▁cap",
+ "t"
+ ],
+ [
+ "▁ca",
+ "pt"
+ ],
+ [
+ "▁In",
+ "ter"
+ ],
+ [
+ "▁Int",
+ "er"
+ ],
+ [
+ "▁",
+ "Inter"
+ ],
+ [
+ "ta",
+ "u"
+ ],
+ [
+ "t",
+ "au"
+ ],
+ [
+ "▁s",
+ "leep"
+ ],
+ [
+ "▁sle",
+ "ep"
+ ],
+ [
+ "▁",
+ "sleep"
+ ],
+ [
+ "../",
+ "../"
+ ],
+ [
+ "▁iss",
+ "ue"
+ ],
+ [
+ "▁",
+ "issue"
+ ],
+ [
+ "▁m",
+ "ember"
+ ],
+ [
+ "▁me",
+ "mber"
+ ],
+ [
+ "▁mem",
+ "ber"
+ ],
+ [
+ "▁",
+ "member"
+ ],
+ [
+ "▁a",
+ "wait"
+ ],
+ [
+ "▁aw",
+ "ait"
+ ],
+ [
+ "▁",
+ "await"
+ ],
+ [
+ "▁D",
+ "an"
+ ],
+ [
+ "▁Da",
+ "n"
+ ],
+ [
+ "▁",
+ "Dan"
+ ],
+ [
+ "z",
+ "i"
+ ],
+ [
+ "in",
+ "ate"
+ ],
+ [
+ "ina",
+ "te"
+ ],
+ [
+ "i",
+ "nate"
+ ],
+ [
+ "▁s",
+ "ym"
+ ],
+ [
+ "▁sy",
+ "m"
+ ],
+ [
+ "▁",
+ "sym"
+ ],
+ [
+ "ch",
+ "an"
+ ],
+ [
+ "cha",
+ "n"
+ ],
+ [
+ "c",
+ "han"
+ ],
+ [
+ "▁J",
+ "ack"
+ ],
+ [
+ "▁Jac",
+ "k"
+ ],
+ [
+ "▁Ja",
+ "ck"
+ ],
+ [
+ "▁",
+ "Jack"
+ ],
+ [
+ "▁Eng",
+ "lish"
+ ],
+ [
+ "▁",
+ "English"
+ ],
+ [
+ "▁s",
+ "z"
+ ],
+ [
+ "▁",
+ "sz"
+ ],
+ [
+ "rib",
+ "utes"
+ ],
+ [
+ "ribut",
+ "es"
+ ],
+ [
+ "ribute",
+ "s"
+ ],
+ [
+ "ribu",
+ "tes"
+ ],
+ [
+ "▁i",
+ "gn"
+ ],
+ [
+ "▁ig",
+ "n"
+ ],
+ [
+ "▁",
+ "ign"
+ ],
+ [
+ "á",
+ "l"
+ ],
+ [
+ "▁app",
+ "ear"
+ ],
+ [
+ "▁appe",
+ "ar"
+ ],
+ [
+ "ra",
+ "d"
+ ],
+ [
+ "r",
+ "ad"
+ ],
+ [
+ "id",
+ "ge"
+ ],
+ [
+ "▁co",
+ "uple"
+ ],
+ [
+ "▁cou",
+ "ple"
+ ],
+ [
+ "▁coup",
+ "le"
+ ],
+ [
+ "▁s",
+ "hip"
+ ],
+ [
+ "▁sh",
+ "ip"
+ ],
+ [
+ "▁",
+ "ship"
+ ],
+ [
+ "li",
+ "g"
+ ],
+ [
+ "l",
+ "ig"
+ ],
+ [
+ "we",
+ "b"
+ ],
+ [
+ "w",
+ "eb"
+ ],
+ [
+ "▁us",
+ "ually"
+ ],
+ [
+ "▁usual",
+ "ly"
+ ],
+ [
+ "▁re",
+ "ady"
+ ],
+ [
+ "▁read",
+ "y"
+ ],
+ [
+ "▁",
+ "ready"
+ ],
+ [
+ "▁v",
+ "ill"
+ ],
+ [
+ "▁vi",
+ "ll"
+ ],
+ [
+ "▁vil",
+ "l"
+ ],
+ [
+ "▁W",
+ "hy"
+ ],
+ [
+ "▁Wh",
+ "y"
+ ],
+ [
+ "▁",
+ "Why"
+ ],
+ [
+ "eb",
+ "ru"
+ ],
+ [
+ "e",
+ "bru"
+ ],
+ [
+ "▁g",
+ "rad"
+ ],
+ [
+ "▁gr",
+ "ad"
+ ],
+ [
+ "▁gra",
+ "d"
+ ],
+ [
+ "▁",
+ "grad"
+ ],
+ [
+ "or",
+ "ds"
+ ],
+ [
+ "ord",
+ "s"
+ ],
+ [
+ "▁in",
+ "f"
+ ],
+ [
+ "▁i",
+ "nf"
+ ],
+ [
+ "▁",
+ "inf"
+ ],
+ [
+ "▁l",
+ "oss"
+ ],
+ [
+ "▁lo",
+ "ss"
+ ],
+ [
+ "▁los",
+ "s"
+ ],
+ [
+ "▁",
+ "loss"
+ ],
+ [
+ "▁o",
+ "d"
+ ],
+ [
+ "▁",
+ "od"
+ ],
+ [
+ "▁Ph",
+ "il"
+ ],
+ [
+ "▁",
+ "Phil"
+ ],
+ [
+ "ser",
+ "ver"
+ ],
+ [
+ "serv",
+ "er"
+ ],
+ [
+ "serve",
+ "r"
+ ],
+ [
+ "▁U",
+ "p"
+ ],
+ [
+ "▁",
+ "Up"
+ ],
+ [
+ "▁b",
+ "uff"
+ ],
+ [
+ "▁bu",
+ "ff"
+ ],
+ [
+ "▁buf",
+ "f"
+ ],
+ [
+ "▁",
+ "buff"
+ ],
+ [
+ "▁fil",
+ "ename"
+ ],
+ [
+ "▁file",
+ "name"
+ ],
+ [
+ "▁",
+ "filename"
+ ],
+ [
+ "AB",
+ "LE"
+ ],
+ [
+ "it",
+ "ing"
+ ],
+ [
+ "iti",
+ "ng"
+ ],
+ [
+ "i",
+ "ting"
+ ],
+ [
+ "ef",
+ "ore"
+ ],
+ [
+ "e",
+ "fore"
+ ],
+ [
+ "()",
+ "->"
+ ],
+ [
+ "(",
+ ")->"
+ ],
+ [
+ "▁cond",
+ "itions"
+ ],
+ [
+ "▁condition",
+ "s"
+ ],
+ [
+ "▁",
+ "conditions"
+ ],
+ [
+ "v",
+ "m"
+ ],
+ [
+ "el",
+ "d"
+ ],
+ [
+ "e",
+ "ld"
+ ],
+ [
+ "it",
+ "z"
+ ],
+ [
+ "i",
+ "tz"
+ ],
+ [
+ "▁Tr",
+ "ans"
+ ],
+ [
+ "▁Tra",
+ "ns"
+ ],
+ [
+ "▁",
+ "Trans"
+ ],
+ [
+ "▁w",
+ "eight"
+ ],
+ [
+ "▁we",
+ "ight"
+ ],
+ [
+ "▁weigh",
+ "t"
+ ],
+ [
+ "▁",
+ "weight"
+ ],
+ [
+ "▁high",
+ "er"
+ ],
+ [
+ "▁hig",
+ "her"
+ ],
+ [
+ "▁r",
+ "ate"
+ ],
+ [
+ "▁rat",
+ "e"
+ ],
+ [
+ "▁ra",
+ "te"
+ ],
+ [
+ "▁",
+ "rate"
+ ],
+ [
+ "▁acc",
+ "om"
+ ],
+ [
+ "▁ac",
+ "com"
+ ],
+ [
+ "vi",
+ "der"
+ ],
+ [
+ "vid",
+ "er"
+ ],
+ [
+ "v",
+ "ider"
+ ],
+ [
+ "O",
+ "M"
+ ],
+ [
+ "▁w",
+ "ays"
+ ],
+ [
+ "▁way",
+ "s"
+ ],
+ [
+ "▁wa",
+ "ys"
+ ],
+ [
+ "▁",
+ "ways"
+ ],
+ [
+ "com",
+ "ing"
+ ],
+ [
+ "co",
+ "ming"
+ ],
+ [
+ "c",
+ "oming"
+ ],
+ [
+ "▁l",
+ "ock"
+ ],
+ [
+ "▁loc",
+ "k"
+ ],
+ [
+ "▁lo",
+ "ck"
+ ],
+ [
+ "▁",
+ "lock"
+ ],
+ [
+ "▁e",
+ "tc"
+ ],
+ [
+ "▁et",
+ "c"
+ ],
+ [
+ "▁",
+ "etc"
+ ],
+ [
+ "▁a",
+ "vec"
+ ],
+ [
+ "▁av",
+ "ec"
+ ],
+ [
+ "▁ave",
+ "c"
+ ],
+ [
+ "▁t",
+ "akes"
+ ],
+ [
+ "▁take",
+ "s"
+ ],
+ [
+ "▁tak",
+ "es"
+ ],
+ [
+ "▁ta",
+ "kes"
+ ],
+ [
+ "▁C",
+ "har"
+ ],
+ [
+ "▁Ch",
+ "ar"
+ ],
+ [
+ "▁Cha",
+ "r"
+ ],
+ [
+ "▁",
+ "Char"
+ ],
+ [
+ "▁N",
+ "ovember"
+ ],
+ [
+ "▁Nov",
+ "ember"
+ ],
+ [
+ "m",
+ "ethod"
+ ],
+ [
+ "▁A",
+ "ustral"
+ ],
+ [
+ "▁Aust",
+ "ral"
+ ],
+ [
+ "▁",
+ "Austral"
+ ],
+ [
+ "▁Amer",
+ "ica"
+ ],
+ [
+ "▁",
+ "America"
+ ],
+ [
+ "lo",
+ "ng"
+ ],
+ [
+ "lon",
+ "g"
+ ],
+ [
+ "l",
+ "ong"
+ ],
+ [
+ "ce",
+ "mber"
+ ],
+ [
+ "c",
+ "ember"
+ ],
+ [
+ "▁polit",
+ "ical"
+ ],
+ [
+ "fl",
+ "ow"
+ ],
+ [
+ "f",
+ "low"
+ ],
+ [
+ "▁may",
+ "be"
+ ],
+ [
+ "▁",
+ "maybe"
+ ],
+ [
+ "▁a",
+ "mb"
+ ],
+ [
+ "▁am",
+ "b"
+ ],
+ [
+ "▁",
+ "amb"
+ ],
+ [
+ "La",
+ "yout"
+ ],
+ [
+ "L",
+ "ayout"
+ ],
+ [
+ "il",
+ "ed"
+ ],
+ [
+ "ile",
+ "d"
+ ],
+ [
+ "i",
+ "led"
+ ],
+ [
+ "om",
+ "en"
+ ],
+ [
+ "ome",
+ "n"
+ ],
+ [
+ "o",
+ "men"
+ ],
+ [
+ "ol",
+ "a"
+ ],
+ [
+ "o",
+ "la"
+ ],
+ [
+ "ic",
+ "ip"
+ ],
+ [
+ "ici",
+ "p"
+ ],
+ [
+ "i",
+ "cip"
+ ],
+ [
+ "part",
+ "ial"
+ ],
+ [
+ "Tr",
+ "ue"
+ ],
+ [
+ "▁f",
+ "loor"
+ ],
+ [
+ "▁fl",
+ "oor"
+ ],
+ [
+ "▁flo",
+ "or"
+ ],
+ [
+ "▁",
+ "floor"
+ ],
+ [
+ "▁D",
+ "ef"
+ ],
+ [
+ "▁De",
+ "f"
+ ],
+ [
+ "▁",
+ "Def"
+ ],
+ [
+ "▁conc",
+ "ern"
+ ],
+ [
+ "▁conce",
+ "rn"
+ ],
+ [
+ "▁concer",
+ "n"
+ ],
+ [
+ "y",
+ "r"
+ ],
+ [
+ "▁sh",
+ "ows"
+ ],
+ [
+ "▁show",
+ "s"
+ ],
+ [
+ "i",
+ "h"
+ ],
+ [
+ "▁an",
+ "swer"
+ ],
+ [
+ "▁answ",
+ "er"
+ ],
+ [
+ "▁ans",
+ "wer"
+ ],
+ [
+ "▁",
+ "answer"
+ ],
+ [
+ "ac",
+ "c"
+ ],
+ [
+ "a",
+ "cc"
+ ],
+ [
+ "▁b",
+ "all"
+ ],
+ [
+ "▁bal",
+ "l"
+ ],
+ [
+ "▁ba",
+ "ll"
+ ],
+ [
+ "▁",
+ "ball"
+ ],
+ [
+ "▁R",
+ "ev"
+ ],
+ [
+ "▁Re",
+ "v"
+ ],
+ [
+ "▁",
+ "Rev"
+ ],
+ [
+ "▁s",
+ "un"
+ ],
+ [
+ "▁su",
+ "n"
+ ],
+ [
+ "▁",
+ "sun"
+ ],
+ [
+ "▁quick",
+ "ly"
+ ],
+ [
+ "▁s",
+ "omet"
+ ],
+ [
+ "▁so",
+ "met"
+ ],
+ [
+ "▁some",
+ "t"
+ ],
+ [
+ "▁som",
+ "et"
+ ],
+ [
+ "ment",
+ "e"
+ ],
+ [
+ "me",
+ "nte"
+ ],
+ [
+ "men",
+ "te"
+ ],
+ [
+ "m",
+ "ente"
+ ],
+ [
+ "▁M",
+ "al"
+ ],
+ [
+ "▁Ma",
+ "l"
+ ],
+ [
+ "▁",
+ "Mal"
+ ],
+ [
+ "und",
+ "red"
+ ],
+ [
+ "▁iss",
+ "ues"
+ ],
+ [
+ "▁issue",
+ "s"
+ ],
+ [
+ "▁",
+ "issues"
+ ],
+ [
+ "ec",
+ "ause"
+ ],
+ [
+ "eca",
+ "use"
+ ],
+ [
+ "pe",
+ "s"
+ ],
+ [
+ "p",
+ "es"
+ ],
+ [
+ "▁p",
+ "layer"
+ ],
+ [
+ "▁pl",
+ "ayer"
+ ],
+ [
+ "▁play",
+ "er"
+ ],
+ [
+ "▁",
+ "player"
+ ],
+ [
+ "▁par",
+ "ents"
+ ],
+ [
+ "▁parent",
+ "s"
+ ],
+ [
+ "▁",
+ "parents"
+ ],
+ [
+ "▁pop",
+ "ular"
+ ],
+ [
+ "▁popula",
+ "r"
+ ],
+ [
+ "▁popul",
+ "ar"
+ ],
+ [
+ "▁m",
+ "ode"
+ ],
+ [
+ "▁mod",
+ "e"
+ ],
+ [
+ "▁mo",
+ "de"
+ ],
+ [
+ "▁",
+ "mode"
+ ],
+ [
+ "▁m",
+ "ention"
+ ],
+ [
+ "▁ment",
+ "ion"
+ ],
+ [
+ "N",
+ "E"
+ ],
+ [
+ "Lo",
+ "ad"
+ ],
+ [
+ "L",
+ "oad"
+ ],
+ [
+ "▁reg",
+ "ular"
+ ],
+ [
+ "▁regul",
+ "ar"
+ ],
+ [
+ "▁",
+ "regular"
+ ],
+ [
+ "ave",
+ "d"
+ ],
+ [
+ "av",
+ "ed"
+ ],
+ [
+ "a",
+ "ved"
+ ],
+ [
+ "?",
+ ":"
+ ],
+ [
+ "ye",
+ "ar"
+ ],
+ [
+ "y",
+ "ear"
+ ],
+ [
+ "fun",
+ "c"
+ ],
+ [
+ "fu",
+ "nc"
+ ],
+ [
+ "f",
+ "unc"
+ ],
+ [
+ "▁per",
+ "formance"
+ ],
+ [
+ "▁perform",
+ "ance"
+ ],
+ [
+ "▁J",
+ "uly"
+ ],
+ [
+ "▁Jul",
+ "y"
+ ],
+ [
+ "▁Ju",
+ "ly"
+ ],
+ [
+ "th",
+ "ern"
+ ],
+ [
+ "ther",
+ "n"
+ ],
+ [
+ "the",
+ "rn"
+ ],
+ [
+ "▁we",
+ "bsite"
+ ],
+ [
+ "▁webs",
+ "ite"
+ ],
+ [
+ "▁web",
+ "site"
+ ],
+ [
+ "fo",
+ "rd"
+ ],
+ [
+ "for",
+ "d"
+ ],
+ [
+ "f",
+ "ord"
+ ],
+ [
+ "P",
+ "R"
+ ],
+ [
+ "el",
+ "a"
+ ],
+ [
+ "e",
+ "la"
+ ],
+ [
+ "le",
+ "vel"
+ ],
+ [
+ "lev",
+ "el"
+ ],
+ [
+ "l",
+ "evel"
+ ],
+ [
+ "ui",
+ "t"
+ ],
+ [
+ "u",
+ "it"
+ ],
+ [
+ "fl",
+ "ags"
+ ],
+ [
+ "flag",
+ "s"
+ ],
+ [
+ "▁w",
+ "orth"
+ ],
+ [
+ "▁wor",
+ "th"
+ ],
+ [
+ "▁",
+ "worth"
+ ],
+ [
+ "▁cor",
+ "respon"
+ ],
+ [
+ "▁Brit",
+ "ish"
+ ],
+ [
+ "si",
+ "m"
+ ],
+ [
+ "s",
+ "im"
+ ],
+ [
+ "▁al",
+ "one"
+ ],
+ [
+ "▁",
+ "alone"
+ ],
+ [
+ "▁h",
+ "ar"
+ ],
+ [
+ "▁ha",
+ "r"
+ ],
+ [
+ "▁",
+ "har"
+ ],
+ [
+ "▁o",
+ "nes"
+ ],
+ [
+ "▁on",
+ "es"
+ ],
+ [
+ "▁one",
+ "s"
+ ],
+ [
+ "▁",
+ "ones"
+ ],
+ [
+ "ob",
+ "ile"
+ ],
+ [
+ "obi",
+ "le"
+ ],
+ [
+ "obil",
+ "e"
+ ],
+ [
+ "▁d",
+ "ru"
+ ],
+ [
+ "▁dr",
+ "u"
+ ],
+ [
+ "▁",
+ "dru"
+ ],
+ [
+ "ch",
+ "i"
+ ],
+ [
+ "c",
+ "hi"
+ ],
+ [
+ "▁D",
+ "avid"
+ ],
+ [
+ "▁Dav",
+ "id"
+ ],
+ [
+ "▁Da",
+ "vid"
+ ],
+ [
+ "▁",
+ "David"
+ ],
+ [
+ "▁proble",
+ "ms"
+ ],
+ [
+ "▁problem",
+ "s"
+ ],
+ [
+ "▁col",
+ "umn"
+ ],
+ [
+ "▁",
+ "column"
+ ],
+ [
+ "()",
+ ";\r"
+ ],
+ [
+ "();",
+ "\r"
+ ],
+ [
+ "(",
+ ");\r"
+ ],
+ [
+ "Z",
+ "E"
+ ],
+ [
+ "▁re",
+ "lig"
+ ],
+ [
+ "▁rel",
+ "ig"
+ ],
+ [
+ "▁reli",
+ "g"
+ ],
+ [
+ "olog",
+ "ical"
+ ],
+ [
+ "▁reg",
+ "ion"
+ ],
+ [
+ "▁",
+ "region"
+ ],
+ [
+ "ad",
+ "y"
+ ],
+ [
+ "a",
+ "dy"
+ ],
+ [
+ "I",
+ "O"
+ ],
+ [
+ "an",
+ "der"
+ ],
+ [
+ "and",
+ "er"
+ ],
+ [
+ "ande",
+ "r"
+ ],
+ [
+ "a",
+ "nder"
+ ],
+ [
+ "Ne",
+ "t"
+ ],
+ [
+ "N",
+ "et"
+ ],
+ [
+ "▁bu",
+ "ilt"
+ ],
+ [
+ "▁",
+ "built"
+ ],
+ [
+ "▁inst",
+ "all"
+ ],
+ [
+ "▁",
+ "install"
+ ],
+ [
+ "▁appro",
+ "ach"
+ ],
+ [
+ "C",
+ "ur"
+ ],
+ [
+ "▁f",
+ "ine"
+ ],
+ [
+ "▁fin",
+ "e"
+ ],
+ [
+ "▁fi",
+ "ne"
+ ],
+ [
+ "▁talk",
+ "ing"
+ ],
+ [
+ "▁tal",
+ "king"
+ ],
+ [
+ "▁ch",
+ "anges"
+ ],
+ [
+ "▁chang",
+ "es"
+ ],
+ [
+ "▁change",
+ "s"
+ ],
+ [
+ "▁",
+ "changes"
+ ],
+ [
+ "St",
+ "yle"
+ ],
+ [
+ "▁M",
+ "art"
+ ],
+ [
+ "▁Mar",
+ "t"
+ ],
+ [
+ "▁Ma",
+ "rt"
+ ],
+ [
+ "▁",
+ "Mart"
+ ],
+ [
+ "л",
+ "ю"
+ ],
+ [
+ "res",
+ "ponse"
+ ],
+ [
+ "respon",
+ "se"
+ ],
+ [
+ "respons",
+ "e"
+ ],
+ [
+ "te",
+ "ger"
+ ],
+ [
+ "{",
+ "\r"
+ ],
+ [
+ "ir",
+ "it"
+ ],
+ [
+ "iri",
+ "t"
+ ],
+ [
+ "i",
+ "rit"
+ ],
+ [
+ "▁prote",
+ "cted"
+ ],
+ [
+ "▁protect",
+ "ed"
+ ],
+ [
+ "▁",
+ "protected"
+ ],
+ [
+ "▁re",
+ "le"
+ ],
+ [
+ "▁r",
+ "ele"
+ ],
+ [
+ "▁rel",
+ "e"
+ ],
+ [
+ "er",
+ "ship"
+ ],
+ [
+ "ers",
+ "hip"
+ ],
+ [
+ "те",
+ "ль"
+ ],
+ [
+ "тел",
+ "ь"
+ ],
+ [
+ "un",
+ "signed"
+ ],
+ [
+ "uns",
+ "igned"
+ ],
+ [
+ "ial",
+ "ize"
+ ],
+ [
+ "▁htt",
+ "ps"
+ ],
+ [
+ "▁http",
+ "s"
+ ],
+ [
+ "▁",
+ "https"
+ ],
+ [
+ "T",
+ "ag"
+ ],
+ [
+ "▁$",
+ "("
+ ],
+ [
+ "▁",
+ "$("
+ ],
+ [
+ "mo",
+ "re"
+ ],
+ [
+ "mor",
+ "e"
+ ],
+ [
+ "m",
+ "ore"
+ ],
+ [
+ "ype",
+ "s"
+ ],
+ [
+ "yp",
+ "es"
+ ],
+ [
+ "y",
+ "pes"
+ ],
+ [
+ "▁st",
+ "ream"
+ ],
+ [
+ "▁stre",
+ "am"
+ ],
+ [
+ "▁",
+ "stream"
+ ],
+ [
+ "et",
+ "ch"
+ ],
+ [
+ "etc",
+ "h"
+ ],
+ [
+ "▁eng",
+ "ine"
+ ],
+ [
+ "▁",
+ "engine"
+ ],
+ [
+ "K",
+ "E"
+ ],
+ [
+ "cm",
+ "d"
+ ],
+ [
+ "c",
+ "md"
+ ],
+ [
+ "sc",
+ "ript"
+ ],
+ [
+ "scri",
+ "pt"
+ ],
+ [
+ "scr",
+ "ipt"
+ ],
+ [
+ "s",
+ "cript"
+ ],
+ [
+ "tt",
+ "p"
+ ],
+ [
+ "t",
+ "tp"
+ ],
+ [
+ "▁a",
+ "void"
+ ],
+ [
+ "▁av",
+ "oid"
+ ],
+ [
+ "▁t",
+ "err"
+ ],
+ [
+ "▁te",
+ "rr"
+ ],
+ [
+ "▁ter",
+ "r"
+ ],
+ [
+ "▁r",
+ "ock"
+ ],
+ [
+ "▁ro",
+ "ck"
+ ],
+ [
+ "▁",
+ "rock"
+ ],
+ [
+ "▁f",
+ "ul"
+ ],
+ [
+ "▁fu",
+ "l"
+ ],
+ [
+ "▁",
+ "ful"
+ ],
+ [
+ "Up",
+ "date"
+ ],
+ [
+ "▁env",
+ "ironment"
+ ],
+ [
+ "▁environ",
+ "ment"
+ ],
+ [
+ "▁",
+ "environment"
+ ],
+ [
+ "▁p",
+ "rec"
+ ],
+ [
+ "▁pre",
+ "c"
+ ],
+ [
+ "▁pr",
+ "ec"
+ ],
+ [
+ "▁",
+ "prec"
+ ],
+ [
+ "▁с",
+ "а"
+ ],
+ [
+ "▁",
+ "са"
+ ],
+ [
+ "▁c",
+ "ases"
+ ],
+ [
+ "▁case",
+ "s"
+ ],
+ [
+ "▁cas",
+ "es"
+ ],
+ [
+ "▁ca",
+ "ses"
+ ],
+ [
+ "▁",
+ "cases"
+ ],
+ [
+ "▁off",
+ "set"
+ ],
+ [
+ "▁",
+ "offset"
+ ],
+ [
+ "▁r",
+ "ais"
+ ],
+ [
+ "▁ra",
+ "is"
+ ],
+ [
+ "▁",
+ "rais"
+ ],
+ [
+ "li",
+ "b"
+ ],
+ [
+ "l",
+ "ib"
+ ],
+ [
+ "ée",
+ "s"
+ ],
+ [
+ "é",
+ "es"
+ ],
+ [
+ "a",
+ "a"
+ ],
+ [
+ "y",
+ "t"
+ ],
+ [
+ "▁a",
+ "rr"
+ ],
+ [
+ "▁ar",
+ "r"
+ ],
+ [
+ "▁",
+ "arr"
+ ],
+ [
+ "opy",
+ "right"
+ ],
+ [
+ "f",
+ "irst"
+ ],
+ [
+ "▁u",
+ "til"
+ ],
+ [
+ "▁ut",
+ "il"
+ ],
+ [
+ "▁",
+ "util"
+ ],
+ [
+ "▁fe",
+ "ature"
+ ],
+ [
+ "▁feat",
+ "ure"
+ ],
+ [
+ "▁",
+ "feature"
+ ],
+ [
+ "pos",
+ "ed"
+ ],
+ [
+ "po",
+ "sed"
+ ],
+ [
+ "pose",
+ "d"
+ ],
+ [
+ "p",
+ "osed"
+ ],
+ [
+ "ff",
+ "ect"
+ ],
+ [
+ "f",
+ "fect"
+ ],
+ [
+ "ж",
+ "а"
+ ],
+ [
+ "it",
+ "ude"
+ ],
+ [
+ "itu",
+ "de"
+ ],
+ [
+ "itud",
+ "e"
+ ],
+ [
+ "em",
+ "ents"
+ ],
+ [
+ "ement",
+ "s"
+ ],
+ [
+ "emen",
+ "ts"
+ ],
+ [
+ "e",
+ "ments"
+ ],
+ [
+ "as",
+ "c"
+ ],
+ [
+ "a",
+ "sc"
+ ],
+ [
+ "ad",
+ "or"
+ ],
+ [
+ "ado",
+ "r"
+ ],
+ [
+ "le",
+ "ctions"
+ ],
+ [
+ "lect",
+ "ions"
+ ],
+ [
+ "lection",
+ "s"
+ ],
+ [
+ "▁cl",
+ "ub"
+ ],
+ [
+ "▁",
+ "club"
+ ],
+ [
+ "]",
+ "{"
+ ],
+ [
+ "▁*",
+ ")"
+ ],
+ [
+ "▁",
+ "*)"
+ ],
+ [
+ "ст",
+ "во"
+ ],
+ [
+ "ств",
+ "о"
+ ],
+ [
+ "с",
+ "тво"
+ ],
+ [
+ "▁im",
+ "m"
+ ],
+ [
+ "▁i",
+ "mm"
+ ],
+ [
+ "▁",
+ "imm"
+ ],
+ [
+ "▁for",
+ "mer"
+ ],
+ [
+ "▁form",
+ "er"
+ ],
+ [
+ "▁forme",
+ "r"
+ ],
+ [
+ "▁",
+ "former"
+ ],
+ [
+ "▁r",
+ "ights"
+ ],
+ [
+ "▁right",
+ "s"
+ ],
+ [
+ "▁dec",
+ "ided"
+ ],
+ [
+ "▁decide",
+ "d"
+ ],
+ [
+ "▁decid",
+ "ed"
+ ],
+ [
+ "▁re",
+ "v"
+ ],
+ [
+ "▁r",
+ "ev"
+ ],
+ [
+ "▁",
+ "rev"
+ ],
+ [
+ "▁m",
+ "ent"
+ ],
+ [
+ "▁me",
+ "nt"
+ ],
+ [
+ "▁men",
+ "t"
+ ],
+ [
+ "▁",
+ "ment"
+ ],
+ [
+ "an",
+ "i"
+ ],
+ [
+ "a",
+ "ni"
+ ],
+ [
+ "▁st",
+ "ru"
+ ],
+ [
+ "▁str",
+ "u"
+ ],
+ [
+ "▁",
+ "stru"
+ ],
+ [
+ "▁att",
+ "ention"
+ ],
+ [
+ "art",
+ "ment"
+ ],
+ [
+ "▁I",
+ "tal"
+ ],
+ [
+ "▁It",
+ "al"
+ ],
+ [
+ "al",
+ "le"
+ ],
+ [
+ "all",
+ "e"
+ ],
+ [
+ "a",
+ "lle"
+ ],
+ [
+ "▁b",
+ "is"
+ ],
+ [
+ "▁bi",
+ "s"
+ ],
+ [
+ "▁",
+ "bis"
+ ],
+ [
+ "ge",
+ "ner"
+ ],
+ [
+ "gen",
+ "er"
+ ],
+ [
+ "g",
+ "ener"
+ ],
+ [
+ "▁in",
+ "tegr"
+ ],
+ [
+ "▁int",
+ "egr"
+ ],
+ [
+ "▁inte",
+ "gr"
+ ],
+ [
+ "▁",
+ "integr"
+ ],
+ [
+ "el",
+ "lo"
+ ],
+ [
+ "ell",
+ "o"
+ ],
+ [
+ "ry",
+ "pt"
+ ],
+ [
+ "▁a",
+ "chie"
+ ],
+ [
+ "ne",
+ "s"
+ ],
+ [
+ "n",
+ "es"
+ ],
+ [
+ "▁s",
+ "tra"
+ ],
+ [
+ "▁st",
+ "ra"
+ ],
+ [
+ "▁str",
+ "a"
+ ],
+ [
+ "▁",
+ "stra"
+ ],
+ [
+ "s",
+ "b"
+ ],
+ [
+ "▁t",
+ "ypes"
+ ],
+ [
+ "▁type",
+ "s"
+ ],
+ [
+ "▁typ",
+ "es"
+ ],
+ [
+ "▁ty",
+ "pes"
+ ],
+ [
+ "▁",
+ "types"
+ ],
+ [
+ "▁R",
+ "E"
+ ],
+ [
+ "▁",
+ "RE"
+ ],
+ [
+ "In",
+ "it"
+ ],
+ [
+ "I",
+ "nit"
+ ],
+ [
+ "▁com",
+ "ment"
+ ],
+ [
+ "▁comm",
+ "ent"
+ ],
+ [
+ "▁comme",
+ "nt"
+ ],
+ [
+ "▁",
+ "comment"
+ ],
+ [
+ "▁add",
+ "ition"
+ ],
+ [
+ "▁I",
+ "D"
+ ],
+ [
+ "▁",
+ "ID"
+ ],
+ [
+ "AR",
+ "T"
+ ],
+ [
+ "A",
+ "RT"
+ ],
+ [
+ "F",
+ "O"
+ ],
+ [
+ "щ",
+ "и"
+ ],
+ [
+ "Con",
+ "ne"
+ ],
+ [
+ "Conn",
+ "e"
+ ],
+ [
+ "C",
+ "onne"
+ ],
+ [
+ "▁s",
+ "qu"
+ ],
+ [
+ "▁sq",
+ "u"
+ ],
+ [
+ "▁consider",
+ "ed"
+ ],
+ [
+ "▁consid",
+ "ered"
+ ],
+ [
+ "id",
+ "ad"
+ ],
+ [
+ "ida",
+ "d"
+ ],
+ [
+ "▁Oct",
+ "ober"
+ ],
+ [
+ "ci",
+ "al"
+ ],
+ [
+ "cia",
+ "l"
+ ],
+ [
+ "c",
+ "ial"
+ ],
+ [
+ "▁O",
+ "f"
+ ],
+ [
+ "▁",
+ "Of"
+ ],
+ [
+ "▁tr",
+ "avel"
+ ],
+ [
+ "▁tra",
+ "vel"
+ ],
+ [
+ "▁trav",
+ "el"
+ ],
+ [
+ "▁b",
+ "oy"
+ ],
+ [
+ "▁bo",
+ "y"
+ ],
+ [
+ "▁",
+ "boy"
+ ],
+ [
+ "')",
+ "."
+ ],
+ [
+ "'",
+ ")."
+ ],
+ [
+ "u",
+ "y"
+ ],
+ [
+ "il",
+ "la"
+ ],
+ [
+ "ill",
+ "a"
+ ],
+ [
+ "i",
+ "lla"
+ ],
+ [
+ "is",
+ "try"
+ ],
+ [
+ "ist",
+ "ry"
+ ],
+ [
+ "istr",
+ "y"
+ ],
+ [
+ "▁v",
+ "a"
+ ],
+ [
+ "▁",
+ "va"
+ ],
+ [
+ "▁C",
+ "he"
+ ],
+ [
+ "▁Ch",
+ "e"
+ ],
+ [
+ "▁",
+ "Che"
+ ],
+ [
+ "ER",
+ "T"
+ ],
+ [
+ "E",
+ "RT"
+ ],
+ [
+ "en",
+ "de"
+ ],
+ [
+ "end",
+ "e"
+ ],
+ [
+ "e",
+ "nde"
+ ],
+ [
+ "un",
+ "gen"
+ ],
+ [
+ "ung",
+ "en"
+ ],
+ [
+ "unge",
+ "n"
+ ],
+ [
+ "ab",
+ "y"
+ ],
+ [
+ "a",
+ "by"
+ ],
+ [
+ "▁R",
+ "ober"
+ ],
+ [
+ "▁Ro",
+ "ber"
+ ],
+ [
+ "▁Rob",
+ "er"
+ ],
+ [
+ "▁play",
+ "ing"
+ ],
+ [
+ "il",
+ "s"
+ ],
+ [
+ "i",
+ "ls"
+ ],
+ [
+ "▁s",
+ "am"
+ ],
+ [
+ "▁sa",
+ "m"
+ ],
+ [
+ "▁",
+ "sam"
+ ],
+ [
+ "▁ex",
+ "ecut"
+ ],
+ [
+ "▁exec",
+ "ut"
+ ],
+ [
+ "▁",
+ "execut"
+ ],
+ [
+ "▁U",
+ "s"
+ ],
+ [
+ "▁",
+ "Us"
+ ],
+ [
+ "▁m",
+ "ut"
+ ],
+ [
+ "▁mu",
+ "t"
+ ],
+ [
+ "▁",
+ "mut"
+ ],
+ [
+ "▁b",
+ "al"
+ ],
+ [
+ "▁ba",
+ "l"
+ ],
+ [
+ "▁",
+ "bal"
+ ],
+ [
+ "as",
+ "se"
+ ],
+ [
+ "ass",
+ "e"
+ ],
+ [
+ "▁k",
+ "ids"
+ ],
+ [
+ "▁kid",
+ "s"
+ ],
+ [
+ "▁ki",
+ "ds"
+ ],
+ [
+ "▁fin",
+ "anc"
+ ],
+ [
+ "go",
+ "r"
+ ],
+ [
+ "g",
+ "or"
+ ],
+ [
+ "▁S",
+ "ec"
+ ],
+ [
+ "▁Se",
+ "c"
+ ],
+ [
+ "▁",
+ "Sec"
+ ],
+ [
+ "ber",
+ "t"
+ ],
+ [
+ "be",
+ "rt"
+ ],
+ [
+ "b",
+ "ert"
+ ],
+ [
+ "▁H",
+ "igh"
+ ],
+ [
+ "▁Hig",
+ "h"
+ ],
+ [
+ "▁Hi",
+ "gh"
+ ],
+ [
+ "▁",
+ "High"
+ ],
+ [
+ "▁",
+ "је"
+ ],
+ [
+ "▁ke",
+ "pt"
+ ],
+ [
+ "but",
+ "ton"
+ ],
+ [
+ "b",
+ "utton"
+ ],
+ [
+ "it",
+ "ory"
+ ],
+ [
+ "itor",
+ "y"
+ ],
+ [
+ "ito",
+ "ry"
+ ],
+ [
+ "▁R",
+ "em"
+ ],
+ [
+ "▁Re",
+ "m"
+ ],
+ [
+ "▁",
+ "Rem"
+ ],
+ [
+ "▁D",
+ "E"
+ ],
+ [
+ "▁",
+ "DE"
+ ],
+ [
+ "▁re",
+ "ach"
+ ],
+ [
+ "▁r",
+ "each"
+ ],
+ [
+ "▁",
+ "reach"
+ ],
+ [
+ "▁b",
+ "ur"
+ ],
+ [
+ "▁bu",
+ "r"
+ ],
+ [
+ "▁",
+ "bur"
+ ],
+ [
+ "La",
+ "bel"
+ ],
+ [
+ "L",
+ "abel"
+ ],
+ [
+ "á",
+ "t"
+ ],
+ [
+ "ag",
+ "o"
+ ],
+ [
+ "a",
+ "go"
+ ],
+ [
+ "▁pass",
+ "ed"
+ ],
+ [
+ "▁pas",
+ "sed"
+ ],
+ [
+ "▁be",
+ "hav"
+ ],
+ [
+ "▁beh",
+ "av"
+ ],
+ [
+ "xF",
+ "F"
+ ],
+ [
+ "x",
+ "FF"
+ ],
+ [
+ "▁R",
+ "eturn"
+ ],
+ [
+ "▁Re",
+ "turn"
+ ],
+ [
+ "▁Ret",
+ "urn"
+ ],
+ [
+ "▁",
+ "Return"
+ ],
+ [
+ "ST",
+ "R"
+ ],
+ [
+ "S",
+ "TR"
+ ],
+ [
+ "▁L",
+ "es"
+ ],
+ [
+ "▁Le",
+ "s"
+ ],
+ [
+ "▁",
+ "Les"
+ ],
+ [
+ "▁o",
+ "rd"
+ ],
+ [
+ "▁or",
+ "d"
+ ],
+ [
+ "▁",
+ "ord"
+ ],
+ [
+ "al",
+ "a"
+ ],
+ [
+ "a",
+ "la"
+ ],
+ [
+ "in",
+ "ger"
+ ],
+ [
+ "ing",
+ "er"
+ ],
+ [
+ "inge",
+ "r"
+ ],
+ [
+ "▁S",
+ "ince"
+ ],
+ [
+ "▁Sin",
+ "ce"
+ ],
+ [
+ "▁",
+ "Since"
+ ],
+ [
+ "▁exper",
+ "i"
+ ],
+ [
+ "▁exp",
+ "eri"
+ ],
+ [
+ "▁s",
+ "hall"
+ ],
+ [
+ "▁sh",
+ "all"
+ ],
+ [
+ "▁sha",
+ "ll"
+ ],
+ [
+ "▁",
+ "shall"
+ ],
+ [
+ "▁s",
+ "tar"
+ ],
+ [
+ "▁st",
+ "ar"
+ ],
+ [
+ "▁sta",
+ "r"
+ ],
+ [
+ "▁",
+ "star"
+ ],
+ [
+ "no",
+ "n"
+ ],
+ [
+ "n",
+ "on"
+ ],
+ [
+ "▁g",
+ "un"
+ ],
+ [
+ "▁gu",
+ "n"
+ ],
+ [
+ "▁",
+ "gun"
+ ],
+ [
+ "▁B",
+ "el"
+ ],
+ [
+ "▁Be",
+ "l"
+ ],
+ [
+ "▁",
+ "Bel"
+ ],
+ [
+ "▁ob",
+ "j"
+ ],
+ [
+ "▁",
+ "obj"
+ ],
+ [
+ "ar",
+ "es"
+ ],
+ [
+ "are",
+ "s"
+ ],
+ [
+ "a",
+ "res"
+ ],
+ [
+ "r",
+ "s"
+ ],
+ [
+ "▁we",
+ "eks"
+ ],
+ [
+ "▁week",
+ "s"
+ ],
+ [
+ "ne",
+ "n"
+ ],
+ [
+ "n",
+ "en"
+ ],
+ [
+ "▁S",
+ "tre"
+ ],
+ [
+ "▁St",
+ "re"
+ ],
+ [
+ "▁Str",
+ "e"
+ ],
+ [
+ "or",
+ "ing"
+ ],
+ [
+ "ori",
+ "ng"
+ ],
+ [
+ "o",
+ "ring"
+ ],
+ [
+ "▁",
+ "î"
+ ],
+ [
+ "▁ser",
+ "ious"
+ ],
+ [
+ "time",
+ "s"
+ ],
+ [
+ "ti",
+ "mes"
+ ],
+ [
+ "tim",
+ "es"
+ ],
+ [
+ "t",
+ "imes"
+ ],
+ [
+ "▁H",
+ "ouse"
+ ],
+ [
+ "▁Ho",
+ "use"
+ ],
+ [
+ "▁Hou",
+ "se"
+ ],
+ [
+ "▁r",
+ "oll"
+ ],
+ [
+ "▁ro",
+ "ll"
+ ],
+ [
+ "▁",
+ "roll"
+ ],
+ [
+ "▁reg",
+ "ister"
+ ],
+ [
+ "▁",
+ "register"
+ ],
+ [
+ "▁mod",
+ "ule"
+ ],
+ [
+ "▁mo",
+ "dule"
+ ],
+ [
+ "▁",
+ "module"
+ ],
+ [
+ "▁app",
+ "lic"
+ ],
+ [
+ "▁ap",
+ "plic"
+ ],
+ [
+ "▁appl",
+ "ic"
+ ],
+ [
+ "I",
+ "R"
+ ],
+ [
+ "▁c",
+ "ook"
+ ],
+ [
+ "▁co",
+ "ok"
+ ],
+ [
+ "▁",
+ "cook"
+ ],
+ [
+ "au",
+ "x"
+ ],
+ [
+ "a",
+ "ux"
+ ],
+ [
+ "▁s",
+ "ave"
+ ],
+ [
+ "▁sa",
+ "ve"
+ ],
+ [
+ "▁sav",
+ "e"
+ ],
+ [
+ "▁",
+ "save"
+ ],
+ [
+ "▁C",
+ "r"
+ ],
+ [
+ "▁",
+ "Cr"
+ ],
+ [
+ ",",
+ "\r"
+ ],
+ [
+ "▁st",
+ "ates"
+ ],
+ [
+ "▁stat",
+ "es"
+ ],
+ [
+ "▁state",
+ "s"
+ ],
+ [
+ "▁sta",
+ "tes"
+ ],
+ [
+ "▁",
+ "states"
+ ],
+ [
+ "▁em",
+ "pty"
+ ],
+ [
+ "▁emp",
+ "ty"
+ ],
+ [
+ "▁empt",
+ "y"
+ ],
+ [
+ "▁",
+ "empty"
+ ],
+ [
+ "▁aut",
+ "om"
+ ],
+ [
+ "▁au",
+ "tom"
+ ],
+ [
+ "▁auto",
+ "m"
+ ],
+ [
+ "▁",
+ "autom"
+ ],
+ [
+ "fig",
+ "ure"
+ ],
+ [
+ "ian",
+ "ce"
+ ],
+ [
+ "i",
+ "ance"
+ ],
+ [
+ "▁h",
+ "appy"
+ ],
+ [
+ "▁happ",
+ "y"
+ ],
+ [
+ "▁f",
+ "n"
+ ],
+ [
+ "▁",
+ "fn"
+ ],
+ [
+ "▁j",
+ "ud"
+ ],
+ [
+ "▁ju",
+ "d"
+ ],
+ [
+ "▁",
+ "jud"
+ ],
+ [
+ "▁h",
+ "at"
+ ],
+ [
+ "▁ha",
+ "t"
+ ],
+ [
+ "▁",
+ "hat"
+ ],
+ [
+ "AC",
+ "K"
+ ],
+ [
+ "A",
+ "CK"
+ ],
+ [
+ "▁F",
+ "e"
+ ],
+ [
+ "▁",
+ "Fe"
+ ],
+ [
+ "$",
+ "-"
+ ],
+ [
+ "iv",
+ "il"
+ ],
+ [
+ "ivi",
+ "l"
+ ],
+ [
+ "i",
+ "vil"
+ ],
+ [
+ "ot",
+ "ed"
+ ],
+ [
+ "ote",
+ "d"
+ ],
+ [
+ "o",
+ "ted"
+ ],
+ [
+ "▁size",
+ "of"
+ ],
+ [
+ "▁",
+ "sizeof"
+ ],
+ [
+ "▁sit",
+ "uation"
+ ],
+ [
+ "▁situ",
+ "ation"
+ ],
+ [
+ "▁l",
+ "ives"
+ ],
+ [
+ "▁li",
+ "ves"
+ ],
+ [
+ "▁live",
+ "s"
+ ],
+ [
+ "▁liv",
+ "es"
+ ],
+ [
+ "▁fe",
+ "eling"
+ ],
+ [
+ "▁feel",
+ "ing"
+ ],
+ [
+ "▁fee",
+ "ling"
+ ],
+ [
+ "▁r",
+ "isk"
+ ],
+ [
+ "▁ri",
+ "sk"
+ ],
+ [
+ "▁ris",
+ "k"
+ ],
+ [
+ "▁Jan",
+ "uary"
+ ],
+ [
+ "▁Januar",
+ "y"
+ ],
+ [
+ "▁Ob",
+ "ject"
+ ],
+ [
+ "▁",
+ "Object"
+ ],
+ [
+ "▁re",
+ "comm"
+ ],
+ [
+ "▁rec",
+ "omm"
+ ],
+ [
+ "▁в",
+ "ы"
+ ],
+ [
+ "▁",
+ "вы"
+ ],
+ [
+ "▁pot",
+ "ential"
+ ],
+ [
+ "ea",
+ "h"
+ ],
+ [
+ "e",
+ "ah"
+ ],
+ [
+ "▁com",
+ "plex"
+ ],
+ [
+ "▁comp",
+ "lex"
+ ],
+ [
+ "▁compl",
+ "ex"
+ ],
+ [
+ "▁",
+ "complex"
+ ],
+ [
+ "print",
+ "f"
+ ],
+ [
+ "ist",
+ "ance"
+ ],
+ [
+ "istan",
+ "ce"
+ ],
+ [
+ "i",
+ "stance"
+ ],
+ [
+ "ir",
+ "th"
+ ],
+ [
+ "irt",
+ "h"
+ ],
+ [
+ "li",
+ "k"
+ ],
+ [
+ "l",
+ "ik"
+ ],
+ [
+ "as",
+ "te"
+ ],
+ [
+ "ast",
+ "e"
+ ],
+ [
+ "a",
+ "ste"
+ ],
+ [
+ "▁wh",
+ "ose"
+ ],
+ [
+ "▁who",
+ "se"
+ ],
+ [
+ "Ar",
+ "g"
+ ],
+ [
+ "A",
+ "rg"
+ ],
+ [
+ "▁mod",
+ "ern"
+ ],
+ [
+ "▁mo",
+ "dern"
+ ],
+ [
+ "▁mode",
+ "rn"
+ ],
+ [
+ "▁moder",
+ "n"
+ ],
+ [
+ "ion",
+ "es"
+ ],
+ [
+ "io",
+ "nes"
+ ],
+ [
+ "ione",
+ "s"
+ ],
+ [
+ "i",
+ "ones"
+ ],
+ [
+ "▁ч",
+ "е"
+ ],
+ [
+ "▁",
+ "че"
+ ],
+ [
+ "▁s",
+ "ett"
+ ],
+ [
+ "▁se",
+ "tt"
+ ],
+ [
+ "▁set",
+ "t"
+ ],
+ [
+ "▁M",
+ "ag"
+ ],
+ [
+ "▁Ma",
+ "g"
+ ],
+ [
+ "▁",
+ "Mag"
+ ],
+ [
+ "a",
+ "e"
+ ],
+ [
+ "▁cond",
+ "ition"
+ ],
+ [
+ "▁",
+ "condition"
+ ],
+ [
+ "Le",
+ "ngth"
+ ],
+ [
+ "L",
+ "ength"
+ ],
+ [
+ "▁f",
+ "it"
+ ],
+ [
+ "▁fi",
+ "t"
+ ],
+ [
+ "▁",
+ "fit"
+ ],
+ [
+ "ound",
+ "s"
+ ],
+ [
+ "oun",
+ "ds"
+ ],
+ [
+ "▁ch",
+ "anged"
+ ],
+ [
+ "▁chang",
+ "ed"
+ ],
+ [
+ "▁change",
+ "d"
+ ],
+ [
+ "▁",
+ "changed"
+ ],
+ [
+ "▁g",
+ "uy"
+ ],
+ [
+ "▁gu",
+ "y"
+ ],
+ [
+ "fil",
+ "ter"
+ ],
+ [
+ "at",
+ "ever"
+ ],
+ [
+ "ate",
+ "ver"
+ ],
+ [
+ "é",
+ "d"
+ ],
+ [
+ "re",
+ "move"
+ ],
+ [
+ "rem",
+ "ove"
+ ],
+ [
+ "▁h",
+ "op"
+ ],
+ [
+ "▁ho",
+ "p"
+ ],
+ [
+ "▁",
+ "hop"
+ ],
+ [
+ "▁O",
+ "ut"
+ ],
+ [
+ "▁",
+ "Out"
+ ],
+ [
+ "▁R",
+ "ich"
+ ],
+ [
+ "▁Ric",
+ "h"
+ ],
+ [
+ "▁",
+ "Rich"
+ ],
+ [
+ "ch",
+ "ild"
+ ],
+ [
+ "chi",
+ "ld"
+ ],
+ [
+ "▁in",
+ "cluded"
+ ],
+ [
+ "▁incl",
+ "uded"
+ ],
+ [
+ "▁includ",
+ "ed"
+ ],
+ [
+ "▁include",
+ "d"
+ ],
+ [
+ "▁inclu",
+ "ded"
+ ],
+ [
+ "$",
+ "\\"
+ ],
+ [
+ "▁T",
+ "om"
+ ],
+ [
+ "▁To",
+ "m"
+ ],
+ [
+ "▁",
+ "Tom"
+ ],
+ [
+ "el",
+ "ine"
+ ],
+ [
+ "eli",
+ "ne"
+ ],
+ [
+ "elin",
+ "e"
+ ],
+ [
+ "e",
+ "line"
+ ],
+ [
+ "▁s",
+ "ometimes"
+ ],
+ [
+ "▁some",
+ "times"
+ ],
+ [
+ "▁somet",
+ "imes"
+ ],
+ [
+ "▁sometime",
+ "s"
+ ],
+ [
+ "▁dr",
+ "ink"
+ ],
+ [
+ "▁qu",
+ "ant"
+ ],
+ [
+ "▁",
+ "quant"
+ ],
+ [
+ "▁p",
+ "lease"
+ ],
+ [
+ "▁ple",
+ "ase"
+ ],
+ [
+ "▁I",
+ "nt"
+ ],
+ [
+ "▁In",
+ "t"
+ ],
+ [
+ "▁",
+ "Int"
+ ],
+ [
+ "ri",
+ "ef"
+ ],
+ [
+ "rie",
+ "f"
+ ],
+ [
+ "r",
+ "ief"
+ ],
+ [
+ "▁ex",
+ "actly"
+ ],
+ [
+ "▁exact",
+ "ly"
+ ],
+ [
+ "ci",
+ "ng"
+ ],
+ [
+ "cin",
+ "g"
+ ],
+ [
+ "c",
+ "ing"
+ ],
+ [
+ "▁all",
+ "owed"
+ ],
+ [
+ "▁allow",
+ "ed"
+ ],
+ [
+ "▁",
+ "allowed"
+ ],
+ [
+ "bu",
+ "ild"
+ ],
+ [
+ "b",
+ "uild"
+ ],
+ [
+ "▁beaut",
+ "iful"
+ ],
+ [
+ "▁W",
+ "ell"
+ ],
+ [
+ "▁We",
+ "ll"
+ ],
+ [
+ "▁Wel",
+ "l"
+ ],
+ [
+ "▁",
+ "Well"
+ ],
+ [
+ "▁look",
+ "s"
+ ],
+ [
+ "▁lo",
+ "oks"
+ ],
+ [
+ "▁",
+ "ü"
+ ],
+ [
+ "▁ch",
+ "ance"
+ ],
+ [
+ "▁w",
+ "rote"
+ ],
+ [
+ "▁wr",
+ "ote"
+ ],
+ [
+ "▁n",
+ "or"
+ ],
+ [
+ "▁no",
+ "r"
+ ],
+ [
+ "▁",
+ "nor"
+ ],
+ [
+ "▁f",
+ "ailed"
+ ],
+ [
+ "▁fa",
+ "iled"
+ ],
+ [
+ "▁fail",
+ "ed"
+ ],
+ [
+ "▁",
+ "failed"
+ ],
+ [
+ "Me",
+ "t"
+ ],
+ [
+ "M",
+ "et"
+ ],
+ [
+ "▁p",
+ "rior"
+ ],
+ [
+ "▁pr",
+ "ior"
+ ],
+ [
+ "▁pri",
+ "or"
+ ],
+ [
+ "▁h",
+ "undred"
+ ],
+ [
+ "ско",
+ "й"
+ ],
+ [
+ "с",
+ "кой"
+ ],
+ [
+ "or",
+ "ia"
+ ],
+ [
+ "ori",
+ "a"
+ ],
+ [
+ "o",
+ "ria"
+ ],
+ [
+ "▁c",
+ "y"
+ ],
+ [
+ "▁",
+ "cy"
+ ],
+ [
+ "▁w",
+ "eb"
+ ],
+ [
+ "▁we",
+ "b"
+ ],
+ [
+ "▁",
+ "web"
+ ],
+ [
+ "▁m",
+ "ess"
+ ],
+ [
+ "▁me",
+ "ss"
+ ],
+ [
+ "▁mes",
+ "s"
+ ],
+ [
+ "le",
+ "q"
+ ],
+ [
+ "l",
+ "eq"
+ ],
+ [
+ "d",
+ "y"
+ ],
+ [
+ "te",
+ "x"
+ ],
+ [
+ "t",
+ "ex"
+ ],
+ [
+ "▁a",
+ "nim"
+ ],
+ [
+ "▁an",
+ "im"
+ ],
+ [
+ "▁",
+ "anim"
+ ],
+ [
+ "at",
+ "ur"
+ ],
+ [
+ "atu",
+ "r"
+ ],
+ [
+ "▁str",
+ "ucture"
+ ],
+ [
+ "▁struct",
+ "ure"
+ ],
+ [
+ "▁",
+ "structure"
+ ],
+ [
+ "opt",
+ "ion"
+ ],
+ [
+ "o",
+ "ption"
+ ],
+ [
+ "▁act",
+ "ual"
+ ],
+ [
+ "▁",
+ "actual"
+ ],
+ [
+ "▁Fr",
+ "anc"
+ ],
+ [
+ "▁Fra",
+ "nc"
+ ],
+ [
+ "▁Fran",
+ "c"
+ ],
+ [
+ "en",
+ "ced"
+ ],
+ [
+ "ence",
+ "d"
+ ],
+ [
+ "enc",
+ "ed"
+ ],
+ [
+ ".<",
+ "/"
+ ],
+ [
+ ".",
+ ""
+ ],
+ [
+ "▁f",
+ "low"
+ ],
+ [
+ "▁fl",
+ "ow"
+ ],
+ [
+ "▁flo",
+ "w"
+ ],
+ [
+ "▁",
+ "flow"
+ ],
+ [
+ "▁A",
+ "fr"
+ ],
+ [
+ "▁Af",
+ "r"
+ ],
+ [
+ "de",
+ "t"
+ ],
+ [
+ "d",
+ "et"
+ ],
+ [
+ "▁K",
+ "e"
+ ],
+ [
+ "▁",
+ "Ke"
+ ],
+ [
+ "et",
+ "y"
+ ],
+ [
+ "e",
+ "ty"
+ ],
+ [
+ "ски",
+ "й"
+ ],
+ [
+ "с",
+ "кий"
+ ],
+ [
+ "▁st",
+ "uff"
+ ],
+ [
+ "it",
+ "ter"
+ ],
+ [
+ "itt",
+ "er"
+ ],
+ [
+ "itte",
+ "r"
+ ],
+ [
+ "▁ar",
+ "gs"
+ ],
+ [
+ "▁arg",
+ "s"
+ ],
+ [
+ "▁",
+ "args"
+ ],
+ [
+ "▁al",
+ "bum"
+ ],
+ [
+ "▁",
+ "album"
+ ],
+ [
+ "▁",
+ "]"
+ ],
+ [
+ "ug",
+ "in"
+ ],
+ [
+ "u",
+ "gin"
+ ],
+ [
+ "S",
+ "U"
+ ],
+ [
+ "Pe",
+ "r"
+ ],
+ [
+ "P",
+ "er"
+ ],
+ [
+ "▁cir",
+ "c"
+ ],
+ [
+ "▁ci",
+ "rc"
+ ],
+ [
+ "▁",
+ "circ"
+ ],
+ [
+ "▁cor",
+ "rect"
+ ],
+ [
+ "▁corre",
+ "ct"
+ ],
+ [
+ "▁",
+ "correct"
+ ],
+ [
+ "▁l",
+ "ines"
+ ],
+ [
+ "▁li",
+ "nes"
+ ],
+ [
+ "▁line",
+ "s"
+ ],
+ [
+ "▁lin",
+ "es"
+ ],
+ [
+ "▁",
+ "lines"
+ ],
+ [
+ "▁complet",
+ "ely"
+ ],
+ [
+ "▁complete",
+ "ly"
+ ],
+ [
+ "kn",
+ "own"
+ ],
+ [
+ "know",
+ "n"
+ ],
+ [
+ "k",
+ "nown"
+ ],
+ [
+ "▁t",
+ "ree"
+ ],
+ [
+ "▁tr",
+ "ee"
+ ],
+ [
+ "▁tre",
+ "e"
+ ],
+ [
+ "▁",
+ "tree"
+ ],
+ [
+ "ro",
+ "ot"
+ ],
+ [
+ "r",
+ "oot"
+ ],
+ [
+ "▁J",
+ "apan"
+ ],
+ [
+ "▁Ja",
+ "pan"
+ ],
+ [
+ "▁Jap",
+ "an"
+ ],
+ [
+ "ol",
+ "es"
+ ],
+ [
+ "ole",
+ "s"
+ ],
+ [
+ "o",
+ "les"
+ ],
+ [
+ "en",
+ "do"
+ ],
+ [
+ "end",
+ "o"
+ ],
+ [
+ "▁l",
+ "ocation"
+ ],
+ [
+ "▁loc",
+ "ation"
+ ],
+ [
+ "▁",
+ "location"
+ ],
+ [
+ "▁",
+ "Х"
+ ],
+ [
+ "▁m",
+ "id"
+ ],
+ [
+ "▁mi",
+ "d"
+ ],
+ [
+ "▁",
+ "mid"
+ ],
+ [
+ "al",
+ "ing"
+ ],
+ [
+ "ali",
+ "ng"
+ ],
+ [
+ "alin",
+ "g"
+ ],
+ [
+ "a",
+ "ling"
+ ],
+ [
+ "G",
+ "L"
+ ],
+ [
+ "ia",
+ "no"
+ ],
+ [
+ "ian",
+ "o"
+ ],
+ [
+ "i",
+ "ano"
+ ],
+ [
+ "▁{",
+ "}"
+ ],
+ [
+ "▁",
+ "{}"
+ ],
+ [
+ "la",
+ "ng"
+ ],
+ [
+ "lan",
+ "g"
+ ],
+ [
+ "l",
+ "ang"
+ ],
+ [
+ "▁equ",
+ "ip"
+ ],
+ [
+ "ERR",
+ "OR"
+ ],
+ [
+ "▁mem",
+ "ory"
+ ],
+ [
+ "▁memor",
+ "y"
+ ],
+ [
+ "▁memo",
+ "ry"
+ ],
+ [
+ "▁",
+ "memory"
+ ],
+ [
+ "▁(",
+ "\""
+ ],
+ [
+ "▁",
+ "(\""
+ ],
+ [
+ "▁n",
+ "ature"
+ ],
+ [
+ "▁nat",
+ "ure"
+ ],
+ [
+ "▁natur",
+ "e"
+ ],
+ [
+ "go",
+ "ogle"
+ ],
+ [
+ "ab",
+ "s"
+ ],
+ [
+ "a",
+ "bs"
+ ],
+ [
+ "B",
+ "C"
+ ],
+ [
+ "▁g",
+ "ets"
+ ],
+ [
+ "▁get",
+ "s"
+ ],
+ [
+ "▁ge",
+ "ts"
+ ],
+ [
+ "▁",
+ "gets"
+ ],
+ [
+ "Com",
+ "mand"
+ ],
+ [
+ "Comm",
+ "and"
+ ],
+ [
+ "TE",
+ "R"
+ ],
+ [
+ "T",
+ "ER"
+ ],
+ [
+ "al",
+ "ed"
+ ],
+ [
+ "ale",
+ "d"
+ ],
+ [
+ "a",
+ "led"
+ ],
+ [
+ "c",
+ "p"
+ ],
+ [
+ "▁p",
+ "urch"
+ ],
+ [
+ "▁pur",
+ "ch"
+ ],
+ [
+ "▁D",
+ "en"
+ ],
+ [
+ "▁De",
+ "n"
+ ],
+ [
+ "▁",
+ "Den"
+ ],
+ [
+ "▁her",
+ "self"
+ ],
+ [
+ "▁hers",
+ "elf"
+ ],
+ [
+ "▁I",
+ "r"
+ ],
+ [
+ "▁",
+ "Ir"
+ ],
+ [
+ "▁s",
+ "ie"
+ ],
+ [
+ "▁si",
+ "e"
+ ],
+ [
+ "ga",
+ "r"
+ ],
+ [
+ "g",
+ "ar"
+ ],
+ [
+ "A",
+ "p"
+ ],
+ [
+ "▁n",
+ "el"
+ ],
+ [
+ "▁ne",
+ "l"
+ ],
+ [
+ "▁",
+ "nel"
+ ],
+ [
+ "ot",
+ "a"
+ ],
+ [
+ "o",
+ "ta"
+ ],
+ [
+ ")",
+ "]"
+ ],
+ [
+ "co",
+ "r"
+ ],
+ [
+ "c",
+ "or"
+ ],
+ [
+ "ac",
+ "ht"
+ ],
+ [
+ "ach",
+ "t"
+ ],
+ [
+ "a",
+ "cht"
+ ],
+ [
+ "(",
+ "*"
+ ],
+ [
+ "irt",
+ "ual"
+ ],
+ [
+ "▁pol",
+ "ice"
+ ],
+ [
+ "▁polic",
+ "e"
+ ],
+ [
+ "▁s",
+ "kin"
+ ],
+ [
+ "▁sk",
+ "in"
+ ],
+ [
+ "▁ski",
+ "n"
+ ],
+ [
+ "▁",
+ "skin"
+ ],
+ [
+ "sh",
+ "ip"
+ ],
+ [
+ "s",
+ "hip"
+ ],
+ [
+ "ef",
+ "ined"
+ ],
+ [
+ "augh",
+ "ter"
+ ],
+ [
+ "aught",
+ "er"
+ ],
+ [
+ "in",
+ "ding"
+ ],
+ [
+ "ind",
+ "ing"
+ ],
+ [
+ "indi",
+ "ng"
+ ],
+ [
+ "▁S",
+ "l"
+ ],
+ [
+ "▁",
+ "Sl"
+ ],
+ [
+ "▁in",
+ "flu"
+ ],
+ [
+ "▁infl",
+ "u"
+ ],
+ [
+ "▁inf",
+ "lu"
+ ],
+ [
+ "▁m",
+ "ount"
+ ],
+ [
+ "▁mo",
+ "unt"
+ ],
+ [
+ "▁mou",
+ "nt"
+ ],
+ [
+ "▁",
+ "mount"
+ ],
+ [
+ "▁a",
+ "z"
+ ],
+ [
+ "▁",
+ "az"
+ ],
+ [
+ "▁w",
+ "ood"
+ ],
+ [
+ "▁wo",
+ "od"
+ ],
+ [
+ "▁",
+ "wood"
+ ],
+ [
+ "ot",
+ "es"
+ ],
+ [
+ "ote",
+ "s"
+ ],
+ [
+ "o",
+ "tes"
+ ],
+ [
+ "eg",
+ "a"
+ ],
+ [
+ "e",
+ "ga"
+ ],
+ [
+ "▁acc",
+ "ording"
+ ],
+ [
+ "▁accord",
+ "ing"
+ ],
+ [
+ "▁name",
+ "space"
+ ],
+ [
+ "▁names",
+ "pace"
+ ],
+ [
+ "▁",
+ "namespace"
+ ],
+ [
+ "Del",
+ "ta"
+ ],
+ [
+ "D",
+ "elta"
+ ],
+ [
+ "st",
+ "ant"
+ ],
+ [
+ "sta",
+ "nt"
+ ],
+ [
+ "stan",
+ "t"
+ ],
+ [
+ "▁pub",
+ "lished"
+ ],
+ [
+ "▁publish",
+ "ed"
+ ],
+ [
+ "▁",
+ "published"
+ ],
+ [
+ "ak",
+ "er"
+ ],
+ [
+ "ake",
+ "r"
+ ],
+ [
+ "a",
+ "ker"
+ ],
+ [
+ "▁Bl",
+ "ack"
+ ],
+ [
+ "▁",
+ "Black"
+ ],
+ [
+ "l",
+ "n"
+ ],
+ [
+ "▁indust",
+ "ry"
+ ],
+ [
+ "SO",
+ "N"
+ ],
+ [
+ "S",
+ "ON"
+ ],
+ [
+ "Re",
+ "p"
+ ],
+ [
+ "R",
+ "ep"
+ ],
+ [
+ "▁ch",
+ "oice"
+ ],
+ [
+ "▁cho",
+ "ice"
+ ],
+ [
+ "▁",
+ "choice"
+ ],
+ [
+ "▁in",
+ "n"
+ ],
+ [
+ "▁i",
+ "nn"
+ ],
+ [
+ "▁",
+ "inn"
+ ],
+ [
+ "k",
+ "l"
+ ],
+ [
+ "▁p",
+ "al"
+ ],
+ [
+ "▁pa",
+ "l"
+ ],
+ [
+ "▁",
+ "pal"
+ ],
+ [
+ "▁a",
+ "ud"
+ ],
+ [
+ "▁au",
+ "d"
+ ],
+ [
+ "▁",
+ "aud"
+ ],
+ [
+ "▁stand",
+ "ard"
+ ],
+ [
+ "▁",
+ "standard"
+ ],
+ [
+ "▁know",
+ "ledge"
+ ],
+ [
+ "**",
+ ","
+ ],
+ [
+ "*",
+ "*,"
+ ],
+ [
+ "▁F",
+ "rank"
+ ],
+ [
+ "▁Fr",
+ "ank"
+ ],
+ [
+ "▁Fran",
+ "k"
+ ],
+ [
+ "s",
+ "q"
+ ],
+ [
+ "Out",
+ "put"
+ ],
+ [
+ "▁f",
+ "ör"
+ ],
+ [
+ "▁fö",
+ "r"
+ ],
+ [
+ "▁",
+ "för"
+ ],
+ [
+ "Val",
+ "id"
+ ],
+ [
+ "ug",
+ "h"
+ ],
+ [
+ "u",
+ "gh"
+ ],
+ [
+ "▁bo",
+ "oks"
+ ],
+ [
+ "▁book",
+ "s"
+ ],
+ [
+ "▁",
+ "books"
+ ],
+ [
+ "▁J",
+ "ames"
+ ],
+ [
+ "▁Jam",
+ "es"
+ ],
+ [
+ "▁Ja",
+ "mes"
+ ],
+ [
+ "k",
+ "o"
+ ],
+ [
+ "▁compan",
+ "ies"
+ ],
+ [
+ "an",
+ "ning"
+ ],
+ [
+ "ann",
+ "ing"
+ ],
+ [
+ "anni",
+ "ng"
+ ],
+ [
+ "▁v",
+ "ict"
+ ],
+ [
+ "▁vi",
+ "ct"
+ ],
+ [
+ "▁vic",
+ "t"
+ ],
+ [
+ "▁re",
+ "pl"
+ ],
+ [
+ "▁rep",
+ "l"
+ ],
+ [
+ "▁s",
+ "che"
+ ],
+ [
+ "▁sc",
+ "he"
+ ],
+ [
+ "▁sch",
+ "e"
+ ],
+ [
+ "▁",
+ "sche"
+ ],
+ [
+ "▁h",
+ "appen"
+ ],
+ [
+ "▁happ",
+ "en"
+ ],
+ [
+ "▁ha",
+ "ppen"
+ ],
+ [
+ "ft",
+ "y"
+ ],
+ [
+ "f",
+ "ty"
+ ],
+ [
+ "ac",
+ "ity"
+ ],
+ [
+ "aci",
+ "ty"
+ ],
+ [
+ "a",
+ "city"
+ ],
+ [
+ "ir",
+ "a"
+ ],
+ [
+ "i",
+ "ra"
+ ],
+ [
+ "▁im",
+ "plement"
+ ],
+ [
+ "▁imp",
+ "lement"
+ ],
+ [
+ "▁impl",
+ "ement"
+ ],
+ [
+ "▁",
+ "implement"
+ ],
+ [
+ "ско",
+ "го"
+ ],
+ [
+ "ск",
+ "ого"
+ ],
+ [
+ "с",
+ "кого"
+ ],
+ [
+ "num",
+ "ber"
+ ],
+ [
+ "nu",
+ "mber"
+ ],
+ [
+ "n",
+ "umber"
+ ],
+ [
+ "S",
+ "H"
+ ],
+ [
+ "ir",
+ "o"
+ ],
+ [
+ "i",
+ "ro"
+ ],
+ [
+ "▁f",
+ "ear"
+ ],
+ [
+ "▁fe",
+ "ar"
+ ],
+ [
+ "▁t",
+ "ouch"
+ ],
+ [
+ "▁to",
+ "uch"
+ ],
+ [
+ "▁tou",
+ "ch"
+ ],
+ [
+ "▁",
+ "touch"
+ ],
+ [
+ "▁c",
+ "ast"
+ ],
+ [
+ "▁cas",
+ "t"
+ ],
+ [
+ "▁ca",
+ "st"
+ ],
+ [
+ "▁",
+ "cast"
+ ],
+ [
+ "AS",
+ "S"
+ ],
+ [
+ "A",
+ "SS"
+ ],
+ [
+ "▁cons",
+ "ist"
+ ],
+ [
+ "T",
+ "ask"
+ ],
+ [
+ "▁s",
+ "ig"
+ ],
+ [
+ "▁si",
+ "g"
+ ],
+ [
+ "▁",
+ "sig"
+ ],
+ [
+ "б",
+ "а"
+ ],
+ [
+ "ig",
+ "ation"
+ ],
+ [
+ "▁M",
+ "ost"
+ ],
+ [
+ "▁Mo",
+ "st"
+ ],
+ [
+ "▁Mos",
+ "t"
+ ],
+ [
+ "▁",
+ "Most"
+ ],
+ [
+ "▁D",
+ "er"
+ ],
+ [
+ "▁De",
+ "r"
+ ],
+ [
+ "▁",
+ "Der"
+ ],
+ [
+ "}(",
+ "\\"
+ ],
+ [
+ "}",
+ "(\\"
+ ],
+ [
+ ":",
+ "\""
+ ],
+ [
+ "▁F",
+ "ig"
+ ],
+ [
+ "▁Fi",
+ "g"
+ ],
+ [
+ "▁",
+ "Fig"
+ ],
+ [
+ "al",
+ "i"
+ ],
+ [
+ "a",
+ "li"
+ ],
+ [
+ "in",
+ "er"
+ ],
+ [
+ "ine",
+ "r"
+ ],
+ [
+ "i",
+ "ner"
+ ],
+ [
+ "')",
+ ","
+ ],
+ [
+ "'",
+ "),"
+ ],
+ [
+ "▁C",
+ "oun"
+ ],
+ [
+ "▁Co",
+ "un"
+ ],
+ [
+ "▁Cou",
+ "n"
+ ],
+ [
+ "(",
+ "_"
+ ],
+ [
+ "▁d",
+ "istributed"
+ ],
+ [
+ "▁distribut",
+ "ed"
+ ],
+ [
+ "▁distribute",
+ "d"
+ ],
+ [
+ "NA",
+ "ME"
+ ],
+ [
+ "N",
+ "AME"
+ ],
+ [
+ "▁m",
+ "ur"
+ ],
+ [
+ "▁mu",
+ "r"
+ ],
+ [
+ "▁care",
+ "er"
+ ],
+ [
+ "~",
+ "~"
+ ],
+ [
+ "pe",
+ "rs"
+ ],
+ [
+ "per",
+ "s"
+ ],
+ [
+ "p",
+ "ers"
+ ],
+ [
+ "ar",
+ "ies"
+ ],
+ [
+ "ari",
+ "es"
+ ],
+ [
+ "a",
+ "ries"
+ ],
+ [
+ "en",
+ "ses"
+ ],
+ [
+ "ens",
+ "es"
+ ],
+ [
+ "ense",
+ "s"
+ ],
+ [
+ "▁Al",
+ "so"
+ ],
+ [
+ "▁Als",
+ "o"
+ ],
+ [
+ "Vers",
+ "ion"
+ ],
+ [
+ "V",
+ "ersion"
+ ],
+ [
+ "▁un",
+ "ique"
+ ],
+ [
+ "▁uniqu",
+ "e"
+ ],
+ [
+ "▁",
+ "unique"
+ ],
+ [
+ "▁Fr",
+ "ance"
+ ],
+ [
+ "▁Franc",
+ "e"
+ ],
+ [
+ "▁Fran",
+ "ce"
+ ],
+ [
+ "B",
+ "A"
+ ],
+ [
+ "k",
+ "y"
+ ],
+ [
+ "▁F",
+ "ebru"
+ ],
+ [
+ "▁Fe",
+ "bru"
+ ],
+ [
+ "▁Feb",
+ "ru"
+ ],
+ [
+ "▁d",
+ "ied"
+ ],
+ [
+ "▁di",
+ "ed"
+ ],
+ [
+ "▁die",
+ "d"
+ ],
+ [
+ "om",
+ "ega"
+ ],
+ [
+ "ome",
+ "ga"
+ ],
+ [
+ "▁F",
+ "orm"
+ ],
+ [
+ "▁For",
+ "m"
+ ],
+ [
+ "▁Fo",
+ "rm"
+ ],
+ [
+ "▁",
+ "Form"
+ ],
+ [
+ "▁w",
+ "idth"
+ ],
+ [
+ "▁wid",
+ "th"
+ ],
+ [
+ "▁",
+ "width"
+ ],
+ [
+ "to",
+ "col"
+ ],
+ [
+ "t",
+ "ocol"
+ ],
+ [
+ "▁l",
+ "ie"
+ ],
+ [
+ "▁li",
+ "e"
+ ],
+ [
+ "▁",
+ "lie"
+ ],
+ [
+ "Sh",
+ "e"
+ ],
+ [
+ "S",
+ "he"
+ ],
+ [
+ "é",
+ "m"
+ ],
+ [
+ "▁stra",
+ "ight"
+ ],
+ [
+ "▁n",
+ "ach"
+ ],
+ [
+ "▁na",
+ "ch"
+ ],
+ [
+ "▁st",
+ "ood"
+ ],
+ [
+ "▁sto",
+ "od"
+ ],
+ [
+ "▁",
+ "stood"
+ ],
+ [
+ "ol",
+ "ds"
+ ],
+ [
+ "old",
+ "s"
+ ],
+ [
+ "▁g",
+ "oes"
+ ],
+ [
+ "▁go",
+ "es"
+ ],
+ [
+ "ce",
+ "ll"
+ ],
+ [
+ "cel",
+ "l"
+ ],
+ [
+ "c",
+ "ell"
+ ],
+ [
+ "▁t",
+ "ill"
+ ],
+ [
+ "▁til",
+ "l"
+ ],
+ [
+ "▁ti",
+ "ll"
+ ],
+ [
+ "L",
+ "I"
+ ],
+ [
+ "dr",
+ "aw"
+ ],
+ [
+ "d",
+ "raw"
+ ],
+ [
+ "▁s",
+ "atisf"
+ ],
+ [
+ "▁sat",
+ "isf"
+ ],
+ [
+ "▁re",
+ "ading"
+ ],
+ [
+ "▁read",
+ "ing"
+ ],
+ [
+ "AT",
+ "ION"
+ ],
+ [
+ "A",
+ "TION"
+ ],
+ [
+ "▁A",
+ "re"
+ ],
+ [
+ "▁Ar",
+ "e"
+ ],
+ [
+ "▁",
+ "Are"
+ ],
+ [
+ "▁A",
+ "c"
+ ],
+ [
+ "▁",
+ "Ac"
+ ],
+ [
+ ")",
+ "*"
+ ],
+ [
+ "▁add",
+ "itional"
+ ],
+ [
+ "▁addition",
+ "al"
+ ],
+ [
+ "wo",
+ "od"
+ ],
+ [
+ "w",
+ "ood"
+ ],
+ [
+ "ci",
+ "l"
+ ],
+ [
+ "c",
+ "il"
+ ],
+ [
+ "п",
+ "у"
+ ],
+ [
+ "UL",
+ "T"
+ ],
+ [
+ "U",
+ "LT"
+ ],
+ [
+ "▁b",
+ "ill"
+ ],
+ [
+ "▁bi",
+ "ll"
+ ],
+ [
+ "▁bil",
+ "l"
+ ],
+ [
+ "ma",
+ "s"
+ ],
+ [
+ "m",
+ "as"
+ ],
+ [
+ "an",
+ "ia"
+ ],
+ [
+ "ani",
+ "a"
+ ],
+ [
+ "a",
+ "nia"
+ ],
+ [
+ "с",
+ "у"
+ ],
+ [
+ "an",
+ "z"
+ ],
+ [
+ "he",
+ "ight"
+ ],
+ [
+ "h",
+ "eight"
+ ],
+ [
+ "j",
+ "o"
+ ],
+ [
+ "▁d",
+ "os"
+ ],
+ [
+ "▁do",
+ "s"
+ ],
+ [
+ "\\",
+ "\""
+ ],
+ [
+ "▁/",
+ ">"
+ ],
+ [
+ "▁",
+ "/>"
+ ],
+ [
+ "▁p",
+ "roduction"
+ ],
+ [
+ "▁produ",
+ "ction"
+ ],
+ [
+ "▁product",
+ "ion"
+ ],
+ [
+ "▁prod",
+ "uction"
+ ],
+ [
+ "▁",
+ "production"
+ ],
+ [
+ "ig",
+ "er"
+ ],
+ [
+ "ige",
+ "r"
+ ],
+ [
+ "i",
+ "ger"
+ ],
+ [
+ "▁с",
+ "т"
+ ],
+ [
+ "▁",
+ "ст"
+ ],
+ [
+ "sh",
+ "ow"
+ ],
+ [
+ "s",
+ "how"
+ ],
+ [
+ "▁pop",
+ "ulation"
+ ],
+ [
+ "▁popul",
+ "ation"
+ ],
+ [
+ "▁p",
+ "ark"
+ ],
+ [
+ "▁par",
+ "k"
+ ],
+ [
+ "▁",
+ "park"
+ ],
+ [
+ "▁Z",
+ "e"
+ ],
+ [
+ "▁necess",
+ "ary"
+ ],
+ [
+ "▁",
+ "necessary"
+ ],
+ [
+ "▁t",
+ "rust"
+ ],
+ [
+ "▁tr",
+ "ust"
+ ],
+ [
+ "▁sh",
+ "own"
+ ],
+ [
+ "▁show",
+ "n"
+ ],
+ [
+ "mod",
+ "ule"
+ ],
+ [
+ "mo",
+ "dule"
+ ],
+ [
+ "G",
+ "E"
+ ],
+ [
+ "▁l",
+ "ay"
+ ],
+ [
+ "▁la",
+ "y"
+ ],
+ [
+ "▁",
+ "lay"
+ ],
+ [
+ "▁ann",
+ "oun"
+ ],
+ [
+ "▁class",
+ "Name"
+ ],
+ [
+ "▁",
+ "className"
+ ],
+ [
+ "▁cal",
+ "cul"
+ ],
+ [
+ "▁calc",
+ "ul"
+ ],
+ [
+ "Fun",
+ "ction"
+ ],
+ [
+ "F",
+ "unction"
+ ],
+ [
+ "▁S",
+ "al"
+ ],
+ [
+ "▁Sa",
+ "l"
+ ],
+ [
+ "▁",
+ "Sal"
+ ],
+ [
+ "O",
+ "K"
+ ],
+ [
+ "T",
+ "P"
+ ],
+ [
+ "▁en",
+ "try"
+ ],
+ [
+ "▁ent",
+ "ry"
+ ],
+ [
+ "▁entr",
+ "y"
+ ],
+ [
+ "▁",
+ "entry"
+ ],
+ [
+ "▁St",
+ "ud"
+ ],
+ [
+ "▁",
+ "Stud"
+ ],
+ [
+ "▁it",
+ "ems"
+ ],
+ [
+ "▁item",
+ "s"
+ ],
+ [
+ "▁",
+ "items"
+ ],
+ [
+ "▁se",
+ "curity"
+ ],
+ [
+ "▁sec",
+ "urity"
+ ],
+ [
+ "▁secur",
+ "ity"
+ ],
+ [
+ "▁",
+ "security"
+ ],
+ [
+ "En",
+ "try"
+ ],
+ [
+ "Ent",
+ "ry"
+ ],
+ [
+ "f",
+ "loat"
+ ],
+ [
+ "l",
+ "s"
+ ],
+ [
+ "ib",
+ "ly"
+ ],
+ [
+ "▁cont",
+ "ribut"
+ ],
+ [
+ "▁C",
+ "heck"
+ ],
+ [
+ "▁Che",
+ "ck"
+ ],
+ [
+ "▁",
+ "Check"
+ ],
+ [
+ "M",
+ "D"
+ ],
+ [
+ "▁impro",
+ "ve"
+ ],
+ [
+ "Par",
+ "t"
+ ],
+ [
+ "P",
+ "art"
+ ],
+ [
+ "▁system",
+ "s"
+ ],
+ [
+ "▁syst",
+ "ems"
+ ],
+ [
+ "B",
+ "l"
+ ],
+ [
+ "▁pol",
+ "icy"
+ ],
+ [
+ "▁polic",
+ "y"
+ ],
+ [
+ "▁",
+ "policy"
+ ],
+ [
+ "▁s",
+ "creen"
+ ],
+ [
+ "▁sc",
+ "reen"
+ ],
+ [
+ "▁scr",
+ "een"
+ ],
+ [
+ "▁",
+ "screen"
+ ],
+ [
+ "▁A",
+ "ny"
+ ],
+ [
+ "▁An",
+ "y"
+ ],
+ [
+ "▁",
+ "Any"
+ ],
+ [
+ "▁op",
+ "ened"
+ ],
+ [
+ "▁open",
+ "ed"
+ ],
+ [
+ "al",
+ "loc"
+ ],
+ [
+ "all",
+ "oc"
+ ],
+ [
+ "allo",
+ "c"
+ ],
+ [
+ "▁De",
+ "cember"
+ ],
+ [
+ "▁Dec",
+ "ember"
+ ],
+ [
+ "▁",
+ "É"
+ ],
+ [
+ "▁e",
+ "mail"
+ ],
+ [
+ "▁em",
+ "ail"
+ ],
+ [
+ "▁",
+ "email"
+ ],
+ [
+ "ad",
+ "er"
+ ],
+ [
+ "ade",
+ "r"
+ ],
+ [
+ "a",
+ "der"
+ ],
+ [
+ "=",
+ ">"
+ ],
+ [
+ "▁H",
+ "en"
+ ],
+ [
+ "▁He",
+ "n"
+ ],
+ [
+ "▁",
+ "Hen"
+ ],
+ [
+ "▁in",
+ "fo"
+ ],
+ [
+ "▁inf",
+ "o"
+ ],
+ [
+ "▁",
+ "info"
+ ],
+ [
+ "▁f",
+ "loat"
+ ],
+ [
+ "▁flo",
+ "at"
+ ],
+ [
+ "▁",
+ "float"
+ ],
+ [
+ "▁sw",
+ "itch"
+ ],
+ [
+ "▁",
+ "switch"
+ ],
+ [
+ "ра",
+ "н"
+ ],
+ [
+ "р",
+ "ан"
+ ],
+ [
+ "ur",
+ "ance"
+ ],
+ [
+ "▁as",
+ "sum"
+ ],
+ [
+ "▁ass",
+ "um"
+ ],
+ [
+ "us",
+ "tr"
+ ],
+ [
+ "ust",
+ "r"
+ ],
+ [
+ "u",
+ "str"
+ ],
+ [
+ "▁g",
+ "roups"
+ ],
+ [
+ "▁group",
+ "s"
+ ],
+ [
+ "▁gro",
+ "ups"
+ ],
+ [
+ "▁",
+ "groups"
+ ],
+ [
+ "▁R",
+ "ead"
+ ],
+ [
+ "▁Re",
+ "ad"
+ ],
+ [
+ "▁",
+ "Read"
+ ],
+ [
+ "▁w",
+ "at"
+ ],
+ [
+ "▁wa",
+ "t"
+ ],
+ [
+ "S",
+ "p"
+ ],
+ [
+ "ве",
+ "р"
+ ],
+ [
+ "в",
+ "ер"
+ ],
+ [
+ "RA",
+ "N"
+ ],
+ [
+ "R",
+ "AN"
+ ],
+ [
+ "hi",
+ "b"
+ ],
+ [
+ "h",
+ "ib"
+ ],
+ [
+ "AL",
+ "L"
+ ],
+ [
+ "A",
+ "LL"
+ ],
+ [
+ "▁h",
+ "us"
+ ],
+ [
+ "▁",
+ "hus"
+ ],
+ [
+ "Sp",
+ "ec"
+ ],
+ [
+ "Spe",
+ "c"
+ ],
+ [
+ "S",
+ "pec"
+ ],
+ [
+ "\")",
+ ")"
+ ],
+ [
+ "\"",
+ "))"
+ ],
+ [
+ "▁F",
+ "rench"
+ ],
+ [
+ "▁C",
+ "lass"
+ ],
+ [
+ "▁Cl",
+ "ass"
+ ],
+ [
+ "▁",
+ "Class"
+ ],
+ [
+ "▁pres",
+ "ident"
+ ],
+ [
+ "▁presid",
+ "ent"
+ ],
+ [
+ "▁def",
+ "init"
+ ],
+ [
+ "▁defin",
+ "it"
+ ],
+ [
+ "▁N",
+ "or"
+ ],
+ [
+ "▁No",
+ "r"
+ ],
+ [
+ "▁T",
+ "hom"
+ ],
+ [
+ "▁Th",
+ "om"
+ ],
+ [
+ "ai",
+ "gn"
+ ],
+ [
+ "a",
+ "ign"
+ ],
+ [
+ "W",
+ "idth"
+ ],
+ [
+ "D",
+ "o"
+ ],
+ [
+ "▁{",
+ "@"
+ ],
+ [
+ "ag",
+ "on"
+ ],
+ [
+ "ago",
+ "n"
+ ],
+ [
+ "a",
+ "gon"
+ ],
+ [
+ "▁L",
+ "u"
+ ],
+ [
+ "▁",
+ "Lu"
+ ],
+ [
+ "▁follow",
+ "ed"
+ ],
+ [
+ "M",
+ "M"
+ ],
+ [
+ "as",
+ "ons"
+ ],
+ [
+ "ason",
+ "s"
+ ],
+ [
+ "tm",
+ "p"
+ ],
+ [
+ "t",
+ "mp"
+ ],
+ [
+ "▁th",
+ "rows"
+ ],
+ [
+ "▁throw",
+ "s"
+ ],
+ [
+ "▁thr",
+ "ows"
+ ],
+ [
+ "▁thro",
+ "ws"
+ ],
+ [
+ "▁",
+ "throws"
+ ],
+ [
+ "IT",
+ "Y"
+ ],
+ [
+ "I",
+ "TY"
+ ],
+ [
+ "но",
+ "м"
+ ],
+ [
+ "▁f",
+ "air"
+ ],
+ [
+ "▁fa",
+ "ir"
+ ],
+ [
+ "▁p",
+ "en"
+ ],
+ [
+ "▁pe",
+ "n"
+ ],
+ [
+ "▁",
+ "pen"
+ ],
+ [
+ "é",
+ "g"
+ ],
+ [
+ "▁inter",
+ "face"
+ ],
+ [
+ "▁",
+ "interface"
+ ],
+ [
+ "▁s",
+ "af"
+ ],
+ [
+ "▁sa",
+ "f"
+ ],
+ [
+ "oo",
+ "n"
+ ],
+ [
+ "o",
+ "on"
+ ],
+ [
+ "B",
+ "ack"
+ ],
+ [
+ "▁s",
+ "peed"
+ ],
+ [
+ "▁sp",
+ "eed"
+ ],
+ [
+ "▁spe",
+ "ed"
+ ],
+ [
+ "▁",
+ "speed"
+ ],
+ [
+ "▁ext",
+ "ends"
+ ],
+ [
+ "▁extend",
+ "s"
+ ],
+ [
+ "em",
+ "pty"
+ ],
+ [
+ "empt",
+ "y"
+ ],
+ [
+ "emp",
+ "ty"
+ ],
+ [
+ "▁п",
+ "ере"
+ ],
+ [
+ "▁пер",
+ "е"
+ ],
+ [
+ "▁пе",
+ "ре"
+ ],
+ [
+ "▁pro",
+ "per"
+ ],
+ [
+ "▁pr",
+ "oper"
+ ],
+ [
+ "▁prop",
+ "er"
+ ],
+ [
+ "▁d",
+ "riv"
+ ],
+ [
+ "▁dr",
+ "iv"
+ ],
+ [
+ "▁dri",
+ "v"
+ ],
+ [
+ "ф",
+ "и"
+ ],
+ [
+ "▁c",
+ "enter"
+ ],
+ [
+ "▁cent",
+ "er"
+ ],
+ [
+ "▁",
+ "center"
+ ],
+ [
+ "he",
+ "ader"
+ ],
+ [
+ "head",
+ "er"
+ ],
+ [
+ "▁}",
+ ")"
+ ],
+ [
+ "▁",
+ "})"
+ ],
+ [
+ "w",
+ "a"
+ ],
+ [
+ "▁m",
+ "iddle"
+ ],
+ [
+ "▁",
+ "middle"
+ ],
+ [
+ "▁ch",
+ "oose"
+ ],
+ [
+ "▁cho",
+ "ose"
+ ],
+ [
+ "▁St",
+ "ad"
+ ],
+ [
+ "▁Sta",
+ "d"
+ ],
+ [
+ "S",
+ "O"
+ ],
+ [
+ "Fact",
+ "ory"
+ ],
+ [
+ "Factor",
+ "y"
+ ],
+ [
+ "F",
+ "actory"
+ ],
+ [
+ "De",
+ "v"
+ ],
+ [
+ "D",
+ "ev"
+ ],
+ [
+ "ic",
+ "les"
+ ],
+ [
+ "icle",
+ "s"
+ ],
+ [
+ "icl",
+ "es"
+ ],
+ [
+ "i",
+ "cles"
+ ],
+ [
+ "▁ap",
+ "plication"
+ ],
+ [
+ "▁applic",
+ "ation"
+ ],
+ [
+ "▁appl",
+ "ication"
+ ],
+ [
+ "▁",
+ "application"
+ ],
+ [
+ "▁mod",
+ "els"
+ ],
+ [
+ "▁model",
+ "s"
+ ],
+ [
+ "▁mode",
+ "ls"
+ ],
+ [
+ "▁",
+ "models"
+ ],
+ [
+ "pi",
+ "te"
+ ],
+ [
+ "pit",
+ "e"
+ ],
+ [
+ "p",
+ "ite"
+ ],
+ [
+ "ca",
+ "p"
+ ],
+ [
+ "c",
+ "ap"
+ ],
+ [
+ "x",
+ "i"
+ ],
+ [
+ "osp",
+ "ital"
+ ],
+ [
+ "▁d",
+ "ream"
+ ],
+ [
+ "▁dre",
+ "am"
+ ],
+ [
+ "EN",
+ "D"
+ ],
+ [
+ "E",
+ "ND"
+ ],
+ [
+ "▁con",
+ "tract"
+ ],
+ [
+ "▁cont",
+ "ract"
+ ],
+ [
+ "▁contr",
+ "act"
+ ],
+ [
+ "▁contra",
+ "ct"
+ ],
+ [
+ "▁",
+ "contract"
+ ],
+ [
+ "icro",
+ "soft"
+ ],
+ [
+ "▁th",
+ "ous"
+ ],
+ [
+ "▁thou",
+ "s"
+ ],
+ [
+ "iz",
+ "es"
+ ],
+ [
+ "ize",
+ "s"
+ ],
+ [
+ "i",
+ "zes"
+ ],
+ [
+ "▁д",
+ "а"
+ ],
+ [
+ "▁",
+ "да"
+ ],
+ [
+ "▁C",
+ "O"
+ ],
+ [
+ "▁",
+ "CO"
+ ],
+ [
+ "▁d",
+ "irection"
+ ],
+ [
+ "▁di",
+ "rection"
+ ],
+ [
+ "▁direct",
+ "ion"
+ ],
+ [
+ "▁dire",
+ "ction"
+ ],
+ [
+ "▁dir",
+ "ection"
+ ],
+ [
+ "▁",
+ "direction"
+ ],
+ [
+ "▁`",
+ "`"
+ ],
+ [
+ "▁",
+ "``"
+ ],
+ [
+ "▁d",
+ "rive"
+ ],
+ [
+ "▁dr",
+ "ive"
+ ],
+ [
+ "▁dri",
+ "ve"
+ ],
+ [
+ "▁driv",
+ "e"
+ ],
+ [
+ "▁",
+ "drive"
+ ],
+ [
+ "Ma",
+ "x"
+ ],
+ [
+ "M",
+ "ax"
+ ],
+ [
+ "ci",
+ "a"
+ ],
+ [
+ "c",
+ "ia"
+ ],
+ [
+ "▁contin",
+ "u"
+ ],
+ [
+ "▁A",
+ "lex"
+ ],
+ [
+ "▁Al",
+ "ex"
+ ],
+ [
+ "▁Ale",
+ "x"
+ ],
+ [
+ "▁",
+ "Alex"
+ ],
+ [
+ "▁g",
+ "old"
+ ],
+ [
+ "▁go",
+ "ld"
+ ],
+ [
+ "▁gol",
+ "d"
+ ],
+ [
+ "▁",
+ "gold"
+ ],
+ [
+ "▁p",
+ "rep"
+ ],
+ [
+ "▁pre",
+ "p"
+ ],
+ [
+ "▁pr",
+ "ep"
+ ],
+ [
+ "▁or",
+ "igin"
+ ],
+ [
+ "▁orig",
+ "in"
+ ],
+ [
+ "▁",
+ "origin"
+ ],
+ [
+ "▁r",
+ "ap"
+ ],
+ [
+ "▁ra",
+ "p"
+ ],
+ [
+ "▁",
+ "rap"
+ ],
+ [
+ "O",
+ "p"
+ ],
+ [
+ "ous",
+ "ly"
+ ],
+ [
+ "▁are",
+ "as"
+ ],
+ [
+ "▁area",
+ "s"
+ ],
+ [
+ "PO",
+ "RT"
+ ],
+ [
+ "P",
+ "ORT"
+ ],
+ [
+ "он",
+ "а"
+ ],
+ [
+ "о",
+ "на"
+ ],
+ [
+ "▁sa",
+ "fe"
+ ],
+ [
+ "▁saf",
+ "e"
+ ],
+ [
+ "▁",
+ "safe"
+ ],
+ [
+ "▁profess",
+ "ional"
+ ],
+ [
+ "▁profession",
+ "al"
+ ],
+ [
+ "ap",
+ "ache"
+ ],
+ [
+ "apa",
+ "che"
+ ],
+ [
+ "▁t",
+ "emper"
+ ],
+ [
+ "▁tem",
+ "per"
+ ],
+ [
+ "▁temp",
+ "er"
+ ],
+ [
+ "s",
+ "z"
+ ],
+ [
+ "▁u",
+ "nit"
+ ],
+ [
+ "▁un",
+ "it"
+ ],
+ [
+ "▁",
+ "unit"
+ ],
+ [
+ "▁c",
+ "op"
+ ],
+ [
+ "▁co",
+ "p"
+ ],
+ [
+ "▁",
+ "cop"
+ ],
+ [
+ "eq",
+ "n"
+ ],
+ [
+ "List",
+ "ener"
+ ],
+ [
+ "Listen",
+ "er"
+ ],
+ [
+ "▁for",
+ "mat"
+ ],
+ [
+ "▁form",
+ "at"
+ ],
+ [
+ "▁forma",
+ "t"
+ ],
+ [
+ "▁",
+ "format"
+ ],
+ [
+ "se",
+ "lect"
+ ],
+ [
+ "sel",
+ "ect"
+ ],
+ [
+ "s",
+ "elect"
+ ],
+ [
+ "▁com",
+ "fort"
+ ],
+ [
+ "▁",
+ "comfort"
+ ],
+ [
+ "▁me",
+ "ant"
+ ],
+ [
+ "▁mean",
+ "t"
+ ],
+ [
+ "id",
+ "ay"
+ ],
+ [
+ "ida",
+ "y"
+ ],
+ [
+ "i",
+ "day"
+ ],
+ [
+ "em",
+ "e"
+ ],
+ [
+ "e",
+ "me"
+ ],
+ [
+ "▁act",
+ "ive"
+ ],
+ [
+ "▁activ",
+ "e"
+ ],
+ [
+ "▁",
+ "active"
+ ],
+ [
+ "▁n",
+ "ote"
+ ],
+ [
+ "▁not",
+ "e"
+ ],
+ [
+ "▁no",
+ "te"
+ ],
+ [
+ "▁",
+ "note"
+ ],
+ [
+ "▁M",
+ "il"
+ ],
+ [
+ "▁Mi",
+ "l"
+ ],
+ [
+ "▁",
+ "Mil"
+ ],
+ [
+ "on",
+ "ly"
+ ],
+ [
+ "▁<",
+ "="
+ ],
+ [
+ "▁",
+ "<="
+ ],
+ [
+ "▁ne",
+ "igh"
+ ],
+ [
+ "▁nei",
+ "gh"
+ ],
+ [
+ "a",
+ "o"
+ ],
+ [
+ "▁bl",
+ "ue"
+ ],
+ [
+ "▁",
+ "blue"
+ ],
+ [
+ "▁T",
+ "V"
+ ],
+ [
+ "▁",
+ "TV"
+ ],
+ [
+ "Ch",
+ "ild"
+ ],
+ [
+ "▁re",
+ "ached"
+ ],
+ [
+ "▁reach",
+ "ed"
+ ],
+ [
+ "Add",
+ "ress"
+ ],
+ [
+ "Addr",
+ "ess"
+ ],
+ [
+ "ст",
+ "в"
+ ],
+ [
+ "▁cl",
+ "osed"
+ ],
+ [
+ "▁close",
+ "d"
+ ],
+ [
+ "▁clos",
+ "ed"
+ ],
+ [
+ "▁clo",
+ "sed"
+ ],
+ [
+ "▁",
+ "closed"
+ ],
+ [
+ "in",
+ "der"
+ ],
+ [
+ "ind",
+ "er"
+ ],
+ [
+ "inde",
+ "r"
+ ],
+ [
+ "i",
+ "nder"
+ ],
+ [
+ "ol",
+ "o"
+ ],
+ [
+ "o",
+ "lo"
+ ],
+ [
+ "▁a",
+ "lt"
+ ],
+ [
+ "▁al",
+ "t"
+ ],
+ [
+ "▁",
+ "alt"
+ ],
+ [
+ "▁a",
+ "dm"
+ ],
+ [
+ "▁ad",
+ "m"
+ ],
+ [
+ "Form",
+ "at"
+ ],
+ [
+ "For",
+ "mat"
+ ],
+ [
+ "U",
+ "I"
+ ],
+ [
+ "▁H",
+ "am"
+ ],
+ [
+ "▁Ha",
+ "m"
+ ],
+ [
+ "▁f",
+ "requ"
+ ],
+ [
+ "▁fr",
+ "equ"
+ ],
+ [
+ "▁fre",
+ "qu"
+ ],
+ [
+ "▁in",
+ "depend"
+ ],
+ [
+ "▁inde",
+ "pend"
+ ],
+ [
+ "▁",
+ "independ"
+ ],
+ [
+ "▁eas",
+ "ily"
+ ],
+ [
+ "▁L",
+ "and"
+ ],
+ [
+ "▁La",
+ "nd"
+ ],
+ [
+ "▁Lan",
+ "d"
+ ],
+ [
+ "▁",
+ "Land"
+ ],
+ [
+ "▁t",
+ "or"
+ ],
+ [
+ "▁to",
+ "r"
+ ],
+ [
+ "▁",
+ "tor"
+ ],
+ [
+ "ograph",
+ "y"
+ ],
+ [
+ "ograp",
+ "hy"
+ ],
+ [
+ "in",
+ "fty"
+ ],
+ [
+ "inf",
+ "ty"
+ ],
+ [
+ "▁W",
+ "ork"
+ ],
+ [
+ "▁Wor",
+ "k"
+ ],
+ [
+ "▁",
+ "Work"
+ ],
+ [
+ "iv",
+ "en"
+ ],
+ [
+ "ive",
+ "n"
+ ],
+ [
+ "i",
+ "ven"
+ ],
+ [
+ "▁Count",
+ "y"
+ ],
+ [
+ "▁Coun",
+ "ty"
+ ],
+ [
+ "▁s",
+ "rc"
+ ],
+ [
+ "▁",
+ "src"
+ ],
+ [
+ "}$",
+ ","
+ ],
+ [
+ "}",
+ "$,"
+ ],
+ [
+ "par",
+ "se"
+ ],
+ [
+ "pars",
+ "e"
+ ],
+ [
+ "p",
+ "arse"
+ ],
+ [
+ "C",
+ "D"
+ ],
+ [
+ "▁C",
+ "our"
+ ],
+ [
+ "▁Co",
+ "ur"
+ ],
+ [
+ "▁Cou",
+ "r"
+ ],
+ [
+ "▁f",
+ "ol"
+ ],
+ [
+ "▁fo",
+ "l"
+ ],
+ [
+ "▁",
+ "fol"
+ ],
+ [
+ "Ent",
+ "ity"
+ ],
+ [
+ "pg",
+ "f"
+ ],
+ [
+ "▁Ch",
+ "ina"
+ ],
+ [
+ "▁Chi",
+ "na"
+ ],
+ [
+ "▁S",
+ "ub"
+ ],
+ [
+ "▁Su",
+ "b"
+ ],
+ [
+ "▁",
+ "Sub"
+ ],
+ [
+ "ho",
+ "od"
+ ],
+ [
+ "h",
+ "ood"
+ ],
+ [
+ "▁field",
+ "s"
+ ],
+ [
+ "▁",
+ "fields"
+ ],
+ [
+ "▁y",
+ "es"
+ ],
+ [
+ "▁ye",
+ "s"
+ ],
+ [
+ "▁",
+ "yes"
+ ],
+ [
+ "re",
+ "nd"
+ ],
+ [
+ "ren",
+ "d"
+ ],
+ [
+ "r",
+ "end"
+ ],
+ [
+ "▁to",
+ "wards"
+ ],
+ [
+ "▁toward",
+ "s"
+ ],
+ [
+ "▁tow",
+ "ards"
+ ],
+ [
+ "▁st",
+ "aff"
+ ],
+ [
+ "▁sta",
+ "ff"
+ ],
+ [
+ "▁",
+ "staff"
+ ],
+ [
+ "▁A",
+ "ir"
+ ],
+ [
+ "▁",
+ "Air"
+ ],
+ [
+ "▁st",
+ "ation"
+ ],
+ [
+ "▁stat",
+ "ion"
+ ],
+ [
+ "▁",
+ "station"
+ ],
+ [
+ "at",
+ "ives"
+ ],
+ [
+ "ative",
+ "s"
+ ],
+ [
+ "ati",
+ "ves"
+ ],
+ [
+ "ativ",
+ "es"
+ ],
+ [
+ "▁imp",
+ "act"
+ ],
+ [
+ "в",
+ "ы"
+ ],
+ [
+ "▁direct",
+ "ly"
+ ],
+ [
+ "iss",
+ "ions"
+ ],
+ [
+ "ission",
+ "s"
+ ],
+ [
+ "iv",
+ "a"
+ ],
+ [
+ "i",
+ "va"
+ ],
+ [
+ "|",
+ "\\"
+ ],
+ [
+ "Pt",
+ "r"
+ ],
+ [
+ "P",
+ "tr"
+ ],
+ [
+ "▁S",
+ "ant"
+ ],
+ [
+ "▁San",
+ "t"
+ ],
+ [
+ "▁Sa",
+ "nt"
+ ],
+ [
+ "Po",
+ "l"
+ ],
+ [
+ "P",
+ "ol"
+ ],
+ [
+ "▁pro",
+ "gress"
+ ],
+ [
+ "▁",
+ "progress"
+ ],
+ [
+ "it",
+ "ar"
+ ],
+ [
+ "ita",
+ "r"
+ ],
+ [
+ "i",
+ "tar"
+ ],
+ [
+ "▁p",
+ "arts"
+ ],
+ [
+ "▁part",
+ "s"
+ ],
+ [
+ "▁par",
+ "ts"
+ ],
+ [
+ "▁",
+ "parts"
+ ],
+ [
+ "▁pl",
+ "ant"
+ ],
+ [
+ "▁plan",
+ "t"
+ ],
+ [
+ "▁",
+ "plant"
+ ],
+ [
+ "▁abs",
+ "olut"
+ ],
+ [
+ "▁gu",
+ "ess"
+ ],
+ [
+ "eq",
+ "ref"
+ ],
+ [
+ "▁t",
+ "im"
+ ],
+ [
+ "▁ti",
+ "m"
+ ],
+ [
+ "▁",
+ "tim"
+ ],
+ [
+ "▁L",
+ "ou"
+ ],
+ [
+ "▁Lo",
+ "u"
+ ],
+ [
+ "▁",
+ "Lou"
+ ],
+ [
+ "▁c",
+ "ool"
+ ],
+ [
+ "▁co",
+ "ol"
+ ],
+ [
+ "al",
+ "u"
+ ],
+ [
+ "a",
+ "lu"
+ ],
+ [
+ "▁m",
+ "outh"
+ ],
+ [
+ "▁mo",
+ "uth"
+ ],
+ [
+ "▁mou",
+ "th"
+ ],
+ [
+ "▁",
+ "mouth"
+ ],
+ [
+ "ни",
+ "х"
+ ],
+ [
+ "▁h",
+ "eight"
+ ],
+ [
+ "▁he",
+ "ight"
+ ],
+ [
+ "▁",
+ "height"
+ ],
+ [
+ "ge",
+ "st"
+ ],
+ [
+ "ges",
+ "t"
+ ],
+ [
+ "g",
+ "est"
+ ],
+ [
+ "▁P",
+ "ost"
+ ],
+ [
+ "▁Po",
+ "st"
+ ],
+ [
+ "▁Pos",
+ "t"
+ ],
+ [
+ "▁",
+ "Post"
+ ],
+ [
+ "▁b",
+ "oard"
+ ],
+ [
+ "▁bo",
+ "ard"
+ ],
+ [
+ "▁",
+ "board"
+ ],
+ [
+ "▁t",
+ "it"
+ ],
+ [
+ "▁ti",
+ "t"
+ ],
+ [
+ "▁",
+ "tit"
+ ],
+ [
+ "▁h",
+ "our"
+ ],
+ [
+ "▁ho",
+ "ur"
+ ],
+ [
+ "▁",
+ "hour"
+ ],
+ [
+ "▁ser",
+ "ver"
+ ],
+ [
+ "▁serv",
+ "er"
+ ],
+ [
+ "▁serve",
+ "r"
+ ],
+ [
+ "▁",
+ "server"
+ ],
+ [
+ "▁p",
+ "layers"
+ ],
+ [
+ "▁play",
+ "ers"
+ ],
+ [
+ "▁player",
+ "s"
+ ],
+ [
+ "ri",
+ "er"
+ ],
+ [
+ "rie",
+ "r"
+ ],
+ [
+ "r",
+ "ier"
+ ],
+ [
+ "Lin",
+ "k"
+ ],
+ [
+ "L",
+ "ink"
+ ],
+ [
+ "▁Pres",
+ "ident"
+ ],
+ [
+ "]",
+ "("
+ ],
+ [
+ "▁con",
+ "struct"
+ ],
+ [
+ "▁const",
+ "ruct"
+ ],
+ [
+ "▁constr",
+ "uct"
+ ],
+ [
+ "▁constru",
+ "ct"
+ ],
+ [
+ "▁",
+ "construct"
+ ],
+ [
+ "hand",
+ "le"
+ ],
+ [
+ "}$",
+ "."
+ ],
+ [
+ "}",
+ "$."
+ ],
+ [
+ "ry",
+ "ing"
+ ],
+ [
+ "r",
+ "ying"
+ ],
+ [
+ "▁s",
+ "hop"
+ ],
+ [
+ "▁sh",
+ "op"
+ ],
+ [
+ "▁",
+ "shop"
+ ],
+ [
+ "ia",
+ "na"
+ ],
+ [
+ "ian",
+ "a"
+ ],
+ [
+ "i",
+ "ana"
+ ],
+ [
+ "ex",
+ "p"
+ ],
+ [
+ "e",
+ "xp"
+ ],
+ [
+ "Hel",
+ "per"
+ ],
+ [
+ "Help",
+ "er"
+ ],
+ [
+ "Off",
+ "set"
+ ],
+ [
+ "ac",
+ "hes"
+ ],
+ [
+ "ach",
+ "es"
+ ],
+ [
+ "ache",
+ "s"
+ ],
+ [
+ "a",
+ "ches"
+ ],
+ [
+ "▁conne",
+ "ction"
+ ],
+ [
+ "▁connect",
+ "ion"
+ ],
+ [
+ "▁conn",
+ "ection"
+ ],
+ [
+ "▁",
+ "connection"
+ ],
+ [
+ "▁d",
+ "ifference"
+ ],
+ [
+ "▁dif",
+ "ference"
+ ],
+ [
+ "▁differ",
+ "ence"
+ ],
+ [
+ "serv",
+ "ice"
+ ],
+ [
+ "s",
+ "ervice"
+ ],
+ [
+ "▁g",
+ "as"
+ ],
+ [
+ "▁ga",
+ "s"
+ ],
+ [
+ "▁",
+ "gas"
+ ],
+ [
+ "▁p",
+ "riv"
+ ],
+ [
+ "▁pr",
+ "iv"
+ ],
+ [
+ "▁pri",
+ "v"
+ ],
+ [
+ "▁",
+ "priv"
+ ],
+ [
+ "▁un",
+ "ivers"
+ ],
+ [
+ "▁",
+ "univers"
+ ],
+ [
+ "▁w",
+ "ish"
+ ],
+ [
+ "▁wis",
+ "h"
+ ],
+ [
+ "Re",
+ "m"
+ ],
+ [
+ "R",
+ "em"
+ ],
+ [
+ "U",
+ "rl"
+ ],
+ [
+ "ge",
+ "b"
+ ],
+ [
+ "g",
+ "eb"
+ ],
+ [
+ "S",
+ "o"
+ ],
+ [
+ "ens",
+ "ions"
+ ],
+ [
+ "ension",
+ "s"
+ ],
+ [
+ "Mod",
+ "ule"
+ ],
+ [
+ "Mo",
+ "dule"
+ ],
+ [
+ "SI",
+ "ZE"
+ ],
+ [
+ "▁p",
+ "rem"
+ ],
+ [
+ "▁pre",
+ "m"
+ ],
+ [
+ "▁pr",
+ "em"
+ ],
+ [
+ "wind",
+ "ow"
+ ],
+ [
+ "w",
+ "indow"
+ ],
+ [
+ "▁d",
+ "ies"
+ ],
+ [
+ "▁di",
+ "es"
+ ],
+ [
+ "▁die",
+ "s"
+ ],
+ [
+ "de",
+ "l"
+ ],
+ [
+ "d",
+ "el"
+ ],
+ [
+ "▁r",
+ "ow"
+ ],
+ [
+ "▁ro",
+ "w"
+ ],
+ [
+ "▁",
+ "row"
+ ],
+ [
+ "▁a",
+ "verage"
+ ],
+ [
+ "▁aver",
+ "age"
+ ],
+ [
+ "▁ave",
+ "rage"
+ ],
+ [
+ "xi",
+ "m"
+ ],
+ [
+ "x",
+ "im"
+ ],
+ [
+ "▁p",
+ "u"
+ ],
+ [
+ "▁",
+ "pu"
+ ],
+ [
+ "an",
+ "ç"
+ ],
+ [
+ "De",
+ "t"
+ ],
+ [
+ "D",
+ "et"
+ ],
+ [
+ "ke",
+ "r"
+ ],
+ [
+ "k",
+ "er"
+ ],
+ [
+ "y",
+ "a"
+ ],
+ [
+ "▁D",
+ "et"
+ ],
+ [
+ "▁De",
+ "t"
+ ],
+ [
+ "▁",
+ "Det"
+ ],
+ [
+ "▁p",
+ "å"
+ ],
+ [
+ "▁n",
+ "amed"
+ ],
+ [
+ "▁name",
+ "d"
+ ],
+ [
+ "▁na",
+ "med"
+ ],
+ [
+ "▁nam",
+ "ed"
+ ],
+ [
+ "▁",
+ "named"
+ ],
+ [
+ "▁dec",
+ "ision"
+ ],
+ [
+ "▁decis",
+ "ion"
+ ],
+ [
+ "wi",
+ "n"
+ ],
+ [
+ "w",
+ "in"
+ ],
+ [
+ "▁Ge",
+ "orge"
+ ],
+ [
+ "▁Georg",
+ "e"
+ ],
+ [
+ "ar",
+ "ily"
+ ],
+ [
+ "ari",
+ "ly"
+ ],
+ [
+ "▁s",
+ "olution"
+ ],
+ [
+ "▁sol",
+ "ution"
+ ],
+ [
+ "▁mult",
+ "iple"
+ ],
+ [
+ "▁multi",
+ "ple"
+ ],
+ [
+ "▁multip",
+ "le"
+ ],
+ [
+ "▁",
+ "multiple"
+ ],
+ [
+ "at",
+ "egy"
+ ],
+ [
+ "ate",
+ "gy"
+ ],
+ [
+ "ateg",
+ "y"
+ ],
+ [
+ "▁le",
+ "arning"
+ ],
+ [
+ "▁learn",
+ "ing"
+ ],
+ [
+ "▁lear",
+ "ning"
+ ],
+ [
+ "▁",
+ "learning"
+ ],
+ [
+ "▁se",
+ "cret"
+ ],
+ [
+ "▁sec",
+ "ret"
+ ],
+ [
+ "▁secre",
+ "t"
+ ],
+ [
+ "▁",
+ "secret"
+ ],
+ [
+ "D",
+ "O"
+ ],
+ [
+ "▁n",
+ "ice"
+ ],
+ [
+ "▁ni",
+ "ce"
+ ],
+ [
+ "▁nic",
+ "e"
+ ],
+ [
+ "▁",
+ "nice"
+ ],
+ [
+ "////////",
+ "////////"
+ ],
+ [
+ "S",
+ "u"
+ ],
+ [
+ "it",
+ "ation"
+ ],
+ [
+ "itat",
+ "ion"
+ ],
+ [
+ "▁j",
+ "oin"
+ ],
+ [
+ "▁jo",
+ "in"
+ ],
+ [
+ "▁",
+ "join"
+ ],
+ [
+ "▁el",
+ "ements"
+ ],
+ [
+ "▁element",
+ "s"
+ ],
+ [
+ "▁ele",
+ "ments"
+ ],
+ [
+ "▁elem",
+ "ents"
+ ],
+ [
+ "▁",
+ "elements"
+ ],
+ [
+ "▁e",
+ "mer"
+ ],
+ [
+ "▁em",
+ "er"
+ ],
+ [
+ "til",
+ "de"
+ ],
+ [
+ "t",
+ "ilde"
+ ],
+ [
+ "▁d",
+ "ep"
+ ],
+ [
+ "▁de",
+ "p"
+ ],
+ [
+ "▁",
+ "dep"
+ ],
+ [
+ "▁s",
+ "hot"
+ ],
+ [
+ "▁sh",
+ "ot"
+ ],
+ [
+ "▁",
+ "shot"
+ ],
+ [
+ "▁pl",
+ "atform"
+ ],
+ [
+ "▁plat",
+ "form"
+ ],
+ [
+ "▁",
+ "platform"
+ ],
+ [
+ "ot",
+ "hing"
+ ],
+ [
+ "oth",
+ "ing"
+ ],
+ [
+ "o",
+ "thing"
+ ],
+ [
+ "M",
+ "y"
+ ],
+ [
+ "ed",
+ "ia"
+ ],
+ [
+ "edi",
+ "a"
+ ],
+ [
+ "om",
+ "s"
+ ],
+ [
+ "o",
+ "ms"
+ ],
+ [
+ "ail",
+ "y"
+ ],
+ [
+ "ai",
+ "ly"
+ ],
+ [
+ "a",
+ "ily"
+ ],
+ [
+ "(",
+ "["
+ ],
+ [
+ "▁d",
+ "ress"
+ ],
+ [
+ "▁dr",
+ "ess"
+ ],
+ [
+ "▁dre",
+ "ss"
+ ],
+ [
+ "▁off",
+ "icial"
+ ],
+ [
+ "▁offic",
+ "ial"
+ ],
+ [
+ "es",
+ "tern"
+ ],
+ [
+ "est",
+ "ern"
+ ],
+ [
+ "ester",
+ "n"
+ ],
+ [
+ "este",
+ "rn"
+ ],
+ [
+ "▁dis",
+ "cover"
+ ],
+ [
+ "▁disc",
+ "over"
+ ],
+ [
+ "▁disco",
+ "ver"
+ ],
+ [
+ "▁m",
+ "i"
+ ],
+ [
+ "▁",
+ "mi"
+ ],
+ [
+ "ны",
+ "е"
+ ],
+ [
+ "C",
+ "A"
+ ],
+ [
+ "od",
+ "ing"
+ ],
+ [
+ "odi",
+ "ng"
+ ],
+ [
+ "o",
+ "ding"
+ ],
+ [
+ "▁F",
+ "ound"
+ ],
+ [
+ "▁Fou",
+ "nd"
+ ],
+ [
+ "▁Fo",
+ "und"
+ ],
+ [
+ "▁",
+ "Found"
+ ],
+ [
+ "▁a",
+ "ffect"
+ ],
+ [
+ "▁aff",
+ "ect"
+ ],
+ [
+ "▁af",
+ "fect"
+ ],
+ [
+ "Vi",
+ "s"
+ ],
+ [
+ "V",
+ "is"
+ ],
+ [
+ "st",
+ "ract"
+ ],
+ [
+ "str",
+ "act"
+ ],
+ [
+ "stra",
+ "ct"
+ ],
+ [
+ "s",
+ "tract"
+ ],
+ [
+ "ic",
+ "ed"
+ ],
+ [
+ "ice",
+ "d"
+ ],
+ [
+ "i",
+ "ced"
+ ],
+ [
+ "de",
+ "bug"
+ ],
+ [
+ "d",
+ "ebug"
+ ],
+ [
+ "▁rel",
+ "ated"
+ ],
+ [
+ "▁relate",
+ "d"
+ ],
+ [
+ "▁",
+ "related"
+ ],
+ [
+ "▁s",
+ "pect"
+ ],
+ [
+ "▁sp",
+ "ect"
+ ],
+ [
+ "▁spec",
+ "t"
+ ],
+ [
+ "▁spe",
+ "ct"
+ ],
+ [
+ "▁",
+ "spect"
+ ],
+ [
+ "us",
+ "hed"
+ ],
+ [
+ "ush",
+ "ed"
+ ],
+ [
+ "сь",
+ "ко"
+ ],
+ [
+ "▁b",
+ "ank"
+ ],
+ [
+ "▁ban",
+ "k"
+ ],
+ [
+ "▁",
+ "bank"
+ ],
+ [
+ "▁c",
+ "ele"
+ ],
+ [
+ "▁ce",
+ "le"
+ ],
+ [
+ "▁cel",
+ "e"
+ ],
+ [
+ "AN",
+ "D"
+ ],
+ [
+ "A",
+ "ND"
+ ],
+ [
+ "ol",
+ "f"
+ ],
+ [
+ "е",
+ "м"
+ ],
+ [
+ "▁f",
+ "ill"
+ ],
+ [
+ "▁fil",
+ "l"
+ ],
+ [
+ "▁fi",
+ "ll"
+ ],
+ [
+ "▁",
+ "fill"
+ ],
+ [
+ "▁g",
+ "ives"
+ ],
+ [
+ "▁giv",
+ "es"
+ ],
+ [
+ "▁give",
+ "s"
+ ],
+ [
+ "▁gi",
+ "ves"
+ ],
+ [
+ "▁б",
+ "у"
+ ],
+ [
+ "▁",
+ "бу"
+ ],
+ [
+ "ar",
+ "on"
+ ],
+ [
+ "aro",
+ "n"
+ ],
+ [
+ "a",
+ "ron"
+ ],
+ [
+ "▁J",
+ "es"
+ ],
+ [
+ "▁Je",
+ "s"
+ ],
+ [
+ "RE",
+ "G"
+ ],
+ [
+ "▁s",
+ "udd"
+ ],
+ [
+ "▁su",
+ "dd"
+ ],
+ [
+ "▁sud",
+ "d"
+ ],
+ [
+ "date",
+ "d"
+ ],
+ [
+ "da",
+ "ted"
+ ],
+ [
+ "dat",
+ "ed"
+ ],
+ [
+ "d",
+ "ated"
+ ],
+ [
+ "v",
+ "i"
+ ],
+ [
+ "▁g",
+ "i"
+ ],
+ [
+ "▁",
+ "gi"
+ ],
+ [
+ "se",
+ "nd"
+ ],
+ [
+ "sen",
+ "d"
+ ],
+ [
+ "s",
+ "end"
+ ],
+ [
+ "cp",
+ "p"
+ ],
+ [
+ "c",
+ "pp"
+ ],
+ [
+ "▁s",
+ "pent"
+ ],
+ [
+ "▁sp",
+ "ent"
+ ],
+ [
+ "▁spe",
+ "nt"
+ ],
+ [
+ "an",
+ "de"
+ ],
+ [
+ "and",
+ "e"
+ ],
+ [
+ "a",
+ "nde"
+ ],
+ [
+ "▁oper",
+ "ation"
+ ],
+ [
+ "▁",
+ "operation"
+ ],
+ [
+ "pro",
+ "cess"
+ ],
+ [
+ "proc",
+ "ess"
+ ],
+ [
+ "▁in",
+ "form"
+ ],
+ [
+ "▁inf",
+ "orm"
+ ],
+ [
+ "▁info",
+ "rm"
+ ],
+ [
+ "▁F",
+ "ree"
+ ],
+ [
+ "▁Fr",
+ "ee"
+ ],
+ [
+ "▁Fre",
+ "e"
+ ],
+ [
+ "▁",
+ "Free"
+ ],
+ [
+ "yo",
+ "nd"
+ ],
+ [
+ "y",
+ "ond"
+ ],
+ [
+ "▁per",
+ "haps"
+ ],
+ [
+ "▁su",
+ "rv"
+ ],
+ [
+ "▁sur",
+ "v"
+ ],
+ [
+ "▁L",
+ "oc"
+ ],
+ [
+ "▁Lo",
+ "c"
+ ],
+ [
+ "▁",
+ "Loc"
+ ],
+ [
+ "▁con",
+ "cl"
+ ],
+ [
+ "▁conc",
+ "l"
+ ],
+ [
+ "▁ра",
+ "з"
+ ],
+ [
+ "▁",
+ "раз"
+ ],
+ [
+ "▁O",
+ "ver"
+ ],
+ [
+ "▁",
+ "Over"
+ ],
+ [
+ "ho",
+ "l"
+ ],
+ [
+ "h",
+ "ol"
+ ],
+ [
+ "ra",
+ "z"
+ ],
+ [
+ "r",
+ "az"
+ ],
+ [
+ "Wr",
+ "ite"
+ ],
+ [
+ "Writ",
+ "e"
+ ],
+ [
+ "W",
+ "rite"
+ ],
+ [
+ "▁g",
+ "iving"
+ ],
+ [
+ "▁giv",
+ "ing"
+ ],
+ [
+ "▁gi",
+ "ving"
+ ],
+ [
+ "r",
+ "d"
+ ],
+ [
+ "in",
+ "stance"
+ ],
+ [
+ "inst",
+ "ance"
+ ],
+ [
+ "▁re",
+ "leased"
+ ],
+ [
+ "▁rele",
+ "ased"
+ ],
+ [
+ "▁release",
+ "d"
+ ],
+ [
+ "▁R",
+ "o"
+ ],
+ [
+ "▁",
+ "Ro"
+ ],
+ [
+ "R",
+ "A"
+ ],
+ [
+ "▁pract",
+ "ice"
+ ],
+ [
+ "▁g",
+ "raph"
+ ],
+ [
+ "▁gr",
+ "aph"
+ ],
+ [
+ "▁gra",
+ "ph"
+ ],
+ [
+ "▁grap",
+ "h"
+ ],
+ [
+ "▁",
+ "graph"
+ ],
+ [
+ "▁incre",
+ "ase"
+ ],
+ [
+ "▁fig",
+ "ure"
+ ],
+ [
+ "▁",
+ "figure"
+ ],
+ [
+ "Fil",
+ "ter"
+ ],
+ [
+ "HE",
+ "CK"
+ ],
+ [
+ "id",
+ "x"
+ ],
+ [
+ "i",
+ "dx"
+ ],
+ [
+ "▁g",
+ "lass"
+ ],
+ [
+ "▁gl",
+ "ass"
+ ],
+ [
+ "▁",
+ "glass"
+ ],
+ [
+ "sk",
+ "i"
+ ],
+ [
+ "s",
+ "ki"
+ ],
+ [
+ "com",
+ "es"
+ ],
+ [
+ "co",
+ "mes"
+ ],
+ [
+ "come",
+ "s"
+ ],
+ [
+ "c",
+ "omes"
+ ],
+ [
+ "▁c",
+ "at"
+ ],
+ [
+ "▁ca",
+ "t"
+ ],
+ [
+ "▁",
+ "cat"
+ ],
+ [
+ "▁c",
+ "old"
+ ],
+ [
+ "▁col",
+ "d"
+ ],
+ [
+ "▁co",
+ "ld"
+ ],
+ [
+ "go",
+ "to"
+ ],
+ [
+ "got",
+ "o"
+ ],
+ [
+ "g",
+ "oto"
+ ],
+ [
+ "uf",
+ "act"
+ ],
+ [
+ "u",
+ "fact"
+ ],
+ [
+ "▁C",
+ "opyright"
+ ],
+ [
+ "▁Copy",
+ "right"
+ ],
+ [
+ "▁",
+ "Copyright"
+ ],
+ [
+ "}}",
+ "\\"
+ ],
+ [
+ "}",
+ "}\\"
+ ],
+ [
+ "▁str",
+ "eng"
+ ],
+ [
+ "▁stre",
+ "ng"
+ ],
+ [
+ "▁d",
+ "ir"
+ ],
+ [
+ "▁di",
+ "r"
+ ],
+ [
+ "▁",
+ "dir"
+ ],
+ [
+ "to",
+ "ken"
+ ],
+ [
+ "tok",
+ "en"
+ ],
+ [
+ "t",
+ "oken"
+ ],
+ [
+ "▁occ",
+ "ur"
+ ],
+ [
+ "▁oc",
+ "cur"
+ ],
+ [
+ "arl",
+ "ier"
+ ],
+ [
+ "▁me",
+ "asure"
+ ],
+ [
+ "▁meas",
+ "ure"
+ ],
+ [
+ "▁",
+ "measure"
+ ],
+ [
+ "▁s",
+ "ec"
+ ],
+ [
+ "▁se",
+ "c"
+ ],
+ [
+ "▁",
+ "sec"
+ ],
+ [
+ "▁m",
+ "ás"
+ ],
+ [
+ "▁má",
+ "s"
+ ],
+ [
+ "▁N",
+ "et"
+ ],
+ [
+ "▁Ne",
+ "t"
+ ],
+ [
+ "▁",
+ "Net"
+ ],
+ [
+ "▁arg",
+ "ument"
+ ],
+ [
+ "▁",
+ "argument"
+ ],
+ [
+ "▁s",
+ "ou"
+ ],
+ [
+ "▁so",
+ "u"
+ ],
+ [
+ "▁m",
+ "oving"
+ ],
+ [
+ "▁mov",
+ "ing"
+ ],
+ [
+ "▁mo",
+ "ving"
+ ],
+ [
+ "▁p",
+ "refer"
+ ],
+ [
+ "▁pre",
+ "fer"
+ ],
+ [
+ "▁pref",
+ "er"
+ ],
+ [
+ "ma",
+ "sk"
+ ],
+ [
+ "mas",
+ "k"
+ ],
+ [
+ "m",
+ "ask"
+ ],
+ [
+ "<",
+ "<"
+ ],
+ [
+ "▁bre",
+ "ath"
+ ],
+ [
+ "▁breat",
+ "h"
+ ],
+ [
+ "▁phys",
+ "ical"
+ ],
+ [
+ "▁pos",
+ "itive"
+ ],
+ [
+ "▁posit",
+ "ive"
+ ],
+ [
+ "▁s",
+ "or"
+ ],
+ [
+ "▁so",
+ "r"
+ ],
+ [
+ "▁",
+ "sor"
+ ],
+ [
+ "▁de",
+ "part"
+ ],
+ [
+ "▁dep",
+ "art"
+ ],
+ [
+ "▁re",
+ "move"
+ ],
+ [
+ "▁rem",
+ "ove"
+ ],
+ [
+ "▁",
+ "remove"
+ ],
+ [
+ "▁k",
+ "it"
+ ],
+ [
+ "▁ki",
+ "t"
+ ],
+ [
+ "▁",
+ "kit"
+ ],
+ [
+ "▁me",
+ "eting"
+ ],
+ [
+ "▁meet",
+ "ing"
+ ],
+ [
+ "▁D",
+ "ata"
+ ],
+ [
+ "▁Da",
+ "ta"
+ ],
+ [
+ "▁Dat",
+ "a"
+ ],
+ [
+ "▁",
+ "Data"
+ ],
+ [
+ "og",
+ "raf"
+ ],
+ [
+ "act",
+ "ions"
+ ],
+ [
+ "action",
+ "s"
+ ],
+ [
+ "a",
+ "ctions"
+ ],
+ [
+ "▁param",
+ "eters"
+ ],
+ [
+ "▁parameter",
+ "s"
+ ],
+ [
+ "▁",
+ "parameters"
+ ],
+ [
+ "▁A",
+ "tt"
+ ],
+ [
+ "▁At",
+ "t"
+ ],
+ [
+ "▁",
+ "Att"
+ ],
+ [
+ "es",
+ "ch"
+ ],
+ [
+ "esc",
+ "h"
+ ],
+ [
+ "e",
+ "sch"
+ ],
+ [
+ "▁inv",
+ "olved"
+ ],
+ [
+ "▁invol",
+ "ved"
+ ],
+ [
+ "▁involve",
+ "d"
+ ],
+ [
+ "ä",
+ "t"
+ ],
+ [
+ "L",
+ "L"
+ ],
+ [
+ "B",
+ "ar"
+ ],
+ [
+ "▁с",
+ "и"
+ ],
+ [
+ "▁",
+ "си"
+ ],
+ [
+ "ec",
+ "h"
+ ],
+ [
+ "e",
+ "ch"
+ ],
+ [
+ "GE",
+ "T"
+ ],
+ [
+ "G",
+ "ET"
+ ],
+ [
+ "▁pre",
+ "vent"
+ ],
+ [
+ "▁pr",
+ "event"
+ ],
+ [
+ "▁prev",
+ "ent"
+ ],
+ [
+ "▁",
+ "prevent"
+ ],
+ [
+ "▁be",
+ "yond"
+ ],
+ [
+ "▁O",
+ "ther"
+ ],
+ [
+ "▁Ot",
+ "her"
+ ],
+ [
+ "▁",
+ "Other"
+ ],
+ [
+ "ä",
+ "n"
+ ],
+ [
+ "by",
+ "te"
+ ],
+ [
+ "▁sudd",
+ "en"
+ ],
+ [
+ "▁sud",
+ "den"
+ ],
+ [
+ "ol",
+ "ve"
+ ],
+ [
+ "olv",
+ "e"
+ ],
+ [
+ "▁н",
+ "о"
+ ],
+ [
+ "▁",
+ "но"
+ ],
+ [
+ "LO",
+ "G"
+ ],
+ [
+ "L",
+ "OG"
+ ],
+ [
+ "un",
+ "it"
+ ],
+ [
+ "uni",
+ "t"
+ ],
+ [
+ "u",
+ "nit"
+ ],
+ [
+ "▁tr",
+ "uth"
+ ],
+ [
+ "ra",
+ "t"
+ ],
+ [
+ "r",
+ "at"
+ ],
+ [
+ "S",
+ "D"
+ ],
+ [
+ "▁e",
+ "at"
+ ],
+ [
+ "▁M",
+ "ad"
+ ],
+ [
+ "▁Ma",
+ "d"
+ ],
+ [
+ "▁",
+ "Mad"
+ ],
+ [
+ "▁prov",
+ "ides"
+ ],
+ [
+ "▁provide",
+ "s"
+ ],
+ [
+ "▁s",
+ "ession"
+ ],
+ [
+ "▁",
+ "session"
+ ],
+ [
+ "De",
+ "le"
+ ],
+ [
+ "Del",
+ "e"
+ ],
+ [
+ "D",
+ "ele"
+ ],
+ [
+ "▁con",
+ "vers"
+ ],
+ [
+ "▁conv",
+ "ers"
+ ],
+ [
+ "▁conver",
+ "s"
+ ],
+ [
+ "▁conve",
+ "rs"
+ ],
+ [
+ "cent",
+ "er"
+ ],
+ [
+ "cen",
+ "ter"
+ ],
+ [
+ "c",
+ "enter"
+ ],
+ [
+ "▁contin",
+ "ued"
+ ],
+ [
+ "▁continue",
+ "d"
+ ],
+ [
+ "▁continu",
+ "ed"
+ ],
+ [
+ "ot",
+ "ion"
+ ],
+ [
+ "oti",
+ "on"
+ ],
+ [
+ "ca",
+ "che"
+ ],
+ [
+ "c",
+ "ache"
+ ],
+ [
+ "dis",
+ "play"
+ ],
+ [
+ "disp",
+ "lay"
+ ],
+ [
+ "▁prote",
+ "ct"
+ ],
+ [
+ "▁prot",
+ "ect"
+ ],
+ [
+ "am",
+ "s"
+ ],
+ [
+ "a",
+ "ms"
+ ],
+ [
+ "▁p",
+ "ow"
+ ],
+ [
+ "▁po",
+ "w"
+ ],
+ [
+ "▁",
+ "pow"
+ ],
+ [
+ "CT",
+ "ION"
+ ],
+ [
+ "C",
+ "TION"
+ ],
+ [
+ "▁M",
+ "ac"
+ ],
+ [
+ "▁Ma",
+ "c"
+ ],
+ [
+ "▁",
+ "Mac"
+ ],
+ [
+ "m",
+ "o"
+ ],
+ [
+ "х",
+ "а"
+ ],
+ [
+ "▁d",
+ "istance"
+ ],
+ [
+ "▁di",
+ "stance"
+ ],
+ [
+ "▁dist",
+ "ance"
+ ],
+ [
+ "▁",
+ "distance"
+ ],
+ [
+ "▁T",
+ "ime"
+ ],
+ [
+ "▁Tim",
+ "e"
+ ],
+ [
+ "▁Ti",
+ "me"
+ ],
+ [
+ "▁",
+ "Time"
+ ],
+ [
+ "g",
+ "i"
+ ],
+ [
+ "▁s",
+ "equ"
+ ],
+ [
+ "▁se",
+ "qu"
+ ],
+ [
+ "▁seq",
+ "u"
+ ],
+ [
+ "▁",
+ "sequ"
+ ],
+ [
+ "T",
+ "arget"
+ ],
+ [
+ "с",
+ "ле"
+ ],
+ [
+ "Ser",
+ "ver"
+ ],
+ [
+ "Serv",
+ "er"
+ ],
+ [
+ "▁w",
+ "ide"
+ ],
+ [
+ "▁wid",
+ "e"
+ ],
+ [
+ "▁",
+ "wide"
+ ],
+ [
+ "cl",
+ "ose"
+ ],
+ [
+ "clos",
+ "e"
+ ],
+ [
+ "▁c",
+ "ru"
+ ],
+ [
+ "▁cr",
+ "u"
+ ],
+ [
+ "Ex",
+ "t"
+ ],
+ [
+ "E",
+ "xt"
+ ],
+ [
+ "▁s",
+ "elect"
+ ],
+ [
+ "▁se",
+ "lect"
+ ],
+ [
+ "▁sel",
+ "ect"
+ ],
+ [
+ "▁sele",
+ "ct"
+ ],
+ [
+ "▁",
+ "select"
+ ],
+ [
+ "▁pat",
+ "tern"
+ ],
+ [
+ "▁",
+ "pattern"
+ ],
+ [
+ "\")",
+ ");"
+ ],
+ [
+ "\"))",
+ ";"
+ ],
+ [
+ "\"",
+ "));"
+ ],
+ [
+ "Pro",
+ "vider"
+ ],
+ [
+ "Prov",
+ "ider"
+ ],
+ [
+ "UR",
+ "L"
+ ],
+ [
+ "U",
+ "RL"
+ ],
+ [
+ "▁g",
+ "reen"
+ ],
+ [
+ "▁gr",
+ "een"
+ ],
+ [
+ "▁gre",
+ "en"
+ ],
+ [
+ "▁",
+ "green"
+ ],
+ [
+ "▁wait",
+ "ing"
+ ],
+ [
+ "▁wa",
+ "iting"
+ ],
+ [
+ "pro",
+ "to"
+ ],
+ [
+ "pr",
+ "oto"
+ ],
+ [
+ "prot",
+ "o"
+ ],
+ [
+ "▁immedi",
+ "ately"
+ ],
+ [
+ "▁immediate",
+ "ly"
+ ],
+ [
+ "com",
+ "mon"
+ ],
+ [
+ "comm",
+ "on"
+ ],
+ [
+ "az",
+ "ione"
+ ],
+ [
+ "azi",
+ "one"
+ ],
+ [
+ "a",
+ "zione"
+ ],
+ [
+ "ri",
+ "ver"
+ ],
+ [
+ "riv",
+ "er"
+ ],
+ [
+ "rive",
+ "r"
+ ],
+ [
+ "r",
+ "iver"
+ ],
+ [
+ "▁s",
+ "en"
+ ],
+ [
+ "▁se",
+ "n"
+ ],
+ [
+ "▁",
+ "sen"
+ ],
+ [
+ "▁!",
+ "=="
+ ],
+ [
+ "▁!=",
+ "="
+ ],
+ [
+ "▁Febru",
+ "ary"
+ ],
+ [
+ "▁Februar",
+ "y"
+ ],
+ [
+ "ur",
+ "b"
+ ],
+ [
+ "u",
+ "rb"
+ ],
+ [
+ "▁S",
+ "en"
+ ],
+ [
+ "▁Se",
+ "n"
+ ],
+ [
+ "de",
+ "st"
+ ],
+ [
+ "des",
+ "t"
+ ],
+ [
+ "d",
+ "est"
+ ],
+ [
+ "<",
+ "?"
+ ],
+ [
+ "▁ed",
+ "ge"
+ ],
+ [
+ "▁",
+ "edge"
+ ],
+ [
+ "▁m",
+ "ais"
+ ],
+ [
+ "▁ma",
+ "is"
+ ],
+ [
+ "▁mai",
+ "s"
+ ],
+ [
+ "gor",
+ "ith"
+ ],
+ [
+ "cp",
+ "u"
+ ],
+ [
+ "c",
+ "pu"
+ ],
+ [
+ "▁educ",
+ "ation"
+ ],
+ [
+ "▁associ",
+ "ated"
+ ],
+ [
+ "▁associate",
+ "d"
+ ],
+ [
+ "No",
+ "ne"
+ ],
+ [
+ "Non",
+ "e"
+ ],
+ [
+ "N",
+ "one"
+ ],
+ [
+ "h",
+ "i"
+ ],
+ [
+ "▁p",
+ "oor"
+ ],
+ [
+ "▁po",
+ "or"
+ ],
+ [
+ "se",
+ "m"
+ ],
+ [
+ "s",
+ "em"
+ ],
+ [
+ "▁W",
+ "il"
+ ],
+ [
+ "▁Wi",
+ "l"
+ ],
+ [
+ "▁b",
+ "ud"
+ ],
+ [
+ "▁bu",
+ "d"
+ ],
+ [
+ "▁",
+ "bud"
+ ],
+ [
+ "▁a",
+ "uch"
+ ],
+ [
+ "▁au",
+ "ch"
+ ],
+ [
+ "▁",
+ "auch"
+ ],
+ [
+ "el",
+ "ler"
+ ],
+ [
+ "ell",
+ "er"
+ ],
+ [
+ "elle",
+ "r"
+ ],
+ [
+ "▁L",
+ "ife"
+ ],
+ [
+ "▁Li",
+ "fe"
+ ],
+ [
+ "▁",
+ "Life"
+ ],
+ [
+ "▁f",
+ "iles"
+ ],
+ [
+ "▁fil",
+ "es"
+ ],
+ [
+ "▁file",
+ "s"
+ ],
+ [
+ "▁fi",
+ "les"
+ ],
+ [
+ "▁",
+ "files"
+ ],
+ [
+ "▁le",
+ "ading"
+ ],
+ [
+ "▁lead",
+ "ing"
+ ],
+ [
+ "▁",
+ "leading"
+ ],
+ [
+ "▁ob",
+ "tain"
+ ],
+ [
+ "▁obt",
+ "ain"
+ ],
+ [
+ "▁J",
+ "ul"
+ ],
+ [
+ "▁Ju",
+ "l"
+ ],
+ [
+ "at",
+ "ory"
+ ],
+ [
+ "ator",
+ "y"
+ ],
+ [
+ "ato",
+ "ry"
+ ],
+ [
+ "г",
+ "у"
+ ],
+ [
+ "it",
+ "able"
+ ],
+ [
+ "ita",
+ "ble"
+ ],
+ [
+ "i",
+ "table"
+ ],
+ [
+ "▁on",
+ "to"
+ ],
+ [
+ "▁ont",
+ "o"
+ ],
+ [
+ "▁",
+ "onto"
+ ],
+ [
+ "▁b",
+ "orn"
+ ],
+ [
+ "▁bo",
+ "rn"
+ ],
+ [
+ "▁bor",
+ "n"
+ ],
+ [
+ "▁",
+ "born"
+ ],
+ [
+ "or",
+ "em"
+ ],
+ [
+ "ore",
+ "m"
+ ],
+ [
+ "o",
+ "rem"
+ ],
+ [
+ "▁Stre",
+ "et"
+ ],
+ [
+ "▁m",
+ "aint"
+ ],
+ [
+ "▁main",
+ "t"
+ ],
+ [
+ "▁ma",
+ "int"
+ ],
+ [
+ "▁mai",
+ "nt"
+ ],
+ [
+ "Param",
+ "s"
+ ],
+ [
+ "Par",
+ "ams"
+ ],
+ [
+ "ri",
+ "p"
+ ],
+ [
+ "r",
+ "ip"
+ ],
+ [
+ "▁S",
+ "T"
+ ],
+ [
+ "▁",
+ "ST"
+ ],
+ [
+ "u",
+ "v"
+ ],
+ [
+ "ma",
+ "in"
+ ],
+ [
+ "m",
+ "ain"
+ ],
+ [
+ "▁re",
+ "cent"
+ ],
+ [
+ "▁rec",
+ "ent"
+ ],
+ [
+ "▁rece",
+ "nt"
+ ],
+ [
+ "We",
+ "b"
+ ],
+ [
+ "W",
+ "eb"
+ ],
+ [
+ "ov",
+ "a"
+ ],
+ [
+ "o",
+ "va"
+ ],
+ [
+ "ц",
+ "а"
+ ],
+ [
+ "ais",
+ "e"
+ ],
+ [
+ "ai",
+ "se"
+ ],
+ [
+ "a",
+ "ise"
+ ],
+ [
+ "yle",
+ "s"
+ ],
+ [
+ "yl",
+ "es"
+ ],
+ [
+ "y",
+ "les"
+ ],
+ [
+ "▁de",
+ "scribed"
+ ],
+ [
+ "▁desc",
+ "ribed"
+ ],
+ [
+ "▁describ",
+ "ed"
+ ],
+ [
+ "▁describe",
+ "d"
+ ],
+ [
+ "▁begin",
+ "ning"
+ ],
+ [
+ "▁D",
+ "ay"
+ ],
+ [
+ "▁Da",
+ "y"
+ ],
+ [
+ "▁",
+ "Day"
+ ],
+ [
+ "▁V",
+ "ol"
+ ],
+ [
+ "▁Vo",
+ "l"
+ ],
+ [
+ "▁",
+ "Vol"
+ ],
+ [
+ "▁h",
+ "uge"
+ ],
+ [
+ "▁hug",
+ "e"
+ ],
+ [
+ "Ha",
+ "s"
+ ],
+ [
+ "H",
+ "as"
+ ],
+ [
+ "an",
+ "cy"
+ ],
+ [
+ "anc",
+ "y"
+ ],
+ [
+ "He",
+ "ader"
+ ],
+ [
+ "Head",
+ "er"
+ ],
+ [
+ "▁a",
+ "ren"
+ ],
+ [
+ "▁are",
+ "n"
+ ],
+ [
+ "▁ar",
+ "en"
+ ],
+ [
+ "▁",
+ "aren"
+ ],
+ [
+ "ва",
+ "н"
+ ],
+ [
+ "в",
+ "ан"
+ ],
+ [
+ "▁en",
+ "sure"
+ ],
+ [
+ "▁ens",
+ "ure"
+ ],
+ [
+ "▁",
+ "ensure"
+ ],
+ [
+ "▁p",
+ "et"
+ ],
+ [
+ "▁pe",
+ "t"
+ ],
+ [
+ "▁",
+ "pet"
+ ],
+ [
+ "mu",
+ "lt"
+ ],
+ [
+ "mul",
+ "t"
+ ],
+ [
+ "m",
+ "ult"
+ ],
+ [
+ "▁L",
+ "ike"
+ ],
+ [
+ "▁Li",
+ "ke"
+ ],
+ [
+ "▁",
+ "Like"
+ ],
+ [
+ "▁man",
+ "agement"
+ ],
+ [
+ "▁manage",
+ "ment"
+ ],
+ [
+ "▁",
+ "management"
+ ],
+ [
+ "P",
+ "S"
+ ],
+ [
+ "wh",
+ "ile"
+ ],
+ [
+ "▁back",
+ "ground"
+ ],
+ [
+ "▁",
+ "background"
+ ],
+ [
+ "ount",
+ "er"
+ ],
+ [
+ "oun",
+ "ter"
+ ],
+ [
+ "o",
+ "unter"
+ ],
+ [
+ "bo",
+ "ol"
+ ],
+ [
+ "b",
+ "ool"
+ ],
+ [
+ "F",
+ "C"
+ ],
+ [
+ "N",
+ "um"
+ ],
+ [
+ "R",
+ "L"
+ ],
+ [
+ "▁ex",
+ "cl"
+ ],
+ [
+ "▁exc",
+ "l"
+ ],
+ [
+ "▁e",
+ "ye"
+ ],
+ [
+ "▁ey",
+ "e"
+ ],
+ [
+ "im",
+ "g"
+ ],
+ [
+ "i",
+ "mg"
+ ],
+ [
+ "▁r",
+ "om"
+ ],
+ [
+ "▁ro",
+ "m"
+ ],
+ [
+ "▁",
+ "rom"
+ ],
+ [
+ "▁H",
+ "el"
+ ],
+ [
+ "▁He",
+ "l"
+ ],
+ [
+ "▁",
+ "Hel"
+ ],
+ [
+ "Opt",
+ "ion"
+ ],
+ [
+ "O",
+ "ption"
+ ],
+ [
+ "▁stop",
+ "ped"
+ ],
+ [
+ "▁sto",
+ "pped"
+ ],
+ [
+ "▁th",
+ "read"
+ ],
+ [
+ "▁thr",
+ "ead"
+ ],
+ [
+ "▁",
+ "thread"
+ ],
+ [
+ "to",
+ "type"
+ ],
+ [
+ "tot",
+ "ype"
+ ],
+ [
+ "t",
+ "otype"
+ ],
+ [
+ "))",
+ ")"
+ ],
+ [
+ ")",
+ "))"
+ ],
+ [
+ "▁st",
+ "age"
+ ],
+ [
+ "▁stag",
+ "e"
+ ],
+ [
+ "▁sta",
+ "ge"
+ ],
+ [
+ "▁",
+ "stage"
+ ],
+ [
+ "▁ü",
+ "ber"
+ ],
+ [
+ "▁",
+ "über"
+ ],
+ [
+ "▁al",
+ "though"
+ ],
+ [
+ "▁",
+ "although"
+ ],
+ [
+ "Type",
+ "s"
+ ],
+ [
+ "Ty",
+ "pes"
+ ],
+ [
+ "Typ",
+ "es"
+ ],
+ [
+ "T",
+ "ypes"
+ ],
+ [
+ "▁O",
+ "h"
+ ],
+ [
+ "▁",
+ "Oh"
+ ],
+ [
+ "▁e",
+ "ight"
+ ],
+ [
+ "▁",
+ "eight"
+ ],
+ [
+ "▁de",
+ "scription"
+ ],
+ [
+ "▁des",
+ "cription"
+ ],
+ [
+ "▁",
+ "description"
+ ],
+ [
+ "'",
+ "'"
+ ],
+ [
+ "ö",
+ "n"
+ ],
+ [
+ "▁sur",
+ "face"
+ ],
+ [
+ "▁surf",
+ "ace"
+ ],
+ [
+ "▁",
+ "surface"
+ ],
+ [
+ "▁Intern",
+ "ational"
+ ],
+ [
+ "▁ch",
+ "arg"
+ ],
+ [
+ "▁char",
+ "g"
+ ],
+ [
+ "▁cha",
+ "rg"
+ ],
+ [
+ "▁",
+ "charg"
+ ],
+ [
+ "▁col",
+ "lection"
+ ],
+ [
+ "▁coll",
+ "ection"
+ ],
+ [
+ "▁collect",
+ "ion"
+ ],
+ [
+ "▁colle",
+ "ction"
+ ],
+ [
+ "▁",
+ "collection"
+ ],
+ [
+ "▁us",
+ "ers"
+ ],
+ [
+ "▁use",
+ "rs"
+ ],
+ [
+ "▁user",
+ "s"
+ ],
+ [
+ "▁",
+ "users"
+ ],
+ [
+ "▁ob",
+ "vious"
+ ],
+ [
+ "▁cent",
+ "ury"
+ ],
+ [
+ "▁",
+ "century"
+ ],
+ [
+ "ic",
+ "ks"
+ ],
+ [
+ "ick",
+ "s"
+ ],
+ [
+ "i",
+ "cks"
+ ],
+ [
+ "▁art",
+ "icle"
+ ],
+ [
+ "▁artic",
+ "le"
+ ],
+ [
+ "▁",
+ "article"
+ ],
+ [
+ "▁\"",
+ "\\"
+ ],
+ [
+ "▁",
+ "\"\\"
+ ],
+ [
+ "di",
+ "m"
+ ],
+ [
+ "d",
+ "im"
+ ],
+ [
+ "▁s",
+ "in"
+ ],
+ [
+ "▁si",
+ "n"
+ ],
+ [
+ "▁",
+ "sin"
+ ],
+ [
+ "en",
+ "ge"
+ ],
+ [
+ "eng",
+ "e"
+ ],
+ [
+ "Cont",
+ "rol"
+ ],
+ [
+ "▁com",
+ "mit"
+ ],
+ [
+ "▁comm",
+ "it"
+ ],
+ [
+ "▁",
+ "commit"
+ ],
+ [
+ "ens",
+ "ity"
+ ],
+ [
+ "▁t",
+ "ra"
+ ],
+ [
+ "▁tr",
+ "a"
+ ],
+ [
+ "▁",
+ "tra"
+ ],
+ [
+ "cript",
+ "or"
+ ],
+ [
+ "▁N",
+ "OT"
+ ],
+ [
+ "▁NO",
+ "T"
+ ],
+ [
+ "▁",
+ "NOT"
+ ],
+ [
+ "we",
+ "ll"
+ ],
+ [
+ "w",
+ "ell"
+ ],
+ [
+ "▁M",
+ "ichael"
+ ],
+ [
+ "▁Mich",
+ "ael"
+ ],
+ [
+ "▁n",
+ "od"
+ ],
+ [
+ "▁no",
+ "d"
+ ],
+ [
+ "▁",
+ "nod"
+ ],
+ [
+ "▁m",
+ "ort"
+ ],
+ [
+ "▁mor",
+ "t"
+ ],
+ [
+ "▁mo",
+ "rt"
+ ],
+ [
+ "iv",
+ "o"
+ ],
+ [
+ "i",
+ "vo"
+ ],
+ [
+ "is",
+ "ation"
+ ],
+ [
+ "▁P",
+ "o"
+ ],
+ [
+ "▁",
+ "Po"
+ ],
+ [
+ "▁P",
+ "aris"
+ ],
+ [
+ "▁Par",
+ "is"
+ ],
+ [
+ "▁Pa",
+ "ris"
+ ],
+ [
+ "▁ad",
+ "ministr"
+ ],
+ [
+ "▁admin",
+ "istr"
+ ],
+ [
+ "▁",
+ "administr"
+ ],
+ [
+ "bu",
+ "rg"
+ ],
+ [
+ "bur",
+ "g"
+ ],
+ [
+ "b",
+ "urg"
+ ],
+ [
+ "cd",
+ "ot"
+ ],
+ [
+ "c",
+ "dot"
+ ],
+ [
+ "▁mil",
+ "itary"
+ ],
+ [
+ "▁milit",
+ "ary"
+ ],
+ [
+ "▁militar",
+ "y"
+ ],
+ [
+ "▁B",
+ "est"
+ ],
+ [
+ "▁Be",
+ "st"
+ ],
+ [
+ "▁Bes",
+ "t"
+ ],
+ [
+ "▁",
+ "Best"
+ ],
+ [
+ "▁К",
+ "а"
+ ],
+ [
+ "▁",
+ "Ка"
+ ],
+ [
+ "IN",
+ "E"
+ ],
+ [
+ "I",
+ "NE"
+ ],
+ [
+ "▁through",
+ "out"
+ ],
+ [
+ "S",
+ "l"
+ ],
+ [
+ "▁im",
+ "pl"
+ ],
+ [
+ "▁imp",
+ "l"
+ ],
+ [
+ "▁",
+ "impl"
+ ],
+ [
+ "cont",
+ "rol"
+ ],
+ [
+ "contr",
+ "ol"
+ ],
+ [
+ "▁",
+ "Ч"
+ ],
+ [
+ "▁u",
+ "it"
+ ],
+ [
+ "▁ui",
+ "t"
+ ],
+ [
+ "▁",
+ "uit"
+ ],
+ [
+ "▁un",
+ "signed"
+ ],
+ [
+ "▁uns",
+ "igned"
+ ],
+ [
+ "▁",
+ "unsigned"
+ ],
+ [
+ "▁M",
+ "ary"
+ ],
+ [
+ "▁Mar",
+ "y"
+ ],
+ [
+ "▁Ma",
+ "ry"
+ ],
+ [
+ "Ch",
+ "ar"
+ ],
+ [
+ "C",
+ "har"
+ ],
+ [
+ "м",
+ "і"
+ ],
+ [
+ "▁th",
+ "reat"
+ ],
+ [
+ "▁c",
+ "ourt"
+ ],
+ [
+ "▁co",
+ "urt"
+ ],
+ [
+ "▁cour",
+ "t"
+ ],
+ [
+ "▁cou",
+ "rt"
+ ],
+ [
+ "▁",
+ "court"
+ ],
+ [
+ "vi",
+ "lle"
+ ],
+ [
+ "vil",
+ "le"
+ ],
+ [
+ "v",
+ "ille"
+ ],
+ [
+ "▁",
+ "ш"
+ ],
+ [
+ "▁C",
+ "am"
+ ],
+ [
+ "▁Ca",
+ "m"
+ ],
+ [
+ "▁",
+ "Cam"
+ ],
+ [
+ ".",
+ "\r"
+ ],
+ [
+ "▁current",
+ "ly"
+ ],
+ [
+ "▁curr",
+ "ently"
+ ],
+ [
+ "ro",
+ "t"
+ ],
+ [
+ "r",
+ "ot"
+ ],
+ [
+ "▁D",
+ "ate"
+ ],
+ [
+ "▁Da",
+ "te"
+ ],
+ [
+ "▁Dat",
+ "e"
+ ],
+ [
+ "▁",
+ "Date"
+ ],
+ [
+ "▁s",
+ "hit"
+ ],
+ [
+ "▁sh",
+ "it"
+ ],
+ [
+ "▁",
+ "shit"
+ ],
+ [
+ "▁$",
+ "{\\"
+ ],
+ [
+ "▁${",
+ "\\"
+ ],
+ [
+ "un",
+ "n"
+ ],
+ [
+ "u",
+ "nn"
+ ],
+ [
+ "U",
+ "s"
+ ],
+ [
+ "▁b",
+ "uffer"
+ ],
+ [
+ "▁buff",
+ "er"
+ ],
+ [
+ "▁buf",
+ "fer"
+ ],
+ [
+ "▁",
+ "buffer"
+ ],
+ [
+ "▁s",
+ "ont"
+ ],
+ [
+ "▁so",
+ "nt"
+ ],
+ [
+ "▁son",
+ "t"
+ ],
+ [
+ "▁let",
+ "ter"
+ ],
+ [
+ "▁lett",
+ "er"
+ ],
+ [
+ "▁",
+ "letter"
+ ],
+ [
+ "in",
+ "ated"
+ ],
+ [
+ "ina",
+ "ted"
+ ],
+ [
+ "inate",
+ "d"
+ ],
+ [
+ "Ch",
+ "ange"
+ ],
+ [
+ "▁h",
+ "ref"
+ ],
+ [
+ "▁hr",
+ "ef"
+ ],
+ [
+ "▁",
+ "href"
+ ],
+ [
+ "▁l",
+ "ack"
+ ],
+ [
+ "▁la",
+ "ck"
+ ],
+ [
+ "▁lac",
+ "k"
+ ],
+ [
+ "▁o",
+ "il"
+ ],
+ [
+ "▁C",
+ "ons"
+ ],
+ [
+ "▁Con",
+ "s"
+ ],
+ [
+ "▁Co",
+ "ns"
+ ],
+ [
+ "▁",
+ "Cons"
+ ],
+ [
+ "▁J",
+ "er"
+ ],
+ [
+ "▁Je",
+ "r"
+ ],
+ [
+ "BU",
+ "G"
+ ],
+ [
+ "B",
+ "UG"
+ ],
+ [
+ "if",
+ "orn"
+ ],
+ [
+ "▁pro",
+ "perties"
+ ],
+ [
+ "▁proper",
+ "ties"
+ ],
+ [
+ "▁",
+ "properties"
+ ],
+ [
+ "▁r",
+ "andom"
+ ],
+ [
+ "▁ran",
+ "dom"
+ ],
+ [
+ "▁rand",
+ "om"
+ ],
+ [
+ "▁",
+ "random"
+ ],
+ [
+ "▁br",
+ "other"
+ ],
+ [
+ "▁bro",
+ "ther"
+ ],
+ [
+ "▁p",
+ "iece"
+ ],
+ [
+ "▁pie",
+ "ce"
+ ],
+ [
+ "▁",
+ "piece"
+ ],
+ [
+ "б",
+ "у"
+ ],
+ [
+ "ist",
+ "ics"
+ ],
+ [
+ "istic",
+ "s"
+ ],
+ [
+ "isti",
+ "cs"
+ ],
+ [
+ "▁techn",
+ "ology"
+ ],
+ [
+ "gl",
+ "obal"
+ ],
+ [
+ "glob",
+ "al"
+ ],
+ [
+ "▁trans",
+ "form"
+ ],
+ [
+ "▁",
+ "transform"
+ ],
+ [
+ "er",
+ "d"
+ ],
+ [
+ "e",
+ "rd"
+ ],
+ [
+ "▁B",
+ "ecause"
+ ],
+ [
+ "▁",
+ "Because"
+ ],
+ [
+ "PE",
+ "CT"
+ ],
+ [
+ "P",
+ "ECT"
+ ],
+ [
+ "pr",
+ "et"
+ ],
+ [
+ "pre",
+ "t"
+ ],
+ [
+ "p",
+ "ret"
+ ],
+ [
+ "▁го",
+ "ду"
+ ],
+ [
+ "▁год",
+ "у"
+ ],
+ [
+ "▁M",
+ "et"
+ ],
+ [
+ "▁Me",
+ "t"
+ ],
+ [
+ "▁",
+ "Met"
+ ],
+ [
+ "▁p",
+ "sy"
+ ],
+ [
+ "▁ps",
+ "y"
+ ],
+ [
+ "▁",
+ "psy"
+ ],
+ [
+ "▁о",
+ "д"
+ ],
+ [
+ "▁g",
+ "od"
+ ],
+ [
+ "▁go",
+ "d"
+ ],
+ [
+ "▁",
+ "god"
+ ],
+ [
+ "▁D",
+ "el"
+ ],
+ [
+ "▁De",
+ "l"
+ ],
+ [
+ "▁",
+ "Del"
+ ],
+ [
+ "base",
+ "d"
+ ],
+ [
+ "ba",
+ "sed"
+ ],
+ [
+ "bas",
+ "ed"
+ ],
+ [
+ "b",
+ "ased"
+ ],
+ [
+ "▁v",
+ "oor"
+ ],
+ [
+ "▁vo",
+ "or"
+ ],
+ [
+ "▁C",
+ "all"
+ ],
+ [
+ "▁Cal",
+ "l"
+ ],
+ [
+ "▁Ca",
+ "ll"
+ ],
+ [
+ "▁",
+ "Call"
+ ],
+ [
+ "S",
+ "A"
+ ],
+ [
+ "▁fil",
+ "ter"
+ ],
+ [
+ "▁",
+ "filter"
+ ],
+ [
+ "▁incl",
+ "udes"
+ ],
+ [
+ "▁includ",
+ "es"
+ ],
+ [
+ "▁include",
+ "s"
+ ],
+ [
+ "▁inclu",
+ "des"
+ ],
+ [
+ "▁",
+ "includes"
+ ],
+ [
+ "olut",
+ "ions"
+ ],
+ [
+ "olution",
+ "s"
+ ],
+ [
+ "f",
+ "d"
+ ],
+ [
+ "▁w",
+ "ind"
+ ],
+ [
+ "▁win",
+ "d"
+ ],
+ [
+ "▁",
+ "wind"
+ ],
+ [
+ "▁б",
+ "о"
+ ],
+ [
+ "▁",
+ "бо"
+ ],
+ [
+ "▁ab",
+ "ility"
+ ],
+ [
+ "▁",
+ "ability"
+ ],
+ [
+ "ca",
+ "rd"
+ ],
+ [
+ "car",
+ "d"
+ ],
+ [
+ "c",
+ "ard"
+ ],
+ [
+ "▁n",
+ "umer"
+ ],
+ [
+ "▁num",
+ "er"
+ ],
+ [
+ "▁nu",
+ "mer"
+ ],
+ [
+ "▁",
+ "numer"
+ ],
+ [
+ "add",
+ "ress"
+ ],
+ [
+ "addr",
+ "ess"
+ ],
+ [
+ "▁go",
+ "al"
+ ],
+ [
+ "ash",
+ "ington"
+ ],
+ [
+ "ashing",
+ "ton"
+ ],
+ [
+ "▁s",
+ "light"
+ ],
+ [
+ "▁sl",
+ "ight"
+ ],
+ [
+ "ab",
+ "a"
+ ],
+ [
+ "a",
+ "ba"
+ ],
+ [
+ "▁L",
+ "og"
+ ],
+ [
+ "▁Lo",
+ "g"
+ ],
+ [
+ "▁",
+ "Log"
+ ],
+ [
+ "Set",
+ "tings"
+ ],
+ [
+ "Setting",
+ "s"
+ ],
+ [
+ "ad",
+ "ow"
+ ],
+ [
+ "ado",
+ "w"
+ ],
+ [
+ "▁p",
+ "i"
+ ],
+ [
+ "▁",
+ "pi"
+ ],
+ [
+ "ir",
+ "ing"
+ ],
+ [
+ "iri",
+ "ng"
+ ],
+ [
+ "i",
+ "ring"
+ ],
+ [
+ "F",
+ "T"
+ ],
+ [
+ "▁number",
+ "s"
+ ],
+ [
+ "▁num",
+ "bers"
+ ],
+ [
+ "con",
+ "f"
+ ],
+ [
+ "co",
+ "nf"
+ ],
+ [
+ "ta",
+ "sk"
+ ],
+ [
+ "t",
+ "ask"
+ ],
+ [
+ "▁î",
+ "n"
+ ],
+ [
+ "т",
+ "ы"
+ ],
+ [
+ "▁re",
+ "ceive"
+ ],
+ [
+ "▁rece",
+ "ive"
+ ],
+ [
+ "▁r",
+ "oot"
+ ],
+ [
+ "▁ro",
+ "ot"
+ ],
+ [
+ "▁",
+ "root"
+ ],
+ [
+ "▁Ind",
+ "ia"
+ ],
+ [
+ "pat",
+ "ch"
+ ],
+ [
+ "p",
+ "atch"
+ ],
+ [
+ "é",
+ "l"
+ ],
+ [
+ "▁sum",
+ "mer"
+ ],
+ [
+ "▁method",
+ "s"
+ ],
+ [
+ "▁",
+ "methods"
+ ],
+ [
+ "▁pl",
+ "aces"
+ ],
+ [
+ "▁place",
+ "s"
+ ],
+ [
+ "▁plac",
+ "es"
+ ],
+ [
+ "▁М",
+ "а"
+ ],
+ [
+ "▁",
+ "Ма"
+ ],
+ [
+ "▁cap",
+ "ital"
+ ],
+ [
+ "▁capit",
+ "al"
+ ],
+ [
+ "▁ev",
+ "idence"
+ ],
+ [
+ "▁G",
+ "erman"
+ ],
+ [
+ "▁Germ",
+ "an"
+ ],
+ [
+ "▁Ger",
+ "man"
+ ],
+ [
+ "\\",
+ ","
+ ],
+ [
+ "D",
+ "A"
+ ],
+ [
+ "ec",
+ "ute"
+ ],
+ [
+ "ecut",
+ "e"
+ ],
+ [
+ "col",
+ "umn"
+ ],
+ [
+ "▁fun",
+ "ctions"
+ ],
+ [
+ "▁function",
+ "s"
+ ],
+ [
+ "▁",
+ "functions"
+ ],
+ [
+ "▁c",
+ "ounter"
+ ],
+ [
+ "▁co",
+ "unter"
+ ],
+ [
+ "▁coun",
+ "ter"
+ ],
+ [
+ "▁count",
+ "er"
+ ],
+ [
+ "▁",
+ "counter"
+ ],
+ [
+ "▁ar",
+ "ms"
+ ],
+ [
+ "▁arm",
+ "s"
+ ],
+ [
+ "▁",
+ "arms"
+ ],
+ [
+ "▁f",
+ "eed"
+ ],
+ [
+ "▁fe",
+ "ed"
+ ],
+ [
+ "▁fee",
+ "d"
+ ],
+ [
+ "▁",
+ "feed"
+ ],
+ [
+ "ve",
+ "y"
+ ],
+ [
+ "v",
+ "ey"
+ ],
+ [
+ "he",
+ "nt"
+ ],
+ [
+ "hen",
+ "t"
+ ],
+ [
+ "h",
+ "ent"
+ ],
+ [
+ "MA",
+ "X"
+ ],
+ [
+ "M",
+ "AX"
+ ],
+ [
+ "▁ac",
+ "qu"
+ ],
+ [
+ "▁app",
+ "ly"
+ ],
+ [
+ "▁ap",
+ "ply"
+ ],
+ [
+ "▁appl",
+ "y"
+ ],
+ [
+ "▁",
+ "apply"
+ ],
+ [
+ "▁hus",
+ "band"
+ ],
+ [
+ "▁k",
+ "illed"
+ ],
+ [
+ "▁kill",
+ "ed"
+ ],
+ [
+ "▁kil",
+ "led"
+ ],
+ [
+ "▁S",
+ "pec"
+ ],
+ [
+ "▁Sp",
+ "ec"
+ ],
+ [
+ "▁Spe",
+ "c"
+ ],
+ [
+ "▁",
+ "Spec"
+ ],
+ [
+ "ent",
+ "ity"
+ ],
+ [
+ "enti",
+ "ty"
+ ],
+ [
+ "▁e",
+ "arlier"
+ ],
+ [
+ "▁M",
+ "iss"
+ ],
+ [
+ "▁Mi",
+ "ss"
+ ],
+ [
+ "▁Mis",
+ "s"
+ ],
+ [
+ "▁",
+ "Miss"
+ ],
+ [
+ "▁set",
+ "ting"
+ ],
+ [
+ "▁sett",
+ "ing"
+ ],
+ [
+ "▁",
+ "setting"
+ ],
+ [
+ "it",
+ "ect"
+ ],
+ [
+ "ite",
+ "ct"
+ ],
+ [
+ "▁d",
+ "ed"
+ ],
+ [
+ "▁de",
+ "d"
+ ],
+ [
+ "▁",
+ "ded"
+ ],
+ [
+ "Ro",
+ "w"
+ ],
+ [
+ "R",
+ "ow"
+ ],
+ [
+ "▁r",
+ "an"
+ ],
+ [
+ "▁ra",
+ "n"
+ ],
+ [
+ "▁",
+ "ran"
+ ],
+ [
+ "▁Y",
+ "es"
+ ],
+ [
+ "▁Ye",
+ "s"
+ ],
+ [
+ "▁",
+ "Yes"
+ ],
+ [
+ "▁fin",
+ "ancial"
+ ],
+ [
+ "▁financ",
+ "ial"
+ ],
+ [
+ "s",
+ "ession"
+ ],
+ [
+ "le",
+ "ar"
+ ],
+ [
+ "l",
+ "ear"
+ ],
+ [
+ "is",
+ "hing"
+ ],
+ [
+ "ish",
+ "ing"
+ ],
+ [
+ "ishi",
+ "ng"
+ ],
+ [
+ "▁ne",
+ "arly"
+ ],
+ [
+ "▁near",
+ "ly"
+ ],
+ [
+ "▁d",
+ "ur"
+ ],
+ [
+ "▁du",
+ "r"
+ ],
+ [
+ "▁m",
+ "achine"
+ ],
+ [
+ "▁mach",
+ "ine"
+ ],
+ [
+ "▁",
+ "machine"
+ ],
+ [
+ "xf",
+ "f"
+ ],
+ [
+ "x",
+ "ff"
+ ],
+ [
+ "br",
+ "o"
+ ],
+ [
+ "b",
+ "ro"
+ ],
+ [
+ "▁s",
+ "ymbol"
+ ],
+ [
+ "▁sym",
+ "bol"
+ ],
+ [
+ "▁",
+ "symbol"
+ ],
+ [
+ "land",
+ "s"
+ ],
+ [
+ "lan",
+ "ds"
+ ],
+ [
+ "l",
+ "ands"
+ ],
+ [
+ "Ac",
+ "c"
+ ],
+ [
+ "A",
+ "cc"
+ ],
+ [
+ "d",
+ "i"
+ ],
+ [
+ "▁Rober",
+ "t"
+ ],
+ [
+ "▁Ro",
+ "bert"
+ ],
+ [
+ "▁Rob",
+ "ert"
+ ],
+ [
+ "pro",
+ "p"
+ ],
+ [
+ "pr",
+ "op"
+ ],
+ [
+ "p",
+ "rop"
+ ],
+ [
+ "ur",
+ "ity"
+ ],
+ [
+ "uri",
+ "ty"
+ ],
+ [
+ "▁#",
+ "####"
+ ],
+ [
+ "▁##",
+ "###"
+ ],
+ [
+ "▁###",
+ "##"
+ ],
+ [
+ "▁####",
+ "#"
+ ],
+ [
+ "▁walk",
+ "ed"
+ ],
+ [
+ "▁wal",
+ "ked"
+ ],
+ [
+ "▁intern",
+ "ational"
+ ],
+ [
+ "▁internation",
+ "al"
+ ],
+ [
+ "▁",
+ "Е"
+ ],
+ [
+ "Y",
+ "es"
+ ],
+ [
+ "▁re",
+ "lease"
+ ],
+ [
+ "▁rele",
+ "ase"
+ ],
+ [
+ "▁",
+ "release"
+ ],
+ [
+ "▁start",
+ "ing"
+ ],
+ [
+ "▁star",
+ "ting"
+ ],
+ [
+ "st",
+ "atic"
+ ],
+ [
+ "stat",
+ "ic"
+ ],
+ [
+ "▁b",
+ "ei"
+ ],
+ [
+ "▁be",
+ "i"
+ ],
+ [
+ "al",
+ "low"
+ ],
+ [
+ "all",
+ "ow"
+ ],
+ [
+ "allo",
+ "w"
+ ],
+ [
+ "▁Pe",
+ "ople"
+ ],
+ [
+ "▁",
+ "People"
+ ],
+ [
+ "e",
+ "z"
+ ],
+ [
+ "▁param",
+ "eter"
+ ],
+ [
+ "▁",
+ "parameter"
+ ],
+ [
+ "C",
+ "ache"
+ ],
+ [
+ "▁$",
+ "$"
+ ],
+ [
+ "▁",
+ "$$"
+ ],
+ [
+ "amp",
+ "ions"
+ ],
+ [
+ "ampion",
+ "s"
+ ],
+ [
+ "▁M",
+ "er"
+ ],
+ [
+ "▁Me",
+ "r"
+ ],
+ [
+ "▁",
+ "Mer"
+ ],
+ [
+ "▁k",
+ "om"
+ ],
+ [
+ "▁ko",
+ "m"
+ ],
+ [
+ "▁",
+ "kom"
+ ],
+ [
+ "le",
+ "ted"
+ ],
+ [
+ "let",
+ "ed"
+ ],
+ [
+ "lete",
+ "d"
+ ],
+ [
+ "l",
+ "eted"
+ ],
+ [
+ "oi",
+ "s"
+ ],
+ [
+ "o",
+ "is"
+ ],
+ [
+ "▁O",
+ "pen"
+ ],
+ [
+ "▁Op",
+ "en"
+ ],
+ [
+ "▁",
+ "Open"
+ ],
+ [
+ "ty",
+ "pes"
+ ],
+ [
+ "type",
+ "s"
+ ],
+ [
+ "typ",
+ "es"
+ ],
+ [
+ "t",
+ "ypes"
+ ],
+ [
+ "▁f",
+ "ue"
+ ],
+ [
+ "▁fu",
+ "e"
+ ],
+ [
+ "ac",
+ "ters"
+ ],
+ [
+ "act",
+ "ers"
+ ],
+ [
+ "acter",
+ "s"
+ ],
+ [
+ "▁re",
+ "ference"
+ ],
+ [
+ "▁refer",
+ "ence"
+ ],
+ [
+ "▁",
+ "reference"
+ ],
+ [
+ "Equ",
+ "als"
+ ],
+ [
+ "Equal",
+ "s"
+ ],
+ [
+ "Eq",
+ "uals"
+ ],
+ [
+ "▁a",
+ "ware"
+ ],
+ [
+ "▁aw",
+ "are"
+ ],
+ [
+ "▁",
+ "aware"
+ ],
+ [
+ "▁h",
+ "ol"
+ ],
+ [
+ "▁ho",
+ "l"
+ ],
+ [
+ "▁",
+ "hol"
+ ],
+ [
+ "▁de",
+ "mand"
+ ],
+ [
+ "▁dem",
+ "and"
+ ],
+ [
+ "lo",
+ "r"
+ ],
+ [
+ "l",
+ "or"
+ ],
+ [
+ "▁v",
+ "eh"
+ ],
+ [
+ "▁ve",
+ "h"
+ ],
+ [
+ "▁",
+ "veh"
+ ],
+ [
+ "▁not",
+ "ice"
+ ],
+ [
+ "▁",
+ "notice"
+ ],
+ [
+ "▁com",
+ "ponent"
+ ],
+ [
+ "▁compon",
+ "ent"
+ ],
+ [
+ "▁",
+ "component"
+ ],
+ [
+ "f",
+ "n"
+ ],
+ [
+ "▁anal",
+ "ysis"
+ ],
+ [
+ "▁analy",
+ "sis"
+ ],
+ [
+ "▁analys",
+ "is"
+ ],
+ [
+ "▁",
+ "analysis"
+ ],
+ [
+ "mat",
+ "ch"
+ ],
+ [
+ "m",
+ "atch"
+ ],
+ [
+ "▁effect",
+ "ive"
+ ],
+ [
+ "▁",
+ "effective"
+ ],
+ [
+ "pro",
+ "duct"
+ ],
+ [
+ "produ",
+ "ct"
+ ],
+ [
+ "prod",
+ "uct"
+ ],
+ [
+ "ни",
+ "к"
+ ],
+ [
+ "▁le",
+ "gal"
+ ],
+ [
+ "▁leg",
+ "al"
+ ],
+ [
+ "▁",
+ "legal"
+ ],
+ [
+ "е",
+ "й"
+ ],
+ [
+ "se",
+ "mb"
+ ],
+ [
+ "sem",
+ "b"
+ ],
+ [
+ "s",
+ "emb"
+ ],
+ [
+ "▁loc",
+ "ated"
+ ],
+ [
+ "▁locate",
+ "d"
+ ],
+ [
+ "▁с",
+ "у"
+ ],
+ [
+ "▁",
+ "су"
+ ],
+ [
+ "Q",
+ "L"
+ ],
+ [
+ "in",
+ "ct"
+ ],
+ [
+ "inc",
+ "t"
+ ],
+ [
+ "et",
+ "o"
+ ],
+ [
+ "e",
+ "to"
+ ],
+ [
+ "Dr",
+ "aw"
+ ],
+ [
+ "D",
+ "raw"
+ ],
+ [
+ "▁sc",
+ "ale"
+ ],
+ [
+ "▁scal",
+ "e"
+ ],
+ [
+ "▁",
+ "scale"
+ ],
+ [
+ "ро",
+ "в"
+ ],
+ [
+ "р",
+ "ов"
+ ],
+ [
+ "▁w",
+ "ants"
+ ],
+ [
+ "▁want",
+ "s"
+ ],
+ [
+ "H",
+ "ow"
+ ],
+ [
+ "▁w",
+ "el"
+ ],
+ [
+ "▁we",
+ "l"
+ ],
+ [
+ "is",
+ "ions"
+ ],
+ [
+ "ision",
+ "s"
+ ],
+ [
+ "isi",
+ "ons"
+ ],
+ [
+ "▁de",
+ "liver"
+ ],
+ [
+ "▁del",
+ "iver"
+ ],
+ [
+ "un",
+ "der"
+ ],
+ [
+ "und",
+ "er"
+ ],
+ [
+ "unde",
+ "r"
+ ],
+ [
+ "u",
+ "nder"
+ ],
+ [
+ "▁d",
+ "eb"
+ ],
+ [
+ "▁de",
+ "b"
+ ],
+ [
+ "▁j",
+ "u"
+ ],
+ [
+ "▁",
+ "ju"
+ ],
+ [
+ "val",
+ "ues"
+ ],
+ [
+ "value",
+ "s"
+ ],
+ [
+ "▁s",
+ "ister"
+ ],
+ [
+ "▁si",
+ "ster"
+ ],
+ [
+ "▁sist",
+ "er"
+ ],
+ [
+ "ко",
+ "в"
+ ],
+ [
+ "к",
+ "ов"
+ ],
+ [
+ "▁C",
+ "reate"
+ ],
+ [
+ "▁Creat",
+ "e"
+ ],
+ [
+ "▁Cre",
+ "ate"
+ ],
+ [
+ "▁",
+ "Create"
+ ],
+ [
+ "▁I",
+ "nc"
+ ],
+ [
+ "▁In",
+ "c"
+ ],
+ [
+ "▁a",
+ "ux"
+ ],
+ [
+ "▁au",
+ "x"
+ ],
+ [
+ "▁",
+ "aux"
+ ],
+ [
+ "▁Wh",
+ "ite"
+ ],
+ [
+ "▁Whit",
+ "e"
+ ],
+ [
+ "▁",
+ "White"
+ ],
+ [
+ "Me",
+ "nu"
+ ],
+ [
+ "Men",
+ "u"
+ ],
+ [
+ "M",
+ "enu"
+ ],
+ [
+ "au",
+ "d"
+ ],
+ [
+ "a",
+ "ud"
+ ],
+ [
+ "re",
+ "source"
+ ],
+ [
+ "res",
+ "ource"
+ ],
+ [
+ "▁c",
+ "ab"
+ ],
+ [
+ "▁ca",
+ "b"
+ ],
+ [
+ "▁l",
+ "if"
+ ],
+ [
+ "▁li",
+ "f"
+ ],
+ [
+ "▁",
+ "lif"
+ ],
+ [
+ "▁c",
+ "ulture"
+ ],
+ [
+ "▁cult",
+ "ure"
+ ],
+ [
+ "ic",
+ "he"
+ ],
+ [
+ "ich",
+ "e"
+ ],
+ [
+ "i",
+ "che"
+ ],
+ [
+ "▁wh",
+ "atever"
+ ],
+ [
+ "▁what",
+ "ever"
+ ],
+ [
+ "▁de",
+ "signed"
+ ],
+ [
+ "▁des",
+ "igned"
+ ],
+ [
+ "▁design",
+ "ed"
+ ],
+ [
+ "▁re",
+ "pe"
+ ],
+ [
+ "▁rep",
+ "e"
+ ],
+ [
+ "▁M",
+ "ont"
+ ],
+ [
+ "▁Mon",
+ "t"
+ ],
+ [
+ "▁Mo",
+ "nt"
+ ],
+ [
+ "▁",
+ "Mont"
+ ],
+ [
+ "▁ch",
+ "arge"
+ ],
+ [
+ "▁char",
+ "ge"
+ ],
+ [
+ "▁charg",
+ "e"
+ ],
+ [
+ "▁",
+ "charge"
+ ],
+ [
+ "Name",
+ "s"
+ ],
+ [
+ "Na",
+ "mes"
+ ],
+ [
+ "N",
+ "ames"
+ ],
+ [
+ "▁in",
+ "sp"
+ ],
+ [
+ "▁ins",
+ "p"
+ ],
+ [
+ "▁custom",
+ "ers"
+ ],
+ [
+ "▁customer",
+ "s"
+ ],
+ [
+ "os",
+ "a"
+ ],
+ [
+ "o",
+ "sa"
+ ],
+ [
+ "▁d",
+ "aughter"
+ ],
+ [
+ "▁E",
+ "ast"
+ ],
+ [
+ "E",
+ "Q"
+ ],
+ [
+ "▁o",
+ "pin"
+ ],
+ [
+ "▁op",
+ "in"
+ ],
+ [
+ "▁F",
+ "re"
+ ],
+ [
+ "▁Fr",
+ "e"
+ ],
+ [
+ "▁se",
+ "ek"
+ ],
+ [
+ "▁see",
+ "k"
+ ],
+ [
+ "▁",
+ "seek"
+ ],
+ [
+ "▁p",
+ "ush"
+ ],
+ [
+ "▁pu",
+ "sh"
+ ],
+ [
+ "▁",
+ "push"
+ ],
+ [
+ "▁n",
+ "av"
+ ],
+ [
+ "▁na",
+ "v"
+ ],
+ [
+ "▁",
+ "nav"
+ ],
+ [
+ "▁b",
+ "urn"
+ ],
+ [
+ "▁bu",
+ "rn"
+ ],
+ [
+ "▁bur",
+ "n"
+ ],
+ [
+ "▁",
+ "burn"
+ ],
+ [
+ "ar",
+ "den"
+ ],
+ [
+ "ard",
+ "en"
+ ],
+ [
+ "arde",
+ "n"
+ ],
+ [
+ "ha",
+ "sh"
+ ],
+ [
+ "has",
+ "h"
+ ],
+ [
+ "h",
+ "ash"
+ ],
+ [
+ "▁opportun",
+ "ity"
+ ],
+ [
+ "▁M",
+ "at"
+ ],
+ [
+ "▁Ma",
+ "t"
+ ],
+ [
+ "▁",
+ "Mat"
+ ],
+ [
+ "oy",
+ "al"
+ ],
+ [
+ "oya",
+ "l"
+ ],
+ [
+ "o",
+ "yal"
+ ],
+ [
+ "▁p",
+ "un"
+ ],
+ [
+ "▁pu",
+ "n"
+ ],
+ [
+ "sc",
+ "ale"
+ ],
+ [
+ "scal",
+ "e"
+ ],
+ [
+ "yn",
+ "amic"
+ ],
+ [
+ "ynam",
+ "ic"
+ ],
+ [
+ "yna",
+ "mic"
+ ],
+ [
+ "▁T",
+ "ype"
+ ],
+ [
+ "▁Ty",
+ "pe"
+ ],
+ [
+ "▁Typ",
+ "e"
+ ],
+ [
+ "▁",
+ "Type"
+ ],
+ [
+ "il",
+ "ing"
+ ],
+ [
+ "ili",
+ "ng"
+ ],
+ [
+ "i",
+ "ling"
+ ],
+ [
+ "▁qu",
+ "ery"
+ ],
+ [
+ "▁que",
+ "ry"
+ ],
+ [
+ "▁quer",
+ "y"
+ ],
+ [
+ "▁",
+ "query"
+ ],
+ [
+ "▁m",
+ "ist"
+ ],
+ [
+ "▁mis",
+ "t"
+ ],
+ [
+ "▁mi",
+ "st"
+ ],
+ [
+ "ro",
+ "r"
+ ],
+ [
+ "r",
+ "or"
+ ],
+ [
+ "for",
+ "ce"
+ ],
+ [
+ "▁On",
+ "ce"
+ ],
+ [
+ "▁",
+ "Once"
+ ],
+ [
+ "▁med",
+ "ical"
+ ],
+ [
+ "▁medic",
+ "al"
+ ],
+ [
+ "▁medi",
+ "cal"
+ ],
+ [
+ "li",
+ "e"
+ ],
+ [
+ "l",
+ "ie"
+ ],
+ [
+ "▁stud",
+ "ent"
+ ],
+ [
+ "▁",
+ "student"
+ ],
+ [
+ "ed",
+ "eral"
+ ],
+ [
+ "eder",
+ "al"
+ ],
+ [
+ "ede",
+ "ral"
+ ],
+ [
+ "▁l",
+ "ov"
+ ],
+ [
+ "▁lo",
+ "v"
+ ],
+ [
+ "▁",
+ "lov"
+ ],
+ [
+ "if",
+ "orm"
+ ],
+ [
+ "i",
+ "form"
+ ],
+ [
+ "▁al",
+ "tern"
+ ],
+ [
+ "▁alt",
+ "ern"
+ ],
+ [
+ "▁alter",
+ "n"
+ ],
+ [
+ "▁",
+ "altern"
+ ],
+ [
+ "bi",
+ "n"
+ ],
+ [
+ "b",
+ "in"
+ ],
+ [
+ "od",
+ "er"
+ ],
+ [
+ "ode",
+ "r"
+ ],
+ [
+ "o",
+ "der"
+ ],
+ [
+ "▁return",
+ "s"
+ ],
+ [
+ "▁",
+ "returns"
+ ],
+ [
+ "reg",
+ "ister"
+ ],
+ [
+ "ut",
+ "s"
+ ],
+ [
+ "u",
+ "ts"
+ ],
+ [
+ "C",
+ "I"
+ ],
+ [
+ "▁T",
+ "or"
+ ],
+ [
+ "▁To",
+ "r"
+ ],
+ [
+ "▁",
+ "Tor"
+ ],
+ [
+ "C",
+ "R"
+ ],
+ [
+ "▁L",
+ "os"
+ ],
+ [
+ "▁Lo",
+ "s"
+ ],
+ [
+ "▁",
+ "Los"
+ ],
+ [
+ "am",
+ "ily"
+ ],
+ [
+ "ami",
+ "ly"
+ ],
+ [
+ "amil",
+ "y"
+ ],
+ [
+ "air",
+ "e"
+ ],
+ [
+ "ai",
+ "re"
+ ],
+ [
+ "a",
+ "ire"
+ ],
+ [
+ "++",
+ ";"
+ ],
+ [
+ "Cont",
+ "roller"
+ ],
+ [
+ "Control",
+ "ler"
+ ],
+ [
+ "wi",
+ "de"
+ ],
+ [
+ "wid",
+ "e"
+ ],
+ [
+ "w",
+ "ide"
+ ],
+ [
+ "x",
+ "x"
+ ],
+ [
+ "row",
+ "ser"
+ ],
+ [
+ "rows",
+ "er"
+ ],
+ [
+ "▁B",
+ "ook"
+ ],
+ [
+ "▁Bo",
+ "ok"
+ ],
+ [
+ "▁",
+ "Book"
+ ],
+ [
+ "Cont",
+ "ainer"
+ ],
+ [
+ "pl",
+ "oad"
+ ],
+ [
+ "plo",
+ "ad"
+ ],
+ [
+ "p",
+ "load"
+ ],
+ [
+ "▁E",
+ "v"
+ ],
+ [
+ "▁",
+ "Ev"
+ ],
+ [
+ "▁t",
+ "al"
+ ],
+ [
+ "▁ta",
+ "l"
+ ],
+ [
+ "▁",
+ "tal"
+ ],
+ [
+ "▁the",
+ "ory"
+ ],
+ [
+ "eqn",
+ "array"
+ ],
+ [
+ "б",
+ "е"
+ ],
+ [
+ "▁rep",
+ "orted"
+ ],
+ [
+ "▁report",
+ "ed"
+ ],
+ [
+ "▁me",
+ "aning"
+ ],
+ [
+ "▁mean",
+ "ing"
+ ],
+ [
+ "▁s",
+ "y"
+ ],
+ [
+ "▁",
+ "sy"
+ ],
+ [
+ "ri",
+ "be"
+ ],
+ [
+ "rib",
+ "e"
+ ],
+ [
+ "r",
+ "ibe"
+ ],
+ [
+ "ic",
+ "ate"
+ ],
+ [
+ "ica",
+ "te"
+ ],
+ [
+ "ho",
+ "ld"
+ ],
+ [
+ "hol",
+ "d"
+ ],
+ [
+ "h",
+ "old"
+ ],
+ [
+ "▁of",
+ "fers"
+ ],
+ [
+ "▁off",
+ "ers"
+ ],
+ [
+ "▁offer",
+ "s"
+ ],
+ [
+ "▁t",
+ "empl"
+ ],
+ [
+ "▁tem",
+ "pl"
+ ],
+ [
+ "▁temp",
+ "l"
+ ],
+ [
+ "cs",
+ "s"
+ ],
+ [
+ "c",
+ "ss"
+ ],
+ [
+ "▁p",
+ "icture"
+ ],
+ [
+ "▁pict",
+ "ure"
+ ],
+ [
+ "▁",
+ "picture"
+ ],
+ [
+ "▁a",
+ "sync"
+ ],
+ [
+ "▁as",
+ "ync"
+ ],
+ [
+ "▁",
+ "async"
+ ],
+ [
+ "▁st",
+ "ock"
+ ],
+ [
+ "▁sto",
+ "ck"
+ ],
+ [
+ "▁",
+ "stock"
+ ],
+ [
+ "▁in",
+ "ternal"
+ ],
+ [
+ "▁inter",
+ "nal"
+ ],
+ [
+ "▁intern",
+ "al"
+ ],
+ [
+ "▁",
+ "internal"
+ ],
+ [
+ "t",
+ "i"
+ ],
+ [
+ "B",
+ "O"
+ ],
+ [
+ "V",
+ "er"
+ ],
+ [
+ "с",
+ "по"
+ ],
+ [
+ "▁d",
+ "emon"
+ ],
+ [
+ "▁de",
+ "mon"
+ ],
+ [
+ "▁dem",
+ "on"
+ ],
+ [
+ "▁demo",
+ "n"
+ ],
+ [
+ "▁l",
+ "augh"
+ ],
+ [
+ "▁la",
+ "ugh"
+ ],
+ [
+ "▁laug",
+ "h"
+ ],
+ [
+ "▁E",
+ "nd"
+ ],
+ [
+ "▁En",
+ "d"
+ ],
+ [
+ "▁",
+ "End"
+ ],
+ [
+ "▁k",
+ "on"
+ ],
+ [
+ "▁ko",
+ "n"
+ ],
+ [
+ "▁",
+ "kon"
+ ],
+ [
+ "▁ide",
+ "as"
+ ],
+ [
+ "▁idea",
+ "s"
+ ],
+ [
+ "▁c",
+ "andid"
+ ],
+ [
+ "▁can",
+ "did"
+ ],
+ [
+ "▁cand",
+ "id"
+ ],
+ [
+ "Me",
+ "m"
+ ],
+ [
+ "M",
+ "em"
+ ],
+ [
+ "iz",
+ "z"
+ ],
+ [
+ "i",
+ "zz"
+ ],
+ [
+ "re",
+ "fix"
+ ],
+ [
+ "ref",
+ "ix"
+ ],
+ [
+ "▁A",
+ "ND"
+ ],
+ [
+ "▁AN",
+ "D"
+ ],
+ [
+ "▁",
+ "AND"
+ ],
+ [
+ "eg",
+ "en"
+ ],
+ [
+ "e",
+ "gen"
+ ],
+ [
+ "E",
+ "l"
+ ],
+ [
+ "▁camp",
+ "aign"
+ ],
+ [
+ "H",
+ "ttp"
+ ],
+ [
+ "▁R",
+ "ob"
+ ],
+ [
+ "▁Ro",
+ "b"
+ ],
+ [
+ "▁",
+ "Rob"
+ ],
+ [
+ "д",
+ "і"
+ ],
+ [
+ "▁b",
+ "ul"
+ ],
+ [
+ "▁bu",
+ "l"
+ ],
+ [
+ "▁",
+ "bul"
+ ],
+ [
+ "▁К",
+ "о"
+ ],
+ [
+ "▁",
+ "Ко"
+ ],
+ [
+ "▁count",
+ "ries"
+ ],
+ [
+ "▁countr",
+ "ies"
+ ],
+ [
+ "»",
+ "."
+ ],
+ [
+ "▁ex",
+ "pression"
+ ],
+ [
+ "▁exp",
+ "ression"
+ ],
+ [
+ "▁express",
+ "ion"
+ ],
+ [
+ "▁expr",
+ "ession"
+ ],
+ [
+ "▁",
+ "expression"
+ ],
+ [
+ "▁Eng",
+ "land"
+ ],
+ [
+ "s",
+ "f"
+ ],
+ [
+ "▁certain",
+ "ly"
+ ],
+ [
+ "ag",
+ "en"
+ ],
+ [
+ "age",
+ "n"
+ ],
+ [
+ "a",
+ "gen"
+ ],
+ [
+ "▁ч",
+ "а"
+ ],
+ [
+ "▁",
+ "ча"
+ ],
+ [
+ "▁A",
+ "NY"
+ ],
+ [
+ "▁AN",
+ "Y"
+ ],
+ [
+ "▁",
+ "ANY"
+ ],
+ [
+ "▁conne",
+ "ct"
+ ],
+ [
+ "▁conn",
+ "ect"
+ ],
+ [
+ "▁",
+ "connect"
+ ],
+ [
+ "F",
+ "E"
+ ],
+ [
+ "▁and",
+ "roid"
+ ],
+ [
+ "▁",
+ "android"
+ ],
+ [
+ "▁G",
+ "old"
+ ],
+ [
+ "▁Go",
+ "ld"
+ ],
+ [
+ "▁Gol",
+ "d"
+ ],
+ [
+ "▁",
+ "Gold"
+ ],
+ [
+ "▁op",
+ "pos"
+ ],
+ [
+ "▁opp",
+ "os"
+ ],
+ [
+ "ov",
+ "ern"
+ ],
+ [
+ "ove",
+ "rn"
+ ],
+ [
+ "over",
+ "n"
+ ],
+ [
+ "o",
+ "vern"
+ ],
+ [
+ "▁Com",
+ "mun"
+ ],
+ [
+ "▁Comm",
+ "un"
+ ],
+ [
+ ",",
+ "_"
+ ],
+ [
+ "as",
+ "ion"
+ ],
+ [
+ "asi",
+ "on"
+ ],
+ [
+ "L",
+ "a"
+ ],
+ [
+ "▁f",
+ "irm"
+ ],
+ [
+ "▁fi",
+ "rm"
+ ],
+ [
+ "▁fir",
+ "m"
+ ],
+ [
+ "▁Al",
+ "though"
+ ],
+ [
+ "▁G",
+ "ood"
+ ],
+ [
+ "▁Go",
+ "od"
+ ],
+ [
+ "▁",
+ "Good"
+ ],
+ [
+ "▁L",
+ "aw"
+ ],
+ [
+ "▁La",
+ "w"
+ ],
+ [
+ "er",
+ "ve"
+ ],
+ [
+ "erv",
+ "e"
+ ],
+ [
+ "▁b",
+ "rand"
+ ],
+ [
+ "▁br",
+ "and"
+ ],
+ [
+ "▁bra",
+ "nd"
+ ],
+ [
+ "▁",
+ "brand"
+ ],
+ [
+ "M",
+ "in"
+ ],
+ [
+ "fil",
+ "l"
+ ],
+ [
+ "fi",
+ "ll"
+ ],
+ [
+ "f",
+ "ill"
+ ],
+ [
+ "']",
+ ","
+ ],
+ [
+ "'",
+ "],"
+ ],
+ [
+ "▁J",
+ "ew"
+ ],
+ [
+ "▁Je",
+ "w"
+ ],
+ [
+ "il",
+ "er"
+ ],
+ [
+ "ile",
+ "r"
+ ],
+ [
+ "i",
+ "ler"
+ ],
+ [
+ "in",
+ "gle"
+ ],
+ [
+ "ing",
+ "le"
+ ],
+ [
+ "it",
+ "hub"
+ ],
+ [
+ "ith",
+ "ub"
+ ],
+ [
+ "▁D",
+ "iv"
+ ],
+ [
+ "▁Di",
+ "v"
+ ],
+ [
+ "▁",
+ "Div"
+ ],
+ [
+ "▁c",
+ "ert"
+ ],
+ [
+ "▁ce",
+ "rt"
+ ],
+ [
+ "▁cer",
+ "t"
+ ],
+ [
+ "▁",
+ "cert"
+ ],
+ [
+ "He",
+ "ight"
+ ],
+ [
+ "H",
+ "eight"
+ ],
+ [
+ "ra",
+ "el"
+ ],
+ [
+ "r",
+ "ael"
+ ],
+ [
+ "The",
+ "re"
+ ],
+ [
+ "Th",
+ "ere"
+ ],
+ [
+ "T",
+ "here"
+ ],
+ [
+ "it",
+ "ute"
+ ],
+ [
+ "itut",
+ "e"
+ ],
+ [
+ "itu",
+ "te"
+ ],
+ [
+ "▁a",
+ "maz"
+ ],
+ [
+ "▁am",
+ "az"
+ ],
+ [
+ "▁",
+ "amaz"
+ ],
+ [
+ "lo",
+ "ok"
+ ],
+ [
+ "l",
+ "ook"
+ ],
+ [
+ "▁S",
+ "E"
+ ],
+ [
+ "▁",
+ "SE"
+ ],
+ [
+ "▁j",
+ "o"
+ ],
+ [
+ "▁",
+ "jo"
+ ],
+ [
+ "▁pull",
+ "ed"
+ ],
+ [
+ "▁pul",
+ "led"
+ ],
+ [
+ "▁re",
+ "sources"
+ ],
+ [
+ "▁res",
+ "ources"
+ ],
+ [
+ "▁resource",
+ "s"
+ ],
+ [
+ "▁",
+ "resources"
+ ],
+ [
+ "▁M",
+ "ax"
+ ],
+ [
+ "▁Ma",
+ "x"
+ ],
+ [
+ "▁",
+ "Max"
+ ],
+ [
+ "▁ag",
+ "reed"
+ ],
+ [
+ "▁agree",
+ "d"
+ ],
+ [
+ "▁agre",
+ "ed"
+ ],
+ [
+ "as",
+ "y"
+ ],
+ [
+ "a",
+ "sy"
+ ],
+ [
+ "▁treat",
+ "ment"
+ ],
+ [
+ "\">",
+ ""
+ ],
+ [
+ "\"><",
+ "/"
+ ],
+ [
+ "\"",
+ ">"
+ ],
+ [
+ "ма",
+ "н"
+ ],
+ [
+ "м",
+ "ан"
+ ],
+ [
+ "▁E",
+ "rr"
+ ],
+ [
+ "▁Er",
+ "r"
+ ],
+ [
+ "▁",
+ "Err"
+ ],
+ [
+ "or",
+ "ig"
+ ],
+ [
+ "ori",
+ "g"
+ ],
+ [
+ "o",
+ "rig"
+ ],
+ [
+ "co",
+ "s"
+ ],
+ [
+ "c",
+ "os"
+ ],
+ [
+ "▁May",
+ "be"
+ ],
+ [
+ "▁",
+ "Maybe"
+ ],
+ [
+ "ot",
+ "al"
+ ],
+ [
+ "ota",
+ "l"
+ ],
+ [
+ "o",
+ "tal"
+ ],
+ [
+ "▁tr",
+ "ain"
+ ],
+ [
+ "▁tra",
+ "in"
+ ],
+ [
+ "▁",
+ "train"
+ ],
+ [
+ "▁S",
+ "ervice"
+ ],
+ [
+ "▁Serv",
+ "ice"
+ ],
+ [
+ "▁",
+ "Service"
+ ],
+ [
+ "▁i",
+ "h"
+ ],
+ [
+ "▁",
+ "ih"
+ ],
+ [
+ "▁sp",
+ "irit"
+ ],
+ [
+ "▁spir",
+ "it"
+ ],
+ [
+ "Com",
+ "p"
+ ],
+ [
+ "Co",
+ "mp"
+ ],
+ [
+ "C",
+ "omp"
+ ],
+ [
+ "sq",
+ "rt"
+ ],
+ [
+ "▁b",
+ "road"
+ ],
+ [
+ "▁br",
+ "oad"
+ ],
+ [
+ "▁bro",
+ "ad"
+ ],
+ [
+ "▁",
+ "broad"
+ ],
+ [
+ "}",
+ "["
+ ],
+ [
+ "▁sh",
+ "ape"
+ ],
+ [
+ "▁sha",
+ "pe"
+ ],
+ [
+ "▁",
+ "shape"
+ ],
+ [
+ "▁d",
+ "oc"
+ ],
+ [
+ "▁do",
+ "c"
+ ],
+ [
+ "▁",
+ "doc"
+ ],
+ [
+ "ho",
+ "w"
+ ],
+ [
+ "h",
+ "ow"
+ ],
+ [
+ "▁t",
+ "ag"
+ ],
+ [
+ "▁ta",
+ "g"
+ ],
+ [
+ "▁",
+ "tag"
+ ],
+ [
+ "ata",
+ "log"
+ ],
+ [
+ "atal",
+ "og"
+ ],
+ [
+ "s",
+ "d"
+ ],
+ [
+ "▁me",
+ "as"
+ ],
+ [
+ "▁Р",
+ "о"
+ ],
+ [
+ "▁ex",
+ "ception"
+ ],
+ [
+ "▁except",
+ "ion"
+ ],
+ [
+ "▁",
+ "exception"
+ ],
+ [
+ "▁T",
+ "w"
+ ],
+ [
+ "▁",
+ "Tw"
+ ],
+ [
+ "▁interest",
+ "ing"
+ ],
+ [
+ "AT",
+ "A"
+ ],
+ [
+ "A",
+ "TA"
+ ],
+ [
+ "▁R",
+ "el"
+ ],
+ [
+ "▁Re",
+ "l"
+ ],
+ [
+ "▁",
+ "Rel"
+ ],
+ [
+ "á",
+ "r"
+ ],
+ [
+ "▁use",
+ "ful"
+ ],
+ [
+ "use",
+ "um"
+ ],
+ [
+ "▁b",
+ "ottom"
+ ],
+ [
+ "▁bott",
+ "om"
+ ],
+ [
+ "▁bot",
+ "tom"
+ ],
+ [
+ "▁",
+ "bottom"
+ ],
+ [
+ "▁other",
+ "wise"
+ ],
+ [
+ "▁ag",
+ "ree"
+ ],
+ [
+ "▁agre",
+ "e"
+ ],
+ [
+ "ch",
+ "t"
+ ],
+ [
+ "c",
+ "ht"
+ ],
+ [
+ "th",
+ "en"
+ ],
+ [
+ "the",
+ "n"
+ ],
+ [
+ "t",
+ "hen"
+ ],
+ [
+ "▁signific",
+ "ant"
+ ],
+ [
+ "}",
+ "/"
+ ],
+ [
+ "▁ch",
+ "annel"
+ ],
+ [
+ "▁",
+ "channel"
+ ],
+ [
+ "ic",
+ "ial"
+ ],
+ [
+ "ici",
+ "al"
+ ],
+ [
+ "icia",
+ "l"
+ ],
+ [
+ "i",
+ "cial"
+ ],
+ [
+ "ти",
+ "в"
+ ],
+ [
+ "var",
+ "e"
+ ],
+ [
+ "va",
+ "re"
+ ],
+ [
+ "v",
+ "are"
+ ],
+ [
+ "▁en",
+ "ter"
+ ],
+ [
+ "▁ent",
+ "er"
+ ],
+ [
+ "▁",
+ "enter"
+ ],
+ [
+ "En",
+ "g"
+ ],
+ [
+ "E",
+ "ng"
+ ],
+ [
+ "u",
+ "j"
+ ],
+ [
+ "UR",
+ "E"
+ ],
+ [
+ "U",
+ "RE"
+ ],
+ [
+ "que",
+ "ue"
+ ],
+ [
+ "on",
+ "o"
+ ],
+ [
+ "o",
+ "no"
+ ],
+ [
+ "▁cont",
+ "ains"
+ ],
+ [
+ "▁contain",
+ "s"
+ ],
+ [
+ "▁",
+ "contains"
+ ],
+ [
+ "M",
+ "I"
+ ],
+ [
+ "▁n",
+ "ation"
+ ],
+ [
+ "▁nat",
+ "ion"
+ ],
+ [
+ "▁r",
+ "ules"
+ ],
+ [
+ "▁rule",
+ "s"
+ ],
+ [
+ "▁ru",
+ "les"
+ ],
+ [
+ "▁rul",
+ "es"
+ ],
+ [
+ "▁",
+ "rules"
+ ],
+ [
+ "fo",
+ "l"
+ ],
+ [
+ "f",
+ "ol"
+ ],
+ [
+ "▁p",
+ "a"
+ ],
+ [
+ "▁",
+ "pa"
+ ],
+ [
+ "ar",
+ "p"
+ ],
+ [
+ "a",
+ "rp"
+ ],
+ [
+ "▁qu",
+ "iet"
+ ],
+ [
+ "▁qui",
+ "et"
+ ],
+ [
+ "▁t",
+ "hus"
+ ],
+ [
+ "▁th",
+ "us"
+ ],
+ [
+ "ip",
+ "ped"
+ ],
+ [
+ "ipp",
+ "ed"
+ ],
+ [
+ "i",
+ "pped"
+ ],
+ [
+ "an",
+ "not"
+ ],
+ [
+ "ann",
+ "ot"
+ ],
+ [
+ "anno",
+ "t"
+ ],
+ [
+ "ud",
+ "es"
+ ],
+ [
+ "ude",
+ "s"
+ ],
+ [
+ "u",
+ "des"
+ ],
+ [
+ "()",
+ ":"
+ ],
+ [
+ "(",
+ "):"
+ ],
+ [
+ "name",
+ "s"
+ ],
+ [
+ "na",
+ "mes"
+ ],
+ [
+ "nam",
+ "es"
+ ],
+ [
+ "n",
+ "ames"
+ ],
+ [
+ "▁com",
+ "pos"
+ ],
+ [
+ "▁comp",
+ "os"
+ ],
+ [
+ "▁in",
+ "j"
+ ],
+ [
+ "un",
+ "a"
+ ],
+ [
+ "u",
+ "na"
+ ],
+ [
+ "bin",
+ "d"
+ ],
+ [
+ "bi",
+ "nd"
+ ],
+ [
+ "b",
+ "ind"
+ ],
+ [
+ "▁f",
+ "ully"
+ ],
+ [
+ "▁full",
+ "y"
+ ],
+ [
+ "▁ful",
+ "ly"
+ ],
+ [
+ "▁",
+ "fully"
+ ],
+ [
+ "ra",
+ "s"
+ ],
+ [
+ "r",
+ "as"
+ ],
+ [
+ "Util",
+ "s"
+ ],
+ [
+ "Ut",
+ "ils"
+ ],
+ [
+ "an",
+ "ges"
+ ],
+ [
+ "ang",
+ "es"
+ ],
+ [
+ "ange",
+ "s"
+ ],
+ [
+ "du",
+ "le"
+ ],
+ [
+ "d",
+ "ule"
+ ],
+ [
+ "▁Christ",
+ "ian"
+ ],
+ [
+ "▁re",
+ "ve"
+ ],
+ [
+ "▁r",
+ "eve"
+ ],
+ [
+ "▁rev",
+ "e"
+ ],
+ [
+ "än",
+ "d"
+ ],
+ [
+ "ä",
+ "nd"
+ ],
+ [
+ "▁col",
+ "lect"
+ ],
+ [
+ "▁coll",
+ "ect"
+ ],
+ [
+ "▁colle",
+ "ct"
+ ],
+ [
+ "▁",
+ "collect"
+ ],
+ [
+ "▁cele",
+ "br"
+ ],
+ [
+ "an",
+ "da"
+ ],
+ [
+ "and",
+ "a"
+ ],
+ [
+ "í",
+ "n"
+ ],
+ [
+ "jo",
+ "in"
+ ],
+ [
+ "j",
+ "oin"
+ ],
+ [
+ "▁p",
+ "aid"
+ ],
+ [
+ "▁pa",
+ "id"
+ ],
+ [
+ "▁",
+ "paid"
+ ],
+ [
+ "Co",
+ "re"
+ ],
+ [
+ "Cor",
+ "e"
+ ],
+ [
+ "C",
+ "ore"
+ ],
+ [
+ "G",
+ "e"
+ ],
+ [
+ ".",
+ "$"
+ ],
+ [
+ "▁f",
+ "if"
+ ],
+ [
+ "▁fi",
+ "f"
+ ],
+ [
+ "▁",
+ "fif"
+ ],
+ [
+ "▁u",
+ "ma"
+ ],
+ [
+ "▁um",
+ "a"
+ ],
+ [
+ "▁",
+ "uma"
+ ],
+ [
+ "▁",
+ "~"
+ ],
+ [
+ "erv",
+ "ices"
+ ],
+ [
+ "ervice",
+ "s"
+ ],
+ [
+ "▁rec",
+ "ently"
+ ],
+ [
+ "▁recent",
+ "ly"
+ ],
+ [
+ "de",
+ "sc"
+ ],
+ [
+ "des",
+ "c"
+ ],
+ [
+ "d",
+ "esc"
+ ],
+ [
+ "▁he",
+ "avy"
+ ],
+ [
+ "▁heav",
+ "y"
+ ],
+ [
+ "▁r",
+ "ule"
+ ],
+ [
+ "▁ru",
+ "le"
+ ],
+ [
+ "▁rul",
+ "e"
+ ],
+ [
+ "▁",
+ "rule"
+ ],
+ [
+ "▁P",
+ "lease"
+ ],
+ [
+ "▁Ple",
+ "ase"
+ ],
+ [
+ "▁",
+ "Please"
+ ],
+ [
+ "ps",
+ "i"
+ ],
+ [
+ "p",
+ "si"
+ ],
+ [
+ "▁con",
+ "sole"
+ ],
+ [
+ "▁cons",
+ "ole"
+ ],
+ [
+ "▁",
+ "console"
+ ],
+ [
+ "▁f",
+ "ort"
+ ],
+ [
+ "▁for",
+ "t"
+ ],
+ [
+ "▁fo",
+ "rt"
+ ],
+ [
+ "▁",
+ "fort"
+ ],
+ [
+ ".",
+ "\\"
+ ],
+ [
+ "▁W",
+ "ashington"
+ ],
+ [
+ "▁g",
+ "ar"
+ ],
+ [
+ "▁ga",
+ "r"
+ ],
+ [
+ "▁",
+ "gar"
+ ],
+ [
+ "▁G",
+ "roup"
+ ],
+ [
+ "▁Gr",
+ "oup"
+ ],
+ [
+ "▁Gro",
+ "up"
+ ],
+ [
+ "▁",
+ "Group"
+ ],
+ [
+ "▁inter",
+ "view"
+ ],
+ [
+ "an",
+ "ned"
+ ],
+ [
+ "ann",
+ "ed"
+ ],
+ [
+ "anne",
+ "d"
+ ],
+ [
+ "sq",
+ "l"
+ ],
+ [
+ "s",
+ "ql"
+ ],
+ [
+ "▁a",
+ "nc"
+ ],
+ [
+ "▁an",
+ "c"
+ ],
+ [
+ "▁",
+ "anc"
+ ],
+ [
+ "ј",
+ "а"
+ ],
+ [
+ "P",
+ "ack"
+ ],
+ [
+ "▁Cl",
+ "ub"
+ ],
+ [
+ "▁m",
+ "ask"
+ ],
+ [
+ "▁ma",
+ "sk"
+ ],
+ [
+ "▁mas",
+ "k"
+ ],
+ [
+ "▁",
+ "mask"
+ ],
+ [
+ "▁con",
+ "cept"
+ ],
+ [
+ "▁conce",
+ "pt"
+ ],
+ [
+ "▁[",
+ "'"
+ ],
+ [
+ "▁",
+ "['"
+ ],
+ [
+ "▁se",
+ "lected"
+ ],
+ [
+ "▁select",
+ "ed"
+ ],
+ [
+ "▁sele",
+ "cted"
+ ],
+ [
+ "▁",
+ "selected"
+ ],
+ [
+ "▁U",
+ "se"
+ ],
+ [
+ "▁Us",
+ "e"
+ ],
+ [
+ "▁",
+ "Use"
+ ],
+ [
+ "▁e",
+ "le"
+ ],
+ [
+ "▁el",
+ "e"
+ ],
+ [
+ "▁",
+ "ele"
+ ],
+ [
+ "ear",
+ "s"
+ ],
+ [
+ "ea",
+ "rs"
+ ],
+ [
+ "e",
+ "ars"
+ ],
+ [
+ "▁r",
+ "ace"
+ ],
+ [
+ "▁rac",
+ "e"
+ ],
+ [
+ "▁ra",
+ "ce"
+ ],
+ [
+ "h",
+ "y"
+ ],
+ [
+ "O",
+ "m"
+ ],
+ [
+ "▁st",
+ "eps"
+ ],
+ [
+ "▁ste",
+ "ps"
+ ],
+ [
+ "▁step",
+ "s"
+ ],
+ [
+ "▁",
+ "steps"
+ ],
+ [
+ "il",
+ "a"
+ ],
+ [
+ "i",
+ "la"
+ ],
+ [
+ "es",
+ "ts"
+ ],
+ [
+ "est",
+ "s"
+ ],
+ [
+ "e",
+ "sts"
+ ],
+ [
+ "ed",
+ "s"
+ ],
+ [
+ "e",
+ "ds"
+ ],
+ [
+ "▁stre",
+ "et"
+ ],
+ [
+ "ne",
+ "rs"
+ ],
+ [
+ "ner",
+ "s"
+ ],
+ [
+ "n",
+ "ers"
+ ],
+ [
+ "▁b",
+ "irth"
+ ],
+ [
+ "po",
+ "p"
+ ],
+ [
+ "p",
+ "op"
+ ],
+ [
+ "▁",
+ "ли"
+ ],
+ [
+ "M",
+ "B"
+ ],
+ [
+ "к",
+ "ра"
+ ],
+ [
+ "ci",
+ "r"
+ ],
+ [
+ "c",
+ "ir"
+ ],
+ [
+ "eps",
+ "ilon"
+ ],
+ [
+ "e",
+ "psilon"
+ ],
+ [
+ "▁con",
+ "stant"
+ ],
+ [
+ "▁const",
+ "ant"
+ ],
+ [
+ "▁",
+ "constant"
+ ],
+ [
+ "qu",
+ "es"
+ ],
+ [
+ "que",
+ "s"
+ ],
+ [
+ "q",
+ "ues"
+ ],
+ [
+ "ad",
+ "as"
+ ],
+ [
+ "ada",
+ "s"
+ ],
+ [
+ "a",
+ "das"
+ ],
+ [
+ "▁kn",
+ "ows"
+ ],
+ [
+ "▁know",
+ "s"
+ ],
+ [
+ "▁P",
+ "y"
+ ],
+ [
+ "▁",
+ "Py"
+ ],
+ [
+ "cl",
+ "es"
+ ],
+ [
+ "cle",
+ "s"
+ ],
+ [
+ "c",
+ "les"
+ ],
+ [
+ "▁c",
+ "it"
+ ],
+ [
+ "▁ci",
+ "t"
+ ],
+ [
+ "▁",
+ "cit"
+ ],
+ [
+ "▁p",
+ "air"
+ ],
+ [
+ "▁pa",
+ "ir"
+ ],
+ [
+ "▁",
+ "pair"
+ ],
+ [
+ "in",
+ "ese"
+ ],
+ [
+ "ine",
+ "se"
+ ],
+ [
+ "ines",
+ "e"
+ ],
+ [
+ "▁P",
+ "eter"
+ ],
+ [
+ "▁Pe",
+ "ter"
+ ],
+ [
+ "▁Pet",
+ "er"
+ ],
+ [
+ "▁Pete",
+ "r"
+ ],
+ [
+ "▁fin",
+ "ished"
+ ],
+ [
+ "▁finish",
+ "ed"
+ ],
+ [
+ "▁",
+ "finished"
+ ],
+ [
+ "▁m",
+ "aster"
+ ],
+ [
+ "▁ma",
+ "ster"
+ ],
+ [
+ "▁mas",
+ "ter"
+ ],
+ [
+ "▁mast",
+ "er"
+ ],
+ [
+ "▁",
+ "master"
+ ],
+ [
+ "▁tw",
+ "enty"
+ ],
+ [
+ "▁f",
+ "ell"
+ ],
+ [
+ "▁fe",
+ "ll"
+ ],
+ [
+ "▁fel",
+ "l"
+ ],
+ [
+ "▁cent",
+ "ral"
+ ],
+ [
+ "▁m",
+ "es"
+ ],
+ [
+ "▁me",
+ "s"
+ ],
+ [
+ "▁",
+ "mes"
+ ],
+ [
+ "re",
+ "v"
+ ],
+ [
+ "r",
+ "ev"
+ ],
+ [
+ "ST",
+ "AT"
+ ],
+ [
+ "st",
+ "at"
+ ],
+ [
+ "sta",
+ "t"
+ ],
+ [
+ "s",
+ "tat"
+ ],
+ [
+ "▁all",
+ "ows"
+ ],
+ [
+ "▁allow",
+ "s"
+ ],
+ [
+ "▁g",
+ "ro"
+ ],
+ [
+ "▁gr",
+ "o"
+ ],
+ [
+ "▁",
+ "gro"
+ ],
+ [
+ "Cl",
+ "ick"
+ ],
+ [
+ "C",
+ "lick"
+ ],
+ [
+ "▁st",
+ "ories"
+ ],
+ [
+ "▁stor",
+ "ies"
+ ],
+ [
+ "▁sto",
+ "ries"
+ ],
+ [
+ "F",
+ "e"
+ ],
+ [
+ "å",
+ "r"
+ ],
+ [
+ "▁b",
+ "aby"
+ ],
+ [
+ "▁bab",
+ "y"
+ ],
+ [
+ "▁ba",
+ "by"
+ ],
+ [
+ "en",
+ "cia"
+ ],
+ [
+ "enc",
+ "ia"
+ ],
+ [
+ "enci",
+ "a"
+ ],
+ [
+ "e",
+ "ncia"
+ ],
+ [
+ "▁e",
+ "iner"
+ ],
+ [
+ "▁ein",
+ "er"
+ ],
+ [
+ "▁eine",
+ "r"
+ ],
+ [
+ "Ar",
+ "e"
+ ],
+ [
+ "A",
+ "re"
+ ],
+ [
+ "eb",
+ "ug"
+ ],
+ [
+ "e",
+ "bug"
+ ],
+ [
+ "st",
+ "ore"
+ ],
+ [
+ "sto",
+ "re"
+ ],
+ [
+ "\",",
+ "\""
+ ],
+ [
+ "\"",
+ ",\""
+ ],
+ [
+ "la",
+ "m"
+ ],
+ [
+ "l",
+ "am"
+ ],
+ [
+ "▁s",
+ "v"
+ ],
+ [
+ "▁",
+ "sv"
+ ],
+ [
+ "ци",
+ "и"
+ ],
+ [
+ "NU",
+ "LL"
+ ],
+ [
+ "N",
+ "ULL"
+ ],
+ [
+ "▁L",
+ "eg"
+ ],
+ [
+ "▁Le",
+ "g"
+ ],
+ [
+ "▁",
+ "Leg"
+ ],
+ [
+ "▁m",
+ "ovie"
+ ],
+ [
+ "▁mov",
+ "ie"
+ ],
+ [
+ "▁h",
+ "ous"
+ ],
+ [
+ "▁ho",
+ "us"
+ ],
+ [
+ "▁learn",
+ "ed"
+ ],
+ [
+ "▁lear",
+ "ned"
+ ],
+ [
+ "bo",
+ "n"
+ ],
+ [
+ "b",
+ "on"
+ ],
+ [
+ "▁trans",
+ "fer"
+ ],
+ [
+ "▁",
+ "transfer"
+ ],
+ [
+ "iforn",
+ "ia"
+ ],
+ [
+ "ps",
+ "ilon"
+ ],
+ [
+ "psi",
+ "lon"
+ ],
+ [
+ "▁S",
+ "oft"
+ ],
+ [
+ "▁So",
+ "ft"
+ ],
+ [
+ "▁Sof",
+ "t"
+ ],
+ [
+ "▁",
+ "Soft"
+ ],
+ [
+ "▁com",
+ "mer"
+ ],
+ [
+ "▁comm",
+ "er"
+ ],
+ [
+ "▁comme",
+ "r"
+ ],
+ [
+ "▁had",
+ "n"
+ ],
+ [
+ "▁ha",
+ "dn"
+ ],
+ [
+ "▁E",
+ "in"
+ ],
+ [
+ "▁T",
+ "wo"
+ ],
+ [
+ "▁Tw",
+ "o"
+ ],
+ [
+ "▁",
+ "Two"
+ ],
+ [
+ "cr",
+ "aft"
+ ],
+ [
+ "c",
+ "raft"
+ ],
+ [
+ "Pro",
+ "cess"
+ ],
+ [
+ "Proc",
+ "ess"
+ ],
+ [
+ "▁по",
+ "д"
+ ],
+ [
+ "ar",
+ "gin"
+ ],
+ [
+ "arg",
+ "in"
+ ],
+ [
+ "▁est",
+ "im"
+ ],
+ [
+ "▁es",
+ "tim"
+ ],
+ [
+ "▁M",
+ "em"
+ ],
+ [
+ "▁Me",
+ "m"
+ ],
+ [
+ "▁",
+ "Mem"
+ ],
+ [
+ "ik",
+ "a"
+ ],
+ [
+ "i",
+ "ka"
+ ],
+ [
+ "▁T",
+ "od"
+ ],
+ [
+ "▁To",
+ "d"
+ ],
+ [
+ "du",
+ "c"
+ ],
+ [
+ "d",
+ "uc"
+ ],
+ [
+ "▁d",
+ "anger"
+ ],
+ [
+ "▁dan",
+ "ger"
+ ],
+ [
+ "ri",
+ "ve"
+ ],
+ [
+ "riv",
+ "e"
+ ],
+ [
+ "r",
+ "ive"
+ ],
+ [
+ "Do",
+ "n"
+ ],
+ [
+ "D",
+ "on"
+ ],
+ [
+ "▁Q",
+ "ue"
+ ],
+ [
+ "▁Qu",
+ "e"
+ ],
+ [
+ "▁",
+ "Que"
+ ],
+ [
+ "ha",
+ "l"
+ ],
+ [
+ "h",
+ "al"
+ ],
+ [
+ "▁m",
+ "m"
+ ],
+ [
+ "▁",
+ "mm"
+ ],
+ [
+ "▁S",
+ "ur"
+ ],
+ [
+ "▁Su",
+ "r"
+ ],
+ [
+ "▁",
+ "Sur"
+ ],
+ [
+ "Or",
+ "der"
+ ],
+ [
+ "Ord",
+ "er"
+ ],
+ [
+ "▁d",
+ "istribution"
+ ],
+ [
+ "▁distribut",
+ "ion"
+ ],
+ [
+ "f",
+ "a"
+ ],
+ [
+ "▁M",
+ "any"
+ ],
+ [
+ "▁Man",
+ "y"
+ ],
+ [
+ "▁Ma",
+ "ny"
+ ],
+ [
+ "▁",
+ "Many"
+ ],
+ [
+ "pl",
+ "icit"
+ ],
+ [
+ "plic",
+ "it"
+ ],
+ [
+ "Em",
+ "pty"
+ ],
+ [
+ "Emp",
+ "ty"
+ ],
+ [
+ "Hand",
+ "le"
+ ],
+ [
+ "▁t",
+ "oken"
+ ],
+ [
+ "▁to",
+ "ken"
+ ],
+ [
+ "▁tok",
+ "en"
+ ],
+ [
+ "▁",
+ "token"
+ ],
+ [
+ "▁e",
+ "pis"
+ ],
+ [
+ "▁ep",
+ "is"
+ ],
+ [
+ "▁ass",
+ "ist"
+ ],
+ [
+ "▁pur",
+ "pose"
+ ],
+ [
+ "▁",
+ "ц"
+ ],
+ [
+ "N",
+ "U"
+ ],
+ [
+ "id",
+ "ers"
+ ],
+ [
+ "ide",
+ "rs"
+ ],
+ [
+ "ider",
+ "s"
+ ],
+ [
+ "i",
+ "ders"
+ ],
+ [
+ "ra",
+ "te"
+ ],
+ [
+ "rat",
+ "e"
+ ],
+ [
+ "r",
+ "ate"
+ ],
+ [
+ "The",
+ "y"
+ ],
+ [
+ "Th",
+ "ey"
+ ],
+ [
+ "Param",
+ "eter"
+ ],
+ [
+ "De",
+ "c"
+ ],
+ [
+ "D",
+ "ec"
+ ],
+ [
+ "▁str",
+ "ugg"
+ ],
+ [
+ "▁stru",
+ "gg"
+ ],
+ [
+ "▁sh",
+ "oot"
+ ],
+ [
+ "I",
+ "V"
+ ],
+ [
+ "▁G",
+ "reat"
+ ],
+ [
+ "▁Gre",
+ "at"
+ ],
+ [
+ "▁",
+ "Great"
+ ],
+ [
+ "▁S",
+ "il"
+ ],
+ [
+ "▁Si",
+ "l"
+ ],
+ [
+ "▁",
+ "Sil"
+ ],
+ [
+ "▁l",
+ "oved"
+ ],
+ [
+ "▁lo",
+ "ved"
+ ],
+ [
+ "▁love",
+ "d"
+ ],
+ [
+ "▁lov",
+ "ed"
+ ],
+ [
+ "▁c",
+ "lick"
+ ],
+ [
+ "▁cl",
+ "ick"
+ ],
+ [
+ "▁",
+ "click"
+ ],
+ [
+ "▁re",
+ "serv"
+ ],
+ [
+ "▁res",
+ "erv"
+ ],
+ [
+ "▁в",
+ "е"
+ ],
+ [
+ "▁",
+ "ве"
+ ],
+ [
+ "▁s",
+ "pread"
+ ],
+ [
+ "▁sp",
+ "read"
+ ],
+ [
+ "▁spr",
+ "ead"
+ ],
+ [
+ "▁o",
+ "g"
+ ],
+ [
+ "▁",
+ "og"
+ ],
+ [
+ "▁$",
+ "{"
+ ],
+ [
+ "▁",
+ "${"
+ ],
+ [
+ "▁m",
+ "iles"
+ ],
+ [
+ "▁mil",
+ "es"
+ ],
+ [
+ "▁mi",
+ "les"
+ ],
+ [
+ "▁mile",
+ "s"
+ ],
+ [
+ "▁success",
+ "ful"
+ ],
+ [
+ "▁",
+ "successful"
+ ],
+ [
+ "o",
+ "j"
+ ],
+ [
+ "▁D",
+ "irect"
+ ],
+ [
+ "▁Di",
+ "rect"
+ ],
+ [
+ "▁Dire",
+ "ct"
+ ],
+ [
+ "▁Dir",
+ "ect"
+ ],
+ [
+ "▁",
+ "Direct"
+ ],
+ [
+ "▁a",
+ "x"
+ ],
+ [
+ "▁",
+ "ax"
+ ],
+ [
+ "▁grow",
+ "th"
+ ],
+ [
+ "W",
+ "ork"
+ ],
+ [
+ "▁ch",
+ "urch"
+ ],
+ [
+ "In",
+ "st"
+ ],
+ [
+ "Ins",
+ "t"
+ ],
+ [
+ "IC",
+ "E"
+ ],
+ [
+ "I",
+ "CE"
+ ],
+ [
+ "st",
+ "en"
+ ],
+ [
+ "ste",
+ "n"
+ ],
+ [
+ "s",
+ "ten"
+ ],
+ [
+ "ро",
+ "д"
+ ],
+ [
+ "▁C",
+ "enter"
+ ],
+ [
+ "▁Cent",
+ "er"
+ ],
+ [
+ "▁",
+ "Center"
+ ],
+ [
+ "se",
+ "s"
+ ],
+ [
+ "s",
+ "es"
+ ],
+ [
+ "go",
+ "t"
+ ],
+ [
+ "g",
+ "ot"
+ ],
+ [
+ "de",
+ "lete"
+ ],
+ [
+ "del",
+ "ete"
+ ],
+ [
+ "▁M",
+ "a"
+ ],
+ [
+ "▁",
+ "Ma"
+ ],
+ [
+ "%",
+ "%"
+ ],
+ [
+ "▁c",
+ "row"
+ ],
+ [
+ "▁cr",
+ "ow"
+ ],
+ [
+ "▁cro",
+ "w"
+ ],
+ [
+ "D",
+ "F"
+ ],
+ [
+ "fr",
+ "ont"
+ ],
+ [
+ "▁b",
+ "log"
+ ],
+ [
+ "▁bl",
+ "og"
+ ],
+ [
+ "▁blo",
+ "g"
+ ],
+ [
+ "▁",
+ "blog"
+ ],
+ [
+ "▁comp",
+ "uter"
+ ],
+ [
+ "▁comput",
+ "er"
+ ],
+ [
+ "▁compute",
+ "r"
+ ],
+ [
+ "на",
+ "я"
+ ],
+ [
+ "▁m",
+ "ir"
+ ],
+ [
+ "▁mi",
+ "r"
+ ],
+ [
+ "▁",
+ "mir"
+ ],
+ [
+ "▁S",
+ "uper"
+ ],
+ [
+ "▁Su",
+ "per"
+ ],
+ [
+ "▁Sup",
+ "er"
+ ],
+ [
+ "▁",
+ "Super"
+ ],
+ [
+ "',",
+ "'"
+ ],
+ [
+ "'",
+ ",'"
+ ],
+ [
+ "▁mult",
+ "i"
+ ],
+ [
+ "▁mul",
+ "ti"
+ ],
+ [
+ "▁",
+ "multi"
+ ],
+ [
+ "▁g",
+ "ru"
+ ],
+ [
+ "▁gr",
+ "u"
+ ],
+ [
+ "▁",
+ "gru"
+ ],
+ [
+ "▁J",
+ "o"
+ ],
+ [
+ "▁",
+ "Jo"
+ ],
+ [
+ "▁Can",
+ "ada"
+ ],
+ [
+ "▁Canad",
+ "a"
+ ],
+ [
+ "▁Th",
+ "omas"
+ ],
+ [
+ "▁Thom",
+ "as"
+ ],
+ [
+ "▁large",
+ "r"
+ ],
+ [
+ "▁larg",
+ "er"
+ ],
+ [
+ "▁com",
+ "par"
+ ],
+ [
+ "▁comp",
+ "ar"
+ ],
+ [
+ "▁",
+ "compar"
+ ],
+ [
+ "Cur",
+ "rent"
+ ],
+ [
+ "th",
+ "at"
+ ],
+ [
+ "tha",
+ "t"
+ ],
+ [
+ "t",
+ "hat"
+ ],
+ [
+ "▁d",
+ "rop"
+ ],
+ [
+ "▁dr",
+ "op"
+ ],
+ [
+ "▁dro",
+ "p"
+ ],
+ [
+ "▁",
+ "drop"
+ ],
+ [
+ "ен",
+ "т"
+ ],
+ [
+ "▁Re",
+ "public"
+ ],
+ [
+ "▁Rep",
+ "ublic"
+ ],
+ [
+ "▁Repub",
+ "lic"
+ ],
+ [
+ "▁d",
+ "ise"
+ ],
+ [
+ "▁dis",
+ "e"
+ ],
+ [
+ "▁di",
+ "se"
+ ],
+ [
+ "▁effect",
+ "s"
+ ],
+ [
+ "▁girl",
+ "s"
+ ],
+ [
+ "▁gir",
+ "ls"
+ ],
+ [
+ "en",
+ "cies"
+ ],
+ [
+ "enc",
+ "ies"
+ ],
+ [
+ "enci",
+ "es"
+ ],
+ [
+ "el",
+ "lig"
+ ],
+ [
+ "ell",
+ "ig"
+ ],
+ [
+ "elli",
+ "g"
+ ],
+ [
+ "▁N",
+ "ote"
+ ],
+ [
+ "▁No",
+ "te"
+ ],
+ [
+ "▁Not",
+ "e"
+ ],
+ [
+ "▁",
+ "Note"
+ ],
+ [
+ "▁Ass",
+ "oci"
+ ],
+ [
+ "▁",
+ "Associ"
+ ],
+ [
+ "▁u",
+ "ses"
+ ],
+ [
+ "▁us",
+ "es"
+ ],
+ [
+ "▁use",
+ "s"
+ ],
+ [
+ "▁",
+ "uses"
+ ],
+ [
+ "el",
+ "led"
+ ],
+ [
+ "ell",
+ "ed"
+ ],
+ [
+ "elle",
+ "d"
+ ],
+ [
+ "▁w",
+ "arm"
+ ],
+ [
+ "▁war",
+ "m"
+ ],
+ [
+ "▁wa",
+ "rm"
+ ],
+ [
+ "th",
+ "read"
+ ],
+ [
+ "fo",
+ "nt"
+ ],
+ [
+ "fon",
+ "t"
+ ],
+ [
+ "f",
+ "ont"
+ ],
+ [
+ "▁z",
+ "um"
+ ],
+ [
+ "▁zu",
+ "m"
+ ],
+ [
+ "▁follow",
+ "s"
+ ],
+ [
+ "▁w",
+ "hom"
+ ],
+ [
+ "▁wh",
+ "om"
+ ],
+ [
+ "▁who",
+ "m"
+ ],
+ [
+ "T",
+ "A"
+ ],
+ [
+ "▁w",
+ "ild"
+ ],
+ [
+ "▁A",
+ "R"
+ ],
+ [
+ "▁",
+ "AR"
+ ],
+ [
+ "ia",
+ "ble"
+ ],
+ [
+ "i",
+ "able"
+ ],
+ [
+ "▁Tr",
+ "ue"
+ ],
+ [
+ "▁Tru",
+ "e"
+ ],
+ [
+ "▁",
+ "True"
+ ],
+ [
+ "Pos",
+ "ition"
+ ],
+ [
+ "▁s",
+ "ell"
+ ],
+ [
+ "▁se",
+ "ll"
+ ],
+ [
+ "▁sel",
+ "l"
+ ],
+ [
+ "ch",
+ "er"
+ ],
+ [
+ "che",
+ "r"
+ ],
+ [
+ "c",
+ "her"
+ ],
+ [
+ "▁B",
+ "us"
+ ],
+ [
+ "▁Bu",
+ "s"
+ ],
+ [
+ "▁",
+ "Bus"
+ ],
+ [
+ "▁le",
+ "an"
+ ],
+ [
+ "▁",
+ "lean"
+ ],
+ [
+ "AC",
+ "E"
+ ],
+ [
+ "A",
+ "CE"
+ ],
+ [
+ "▁s",
+ "erved"
+ ],
+ [
+ "▁ser",
+ "ved"
+ ],
+ [
+ "▁serv",
+ "ed"
+ ],
+ [
+ "▁serve",
+ "d"
+ ],
+ [
+ "h",
+ "w"
+ ],
+ [
+ "▁C",
+ "ur"
+ ],
+ [
+ "▁Cu",
+ "r"
+ ],
+ [
+ "▁",
+ "Cur"
+ ],
+ [
+ "▁n",
+ "orth"
+ ],
+ [
+ "▁nor",
+ "th"
+ ],
+ [
+ "▁nort",
+ "h"
+ ],
+ [
+ "Da",
+ "t"
+ ],
+ [
+ "D",
+ "at"
+ ],
+ [
+ "▁>",
+ ">"
+ ],
+ [
+ "▁",
+ ">>"
+ ],
+ [
+ "com",
+ "mand"
+ ],
+ [
+ "comm",
+ "and"
+ ],
+ [
+ "at",
+ "z"
+ ],
+ [
+ "a",
+ "tz"
+ ],
+ [
+ "▁m",
+ "al"
+ ],
+ [
+ "▁ma",
+ "l"
+ ],
+ [
+ "▁",
+ "mal"
+ ],
+ [
+ "ста",
+ "в"
+ ],
+ [
+ "▁P",
+ "ress"
+ ],
+ [
+ "▁Pr",
+ "ess"
+ ],
+ [
+ "▁Pres",
+ "s"
+ ],
+ [
+ "▁Pre",
+ "ss"
+ ],
+ [
+ "▁",
+ "Press"
+ ],
+ [
+ "▁char",
+ "acters"
+ ],
+ [
+ "▁character",
+ "s"
+ ],
+ [
+ "▁z",
+ "ero"
+ ],
+ [
+ "▁ze",
+ "ro"
+ ],
+ [
+ "▁",
+ "zero"
+ ],
+ [
+ "AG",
+ "E"
+ ],
+ [
+ "A",
+ "GE"
+ ],
+ [
+ "rap",
+ "per"
+ ],
+ [
+ "▁kit",
+ "chen"
+ ],
+ [
+ "am",
+ "ing"
+ ],
+ [
+ "ami",
+ "ng"
+ ],
+ [
+ "amin",
+ "g"
+ ],
+ [
+ "a",
+ "ming"
+ ],
+ [
+ "▁re",
+ "str"
+ ],
+ [
+ "▁r",
+ "estr"
+ ],
+ [
+ "▁res",
+ "tr"
+ ],
+ [
+ "▁rest",
+ "r"
+ ],
+ [
+ "X",
+ "X"
+ ],
+ [
+ "▁Col",
+ "lege"
+ ],
+ [
+ "▁Ar",
+ "ray"
+ ],
+ [
+ "▁Arr",
+ "ay"
+ ],
+ [
+ "▁",
+ "Array"
+ ],
+ [
+ "▁f",
+ "resh"
+ ],
+ [
+ "▁fr",
+ "esh"
+ ],
+ [
+ "▁fre",
+ "sh"
+ ],
+ [
+ "▁fres",
+ "h"
+ ],
+ [
+ "▁sh",
+ "ift"
+ ],
+ [
+ "▁",
+ "shift"
+ ],
+ [
+ "▁spec",
+ "ified"
+ ],
+ [
+ "pl",
+ "ete"
+ ],
+ [
+ "ple",
+ "te"
+ ],
+ [
+ "plet",
+ "e"
+ ],
+ [
+ "p",
+ "lete"
+ ],
+ [
+ "IT",
+ "E"
+ ],
+ [
+ "I",
+ "TE"
+ ],
+ [
+ "▁C",
+ "amp"
+ ],
+ [
+ "▁Cam",
+ "p"
+ ],
+ [
+ "▁Ca",
+ "mp"
+ ],
+ [
+ "▁",
+ "Camp"
+ ],
+ [
+ "ri",
+ "al"
+ ],
+ [
+ "ria",
+ "l"
+ ],
+ [
+ "r",
+ "ial"
+ ],
+ [
+ "c",
+ "b"
+ ],
+ [
+ "▁T",
+ "H"
+ ],
+ [
+ "▁",
+ "TH"
+ ],
+ [
+ "I",
+ "B"
+ ],
+ [
+ "os",
+ "en"
+ ],
+ [
+ "ose",
+ "n"
+ ],
+ [
+ "o",
+ "sen"
+ ],
+ [
+ "▁",
+ "ú"
+ ],
+ [
+ "▁par",
+ "ams"
+ ],
+ [
+ "▁param",
+ "s"
+ ],
+ [
+ "▁para",
+ "ms"
+ ],
+ [
+ "▁",
+ "params"
+ ],
+ [
+ "ign",
+ "ment"
+ ],
+ [
+ "ad",
+ "ding"
+ ],
+ [
+ "add",
+ "ing"
+ ],
+ [
+ "▁deg",
+ "ree"
+ ],
+ [
+ "▁",
+ "degree"
+ ],
+ [
+ "Loc",
+ "al"
+ ],
+ [
+ "Lo",
+ "cal"
+ ],
+ [
+ "L",
+ "ocal"
+ ],
+ [
+ "O",
+ "h"
+ ],
+ [
+ "▁z",
+ "ur"
+ ],
+ [
+ "▁zu",
+ "r"
+ ],
+ [
+ "▁level",
+ "s"
+ ],
+ [
+ "▁lev",
+ "els"
+ ],
+ [
+ "C",
+ "S"
+ ],
+ [
+ "fin",
+ "ished"
+ ],
+ [
+ "finish",
+ "ed"
+ ],
+ [
+ "C",
+ "ase"
+ ],
+ [
+ "ri",
+ "age"
+ ],
+ [
+ "ria",
+ "ge"
+ ],
+ [
+ "Vec",
+ "tor"
+ ],
+ [
+ "V",
+ "ector"
+ ],
+ [
+ "▁s",
+ "ea"
+ ],
+ [
+ "▁se",
+ "a"
+ ],
+ [
+ "▁",
+ "sea"
+ ],
+ [
+ "ant",
+ "ic"
+ ],
+ [
+ "anti",
+ "c"
+ ],
+ [
+ "▁Le",
+ "ague"
+ ],
+ [
+ "▁there",
+ "fore"
+ ],
+ [
+ "▁ther",
+ "efore"
+ ],
+ [
+ "On",
+ "e"
+ ],
+ [
+ "O",
+ "ne"
+ ],
+ [
+ "Re",
+ "turn"
+ ],
+ [
+ "Ret",
+ "urn"
+ ],
+ [
+ "R",
+ "eturn"
+ ],
+ [
+ "Acc",
+ "ess"
+ ],
+ [
+ "Ac",
+ "cess"
+ ],
+ [
+ "A",
+ "ccess"
+ ],
+ [
+ "va",
+ "s"
+ ],
+ [
+ "v",
+ "as"
+ ],
+ [
+ "▁о",
+ "с"
+ ],
+ [
+ "▁r",
+ "at"
+ ],
+ [
+ "▁ra",
+ "t"
+ ],
+ [
+ "▁",
+ "rat"
+ ],
+ [
+ "Bi",
+ "g"
+ ],
+ [
+ "B",
+ "ig"
+ ],
+ [
+ "▁be",
+ "havior"
+ ],
+ [
+ "▁behav",
+ "ior"
+ ],
+ [
+ "▁behavi",
+ "or"
+ ],
+ [
+ "k",
+ "r"
+ ],
+ [
+ "▁un",
+ "defined"
+ ],
+ [
+ "▁und",
+ "efined"
+ ],
+ [
+ "▁",
+ "undefined"
+ ],
+ [
+ "▁E",
+ "s"
+ ],
+ [
+ "▁",
+ "Es"
+ ],
+ [
+ "▁appe",
+ "ared"
+ ],
+ [
+ "▁appear",
+ "ed"
+ ],
+ [
+ "el",
+ "es"
+ ],
+ [
+ "ele",
+ "s"
+ ],
+ [
+ "e",
+ "les"
+ ],
+ [
+ "▁W",
+ "AR"
+ ],
+ [
+ "▁WA",
+ "R"
+ ],
+ [
+ "▁",
+ "WAR"
+ ],
+ [
+ "St",
+ "at"
+ ],
+ [
+ "S",
+ "tat"
+ ],
+ [
+ "▁Go",
+ "ogle"
+ ],
+ [
+ "▁",
+ "Google"
+ ],
+ [
+ "▁c",
+ "redit"
+ ],
+ [
+ "▁cre",
+ "dit"
+ ],
+ [
+ "▁cr",
+ "edit"
+ ],
+ [
+ "▁cred",
+ "it"
+ ],
+ [
+ "▁F",
+ "ile"
+ ],
+ [
+ "▁Fil",
+ "e"
+ ],
+ [
+ "▁Fi",
+ "le"
+ ],
+ [
+ "▁",
+ "File"
+ ],
+ [
+ "an",
+ "ging"
+ ],
+ [
+ "ang",
+ "ing"
+ ],
+ [
+ "ho",
+ "use"
+ ],
+ [
+ "hou",
+ "se"
+ ],
+ [
+ "h",
+ "ouse"
+ ],
+ [
+ "rom",
+ "ise"
+ ],
+ [
+ "ge",
+ "nt"
+ ],
+ [
+ "gen",
+ "t"
+ ],
+ [
+ "g",
+ "ent"
+ ],
+ [
+ "▁hab",
+ "it"
+ ],
+ [
+ "▁ha",
+ "bit"
+ ],
+ [
+ "▁soc",
+ "iety"
+ ],
+ [
+ "▁soci",
+ "ety"
+ ],
+ [
+ "▁societ",
+ "y"
+ ],
+ [
+ "▁enc",
+ "our"
+ ],
+ [
+ "▁p",
+ "aint"
+ ],
+ [
+ "▁pain",
+ "t"
+ ],
+ [
+ "▁pa",
+ "int"
+ ],
+ [
+ "pe",
+ "t"
+ ],
+ [
+ "p",
+ "et"
+ ],
+ [
+ "▁U",
+ "K"
+ ],
+ [
+ "▁",
+ "UK"
+ ],
+ [
+ "aw",
+ "s"
+ ],
+ [
+ "a",
+ "ws"
+ ],
+ [
+ "on",
+ "om"
+ ],
+ [
+ "ono",
+ "m"
+ ],
+ [
+ "o",
+ "nom"
+ ],
+ [
+ "G",
+ "l"
+ ],
+ [
+ "}_",
+ "{\\"
+ ],
+ [
+ "}_{",
+ "\\"
+ ],
+ [
+ "}",
+ "_{\\"
+ ],
+ [
+ "el",
+ "ess"
+ ],
+ [
+ "ele",
+ "ss"
+ ],
+ [
+ "eles",
+ "s"
+ ],
+ [
+ "e",
+ "less"
+ ],
+ [
+ "em",
+ "y"
+ ],
+ [
+ "e",
+ "my"
+ ],
+ [
+ "▁C",
+ "ong"
+ ],
+ [
+ "▁Con",
+ "g"
+ ],
+ [
+ "▁Co",
+ "ng"
+ ],
+ [
+ "▁develop",
+ "ed"
+ ],
+ [
+ "▁im",
+ "ages"
+ ],
+ [
+ "▁image",
+ "s"
+ ],
+ [
+ "▁imag",
+ "es"
+ ],
+ [
+ "▁",
+ "images"
+ ],
+ [
+ "▁",
+ "ö"
+ ],
+ [
+ "▁f",
+ "ont"
+ ],
+ [
+ "▁fo",
+ "nt"
+ ],
+ [
+ "▁fon",
+ "t"
+ ],
+ [
+ "▁",
+ "font"
+ ],
+ [
+ "cl",
+ "ear"
+ ],
+ [
+ "cle",
+ "ar"
+ ],
+ [
+ "c",
+ "lear"
+ ],
+ [
+ "gi",
+ "n"
+ ],
+ [
+ "g",
+ "in"
+ ],
+ [
+ "▁L",
+ "ord"
+ ],
+ [
+ "▁Lo",
+ "rd"
+ ],
+ [
+ "▁Lor",
+ "d"
+ ],
+ [
+ "▁trans",
+ "port"
+ ],
+ [
+ "▁",
+ "transport"
+ ],
+ [
+ "▁:",
+ ":"
+ ],
+ [
+ "▁",
+ "::"
+ ],
+ [
+ "▁c",
+ "up"
+ ],
+ [
+ "▁cu",
+ "p"
+ ],
+ [
+ "▁",
+ "cup"
+ ],
+ [
+ "ul",
+ "ate"
+ ],
+ [
+ "ula",
+ "te"
+ ],
+ [
+ "u",
+ "late"
+ ],
+ [
+ "▁D",
+ "uring"
+ ],
+ [
+ "▁Du",
+ "ring"
+ ],
+ [
+ "▁Dur",
+ "ing"
+ ],
+ [
+ "pr",
+ "iv"
+ ],
+ [
+ "p",
+ "riv"
+ ],
+ [
+ "▁ext",
+ "rem"
+ ],
+ [
+ "▁extr",
+ "em"
+ ],
+ [
+ "▁D",
+ "i"
+ ],
+ [
+ "▁",
+ "Di"
+ ],
+ [
+ "▁d",
+ "oubt"
+ ],
+ [
+ "▁dou",
+ "bt"
+ ],
+ [
+ "▁doub",
+ "t"
+ ],
+ [
+ "P",
+ "y"
+ ],
+ [
+ "if",
+ "ying"
+ ],
+ [
+ "ify",
+ "ing"
+ ],
+ [
+ "sp",
+ "lit"
+ ],
+ [
+ "spl",
+ "it"
+ ],
+ [
+ "s",
+ "plit"
+ ],
+ [
+ "eg",
+ "o"
+ ],
+ [
+ "e",
+ "go"
+ ],
+ [
+ "git",
+ "hub"
+ ],
+ [
+ "g",
+ "ithub"
+ ],
+ [
+ "▁)",
+ ","
+ ],
+ [
+ "▁",
+ "),"
+ ],
+ [
+ "RO",
+ "M"
+ ],
+ [
+ "R",
+ "OM"
+ ],
+ [
+ "▁ch",
+ "air"
+ ],
+ [
+ "▁cha",
+ "ir"
+ ],
+ [
+ "▁",
+ "chair"
+ ],
+ [
+ "▁t",
+ "rade"
+ ],
+ [
+ "▁tr",
+ "ade"
+ ],
+ [
+ "▁trad",
+ "e"
+ ],
+ [
+ "▁tra",
+ "de"
+ ],
+ [
+ "▁n",
+ "icht"
+ ],
+ [
+ "▁ni",
+ "cht"
+ ],
+ [
+ "▁nic",
+ "ht"
+ ],
+ [
+ "To",
+ "p"
+ ],
+ [
+ "T",
+ "op"
+ ],
+ [
+ "St",
+ "ore"
+ ],
+ [
+ "▁p",
+ "arte"
+ ],
+ [
+ "▁part",
+ "e"
+ ],
+ [
+ "▁par",
+ "te"
+ ],
+ [
+ "pro",
+ "ject"
+ ],
+ [
+ "ni",
+ "a"
+ ],
+ [
+ "n",
+ "ia"
+ ],
+ [
+ "▁в",
+ "ід"
+ ],
+ [
+ "▁ві",
+ "д"
+ ],
+ [
+ "wa",
+ "r"
+ ],
+ [
+ "w",
+ "ar"
+ ],
+ [
+ "▁Pro",
+ "f"
+ ],
+ [
+ "▁Pr",
+ "of"
+ ],
+ [
+ "▁c",
+ "aught"
+ ],
+ [
+ "Th",
+ "read"
+ ],
+ [
+ "ст",
+ "ва"
+ ],
+ [
+ "ств",
+ "а"
+ ],
+ [
+ "с",
+ "тва"
+ ],
+ [
+ "aut",
+ "hor"
+ ],
+ [
+ "auth",
+ "or"
+ ],
+ [
+ "▁d",
+ "oll"
+ ],
+ [
+ "▁do",
+ "ll"
+ ],
+ [
+ "▁dol",
+ "l"
+ ],
+ [
+ "▁h",
+ "arm"
+ ],
+ [
+ "▁ha",
+ "rm"
+ ],
+ [
+ "▁har",
+ "m"
+ ],
+ [
+ "▁",
+ "harm"
+ ],
+ [
+ "▁G",
+ "en"
+ ],
+ [
+ "▁Ge",
+ "n"
+ ],
+ [
+ "▁",
+ "Gen"
+ ],
+ [
+ "tr",
+ "ee"
+ ],
+ [
+ "tre",
+ "e"
+ ],
+ [
+ "t",
+ "ree"
+ ],
+ [
+ "et",
+ "ime"
+ ],
+ [
+ "eti",
+ "me"
+ ],
+ [
+ "e",
+ "time"
+ ],
+ [
+ "cf",
+ "g"
+ ],
+ [
+ "c",
+ "fg"
+ ],
+ [
+ "▁gu",
+ "ys"
+ ],
+ [
+ "▁guy",
+ "s"
+ ],
+ [
+ "▁Cal",
+ "ifornia"
+ ],
+ [
+ "▁G",
+ "reen"
+ ],
+ [
+ "▁Gr",
+ "een"
+ ],
+ [
+ "▁Gre",
+ "en"
+ ],
+ [
+ "▁Gree",
+ "n"
+ ],
+ [
+ "▁",
+ "Green"
+ ],
+ [
+ "▁mov",
+ "ement"
+ ],
+ [
+ "▁move",
+ "ment"
+ ],
+ [
+ "▁mo",
+ "vement"
+ ],
+ [
+ "ie",
+ "j"
+ ],
+ [
+ "i",
+ "ej"
+ ],
+ [
+ "▁stat",
+ "ement"
+ ],
+ [
+ "▁state",
+ "ment"
+ ],
+ [
+ "▁",
+ "statement"
+ ],
+ [
+ "▁se",
+ "eing"
+ ],
+ [
+ "▁see",
+ "ing"
+ ],
+ [
+ "▁h",
+ "aven"
+ ],
+ [
+ "▁have",
+ "n"
+ ],
+ [
+ "▁ha",
+ "ven"
+ ],
+ [
+ "▁hav",
+ "en"
+ ],
+ [
+ "vent",
+ "ion"
+ ],
+ [
+ "v",
+ "ention"
+ ],
+ [
+ "S",
+ "L"
+ ],
+ [
+ "ched",
+ "ul"
+ ],
+ [
+ "ie",
+ "rt"
+ ],
+ [
+ "ier",
+ "t"
+ ],
+ [
+ "i",
+ "ert"
+ ],
+ [
+ "▁pr",
+ "imary"
+ ],
+ [
+ "▁prim",
+ "ary"
+ ],
+ [
+ "▁pri",
+ "mary"
+ ],
+ [
+ "▁prima",
+ "ry"
+ ],
+ [
+ "▁",
+ "primary"
+ ],
+ [
+ "▁c",
+ "ivil"
+ ],
+ [
+ "▁ci",
+ "vil"
+ ],
+ [
+ "▁civ",
+ "il"
+ ],
+ [
+ "ri",
+ "an"
+ ],
+ [
+ "ria",
+ "n"
+ ],
+ [
+ "r",
+ "ian"
+ ],
+ [
+ "▁b",
+ "utton"
+ ],
+ [
+ "▁but",
+ "ton"
+ ],
+ [
+ "▁butt",
+ "on"
+ ],
+ [
+ "▁",
+ "button"
+ ],
+ [
+ "▁l",
+ "ived"
+ ],
+ [
+ "▁li",
+ "ved"
+ ],
+ [
+ "▁live",
+ "d"
+ ],
+ [
+ "▁liv",
+ "ed"
+ ],
+ [
+ "P",
+ "ass"
+ ],
+ [
+ "so",
+ "r"
+ ],
+ [
+ "s",
+ "or"
+ ],
+ [
+ "▁watch",
+ "ing"
+ ],
+ [
+ "▁wat",
+ "ching"
+ ],
+ [
+ "▁sk",
+ "ills"
+ ],
+ [
+ "▁skill",
+ "s"
+ ],
+ [
+ "te",
+ "e"
+ ],
+ [
+ "t",
+ "ee"
+ ],
+ [
+ "Le",
+ "vel"
+ ],
+ [
+ "L",
+ "evel"
+ ],
+ [
+ "▁sc",
+ "ient"
+ ],
+ [
+ "h",
+ "s"
+ ],
+ [
+ "▁a",
+ "gre"
+ ],
+ [
+ "▁ag",
+ "re"
+ ],
+ [
+ "ca",
+ "t"
+ ],
+ [
+ "c",
+ "at"
+ ],
+ [
+ "▁t",
+ "end"
+ ],
+ [
+ "▁te",
+ "nd"
+ ],
+ [
+ "▁ten",
+ "d"
+ ],
+ [
+ "▁M",
+ "ill"
+ ],
+ [
+ "▁Mil",
+ "l"
+ ],
+ [
+ "▁Mi",
+ "ll"
+ ],
+ [
+ "▁",
+ "Mill"
+ ],
+ [
+ "▁C",
+ "ap"
+ ],
+ [
+ "▁Ca",
+ "p"
+ ],
+ [
+ "▁",
+ "Cap"
+ ],
+ [
+ "OR",
+ "D"
+ ],
+ [
+ "O",
+ "RD"
+ ],
+ [
+ "gl",
+ "e"
+ ],
+ [
+ "g",
+ "le"
+ ],
+ [
+ "▁с",
+ "во"
+ ],
+ [
+ "»",
+ ","
+ ],
+ [
+ "▁a",
+ "head"
+ ],
+ [
+ "▁ah",
+ "ead"
+ ],
+ [
+ "ve",
+ "st"
+ ],
+ [
+ "ves",
+ "t"
+ ],
+ [
+ "v",
+ "est"
+ ],
+ [
+ "▁J",
+ "ose"
+ ],
+ [
+ "▁Jo",
+ "se"
+ ],
+ [
+ "▁Jos",
+ "e"
+ ],
+ [
+ "is",
+ "cher"
+ ],
+ [
+ "isch",
+ "er"
+ ],
+ [
+ "ische",
+ "r"
+ ],
+ [
+ "isc",
+ "her"
+ ],
+ [
+ "ș",
+ "i"
+ ],
+ [
+ "▁le",
+ "aving"
+ ],
+ [
+ "▁д",
+ "ля"
+ ],
+ [
+ "▁s",
+ "outh"
+ ],
+ [
+ "▁so",
+ "uth"
+ ],
+ [
+ "▁sou",
+ "th"
+ ],
+ [
+ "▁sout",
+ "h"
+ ],
+ [
+ "▁con",
+ "sum"
+ ],
+ [
+ "▁cons",
+ "um"
+ ],
+ [
+ "▁",
+ "consum"
+ ],
+ [
+ "R",
+ "ange"
+ ],
+ [
+ "▁activ",
+ "ities"
+ ],
+ [
+ "Se",
+ "c"
+ ],
+ [
+ "S",
+ "ec"
+ ],
+ [
+ "▁s",
+ "ales"
+ ],
+ [
+ "▁sa",
+ "les"
+ ],
+ [
+ "▁sal",
+ "es"
+ ],
+ [
+ "▁sale",
+ "s"
+ ],
+ [
+ "▁f",
+ "ix"
+ ],
+ [
+ "▁fi",
+ "x"
+ ],
+ [
+ "▁",
+ "fix"
+ ],
+ [
+ "▁j",
+ "ed"
+ ],
+ [
+ "▁je",
+ "d"
+ ],
+ [
+ "▁",
+ "jed"
+ ],
+ [
+ "ru",
+ "m"
+ ],
+ [
+ "r",
+ "um"
+ ],
+ [
+ "ve",
+ "ctor"
+ ],
+ [
+ "vec",
+ "tor"
+ ],
+ [
+ "v",
+ "ector"
+ ],
+ [
+ "▁s",
+ "pot"
+ ],
+ [
+ "▁sp",
+ "ot"
+ ],
+ [
+ "▁spo",
+ "t"
+ ],
+ [
+ "▁",
+ "spot"
+ ],
+ [
+ "▁man",
+ "ufact"
+ ],
+ [
+ "к",
+ "т"
+ ],
+ [
+ "or",
+ "row"
+ ],
+ [
+ "orr",
+ "ow"
+ ],
+ [
+ "si",
+ "gn"
+ ],
+ [
+ "sig",
+ "n"
+ ],
+ [
+ "s",
+ "ign"
+ ],
+ [
+ "▁col",
+ "lege"
+ ],
+ [
+ "▁colle",
+ "ge"
+ ],
+ [
+ "▁colleg",
+ "e"
+ ],
+ [
+ "▁d",
+ "river"
+ ],
+ [
+ "▁dr",
+ "iver"
+ ],
+ [
+ "▁dri",
+ "ver"
+ ],
+ [
+ "▁driv",
+ "er"
+ ],
+ [
+ "▁drive",
+ "r"
+ ],
+ [
+ "▁",
+ "driver"
+ ],
+ [
+ "▁def",
+ "initely"
+ ],
+ [
+ "▁definit",
+ "ely"
+ ],
+ [
+ "▁s",
+ "pend"
+ ],
+ [
+ "▁sp",
+ "end"
+ ],
+ [
+ "▁spe",
+ "nd"
+ ],
+ [
+ "miss",
+ "ion"
+ ],
+ [
+ "m",
+ "ission"
+ ],
+ [
+ "з",
+ "у"
+ ],
+ [
+ "at",
+ "ively"
+ ],
+ [
+ "ative",
+ "ly"
+ ],
+ [
+ "ativ",
+ "ely"
+ ],
+ [
+ "b",
+ "i"
+ ],
+ [
+ "Call",
+ "back"
+ ],
+ [
+ "▁particular",
+ "ly"
+ ],
+ [
+ "▁particul",
+ "arly"
+ ],
+ [
+ "▁h",
+ "ell"
+ ],
+ [
+ "▁he",
+ "ll"
+ ],
+ [
+ "▁hel",
+ "l"
+ ],
+ [
+ "▁",
+ "hell"
+ ],
+ [
+ "▁p",
+ "ool"
+ ],
+ [
+ "▁po",
+ "ol"
+ ],
+ [
+ "▁",
+ "pool"
+ ],
+ [
+ "PR",
+ "E"
+ ],
+ [
+ "P",
+ "RE"
+ ],
+ [
+ "▁cle",
+ "arly"
+ ],
+ [
+ "▁clear",
+ "ly"
+ ],
+ [
+ "P",
+ "T"
+ ],
+ [
+ "ot",
+ "hes"
+ ],
+ [
+ "oth",
+ "es"
+ ],
+ [
+ "othe",
+ "s"
+ ],
+ [
+ "▁I",
+ "d"
+ ],
+ [
+ "▁",
+ "Id"
+ ],
+ [
+ "Loc",
+ "ation"
+ ],
+ [
+ "L",
+ "ocation"
+ ],
+ [
+ "▁R",
+ "un"
+ ],
+ [
+ "▁Ru",
+ "n"
+ ],
+ [
+ "▁",
+ "Run"
+ ],
+ [
+ "▁f",
+ "ixed"
+ ],
+ [
+ "▁fix",
+ "ed"
+ ],
+ [
+ "▁",
+ "fixed"
+ ],
+ [
+ "▁H",
+ "and"
+ ],
+ [
+ "▁Ha",
+ "nd"
+ ],
+ [
+ "▁Han",
+ "d"
+ ],
+ [
+ "▁",
+ "Hand"
+ ],
+ [
+ "ba",
+ "l"
+ ],
+ [
+ "b",
+ "al"
+ ],
+ [
+ "d",
+ "ouble"
+ ],
+ [
+ "C",
+ "an"
+ ],
+ [
+ "Om",
+ "ega"
+ ],
+ [
+ "▁chall",
+ "eng"
+ ],
+ [
+ "▁stand",
+ "ing"
+ ],
+ [
+ "▁stan",
+ "ding"
+ ],
+ [
+ "▁",
+ "standing"
+ ],
+ [
+ "it",
+ "en"
+ ],
+ [
+ "ite",
+ "n"
+ ],
+ [
+ "i",
+ "ten"
+ ],
+ [
+ "▁me",
+ "chan"
+ ],
+ [
+ "▁d",
+ "urch"
+ ],
+ [
+ "▁dur",
+ "ch"
+ ],
+ [
+ "▁d",
+ "ell"
+ ],
+ [
+ "▁de",
+ "ll"
+ ],
+ [
+ "▁del",
+ "l"
+ ],
+ [
+ "▁rais",
+ "ed"
+ ],
+ [
+ "▁raise",
+ "d"
+ ],
+ [
+ "▁ra",
+ "ised"
+ ],
+ [
+ "▁we",
+ "ak"
+ ],
+ [
+ "▁",
+ "weak"
+ ],
+ [
+ "▁D",
+ "u"
+ ],
+ [
+ "▁",
+ "Du"
+ ],
+ [
+ "gr",
+ "ad"
+ ],
+ [
+ "gra",
+ "d"
+ ],
+ [
+ "g",
+ "rad"
+ ],
+ [
+ "▁sc",
+ "ene"
+ ],
+ [
+ "▁scen",
+ "e"
+ ],
+ [
+ "▁",
+ "scene"
+ ],
+ [
+ "pos",
+ "s"
+ ],
+ [
+ "po",
+ "ss"
+ ],
+ [
+ "p",
+ "oss"
+ ],
+ [
+ "▁t",
+ "on"
+ ],
+ [
+ "▁to",
+ "n"
+ ],
+ [
+ "▁",
+ "ton"
+ ],
+ [
+ "▁e",
+ "arth"
+ ],
+ [
+ "▁ear",
+ "th"
+ ],
+ [
+ "ul",
+ "ations"
+ ],
+ [
+ "ulation",
+ "s"
+ ],
+ [
+ "▁str",
+ "ength"
+ ],
+ [
+ "▁stre",
+ "ngth"
+ ],
+ [
+ "▁streng",
+ "th"
+ ],
+ [
+ "ak",
+ "ed"
+ ],
+ [
+ "ake",
+ "d"
+ ],
+ [
+ "a",
+ "ked"
+ ],
+ [
+ "▁re",
+ "main"
+ ],
+ [
+ "▁rem",
+ "ain"
+ ],
+ [
+ "▁B",
+ "i"
+ ],
+ [
+ "▁",
+ "Bi"
+ ],
+ [
+ "▁custom",
+ "er"
+ ],
+ [
+ "▁cust",
+ "omer"
+ ],
+ [
+ "▁",
+ "customer"
+ ],
+ [
+ "ran",
+ "ge"
+ ],
+ [
+ "r",
+ "ange"
+ ],
+ [
+ "▁inter",
+ "ested"
+ ],
+ [
+ "▁interest",
+ "ed"
+ ],
+ [
+ "ON",
+ "E"
+ ],
+ [
+ "O",
+ "NE"
+ ],
+ [
+ "▁c",
+ "off"
+ ],
+ [
+ "▁co",
+ "ff"
+ ],
+ [
+ "re",
+ "quire"
+ ],
+ [
+ "requ",
+ "ire"
+ ],
+ [
+ "▁On",
+ "ly"
+ ],
+ [
+ "▁",
+ "Only"
+ ],
+ [
+ "▁W",
+ "eb"
+ ],
+ [
+ "▁We",
+ "b"
+ ],
+ [
+ "▁",
+ "Web"
+ ],
+ [
+ "▁f",
+ "arm"
+ ],
+ [
+ "▁far",
+ "m"
+ ],
+ [
+ "▁fa",
+ "rm"
+ ],
+ [
+ "▁act",
+ "ivity"
+ ],
+ [
+ "▁activ",
+ "ity"
+ ],
+ [
+ "▁",
+ "activity"
+ ],
+ [
+ "▁r",
+ "out"
+ ],
+ [
+ "▁ro",
+ "ut"
+ ],
+ [
+ "▁rou",
+ "t"
+ ],
+ [
+ "bl",
+ "ing"
+ ],
+ [
+ "b",
+ "ling"
+ ],
+ [
+ "S",
+ "Y"
+ ],
+ [
+ "▁Rich",
+ "ard"
+ ],
+ [
+ "▁Ric",
+ "hard"
+ ],
+ [
+ "▁R",
+ "ef"
+ ],
+ [
+ "▁Re",
+ "f"
+ ],
+ [
+ "▁",
+ "Ref"
+ ],
+ [
+ "▁ко",
+ "н"
+ ],
+ [
+ "▁к",
+ "он"
+ ],
+ [
+ "▁",
+ "кон"
+ ],
+ [
+ "▁j",
+ "un"
+ ],
+ [
+ "▁ju",
+ "n"
+ ],
+ [
+ "bo",
+ "rn"
+ ],
+ [
+ "bor",
+ "n"
+ ],
+ [
+ "b",
+ "orn"
+ ],
+ [
+ "ij",
+ "n"
+ ],
+ [
+ "Config",
+ "uration"
+ ],
+ [
+ "um",
+ "an"
+ ],
+ [
+ "uma",
+ "n"
+ ],
+ [
+ "u",
+ "man"
+ ],
+ [
+ "E",
+ "E"
+ ],
+ [
+ "▁mar",
+ "ried"
+ ],
+ [
+ "▁З",
+ "а"
+ ],
+ [
+ "▁",
+ "За"
+ ],
+ [
+ "▁f",
+ "at"
+ ],
+ [
+ "▁fa",
+ "t"
+ ],
+ [
+ "▁k",
+ "id"
+ ],
+ [
+ "▁ki",
+ "d"
+ ],
+ [
+ "▁T",
+ "ur"
+ ],
+ [
+ "▁Tu",
+ "r"
+ ],
+ [
+ "▁",
+ "Tur"
+ ],
+ [
+ "▁off",
+ "ered"
+ ],
+ [
+ "▁offer",
+ "ed"
+ ],
+ [
+ "ni",
+ "c"
+ ],
+ [
+ "n",
+ "ic"
+ ],
+ [
+ "▁B",
+ "ig"
+ ],
+ [
+ "▁Bi",
+ "g"
+ ],
+ [
+ "▁",
+ "Big"
+ ],
+ [
+ "Ga",
+ "mma"
+ ],
+ [
+ "G",
+ "amma"
+ ],
+ [
+ "▁He",
+ "alth"
+ ],
+ [
+ "▁",
+ "Health"
+ ],
+ [
+ "▁T",
+ "R"
+ ],
+ [
+ "▁",
+ "TR"
+ ],
+ [
+ "▁s",
+ "ię"
+ ],
+ [
+ "▁si",
+ "ę"
+ ],
+ [
+ "▁const",
+ "ruction"
+ ],
+ [
+ "▁construct",
+ "ion"
+ ],
+ [
+ "▁constr",
+ "uction"
+ ],
+ [
+ "▁constru",
+ "ction"
+ ],
+ [
+ "▁",
+ "construction"
+ ],
+ [
+ "▁Ch",
+ "urch"
+ ],
+ [
+ "▁B",
+ "et"
+ ],
+ [
+ "▁Be",
+ "t"
+ ],
+ [
+ "▁",
+ "Bet"
+ ],
+ [
+ "bu",
+ "s"
+ ],
+ [
+ "b",
+ "us"
+ ],
+ [
+ "▁e",
+ "arn"
+ ],
+ [
+ "▁ear",
+ "n"
+ ],
+ [
+ "ri",
+ "ct"
+ ],
+ [
+ "ric",
+ "t"
+ ],
+ [
+ "r",
+ "ict"
+ ],
+ [
+ "▁п",
+ "ра"
+ ],
+ [
+ "▁пр",
+ "а"
+ ],
+ [
+ "▁",
+ "пра"
+ ],
+ [
+ "▁br",
+ "ain"
+ ],
+ [
+ "▁bra",
+ "in"
+ ],
+ [
+ "▁f",
+ "ra"
+ ],
+ [
+ "▁fr",
+ "a"
+ ],
+ [
+ "▁O",
+ "p"
+ ],
+ [
+ "▁",
+ "Op"
+ ],
+ [
+ "FI",
+ "G"
+ ],
+ [
+ "F",
+ "IG"
+ ],
+ [
+ "em",
+ "a"
+ ],
+ [
+ "e",
+ "ma"
+ ],
+ [
+ "▁Europe",
+ "an"
+ ],
+ [
+ "▁S",
+ "aint"
+ ],
+ [
+ "▁Sa",
+ "int"
+ ],
+ [
+ "▁",
+ "Saint"
+ ],
+ [
+ "AR",
+ "E"
+ ],
+ [
+ "A",
+ "RE"
+ ],
+ [
+ "ur",
+ "i"
+ ],
+ [
+ "u",
+ "ri"
+ ],
+ [
+ "▁R",
+ "iver"
+ ],
+ [
+ "{",
+ "}"
+ ],
+ [
+ "▁s",
+ "itting"
+ ],
+ [
+ "▁sit",
+ "ting"
+ ],
+ [
+ "▁under",
+ "standing"
+ ],
+ [
+ "▁understand",
+ "ing"
+ ],
+ [
+ "▁pl",
+ "ans"
+ ],
+ [
+ "▁plan",
+ "s"
+ ],
+ [
+ "rop",
+ "ri"
+ ],
+ [
+ "▁old",
+ "er"
+ ],
+ [
+ "▁ol",
+ "der"
+ ],
+ [
+ "▁",
+ "older"
+ ],
+ [
+ "▁pres",
+ "sure"
+ ],
+ [
+ "▁press",
+ "ure"
+ ],
+ [
+ "Im",
+ "pl"
+ ],
+ [
+ "Imp",
+ "l"
+ ],
+ [
+ "▁pe",
+ "ace"
+ ],
+ [
+ "Conne",
+ "ction"
+ ],
+ [
+ "Conn",
+ "ection"
+ ],
+ [
+ "Connect",
+ "ion"
+ ],
+ [
+ "▁f",
+ "i"
+ ],
+ [
+ "▁",
+ "fi"
+ ],
+ [
+ "ri",
+ "ch"
+ ],
+ [
+ "ric",
+ "h"
+ ],
+ [
+ "r",
+ "ich"
+ ],
+ [
+ "▁sh",
+ "ut"
+ ],
+ [
+ "ap",
+ "ers"
+ ],
+ [
+ "ape",
+ "rs"
+ ],
+ [
+ "aper",
+ "s"
+ ],
+ [
+ "a",
+ "pers"
+ ],
+ [
+ "Po",
+ "rt"
+ ],
+ [
+ "P",
+ "ort"
+ ],
+ [
+ "▁L",
+ "ook"
+ ],
+ [
+ "▁Lo",
+ "ok"
+ ],
+ [
+ "▁",
+ "Look"
+ ],
+ [
+ "ri",
+ "m"
+ ],
+ [
+ "r",
+ "im"
+ ],
+ [
+ "au",
+ "th"
+ ],
+ [
+ "aut",
+ "h"
+ ],
+ [
+ "a",
+ "uth"
+ ],
+ [
+ "au",
+ "to"
+ ],
+ [
+ "aut",
+ "o"
+ ],
+ [
+ "a",
+ "uto"
+ ],
+ [
+ "▁high",
+ "ly"
+ ],
+ [
+ "▁un",
+ "less"
+ ],
+ [
+ "▁W",
+ "al"
+ ],
+ [
+ "▁Wa",
+ "l"
+ ],
+ [
+ "▁re",
+ "n"
+ ],
+ [
+ "▁r",
+ "en"
+ ],
+ [
+ "▁",
+ "ren"
+ ],
+ [
+ "w",
+ "s"
+ ],
+ [
+ "▁c",
+ "ore"
+ ],
+ [
+ "▁co",
+ "re"
+ ],
+ [
+ "▁cor",
+ "e"
+ ],
+ [
+ "▁",
+ "core"
+ ],
+ [
+ "(",
+ "-"
+ ],
+ [
+ "▁c",
+ "lim"
+ ],
+ [
+ "▁cl",
+ "im"
+ ],
+ [
+ "ru",
+ "it"
+ ],
+ [
+ "r",
+ "uit"
+ ],
+ [
+ "▁call",
+ "back"
+ ],
+ [
+ "▁",
+ "callback"
+ ],
+ [
+ "he",
+ "st"
+ ],
+ [
+ "hes",
+ "t"
+ ],
+ [
+ "h",
+ "est"
+ ],
+ [
+ "▁Char",
+ "les"
+ ],
+ [
+ "▁Charl",
+ "es"
+ ],
+ [
+ "▁L",
+ "ong"
+ ],
+ [
+ "▁Lo",
+ "ng"
+ ],
+ [
+ "▁",
+ "Long"
+ ],
+ [
+ "}",
+ "="
+ ],
+ [
+ "ъ",
+ "р"
+ ],
+ [
+ "▁sh",
+ "ared"
+ ],
+ [
+ "▁share",
+ "d"
+ ],
+ [
+ "▁shar",
+ "ed"
+ ],
+ [
+ "▁sha",
+ "red"
+ ],
+ [
+ "▁",
+ "shared"
+ ],
+ [
+ "ul",
+ "ated"
+ ],
+ [
+ "ula",
+ "ted"
+ ],
+ [
+ "ulate",
+ "d"
+ ],
+ [
+ "gorith",
+ "m"
+ ],
+ [
+ "▁H",
+ "ome"
+ ],
+ [
+ "▁Ho",
+ "me"
+ ],
+ [
+ "▁Hom",
+ "e"
+ ],
+ [
+ "▁",
+ "Home"
+ ],
+ [
+ "▁vill",
+ "age"
+ ],
+ [
+ "▁vil",
+ "lage"
+ ],
+ [
+ "ee",
+ "s"
+ ],
+ [
+ "e",
+ "es"
+ ],
+ [
+ "s",
+ "v"
+ ],
+ [
+ "▁rest",
+ "aur"
+ ],
+ [
+ "re",
+ "y"
+ ],
+ [
+ "r",
+ "ey"
+ ],
+ [
+ "▁C",
+ "ast"
+ ],
+ [
+ "▁Cas",
+ "t"
+ ],
+ [
+ "▁Ca",
+ "st"
+ ],
+ [
+ "▁",
+ "Cast"
+ ],
+ [
+ "▁P",
+ "erson"
+ ],
+ [
+ "▁Per",
+ "son"
+ ],
+ [
+ "▁Pers",
+ "on"
+ ],
+ [
+ "▁",
+ "Person"
+ ],
+ [
+ "ки",
+ "й"
+ ],
+ [
+ "▁organ",
+ "iz"
+ ],
+ [
+ "▁R",
+ "ad"
+ ],
+ [
+ "▁Ra",
+ "d"
+ ],
+ [
+ "▁",
+ "Rad"
+ ],
+ [
+ "pon",
+ "ents"
+ ],
+ [
+ "ponent",
+ "s"
+ ],
+ [
+ "▁wer",
+ "den"
+ ],
+ [
+ "▁werd",
+ "en"
+ ],
+ [
+ "▁b",
+ "ow"
+ ],
+ [
+ "▁bo",
+ "w"
+ ],
+ [
+ "▁",
+ "bow"
+ ],
+ [
+ "se",
+ "n"
+ ],
+ [
+ "s",
+ "en"
+ ],
+ [
+ "am",
+ "i"
+ ],
+ [
+ "a",
+ "mi"
+ ],
+ [
+ "Inter",
+ "face"
+ ],
+ [
+ "▁b",
+ "asis"
+ ],
+ [
+ "▁bas",
+ "is"
+ ],
+ [
+ "▁ba",
+ "sis"
+ ],
+ [
+ "▁Comp",
+ "any"
+ ],
+ [
+ "▁Compan",
+ "y"
+ ],
+ [
+ "▁",
+ "Company"
+ ],
+ [
+ "er",
+ "nel"
+ ],
+ [
+ "ern",
+ "el"
+ ],
+ [
+ "erne",
+ "l"
+ ],
+ [
+ "it",
+ "u"
+ ],
+ [
+ "i",
+ "tu"
+ ],
+ [
+ "Has",
+ "h"
+ ],
+ [
+ "Ha",
+ "sh"
+ ],
+ [
+ "H",
+ "ash"
+ ],
+ [
+ "▁a",
+ "an"
+ ],
+ [
+ "▁",
+ "х"
+ ],
+ [
+ "▁s",
+ "mile"
+ ],
+ [
+ "▁sm",
+ "ile"
+ ],
+ [
+ "x",
+ "ml"
+ ],
+ [
+ "▁s",
+ "cen"
+ ],
+ [
+ "▁sc",
+ "en"
+ ],
+ [
+ "am",
+ "m"
+ ],
+ [
+ "a",
+ "mm"
+ ],
+ [
+ "to",
+ "ol"
+ ],
+ [
+ "too",
+ "l"
+ ],
+ [
+ "t",
+ "ool"
+ ],
+ [
+ "ar",
+ "ia"
+ ],
+ [
+ "ari",
+ "a"
+ ],
+ [
+ "a",
+ "ria"
+ ],
+ [
+ "▁acc",
+ "ur"
+ ],
+ [
+ "▁ac",
+ "cur"
+ ],
+ [
+ "▁",
+ "accur"
+ ],
+ [
+ "set",
+ "tings"
+ ],
+ [
+ "setting",
+ "s"
+ ],
+ [
+ "▁Jes",
+ "us"
+ ],
+ [
+ "ac",
+ "ement"
+ ],
+ [
+ "ace",
+ "ment"
+ ],
+ [
+ "po",
+ "wer"
+ ],
+ [
+ "pow",
+ "er"
+ ],
+ [
+ "p",
+ "ower"
+ ],
+ [
+ "(",
+ "!"
+ ],
+ [
+ "▁c",
+ "alls"
+ ],
+ [
+ "▁call",
+ "s"
+ ],
+ [
+ "▁cal",
+ "ls"
+ ],
+ [
+ "▁",
+ "calls"
+ ],
+ [
+ "▁bas",
+ "ic"
+ ],
+ [
+ "▁",
+ "basic"
+ ],
+ [
+ "▁set",
+ "tings"
+ ],
+ [
+ "▁sett",
+ "ings"
+ ],
+ [
+ "▁setting",
+ "s"
+ ],
+ [
+ "▁",
+ "settings"
+ ],
+ [
+ "ri",
+ "pt"
+ ],
+ [
+ "rip",
+ "t"
+ ],
+ [
+ "r",
+ "ipt"
+ ],
+ [
+ "po",
+ "ol"
+ ],
+ [
+ "p",
+ "ool"
+ ],
+ [
+ "ct",
+ "ors"
+ ],
+ [
+ "ctor",
+ "s"
+ ],
+ [
+ "▁Found",
+ "ation"
+ ],
+ [
+ "▁",
+ "Foundation"
+ ],
+ [
+ "▁we",
+ "ap"
+ ],
+ [
+ "KE",
+ "Y"
+ ],
+ [
+ "K",
+ "EY"
+ ],
+ [
+ "fo",
+ "ot"
+ ],
+ [
+ "foo",
+ "t"
+ ],
+ [
+ "f",
+ "oot"
+ ],
+ [
+ "▁r",
+ "adio"
+ ],
+ [
+ "▁rad",
+ "io"
+ ],
+ [
+ "▁radi",
+ "o"
+ ],
+ [
+ "▁",
+ "radio"
+ ],
+ [
+ "▁hel",
+ "ped"
+ ],
+ [
+ "▁help",
+ "ed"
+ ],
+ [
+ "ma",
+ "nn"
+ ],
+ [
+ "man",
+ "n"
+ ],
+ [
+ "m",
+ "ann"
+ ],
+ [
+ "▁j",
+ "ump"
+ ],
+ [
+ "▁ju",
+ "mp"
+ ],
+ [
+ "▁t",
+ "ick"
+ ],
+ [
+ "▁ti",
+ "ck"
+ ],
+ [
+ "▁",
+ "tick"
+ ],
+ [
+ "▁gr",
+ "owing"
+ ],
+ [
+ "▁grow",
+ "ing"
+ ],
+ [
+ "▁gro",
+ "wing"
+ ],
+ [
+ "at",
+ "en"
+ ],
+ [
+ "ate",
+ "n"
+ ],
+ [
+ "a",
+ "ten"
+ ],
+ [
+ "re",
+ "al"
+ ],
+ [
+ "rea",
+ "l"
+ ],
+ [
+ "▁incre",
+ "asing"
+ ],
+ [
+ "Dev",
+ "ice"
+ ],
+ [
+ "var",
+ "epsilon"
+ ],
+ [
+ "vare",
+ "psilon"
+ ],
+ [
+ "▁s",
+ "ets"
+ ],
+ [
+ "▁se",
+ "ts"
+ ],
+ [
+ "▁set",
+ "s"
+ ],
+ [
+ "▁",
+ "sets"
+ ],
+ [
+ "▁adv",
+ "ant"
+ ],
+ [
+ "Op",
+ "en"
+ ],
+ [
+ "O",
+ "pen"
+ ],
+ [
+ "▁re",
+ "asons"
+ ],
+ [
+ "▁reason",
+ "s"
+ ],
+ [
+ "▁sup",
+ "posed"
+ ],
+ [
+ "▁supp",
+ "osed"
+ ],
+ [
+ "▁suppose",
+ "d"
+ ],
+ [
+ "oe",
+ "s"
+ ],
+ [
+ "o",
+ "es"
+ ],
+ [
+ "ed",
+ "e"
+ ],
+ [
+ "e",
+ "de"
+ ],
+ [
+ "te",
+ "en"
+ ],
+ [
+ "tee",
+ "n"
+ ],
+ [
+ "t",
+ "een"
+ ],
+ [
+ "if",
+ "def"
+ ],
+ [
+ "▁de",
+ "lete"
+ ],
+ [
+ "▁del",
+ "ete"
+ ],
+ [
+ "▁delet",
+ "e"
+ ],
+ [
+ "▁",
+ "delete"
+ ],
+ [
+ "▁&",
+ "="
+ ],
+ [
+ "▁",
+ "&="
+ ],
+ [
+ "▁B",
+ "ill"
+ ],
+ [
+ "▁Bi",
+ "ll"
+ ],
+ [
+ "▁Bil",
+ "l"
+ ],
+ [
+ "▁",
+ "Bill"
+ ],
+ [
+ "▁a",
+ "im"
+ ],
+ [
+ "▁ai",
+ "m"
+ ],
+ [
+ "▁",
+ "aim"
+ ],
+ [
+ "▁O",
+ "k"
+ ],
+ [
+ "▁",
+ "Ok"
+ ],
+ [
+ "▁A",
+ "v"
+ ],
+ [
+ "▁",
+ "Av"
+ ],
+ [
+ "re",
+ "ci"
+ ],
+ [
+ "rec",
+ "i"
+ ],
+ [
+ "ac",
+ "ks"
+ ],
+ [
+ "ack",
+ "s"
+ ],
+ [
+ "a",
+ "cks"
+ ],
+ [
+ "is",
+ "te"
+ ],
+ [
+ "ist",
+ "e"
+ ],
+ [
+ "i",
+ "ste"
+ ],
+ [
+ "Pro",
+ "perties"
+ ],
+ [
+ "▁t",
+ "mp"
+ ],
+ [
+ "▁tm",
+ "p"
+ ],
+ [
+ "▁",
+ "tmp"
+ ],
+ [
+ "▁d",
+ "ei"
+ ],
+ [
+ "▁de",
+ "i"
+ ],
+ [
+ "PE",
+ "R"
+ ],
+ [
+ "P",
+ "ER"
+ ],
+ [
+ "D",
+ "C"
+ ],
+ [
+ "st",
+ "a"
+ ],
+ [
+ "s",
+ "ta"
+ ],
+ [
+ "ни",
+ "и"
+ ],
+ [
+ "▁lim",
+ "ited"
+ ],
+ [
+ "▁limit",
+ "ed"
+ ],
+ [
+ "▁",
+ "limited"
+ ],
+ [
+ "▁great",
+ "er"
+ ],
+ [
+ "▁gre",
+ "ater"
+ ],
+ [
+ "de",
+ "scription"
+ ],
+ [
+ "des",
+ "cription"
+ ],
+ [
+ "or",
+ "i"
+ ],
+ [
+ "o",
+ "ri"
+ ],
+ [
+ "ain",
+ "ts"
+ ],
+ [
+ "aint",
+ "s"
+ ],
+ [
+ "▁h",
+ "y"
+ ],
+ [
+ "▁",
+ "hy"
+ ],
+ [
+ "▁M",
+ "el"
+ ],
+ [
+ "▁Me",
+ "l"
+ ],
+ [
+ "▁C",
+ "H"
+ ],
+ [
+ "▁",
+ "CH"
+ ],
+ [
+ "con",
+ "s"
+ ],
+ [
+ "co",
+ "ns"
+ ],
+ [
+ "c",
+ "ons"
+ ],
+ [
+ "▁sur",
+ "round"
+ ],
+ [
+ "▁W",
+ "ho"
+ ],
+ [
+ "▁Wh",
+ "o"
+ ],
+ [
+ "▁",
+ "Who"
+ ],
+ [
+ "ar",
+ "c"
+ ],
+ [
+ "a",
+ "rc"
+ ],
+ [
+ "▁te",
+ "lev"
+ ],
+ [
+ "▁tele",
+ "v"
+ ],
+ [
+ "▁tel",
+ "ev"
+ ],
+ [
+ "it",
+ "ution"
+ ],
+ [
+ "itut",
+ "ion"
+ ],
+ [
+ "▁e",
+ "qual"
+ ],
+ [
+ "▁equ",
+ "al"
+ ],
+ [
+ "▁eq",
+ "ual"
+ ],
+ [
+ "▁",
+ "equal"
+ ],
+ [
+ "к",
+ "і"
+ ],
+ [
+ "▁Is",
+ "rael"
+ ],
+ [
+ "ä",
+ "h"
+ ],
+ [
+ "▁C",
+ "aption"
+ ],
+ [
+ "▁Capt",
+ "ion"
+ ],
+ [
+ "▁Ca",
+ "ption"
+ ],
+ [
+ "▁ex",
+ "erc"
+ ],
+ [
+ "em",
+ "por"
+ ],
+ [
+ "emp",
+ "or"
+ ],
+ [
+ "▁+",
+ "+"
+ ],
+ [
+ "▁",
+ "++"
+ ],
+ [
+ "▁l",
+ "ib"
+ ],
+ [
+ "▁li",
+ "b"
+ ],
+ [
+ "▁",
+ "lib"
+ ],
+ [
+ "ma",
+ "ke"
+ ],
+ [
+ "m",
+ "ake"
+ ],
+ [
+ "▁M",
+ "A"
+ ],
+ [
+ "▁",
+ "MA"
+ ],
+ [
+ "co",
+ "py"
+ ],
+ [
+ "cop",
+ "y"
+ ],
+ [
+ "c",
+ "opy"
+ ],
+ [
+ "f",
+ "riend"
+ ],
+ [
+ "▁ко",
+ "то"
+ ],
+ [
+ "▁",
+ "кото"
+ ],
+ [
+ "▁dam",
+ "age"
+ ],
+ [
+ "▁\\",
+ ","
+ ],
+ [
+ "▁",
+ "\\,"
+ ],
+ [
+ "od",
+ "ed"
+ ],
+ [
+ "ode",
+ "d"
+ ],
+ [
+ "o",
+ "ded"
+ ],
+ [
+ "▁n",
+ "one"
+ ],
+ [
+ "▁no",
+ "ne"
+ ],
+ [
+ "▁non",
+ "e"
+ ],
+ [
+ "▁",
+ "none"
+ ],
+ [
+ "▁ev",
+ "alu"
+ ],
+ [
+ "▁eval",
+ "u"
+ ],
+ [
+ "▁",
+ "evalu"
+ ],
+ [
+ "st",
+ "on"
+ ],
+ [
+ "sto",
+ "n"
+ ],
+ [
+ "s",
+ "ton"
+ ],
+ [
+ ">",
+ ","
+ ],
+ [
+ "FO",
+ "R"
+ ],
+ [
+ "F",
+ "OR"
+ ],
+ [
+ "▁n",
+ "orm"
+ ],
+ [
+ "▁no",
+ "rm"
+ ],
+ [
+ "▁nor",
+ "m"
+ ],
+ [
+ "▁",
+ "norm"
+ ],
+ [
+ "ap",
+ "pe"
+ ],
+ [
+ "app",
+ "e"
+ ],
+ [
+ "a",
+ "ppe"
+ ],
+ [
+ "S",
+ "ession"
+ ],
+ [
+ "▁ad",
+ "ult"
+ ],
+ [
+ "▁h",
+ "ospital"
+ ],
+ [
+ "▁hosp",
+ "ital"
+ ],
+ [
+ "▁recomm",
+ "end"
+ ],
+ [
+ "pro",
+ "perty"
+ ],
+ [
+ "ste",
+ "in"
+ ],
+ [
+ "fin",
+ "al"
+ ],
+ [
+ "fi",
+ "nal"
+ ],
+ [
+ "f",
+ "inal"
+ ],
+ [
+ "▁n",
+ "u"
+ ],
+ [
+ "▁",
+ "nu"
+ ],
+ [
+ "se",
+ "cond"
+ ],
+ [
+ "sec",
+ "ond"
+ ],
+ [
+ "▁a",
+ "spect"
+ ],
+ [
+ "▁as",
+ "pect"
+ ],
+ [
+ "▁asp",
+ "ect"
+ ],
+ [
+ "\")",
+ "]"
+ ],
+ [
+ "\"",
+ ")]"
+ ],
+ [
+ "же",
+ "н"
+ ],
+ [
+ "ж",
+ "ен"
+ ],
+ [
+ "am",
+ "ento"
+ ],
+ [
+ "ament",
+ "o"
+ ],
+ [
+ "amen",
+ "to"
+ ],
+ [
+ "▁r",
+ "ac"
+ ],
+ [
+ "▁ra",
+ "c"
+ ],
+ [
+ "▁",
+ "rac"
+ ],
+ [
+ "sa",
+ "ve"
+ ],
+ [
+ "s",
+ "ave"
+ ],
+ [
+ "▁foot",
+ "ball"
+ ],
+ [
+ "A",
+ "b"
+ ],
+ [
+ "un",
+ "gs"
+ ],
+ [
+ "ung",
+ "s"
+ ],
+ [
+ "ab",
+ "il"
+ ],
+ [
+ "abi",
+ "l"
+ ],
+ [
+ "a",
+ "bil"
+ ],
+ [
+ "▁Ar",
+ "ch"
+ ],
+ [
+ "▁Arc",
+ "h"
+ ],
+ [
+ "▁",
+ "Arch"
+ ],
+ [
+ "sys",
+ "tem"
+ ],
+ [
+ "s",
+ "ystem"
+ ],
+ [
+ "hi",
+ "st"
+ ],
+ [
+ "his",
+ "t"
+ ],
+ [
+ "h",
+ "ist"
+ ],
+ [
+ "▁l",
+ "uck"
+ ],
+ [
+ "▁lu",
+ "ck"
+ ],
+ [
+ "▁luc",
+ "k"
+ ],
+ [
+ "re",
+ "nder"
+ ],
+ [
+ "ren",
+ "der"
+ ],
+ [
+ "rend",
+ "er"
+ ],
+ [
+ "r",
+ "ender"
+ ],
+ [
+ "▁se",
+ "in"
+ ],
+ [
+ "▁sei",
+ "n"
+ ],
+ [
+ "ion",
+ "i"
+ ],
+ [
+ "io",
+ "ni"
+ ],
+ [
+ "i",
+ "oni"
+ ],
+ [
+ "▁r",
+ "ot"
+ ],
+ [
+ "▁ro",
+ "t"
+ ],
+ [
+ "▁",
+ "rot"
+ ],
+ [
+ "▁cor",
+ "ner"
+ ],
+ [
+ "▁corn",
+ "er"
+ ],
+ [
+ "▁app",
+ "ropri"
+ ],
+ [
+ "▁ap",
+ "propri"
+ ],
+ [
+ "▁",
+ "appropri"
+ ],
+ [
+ "▁Soft",
+ "ware"
+ ],
+ [
+ "▁t",
+ "ele"
+ ],
+ [
+ "▁te",
+ "le"
+ ],
+ [
+ "▁tel",
+ "e"
+ ],
+ [
+ "▁",
+ "tele"
+ ],
+ [
+ "De",
+ "lete"
+ ],
+ [
+ "Dele",
+ "te"
+ ],
+ [
+ "Del",
+ "ete"
+ ],
+ [
+ "▁Acc",
+ "ording"
+ ],
+ [
+ "▁pr",
+ "ison"
+ ],
+ [
+ "▁pri",
+ "son"
+ ],
+ [
+ "▁",
+ "prison"
+ ],
+ [
+ "▁l",
+ "ic"
+ ],
+ [
+ "▁li",
+ "c"
+ ],
+ [
+ "▁",
+ "lic"
+ ],
+ [
+ "▁м",
+ "и"
+ ],
+ [
+ "▁",
+ "ми"
+ ],
+ [
+ "ter",
+ "m"
+ ],
+ [
+ "te",
+ "rm"
+ ],
+ [
+ "t",
+ "erm"
+ ],
+ [
+ "se",
+ "ts"
+ ],
+ [
+ "set",
+ "s"
+ ],
+ [
+ "s",
+ "ets"
+ ],
+ [
+ "▁v",
+ "el"
+ ],
+ [
+ "▁ve",
+ "l"
+ ],
+ [
+ "▁",
+ "vel"
+ ],
+ [
+ "▁r",
+ "ank"
+ ],
+ [
+ "▁ran",
+ "k"
+ ],
+ [
+ "▁",
+ "rank"
+ ],
+ [
+ "▁ex",
+ "isting"
+ ],
+ [
+ "▁exist",
+ "ing"
+ ],
+ [
+ "▁",
+ "existing"
+ ],
+ [
+ "▁V",
+ "ir"
+ ],
+ [
+ "▁Vi",
+ "r"
+ ],
+ [
+ "▁t",
+ "rip"
+ ],
+ [
+ "▁tr",
+ "ip"
+ ],
+ [
+ "▁tri",
+ "p"
+ ],
+ [
+ "▁м",
+ "у"
+ ],
+ [
+ "▁",
+ "му"
+ ],
+ [
+ "av",
+ "ax"
+ ],
+ [
+ "ava",
+ "x"
+ ],
+ [
+ "▁r",
+ "is"
+ ],
+ [
+ "▁ri",
+ "s"
+ ],
+ [
+ "▁",
+ "ris"
+ ],
+ [
+ "▁def",
+ "ine"
+ ],
+ [
+ "▁defin",
+ "e"
+ ],
+ [
+ "▁",
+ "define"
+ ],
+ [
+ "▁he",
+ "at"
+ ],
+ [
+ "ca",
+ "r"
+ ],
+ [
+ "c",
+ "ar"
+ ],
+ [
+ "▁con",
+ "vert"
+ ],
+ [
+ "▁conv",
+ "ert"
+ ],
+ [
+ "▁conver",
+ "t"
+ ],
+ [
+ "▁conve",
+ "rt"
+ ],
+ [
+ "▁",
+ "convert"
+ ],
+ [
+ "em",
+ "ail"
+ ],
+ [
+ "ema",
+ "il"
+ ],
+ [
+ "e",
+ "mail"
+ ],
+ [
+ "▁U",
+ "nder"
+ ],
+ [
+ "▁Un",
+ "der"
+ ],
+ [
+ "▁Und",
+ "er"
+ ],
+ [
+ "▁",
+ "Under"
+ ],
+ [
+ "▁",
+ "Ш"
+ ],
+ [
+ "▁G",
+ "rand"
+ ],
+ [
+ "▁Gr",
+ "and"
+ ],
+ [
+ "▁Gran",
+ "d"
+ ],
+ [
+ "▁Gra",
+ "nd"
+ ],
+ [
+ "▁ex",
+ "ists"
+ ],
+ [
+ "▁exist",
+ "s"
+ ],
+ [
+ "▁",
+ "exists"
+ ],
+ [
+ "sy",
+ "s"
+ ],
+ [
+ "s",
+ "ys"
+ ],
+ [
+ "ef",
+ "f"
+ ],
+ [
+ "e",
+ "ff"
+ ],
+ [
+ "▁T",
+ "op"
+ ],
+ [
+ "▁To",
+ "p"
+ ],
+ [
+ "▁",
+ "Top"
+ ],
+ [
+ "▁",
+ "č"
+ ],
+ [
+ "▁t",
+ "empor"
+ ],
+ [
+ "▁tem",
+ "por"
+ ],
+ [
+ "▁temp",
+ "or"
+ ],
+ [
+ "▁tempo",
+ "r"
+ ],
+ [
+ "▁arg",
+ "uments"
+ ],
+ [
+ "▁argument",
+ "s"
+ ],
+ [
+ "▁",
+ "arguments"
+ ],
+ [
+ "▁support",
+ "ed"
+ ],
+ [
+ "▁supp",
+ "orted"
+ ],
+ [
+ "▁",
+ "supported"
+ ],
+ [
+ "en",
+ "sed"
+ ],
+ [
+ "ens",
+ "ed"
+ ],
+ [
+ "ense",
+ "d"
+ ],
+ [
+ "▁Franc",
+ "is"
+ ],
+ [
+ "▁co",
+ "ord"
+ ],
+ [
+ "▁",
+ "coord"
+ ],
+ [
+ "▁achie",
+ "ve"
+ ],
+ [
+ "▁N",
+ "ame"
+ ],
+ [
+ "▁Na",
+ "me"
+ ],
+ [
+ "▁Nam",
+ "e"
+ ],
+ [
+ "▁",
+ "Name"
+ ],
+ [
+ "▁J",
+ "ahr"
+ ],
+ [
+ "▁Jah",
+ "r"
+ ],
+ [
+ "▁Ja",
+ "hr"
+ ],
+ [
+ "▁G",
+ "i"
+ ],
+ [
+ "sh",
+ "e"
+ ],
+ [
+ "s",
+ "he"
+ ],
+ [
+ "▁D",
+ "ev"
+ ],
+ [
+ "▁De",
+ "v"
+ ],
+ [
+ "▁",
+ "Dev"
+ ],
+ [
+ "▁a",
+ "lla"
+ ],
+ [
+ "▁al",
+ "la"
+ ],
+ [
+ "▁all",
+ "a"
+ ],
+ [
+ "▁",
+ "alla"
+ ],
+ [
+ "▁W",
+ "IT"
+ ],
+ [
+ "ag",
+ "ment"
+ ],
+ [
+ "c",
+ "ustom"
+ ],
+ [
+ "al",
+ "ls"
+ ],
+ [
+ "all",
+ "s"
+ ],
+ [
+ "&",
+ "&"
+ ],
+ [
+ "W",
+ "E"
+ ],
+ [
+ "▁h",
+ "olding"
+ ],
+ [
+ "▁hold",
+ "ing"
+ ],
+ [
+ "▁hol",
+ "ding"
+ ],
+ [
+ "pro",
+ "totype"
+ ],
+ [
+ "proto",
+ "type"
+ ],
+ [
+ "prot",
+ "otype"
+ ],
+ [
+ "▁f",
+ "ing"
+ ],
+ [
+ "▁fin",
+ "g"
+ ],
+ [
+ "▁fi",
+ "ng"
+ ],
+ [
+ "▁b",
+ "ag"
+ ],
+ [
+ "▁ba",
+ "g"
+ ],
+ [
+ "▁",
+ "bag"
+ ],
+ [
+ "▁Par",
+ "ty"
+ ],
+ [
+ "▁Part",
+ "y"
+ ],
+ [
+ "st",
+ "ack"
+ ],
+ [
+ "sta",
+ "ck"
+ ],
+ [
+ "▁econom",
+ "ic"
+ ],
+ [
+ "▁G",
+ "al"
+ ],
+ [
+ "▁Ga",
+ "l"
+ ],
+ [
+ "id",
+ "ents"
+ ],
+ [
+ "ident",
+ "s"
+ ],
+ [
+ "iden",
+ "ts"
+ ],
+ [
+ "▁J",
+ "un"
+ ],
+ [
+ "▁Ju",
+ "n"
+ ],
+ [
+ "▁sh",
+ "owed"
+ ],
+ [
+ "▁show",
+ "ed"
+ ],
+ [
+ "os",
+ "h"
+ ],
+ [
+ "o",
+ "sh"
+ ],
+ [
+ "▁B",
+ "ay"
+ ],
+ [
+ "▁Ba",
+ "y"
+ ],
+ [
+ "▁",
+ "Bay"
+ ],
+ [
+ "ma",
+ "il"
+ ],
+ [
+ "m",
+ "ail"
+ ],
+ [
+ "▁S",
+ "O"
+ ],
+ [
+ "▁",
+ "SO"
+ ],
+ [
+ "▁\"",
+ "<"
+ ],
+ [
+ "graph",
+ "ics"
+ ],
+ [
+ "▁f",
+ "u"
+ ],
+ [
+ "▁",
+ "fu"
+ ],
+ [
+ "cl",
+ "ick"
+ ],
+ [
+ "cli",
+ "ck"
+ ],
+ [
+ "c",
+ "lick"
+ ],
+ [
+ "▁b",
+ "attle"
+ ],
+ [
+ "▁batt",
+ "le"
+ ],
+ [
+ "▁bat",
+ "tle"
+ ],
+ [
+ "{",
+ "{"
+ ],
+ [
+ "▁E",
+ "vent"
+ ],
+ [
+ "▁Even",
+ "t"
+ ],
+ [
+ "▁Ev",
+ "ent"
+ ],
+ [
+ "▁Eve",
+ "nt"
+ ],
+ [
+ "▁",
+ "Event"
+ ],
+ [
+ "ri",
+ "or"
+ ],
+ [
+ "rio",
+ "r"
+ ],
+ [
+ "r",
+ "ior"
+ ],
+ [
+ "ch",
+ "aft"
+ ],
+ [
+ "cha",
+ "ft"
+ ],
+ [
+ "▁f",
+ "avorite"
+ ],
+ [
+ "▁favor",
+ "ite"
+ ],
+ [
+ "us",
+ "ive"
+ ],
+ [
+ "sup",
+ "port"
+ ],
+ [
+ "supp",
+ "ort"
+ ],
+ [
+ "s",
+ "upport"
+ ],
+ [
+ "b",
+ "m"
+ ],
+ [
+ "K",
+ "ind"
+ ],
+ [
+ "▁saf",
+ "ety"
+ ],
+ [
+ "▁safe",
+ "ty"
+ ],
+ [
+ "▁E",
+ "nt"
+ ],
+ [
+ "▁En",
+ "t"
+ ],
+ [
+ "▁",
+ "Ent"
+ ],
+ [
+ "cu",
+ "p"
+ ],
+ [
+ "c",
+ "up"
+ ],
+ [
+ "▁Austral",
+ "ia"
+ ],
+ [
+ "▁dest",
+ "roy"
+ ],
+ [
+ "▁destro",
+ "y"
+ ],
+ [
+ "▁",
+ "destroy"
+ ],
+ [
+ "▁organ",
+ "ization"
+ ],
+ [
+ "▁organiz",
+ "ation"
+ ],
+ [
+ "id",
+ "en"
+ ],
+ [
+ "ide",
+ "n"
+ ],
+ [
+ "i",
+ "den"
+ ],
+ [
+ "########",
+ "########"
+ ],
+ [
+ "de",
+ "c"
+ ],
+ [
+ "d",
+ "ec"
+ ],
+ [
+ "▁z",
+ "a"
+ ],
+ [
+ "▁",
+ "za"
+ ],
+ [
+ "▁s",
+ "even"
+ ],
+ [
+ "▁se",
+ "ven"
+ ],
+ [
+ "▁",
+ "seven"
+ ],
+ [
+ "ar",
+ "ely"
+ ],
+ [
+ "are",
+ "ly"
+ ],
+ [
+ "arel",
+ "y"
+ ],
+ [
+ "▁f",
+ "lag"
+ ],
+ [
+ "▁fl",
+ "ag"
+ ],
+ [
+ "▁",
+ "flag"
+ ],
+ [
+ "Di",
+ "r"
+ ],
+ [
+ "D",
+ "ir"
+ ],
+ [
+ "▁C",
+ "arl"
+ ],
+ [
+ "▁Car",
+ "l"
+ ],
+ [
+ "▁Ca",
+ "rl"
+ ],
+ [
+ "▁do",
+ "ctor"
+ ],
+ [
+ "▁doc",
+ "tor"
+ ],
+ [
+ "▁var",
+ "iety"
+ ],
+ [
+ "▁vari",
+ "ety"
+ ],
+ [
+ "▁L",
+ "in"
+ ],
+ [
+ "▁Li",
+ "n"
+ ],
+ [
+ "▁",
+ "Lin"
+ ],
+ [
+ "▁t",
+ "om"
+ ],
+ [
+ "▁to",
+ "m"
+ ],
+ [
+ "▁",
+ "tom"
+ ],
+ [
+ "^{",
+ "("
+ ],
+ [
+ "^",
+ "{("
+ ],
+ [
+ "B",
+ "o"
+ ],
+ [
+ "an",
+ "tes"
+ ],
+ [
+ "ant",
+ "es"
+ ],
+ [
+ "ante",
+ "s"
+ ],
+ [
+ "▁m",
+ "ine"
+ ],
+ [
+ "▁min",
+ "e"
+ ],
+ [
+ "▁mi",
+ "ne"
+ ],
+ [
+ "▁",
+ "mine"
+ ],
+ [
+ "▁M",
+ "it"
+ ],
+ [
+ "▁Mi",
+ "t"
+ ],
+ [
+ "▁de",
+ "scribe"
+ ],
+ [
+ "▁desc",
+ "ribe"
+ ],
+ [
+ "▁describ",
+ "e"
+ ],
+ [
+ "Ar",
+ "gs"
+ ],
+ [
+ "Arg",
+ "s"
+ ],
+ [
+ "L",
+ "S"
+ ],
+ [
+ "AP",
+ "I"
+ ],
+ [
+ "A",
+ "PI"
+ ],
+ [
+ "▁L",
+ "uc"
+ ],
+ [
+ "▁Lu",
+ "c"
+ ],
+ [
+ "▁",
+ "Luc"
+ ],
+ [
+ "ph",
+ "one"
+ ],
+ [
+ "▁sc",
+ "ience"
+ ],
+ [
+ "▁",
+ "science"
+ ],
+ [
+ "▁O",
+ "per"
+ ],
+ [
+ "▁Op",
+ "er"
+ ],
+ [
+ "▁",
+ "Oper"
+ ],
+ [
+ "Ne",
+ "xt"
+ ],
+ [
+ "N",
+ "ext"
+ ],
+ [
+ "▁invest",
+ "ig"
+ ],
+ [
+ "▁demon",
+ "str"
+ ],
+ [
+ "▁G",
+ "overn"
+ ],
+ [
+ "▁Go",
+ "vern"
+ ],
+ [
+ "▁object",
+ "s"
+ ],
+ [
+ "▁",
+ "objects"
+ ],
+ [
+ "▁Lou",
+ "is"
+ ],
+ [
+ "▁Lo",
+ "uis"
+ ],
+ [
+ "▁Return",
+ "s"
+ ],
+ [
+ "▁",
+ "Returns"
+ ],
+ [
+ "▁h",
+ "an"
+ ],
+ [
+ "▁ha",
+ "n"
+ ],
+ [
+ "▁",
+ "han"
+ ],
+ [
+ "na",
+ "m"
+ ],
+ [
+ "n",
+ "am"
+ ],
+ [
+ "▁com",
+ "me"
+ ],
+ [
+ "▁comm",
+ "e"
+ ],
+ [
+ "▁pres",
+ "ence"
+ ],
+ [
+ "▁p",
+ "el"
+ ],
+ [
+ "▁pe",
+ "l"
+ ],
+ [
+ "▁",
+ "pel"
+ ],
+ [
+ "▁det",
+ "ect"
+ ],
+ [
+ "▁",
+ "detect"
+ ],
+ [
+ ")",
+ "="
+ ],
+ [
+ "▁Ch",
+ "inese"
+ ],
+ [
+ "▁r",
+ "ich"
+ ],
+ [
+ "▁ri",
+ "ch"
+ ],
+ [
+ "▁ric",
+ "h"
+ ],
+ [
+ "▁",
+ "rich"
+ ],
+ [
+ "▁class",
+ "es"
+ ],
+ [
+ "▁classe",
+ "s"
+ ],
+ [
+ "▁clas",
+ "ses"
+ ],
+ [
+ "▁",
+ "classes"
+ ],
+ [
+ "▁exp",
+ "and"
+ ],
+ [
+ "▁",
+ "expand"
+ ],
+ [
+ "▁D",
+ "om"
+ ],
+ [
+ "▁Do",
+ "m"
+ ],
+ [
+ "▁",
+ "Dom"
+ ],
+ [
+ "▁D",
+ "ec"
+ ],
+ [
+ "▁De",
+ "c"
+ ],
+ [
+ "▁",
+ "Dec"
+ ],
+ [
+ "s",
+ "n"
+ ],
+ [
+ "pe",
+ "ed"
+ ],
+ [
+ "p",
+ "eed"
+ ],
+ [
+ "▁J",
+ "im"
+ ],
+ [
+ "▁Ji",
+ "m"
+ ],
+ [
+ "sh",
+ "ould"
+ ],
+ [
+ "▁Sm",
+ "ith"
+ ],
+ [
+ "▁p",
+ "ages"
+ ],
+ [
+ "▁page",
+ "s"
+ ],
+ [
+ "▁pa",
+ "ges"
+ ],
+ [
+ "▁pag",
+ "es"
+ ],
+ [
+ "▁",
+ "pages"
+ ],
+ [
+ "▁Je",
+ "an"
+ ],
+ [
+ "ri",
+ "cs"
+ ],
+ [
+ "ric",
+ "s"
+ ],
+ [
+ "r",
+ "ics"
+ ],
+ [
+ "▁S",
+ "und"
+ ],
+ [
+ "▁Su",
+ "nd"
+ ],
+ [
+ "▁Sun",
+ "d"
+ ],
+ [
+ "ad",
+ "s"
+ ],
+ [
+ "a",
+ "ds"
+ ],
+ [
+ "▁The",
+ "ir"
+ ],
+ [
+ "un",
+ "icip"
+ ],
+ [
+ "uni",
+ "cip"
+ ],
+ [
+ "unic",
+ "ip"
+ ],
+ [
+ "в",
+ "у"
+ ],
+ [
+ "▁down",
+ "load"
+ ],
+ [
+ "▁",
+ "download"
+ ],
+ [
+ "▁st",
+ "ress"
+ ],
+ [
+ "▁str",
+ "ess"
+ ],
+ [
+ "▁stre",
+ "ss"
+ ],
+ [
+ "▁P",
+ "et"
+ ],
+ [
+ "▁Pe",
+ "t"
+ ],
+ [
+ "▁",
+ "Pet"
+ ],
+ [
+ "me",
+ "nu"
+ ],
+ [
+ "men",
+ "u"
+ ],
+ [
+ "m",
+ "enu"
+ ],
+ [
+ "re",
+ "me"
+ ],
+ [
+ "rem",
+ "e"
+ ],
+ [
+ "r",
+ "eme"
+ ],
+ [
+ "▁com",
+ "pared"
+ ],
+ [
+ "▁comp",
+ "ared"
+ ],
+ [
+ "▁compar",
+ "ed"
+ ],
+ [
+ "▁compare",
+ "d"
+ ],
+ [
+ "St",
+ "e"
+ ],
+ [
+ "S",
+ "te"
+ ],
+ [
+ "IN",
+ "D"
+ ],
+ [
+ "I",
+ "ND"
+ ],
+ [
+ "cont",
+ "ainer"
+ ],
+ [
+ "▁Ind",
+ "ian"
+ ],
+ [
+ "▁India",
+ "n"
+ ],
+ [
+ "or",
+ "en"
+ ],
+ [
+ "ore",
+ "n"
+ ],
+ [
+ "o",
+ "ren"
+ ],
+ [
+ "▁s",
+ "es"
+ ],
+ [
+ "▁se",
+ "s"
+ ],
+ [
+ "▁",
+ "ses"
+ ],
+ [
+ "▁W",
+ "he"
+ ],
+ [
+ "▁Wh",
+ "e"
+ ],
+ [
+ "▁",
+ "Whe"
+ ],
+ [
+ "▁r",
+ "oku"
+ ],
+ [
+ "▁ro",
+ "ku"
+ ],
+ [
+ "▁estab",
+ "lished"
+ ],
+ [
+ "▁establish",
+ "ed"
+ ],
+ [
+ "▁gener",
+ "ally"
+ ],
+ [
+ "▁general",
+ "ly"
+ ],
+ [
+ "▁f",
+ "le"
+ ],
+ [
+ "▁fl",
+ "e"
+ ],
+ [
+ "__",
+ "("
+ ],
+ [
+ "_",
+ "_("
+ ],
+ [
+ "=\"",
+ "+"
+ ],
+ [
+ "=",
+ "\"+"
+ ],
+ [
+ "V",
+ "ar"
+ ],
+ [
+ "▁M",
+ "ake"
+ ],
+ [
+ "▁Ma",
+ "ke"
+ ],
+ [
+ "▁Mak",
+ "e"
+ ],
+ [
+ "▁",
+ "Make"
+ ],
+ [
+ "▁rem",
+ "oved"
+ ],
+ [
+ "▁remove",
+ "d"
+ ],
+ [
+ "▁",
+ "removed"
+ ],
+ [
+ "z",
+ "z"
+ ],
+ [
+ "ü",
+ "n"
+ ],
+ [
+ "▁m",
+ "ix"
+ ],
+ [
+ "▁mi",
+ "x"
+ ],
+ [
+ "▁",
+ "mix"
+ ],
+ [
+ "er",
+ "k"
+ ],
+ [
+ "iat",
+ "ion"
+ ],
+ [
+ "i",
+ "ation"
+ ],
+ [
+ "ou",
+ "ter"
+ ],
+ [
+ "out",
+ "er"
+ ],
+ [
+ "oute",
+ "r"
+ ],
+ [
+ "o",
+ "uter"
+ ],
+ [
+ "S",
+ "K"
+ ],
+ [
+ "▁be",
+ "comes"
+ ],
+ [
+ "▁bec",
+ "omes"
+ ],
+ [
+ "▁become",
+ "s"
+ ],
+ [
+ "▁H",
+ "all"
+ ],
+ [
+ "▁Ha",
+ "ll"
+ ],
+ [
+ "▁Hal",
+ "l"
+ ],
+ [
+ "sc",
+ "ious"
+ ],
+ [
+ "▁w",
+ "atched"
+ ],
+ [
+ "▁watch",
+ "ed"
+ ],
+ [
+ "▁wat",
+ "ched"
+ ],
+ [
+ "▁g",
+ "ather"
+ ],
+ [
+ "▁ga",
+ "ther"
+ ],
+ [
+ "▁",
+ "gather"
+ ],
+ [
+ "▁Res",
+ "ult"
+ ],
+ [
+ "▁",
+ "Result"
+ ],
+ [
+ "pro",
+ "of"
+ ],
+ [
+ "pa",
+ "y"
+ ],
+ [
+ "p",
+ "ay"
+ ],
+ [
+ "▁produ",
+ "ced"
+ ],
+ [
+ "▁produce",
+ "d"
+ ],
+ [
+ "▁prod",
+ "uced"
+ ],
+ [
+ "▁|",
+ "="
+ ],
+ [
+ "▁b",
+ "order"
+ ],
+ [
+ "▁bord",
+ "er"
+ ],
+ [
+ "▁bor",
+ "der"
+ ],
+ [
+ "▁",
+ "border"
+ ],
+ [
+ "▁d",
+ "in"
+ ],
+ [
+ "▁di",
+ "n"
+ ],
+ [
+ "▁s",
+ "cript"
+ ],
+ [
+ "▁sc",
+ "ript"
+ ],
+ [
+ "▁scr",
+ "ipt"
+ ],
+ [
+ "▁",
+ "script"
+ ],
+ [
+ "▁a",
+ "ctions"
+ ],
+ [
+ "▁act",
+ "ions"
+ ],
+ [
+ "▁action",
+ "s"
+ ],
+ [
+ "▁",
+ "actions"
+ ],
+ [
+ "▁m",
+ "as"
+ ],
+ [
+ "▁ma",
+ "s"
+ ],
+ [
+ "▁",
+ "mas"
+ ],
+ [
+ "щ",
+ "а"
+ ],
+ [
+ "oot",
+ "h"
+ ],
+ [
+ "oo",
+ "th"
+ ],
+ [
+ "o",
+ "oth"
+ ],
+ [
+ "▁Te",
+ "chn"
+ ],
+ [
+ "▁Tech",
+ "n"
+ ],
+ [
+ "Js",
+ "on"
+ ],
+ [
+ "J",
+ "son"
+ ],
+ [
+ "▁f",
+ "illed"
+ ],
+ [
+ "▁fil",
+ "led"
+ ],
+ [
+ "▁fill",
+ "ed"
+ ],
+ [
+ "▁",
+ "filled"
+ ],
+ [
+ "де",
+ "н"
+ ],
+ [
+ "д",
+ "ен"
+ ],
+ [
+ "und",
+ "le"
+ ],
+ [
+ "ст",
+ "у"
+ ],
+ [
+ "с",
+ "ту"
+ ],
+ [
+ "To",
+ "ol"
+ ],
+ [
+ "Too",
+ "l"
+ ],
+ [
+ "T",
+ "ool"
+ ],
+ [
+ "▁k",
+ "ing"
+ ],
+ [
+ "▁ki",
+ "ng"
+ ],
+ [
+ "▁kin",
+ "g"
+ ],
+ [
+ "▁",
+ "king"
+ ],
+ [
+ "▁v",
+ "en"
+ ],
+ [
+ "▁ve",
+ "n"
+ ],
+ [
+ "▁",
+ "ven"
+ ],
+ [
+ "st",
+ "ra"
+ ],
+ [
+ "str",
+ "a"
+ ],
+ [
+ "s",
+ "tra"
+ ],
+ [
+ "▁pre",
+ "dict"
+ ],
+ [
+ "▁pred",
+ "ict"
+ ],
+ [
+ "▁",
+ "predict"
+ ],
+ [
+ "▁l",
+ "ui"
+ ],
+ [
+ "▁lu",
+ "i"
+ ],
+ [
+ "▁WAR",
+ "RAN"
+ ],
+ [
+ "▁F",
+ "un"
+ ],
+ [
+ "▁Fu",
+ "n"
+ ],
+ [
+ "▁",
+ "Fun"
+ ],
+ [
+ "Sc",
+ "ript"
+ ],
+ [
+ "S",
+ "cript"
+ ],
+ [
+ "▁power",
+ "ful"
+ ],
+ [
+ "▁l",
+ "ose"
+ ],
+ [
+ "▁lo",
+ "se"
+ ],
+ [
+ "▁los",
+ "e"
+ ],
+ [
+ "at",
+ "ically"
+ ],
+ [
+ "atic",
+ "ally"
+ ],
+ [
+ "▁d",
+ "aily"
+ ],
+ [
+ "▁da",
+ "ily"
+ ],
+ [
+ "▁dai",
+ "ly"
+ ],
+ [
+ "▁r",
+ "ing"
+ ],
+ [
+ "▁ri",
+ "ng"
+ ],
+ [
+ "▁",
+ "ring"
+ ],
+ [
+ "▁ar",
+ "rived"
+ ],
+ [
+ "▁arriv",
+ "ed"
+ ],
+ [
+ "▁arr",
+ "ived"
+ ],
+ [
+ "▁arrive",
+ "d"
+ ],
+ [
+ "St",
+ "ack"
+ ],
+ [
+ "sc",
+ "ope"
+ ],
+ [
+ "s",
+ "cope"
+ ],
+ [
+ "▁B",
+ "ack"
+ ],
+ [
+ "▁Ba",
+ "ck"
+ ],
+ [
+ "▁",
+ "Back"
+ ],
+ [
+ "el",
+ "ij"
+ ],
+ [
+ "eli",
+ "j"
+ ],
+ [
+ "e",
+ "lij"
+ ],
+ [
+ "▁z",
+ "e"
+ ],
+ [
+ "▁",
+ "ze"
+ ],
+ [
+ "ke",
+ "ys"
+ ],
+ [
+ "key",
+ "s"
+ ],
+ [
+ "{",
+ "\""
+ ],
+ [
+ "VI",
+ "D"
+ ],
+ [
+ "V",
+ "ID"
+ ],
+ [
+ "▁l",
+ "icense"
+ ],
+ [
+ "▁lic",
+ "ense"
+ ],
+ [
+ "▁",
+ "license"
+ ],
+ [
+ "wh",
+ "at"
+ ],
+ [
+ "w",
+ "hat"
+ ],
+ [
+ "▁pro",
+ "ced"
+ ],
+ [
+ "▁proc",
+ "ed"
+ ],
+ [
+ "ra",
+ "nt"
+ ],
+ [
+ "ran",
+ "t"
+ ],
+ [
+ "r",
+ "ant"
+ ],
+ [
+ "est",
+ "ival"
+ ],
+ [
+ "ag",
+ "ram"
+ ],
+ [
+ "agr",
+ "am"
+ ],
+ [
+ "agra",
+ "m"
+ ],
+ [
+ "a",
+ "gram"
+ ],
+ [
+ "▁L",
+ "O"
+ ],
+ [
+ "▁",
+ "LO"
+ ],
+ [
+ "▁Hen",
+ "ry"
+ ],
+ [
+ "▁fl",
+ "ags"
+ ],
+ [
+ "▁flag",
+ "s"
+ ],
+ [
+ "▁",
+ "flags"
+ ],
+ [
+ "Do",
+ "wn"
+ ],
+ [
+ "D",
+ "own"
+ ],
+ [
+ "scri",
+ "ption"
+ ],
+ [
+ "script",
+ "ion"
+ ],
+ [
+ "s",
+ "cription"
+ ],
+ [
+ "▁famil",
+ "ies"
+ ],
+ [
+ "▁familie",
+ "s"
+ ],
+ [
+ "is",
+ "se"
+ ],
+ [
+ "iss",
+ "e"
+ ],
+ [
+ "bo",
+ "ur"
+ ],
+ [
+ "b",
+ "our"
+ ],
+ [
+ "▁B",
+ "ur"
+ ],
+ [
+ "▁Bu",
+ "r"
+ ],
+ [
+ "—",
+ "\""
+ ],
+ [
+ "▁b",
+ "rief"
+ ],
+ [
+ "▁br",
+ "ief"
+ ],
+ [
+ "▁",
+ "brief"
+ ],
+ [
+ "▁cre",
+ "ating"
+ ],
+ [
+ "▁creat",
+ "ing"
+ ],
+ [
+ "▁cl",
+ "ients"
+ ],
+ [
+ "▁client",
+ "s"
+ ],
+ [
+ "ran",
+ "gle"
+ ],
+ [
+ "r",
+ "angle"
+ ],
+ [
+ "▁amaz",
+ "ing"
+ ],
+ [
+ "▁s",
+ "ind"
+ ],
+ [
+ "▁si",
+ "nd"
+ ],
+ [
+ "▁sin",
+ "d"
+ ],
+ [
+ "▁cover",
+ "ed"
+ ],
+ [
+ "▁cov",
+ "ered"
+ ],
+ [
+ "▁",
+ "covered"
+ ],
+ [
+ "We",
+ "ll"
+ ],
+ [
+ "W",
+ "ell"
+ ],
+ [
+ "ст",
+ "е"
+ ],
+ [
+ "с",
+ "те"
+ ],
+ [
+ "то",
+ "р"
+ ],
+ [
+ "т",
+ "ор"
+ ],
+ [
+ "▁B",
+ "as"
+ ],
+ [
+ "▁Ba",
+ "s"
+ ],
+ [
+ "▁",
+ "Bas"
+ ],
+ [
+ "to",
+ "tal"
+ ],
+ [
+ "tot",
+ "al"
+ ],
+ [
+ "t",
+ "otal"
+ ],
+ [
+ "▁I",
+ "nit"
+ ],
+ [
+ "▁In",
+ "it"
+ ],
+ [
+ "▁",
+ "Init"
+ ],
+ [
+ "▁s",
+ "and"
+ ],
+ [
+ "▁sa",
+ "nd"
+ ],
+ [
+ "▁san",
+ "d"
+ ],
+ [
+ "Un",
+ "it"
+ ],
+ [
+ "U",
+ "nit"
+ ],
+ [
+ "▁mur",
+ "der"
+ ],
+ [
+ "▁b",
+ "right"
+ ],
+ [
+ "▁br",
+ "ight"
+ ],
+ [
+ "▁brig",
+ "ht"
+ ],
+ [
+ "▁t",
+ "rav"
+ ],
+ [
+ "▁tr",
+ "av"
+ ],
+ [
+ "▁tra",
+ "v"
+ ],
+ [
+ "ic",
+ "ans"
+ ],
+ [
+ "ica",
+ "ns"
+ ],
+ [
+ "ican",
+ "s"
+ ],
+ [
+ "▁att",
+ "ribute"
+ ],
+ [
+ "▁attribut",
+ "e"
+ ],
+ [
+ "▁",
+ "attribute"
+ ],
+ [
+ "f",
+ "c"
+ ],
+ [
+ "▁pl",
+ "aced"
+ ],
+ [
+ "▁place",
+ "d"
+ ],
+ [
+ "▁plac",
+ "ed"
+ ],
+ [
+ "ES",
+ "T"
+ ],
+ [
+ "E",
+ "ST"
+ ],
+ [
+ "Var",
+ "i"
+ ],
+ [
+ "V",
+ "ari"
+ ],
+ [
+ "▁c",
+ "os"
+ ],
+ [
+ "▁co",
+ "s"
+ ],
+ [
+ "▁",
+ "cos"
+ ],
+ [
+ "▁at",
+ "tract"
+ ],
+ [
+ "▁att",
+ "ract"
+ ],
+ [
+ "▁attr",
+ "act"
+ ],
+ [
+ "▁attra",
+ "ct"
+ ],
+ [
+ "an",
+ "el"
+ ],
+ [
+ "ane",
+ "l"
+ ],
+ [
+ "a",
+ "nel"
+ ],
+ [
+ "})",
+ "."
+ ],
+ [
+ "}",
+ ")."
+ ],
+ [
+ "by",
+ "tes"
+ ],
+ [
+ "byte",
+ "s"
+ ],
+ [
+ "▁p",
+ "arse"
+ ],
+ [
+ "▁par",
+ "se"
+ ],
+ [
+ "▁",
+ "parse"
+ ],
+ [
+ "▁be",
+ "long"
+ ],
+ [
+ "▁bel",
+ "ong"
+ ],
+ [
+ "B",
+ "N"
+ ],
+ [
+ "▁S",
+ "ol"
+ ],
+ [
+ "▁So",
+ "l"
+ ],
+ [
+ "P",
+ "o"
+ ],
+ [
+ "`",
+ ","
+ ],
+ [
+ "▁c",
+ "alling"
+ ],
+ [
+ "▁call",
+ "ing"
+ ],
+ [
+ "▁cal",
+ "ling"
+ ],
+ [
+ "▁?",
+ ">"
+ ],
+ [
+ "▁",
+ "?>"
+ ],
+ [
+ "▁it",
+ "er"
+ ],
+ [
+ "▁i",
+ "ter"
+ ],
+ [
+ "▁",
+ "iter"
+ ],
+ [
+ "▁u",
+ "rl"
+ ],
+ [
+ "▁ur",
+ "l"
+ ],
+ [
+ "▁",
+ "url"
+ ],
+ [
+ "▁ev",
+ "ening"
+ ],
+ [
+ "▁even",
+ "ing"
+ ],
+ [
+ "re",
+ "ek"
+ ],
+ [
+ "ree",
+ "k"
+ ],
+ [
+ "▁hon",
+ "est"
+ ],
+ [
+ "▁direct",
+ "or"
+ ],
+ [
+ "▁dire",
+ "ctor"
+ ],
+ [
+ "▁dir",
+ "ector"
+ ],
+ [
+ "R",
+ "C"
+ ],
+ [
+ "▁s",
+ "olid"
+ ],
+ [
+ "▁sol",
+ "id"
+ ],
+ [
+ "▁",
+ "solid"
+ ],
+ [
+ "▁ph",
+ "il"
+ ],
+ [
+ "ie",
+ "ne"
+ ],
+ [
+ "ien",
+ "e"
+ ],
+ [
+ "i",
+ "ene"
+ ],
+ [
+ "FA",
+ "ULT"
+ ],
+ [
+ "co",
+ "pe"
+ ],
+ [
+ "cop",
+ "e"
+ ],
+ [
+ "c",
+ "ope"
+ ],
+ [
+ "▁Hist",
+ "ory"
+ ],
+ [
+ "▁Histor",
+ "y"
+ ],
+ [
+ "▁Hi",
+ "story"
+ ],
+ [
+ "▁",
+ "History"
+ ],
+ [
+ "▁Te",
+ "am"
+ ],
+ [
+ "▁",
+ "Team"
+ ],
+ [
+ "ree",
+ "dom"
+ ],
+ [
+ "reed",
+ "om"
+ ],
+ [
+ "▁r",
+ "u"
+ ],
+ [
+ "▁",
+ "ru"
+ ],
+ [
+ "U",
+ "B"
+ ],
+ [
+ "▁w",
+ "orse"
+ ],
+ [
+ "▁wor",
+ "se"
+ ],
+ [
+ "im",
+ "o"
+ ],
+ [
+ "i",
+ "mo"
+ ],
+ [
+ "Ma",
+ "t"
+ ],
+ [
+ "M",
+ "at"
+ ],
+ [
+ "▁M",
+ "ex"
+ ],
+ [
+ "▁Me",
+ "x"
+ ],
+ [
+ "ac",
+ "tor"
+ ],
+ [
+ "act",
+ "or"
+ ],
+ [
+ "a",
+ "ctor"
+ ],
+ [
+ "▁v",
+ "or"
+ ],
+ [
+ "▁vo",
+ "r"
+ ],
+ [
+ "▁",
+ "vor"
+ ],
+ [
+ "ть",
+ "ся"
+ ],
+ [
+ "▁exper",
+ "iment"
+ ],
+ [
+ "▁experi",
+ "ment"
+ ],
+ [
+ "▁P",
+ "lay"
+ ],
+ [
+ "▁Pl",
+ "ay"
+ ],
+ [
+ "▁",
+ "Play"
+ ],
+ [
+ "▁An",
+ "other"
+ ],
+ [
+ "▁happ",
+ "ens"
+ ],
+ [
+ "▁happen",
+ "s"
+ ],
+ [
+ "ua",
+ "n"
+ ],
+ [
+ "u",
+ "an"
+ ],
+ [
+ "▁pat",
+ "ients"
+ ],
+ [
+ "▁patient",
+ "s"
+ ],
+ [
+ "▁re",
+ "nd"
+ ],
+ [
+ "▁r",
+ "end"
+ ],
+ [
+ "▁ren",
+ "d"
+ ],
+ [
+ "▁",
+ "rend"
+ ],
+ [
+ "▁M",
+ "o"
+ ],
+ [
+ "▁",
+ "Mo"
+ ],
+ [
+ "▁T",
+ "ex"
+ ],
+ [
+ "▁Te",
+ "x"
+ ],
+ [
+ "▁",
+ "Tex"
+ ],
+ [
+ "▁w",
+ "ed"
+ ],
+ [
+ "▁we",
+ "d"
+ ],
+ [
+ "▁",
+ "wed"
+ ],
+ [
+ "t",
+ "n"
+ ],
+ [
+ "in",
+ "sert"
+ ],
+ [
+ "ins",
+ "ert"
+ ],
+ [
+ "▁п",
+ "а"
+ ],
+ [
+ "▁",
+ "па"
+ ],
+ [
+ "▁an",
+ "ti"
+ ],
+ [
+ "▁ant",
+ "i"
+ ],
+ [
+ "▁",
+ "anti"
+ ],
+ [
+ "Mat",
+ "ch"
+ ],
+ [
+ "M",
+ "atch"
+ ],
+ [
+ "ampions",
+ "hip"
+ ],
+ [
+ "ampion",
+ "ship"
+ ],
+ [
+ "▁for",
+ "ces"
+ ],
+ [
+ "▁force",
+ "s"
+ ],
+ [
+ "▁H",
+ "ot"
+ ],
+ [
+ "▁Ho",
+ "t"
+ ],
+ [
+ "▁",
+ "Hot"
+ ],
+ [
+ "▁ph",
+ "ase"
+ ],
+ [
+ "▁",
+ "phase"
+ ],
+ [
+ "▁t",
+ "emplate"
+ ],
+ [
+ "▁templ",
+ "ate"
+ ],
+ [
+ "▁temp",
+ "late"
+ ],
+ [
+ "▁",
+ "template"
+ ],
+ [
+ "st",
+ "op"
+ ],
+ [
+ "sto",
+ "p"
+ ],
+ [
+ "s",
+ "top"
+ ],
+ [
+ "ic",
+ "ated"
+ ],
+ [
+ "ica",
+ "ted"
+ ],
+ [
+ "icate",
+ "d"
+ ],
+ [
+ "▁man",
+ "aged"
+ ],
+ [
+ "▁manage",
+ "d"
+ ],
+ [
+ "▁",
+ "managed"
+ ],
+ [
+ "wa",
+ "it"
+ ],
+ [
+ "w",
+ "ait"
+ ],
+ [
+ "▁*",
+ "("
+ ],
+ [
+ "▁",
+ "*("
+ ],
+ [
+ "G",
+ "B"
+ ],
+ [
+ "▁app",
+ "oint"
+ ],
+ [
+ "▁ap",
+ "point"
+ ],
+ [
+ "▁",
+ "appoint"
+ ],
+ [
+ "ł",
+ "a"
+ ],
+ [
+ "▁s",
+ "tick"
+ ],
+ [
+ "▁st",
+ "ick"
+ ],
+ [
+ "▁",
+ "stick"
+ ],
+ [
+ "▁F",
+ "OR"
+ ],
+ [
+ "▁FO",
+ "R"
+ ],
+ [
+ "▁",
+ "FOR"
+ ],
+ [
+ "▁V",
+ "is"
+ ],
+ [
+ "▁Vi",
+ "s"
+ ],
+ [
+ "▁",
+ "Vis"
+ ],
+ [
+ "to",
+ "r"
+ ],
+ [
+ "t",
+ "or"
+ ],
+ [
+ "▁p",
+ "ř"
+ ],
+ [
+ "qu",
+ "est"
+ ],
+ [
+ "que",
+ "st"
+ ],
+ [
+ "ques",
+ "t"
+ ],
+ [
+ "q",
+ "uest"
+ ],
+ [
+ "us",
+ "es"
+ ],
+ [
+ "use",
+ "s"
+ ],
+ [
+ "u",
+ "ses"
+ ],
+ [
+ "\");",
+ "\r"
+ ],
+ [
+ "\")",
+ ";\r"
+ ],
+ [
+ "\"",
+ ");\r"
+ ],
+ [
+ "▁sudden",
+ "ly"
+ ],
+ [
+ "▁sud",
+ "denly"
+ ],
+ [
+ "é",
+ "c"
+ ],
+ [
+ "N",
+ "D"
+ ],
+ [
+ "ur",
+ "op"
+ ],
+ [
+ "uro",
+ "p"
+ ],
+ [
+ "u",
+ "rop"
+ ],
+ [
+ "ре",
+ "д"
+ ],
+ [
+ "▁ins",
+ "urance"
+ ],
+ [
+ "ac",
+ "cess"
+ ],
+ [
+ "acc",
+ "ess"
+ ],
+ [
+ "a",
+ "ccess"
+ ],
+ [
+ "un",
+ "finished"
+ ],
+ [
+ "▁t",
+ "amb"
+ ],
+ [
+ "▁ta",
+ "mb"
+ ],
+ [
+ "▁tam",
+ "b"
+ ],
+ [
+ "▁s",
+ "ac"
+ ],
+ [
+ "▁sa",
+ "c"
+ ],
+ [
+ "▁C",
+ "ourt"
+ ],
+ [
+ "▁Co",
+ "urt"
+ ],
+ [
+ "▁Cour",
+ "t"
+ ],
+ [
+ "▁Cou",
+ "rt"
+ ],
+ [
+ "▁miss",
+ "ing"
+ ],
+ [
+ "▁mis",
+ "sing"
+ ],
+ [
+ "▁",
+ "missing"
+ ],
+ [
+ "▁W",
+ "here"
+ ],
+ [
+ "▁Wh",
+ "ere"
+ ],
+ [
+ "▁Whe",
+ "re"
+ ],
+ [
+ "▁",
+ "Where"
+ ],
+ [
+ "▁S",
+ "um"
+ ],
+ [
+ "▁Su",
+ "m"
+ ],
+ [
+ "▁",
+ "Sum"
+ ],
+ [
+ "}^",
+ "{\\"
+ ],
+ [
+ "}^{",
+ "\\"
+ ],
+ [
+ "}",
+ "^{\\"
+ ],
+ [
+ "▁s",
+ "ua"
+ ],
+ [
+ "▁su",
+ "a"
+ ],
+ [
+ "_",
+ ","
+ ],
+ [
+ "▁th",
+ "ick"
+ ],
+ [
+ "▁Tr",
+ "ump"
+ ],
+ [
+ "▁Tru",
+ "mp"
+ ],
+ [
+ "▁oper",
+ "ations"
+ ],
+ [
+ "▁operation",
+ "s"
+ ],
+ [
+ "▁",
+ "operations"
+ ],
+ [
+ "F",
+ "S"
+ ],
+ [
+ "▁de",
+ "ux"
+ ],
+ [
+ "d",
+ "z"
+ ],
+ [
+ "Temp",
+ "late"
+ ],
+ [
+ "T",
+ "emplate"
+ ],
+ [
+ "▁\"",
+ "/"
+ ],
+ [
+ "▁o",
+ "dd"
+ ],
+ [
+ "▁od",
+ "d"
+ ],
+ [
+ "▁",
+ "odd"
+ ],
+ [
+ "▁re",
+ "ality"
+ ],
+ [
+ "▁real",
+ "ity"
+ ],
+ [
+ "▁te",
+ "ams"
+ ],
+ [
+ "▁team",
+ "s"
+ ],
+ [
+ "▁tea",
+ "ms"
+ ],
+ [
+ "▁c",
+ "er"
+ ],
+ [
+ "▁ce",
+ "r"
+ ],
+ [
+ "▁",
+ "cer"
+ ],
+ [
+ "om",
+ "a"
+ ],
+ [
+ "o",
+ "ma"
+ ],
+ [
+ "▁",
+ "și"
+ ],
+ [
+ "▁cl",
+ "oud"
+ ],
+ [
+ "▁clo",
+ "ud"
+ ],
+ [
+ "▁",
+ "cloud"
+ ],
+ [
+ "▁Dep",
+ "artment"
+ ],
+ [
+ "N",
+ "e"
+ ],
+ [
+ "▁requ",
+ "ires"
+ ],
+ [
+ "▁require",
+ "s"
+ ],
+ [
+ "it",
+ "ems"
+ ],
+ [
+ "ite",
+ "ms"
+ ],
+ [
+ "item",
+ "s"
+ ],
+ [
+ "▁I",
+ "II"
+ ],
+ [
+ "▁II",
+ "I"
+ ],
+ [
+ "▁",
+ "III"
+ ],
+ [
+ "right",
+ "arrow"
+ ],
+ [
+ ")-",
+ ">"
+ ],
+ [
+ ")",
+ "->"
+ ],
+ [
+ "▁w",
+ "riter"
+ ],
+ [
+ "▁wr",
+ "iter"
+ ],
+ [
+ "▁writ",
+ "er"
+ ],
+ [
+ "▁write",
+ "r"
+ ],
+ [
+ "▁",
+ "writer"
+ ],
+ [
+ "re",
+ "place"
+ ],
+ [
+ "rep",
+ "lace"
+ ],
+ [
+ "▁t",
+ "hr"
+ ],
+ [
+ "▁th",
+ "r"
+ ],
+ [
+ "je",
+ "n"
+ ],
+ [
+ "j",
+ "en"
+ ],
+ [
+ "▁o",
+ "t"
+ ],
+ [
+ "▁",
+ "ot"
+ ],
+ [
+ "▁occ",
+ "up"
+ ],
+ [
+ "▁oc",
+ "cup"
+ ],
+ [
+ "▁",
+ "occup"
+ ],
+ [
+ "▁event",
+ "ually"
+ ],
+ [
+ "▁M",
+ "ath"
+ ],
+ [
+ "▁Mat",
+ "h"
+ ],
+ [
+ "▁Ma",
+ "th"
+ ],
+ [
+ "▁",
+ "Math"
+ ],
+ [
+ "▁con",
+ "serv"
+ ],
+ [
+ "▁cons",
+ "erv"
+ ],
+ [
+ "▁conse",
+ "rv"
+ ],
+ [
+ "am",
+ "er"
+ ],
+ [
+ "ame",
+ "r"
+ ],
+ [
+ "a",
+ "mer"
+ ],
+ [
+ "▁F",
+ "ort"
+ ],
+ [
+ "▁For",
+ "t"
+ ],
+ [
+ "▁Fo",
+ "rt"
+ ],
+ [
+ "▁d",
+ "ry"
+ ],
+ [
+ "▁dr",
+ "y"
+ ],
+ [
+ "▁sex",
+ "ual"
+ ],
+ [
+ "▁co",
+ "sts"
+ ],
+ [
+ "▁cost",
+ "s"
+ ],
+ [
+ "▁cos",
+ "ts"
+ ],
+ [
+ "▁for",
+ "ms"
+ ],
+ [
+ "▁form",
+ "s"
+ ],
+ [
+ "▁",
+ "forms"
+ ],
+ [
+ "▁V",
+ "ict"
+ ],
+ [
+ "▁Vi",
+ "ct"
+ ],
+ [
+ "▁Vic",
+ "t"
+ ],
+ [
+ "PA",
+ "R"
+ ],
+ [
+ "P",
+ "AR"
+ ],
+ [
+ "frame",
+ "work"
+ ],
+ [
+ "▁д",
+ "и"
+ ],
+ [
+ "▁",
+ "ди"
+ ],
+ [
+ "Oper",
+ "ation"
+ ],
+ [
+ "з",
+ "на"
+ ],
+ [
+ "wh",
+ "ich"
+ ],
+ [
+ "▁t",
+ "ight"
+ ],
+ [
+ "▁ti",
+ "ght"
+ ],
+ [
+ "In",
+ "valid"
+ ],
+ [
+ "▁part",
+ "ner"
+ ],
+ [
+ "▁п",
+ "ред"
+ ],
+ [
+ "▁пре",
+ "д"
+ ],
+ [
+ "▁th",
+ "ank"
+ ],
+ [
+ "▁than",
+ "k"
+ ],
+ [
+ "▁gu",
+ "ard"
+ ],
+ [
+ "▁",
+ "guard"
+ ],
+ [
+ "he",
+ "m"
+ ],
+ [
+ "h",
+ "em"
+ ],
+ [
+ "Bo",
+ "dy"
+ ],
+ [
+ "B",
+ "ody"
+ ],
+ [
+ "▁e",
+ "mot"
+ ],
+ [
+ "▁em",
+ "ot"
+ ],
+ [
+ "I",
+ "X"
+ ],
+ [
+ "fa",
+ "st"
+ ],
+ [
+ "fas",
+ "t"
+ ],
+ [
+ "f",
+ "ast"
+ ],
+ [
+ "щ",
+ "о"
+ ],
+ [
+ "ñ",
+ "o"
+ ],
+ [
+ "ni",
+ "ght"
+ ],
+ [
+ "n",
+ "ight"
+ ],
+ [
+ "▁S",
+ "ci"
+ ],
+ [
+ "▁Sc",
+ "i"
+ ],
+ [
+ "ни",
+ "ка"
+ ],
+ [
+ "ник",
+ "а"
+ ],
+ [
+ "▁T",
+ "O"
+ ],
+ [
+ "▁",
+ "TO"
+ ],
+ [
+ "▁individ",
+ "uals"
+ ],
+ [
+ "▁individual",
+ "s"
+ ],
+ [
+ "сс",
+ "и"
+ ],
+ [
+ "с",
+ "си"
+ ],
+ [
+ "})",
+ ","
+ ],
+ [
+ "}",
+ "),"
+ ],
+ [
+ "F",
+ "alse"
+ ],
+ [
+ "(\"",
+ "%"
+ ],
+ [
+ "(",
+ "\"%"
+ ],
+ [
+ "▁op",
+ "tim"
+ ],
+ [
+ "▁opt",
+ "im"
+ ],
+ [
+ "▁",
+ "optim"
+ ],
+ [
+ "▁-",
+ "->"
+ ],
+ [
+ "▁--",
+ ">"
+ ],
+ [
+ "▁",
+ "-->"
+ ],
+ [
+ "▁f",
+ "actor"
+ ],
+ [
+ "▁fact",
+ "or"
+ ],
+ [
+ "▁fac",
+ "tor"
+ ],
+ [
+ "▁fa",
+ "ctor"
+ ],
+ [
+ "▁",
+ "factor"
+ ],
+ [
+ "▁sm",
+ "aller"
+ ],
+ [
+ "▁small",
+ "er"
+ ],
+ [
+ "▁con",
+ "tain"
+ ],
+ [
+ "▁cont",
+ "ain"
+ ],
+ [
+ "sp",
+ "ect"
+ ],
+ [
+ "spec",
+ "t"
+ ],
+ [
+ "spe",
+ "ct"
+ ],
+ [
+ "s",
+ "pect"
+ ],
+ [
+ "Eng",
+ "ine"
+ ],
+ [
+ "▁ann",
+ "ounced"
+ ],
+ [
+ "▁announ",
+ "ced"
+ ],
+ [
+ "▁announce",
+ "d"
+ ],
+ [
+ "▁Dem",
+ "ocr"
+ ],
+ [
+ "▁r",
+ "ob"
+ ],
+ [
+ "▁ro",
+ "b"
+ ],
+ [
+ "▁",
+ "rob"
+ ],
+ [
+ "▁f",
+ "lat"
+ ],
+ [
+ "▁fl",
+ "at"
+ ],
+ [
+ "▁",
+ "flat"
+ ],
+ [
+ "os",
+ "oph"
+ ],
+ [
+ "oso",
+ "ph"
+ ],
+ [
+ "Se",
+ "arch"
+ ],
+ [
+ "S",
+ "earch"
+ ],
+ [
+ "ah",
+ "l"
+ ],
+ [
+ "a",
+ "hl"
+ ],
+ [
+ "▁Ex",
+ "ception"
+ ],
+ [
+ "▁Except",
+ "ion"
+ ],
+ [
+ "▁",
+ "Exception"
+ ],
+ [
+ "▁O",
+ "l"
+ ],
+ [
+ "equ",
+ "als"
+ ],
+ [
+ "eq",
+ "uals"
+ ],
+ [
+ "equal",
+ "s"
+ ],
+ [
+ "▁un",
+ "ter"
+ ],
+ [
+ "▁unt",
+ "er"
+ ],
+ [
+ "▁",
+ "unter"
+ ],
+ [
+ "sh",
+ "ape"
+ ],
+ [
+ "sha",
+ "pe"
+ ],
+ [
+ "N",
+ "S"
+ ],
+ [
+ "Ob",
+ "j"
+ ],
+ [
+ "▁spec",
+ "ies"
+ ],
+ [
+ "▁spe",
+ "cies"
+ ],
+ [
+ "we",
+ "ight"
+ ],
+ [
+ "wei",
+ "ght"
+ ],
+ [
+ "w",
+ "eight"
+ ],
+ [
+ "yo",
+ "u"
+ ],
+ [
+ "y",
+ "ou"
+ ],
+ [
+ "▁e",
+ "ste"
+ ],
+ [
+ "▁est",
+ "e"
+ ],
+ [
+ "▁es",
+ "te"
+ ],
+ [
+ "▁",
+ "este"
+ ],
+ [
+ "▁V",
+ "iew"
+ ],
+ [
+ "▁Vi",
+ "ew"
+ ],
+ [
+ "▁",
+ "View"
+ ],
+ [
+ "▁m",
+ "ission"
+ ],
+ [
+ "▁miss",
+ "ion"
+ ],
+ [
+ "▁",
+ "mission"
+ ],
+ [
+ "▁j",
+ "ournal"
+ ],
+ [
+ "▁jour",
+ "nal"
+ ],
+ [
+ "▁",
+ "journal"
+ ],
+ [
+ "Value",
+ "s"
+ ],
+ [
+ "Val",
+ "ues"
+ ],
+ [
+ "▁ein",
+ "em"
+ ],
+ [
+ "▁eine",
+ "m"
+ ],
+ [
+ "is",
+ "mo"
+ ],
+ [
+ "ism",
+ "o"
+ ],
+ [
+ "▁project",
+ "s"
+ ],
+ [
+ "▁",
+ "projects"
+ ],
+ [
+ "▁D",
+ "as"
+ ],
+ [
+ "▁Da",
+ "s"
+ ],
+ [
+ "ri",
+ "ble"
+ ],
+ [
+ "rib",
+ "le"
+ ],
+ [
+ "r",
+ "ible"
+ ],
+ [
+ "▁s",
+ "erve"
+ ],
+ [
+ "▁ser",
+ "ve"
+ ],
+ [
+ "▁serv",
+ "e"
+ ],
+ [
+ "▁",
+ "serve"
+ ],
+ [
+ "▁op",
+ "ening"
+ ],
+ [
+ "▁open",
+ "ing"
+ ],
+ [
+ "▁h",
+ "ur"
+ ],
+ [
+ "▁program",
+ "s"
+ ],
+ [
+ "▁U",
+ "SA"
+ ],
+ [
+ "▁US",
+ "A"
+ ],
+ [
+ "▁",
+ "USA"
+ ],
+ [
+ "il",
+ "iar"
+ ],
+ [
+ "ili",
+ "ar"
+ ],
+ [
+ "ilia",
+ "r"
+ ],
+ [
+ "id",
+ "os"
+ ],
+ [
+ "ido",
+ "s"
+ ],
+ [
+ "B",
+ "r"
+ ],
+ [
+ "est",
+ "amp"
+ ],
+ [
+ "esta",
+ "mp"
+ ],
+ [
+ "▁t",
+ "ools"
+ ],
+ [
+ "▁to",
+ "ols"
+ ],
+ [
+ "▁too",
+ "ls"
+ ],
+ [
+ "▁tool",
+ "s"
+ ],
+ [
+ "▁",
+ "tools"
+ ],
+ [
+ "an",
+ "ner"
+ ],
+ [
+ "ann",
+ "er"
+ ],
+ [
+ "anne",
+ "r"
+ ],
+ [
+ "R",
+ "T"
+ ],
+ [
+ "▁St",
+ "art"
+ ],
+ [
+ "▁Star",
+ "t"
+ ],
+ [
+ "▁Sta",
+ "rt"
+ ],
+ [
+ "▁",
+ "Start"
+ ],
+ [
+ "▁b",
+ "ath"
+ ],
+ [
+ "▁bat",
+ "h"
+ ],
+ [
+ "▁ba",
+ "th"
+ ],
+ [
+ "▁coff",
+ "ee"
+ ],
+ [
+ "or",
+ "ter"
+ ],
+ [
+ "ort",
+ "er"
+ ],
+ [
+ "orte",
+ "r"
+ ],
+ [
+ "in",
+ "ternal"
+ ],
+ [
+ "inter",
+ "nal"
+ ],
+ [
+ "intern",
+ "al"
+ ],
+ [
+ "file",
+ "s"
+ ],
+ [
+ "fil",
+ "es"
+ ],
+ [
+ "fi",
+ "les"
+ ],
+ [
+ "f",
+ "iles"
+ ],
+ [
+ "IN",
+ "VAL"
+ ],
+ [
+ "ak",
+ "o"
+ ],
+ [
+ "a",
+ "ko"
+ ],
+ [
+ "d",
+ "t"
+ ],
+ [
+ "▁Se",
+ "cond"
+ ],
+ [
+ "▁Sec",
+ "ond"
+ ],
+ [
+ "▁",
+ "Second"
+ ],
+ [
+ "▁al",
+ "loc"
+ ],
+ [
+ "▁all",
+ "oc"
+ ],
+ [
+ "▁",
+ "alloc"
+ ],
+ [
+ "▁en",
+ "ded"
+ ],
+ [
+ "▁end",
+ "ed"
+ ],
+ [
+ "▁ende",
+ "d"
+ ],
+ [
+ "▁",
+ "ended"
+ ],
+ [
+ "ac",
+ "ional"
+ ],
+ [
+ "aci",
+ "onal"
+ ],
+ [
+ "acion",
+ "al"
+ ],
+ [
+ "acio",
+ "nal"
+ ],
+ [
+ "▁man",
+ "ager"
+ ],
+ [
+ "▁manage",
+ "r"
+ ],
+ [
+ "▁",
+ "manager"
+ ],
+ [
+ "▁S",
+ "un"
+ ],
+ [
+ "▁Su",
+ "n"
+ ],
+ [
+ "▁",
+ "Sun"
+ ],
+ [
+ "ag",
+ "g"
+ ],
+ [
+ "a",
+ "gg"
+ ],
+ [
+ "▁le",
+ "ader"
+ ],
+ [
+ "▁lead",
+ "er"
+ ],
+ [
+ "ol",
+ "ved"
+ ],
+ [
+ "olve",
+ "d"
+ ],
+ [
+ "olv",
+ "ed"
+ ],
+ [
+ "▁ч",
+ "то"
+ ],
+ [
+ "▁trad",
+ "itional"
+ ],
+ [
+ "▁tradition",
+ "al"
+ ],
+ [
+ "sh",
+ "ot"
+ ],
+ [
+ "s",
+ "hot"
+ ],
+ [
+ "ru",
+ "p"
+ ],
+ [
+ "r",
+ "up"
+ ],
+ [
+ "C",
+ "F"
+ ],
+ [
+ "▁E",
+ "ach"
+ ],
+ [
+ "▁",
+ "Each"
+ ],
+ [
+ "w",
+ "r"
+ ],
+ [
+ "▁S",
+ "om"
+ ],
+ [
+ "▁So",
+ "m"
+ ],
+ [
+ "▁",
+ "Som"
+ ],
+ [
+ "▁material",
+ "s"
+ ],
+ [
+ "▁mater",
+ "ials"
+ ],
+ [
+ "▁m",
+ "sg"
+ ],
+ [
+ "▁ms",
+ "g"
+ ],
+ [
+ "▁",
+ "msg"
+ ],
+ [
+ "▁s",
+ "yn"
+ ],
+ [
+ "▁sy",
+ "n"
+ ],
+ [
+ "▁",
+ "syn"
+ ],
+ [
+ "▁produ",
+ "ce"
+ ],
+ [
+ "▁prod",
+ "uce"
+ ],
+ [
+ "▁st",
+ "orage"
+ ],
+ [
+ "▁stor",
+ "age"
+ ],
+ [
+ "▁sto",
+ "rage"
+ ],
+ [
+ "▁",
+ "storage"
+ ],
+ [
+ "sub",
+ "section"
+ ],
+ [
+ "▁S",
+ "ie"
+ ],
+ [
+ "▁Si",
+ "e"
+ ],
+ [
+ "▁I",
+ "P"
+ ],
+ [
+ "▁",
+ "IP"
+ ],
+ [
+ "CE",
+ "SS"
+ ],
+ [
+ "▁w",
+ "a"
+ ],
+ [
+ "▁",
+ "wa"
+ ],
+ [
+ "Re",
+ "cord"
+ ],
+ [
+ "Rec",
+ "ord"
+ ],
+ [
+ "▁mark",
+ "eting"
+ ],
+ [
+ "▁market",
+ "ing"
+ ],
+ [
+ "pl",
+ "et"
+ ],
+ [
+ "ple",
+ "t"
+ ],
+ [
+ "p",
+ "let"
+ ],
+ [
+ "D",
+ "ialog"
+ ],
+ [
+ "▁mention",
+ "ed"
+ ],
+ [
+ "▁ment",
+ "ioned"
+ ],
+ [
+ "▁N",
+ "a"
+ ],
+ [
+ "▁",
+ "Na"
+ ],
+ [
+ "▁Un",
+ "ion"
+ ],
+ [
+ "▁",
+ "Union"
+ ],
+ [
+ "▁A",
+ "PI"
+ ],
+ [
+ "▁AP",
+ "I"
+ ],
+ [
+ "▁",
+ "API"
+ ],
+ [
+ "▁neg",
+ "ative"
+ ],
+ [
+ "▁",
+ "negative"
+ ],
+ [
+ "tx",
+ "t"
+ ],
+ [
+ "t",
+ "xt"
+ ],
+ [
+ "▁eas",
+ "ier"
+ ],
+ [
+ "le",
+ "gal"
+ ],
+ [
+ "leg",
+ "al"
+ ],
+ [
+ "De",
+ "p"
+ ],
+ [
+ "D",
+ "ep"
+ ],
+ [
+ "▁no",
+ "vel"
+ ],
+ [
+ "▁nov",
+ "el"
+ ],
+ [
+ "▁nove",
+ "l"
+ ],
+ [
+ "eu",
+ "r"
+ ],
+ [
+ "e",
+ "ur"
+ ],
+ [
+ "ac",
+ "ió"
+ ],
+ [
+ "aci",
+ "ó"
+ ],
+ [
+ "a",
+ "ció"
+ ],
+ [
+ "▁B",
+ "ud"
+ ],
+ [
+ "▁Bu",
+ "d"
+ ],
+ [
+ "▁c",
+ "arry"
+ ],
+ [
+ "▁car",
+ "ry"
+ ],
+ [
+ "sch",
+ "aft"
+ ],
+ [
+ "s",
+ "chaft"
+ ],
+ [
+ "▁br",
+ "oken"
+ ],
+ [
+ "▁bro",
+ "ken"
+ ],
+ [
+ "▁broke",
+ "n"
+ ],
+ [
+ "▁t",
+ "rees"
+ ],
+ [
+ "▁tr",
+ "ees"
+ ],
+ [
+ "▁tre",
+ "es"
+ ],
+ [
+ "▁tree",
+ "s"
+ ],
+ [
+ ">(",
+ ");"
+ ],
+ [
+ ">()",
+ ";"
+ ],
+ [
+ ">",
+ "();"
+ ],
+ [
+ "▁e",
+ "mb"
+ ],
+ [
+ "▁em",
+ "b"
+ ],
+ [
+ "▁",
+ "emb"
+ ],
+ [
+ "ie",
+ "der"
+ ],
+ [
+ "ied",
+ "er"
+ ],
+ [
+ "i",
+ "eder"
+ ],
+ [
+ "▁r",
+ "oute"
+ ],
+ [
+ "▁ro",
+ "ute"
+ ],
+ [
+ "▁rout",
+ "e"
+ ],
+ [
+ "▁rou",
+ "te"
+ ],
+ [
+ "▁",
+ "route"
+ ],
+ [
+ "ik",
+ "el"
+ ],
+ [
+ "ike",
+ "l"
+ ],
+ [
+ "i",
+ "kel"
+ ],
+ [
+ "▁l",
+ "isten"
+ ],
+ [
+ "▁li",
+ "sten"
+ ],
+ [
+ "▁list",
+ "en"
+ ],
+ [
+ "▁",
+ "listen"
+ ],
+ [
+ "ash",
+ "ion"
+ ],
+ [
+ "ashi",
+ "on"
+ ],
+ [
+ "▁M",
+ "rs"
+ ],
+ [
+ "▁Mr",
+ "s"
+ ],
+ [
+ "▁equip",
+ "ment"
+ ],
+ [
+ "ag",
+ "ger"
+ ],
+ [
+ "agg",
+ "er"
+ ],
+ [
+ "▁T",
+ "hus"
+ ],
+ [
+ "▁Th",
+ "us"
+ ],
+ [
+ "▁mat",
+ "rix"
+ ],
+ [
+ "▁",
+ "matrix"
+ ],
+ [
+ "al",
+ "la"
+ ],
+ [
+ "all",
+ "a"
+ ],
+ [
+ "a",
+ "lla"
+ ],
+ [
+ "▁T",
+ "our"
+ ],
+ [
+ "▁To",
+ "ur"
+ ],
+ [
+ "▁con",
+ "versation"
+ ],
+ [
+ "▁convers",
+ "ation"
+ ],
+ [
+ "Mo",
+ "n"
+ ],
+ [
+ "M",
+ "on"
+ ],
+ [
+ "our",
+ "nal"
+ ],
+ [
+ "▁min",
+ "ute"
+ ],
+ [
+ "▁minut",
+ "e"
+ ],
+ [
+ "▁",
+ "minute"
+ ],
+ [
+ "A",
+ "m"
+ ],
+ [
+ "Ap",
+ "i"
+ ],
+ [
+ "A",
+ "pi"
+ ],
+ [
+ "▁for",
+ "get"
+ ],
+ [
+ "▁forg",
+ "et"
+ ],
+ [
+ "M",
+ "e"
+ ],
+ [
+ "lev",
+ "ant"
+ ],
+ [
+ "te",
+ "mp"
+ ],
+ [
+ "tem",
+ "p"
+ ],
+ [
+ "t",
+ "emp"
+ ],
+ [
+ "▁t",
+ "elling"
+ ],
+ [
+ "▁tell",
+ "ing"
+ ],
+ [
+ "▁tel",
+ "ling"
+ ],
+ [
+ "mo",
+ "ve"
+ ],
+ [
+ "mov",
+ "e"
+ ],
+ [
+ "m",
+ "ove"
+ ],
+ [
+ "▁in",
+ "dependent"
+ ],
+ [
+ "▁independ",
+ "ent"
+ ],
+ [
+ "to",
+ "String"
+ ],
+ [
+ "ed",
+ "it"
+ ],
+ [
+ "edi",
+ "t"
+ ],
+ [
+ "e",
+ "dit"
+ ],
+ [
+ "▁J",
+ "ac"
+ ],
+ [
+ "▁Ja",
+ "c"
+ ],
+ [
+ "az",
+ "z"
+ ],
+ [
+ "a",
+ "zz"
+ ],
+ [
+ "re",
+ "act"
+ ],
+ [
+ "rea",
+ "ct"
+ ],
+ [
+ "▁c",
+ "in"
+ ],
+ [
+ "▁ci",
+ "n"
+ ],
+ [
+ "▁",
+ "cin"
+ ],
+ [
+ "▁P",
+ "rov"
+ ],
+ [
+ "▁Pro",
+ "v"
+ ],
+ [
+ "▁Pr",
+ "ov"
+ ],
+ [
+ "▁",
+ "Prov"
+ ],
+ [
+ "is",
+ "ted"
+ ],
+ [
+ "ist",
+ "ed"
+ ],
+ [
+ "iste",
+ "d"
+ ],
+ [
+ "i",
+ "sted"
+ ],
+ [
+ "▁h",
+ "ash"
+ ],
+ [
+ "▁has",
+ "h"
+ ],
+ [
+ "▁ha",
+ "sh"
+ ],
+ [
+ "▁",
+ "hash"
+ ],
+ [
+ "on",
+ "na"
+ ],
+ [
+ "ik",
+ "i"
+ ],
+ [
+ "i",
+ "ki"
+ ],
+ [
+ "▁gener",
+ "ated"
+ ],
+ [
+ "▁generate",
+ "d"
+ ],
+ [
+ "▁gene",
+ "rated"
+ ],
+ [
+ "▁",
+ "generated"
+ ],
+ [
+ "Re",
+ "nder"
+ ],
+ [
+ "Rend",
+ "er"
+ ],
+ [
+ "R",
+ "ender"
+ ],
+ [
+ "▁psy",
+ "ch"
+ ],
+ [
+ "▁ps",
+ "ych"
+ ],
+ [
+ "na",
+ "v"
+ ],
+ [
+ "n",
+ "av"
+ ],
+ [
+ "▁en",
+ "tr"
+ ],
+ [
+ "▁ent",
+ "r"
+ ],
+ [
+ "▁",
+ "entr"
+ ],
+ [
+ "п",
+ "ра"
+ ],
+ [
+ "r",
+ "x"
+ ],
+ [
+ "AT",
+ "H"
+ ],
+ [
+ "A",
+ "TH"
+ ],
+ [
+ "▁ass",
+ "ume"
+ ],
+ [
+ "▁assum",
+ "e"
+ ],
+ [
+ "Tr",
+ "ee"
+ ],
+ [
+ "T",
+ "ree"
+ ],
+ [
+ "semb",
+ "ly"
+ ],
+ [
+ "sembl",
+ "y"
+ ],
+ [
+ "▁M",
+ "att"
+ ],
+ [
+ "▁Mat",
+ "t"
+ ],
+ [
+ "▁Ma",
+ "tt"
+ ],
+ [
+ "ca",
+ "ption"
+ ],
+ [
+ "c",
+ "aption"
+ ],
+ [
+ "▁s",
+ "olutions"
+ ],
+ [
+ "▁solution",
+ "s"
+ ],
+ [
+ "▁fa",
+ "ith"
+ ],
+ [
+ "▁fait",
+ "h"
+ ],
+ [
+ "▁dig",
+ "ital"
+ ],
+ [
+ "▁digit",
+ "al"
+ ],
+ [
+ "▁ex",
+ "cell"
+ ],
+ [
+ "▁exc",
+ "ell"
+ ],
+ [
+ "▁V",
+ "ersion"
+ ],
+ [
+ "▁Vers",
+ "ion"
+ ],
+ [
+ "▁",
+ "Version"
+ ],
+ [
+ "De",
+ "bug"
+ ],
+ [
+ "D",
+ "ebug"
+ ],
+ [
+ "▁ж",
+ "и"
+ ],
+ [
+ "▁",
+ "жи"
+ ],
+ [
+ "▁car",
+ "ried"
+ ],
+ [
+ "re",
+ "set"
+ ],
+ [
+ "res",
+ "et"
+ ],
+ [
+ "▁slow",
+ "ly"
+ ],
+ [
+ "an",
+ "cing"
+ ],
+ [
+ "anc",
+ "ing"
+ ],
+ [
+ "▁own",
+ "er"
+ ],
+ [
+ "▁",
+ "owner"
+ ],
+ [
+ "▁T",
+ "er"
+ ],
+ [
+ "▁Te",
+ "r"
+ ],
+ [
+ "▁D",
+ "id"
+ ],
+ [
+ "▁Di",
+ "d"
+ ],
+ [
+ "▁",
+ "Did"
+ ],
+ [
+ "▁g",
+ "est"
+ ],
+ [
+ "▁ge",
+ "st"
+ ],
+ [
+ "▁ges",
+ "t"
+ ],
+ [
+ "▁",
+ "gest"
+ ],
+ [
+ "▁é",
+ "té"
+ ],
+ [
+ "▁ét",
+ "é"
+ ],
+ [
+ "▁",
+ "été"
+ ],
+ [
+ "▁pro",
+ "of"
+ ],
+ [
+ "▁",
+ "proof"
+ ],
+ [
+ "F",
+ "ont"
+ ],
+ [
+ "▁n",
+ "ob"
+ ],
+ [
+ "▁no",
+ "b"
+ ],
+ [
+ "▁",
+ "nob"
+ ],
+ [
+ "C",
+ "o"
+ ],
+ [
+ "▁G",
+ "NU"
+ ],
+ [
+ "▁l",
+ "iber"
+ ],
+ [
+ "▁li",
+ "ber"
+ ],
+ [
+ "▁lib",
+ "er"
+ ],
+ [
+ "it",
+ "ness"
+ ],
+ [
+ "▁h",
+ "ij"
+ ],
+ [
+ "▁hi",
+ "j"
+ ],
+ [
+ "▁v",
+ "ert"
+ ],
+ [
+ "▁ver",
+ "t"
+ ],
+ [
+ "▁ve",
+ "rt"
+ ],
+ [
+ "▁",
+ "vert"
+ ],
+ [
+ "ш",
+ "а"
+ ],
+ [
+ "FL",
+ "AG"
+ ],
+ [
+ "ME",
+ "NT"
+ ],
+ [
+ "M",
+ "ENT"
+ ],
+ [
+ "▁S",
+ "on"
+ ],
+ [
+ "▁So",
+ "n"
+ ],
+ [
+ "Mu",
+ "lt"
+ ],
+ [
+ "M",
+ "ult"
+ ],
+ [
+ "▁d",
+ "istrict"
+ ],
+ [
+ "▁di",
+ "strict"
+ ],
+ [
+ "▁dist",
+ "rict"
+ ],
+ [
+ "conne",
+ "ct"
+ ],
+ [
+ "conn",
+ "ect"
+ ],
+ [
+ "ject",
+ "ion"
+ ],
+ [
+ "je",
+ "ction"
+ ],
+ [
+ "j",
+ "ection"
+ ],
+ [
+ "ly",
+ "mp"
+ ],
+ [
+ "▁real",
+ "ized"
+ ],
+ [
+ "▁realize",
+ "d"
+ ],
+ [
+ "▁realiz",
+ "ed"
+ ],
+ [
+ "mo",
+ "s"
+ ],
+ [
+ "m",
+ "os"
+ ],
+ [
+ "y",
+ "e"
+ ],
+ [
+ "▁re",
+ "nder"
+ ],
+ [
+ "▁r",
+ "ender"
+ ],
+ [
+ "▁ren",
+ "der"
+ ],
+ [
+ "▁rend",
+ "er"
+ ],
+ [
+ "▁",
+ "render"
+ ],
+ [
+ "ri",
+ "o"
+ ],
+ [
+ "r",
+ "io"
+ ],
+ [
+ "▁inter",
+ "pret"
+ ],
+ [
+ "▁",
+ "interpret"
+ ],
+ [
+ "▁slight",
+ "ly"
+ ],
+ [
+ "fi",
+ "x"
+ ],
+ [
+ "f",
+ "ix"
+ ],
+ [
+ "▁stud",
+ "ies"
+ ],
+ [
+ "▁r",
+ "id"
+ ],
+ [
+ "▁ri",
+ "d"
+ ],
+ [
+ "▁",
+ "rid"
+ ],
+ [
+ "at",
+ "re"
+ ],
+ [
+ "atr",
+ "e"
+ ],
+ [
+ "a",
+ "tre"
+ ],
+ [
+ "▁benef",
+ "its"
+ ],
+ [
+ "▁benefit",
+ "s"
+ ],
+ [
+ "▁F",
+ "ace"
+ ],
+ [
+ "▁Fa",
+ "ce"
+ ],
+ [
+ "▁Fac",
+ "e"
+ ],
+ [
+ "▁",
+ "Face"
+ ],
+ [
+ "iv",
+ "ery"
+ ],
+ [
+ "ive",
+ "ry"
+ ],
+ [
+ "iver",
+ "y"
+ ],
+ [
+ "i",
+ "very"
+ ],
+ [
+ "ри",
+ "я"
+ ],
+ [
+ "doc",
+ "ument"
+ ],
+ [
+ "d",
+ "ocument"
+ ],
+ [
+ "▁as",
+ "king"
+ ],
+ [
+ "▁ask",
+ "ing"
+ ],
+ [
+ "La",
+ "st"
+ ],
+ [
+ "L",
+ "ast"
+ ],
+ [
+ "ar",
+ "ante"
+ ],
+ [
+ "ara",
+ "nte"
+ ],
+ [
+ "aran",
+ "te"
+ ],
+ [
+ "▁Mart",
+ "in"
+ ],
+ [
+ "▁E",
+ "ll"
+ ],
+ [
+ "▁El",
+ "l"
+ ],
+ [
+ "▁v",
+ "ector"
+ ],
+ [
+ "▁ve",
+ "ctor"
+ ],
+ [
+ "▁vec",
+ "tor"
+ ],
+ [
+ "▁",
+ "vector"
+ ],
+ [
+ "▁for",
+ "ced"
+ ],
+ [
+ "▁force",
+ "d"
+ ],
+ [
+ "▁",
+ "forced"
+ ],
+ [
+ "о",
+ "ло"
+ ],
+ [
+ "P",
+ "H"
+ ],
+ [
+ "W",
+ "R"
+ ],
+ [
+ "▁K",
+ "l"
+ ],
+ [
+ "▁s",
+ "ky"
+ ],
+ [
+ "▁sk",
+ "y"
+ ],
+ [
+ "▁",
+ "sky"
+ ],
+ [
+ "▁str",
+ "ategy"
+ ],
+ [
+ "▁strateg",
+ "y"
+ ],
+ [
+ "▁strat",
+ "egy"
+ ],
+ [
+ "oc",
+ "ked"
+ ],
+ [
+ "ock",
+ "ed"
+ ],
+ [
+ "▁ne",
+ "ck"
+ ],
+ [
+ "ś",
+ "ci"
+ ],
+ [
+ "O",
+ "UT"
+ ],
+ [
+ "))",
+ ","
+ ],
+ [
+ ")",
+ "),"
+ ],
+ [
+ "C",
+ "ustom"
+ ],
+ [
+ "▁w",
+ "ie"
+ ],
+ [
+ "▁",
+ "wie"
+ ],
+ [
+ "▁s",
+ "weet"
+ ],
+ [
+ "▁swe",
+ "et"
+ ],
+ [
+ "▁t",
+ "emp"
+ ],
+ [
+ "▁te",
+ "mp"
+ ],
+ [
+ "▁tem",
+ "p"
+ ],
+ [
+ "▁",
+ "temp"
+ ],
+ [
+ "▁fore",
+ "ign"
+ ],
+ [
+ "▁h",
+ "all"
+ ],
+ [
+ "▁ha",
+ "ll"
+ ],
+ [
+ "▁hal",
+ "l"
+ ],
+ [
+ "▁",
+ "hall"
+ ],
+ [
+ "as",
+ "tr"
+ ],
+ [
+ "ast",
+ "r"
+ ],
+ [
+ "a",
+ "str"
+ ],
+ [
+ "As",
+ "s"
+ ],
+ [
+ "A",
+ "ss"
+ ],
+ [
+ "MO",
+ "DE"
+ ],
+ [
+ "MOD",
+ "E"
+ ],
+ [
+ "▁max",
+ "imum"
+ ],
+ [
+ "▁maxim",
+ "um"
+ ],
+ [
+ "an",
+ "nels"
+ ],
+ [
+ "ann",
+ "els"
+ ],
+ [
+ "annel",
+ "s"
+ ],
+ [
+ "anne",
+ "ls"
+ ],
+ [
+ "▁t",
+ "ip"
+ ],
+ [
+ "▁ti",
+ "p"
+ ],
+ [
+ "▁",
+ "tip"
+ ],
+ [
+ "▁second",
+ "s"
+ ],
+ [
+ "▁sec",
+ "onds"
+ ],
+ [
+ "▁",
+ "seconds"
+ ],
+ [
+ "▁st",
+ "ack"
+ ],
+ [
+ "▁sta",
+ "ck"
+ ],
+ [
+ "▁",
+ "stack"
+ ],
+ [
+ "ig",
+ "a"
+ ],
+ [
+ "i",
+ "ga"
+ ],
+ [
+ "▁r",
+ "aise"
+ ],
+ [
+ "▁rais",
+ "e"
+ ],
+ [
+ "▁ra",
+ "ise"
+ ],
+ [
+ "▁",
+ "raise"
+ ],
+ [
+ "en",
+ "able"
+ ],
+ [
+ "ena",
+ "ble"
+ ],
+ [
+ "oi",
+ "r"
+ ],
+ [
+ "o",
+ "ir"
+ ],
+ [
+ "▁s",
+ "oul"
+ ],
+ [
+ "▁so",
+ "ul"
+ ],
+ [
+ "▁sou",
+ "l"
+ ],
+ [
+ "K",
+ "e"
+ ],
+ [
+ ")$",
+ "."
+ ],
+ [
+ ")",
+ "$."
+ ],
+ [
+ "▁T",
+ "im"
+ ],
+ [
+ "▁Ti",
+ "m"
+ ],
+ [
+ "▁",
+ "Tim"
+ ],
+ [
+ "AL",
+ "SE"
+ ],
+ [
+ "is",
+ "er"
+ ],
+ [
+ "ise",
+ "r"
+ ],
+ [
+ "i",
+ "ser"
+ ],
+ [
+ "cont",
+ "in"
+ ],
+ [
+ "be",
+ "l"
+ ],
+ [
+ "b",
+ "el"
+ ],
+ [
+ "▁m",
+ "ad"
+ ],
+ [
+ "▁ma",
+ "d"
+ ],
+ [
+ "▁",
+ "mad"
+ ],
+ [
+ "lic",
+ "hen"
+ ],
+ [
+ "li",
+ "chen"
+ ],
+ [
+ "lich",
+ "en"
+ ],
+ [
+ "liche",
+ "n"
+ ],
+ [
+ "l",
+ "ichen"
+ ],
+ [
+ "ab",
+ "e"
+ ],
+ [
+ "a",
+ "be"
+ ],
+ [
+ "sa",
+ "fe"
+ ],
+ [
+ "▁con",
+ "cent"
+ ],
+ [
+ "▁conc",
+ "ent"
+ ],
+ [
+ "▁conce",
+ "nt"
+ ],
+ [
+ "bo",
+ "und"
+ ],
+ [
+ "b",
+ "ound"
+ ],
+ [
+ "▁R",
+ "equ"
+ ],
+ [
+ "▁Re",
+ "qu"
+ ],
+ [
+ "▁",
+ "Requ"
+ ],
+ [
+ "sw",
+ "itch"
+ ],
+ [
+ "▁st",
+ "one"
+ ],
+ [
+ "▁sto",
+ "ne"
+ ],
+ [
+ "▁",
+ "stone"
+ ],
+ [
+ "▁trans",
+ "l"
+ ],
+ [
+ "▁",
+ "transl"
+ ],
+ [
+ "▁v",
+ "ac"
+ ],
+ [
+ "▁va",
+ "c"
+ ],
+ [
+ "an",
+ "don"
+ ],
+ [
+ "and",
+ "on"
+ ],
+ [
+ "ando",
+ "n"
+ ],
+ [
+ "▁F",
+ "ore"
+ ],
+ [
+ "▁For",
+ "e"
+ ],
+ [
+ "▁Fo",
+ "re"
+ ],
+ [
+ "▁",
+ "Fore"
+ ],
+ [
+ "▁s",
+ "ounds"
+ ],
+ [
+ "▁sound",
+ "s"
+ ],
+ [
+ "▁P",
+ "op"
+ ],
+ [
+ "▁Po",
+ "p"
+ ],
+ [
+ "▁",
+ "Pop"
+ ],
+ [
+ "▁H",
+ "T"
+ ],
+ [
+ "▁",
+ "HT"
+ ],
+ [
+ "li",
+ "a"
+ ],
+ [
+ "l",
+ "ia"
+ ],
+ [
+ "en",
+ "ter"
+ ],
+ [
+ "ent",
+ "er"
+ ],
+ [
+ "ente",
+ "r"
+ ],
+ [
+ "▁hel",
+ "ps"
+ ],
+ [
+ "▁help",
+ "s"
+ ],
+ [
+ "ed",
+ "y"
+ ],
+ [
+ "e",
+ "dy"
+ ],
+ [
+ "ст",
+ "вен"
+ ],
+ [
+ "ств",
+ "ен"
+ ],
+ [
+ "стве",
+ "н"
+ ],
+ [
+ "an",
+ "ted"
+ ],
+ [
+ "ant",
+ "ed"
+ ],
+ [
+ "ante",
+ "d"
+ ],
+ [
+ "▁I",
+ "ts"
+ ],
+ [
+ "▁It",
+ "s"
+ ],
+ [
+ "▁St",
+ "ep"
+ ],
+ [
+ "▁Ste",
+ "p"
+ ],
+ [
+ "▁",
+ "Step"
+ ],
+ [
+ "I",
+ "con"
+ ],
+ [
+ "▁EX",
+ "PECT"
+ ],
+ [
+ "▁",
+ "EXPECT"
+ ],
+ [
+ "ial",
+ "ized"
+ ],
+ [
+ "ialize",
+ "d"
+ ],
+ [
+ "Pos",
+ "t"
+ ],
+ [
+ "Po",
+ "st"
+ ],
+ [
+ "P",
+ "ost"
+ ],
+ [
+ "az",
+ "e"
+ ],
+ [
+ "a",
+ "ze"
+ ],
+ [
+ "▁Car",
+ "ol"
+ ],
+ [
+ "▁Ca",
+ "rol"
+ ],
+ [
+ "▁re",
+ "q"
+ ],
+ [
+ "▁r",
+ "eq"
+ ],
+ [
+ "▁",
+ "req"
+ ],
+ [
+ "▁crit",
+ "ical"
+ ],
+ [
+ "▁critic",
+ "al"
+ ],
+ [
+ "D",
+ "S"
+ ],
+ [
+ "▁se",
+ "at"
+ ],
+ [
+ "▁sea",
+ "t"
+ ],
+ [
+ "ap",
+ "ed"
+ ],
+ [
+ "ape",
+ "d"
+ ],
+ [
+ "a",
+ "ped"
+ ],
+ [
+ "▁up",
+ "per"
+ ],
+ [
+ "▁upp",
+ "er"
+ ],
+ [
+ "▁",
+ "upper"
+ ],
+ [
+ "▁S",
+ "y"
+ ],
+ [
+ "▁",
+ "Sy"
+ ],
+ [
+ "▁ex",
+ "plain"
+ ],
+ [
+ "▁expl",
+ "ain"
+ ],
+ [
+ "▁'",
+ "./"
+ ],
+ [
+ "▁'.",
+ "/"
+ ],
+ [
+ "ut",
+ "ils"
+ ],
+ [
+ "util",
+ "s"
+ ],
+ [
+ "uti",
+ "ls"
+ ],
+ [
+ "poss",
+ "ible"
+ ],
+ [
+ "▁d",
+ "ont"
+ ],
+ [
+ "▁do",
+ "nt"
+ ],
+ [
+ "▁don",
+ "t"
+ ],
+ [
+ "H",
+ "ost"
+ ],
+ [
+ "▁appro",
+ "xim"
+ ],
+ [
+ "▁approx",
+ "im"
+ ],
+ [
+ "As",
+ "ync"
+ ],
+ [
+ "A",
+ "sync"
+ ],
+ [
+ "▁g",
+ "rab"
+ ],
+ [
+ "▁gr",
+ "ab"
+ ],
+ [
+ "▁gra",
+ "b"
+ ],
+ [
+ "▁s",
+ "ources"
+ ],
+ [
+ "▁source",
+ "s"
+ ],
+ [
+ "▁sour",
+ "ces"
+ ],
+ [
+ "▁",
+ "sources"
+ ],
+ [
+ "▁M",
+ "os"
+ ],
+ [
+ "▁Mo",
+ "s"
+ ],
+ [
+ "▁Germ",
+ "any"
+ ],
+ [
+ "▁German",
+ "y"
+ ],
+ [
+ "▁Ger",
+ "many"
+ ],
+ [
+ "▁r",
+ "ub"
+ ],
+ [
+ "▁ru",
+ "b"
+ ],
+ [
+ "▁",
+ "rub"
+ ],
+ [
+ "CH",
+ "AN"
+ ],
+ [
+ "▁r",
+ "ain"
+ ],
+ [
+ "▁ra",
+ "in"
+ ],
+ [
+ "▁tr",
+ "uly"
+ ],
+ [
+ "▁join",
+ "ed"
+ ],
+ [
+ "▁jo",
+ "ined"
+ ],
+ [
+ "▁<",
+ "?"
+ ],
+ [
+ "▁",
+ ""
+ ],
+ [
+ "▁L",
+ "o"
+ ],
+ [
+ "▁",
+ "Lo"
+ ],
+ [
+ "Des",
+ "cription"
+ ],
+ [
+ "De",
+ "scription"
+ ],
+ [
+ "ak",
+ "t"
+ ],
+ [
+ "a",
+ "kt"
+ ],
+ [
+ "▁A",
+ "nn"
+ ],
+ [
+ "▁An",
+ "n"
+ ],
+ [
+ "▁",
+ "Ann"
+ ],
+ [
+ "^",
+ "*"
+ ],
+ [
+ "id",
+ "ae"
+ ],
+ [
+ "ida",
+ "e"
+ ],
+ [
+ "(",
+ ":"
+ ],
+ [
+ "t",
+ "w"
+ ],
+ [
+ "Ma",
+ "r"
+ ],
+ [
+ "M",
+ "ar"
+ ],
+ [
+ "pro",
+ "du"
+ ],
+ [
+ "prod",
+ "u"
+ ],
+ [
+ "p",
+ "rodu"
+ ],
+ [
+ "▁sp",
+ "oke"
+ ],
+ [
+ "▁spo",
+ "ke"
+ ],
+ [
+ "ю",
+ "т"
+ ],
+ [
+ "▁walk",
+ "ing"
+ ],
+ [
+ "▁wal",
+ "king"
+ ],
+ [
+ "▁nod",
+ "ded"
+ ],
+ [
+ "Pro",
+ "ps"
+ ],
+ [
+ "Pr",
+ "ops"
+ ],
+ [
+ "Prop",
+ "s"
+ ],
+ [
+ "En",
+ "abled"
+ ],
+ [
+ "Enable",
+ "d"
+ ],
+ [
+ "ir",
+ "k"
+ ],
+ [
+ "FI",
+ "LE"
+ ],
+ [
+ "FIL",
+ "E"
+ ],
+ [
+ "F",
+ "ILE"
+ ],
+ [
+ "equ",
+ "al"
+ ],
+ [
+ "eq",
+ "ual"
+ ],
+ [
+ "e",
+ "qual"
+ ],
+ [
+ "pp",
+ "ing"
+ ],
+ [
+ "p",
+ "ping"
+ ],
+ [
+ "ol",
+ "i"
+ ],
+ [
+ "o",
+ "li"
+ ],
+ [
+ "E",
+ "V"
+ ],
+ [
+ "en",
+ "z"
+ ],
+ [
+ "et",
+ "ing"
+ ],
+ [
+ "eti",
+ "ng"
+ ],
+ [
+ "e",
+ "ting"
+ ],
+ [
+ "▁s",
+ "ample"
+ ],
+ [
+ "▁sam",
+ "ple"
+ ],
+ [
+ "▁",
+ "sample"
+ ],
+ [
+ "▁art",
+ "ist"
+ ],
+ [
+ "[",
+ "$"
+ ],
+ [
+ "it",
+ "à"
+ ],
+ [
+ "й",
+ "о"
+ ],
+ [
+ "pro",
+ "ps"
+ ],
+ [
+ "pr",
+ "ops"
+ ],
+ [
+ "prop",
+ "s"
+ ],
+ [
+ "b",
+ "u"
+ ],
+ [
+ "е",
+ "в"
+ ],
+ [
+ "▁respons",
+ "ible"
+ ],
+ [
+ "M",
+ "T"
+ ],
+ [
+ "▁caus",
+ "ed"
+ ],
+ [
+ "▁cause",
+ "d"
+ ],
+ [
+ "▁ca",
+ "used"
+ ],
+ [
+ "▁the",
+ "me"
+ ],
+ [
+ "▁th",
+ "eme"
+ ],
+ [
+ "▁them",
+ "e"
+ ],
+ [
+ "▁",
+ "theme"
+ ],
+ [
+ "▁W",
+ "as"
+ ],
+ [
+ "▁Wa",
+ "s"
+ ],
+ [
+ "▁",
+ "Was"
+ ],
+ [
+ "▁B",
+ "efore"
+ ],
+ [
+ "▁Be",
+ "fore"
+ ],
+ [
+ "▁",
+ "Before"
+ ],
+ [
+ "ac",
+ "le"
+ ],
+ [
+ "acl",
+ "e"
+ ],
+ [
+ "a",
+ "cle"
+ ],
+ [
+ "▁ро",
+ "ку"
+ ],
+ [
+ "c",
+ "u"
+ ],
+ [
+ "DE",
+ "V"
+ ],
+ [
+ "D",
+ "EV"
+ ],
+ [
+ "▁h",
+ "ung"
+ ],
+ [
+ "▁hun",
+ "g"
+ ],
+ [
+ "▁",
+ "hung"
+ ],
+ [
+ "text",
+ "bf"
+ ],
+ [
+ "▁s",
+ "pin"
+ ],
+ [
+ "▁sp",
+ "in"
+ ],
+ [
+ "▁",
+ "spin"
+ ],
+ [
+ "▁la",
+ "test"
+ ],
+ [
+ "▁late",
+ "st"
+ ],
+ [
+ "▁lat",
+ "est"
+ ],
+ [
+ "▁",
+ "latest"
+ ],
+ [
+ "ent",
+ "ially"
+ ],
+ [
+ "ential",
+ "ly"
+ ],
+ [
+ "enti",
+ "ally"
+ ],
+ [
+ "▁Pro",
+ "gram"
+ ],
+ [
+ "▁Pr",
+ "ogram"
+ ],
+ [
+ "▁",
+ "Program"
+ ],
+ [
+ "Met",
+ "adata"
+ ],
+ [
+ "Meta",
+ "data"
+ ],
+ [
+ "pass",
+ "word"
+ ],
+ [
+ "▁h",
+ "urt"
+ ],
+ [
+ "▁hur",
+ "t"
+ ],
+ [
+ "к",
+ "с"
+ ],
+ [
+ "▁A",
+ "us"
+ ],
+ [
+ "▁Au",
+ "s"
+ ],
+ [
+ "se",
+ "y"
+ ],
+ [
+ "s",
+ "ey"
+ ],
+ [
+ "al",
+ "let"
+ ],
+ [
+ "all",
+ "et"
+ ],
+ [
+ "alle",
+ "t"
+ ],
+ [
+ "x",
+ "F"
+ ],
+ [
+ "▁R",
+ "oad"
+ ],
+ [
+ "▁Ro",
+ "ad"
+ ],
+ [
+ "ет",
+ "ся"
+ ],
+ [
+ "е",
+ "тся"
+ ],
+ [
+ "▁re",
+ "nt"
+ ],
+ [
+ "▁r",
+ "ent"
+ ],
+ [
+ "▁ren",
+ "t"
+ ],
+ [
+ "▁",
+ "rent"
+ ],
+ [
+ "ци",
+ "я"
+ ],
+ [
+ "▁As",
+ "sert"
+ ],
+ [
+ "▁Ass",
+ "ert"
+ ],
+ [
+ "▁",
+ "Assert"
+ ],
+ [
+ "і",
+ "ль"
+ ],
+ [
+ "ü",
+ "ck"
+ ],
+ [
+ "▁s",
+ "ites"
+ ],
+ [
+ "▁sit",
+ "es"
+ ],
+ [
+ "▁si",
+ "tes"
+ ],
+ [
+ "▁site",
+ "s"
+ ],
+ [
+ "Doc",
+ "ument"
+ ],
+ [
+ "D",
+ "ocument"
+ ],
+ [
+ "▁obt",
+ "ained"
+ ],
+ [
+ "▁obtain",
+ "ed"
+ ],
+ [
+ "▁c",
+ "i"
+ ],
+ [
+ "▁",
+ "ci"
+ ],
+ [
+ "▁[",
+ "\""
+ ],
+ [
+ "▁",
+ "[\""
+ ],
+ [
+ "▁com",
+ "pleted"
+ ],
+ [
+ "▁comp",
+ "leted"
+ ],
+ [
+ "▁complet",
+ "ed"
+ ],
+ [
+ "▁compl",
+ "eted"
+ ],
+ [
+ "▁complete",
+ "d"
+ ],
+ [
+ "as",
+ "et"
+ ],
+ [
+ "ase",
+ "t"
+ ],
+ [
+ "a",
+ "set"
+ ],
+ [
+ "ra",
+ "id"
+ ],
+ [
+ "rai",
+ "d"
+ ],
+ [
+ "r",
+ "aid"
+ ],
+ [
+ "▁s",
+ "orry"
+ ],
+ [
+ "▁sor",
+ "ry"
+ ],
+ [
+ "▁f",
+ "ab"
+ ],
+ [
+ "▁fa",
+ "b"
+ ],
+ [
+ "▁",
+ "fab"
+ ],
+ [
+ "▁sch",
+ "ools"
+ ],
+ [
+ "▁school",
+ "s"
+ ],
+ [
+ "хо",
+ "ди"
+ ],
+ [
+ "ход",
+ "и"
+ ],
+ [
+ "▁s",
+ "cr"
+ ],
+ [
+ "▁sc",
+ "r"
+ ],
+ [
+ "▁",
+ "scr"
+ ],
+ [
+ "▁in",
+ "cor"
+ ],
+ [
+ "▁inc",
+ "or"
+ ],
+ [
+ "▁'",
+ "/"
+ ],
+ [
+ "▁s",
+ "pr"
+ ],
+ [
+ "▁sp",
+ "r"
+ ],
+ [
+ "▁",
+ "spr"
+ ],
+ [
+ "▁T",
+ "ext"
+ ],
+ [
+ "▁Te",
+ "xt"
+ ],
+ [
+ "▁Tex",
+ "t"
+ ],
+ [
+ "▁",
+ "Text"
+ ],
+ [
+ "▁com",
+ "mercial"
+ ],
+ [
+ "▁commer",
+ "cial"
+ ],
+ [
+ "in",
+ "gly"
+ ],
+ [
+ "ing",
+ "ly"
+ ],
+ [
+ "▁opin",
+ "ion"
+ ],
+ [
+ "▁S",
+ "tar"
+ ],
+ [
+ "▁St",
+ "ar"
+ ],
+ [
+ "▁Sta",
+ "r"
+ ],
+ [
+ "▁",
+ "Star"
+ ],
+ [
+ "Si",
+ "gn"
+ ],
+ [
+ "Sig",
+ "n"
+ ],
+ [
+ "S",
+ "ign"
+ ],
+ [
+ "▁j",
+ "avax"
+ ],
+ [
+ "▁java",
+ "x"
+ ],
+ [
+ "▁",
+ "javax"
+ ],
+ [
+ "w",
+ "i"
+ ],
+ [
+ "la",
+ "t"
+ ],
+ [
+ "l",
+ "at"
+ ],
+ [
+ "▁K",
+ "ey"
+ ],
+ [
+ "▁Ke",
+ "y"
+ ],
+ [
+ "▁",
+ "Key"
+ ],
+ [
+ "var",
+ "phi"
+ ],
+ [
+ "д",
+ "ы"
+ ],
+ [
+ "▁conne",
+ "cted"
+ ],
+ [
+ "▁connect",
+ "ed"
+ ],
+ [
+ "▁",
+ "connected"
+ ],
+ [
+ "▁ad",
+ "just"
+ ],
+ [
+ "▁adj",
+ "ust"
+ ],
+ [
+ "▁",
+ "adjust"
+ ],
+ [
+ "▁A",
+ "z"
+ ],
+ [
+ "▁",
+ "Az"
+ ],
+ [
+ "▁pl",
+ "anning"
+ ],
+ [
+ "▁plan",
+ "ning"
+ ],
+ [
+ "--",
+ "-"
+ ],
+ [
+ "-",
+ "--"
+ ],
+ [
+ "In",
+ "teger"
+ ],
+ [
+ "au",
+ "f"
+ ],
+ [
+ "a",
+ "uf"
+ ],
+ [
+ "ex",
+ "pected"
+ ],
+ [
+ "expect",
+ "ed"
+ ],
+ [
+ "e",
+ "xpected"
+ ],
+ [
+ "▁f",
+ "ant"
+ ],
+ [
+ "▁fa",
+ "nt"
+ ],
+ [
+ "▁fan",
+ "t"
+ ],
+ [
+ "▁t",
+ "ou"
+ ],
+ [
+ "▁to",
+ "u"
+ ],
+ [
+ "Par",
+ "ent"
+ ],
+ [
+ "P",
+ "arent"
+ ],
+ [
+ "▁L",
+ "at"
+ ],
+ [
+ "▁La",
+ "t"
+ ],
+ [
+ "▁",
+ "Lat"
+ ],
+ [
+ "▁thought",
+ "s"
+ ],
+ [
+ "▁though",
+ "ts"
+ ],
+ [
+ "▁J",
+ "ud"
+ ],
+ [
+ "▁Ju",
+ "d"
+ ],
+ [
+ "Param",
+ "eters"
+ ],
+ [
+ "Parameter",
+ "s"
+ ],
+ [
+ "G",
+ "r"
+ ],
+ [
+ "ро",
+ "м"
+ ],
+ [
+ "I",
+ "A"
+ ],
+ [
+ "▁B",
+ "ob"
+ ],
+ [
+ "▁Bo",
+ "b"
+ ],
+ [
+ "lic",
+ "t"
+ ],
+ [
+ "li",
+ "ct"
+ ],
+ [
+ "l",
+ "ict"
+ ],
+ [
+ "la",
+ "n"
+ ],
+ [
+ "l",
+ "an"
+ ],
+ [
+ "om",
+ "ic"
+ ],
+ [
+ "omi",
+ "c"
+ ],
+ [
+ "o",
+ "mic"
+ ],
+ [
+ "▁a",
+ "part"
+ ],
+ [
+ "▁ap",
+ "art"
+ ],
+ [
+ "▁t",
+ "rou"
+ ],
+ [
+ "▁tr",
+ "ou"
+ ],
+ [
+ "▁tro",
+ "u"
+ ],
+ [
+ "▁app",
+ "reci"
+ ],
+ [
+ "▁Christ",
+ "mas"
+ ],
+ [
+ "ir",
+ "q"
+ ],
+ [
+ "i",
+ "rq"
+ ],
+ [
+ "th",
+ "on"
+ ],
+ [
+ "t",
+ "hon"
+ ],
+ [
+ "▁Er",
+ "ror"
+ ],
+ [
+ "▁Err",
+ "or"
+ ],
+ [
+ "▁",
+ "Error"
+ ],
+ [
+ "▁s",
+ "core"
+ ],
+ [
+ "▁sc",
+ "ore"
+ ],
+ [
+ "▁",
+ "score"
+ ],
+ [
+ "ro",
+ "me"
+ ],
+ [
+ "rom",
+ "e"
+ ],
+ [
+ "r",
+ "ome"
+ ],
+ [
+ "▁ne",
+ "ighbor"
+ ],
+ [
+ "▁neigh",
+ "bor"
+ ],
+ [
+ "▁neighb",
+ "or"
+ ],
+ [
+ "▁M",
+ "ur"
+ ],
+ [
+ "▁Mu",
+ "r"
+ ],
+ [
+ "ad",
+ "min"
+ ],
+ [
+ "▁Fil",
+ "m"
+ ],
+ [
+ "▁Fi",
+ "lm"
+ ],
+ [
+ "Re",
+ "ct"
+ ],
+ [
+ "Rec",
+ "t"
+ ],
+ [
+ "R",
+ "ect"
+ ],
+ [
+ "▁config",
+ "uration"
+ ],
+ [
+ "▁",
+ "configuration"
+ ],
+ [
+ "▁c",
+ "s"
+ ],
+ [
+ "▁",
+ "cs"
+ ],
+ [
+ "gu",
+ "n"
+ ],
+ [
+ "g",
+ "un"
+ ],
+ [
+ "ch",
+ "annel"
+ ],
+ [
+ "chan",
+ "nel"
+ ],
+ [
+ "▁Re",
+ "port"
+ ],
+ [
+ "▁Rep",
+ "ort"
+ ],
+ [
+ "▁",
+ "Report"
+ ],
+ [
+ "▁str",
+ "ateg"
+ ],
+ [
+ "▁strat",
+ "eg"
+ ],
+ [
+ "▁work",
+ "ers"
+ ],
+ [
+ "▁wor",
+ "kers"
+ ],
+ [
+ "▁worker",
+ "s"
+ ],
+ [
+ "▁",
+ "workers"
+ ],
+ [
+ "field",
+ "s"
+ ],
+ [
+ "Sch",
+ "ema"
+ ],
+ [
+ "Sche",
+ "ma"
+ ],
+ [
+ "S",
+ "chema"
+ ],
+ [
+ "ap",
+ "pa"
+ ],
+ [
+ "app",
+ "a"
+ ],
+ [
+ "ol",
+ "ic"
+ ],
+ [
+ "oli",
+ "c"
+ ],
+ [
+ "o",
+ "lic"
+ ],
+ [
+ "E",
+ "O"
+ ],
+ [
+ "▁Ch",
+ "arl"
+ ],
+ [
+ "▁Char",
+ "l"
+ ],
+ [
+ "▁Cha",
+ "rl"
+ ],
+ [
+ "▁C",
+ "up"
+ ],
+ [
+ "▁Cu",
+ "p"
+ ],
+ [
+ "pn",
+ "g"
+ ],
+ [
+ "p",
+ "ng"
+ ],
+ [
+ "▁H",
+ "ill"
+ ],
+ [
+ "▁Hi",
+ "ll"
+ ],
+ [
+ "▁Hil",
+ "l"
+ ],
+ [
+ "ow",
+ "e"
+ ],
+ [
+ "o",
+ "we"
+ ],
+ [
+ "▁most",
+ "ly"
+ ],
+ [
+ "”",
+ "."
+ ],
+ [
+ "▁fin",
+ "ish"
+ ],
+ [
+ "▁",
+ "finish"
+ ],
+ [
+ "▁С",
+ "о"
+ ],
+ [
+ "▁st",
+ "ars"
+ ],
+ [
+ "▁star",
+ "s"
+ ],
+ [
+ "▁sta",
+ "rs"
+ ],
+ [
+ "pl",
+ "ayer"
+ ],
+ [
+ "play",
+ "er"
+ ],
+ [
+ "p",
+ "layer"
+ ],
+ [
+ "▁in",
+ "ner"
+ ],
+ [
+ "▁inn",
+ "er"
+ ],
+ [
+ "▁",
+ "inner"
+ ],
+ [
+ "com",
+ "ponent"
+ ],
+ [
+ "ti",
+ "m"
+ ],
+ [
+ "t",
+ "im"
+ ],
+ [
+ "I",
+ "E"
+ ],
+ [
+ "▁t",
+ "her"
+ ],
+ [
+ "▁the",
+ "r"
+ ],
+ [
+ "▁th",
+ "er"
+ ],
+ [
+ "▁",
+ "ther"
+ ],
+ [
+ "▁s",
+ "mart"
+ ],
+ [
+ "▁sm",
+ "art"
+ ],
+ [
+ "▁",
+ "smart"
+ ],
+ [
+ "▁s",
+ "ad"
+ ],
+ [
+ "▁sa",
+ "d"
+ ],
+ [
+ "▁Coun",
+ "cil"
+ ],
+ [
+ "ar",
+ "ea"
+ ],
+ [
+ "are",
+ "a"
+ ],
+ [
+ "a",
+ "rea"
+ ],
+ [
+ "la",
+ "y"
+ ],
+ [
+ "l",
+ "ay"
+ ],
+ [
+ "▁б",
+ "а"
+ ],
+ [
+ "▁",
+ "ба"
+ ],
+ [
+ "▁gr",
+ "adu"
+ ],
+ [
+ "▁grad",
+ "u"
+ ],
+ [
+ "▁gra",
+ "du"
+ ],
+ [
+ "▁c",
+ "hem"
+ ],
+ [
+ "▁ch",
+ "em"
+ ],
+ [
+ "▁che",
+ "m"
+ ],
+ [
+ "▁",
+ "chem"
+ ],
+ [
+ "▁h",
+ "o"
+ ],
+ [
+ "▁",
+ "ho"
+ ],
+ [
+ "Se",
+ "lect"
+ ],
+ [
+ "S",
+ "elect"
+ ],
+ [
+ "▁in",
+ "str"
+ ],
+ [
+ "▁inst",
+ "r"
+ ],
+ [
+ "▁ins",
+ "tr"
+ ],
+ [
+ "▁",
+ "instr"
+ ],
+ [
+ "▁k",
+ "l"
+ ],
+ [
+ "▁",
+ "kl"
+ ],
+ [
+ "if",
+ "ications"
+ ],
+ [
+ "ific",
+ "ations"
+ ],
+ [
+ "ification",
+ "s"
+ ],
+ [
+ "Lo",
+ "ng"
+ ],
+ [
+ "L",
+ "ong"
+ ],
+ [
+ "▁s",
+ "obre"
+ ],
+ [
+ "▁so",
+ "bre"
+ ],
+ [
+ "▁sob",
+ "re"
+ ],
+ [
+ "▁O",
+ "ld"
+ ],
+ [
+ "▁Ol",
+ "d"
+ ],
+ [
+ "▁",
+ "Old"
+ ],
+ [
+ "we",
+ "st"
+ ],
+ [
+ "w",
+ "est"
+ ],
+ [
+ "},",
+ "\\"
+ ],
+ [
+ "}",
+ ",\\"
+ ],
+ [
+ "in",
+ "gu"
+ ],
+ [
+ "ing",
+ "u"
+ ],
+ [
+ "▁sp",
+ "ring"
+ ],
+ [
+ "▁spr",
+ "ing"
+ ],
+ [
+ "▁",
+ "spring"
+ ],
+ [
+ "▁n",
+ "ur"
+ ],
+ [
+ "▁nu",
+ "r"
+ ],
+ [
+ "ex",
+ "ample"
+ ],
+ [
+ "Wh",
+ "en"
+ ],
+ [
+ "Whe",
+ "n"
+ ],
+ [
+ "W",
+ "hen"
+ ],
+ [
+ "▁adv",
+ "ice"
+ ],
+ [
+ "▁u",
+ "lt"
+ ],
+ [
+ "▁ul",
+ "t"
+ ],
+ [
+ "▁",
+ "ult"
+ ],
+ [
+ "en",
+ "nis"
+ ],
+ [
+ "enn",
+ "is"
+ ],
+ [
+ "▁L",
+ "ove"
+ ],
+ [
+ "▁Lo",
+ "ve"
+ ],
+ [
+ "▁Lov",
+ "e"
+ ],
+ [
+ "▁",
+ "Love"
+ ],
+ [
+ "▁\"",
+ "\""
+ ],
+ [
+ "▁",
+ "\"\""
+ ],
+ [
+ "▁incre",
+ "ased"
+ ],
+ [
+ "▁increase",
+ "d"
+ ],
+ [
+ "▁f",
+ "inding"
+ ],
+ [
+ "▁fin",
+ "ding"
+ ],
+ [
+ "▁find",
+ "ing"
+ ],
+ [
+ "ir",
+ "ty"
+ ],
+ [
+ "irt",
+ "y"
+ ],
+ [
+ "ist",
+ "rict"
+ ],
+ [
+ "istr",
+ "ict"
+ ],
+ [
+ "i",
+ "strict"
+ ],
+ [
+ "▁l",
+ "ayer"
+ ],
+ [
+ "▁la",
+ "yer"
+ ],
+ [
+ "▁lay",
+ "er"
+ ],
+ [
+ "▁",
+ "layer"
+ ],
+ [
+ "temp",
+ "late"
+ ],
+ [
+ "t",
+ "emplate"
+ ],
+ [
+ "F",
+ "irst"
+ ],
+ [
+ "ны",
+ "м"
+ ],
+ [
+ "igr",
+ "ation"
+ ],
+ [
+ "ren",
+ "cy"
+ ],
+ [
+ "r",
+ "ency"
+ ],
+ [
+ "ow",
+ "ie"
+ ],
+ [
+ "owi",
+ "e"
+ ],
+ [
+ "o",
+ "wie"
+ ],
+ [
+ "▁n",
+ "p"
+ ],
+ [
+ "▁",
+ "np"
+ ],
+ [
+ "▁s",
+ "election"
+ ],
+ [
+ "▁se",
+ "lection"
+ ],
+ [
+ "▁select",
+ "ion"
+ ],
+ [
+ "▁sel",
+ "ection"
+ ],
+ [
+ "▁sele",
+ "ction"
+ ],
+ [
+ "▁",
+ "selection"
+ ],
+ [
+ "▁N",
+ "ach"
+ ],
+ [
+ "▁Na",
+ "ch"
+ ],
+ [
+ "▁P",
+ "RO"
+ ],
+ [
+ "▁PR",
+ "O"
+ ],
+ [
+ "▁",
+ "PRO"
+ ],
+ [
+ "▁p",
+ "olic"
+ ],
+ [
+ "▁pol",
+ "ic"
+ ],
+ [
+ "▁po",
+ "lic"
+ ],
+ [
+ "▁data",
+ "base"
+ ],
+ [
+ "▁dat",
+ "abase"
+ ],
+ [
+ "▁",
+ "database"
+ ],
+ [
+ "▁by",
+ "te"
+ ],
+ [
+ "▁",
+ "byte"
+ ],
+ [
+ "▁prov",
+ "iding"
+ ],
+ [
+ "ma",
+ "c"
+ ],
+ [
+ "m",
+ "ac"
+ ],
+ [
+ "▁me",
+ "tal"
+ ],
+ [
+ "▁met",
+ "al"
+ ],
+ [
+ "▁meta",
+ "l"
+ ],
+ [
+ "mod",
+ "ules"
+ ],
+ [
+ "module",
+ "s"
+ ],
+ [
+ "▁Ge",
+ "org"
+ ],
+ [
+ "▁S",
+ "a"
+ ],
+ [
+ "▁",
+ "Sa"
+ ],
+ [
+ "▁est",
+ "ablish"
+ ],
+ [
+ "▁estab",
+ "lish"
+ ],
+ [
+ "..",
+ ".\""
+ ],
+ [
+ "...",
+ "\""
+ ],
+ [
+ "i",
+ "u"
+ ],
+ [
+ "ki",
+ "n"
+ ],
+ [
+ "k",
+ "in"
+ ],
+ [
+ "▁e",
+ "th"
+ ],
+ [
+ "▁et",
+ "h"
+ ],
+ [
+ "▁",
+ "eth"
+ ],
+ [
+ "▁S",
+ "and"
+ ],
+ [
+ "▁San",
+ "d"
+ ],
+ [
+ "▁Sa",
+ "nd"
+ ],
+ [
+ "▁Ch",
+ "apter"
+ ],
+ [
+ "▁Chap",
+ "ter"
+ ],
+ [
+ "▁g",
+ "al"
+ ],
+ [
+ "▁ga",
+ "l"
+ ],
+ [
+ "▁",
+ "gal"
+ ],
+ [
+ "▁i",
+ "ce"
+ ],
+ [
+ "▁ic",
+ "e"
+ ],
+ [
+ "▁",
+ "ice"
+ ],
+ [
+ "Re",
+ "d"
+ ],
+ [
+ "R",
+ "ed"
+ ],
+ [
+ "▁d",
+ "al"
+ ],
+ [
+ "▁da",
+ "l"
+ ],
+ [
+ "▁",
+ "dal"
+ ],
+ [
+ "▁pr",
+ "incipal"
+ ],
+ [
+ "▁princip",
+ "al"
+ ],
+ [
+ "Ms",
+ "g"
+ ],
+ [
+ "M",
+ "sg"
+ ],
+ [
+ "▁rem",
+ "ains"
+ ],
+ [
+ "▁remain",
+ "s"
+ ],
+ [
+ "н",
+ "г"
+ ],
+ [
+ "T",
+ "itle"
+ ],
+ [
+ "Re",
+ "l"
+ ],
+ [
+ "R",
+ "el"
+ ],
+ [
+ "Dis",
+ "play"
+ ],
+ [
+ "No",
+ "n"
+ ],
+ [
+ "N",
+ "on"
+ ],
+ [
+ "▁def",
+ "inition"
+ ],
+ [
+ "▁definit",
+ "ion"
+ ],
+ [
+ "▁defin",
+ "ition"
+ ],
+ [
+ "▁",
+ "definition"
+ ],
+ [
+ "▁at",
+ "tr"
+ ],
+ [
+ "▁att",
+ "r"
+ ],
+ [
+ "▁",
+ "attr"
+ ],
+ [
+ "▁sign",
+ "al"
+ ],
+ [
+ "▁sig",
+ "nal"
+ ],
+ [
+ "▁",
+ "signal"
+ ],
+ [
+ "h",
+ "l"
+ ],
+ [
+ "▁s",
+ "el"
+ ],
+ [
+ "▁se",
+ "l"
+ ],
+ [
+ "▁",
+ "sel"
+ ],
+ [
+ "▁vol",
+ "ume"
+ ],
+ [
+ "▁",
+ "volume"
+ ],
+ [
+ "▁c",
+ "ache"
+ ],
+ [
+ "▁ca",
+ "che"
+ ],
+ [
+ "▁",
+ "cache"
+ ],
+ [
+ "he",
+ "ns"
+ ],
+ [
+ "hen",
+ "s"
+ ],
+ [
+ "h",
+ "ens"
+ ],
+ [
+ "▁w",
+ "ird"
+ ],
+ [
+ "▁wir",
+ "d"
+ ],
+ [
+ "[",
+ "\\"
+ ],
+ [
+ "NO",
+ "T"
+ ],
+ [
+ "N",
+ "OT"
+ ],
+ [
+ "▁e",
+ "lection"
+ ],
+ [
+ "▁el",
+ "ection"
+ ],
+ [
+ "▁elect",
+ "ion"
+ ],
+ [
+ "▁ele",
+ "ction"
+ ],
+ [
+ "▁",
+ "election"
+ ],
+ [
+ "ut",
+ "t"
+ ],
+ [
+ "u",
+ "tt"
+ ],
+ [
+ "▁W",
+ "indow"
+ ],
+ [
+ "▁Wind",
+ "ow"
+ ],
+ [
+ "▁",
+ "Window"
+ ],
+ [
+ "en",
+ "tal"
+ ],
+ [
+ "ent",
+ "al"
+ ],
+ [
+ "enta",
+ "l"
+ ],
+ [
+ "if",
+ "est"
+ ],
+ [
+ "ife",
+ "st"
+ ],
+ [
+ "x",
+ "f"
+ ],
+ [
+ "▁Р",
+ "а"
+ ],
+ [
+ "▁over",
+ "all"
+ ],
+ [
+ "bl",
+ "ic"
+ ],
+ [
+ "b",
+ "lic"
+ ],
+ [
+ "▁ed",
+ "itor"
+ ],
+ [
+ "▁edit",
+ "or"
+ ],
+ [
+ "▁",
+ "editor"
+ ],
+ [
+ "ad",
+ "en"
+ ],
+ [
+ "ade",
+ "n"
+ ],
+ [
+ "a",
+ "den"
+ ],
+ [
+ "▁c",
+ "art"
+ ],
+ [
+ "▁car",
+ "t"
+ ],
+ [
+ "▁ca",
+ "rt"
+ ],
+ [
+ "▁",
+ "cart"
+ ],
+ [
+ "Le",
+ "ft"
+ ],
+ [
+ "L",
+ "eft"
+ ],
+ [
+ "ul",
+ "s"
+ ],
+ [
+ "u",
+ "ls"
+ ],
+ [
+ "bin",
+ "g"
+ ],
+ [
+ "bi",
+ "ng"
+ ],
+ [
+ "b",
+ "ing"
+ ],
+ [
+ "R",
+ "ight"
+ ],
+ [
+ "▁s",
+ "é"
+ ],
+ [
+ "Si",
+ "m"
+ ],
+ [
+ "S",
+ "im"
+ ],
+ [
+ "▁came",
+ "ra"
+ ],
+ [
+ "▁cam",
+ "era"
+ ],
+ [
+ "▁",
+ "camera"
+ ],
+ [
+ "▁f",
+ "av"
+ ],
+ [
+ "▁fa",
+ "v"
+ ],
+ [
+ "De",
+ "cl"
+ ],
+ [
+ "Dec",
+ "l"
+ ],
+ [
+ "sp",
+ "ring"
+ ],
+ [
+ "spr",
+ "ing"
+ ],
+ [
+ "▁err",
+ "ors"
+ ],
+ [
+ "▁er",
+ "rors"
+ ],
+ [
+ "▁error",
+ "s"
+ ],
+ [
+ "▁",
+ "errors"
+ ],
+ [
+ "T",
+ "ab"
+ ],
+ [
+ "print",
+ "ln"
+ ],
+ [
+ "▁B",
+ "ern"
+ ],
+ [
+ "▁Be",
+ "rn"
+ ],
+ [
+ "▁Ber",
+ "n"
+ ],
+ [
+ "na",
+ "b"
+ ],
+ [
+ "n",
+ "ab"
+ ],
+ [
+ "▁B",
+ "ase"
+ ],
+ [
+ "▁Bas",
+ "e"
+ ],
+ [
+ "▁Ba",
+ "se"
+ ],
+ [
+ "▁",
+ "Base"
+ ],
+ [
+ "▁a",
+ "uth"
+ ],
+ [
+ "▁aut",
+ "h"
+ ],
+ [
+ "▁au",
+ "th"
+ ],
+ [
+ "▁",
+ "auth"
+ ],
+ [
+ "▁app",
+ "arent"
+ ],
+ [
+ "▁ap",
+ "parent"
+ ],
+ [
+ "▁appar",
+ "ent"
+ ],
+ [
+ "▁pres",
+ "ented"
+ ],
+ [
+ "▁present",
+ "ed"
+ ],
+ [
+ "▁rem",
+ "ained"
+ ],
+ [
+ "▁remain",
+ "ed"
+ ],
+ [
+ "▁w",
+ "et"
+ ],
+ [
+ "▁we",
+ "t"
+ ],
+ [
+ "En",
+ "c"
+ ],
+ [
+ "E",
+ "nc"
+ ],
+ [
+ "IN",
+ "FO"
+ ],
+ [
+ "▁S",
+ "ing"
+ ],
+ [
+ "▁Si",
+ "ng"
+ ],
+ [
+ "▁Sin",
+ "g"
+ ],
+ [
+ "▁",
+ "Sing"
+ ],
+ [
+ "pack",
+ "age"
+ ],
+ [
+ "))",
+ ");"
+ ],
+ [
+ ")))",
+ ";"
+ ],
+ [
+ ")",
+ "));"
+ ],
+ [
+ "▁S",
+ "ocial"
+ ],
+ [
+ "▁So",
+ "cial"
+ ],
+ [
+ "▁Soc",
+ "ial"
+ ],
+ [
+ "▁Soci",
+ "al"
+ ],
+ [
+ "▁M",
+ "ass"
+ ],
+ [
+ "▁Ma",
+ "ss"
+ ],
+ [
+ "▁Mas",
+ "s"
+ ],
+ [
+ "▁",
+ "Mass"
+ ],
+ [
+ "▁des",
+ "pite"
+ ],
+ [
+ "▁desp",
+ "ite"
+ ],
+ [
+ "▁m",
+ "obile"
+ ],
+ [
+ "▁mob",
+ "ile"
+ ],
+ [
+ "▁mobil",
+ "e"
+ ],
+ [
+ "▁",
+ "mobile"
+ ],
+ [
+ "▁l",
+ "abor"
+ ],
+ [
+ "▁la",
+ "bor"
+ ],
+ [
+ "▁lab",
+ "or"
+ ],
+ [
+ "G",
+ "o"
+ ],
+ [
+ "▁e",
+ "sp"
+ ],
+ [
+ "▁es",
+ "p"
+ ],
+ [
+ "▁",
+ "esp"
+ ],
+ [
+ "▁T",
+ "able"
+ ],
+ [
+ "▁Ta",
+ "ble"
+ ],
+ [
+ "▁Tab",
+ "le"
+ ],
+ [
+ "▁",
+ "Table"
+ ],
+ [
+ "▁ex",
+ "pert"
+ ],
+ [
+ "▁exper",
+ "t"
+ ],
+ [
+ "▁exp",
+ "ert"
+ ],
+ [
+ "▁f",
+ "lex"
+ ],
+ [
+ "▁fl",
+ "ex"
+ ],
+ [
+ "▁fle",
+ "x"
+ ],
+ [
+ "▁",
+ "flex"
+ ],
+ [
+ "▁prof",
+ "ession"
+ ],
+ [
+ "▁profess",
+ "ion"
+ ],
+ [
+ "▁p",
+ "il"
+ ],
+ [
+ "▁pi",
+ "l"
+ ],
+ [
+ "Col",
+ "lection"
+ ],
+ [
+ "Coll",
+ "ection"
+ ],
+ [
+ "Collect",
+ "ion"
+ ],
+ [
+ "LO",
+ "CK"
+ ],
+ [
+ "LOC",
+ "K"
+ ],
+ [
+ "▁ap",
+ "plied"
+ ],
+ [
+ "▁appl",
+ "ied"
+ ],
+ [
+ "al",
+ "ler"
+ ],
+ [
+ "all",
+ "er"
+ ],
+ [
+ "alle",
+ "r"
+ ],
+ [
+ "or",
+ "ph"
+ ],
+ [
+ "orp",
+ "h"
+ ],
+ [
+ "EN",
+ "SE"
+ ],
+ [
+ "ENS",
+ "E"
+ ],
+ [
+ "▁бы",
+ "л"
+ ],
+ [
+ "▁d",
+ "b"
+ ],
+ [
+ "▁",
+ "db"
+ ],
+ [
+ "over",
+ "line"
+ ],
+ [
+ "▁C",
+ "ode"
+ ],
+ [
+ "▁Co",
+ "de"
+ ],
+ [
+ "▁",
+ "Code"
+ ],
+ [
+ "▁by",
+ "tes"
+ ],
+ [
+ "▁byte",
+ "s"
+ ],
+ [
+ "▁",
+ "bytes"
+ ],
+ [
+ "▁tr",
+ "ouble"
+ ],
+ [
+ "▁trou",
+ "ble"
+ ],
+ [
+ "▁на",
+ "се"
+ ],
+ [
+ "D",
+ "D"
+ ],
+ [
+ "▁Y",
+ "ear"
+ ],
+ [
+ "▁Ye",
+ "ar"
+ ],
+ [
+ "▁",
+ "Year"
+ ],
+ [
+ "mb",
+ "ox"
+ ],
+ [
+ "m",
+ "box"
+ ],
+ [
+ "▁ke",
+ "eping"
+ ],
+ [
+ "▁keep",
+ "ing"
+ ],
+ [
+ "▁",
+ "keeping"
+ ],
+ [
+ "▁k",
+ "ick"
+ ],
+ [
+ "▁ki",
+ "ck"
+ ],
+ [
+ "än",
+ "g"
+ ],
+ [
+ "ä",
+ "ng"
+ ],
+ [
+ "▁correspon",
+ "ding"
+ ],
+ [
+ "▁correspond",
+ "ing"
+ ],
+ [
+ "▁l",
+ "ibrary"
+ ],
+ [
+ "▁",
+ "library"
+ ],
+ [
+ "▁*/",
+ "\r"
+ ],
+ [
+ "▁",
+ "*/\r"
+ ],
+ [
+ "call",
+ "back"
+ ],
+ [
+ "um",
+ "s"
+ ],
+ [
+ "u",
+ "ms"
+ ],
+ [
+ "▁j",
+ "son"
+ ],
+ [
+ "▁js",
+ "on"
+ ],
+ [
+ "▁",
+ "json"
+ ],
+ [
+ "▁M",
+ "ount"
+ ],
+ [
+ "▁Mo",
+ "unt"
+ ],
+ [
+ "▁",
+ "Mount"
+ ],
+ [
+ "▁St",
+ "and"
+ ],
+ [
+ "▁Stan",
+ "d"
+ ],
+ [
+ "▁Sta",
+ "nd"
+ ],
+ [
+ "▁",
+ "Stand"
+ ],
+ [
+ "IG",
+ "HT"
+ ],
+ [
+ "IGH",
+ "T"
+ ],
+ [
+ "▁New",
+ "s"
+ ],
+ [
+ "▁Ne",
+ "ws"
+ ],
+ [
+ "▁",
+ "News"
+ ],
+ [
+ "▁com",
+ "ments"
+ ],
+ [
+ "▁comm",
+ "ents"
+ ],
+ [
+ "▁comment",
+ "s"
+ ],
+ [
+ "▁",
+ "comments"
+ ],
+ [
+ "return",
+ "s"
+ ],
+ [
+ "C",
+ "al"
+ ],
+ [
+ "▁a",
+ "ward"
+ ],
+ [
+ "▁aw",
+ "ard"
+ ],
+ [
+ "▁b",
+ "ought"
+ ],
+ [
+ "▁bou",
+ "ght"
+ ],
+ [
+ "include",
+ "graphics"
+ ],
+ [
+ "▁",
+ "ле"
+ ],
+ [
+ "do",
+ "t"
+ ],
+ [
+ "d",
+ "ot"
+ ],
+ [
+ "ro",
+ "nic"
+ ],
+ [
+ "ron",
+ "ic"
+ ],
+ [
+ "r",
+ "onic"
+ ],
+ [
+ "▁extrem",
+ "ely"
+ ],
+ [
+ "▁extreme",
+ "ly"
+ ],
+ [
+ "▁min",
+ "or"
+ ],
+ [
+ "▁mi",
+ "nor"
+ ],
+ [
+ "if",
+ "er"
+ ],
+ [
+ "ife",
+ "r"
+ ],
+ [
+ "i",
+ "fer"
+ ],
+ [
+ "ja",
+ "va"
+ ],
+ [
+ "jav",
+ "a"
+ ],
+ [
+ "j",
+ "ava"
+ ],
+ [
+ "en",
+ "dar"
+ ],
+ [
+ "end",
+ "ar"
+ ],
+ [
+ "enda",
+ "r"
+ ],
+ [
+ "la",
+ "yout"
+ ],
+ [
+ "lay",
+ "out"
+ ],
+ [
+ "l",
+ "ayout"
+ ],
+ [
+ "pl",
+ "ies"
+ ],
+ [
+ "▁b",
+ "uf"
+ ],
+ [
+ "▁bu",
+ "f"
+ ],
+ [
+ "▁",
+ "buf"
+ ],
+ [
+ "▁Is",
+ "land"
+ ],
+ [
+ "▁Ab",
+ "out"
+ ],
+ [
+ "▁",
+ "About"
+ ],
+ [
+ "▁w",
+ "est"
+ ],
+ [
+ "▁we",
+ "st"
+ ],
+ [
+ "▁",
+ "west"
+ ],
+ [
+ "▁S",
+ "cott"
+ ],
+ [
+ "▁Sc",
+ "ott"
+ ],
+ [
+ "▁Scot",
+ "t"
+ ],
+ [
+ "AC",
+ "T"
+ ],
+ [
+ "A",
+ "CT"
+ ],
+ [
+ "Wh",
+ "y"
+ ],
+ [
+ "W",
+ "hy"
+ ],
+ [
+ "▁large",
+ "st"
+ ],
+ [
+ "▁larg",
+ "est"
+ ],
+ [
+ "▁cont",
+ "ainer"
+ ],
+ [
+ "▁contain",
+ "er"
+ ],
+ [
+ "▁",
+ "container"
+ ],
+ [
+ "▁t",
+ "emperature"
+ ],
+ [
+ "▁temper",
+ "ature"
+ ],
+ [
+ "▁",
+ "£"
+ ],
+ [
+ "▁red",
+ "uce"
+ ],
+ [
+ "▁redu",
+ "ce"
+ ],
+ [
+ "▁",
+ "reduce"
+ ],
+ [
+ "▁f",
+ "oi"
+ ],
+ [
+ "▁fo",
+ "i"
+ ],
+ [
+ "ha",
+ "n"
+ ],
+ [
+ "h",
+ "an"
+ ],
+ [
+ "▁b",
+ "od"
+ ],
+ [
+ "▁bo",
+ "d"
+ ],
+ [
+ "▁V",
+ "an"
+ ],
+ [
+ "▁Va",
+ "n"
+ ],
+ [
+ "▁null",
+ "ptr"
+ ],
+ [
+ "▁",
+ "nullptr"
+ ],
+ [
+ "▁d",
+ "ating"
+ ],
+ [
+ "▁da",
+ "ting"
+ ],
+ [
+ "▁dat",
+ "ing"
+ ],
+ [
+ "▁",
+ "dating"
+ ],
+ [
+ "▁ch",
+ "ain"
+ ],
+ [
+ "▁cha",
+ "in"
+ ],
+ [
+ "▁",
+ "chain"
+ ],
+ [
+ "Fl",
+ "ags"
+ ],
+ [
+ "Flag",
+ "s"
+ ],
+ [
+ "ient",
+ "o"
+ ],
+ [
+ "ien",
+ "to"
+ ],
+ [
+ "i",
+ "ento"
+ ],
+ [
+ "so",
+ "rt"
+ ],
+ [
+ "sor",
+ "t"
+ ],
+ [
+ "s",
+ "ort"
+ ],
+ [
+ "▁f",
+ "an"
+ ],
+ [
+ "▁fa",
+ "n"
+ ],
+ [
+ "▁",
+ "fan"
+ ],
+ [
+ "▁det",
+ "ermine"
+ ],
+ [
+ "▁determ",
+ "ine"
+ ],
+ [
+ "▁determin",
+ "e"
+ ],
+ [
+ "▁deter",
+ "mine"
+ ],
+ [
+ "▁w",
+ "ear"
+ ],
+ [
+ "▁we",
+ "ar"
+ ],
+ [
+ "▁",
+ "wear"
+ ],
+ [
+ "B",
+ "E"
+ ],
+ [
+ "▁appropri",
+ "ate"
+ ],
+ [
+ "л",
+ "ся"
+ ],
+ [
+ "то",
+ "в"
+ ],
+ [
+ "т",
+ "ов"
+ ],
+ [
+ "▁go",
+ "als"
+ ],
+ [
+ "▁goal",
+ "s"
+ ],
+ [
+ "▁M",
+ "ap"
+ ],
+ [
+ "▁Ma",
+ "p"
+ ],
+ [
+ "▁",
+ "Map"
+ ],
+ [
+ "▁S",
+ "ar"
+ ],
+ [
+ "▁Sa",
+ "r"
+ ],
+ [
+ "▁O",
+ "ption"
+ ],
+ [
+ "▁Opt",
+ "ion"
+ ],
+ [
+ "▁",
+ "Option"
+ ],
+ [
+ "▁h",
+ "ate"
+ ],
+ [
+ "▁ha",
+ "te"
+ ],
+ [
+ "▁hat",
+ "e"
+ ],
+ [
+ "▁z",
+ "ijn"
+ ],
+ [
+ ",",
+ "-"
+ ],
+ [
+ "▁im",
+ "plied"
+ ],
+ [
+ "▁impl",
+ "ied"
+ ],
+ [
+ "bit",
+ "s"
+ ],
+ [
+ "bi",
+ "ts"
+ ],
+ [
+ "b",
+ "its"
+ ],
+ [
+ "▁M",
+ "en"
+ ],
+ [
+ "▁Me",
+ "n"
+ ],
+ [
+ "▁",
+ "Men"
+ ],
+ [
+ "sk",
+ "ip"
+ ],
+ [
+ "ski",
+ "p"
+ ],
+ [
+ "▁M",
+ "ond"
+ ],
+ [
+ "▁Mon",
+ "d"
+ ],
+ [
+ "▁Mo",
+ "nd"
+ ],
+ [
+ "▁H",
+ "on"
+ ],
+ [
+ "▁Ho",
+ "n"
+ ],
+ [
+ "▁pro",
+ "ve"
+ ],
+ [
+ "▁pr",
+ "ove"
+ ],
+ [
+ "▁prov",
+ "e"
+ ],
+ [
+ "va",
+ "n"
+ ],
+ [
+ "v",
+ "an"
+ ],
+ [
+ "▁tr",
+ "aff"
+ ],
+ [
+ "▁tra",
+ "ff"
+ ],
+ [
+ "▁in",
+ "tr"
+ ],
+ [
+ "▁int",
+ "r"
+ ],
+ [
+ "▁",
+ "intr"
+ ],
+ [
+ "pi",
+ "c"
+ ],
+ [
+ "p",
+ "ic"
+ ],
+ [
+ "▁dro",
+ "pped"
+ ],
+ [
+ "▁drop",
+ "ped"
+ ],
+ [
+ "▁w",
+ "erd"
+ ],
+ [
+ "▁we",
+ "rd"
+ ],
+ [
+ "▁wer",
+ "d"
+ ],
+ [
+ "▁separ",
+ "ate"
+ ],
+ [
+ "is",
+ "a"
+ ],
+ [
+ "i",
+ "sa"
+ ],
+ [
+ "▁t",
+ "ab"
+ ],
+ [
+ "▁ta",
+ "b"
+ ],
+ [
+ "▁",
+ "tab"
+ ],
+ [
+ "tm",
+ "l"
+ ],
+ [
+ "t",
+ "ml"
+ ],
+ [
+ "▁\"",
+ "$"
+ ],
+ [
+ "mu",
+ "tex"
+ ],
+ [
+ "mut",
+ "ex"
+ ],
+ [
+ "▁P",
+ "an"
+ ],
+ [
+ "▁Pa",
+ "n"
+ ],
+ [
+ "▁",
+ "Pan"
+ ],
+ [
+ "ser",
+ "ve"
+ ],
+ [
+ "serv",
+ "e"
+ ],
+ [
+ "s",
+ "erve"
+ ],
+ [
+ "▁hot",
+ "el"
+ ],
+ [
+ "▁L",
+ "ast"
+ ],
+ [
+ "▁La",
+ "st"
+ ],
+ [
+ "▁Las",
+ "t"
+ ],
+ [
+ "▁",
+ "Last"
+ ],
+ [
+ "st",
+ "ep"
+ ],
+ [
+ "ste",
+ "p"
+ ],
+ [
+ "▁v",
+ "ir"
+ ],
+ [
+ "▁vi",
+ "r"
+ ],
+ [
+ "▁",
+ "vir"
+ ],
+ [
+ "R",
+ "ule"
+ ],
+ [
+ "is",
+ "tan"
+ ],
+ [
+ "ist",
+ "an"
+ ],
+ [
+ "ista",
+ "n"
+ ],
+ [
+ "i",
+ "stan"
+ ],
+ [
+ "ot",
+ "ing"
+ ],
+ [
+ "oti",
+ "ng"
+ ],
+ [
+ "o",
+ "ting"
+ ],
+ [
+ "ar",
+ "ks"
+ ],
+ [
+ "ark",
+ "s"
+ ],
+ [
+ "(_",
+ "_"
+ ],
+ [
+ "(",
+ "__"
+ ],
+ [
+ "▁e",
+ "ls"
+ ],
+ [
+ "▁el",
+ "s"
+ ],
+ [
+ "▁",
+ "els"
+ ],
+ [
+ "Pl",
+ "ayer"
+ ],
+ [
+ "Play",
+ "er"
+ ],
+ [
+ "P",
+ "layer"
+ ],
+ [
+ "]",
+ "]"
+ ],
+ [
+ "ви",
+ "ч"
+ ],
+ [
+ "yc",
+ "h"
+ ],
+ [
+ "y",
+ "ch"
+ ],
+ [
+ "ex",
+ "ception"
+ ],
+ [
+ "except",
+ "ion"
+ ],
+ [
+ "=\"",
+ "../"
+ ],
+ [
+ "▁im",
+ "agine"
+ ],
+ [
+ "▁imag",
+ "ine"
+ ],
+ [
+ "▁imagin",
+ "e"
+ ],
+ [
+ "\"}",
+ ","
+ ],
+ [
+ "\"",
+ "},"
+ ],
+ [
+ "ic",
+ "ago"
+ ],
+ [
+ "ica",
+ "go"
+ ],
+ [
+ "el",
+ "er"
+ ],
+ [
+ "ele",
+ "r"
+ ],
+ [
+ "e",
+ "ler"
+ ],
+ [
+ "▁v",
+ "s"
+ ],
+ [
+ "▁",
+ "vs"
+ ],
+ [
+ "▁A",
+ "frica"
+ ],
+ [
+ "▁Afr",
+ "ica"
+ ],
+ [
+ "▁Bus",
+ "iness"
+ ],
+ [
+ "oc",
+ "ks"
+ ],
+ [
+ "ock",
+ "s"
+ ],
+ [
+ "o",
+ "cks"
+ ],
+ [
+ "▁p",
+ "rz"
+ ],
+ [
+ "▁pr",
+ "z"
+ ],
+ [
+ "▁fuck",
+ "ing"
+ ],
+ [
+ "▁p",
+ "icked"
+ ],
+ [
+ "▁pick",
+ "ed"
+ ],
+ [
+ "▁pic",
+ "ked"
+ ],
+ [
+ "▁в",
+ "і"
+ ],
+ [
+ "▁",
+ "ві"
+ ],
+ [
+ "▁\"",
+ ","
+ ],
+ [
+ "▁",
+ "\","
+ ],
+ [
+ "▁b",
+ "ott"
+ ],
+ [
+ "▁bo",
+ "tt"
+ ],
+ [
+ "▁bot",
+ "t"
+ ],
+ [
+ "▁fail",
+ "ure"
+ ],
+ [
+ "▁",
+ "failure"
+ ],
+ [
+ "[",
+ ":"
+ ],
+ [
+ "▁G",
+ "ar"
+ ],
+ [
+ "▁Ga",
+ "r"
+ ],
+ [
+ "ap",
+ "es"
+ ],
+ [
+ "ape",
+ "s"
+ ],
+ [
+ "a",
+ "pes"
+ ],
+ [
+ "up",
+ "le"
+ ],
+ [
+ "u",
+ "ple"
+ ],
+ [
+ "▁f",
+ "er"
+ ],
+ [
+ "▁fe",
+ "r"
+ ],
+ [
+ "▁",
+ "fer"
+ ],
+ [
+ "▁p",
+ "urchase"
+ ],
+ [
+ "▁purch",
+ "ase"
+ ],
+ [
+ "▁п",
+ "ер"
+ ],
+ [
+ "▁пе",
+ "р"
+ ],
+ [
+ "▁",
+ "пер"
+ ],
+ [
+ "▁b",
+ "ird"
+ ],
+ [
+ "▁bi",
+ "rd"
+ ],
+ [
+ "▁",
+ "bird"
+ ],
+ [
+ "W",
+ "idget"
+ ],
+ [
+ "▁Sund",
+ "ay"
+ ],
+ [
+ "▁Sun",
+ "day"
+ ],
+ [
+ "▁A",
+ "maz"
+ ],
+ [
+ "▁Am",
+ "az"
+ ],
+ [
+ "▁",
+ "Amaz"
+ ],
+ [
+ "▁cons",
+ "ult"
+ ],
+ [
+ "ut",
+ "sch"
+ ],
+ [
+ "uts",
+ "ch"
+ ],
+ [
+ "an",
+ "to"
+ ],
+ [
+ "ant",
+ "o"
+ ],
+ [
+ "St",
+ "orage"
+ ],
+ [
+ "▁he",
+ "ader"
+ ],
+ [
+ "▁head",
+ "er"
+ ],
+ [
+ "▁",
+ "header"
+ ],
+ [
+ "üh",
+ "r"
+ ],
+ [
+ "ü",
+ "hr"
+ ],
+ [
+ "▁H",
+ "a"
+ ],
+ [
+ "▁",
+ "Ha"
+ ],
+ [
+ "▁Associ",
+ "ation"
+ ],
+ [
+ "▁s",
+ "ight"
+ ],
+ [
+ "▁si",
+ "ght"
+ ],
+ [
+ "▁sig",
+ "ht"
+ ],
+ [
+ "▁sigh",
+ "t"
+ ],
+ [
+ "C",
+ "ell"
+ ],
+ [
+ "▁pro",
+ "file"
+ ],
+ [
+ "▁prof",
+ "ile"
+ ],
+ [
+ "▁",
+ "profile"
+ ],
+ [
+ "▁fem",
+ "ale"
+ ],
+ [
+ "å",
+ "n"
+ ],
+ [
+ "▁w",
+ "id"
+ ],
+ [
+ "▁",
+ "wid"
+ ],
+ [
+ "z",
+ "n"
+ ],
+ [
+ "Dir",
+ "ect"
+ ],
+ [
+ "Di",
+ "rect"
+ ],
+ [
+ "D",
+ "irect"
+ ],
+ [
+ "▁st",
+ "ret"
+ ],
+ [
+ "▁str",
+ "et"
+ ],
+ [
+ "▁stre",
+ "t"
+ ],
+ [
+ "▁",
+ "stret"
+ ],
+ [
+ "aa",
+ "t"
+ ],
+ [
+ "a",
+ "at"
+ ],
+ [
+ "▁pat",
+ "ient"
+ ],
+ [
+ "▁",
+ "patient"
+ ],
+ [
+ "he",
+ "re"
+ ],
+ [
+ "her",
+ "e"
+ ],
+ [
+ "h",
+ "ere"
+ ],
+ [
+ "▁A",
+ "tl"
+ ],
+ [
+ "▁At",
+ "l"
+ ],
+ [
+ "in",
+ "et"
+ ],
+ [
+ "ine",
+ "t"
+ ],
+ [
+ "i",
+ "net"
+ ],
+ [
+ "Def",
+ "inition"
+ ],
+ [
+ "im",
+ "ary"
+ ],
+ [
+ "ima",
+ "ry"
+ ],
+ [
+ "i",
+ "mary"
+ ],
+ [
+ "Pol",
+ "icy"
+ ],
+ [
+ "▁d",
+ "ut"
+ ],
+ [
+ "▁du",
+ "t"
+ ],
+ [
+ "▁major",
+ "ity"
+ ],
+ [
+ "с",
+ "і"
+ ],
+ [
+ "▁Pro",
+ "ject"
+ ],
+ [
+ "▁",
+ "Project"
+ ],
+ [
+ "By",
+ "Id"
+ ],
+ [
+ "▁belie",
+ "ved"
+ ],
+ [
+ "▁believe",
+ "d"
+ ],
+ [
+ "▁Mus",
+ "ic"
+ ],
+ [
+ "▁",
+ "Music"
+ ],
+ [
+ "з",
+ "ы"
+ ],
+ [
+ "an",
+ "ti"
+ ],
+ [
+ "ant",
+ "i"
+ ],
+ [
+ "▁o",
+ "der"
+ ],
+ [
+ "▁od",
+ "er"
+ ],
+ [
+ "▁",
+ "oder"
+ ],
+ [
+ "Ch",
+ "annel"
+ ],
+ [
+ "▁s",
+ "le"
+ ],
+ [
+ "▁sl",
+ "e"
+ ],
+ [
+ "▁sequ",
+ "ence"
+ ],
+ [
+ "▁",
+ "sequence"
+ ],
+ [
+ "▁pie",
+ "ces"
+ ],
+ [
+ "▁piece",
+ "s"
+ ],
+ [
+ "▁k",
+ "ne"
+ ],
+ [
+ "▁kn",
+ "e"
+ ],
+ [
+ "▁abs",
+ "olutely"
+ ],
+ [
+ "▁absolut",
+ "ely"
+ ],
+ [
+ "▁absolute",
+ "ly"
+ ],
+ [
+ "▁Phil",
+ "ip"
+ ],
+ [
+ "ab",
+ "ilities"
+ ],
+ [
+ "abil",
+ "ities"
+ ],
+ [
+ "Qu",
+ "e"
+ ],
+ [
+ "Q",
+ "ue"
+ ],
+ [
+ "▁K",
+ "ar"
+ ],
+ [
+ "▁Ka",
+ "r"
+ ],
+ [
+ "Ex",
+ "ecut"
+ ],
+ [
+ "Exec",
+ "ut"
+ ],
+ [
+ "▁D",
+ "evel"
+ ],
+ [
+ "▁De",
+ "vel"
+ ],
+ [
+ "▁Dev",
+ "el"
+ ],
+ [
+ "▁elect",
+ "ric"
+ ],
+ [
+ "ful",
+ "l"
+ ],
+ [
+ "fu",
+ "ll"
+ ],
+ [
+ "f",
+ "ull"
+ ],
+ [
+ "rol",
+ "led"
+ ],
+ [
+ "roll",
+ "ed"
+ ],
+ [
+ "Do",
+ "m"
+ ],
+ [
+ "D",
+ "om"
+ ],
+ [
+ "▁r",
+ "iver"
+ ],
+ [
+ "▁ri",
+ "ver"
+ ],
+ [
+ "▁riv",
+ "er"
+ ],
+ [
+ "▁",
+ "river"
+ ],
+ [
+ "▁health",
+ "y"
+ ],
+ [
+ "▁heal",
+ "thy"
+ ],
+ [
+ "▁ex",
+ "tern"
+ ],
+ [
+ "▁ext",
+ "ern"
+ ],
+ [
+ "fi",
+ "t"
+ ],
+ [
+ "f",
+ "it"
+ ],
+ [
+ "▁co",
+ "ach"
+ ],
+ [
+ "▁K",
+ "r"
+ ],
+ [
+ "as",
+ "ta"
+ ],
+ [
+ "ast",
+ "a"
+ ],
+ [
+ "a",
+ "sta"
+ ],
+ [
+ "Com",
+ "pat"
+ ],
+ [
+ "Comp",
+ "at"
+ ],
+ [
+ "▁e",
+ "xit"
+ ],
+ [
+ "▁ex",
+ "it"
+ ],
+ [
+ "▁",
+ "exit"
+ ],
+ [
+ "▁Con",
+ "st"
+ ],
+ [
+ "▁Cons",
+ "t"
+ ],
+ [
+ "▁",
+ "Const"
+ ],
+ [
+ "af",
+ "ter"
+ ],
+ [
+ "aft",
+ "er"
+ ],
+ [
+ "a",
+ "fter"
+ ],
+ [
+ "▁should",
+ "er"
+ ],
+ [
+ "▁j",
+ "obs"
+ ],
+ [
+ "▁job",
+ "s"
+ ],
+ [
+ "▁jo",
+ "bs"
+ ],
+ [
+ "zo",
+ "ne"
+ ],
+ [
+ "zon",
+ "e"
+ ],
+ [
+ "z",
+ "one"
+ ],
+ [
+ "▁s",
+ "ale"
+ ],
+ [
+ "▁sa",
+ "le"
+ ],
+ [
+ "▁sal",
+ "e"
+ ],
+ [
+ "ix",
+ "el"
+ ],
+ [
+ "▁determ",
+ "ined"
+ ],
+ [
+ "▁determine",
+ "d"
+ ],
+ [
+ "▁determin",
+ "ed"
+ ],
+ [
+ "▁any",
+ "way"
+ ],
+ [
+ "or",
+ "f"
+ ],
+ [
+ "o",
+ "rf"
+ ],
+ [
+ "▁G",
+ "er"
+ ],
+ [
+ "▁Ge",
+ "r"
+ ],
+ [
+ "all",
+ "el"
+ ],
+ [
+ "alle",
+ "l"
+ ],
+ [
+ "re",
+ "es"
+ ],
+ [
+ "ree",
+ "s"
+ ],
+ [
+ "r",
+ "ees"
+ ],
+ [
+ "as",
+ "m"
+ ],
+ [
+ "a",
+ "sm"
+ ],
+ [
+ "im",
+ "s"
+ ],
+ [
+ "i",
+ "ms"
+ ],
+ [
+ "▁rec",
+ "ords"
+ ],
+ [
+ "▁record",
+ "s"
+ ],
+ [
+ "▁",
+ "records"
+ ],
+ [
+ "▁cor",
+ "por"
+ ],
+ [
+ "▁int",
+ "ellig"
+ ],
+ [
+ "▁intel",
+ "lig"
+ ],
+ [
+ "▁P",
+ "rem"
+ ],
+ [
+ "▁Pr",
+ "em"
+ ],
+ [
+ "▁Pre",
+ "m"
+ ],
+ [
+ "▁d",
+ "riving"
+ ],
+ [
+ "▁dr",
+ "iving"
+ ],
+ [
+ "▁dri",
+ "ving"
+ ],
+ [
+ "▁driv",
+ "ing"
+ ],
+ [
+ "▁mar",
+ "riage"
+ ],
+ [
+ "▁Th",
+ "ank"
+ ],
+ [
+ "▁",
+ "Thank"
+ ],
+ [
+ "▁w",
+ "illing"
+ ],
+ [
+ "▁will",
+ "ing"
+ ],
+ [
+ "M",
+ "C"
+ ],
+ [
+ "Field",
+ "s"
+ ],
+ [
+ "It",
+ "ems"
+ ],
+ [
+ "Item",
+ "s"
+ ],
+ [
+ "▁m",
+ "icro"
+ ],
+ [
+ "▁mi",
+ "cro"
+ ],
+ [
+ "▁mic",
+ "ro"
+ ],
+ [
+ "▁l",
+ "ift"
+ ],
+ [
+ "▁li",
+ "ft"
+ ],
+ [
+ "▁lif",
+ "t"
+ ],
+ [
+ "ir",
+ "ection"
+ ],
+ [
+ "ire",
+ "ction"
+ ],
+ [
+ "irect",
+ "ion"
+ ],
+ [
+ "i",
+ "rection"
+ ],
+ [
+ "Acc",
+ "ount"
+ ],
+ [
+ "Ac",
+ "count"
+ ],
+ [
+ "▁arch",
+ "itect"
+ ],
+ [
+ "tr",
+ "ack"
+ ],
+ [
+ "tra",
+ "ck"
+ ],
+ [
+ "▁p",
+ "rin"
+ ],
+ [
+ "▁pr",
+ "in"
+ ],
+ [
+ "▁pri",
+ "n"
+ ],
+ [
+ "P",
+ "A"
+ ],
+ [
+ "▁r",
+ "uns"
+ ],
+ [
+ "▁run",
+ "s"
+ ],
+ [
+ "▁ru",
+ "ns"
+ ],
+ [
+ "▁Tex",
+ "as"
+ ],
+ [
+ "is",
+ "her"
+ ],
+ [
+ "ish",
+ "er"
+ ],
+ [
+ "en",
+ "sure"
+ ],
+ [
+ "ens",
+ "ure"
+ ],
+ [
+ "▁B",
+ "oth"
+ ],
+ [
+ "▁Bo",
+ "th"
+ ],
+ [
+ "▁Bot",
+ "h"
+ ],
+ [
+ "ко",
+ "м"
+ ],
+ [
+ "▁Col",
+ "or"
+ ],
+ [
+ "▁Co",
+ "lor"
+ ],
+ [
+ "▁",
+ "Color"
+ ],
+ [
+ "Reg",
+ "ister"
+ ],
+ [
+ "▁J",
+ "oe"
+ ],
+ [
+ "▁Jo",
+ "e"
+ ],
+ [
+ "ge",
+ "q"
+ ],
+ [
+ "g",
+ "eq"
+ ],
+ [
+ "le",
+ "ts"
+ ],
+ [
+ "let",
+ "s"
+ ],
+ [
+ "l",
+ "ets"
+ ],
+ [
+ "ad",
+ "ing"
+ ],
+ [
+ "adi",
+ "ng"
+ ],
+ [
+ "a",
+ "ding"
+ ],
+ [
+ "▁ar",
+ "my"
+ ],
+ [
+ "▁arm",
+ "y"
+ ],
+ [
+ "▁B",
+ "ank"
+ ],
+ [
+ "▁Ban",
+ "k"
+ ],
+ [
+ "▁",
+ "Bank"
+ ],
+ [
+ "ot",
+ "ic"
+ ],
+ [
+ "oti",
+ "c"
+ ],
+ [
+ "Pro",
+ "duct"
+ ],
+ [
+ "Produ",
+ "ct"
+ ],
+ [
+ "im",
+ "port"
+ ],
+ [
+ "imp",
+ "ort"
+ ],
+ [
+ "▁W",
+ "ed"
+ ],
+ [
+ "▁We",
+ "d"
+ ],
+ [
+ "▁c",
+ "ry"
+ ],
+ [
+ "▁cr",
+ "y"
+ ],
+ [
+ "gr",
+ "ade"
+ ],
+ [
+ "grad",
+ "e"
+ ],
+ [
+ "gra",
+ "de"
+ ],
+ [
+ "g",
+ "rade"
+ ],
+ [
+ "di",
+ "g"
+ ],
+ [
+ "d",
+ "ig"
+ ],
+ [
+ "ga",
+ "l"
+ ],
+ [
+ "g",
+ "al"
+ ],
+ [
+ "к",
+ "ла"
+ ],
+ [
+ "es",
+ "ted"
+ ],
+ [
+ "est",
+ "ed"
+ ],
+ [
+ "este",
+ "d"
+ ],
+ [
+ "e",
+ "sted"
+ ],
+ [
+ "õ",
+ "es"
+ ],
+ [
+ "ge",
+ "rs"
+ ],
+ [
+ "ger",
+ "s"
+ ],
+ [
+ "g",
+ "ers"
+ ],
+ [
+ "olog",
+ "ie"
+ ],
+ [
+ "olo",
+ "gie"
+ ],
+ [
+ "то",
+ "м"
+ ],
+ [
+ "ra",
+ "zy"
+ ],
+ [
+ "raz",
+ "y"
+ ],
+ [
+ "r",
+ "azy"
+ ],
+ [
+ "▁d",
+ "inner"
+ ],
+ [
+ "▁din",
+ "ner"
+ ],
+ [
+ "Q",
+ "U"
+ ],
+ [
+ "▁fin",
+ "gers"
+ ],
+ [
+ "▁fing",
+ "ers"
+ ],
+ [
+ "▁finger",
+ "s"
+ ],
+ [
+ "UL",
+ "E"
+ ],
+ [
+ "U",
+ "LE"
+ ],
+ [
+ "cl",
+ "aim"
+ ],
+ [
+ "▁adv",
+ "antage"
+ ],
+ [
+ "▁advant",
+ "age"
+ ],
+ [
+ "▁var",
+ "iable"
+ ],
+ [
+ "▁vari",
+ "able"
+ ],
+ [
+ "▁",
+ "variable"
+ ],
+ [
+ "▁med",
+ "ic"
+ ],
+ [
+ "▁medi",
+ "c"
+ ],
+ [
+ "▁m",
+ "ale"
+ ],
+ [
+ "▁ma",
+ "le"
+ ],
+ [
+ "▁mal",
+ "e"
+ ],
+ [
+ "▁circ",
+ "um"
+ ],
+ [
+ "▁м",
+ "і"
+ ],
+ [
+ "▁",
+ "мі"
+ ],
+ [
+ "▁inter",
+ "net"
+ ],
+ [
+ "▁intern",
+ "et"
+ ],
+ [
+ "W",
+ "N"
+ ],
+ [
+ "▁l",
+ "ab"
+ ],
+ [
+ "▁la",
+ "b"
+ ],
+ [
+ "▁",
+ "lab"
+ ],
+ [
+ "az",
+ "ine"
+ ],
+ [
+ "azi",
+ "ne"
+ ],
+ [
+ "ч",
+ "но"
+ ],
+ [
+ "▁l",
+ "oop"
+ ],
+ [
+ "▁lo",
+ "op"
+ ],
+ [
+ "▁",
+ "loop"
+ ],
+ [
+ "▁p",
+ "red"
+ ],
+ [
+ "▁pre",
+ "d"
+ ],
+ [
+ "▁pr",
+ "ed"
+ ],
+ [
+ "▁",
+ "pred"
+ ],
+ [
+ "▁con",
+ "sequ"
+ ],
+ [
+ "▁cons",
+ "equ"
+ ],
+ [
+ "▁conse",
+ "qu"
+ ],
+ [
+ "▁bal",
+ "ance"
+ ],
+ [
+ "▁",
+ "balance"
+ ],
+ [
+ "fort",
+ "un"
+ ],
+ [
+ "▁g",
+ "ift"
+ ],
+ [
+ "▁gi",
+ "ft"
+ ],
+ [
+ "▁d",
+ "rug"
+ ],
+ [
+ "▁dr",
+ "ug"
+ ],
+ [
+ "▁dru",
+ "g"
+ ],
+ [
+ "▁c",
+ "ash"
+ ],
+ [
+ "▁cas",
+ "h"
+ ],
+ [
+ "▁ca",
+ "sh"
+ ],
+ [
+ "ски",
+ "х"
+ ],
+ [
+ "с",
+ "ких"
+ ],
+ [
+ "r",
+ "g"
+ ],
+ [
+ "ist",
+ "ribut"
+ ],
+ [
+ "▁high",
+ "est"
+ ],
+ [
+ "▁hig",
+ "hest"
+ ],
+ [
+ "êm",
+ "e"
+ ],
+ [
+ "ê",
+ "me"
+ ],
+ [
+ "em",
+ "ph"
+ ],
+ [
+ "emp",
+ "h"
+ ],
+ [
+ "em",
+ "on"
+ ],
+ [
+ "e",
+ "mon"
+ ],
+ [
+ "▁per",
+ "formed"
+ ],
+ [
+ "▁perform",
+ "ed"
+ ],
+ [
+ "cu",
+ "t"
+ ],
+ [
+ "c",
+ "ut"
+ ],
+ [
+ "▁cl",
+ "oser"
+ ],
+ [
+ "▁close",
+ "r"
+ ],
+ [
+ "▁clos",
+ "er"
+ ],
+ [
+ "▁clo",
+ "ser"
+ ],
+ [
+ "▁be",
+ "coming"
+ ],
+ [
+ "▁bec",
+ "oming"
+ ],
+ [
+ "▁\"",
+ "\","
+ ],
+ [
+ "▁\"\"",
+ ","
+ ],
+ [
+ "st",
+ "ar"
+ ],
+ [
+ "sta",
+ "r"
+ ],
+ [
+ "s",
+ "tar"
+ ],
+ [
+ "pu",
+ "b"
+ ],
+ [
+ "p",
+ "ub"
+ ],
+ [
+ "▁pre",
+ "par"
+ ],
+ [
+ "▁prep",
+ "ar"
+ ],
+ [
+ "▁v",
+ "ote"
+ ],
+ [
+ "▁vo",
+ "te"
+ ],
+ [
+ "▁vot",
+ "e"
+ ],
+ [
+ "▁",
+ "vote"
+ ],
+ [
+ "il",
+ "de"
+ ],
+ [
+ "ild",
+ "e"
+ ],
+ [
+ "▁im",
+ "press"
+ ],
+ [
+ "▁imp",
+ "ress"
+ ],
+ [
+ "▁employ",
+ "ees"
+ ],
+ [
+ "▁employee",
+ "s"
+ ],
+ [
+ "▁e",
+ "inen"
+ ],
+ [
+ "▁ein",
+ "en"
+ ],
+ [
+ "▁eine",
+ "n"
+ ],
+ [
+ "▁sm",
+ "ooth"
+ ],
+ [
+ "▁s",
+ "now"
+ ],
+ [
+ "▁sn",
+ "ow"
+ ],
+ [
+ "▁p",
+ "urs"
+ ],
+ [
+ "▁pur",
+ "s"
+ ],
+ [
+ "▁pu",
+ "rs"
+ ],
+ [
+ "▁v",
+ "oc"
+ ],
+ [
+ "▁vo",
+ "c"
+ ],
+ [
+ "▁M",
+ "icrosoft"
+ ],
+ [
+ "▁Micro",
+ "soft"
+ ],
+ [
+ "▁",
+ "Microsoft"
+ ],
+ [
+ "P",
+ "U"
+ ],
+ [
+ "▁in",
+ "come"
+ ],
+ [
+ "▁inc",
+ "ome"
+ ],
+ [
+ "in",
+ "os"
+ ],
+ [
+ "ino",
+ "s"
+ ],
+ [
+ "i",
+ "nos"
+ ],
+ [
+ "▁oper",
+ "ator"
+ ],
+ [
+ "▁opera",
+ "tor"
+ ],
+ [
+ "▁",
+ "operator"
+ ],
+ [
+ "▁equ",
+ "ival"
+ ],
+ [
+ "▁pass",
+ "word"
+ ],
+ [
+ "▁",
+ "password"
+ ],
+ [
+ "ci",
+ "ón"
+ ],
+ [
+ "ció",
+ "n"
+ ],
+ [
+ "c",
+ "ión"
+ ],
+ [
+ "su",
+ "ccess"
+ ],
+ [
+ "▁e",
+ "mp"
+ ],
+ [
+ "▁em",
+ "p"
+ ],
+ [
+ "▁",
+ "emp"
+ ],
+ [
+ "HO",
+ "UT"
+ ],
+ [
+ "H",
+ "OUT"
+ ],
+ [
+ "▁c",
+ "a"
+ ],
+ [
+ "▁",
+ "ca"
+ ],
+ [
+ "fl",
+ "ag"
+ ],
+ [
+ "f",
+ "lag"
+ ],
+ [
+ "il",
+ "ly"
+ ],
+ [
+ "ill",
+ "y"
+ ],
+ [
+ "cre",
+ "te"
+ ],
+ [
+ "cr",
+ "ete"
+ ],
+ [
+ "cret",
+ "e"
+ ],
+ [
+ "fr",
+ "ak"
+ ],
+ [
+ "▁h",
+ "idden"
+ ],
+ [
+ "▁hid",
+ "den"
+ ],
+ [
+ "▁",
+ "hidden"
+ ],
+ [
+ "▁\"",
+ "%"
+ ],
+ [
+ "▁",
+ "\"%"
+ ],
+ [
+ "ER",
+ "N"
+ ],
+ [
+ "ро",
+ "ва"
+ ],
+ [
+ "ров",
+ "а"
+ ],
+ [
+ "▁U",
+ "N"
+ ],
+ [
+ "▁",
+ "UN"
+ ],
+ [
+ "ro",
+ "ke"
+ ],
+ [
+ "rok",
+ "e"
+ ],
+ [
+ "r",
+ "oke"
+ ],
+ [
+ "mi",
+ "ss"
+ ],
+ [
+ "m",
+ "iss"
+ ],
+ [
+ "▁s",
+ "plit"
+ ],
+ [
+ "▁sp",
+ "lit"
+ ],
+ [
+ "▁spl",
+ "it"
+ ],
+ [
+ "▁",
+ "split"
+ ],
+ [
+ "Re",
+ "ference"
+ ],
+ [
+ ")$",
+ ","
+ ],
+ [
+ ")",
+ "$,"
+ ],
+ [
+ "ep",
+ "er"
+ ],
+ [
+ "e",
+ "per"
+ ],
+ [
+ "▁N",
+ "O"
+ ],
+ [
+ "▁",
+ "NO"
+ ],
+ [
+ "▁s",
+ "quare"
+ ],
+ [
+ "▁squ",
+ "are"
+ ],
+ [
+ "▁",
+ "square"
+ ],
+ [
+ "su",
+ "r"
+ ],
+ [
+ "s",
+ "ur"
+ ],
+ [
+ "че",
+ "н"
+ ],
+ [
+ "ч",
+ "ен"
+ ],
+ [
+ "es",
+ "ter"
+ ],
+ [
+ "est",
+ "er"
+ ],
+ [
+ "este",
+ "r"
+ ],
+ [
+ "e",
+ "ster"
+ ],
+ [
+ "н",
+ "ь"
+ ],
+ [
+ "}",
+ "\""
+ ],
+ [
+ "ra",
+ "wn"
+ ],
+ [
+ "raw",
+ "n"
+ ],
+ [
+ "r",
+ "awn"
+ ],
+ [
+ "ru",
+ "le"
+ ],
+ [
+ "r",
+ "ule"
+ ],
+ [
+ "▁aud",
+ "ience"
+ ],
+ [
+ "es",
+ "te"
+ ],
+ [
+ "est",
+ "e"
+ ],
+ [
+ "e",
+ "ste"
+ ],
+ [
+ "em",
+ "s"
+ ],
+ [
+ "e",
+ "ms"
+ ],
+ [
+ "IC",
+ "ENSE"
+ ],
+ [
+ "▁I",
+ "ll"
+ ],
+ [
+ "▁Il",
+ "l"
+ ],
+ [
+ "▁",
+ "Ill"
+ ],
+ [
+ "US",
+ "E"
+ ],
+ [
+ "U",
+ "SE"
+ ],
+ [
+ "▁b",
+ "on"
+ ],
+ [
+ "▁bo",
+ "n"
+ ],
+ [
+ "▁",
+ "bon"
+ ],
+ [
+ "bu",
+ "r"
+ ],
+ [
+ "b",
+ "ur"
+ ],
+ [
+ "▁s",
+ "ick"
+ ],
+ [
+ "▁si",
+ "ck"
+ ],
+ [
+ "▁h",
+ "orse"
+ ],
+ [
+ "▁hor",
+ "se"
+ ],
+ [
+ "▁hors",
+ "e"
+ ],
+ [
+ "▁E",
+ "duc"
+ ],
+ [
+ "▁Ed",
+ "uc"
+ ],
+ [
+ "▁Edu",
+ "c"
+ ],
+ [
+ "▁benef",
+ "it"
+ ],
+ [
+ "▁c",
+ "ro"
+ ],
+ [
+ "▁cr",
+ "o"
+ ],
+ [
+ "▁",
+ "cro"
+ ],
+ [
+ "Ap",
+ "plication"
+ ],
+ [
+ "▁cor",
+ "re"
+ ],
+ [
+ "▁gu",
+ "arante"
+ ],
+ [
+ "DA",
+ "TA"
+ ],
+ [
+ "DAT",
+ "A"
+ ],
+ [
+ "D",
+ "ATA"
+ ],
+ [
+ "▁expl",
+ "ained"
+ ],
+ [
+ "▁explain",
+ "ed"
+ ],
+ [
+ "T",
+ "X"
+ ],
+ [
+ "▁o",
+ "nt"
+ ],
+ [
+ "▁on",
+ "t"
+ ],
+ [
+ "▁",
+ "ont"
+ ],
+ [
+ "▁F",
+ "lor"
+ ],
+ [
+ "▁Fl",
+ "or"
+ ],
+ [
+ "▁Flo",
+ "r"
+ ],
+ [
+ "▁re",
+ "ports"
+ ],
+ [
+ "▁rep",
+ "orts"
+ ],
+ [
+ "▁report",
+ "s"
+ ],
+ [
+ "▁Re",
+ "al"
+ ],
+ [
+ "▁",
+ "Real"
+ ],
+ [
+ "ud",
+ "ed"
+ ],
+ [
+ "ude",
+ "d"
+ ],
+ [
+ "u",
+ "ded"
+ ],
+ [
+ "le",
+ "an"
+ ],
+ [
+ "▁cit",
+ "iz"
+ ],
+ [
+ "▁dec",
+ "ide"
+ ],
+ [
+ "▁decid",
+ "e"
+ ],
+ [
+ "W",
+ "S"
+ ],
+ [
+ "▁do",
+ "main"
+ ],
+ [
+ "▁dom",
+ "ain"
+ ],
+ [
+ "▁",
+ "domain"
+ ],
+ [
+ "▁ref",
+ "lect"
+ ],
+ [
+ "▁",
+ "reflect"
+ ],
+ [
+ "▁min",
+ "imum"
+ ],
+ [
+ "▁minim",
+ "um"
+ ],
+ [
+ "▁le",
+ "gs"
+ ],
+ [
+ "▁leg",
+ "s"
+ ],
+ [
+ "▁sm",
+ "iled"
+ ],
+ [
+ "▁smile",
+ "d"
+ ],
+ [
+ "f",
+ "i"
+ ],
+ [
+ "▁p",
+ "ure"
+ ],
+ [
+ "▁pur",
+ "e"
+ ],
+ [
+ "▁pu",
+ "re"
+ ],
+ [
+ "▁C",
+ "ustom"
+ ],
+ [
+ "▁",
+ "Custom"
+ ],
+ [
+ "▁ess",
+ "ential"
+ ],
+ [
+ "▁observ",
+ "ed"
+ ],
+ [
+ "▁observe",
+ "d"
+ ],
+ [
+ "▁obs",
+ "erved"
+ ],
+ [
+ "By",
+ "tes"
+ ],
+ [
+ "Byte",
+ "s"
+ ],
+ [
+ "▁c",
+ "tx"
+ ],
+ [
+ "▁",
+ "ctx"
+ ],
+ [
+ "▁r",
+ "ates"
+ ],
+ [
+ "▁rate",
+ "s"
+ ],
+ [
+ "▁rat",
+ "es"
+ ],
+ [
+ "▁ra",
+ "tes"
+ ],
+ [
+ "mb",
+ "re"
+ ],
+ [
+ "m",
+ "bre"
+ ],
+ [
+ "▁w",
+ "orry"
+ ],
+ [
+ "▁wor",
+ "ry"
+ ],
+ [
+ ")",
+ "^"
+ ],
+ [
+ "▁Re",
+ "search"
+ ],
+ [
+ "▁Res",
+ "earch"
+ ],
+ [
+ "Ro",
+ "ot"
+ ],
+ [
+ "R",
+ "oot"
+ ],
+ [
+ "Window",
+ "s"
+ ],
+ [
+ "ult",
+ "ure"
+ ],
+ [
+ "ultur",
+ "e"
+ ],
+ [
+ "▁rel",
+ "ative"
+ ],
+ [
+ "▁relativ",
+ "e"
+ ],
+ [
+ "▁",
+ "relative"
+ ],
+ [
+ "▁s",
+ "eu"
+ ],
+ [
+ "▁se",
+ "u"
+ ],
+ [
+ "▁n",
+ "ie"
+ ],
+ [
+ "▁ni",
+ "e"
+ ],
+ [
+ "▁",
+ "nie"
+ ],
+ [
+ "▁s",
+ "hook"
+ ],
+ [
+ "▁sh",
+ "ook"
+ ],
+ [
+ "ious",
+ "ly"
+ ],
+ [
+ "i",
+ "ously"
+ ],
+ [
+ "▁ad",
+ "vert"
+ ],
+ [
+ "▁adv",
+ "ert"
+ ],
+ [
+ "Se",
+ "e"
+ ],
+ [
+ "S",
+ "ee"
+ ],
+ [
+ "▁Cent",
+ "ral"
+ ],
+ [
+ "▁b",
+ "atter"
+ ],
+ [
+ "▁batt",
+ "er"
+ ],
+ [
+ "▁bat",
+ "ter"
+ ],
+ [
+ "▁s",
+ "igned"
+ ],
+ [
+ "▁sign",
+ "ed"
+ ],
+ [
+ "▁sig",
+ "ned"
+ ],
+ [
+ "▁",
+ "signed"
+ ],
+ [
+ "T",
+ "S"
+ ],
+ [
+ "on",
+ "i"
+ ],
+ [
+ "o",
+ "ni"
+ ],
+ [
+ "▁pre",
+ "pared"
+ ],
+ [
+ "▁prep",
+ "ared"
+ ],
+ [
+ "▁prepar",
+ "ed"
+ ],
+ [
+ "▁prepare",
+ "d"
+ ],
+ [
+ "ga",
+ "te"
+ ],
+ [
+ "g",
+ "ate"
+ ],
+ [
+ "▁C",
+ "are"
+ ],
+ [
+ "▁Car",
+ "e"
+ ],
+ [
+ "▁Ca",
+ "re"
+ ],
+ [
+ "ca",
+ "re"
+ ],
+ [
+ "car",
+ "e"
+ ],
+ [
+ "c",
+ "are"
+ ],
+ [
+ "▁sup",
+ "ply"
+ ],
+ [
+ "▁supp",
+ "ly"
+ ],
+ [
+ "Ex",
+ "p"
+ ],
+ [
+ "E",
+ "xp"
+ ],
+ [
+ "bol",
+ "ds"
+ ],
+ [
+ "bold",
+ "s"
+ ],
+ [
+ "b",
+ "olds"
+ ],
+ [
+ "▁tr",
+ "ail"
+ ],
+ [
+ "▁tra",
+ "il"
+ ],
+ [
+ "▁f",
+ "ish"
+ ],
+ [
+ "▁fi",
+ "sh"
+ ],
+ [
+ "▁fis",
+ "h"
+ ],
+ [
+ "▁",
+ "fish"
+ ],
+ [
+ "▁un",
+ "its"
+ ],
+ [
+ "▁unit",
+ "s"
+ ],
+ [
+ "▁",
+ "units"
+ ],
+ [
+ "ven",
+ "ue"
+ ],
+ [
+ "v",
+ "enue"
+ ],
+ [
+ "х",
+ "и"
+ ],
+ [
+ "▁W",
+ "ood"
+ ],
+ [
+ "▁Wo",
+ "od"
+ ],
+ [
+ "▁c",
+ "ategory"
+ ],
+ [
+ "▁categ",
+ "ory"
+ ],
+ [
+ "▁categor",
+ "y"
+ ],
+ [
+ "▁",
+ "category"
+ ],
+ [
+ "▁b",
+ "le"
+ ],
+ [
+ "▁bl",
+ "e"
+ ],
+ [
+ "▁",
+ "ble"
+ ],
+ [
+ "▁over",
+ "ride"
+ ],
+ [
+ "▁",
+ "override"
+ ],
+ [
+ "fo",
+ "o"
+ ],
+ [
+ "f",
+ "oo"
+ ],
+ [
+ "▁influ",
+ "ence"
+ ],
+ [
+ "en",
+ "th"
+ ],
+ [
+ "ent",
+ "h"
+ ],
+ [
+ "ri",
+ "j"
+ ],
+ [
+ "r",
+ "ij"
+ ],
+ [
+ "▁ad",
+ "apt"
+ ],
+ [
+ "ic",
+ "ians"
+ ],
+ [
+ "ici",
+ "ans"
+ ],
+ [
+ "ician",
+ "s"
+ ],
+ [
+ "icia",
+ "ns"
+ ],
+ [
+ "de",
+ "leted"
+ ],
+ [
+ "del",
+ "eted"
+ ],
+ [
+ "delete",
+ "d"
+ ],
+ [
+ "▁v",
+ "ision"
+ ],
+ [
+ "▁vis",
+ "ion"
+ ],
+ [
+ "▁",
+ "vision"
+ ],
+ [
+ "ct",
+ "rl"
+ ],
+ [
+ "ctr",
+ "l"
+ ],
+ [
+ "c",
+ "trl"
+ ],
+ [
+ "L",
+ "ambda"
+ ],
+ [
+ "t",
+ "p"
+ ],
+ [
+ "mon",
+ "d"
+ ],
+ [
+ "mo",
+ "nd"
+ ],
+ [
+ "m",
+ "ond"
+ ],
+ [
+ "atur",
+ "day"
+ ],
+ [
+ "norm",
+ "al"
+ ],
+ [
+ "nor",
+ "mal"
+ ],
+ [
+ "n",
+ "ormal"
+ ],
+ [
+ "▁thous",
+ "and"
+ ],
+ [
+ "▁Prof",
+ "ess"
+ ],
+ [
+ "▁dise",
+ "ase"
+ ],
+ [
+ "cl",
+ "ip"
+ ],
+ [
+ "cli",
+ "p"
+ ],
+ [
+ "▁г",
+ "ра"
+ ],
+ [
+ "▁",
+ "гра"
+ ],
+ [
+ "bolds",
+ "ymbol"
+ ],
+ [
+ "bold",
+ "symbol"
+ ],
+ [
+ "O",
+ "B"
+ ],
+ [
+ "▁chall",
+ "enge"
+ ],
+ [
+ "▁challeng",
+ "e"
+ ],
+ [
+ "▁m",
+ "otion"
+ ],
+ [
+ "▁mot",
+ "ion"
+ ],
+ [
+ "▁w",
+ "his"
+ ],
+ [
+ "▁wh",
+ "is"
+ ],
+ [
+ "▁le",
+ "aders"
+ ],
+ [
+ "▁lead",
+ "ers"
+ ],
+ [
+ "▁leader",
+ "s"
+ ],
+ [
+ "▁col",
+ "on"
+ ],
+ [
+ "▁co",
+ "lon"
+ ],
+ [
+ "▁",
+ "colon"
+ ],
+ [
+ "▁s",
+ "uit"
+ ],
+ [
+ "▁su",
+ "it"
+ ],
+ [
+ "▁",
+ "suit"
+ ],
+ [
+ "mi",
+ "d"
+ ],
+ [
+ "m",
+ "id"
+ ],
+ [
+ "amp",
+ "ion"
+ ],
+ [
+ "á",
+ "g"
+ ],
+ [
+ "▁view",
+ "s"
+ ],
+ [
+ "▁vie",
+ "ws"
+ ],
+ [
+ "▁",
+ "views"
+ ],
+ [
+ "▁app",
+ "ears"
+ ],
+ [
+ "▁appe",
+ "ars"
+ ],
+ [
+ "▁appear",
+ "s"
+ ],
+ [
+ "an",
+ "cel"
+ ],
+ [
+ "ance",
+ "l"
+ ],
+ [
+ "anc",
+ "el"
+ ],
+ [
+ "▁z",
+ "we"
+ ],
+ [
+ "▁zw",
+ "e"
+ ],
+ [
+ "IS",
+ "T"
+ ],
+ [
+ "I",
+ "ST"
+ ],
+ [
+ "▁le",
+ "aves"
+ ],
+ [
+ "▁leave",
+ "s"
+ ],
+ [
+ "▁e",
+ "nh"
+ ],
+ [
+ "▁en",
+ "h"
+ ],
+ [
+ "▁",
+ "enh"
+ ],
+ [
+ "Act",
+ "ive"
+ ],
+ [
+ "Activ",
+ "e"
+ ],
+ [
+ "▁d",
+ "it"
+ ],
+ [
+ "▁di",
+ "t"
+ ],
+ [
+ "▁",
+ "dit"
+ ],
+ [
+ "if",
+ "icate"
+ ],
+ [
+ "ific",
+ "ate"
+ ],
+ [
+ "ifica",
+ "te"
+ ],
+ [
+ "mat",
+ "rix"
+ ],
+ [
+ "Ex",
+ "pression"
+ ],
+ [
+ "Exp",
+ "ression"
+ ],
+ [
+ "Expr",
+ "ession"
+ ],
+ [
+ "Express",
+ "ion"
+ ],
+ [
+ "Re",
+ "ader"
+ ],
+ [
+ "Read",
+ "er"
+ ],
+ [
+ "▁m",
+ "ental"
+ ],
+ [
+ "▁men",
+ "tal"
+ ],
+ [
+ "▁ment",
+ "al"
+ ],
+ [
+ "em",
+ "bre"
+ ],
+ [
+ "emb",
+ "re"
+ ],
+ [
+ "e",
+ "mbre"
+ ],
+ [
+ "▁de",
+ "cor"
+ ],
+ [
+ "▁dec",
+ "or"
+ ],
+ [
+ "▁",
+ "decor"
+ ],
+ [
+ "ar",
+ "ts"
+ ],
+ [
+ "art",
+ "s"
+ ],
+ [
+ "▁v",
+ "ent"
+ ],
+ [
+ "▁ve",
+ "nt"
+ ],
+ [
+ "▁ven",
+ "t"
+ ],
+ [
+ "▁",
+ "vent"
+ ],
+ [
+ "ne",
+ "l"
+ ],
+ [
+ "n",
+ "el"
+ ],
+ [
+ "line",
+ "s"
+ ],
+ [
+ "li",
+ "nes"
+ ],
+ [
+ "lin",
+ "es"
+ ],
+ [
+ "l",
+ "ines"
+ ],
+ [
+ "up",
+ "id"
+ ],
+ [
+ "u",
+ "pid"
+ ],
+ [
+ "er",
+ "ved"
+ ],
+ [
+ "erv",
+ "ed"
+ ],
+ [
+ "erve",
+ "d"
+ ],
+ [
+ "▁bo",
+ "ys"
+ ],
+ [
+ "▁boy",
+ "s"
+ ],
+ [
+ "▁",
+ "boys"
+ ],
+ [
+ "ал",
+ "ь"
+ ],
+ [
+ "а",
+ "ль"
+ ],
+ [
+ "MO",
+ "D"
+ ],
+ [
+ "M",
+ "OD"
+ ],
+ [
+ "is",
+ "l"
+ ],
+ [
+ "i",
+ "sl"
+ ],
+ [
+ "▁[",
+ "["
+ ],
+ [
+ "▁",
+ "[["
+ ],
+ [
+ "ph",
+ "y"
+ ],
+ [
+ "p",
+ "hy"
+ ],
+ [
+ "▁.",
+ "."
+ ],
+ [
+ "▁",
+ ".."
+ ],
+ [
+ "▁a",
+ "gent"
+ ],
+ [
+ "▁ag",
+ "ent"
+ ],
+ [
+ "▁age",
+ "nt"
+ ],
+ [
+ "▁",
+ "agent"
+ ],
+ [
+ "▁S",
+ "ervices"
+ ],
+ [
+ "▁Service",
+ "s"
+ ],
+ [
+ "▁Serv",
+ "ices"
+ ],
+ [
+ "▁",
+ "Services"
+ ],
+ [
+ "▁i",
+ "ron"
+ ],
+ [
+ "▁ir",
+ "on"
+ ],
+ [
+ "▁",
+ "iron"
+ ],
+ [
+ "▁com",
+ "ponents"
+ ],
+ [
+ "▁compon",
+ "ents"
+ ],
+ [
+ "▁component",
+ "s"
+ ],
+ [
+ "▁",
+ "components"
+ ],
+ [
+ "▁f",
+ "re"
+ ],
+ [
+ "▁fr",
+ "e"
+ ],
+ [
+ "▁",
+ "fre"
+ ],
+ [
+ "iction",
+ "ary"
+ ],
+ [
+ "▁t",
+ "ests"
+ ],
+ [
+ "▁te",
+ "sts"
+ ],
+ [
+ "▁test",
+ "s"
+ ],
+ [
+ "▁",
+ "tests"
+ ],
+ [
+ ".~",
+ "\\"
+ ],
+ [
+ ".",
+ "~\\"
+ ],
+ [
+ "ob",
+ "s"
+ ],
+ [
+ "o",
+ "bs"
+ ],
+ [
+ "▁М",
+ "и"
+ ],
+ [
+ "▁об",
+ "ла"
+ ],
+ [
+ "▁ass",
+ "ess"
+ ],
+ [
+ "▁Fr",
+ "iday"
+ ],
+ [
+ "▁we",
+ "ather"
+ ],
+ [
+ "k",
+ "g"
+ ],
+ [
+ "ст",
+ "ра"
+ ],
+ [
+ "с",
+ "тра"
+ ],
+ [
+ ".",
+ "}"
+ ],
+ [
+ "end",
+ "ant"
+ ],
+ [
+ "enda",
+ "nt"
+ ],
+ [
+ "an",
+ "na"
+ ],
+ [
+ "ann",
+ "a"
+ ],
+ [
+ "▁Japan",
+ "ese"
+ ],
+ [
+ "cm",
+ "p"
+ ],
+ [
+ "c",
+ "mp"
+ ],
+ [
+ "▁Ar",
+ "my"
+ ],
+ [
+ "▁Arm",
+ "y"
+ ],
+ [
+ "on",
+ "ym"
+ ],
+ [
+ "ony",
+ "m"
+ ],
+ [
+ "o",
+ "nym"
+ ],
+ [
+ "▁rel",
+ "ax"
+ ],
+ [
+ "date",
+ "s"
+ ],
+ [
+ "da",
+ "tes"
+ ],
+ [
+ "dat",
+ "es"
+ ],
+ [
+ "d",
+ "ates"
+ ],
+ [
+ "▁R",
+ "ussian"
+ ],
+ [
+ "▁Russ",
+ "ian"
+ ],
+ [
+ "▁Russia",
+ "n"
+ ],
+ [
+ "▁excell",
+ "ent"
+ ],
+ [
+ "')",
+ ")"
+ ],
+ [
+ "'",
+ "))"
+ ],
+ [
+ "IL",
+ "ITY"
+ ],
+ [
+ "▁sh",
+ "owing"
+ ],
+ [
+ "▁show",
+ "ing"
+ ],
+ [
+ "▁Dan",
+ "iel"
+ ],
+ [
+ "м",
+ "я"
+ ],
+ [
+ "▁M",
+ "ain"
+ ],
+ [
+ "▁Ma",
+ "in"
+ ],
+ [
+ "▁Mai",
+ "n"
+ ],
+ [
+ "▁",
+ "Main"
+ ],
+ [
+ "Ph",
+ "i"
+ ],
+ [
+ "P",
+ "hi"
+ ],
+ [
+ "▁R",
+ "ock"
+ ],
+ [
+ "▁Ro",
+ "ck"
+ ],
+ [
+ "▁Roc",
+ "k"
+ ],
+ [
+ "▁g",
+ "rew"
+ ],
+ [
+ "▁gr",
+ "ew"
+ ],
+ [
+ "▁gre",
+ "w"
+ ],
+ [
+ "▁y",
+ "ield"
+ ],
+ [
+ "i",
+ "ère"
+ ],
+ [
+ "se",
+ "g"
+ ],
+ [
+ "s",
+ "eg"
+ ],
+ [
+ "}}",
+ "$"
+ ],
+ [
+ "}",
+ "}$"
+ ],
+ [
+ "▁st",
+ "rict"
+ ],
+ [
+ "▁str",
+ "ict"
+ ],
+ [
+ "▁stri",
+ "ct"
+ ],
+ [
+ "▁",
+ "strict"
+ ],
+ [
+ "▁v",
+ "ehicle"
+ ],
+ [
+ "▁veh",
+ "icle"
+ ],
+ [
+ "U",
+ "D"
+ ],
+ [
+ "A",
+ "F"
+ ],
+ [
+ "S",
+ "w"
+ ],
+ [
+ "▁c",
+ "hest"
+ ],
+ [
+ "▁ch",
+ "est"
+ ],
+ [
+ "▁che",
+ "st"
+ ],
+ [
+ "▁off",
+ "icer"
+ ],
+ [
+ "▁offic",
+ "er"
+ ],
+ [
+ "▁office",
+ "r"
+ ],
+ [
+ "▁e",
+ "ar"
+ ],
+ [
+ "▁",
+ "ear"
+ ],
+ [
+ "HE",
+ "R"
+ ],
+ [
+ "H",
+ "ER"
+ ],
+ [
+ "no",
+ "on"
+ ],
+ [
+ "n",
+ "oon"
+ ],
+ [
+ "▁jour",
+ "ney"
+ ],
+ [
+ "N",
+ "T"
+ ],
+ [
+ "▁d",
+ "ivers"
+ ],
+ [
+ "▁di",
+ "vers"
+ ],
+ [
+ "▁div",
+ "ers"
+ ],
+ [
+ "▁diver",
+ "s"
+ ],
+ [
+ "▁dive",
+ "rs"
+ ],
+ [
+ "▁Fin",
+ "ally"
+ ],
+ [
+ "▁Final",
+ "ly"
+ ],
+ [
+ "F",
+ "ound"
+ ],
+ [
+ "▁A",
+ "S"
+ ],
+ [
+ "▁",
+ "AS"
+ ],
+ [
+ "ri",
+ "k"
+ ],
+ [
+ "r",
+ "ik"
+ ],
+ [
+ "▁con",
+ "str"
+ ],
+ [
+ "▁const",
+ "r"
+ ],
+ [
+ "▁cons",
+ "tr"
+ ],
+ [
+ "▁s",
+ "ust"
+ ],
+ [
+ "▁su",
+ "st"
+ ],
+ [
+ "▁sus",
+ "t"
+ ],
+ [
+ "ac",
+ "count"
+ ],
+ [
+ "acc",
+ "ount"
+ ],
+ [
+ "acco",
+ "unt"
+ ],
+ [
+ "▁w",
+ "alls"
+ ],
+ [
+ "▁wall",
+ "s"
+ ],
+ [
+ "▁wal",
+ "ls"
+ ],
+ [
+ "▁entire",
+ "ly"
+ ],
+ [
+ "It",
+ "er"
+ ],
+ [
+ "I",
+ "ter"
+ ],
+ [
+ "ch",
+ "a"
+ ],
+ [
+ "c",
+ "ha"
+ ],
+ [
+ "is",
+ "hes"
+ ],
+ [
+ "ish",
+ "es"
+ ],
+ [
+ "IV",
+ "E"
+ ],
+ [
+ "I",
+ "VE"
+ ],
+ [
+ "▁pr",
+ "ime"
+ ],
+ [
+ "▁prim",
+ "e"
+ ],
+ [
+ "▁pri",
+ "me"
+ ],
+ [
+ "▁",
+ "prime"
+ ],
+ [
+ "▁",
+ "…"
+ ],
+ [
+ "x",
+ "e"
+ ],
+ [
+ "ut",
+ "en"
+ ],
+ [
+ "ute",
+ "n"
+ ],
+ [
+ "u",
+ "ten"
+ ],
+ [
+ "ar",
+ "se"
+ ],
+ [
+ "ars",
+ "e"
+ ],
+ [
+ "▁P",
+ "a"
+ ],
+ [
+ "put",
+ "e"
+ ],
+ [
+ "pu",
+ "te"
+ ],
+ [
+ "p",
+ "ute"
+ ],
+ [
+ "ä",
+ "l"
+ ],
+ [
+ "▁prote",
+ "ction"
+ ],
+ [
+ "▁protect",
+ "ion"
+ ],
+ [
+ "▁prot",
+ "ection"
+ ],
+ [
+ "▁ke",
+ "ys"
+ ],
+ [
+ "▁key",
+ "s"
+ ],
+ [
+ "▁",
+ "keys"
+ ],
+ [
+ "Ma",
+ "y"
+ ],
+ [
+ "M",
+ "ay"
+ ],
+ [
+ "By",
+ "te"
+ ],
+ [
+ "Con",
+ "st"
+ ],
+ [
+ "Cons",
+ "t"
+ ],
+ [
+ "B",
+ "L"
+ ],
+ [
+ "▁п",
+ "е"
+ ],
+ [
+ "▁",
+ "пе"
+ ],
+ [
+ "▁s",
+ "pl"
+ ],
+ [
+ "▁sp",
+ "l"
+ ],
+ [
+ "▁",
+ "spl"
+ ],
+ [
+ "▁cl",
+ "othes"
+ ],
+ [
+ "▁cloth",
+ "es"
+ ],
+ [
+ "as",
+ "hed"
+ ],
+ [
+ "ash",
+ "ed"
+ ],
+ [
+ "Mar",
+ "k"
+ ],
+ [
+ "M",
+ "ark"
+ ],
+ [
+ "è",
+ "me"
+ ],
+ [
+ "▁f",
+ "ait"
+ ],
+ [
+ "▁fa",
+ "it"
+ ],
+ [
+ "▁introdu",
+ "ced"
+ ],
+ [
+ "▁introduce",
+ "d"
+ ],
+ [
+ "un",
+ "lock"
+ ],
+ [
+ "▁In",
+ "stead"
+ ],
+ [
+ "▁Inst",
+ "ead"
+ ],
+ [
+ "ans",
+ "ion"
+ ],
+ [
+ "reg",
+ "ion"
+ ],
+ [
+ "▁Amer",
+ "icans"
+ ],
+ [
+ "▁American",
+ "s"
+ ],
+ [
+ "▁America",
+ "ns"
+ ],
+ [
+ "▁ind",
+ "eed"
+ ],
+ [
+ "▁inde",
+ "ed"
+ ],
+ [
+ "wid",
+ "get"
+ ],
+ [
+ "w",
+ "idget"
+ ],
+ [
+ "▁real",
+ "ize"
+ ],
+ [
+ "▁realiz",
+ "e"
+ ],
+ [
+ "▁f",
+ "ro"
+ ],
+ [
+ "▁fr",
+ "o"
+ ],
+ [
+ "BI",
+ "T"
+ ],
+ [
+ "B",
+ "IT"
+ ],
+ [
+ "▁Re",
+ "act"
+ ],
+ [
+ "▁",
+ "React"
+ ],
+ [
+ "RE",
+ "AD"
+ ],
+ [
+ "as",
+ "ket"
+ ],
+ [
+ "ask",
+ "et"
+ ],
+ [
+ "ne",
+ "ver"
+ ],
+ [
+ "n",
+ "ever"
+ ],
+ [
+ "▁p",
+ "oll"
+ ],
+ [
+ "▁pol",
+ "l"
+ ],
+ [
+ "▁po",
+ "ll"
+ ],
+ [
+ "▁",
+ "poll"
+ ],
+ [
+ "ic",
+ "ol"
+ ],
+ [
+ "ico",
+ "l"
+ ],
+ [
+ "i",
+ "col"
+ ],
+ [
+ "▁p",
+ "rev"
+ ],
+ [
+ "▁pre",
+ "v"
+ ],
+ [
+ "▁pr",
+ "ev"
+ ],
+ [
+ "▁",
+ "prev"
+ ],
+ [
+ "▁h",
+ "yp"
+ ],
+ [
+ "▁hy",
+ "p"
+ ],
+ [
+ "▁F",
+ "ur"
+ ],
+ [
+ "▁Fu",
+ "r"
+ ],
+ [
+ "cl",
+ "oud"
+ ],
+ [
+ "▁L",
+ "ee"
+ ],
+ [
+ "▁Le",
+ "e"
+ ],
+ [
+ "pl",
+ "ing"
+ ],
+ [
+ "p",
+ "ling"
+ ],
+ [
+ "▁Ch",
+ "ild"
+ ],
+ [
+ "▁Chi",
+ "ld"
+ ],
+ [
+ "▁",
+ "Child"
+ ],
+ [
+ "▁ide",
+ "al"
+ ],
+ [
+ "▁idea",
+ "l"
+ ],
+ [
+ "Se",
+ "lector"
+ ],
+ [
+ "Select",
+ "or"
+ ],
+ [
+ "STAT",
+ "US"
+ ],
+ [
+ "uct",
+ "ure"
+ ],
+ [
+ "▁w",
+ "ine"
+ ],
+ [
+ "▁win",
+ "e"
+ ],
+ [
+ "▁poss",
+ "ibly"
+ ],
+ [
+ "▁put",
+ "ting"
+ ],
+ [
+ "▁r",
+ "iv"
+ ],
+ [
+ "▁ri",
+ "v"
+ ],
+ [
+ "▁",
+ "riv"
+ ],
+ [
+ "▁w",
+ "earing"
+ ],
+ [
+ "▁we",
+ "aring"
+ ],
+ [
+ "▁wear",
+ "ing"
+ ],
+ [
+ "▁S",
+ "ource"
+ ],
+ [
+ "▁",
+ "Source"
+ ],
+ [
+ "▁C",
+ "as"
+ ],
+ [
+ "▁Ca",
+ "s"
+ ],
+ [
+ "Ch",
+ "anged"
+ ],
+ [
+ "Change",
+ "d"
+ ],
+ [
+ "▁th",
+ "anks"
+ ],
+ [
+ "▁than",
+ "ks"
+ ],
+ [
+ "▁thank",
+ "s"
+ ],
+ [
+ "TI",
+ "ME"
+ ],
+ [
+ "TIM",
+ "E"
+ ],
+ [
+ "T",
+ "IME"
+ ],
+ [
+ "▁s",
+ "port"
+ ],
+ [
+ "▁sp",
+ "ort"
+ ],
+ [
+ "▁spo",
+ "rt"
+ ],
+ [
+ "▁A",
+ "ward"
+ ],
+ [
+ "▁Aw",
+ "ard"
+ ],
+ [
+ "▁g",
+ "lad"
+ ],
+ [
+ "▁gl",
+ "ad"
+ ],
+ [
+ "▁P",
+ "ass"
+ ],
+ [
+ "▁Pa",
+ "ss"
+ ],
+ [
+ "▁Pas",
+ "s"
+ ],
+ [
+ "▁",
+ "Pass"
+ ],
+ [
+ "▁P",
+ "os"
+ ],
+ [
+ "▁Po",
+ "s"
+ ],
+ [
+ "▁",
+ "Pos"
+ ],
+ [
+ "sc",
+ "he"
+ ],
+ [
+ "sch",
+ "e"
+ ],
+ [
+ "s",
+ "che"
+ ],
+ [
+ "▁C",
+ "D"
+ ],
+ [
+ "▁",
+ "CD"
+ ],
+ [
+ "▁aff",
+ "ord"
+ ],
+ [
+ "▁af",
+ "ford"
+ ],
+ [
+ "▁W",
+ "omen"
+ ],
+ [
+ "▁Wo",
+ "men"
+ ],
+ [
+ "▁D",
+ "istrict"
+ ],
+ [
+ "▁Di",
+ "strict"
+ ],
+ [
+ "▁Dist",
+ "rict"
+ ],
+ [
+ "▁id",
+ "entity"
+ ],
+ [
+ "▁ident",
+ "ity"
+ ],
+ [
+ "▁",
+ "identity"
+ ],
+ [
+ "▁part",
+ "ies"
+ ],
+ [
+ "▁par",
+ "ties"
+ ],
+ [
+ "▁partie",
+ "s"
+ ],
+ [
+ "▁parti",
+ "es"
+ ],
+ [
+ ":",
+ "%"
+ ],
+ [
+ "▁d",
+ "rag"
+ ],
+ [
+ "▁dr",
+ "ag"
+ ],
+ [
+ "▁",
+ "drag"
+ ],
+ [
+ "▁m",
+ "ai"
+ ],
+ [
+ "▁ma",
+ "i"
+ ],
+ [
+ "!",
+ "("
+ ],
+ [
+ "lang",
+ "le"
+ ],
+ [
+ "lan",
+ "gle"
+ ],
+ [
+ "l",
+ "angle"
+ ],
+ [
+ "▁kn",
+ "owing"
+ ],
+ [
+ "▁know",
+ "ing"
+ ],
+ [
+ "Pro",
+ "ject"
+ ],
+ [
+ "▁reg",
+ "arding"
+ ],
+ [
+ "▁regard",
+ "ing"
+ ],
+ [
+ "▁Jose",
+ "ph"
+ ],
+ [
+ "▁Jos",
+ "eph"
+ ],
+ [
+ "г",
+ "е"
+ ],
+ [
+ "▁D",
+ "ar"
+ ],
+ [
+ "▁Da",
+ "r"
+ ],
+ [
+ "▁H",
+ "or"
+ ],
+ [
+ "▁Ho",
+ "r"
+ ],
+ [
+ "▁",
+ "Hor"
+ ],
+ [
+ "▁anim",
+ "als"
+ ],
+ [
+ "▁animal",
+ "s"
+ ],
+ [
+ "▁ext",
+ "ension"
+ ],
+ [
+ "▁extens",
+ "ion"
+ ],
+ [
+ "▁",
+ "extension"
+ ],
+ [
+ "ска",
+ "я"
+ ],
+ [
+ "▁H",
+ "an"
+ ],
+ [
+ "▁Ha",
+ "n"
+ ],
+ [
+ "bt",
+ "n"
+ ],
+ [
+ "b",
+ "tn"
+ ],
+ [
+ "ac",
+ "iones"
+ ],
+ [
+ "aci",
+ "ones"
+ ],
+ [
+ "acion",
+ "es"
+ ],
+ [
+ "acio",
+ "nes"
+ ],
+ [
+ "▁f",
+ "amiliar"
+ ],
+ [
+ "▁fam",
+ "iliar"
+ ],
+ [
+ "▁famil",
+ "iar"
+ ],
+ [
+ "▁familia",
+ "r"
+ ],
+ [
+ "hol",
+ "der"
+ ],
+ [
+ "hold",
+ "er"
+ ],
+ [
+ "h",
+ "older"
+ ],
+ [
+ ":",
+ "\r"
+ ],
+ [
+ "st",
+ "ood"
+ ],
+ [
+ "sto",
+ "od"
+ ],
+ [
+ "▁li",
+ "ked"
+ ],
+ [
+ "▁like",
+ "d"
+ ],
+ [
+ "▁lik",
+ "ed"
+ ],
+ [
+ "CO",
+ "DE"
+ ],
+ [
+ "▁en",
+ "able"
+ ],
+ [
+ "▁",
+ "enable"
+ ],
+ [
+ "▁p",
+ "ed"
+ ],
+ [
+ "▁pe",
+ "d"
+ ],
+ [
+ "▁",
+ "ped"
+ ],
+ [
+ "it",
+ "i"
+ ],
+ [
+ "i",
+ "ti"
+ ],
+ [
+ "ha",
+ "b"
+ ],
+ [
+ "h",
+ "ab"
+ ],
+ [
+ "DI",
+ "R"
+ ],
+ [
+ "D",
+ "IR"
+ ],
+ [
+ "▁be",
+ "at"
+ ],
+ [
+ "▁",
+ "beat"
+ ],
+ [
+ "т",
+ "і"
+ ],
+ [
+ "▁Min",
+ "ister"
+ ],
+ [
+ "▁Mini",
+ "ster"
+ ],
+ [
+ "▁p",
+ "y"
+ ],
+ [
+ "▁",
+ "py"
+ ],
+ [
+ "P",
+ "at"
+ ],
+ [
+ "▁ex",
+ "hib"
+ ],
+ [
+ "▁exh",
+ "ib"
+ ],
+ [
+ "▁B",
+ "uild"
+ ],
+ [
+ "▁Bu",
+ "ild"
+ ],
+ [
+ "▁",
+ "Build"
+ ],
+ [
+ "▁F",
+ "ield"
+ ],
+ [
+ "▁Fi",
+ "eld"
+ ],
+ [
+ "▁",
+ "Field"
+ ],
+ [
+ "ic",
+ "ian"
+ ],
+ [
+ "ici",
+ "an"
+ ],
+ [
+ "icia",
+ "n"
+ ],
+ [
+ "▁coll",
+ "abor"
+ ],
+ [
+ "▁qu",
+ "arter"
+ ],
+ [
+ "▁quart",
+ "er"
+ ],
+ [
+ "▁quar",
+ "ter"
+ ],
+ [
+ "▁F",
+ "alse"
+ ],
+ [
+ "▁Fal",
+ "se"
+ ],
+ [
+ "▁",
+ "False"
+ ],
+ [
+ "k",
+ "m"
+ ],
+ [
+ "▁v",
+ "irtual"
+ ],
+ [
+ "▁virt",
+ "ual"
+ ],
+ [
+ "▁",
+ "virtual"
+ ],
+ [
+ "ow",
+ "a"
+ ],
+ [
+ "o",
+ "wa"
+ ],
+ [
+ "▁J",
+ "on"
+ ],
+ [
+ "▁Jo",
+ "n"
+ ],
+ [
+ "am",
+ "in"
+ ],
+ [
+ "ami",
+ "n"
+ ],
+ [
+ "a",
+ "min"
+ ],
+ [
+ "ue",
+ "n"
+ ],
+ [
+ "u",
+ "en"
+ ],
+ [
+ "▁и",
+ "н"
+ ],
+ [
+ "▁",
+ "ин"
+ ],
+ [
+ "im",
+ "ation"
+ ],
+ [
+ "imat",
+ "ion"
+ ],
+ [
+ "ov",
+ "ing"
+ ],
+ [
+ "ovi",
+ "ng"
+ ],
+ [
+ "o",
+ "ving"
+ ],
+ [
+ "▁test",
+ "ing"
+ ],
+ [
+ "▁",
+ "testing"
+ ],
+ [
+ "se",
+ "ct"
+ ],
+ [
+ "sec",
+ "t"
+ ],
+ [
+ "s",
+ "ect"
+ ],
+ [
+ "IT",
+ "ION"
+ ],
+ [
+ "I",
+ "TION"
+ ],
+ [
+ "!",
+ "\\"
+ ],
+ [
+ "ap",
+ "y"
+ ],
+ [
+ "a",
+ "py"
+ ],
+ [
+ "▁trans",
+ "ition"
+ ],
+ [
+ "▁transit",
+ "ion"
+ ],
+ [
+ "▁",
+ "transition"
+ ],
+ [
+ "os",
+ "itory"
+ ],
+ [
+ "OD",
+ "O"
+ ],
+ [
+ "O",
+ "DO"
+ ],
+ [
+ "P",
+ "D"
+ ],
+ [
+ "n",
+ "é"
+ ],
+ [
+ "▁gener",
+ "ate"
+ ],
+ [
+ "▁gene",
+ "rate"
+ ],
+ [
+ "▁",
+ "generate"
+ ],
+ [
+ "▁n",
+ "ative"
+ ],
+ [
+ "▁nat",
+ "ive"
+ ],
+ [
+ "▁",
+ "native"
+ ],
+ [
+ "▁(",
+ "'"
+ ],
+ [
+ "▁",
+ "('"
+ ],
+ [
+ "▁e",
+ "lle"
+ ],
+ [
+ "▁el",
+ "le"
+ ],
+ [
+ "▁ell",
+ "e"
+ ],
+ [
+ "▁",
+ "elle"
+ ],
+ [
+ "R",
+ "R"
+ ],
+ [
+ "▁h",
+ "un"
+ ],
+ [
+ "_-",
+ ">"
+ ],
+ [
+ "_",
+ "->"
+ ],
+ [
+ "ag",
+ "nost"
+ ],
+ [
+ "agn",
+ "ost"
+ ],
+ [
+ "▁pro",
+ "posed"
+ ],
+ [
+ "▁prop",
+ "osed"
+ ],
+ [
+ "▁propos",
+ "ed"
+ ],
+ [
+ "▁propose",
+ "d"
+ ],
+ [
+ "▁G",
+ "ame"
+ ],
+ [
+ "▁Ga",
+ "me"
+ ],
+ [
+ "▁Gam",
+ "e"
+ ],
+ [
+ "▁",
+ "Game"
+ ],
+ [
+ "▁eff",
+ "orts"
+ ],
+ [
+ "▁effort",
+ "s"
+ ],
+ [
+ "в",
+ "я"
+ ],
+ [
+ "t",
+ "c"
+ ],
+ [
+ "с",
+ "к"
+ ],
+ [
+ "▁int",
+ "ent"
+ ],
+ [
+ "▁inte",
+ "nt"
+ ],
+ [
+ "▁",
+ "intent"
+ ],
+ [
+ "▁B",
+ "re"
+ ],
+ [
+ "▁Br",
+ "e"
+ ],
+ [
+ "is",
+ "c"
+ ],
+ [
+ "i",
+ "sc"
+ ],
+ [
+ "▁pro",
+ "test"
+ ],
+ [
+ "▁prote",
+ "st"
+ ],
+ [
+ "▁prot",
+ "est"
+ ],
+ [
+ "▁h",
+ "olds"
+ ],
+ [
+ "▁hold",
+ "s"
+ ],
+ [
+ "▁hol",
+ "ds"
+ ],
+ [
+ "▁",
+ "holds"
+ ],
+ [
+ "om",
+ "etry"
+ ],
+ [
+ "ome",
+ "try"
+ ],
+ [
+ "omet",
+ "ry"
+ ],
+ [
+ "o",
+ "metry"
+ ],
+ [
+ "▁H",
+ "ave"
+ ],
+ [
+ "▁Ha",
+ "ve"
+ ],
+ [
+ "▁Hav",
+ "e"
+ ],
+ [
+ "▁",
+ "Have"
+ ],
+ [
+ "▁de",
+ "tail"
+ ],
+ [
+ "▁det",
+ "ail"
+ ],
+ [
+ "▁",
+ "detail"
+ ],
+ [
+ "▁WIT",
+ "HOUT"
+ ],
+ [
+ "▁WITH",
+ "OUT"
+ ],
+ [
+ "ye",
+ "r"
+ ],
+ [
+ "y",
+ "er"
+ ],
+ [
+ "▁K",
+ "on"
+ ],
+ [
+ "▁Ko",
+ "n"
+ ],
+ [
+ "▁not",
+ "iced"
+ ],
+ [
+ "▁notice",
+ "d"
+ ],
+ [
+ "▁require",
+ "ments"
+ ],
+ [
+ "▁requirement",
+ "s"
+ ],
+ [
+ "DE",
+ "BUG"
+ ],
+ [
+ "ki",
+ "ns"
+ ],
+ [
+ "kin",
+ "s"
+ ],
+ [
+ "k",
+ "ins"
+ ],
+ [
+ "▁S",
+ "pan"
+ ],
+ [
+ "▁Sp",
+ "an"
+ ],
+ [
+ "▁",
+ "Span"
+ ],
+ [
+ "▁c",
+ "ars"
+ ],
+ [
+ "▁car",
+ "s"
+ ],
+ [
+ "▁ca",
+ "rs"
+ ],
+ [
+ "me",
+ "ta"
+ ],
+ [
+ "met",
+ "a"
+ ],
+ [
+ "m",
+ "eta"
+ ],
+ [
+ "▁k",
+ "il"
+ ],
+ [
+ "▁ki",
+ "l"
+ ],
+ [
+ "▁",
+ "kil"
+ ],
+ [
+ "▁B",
+ "ron"
+ ],
+ [
+ "▁Br",
+ "on"
+ ],
+ [
+ "▁Bro",
+ "n"
+ ],
+ [
+ "▁experience",
+ "d"
+ ],
+ [
+ "▁experi",
+ "enced"
+ ],
+ [
+ "▁re",
+ "mind"
+ ],
+ [
+ "▁rem",
+ "ind"
+ ],
+ [
+ "our",
+ "se"
+ ],
+ [
+ "ours",
+ "e"
+ ],
+ [
+ "▁W",
+ "estern"
+ ],
+ [
+ "▁West",
+ "ern"
+ ],
+ [
+ "▁Wes",
+ "tern"
+ ],
+ [
+ "ter",
+ "ed"
+ ],
+ [
+ "te",
+ "red"
+ ],
+ [
+ "tere",
+ "d"
+ ],
+ [
+ "t",
+ "ered"
+ ],
+ [
+ "▁dev",
+ "ices"
+ ],
+ [
+ "▁device",
+ "s"
+ ],
+ [
+ "▁",
+ "devices"
+ ],
+ [
+ "▁pict",
+ "ures"
+ ],
+ [
+ "▁picture",
+ "s"
+ ],
+ [
+ "▁t",
+ "ut"
+ ],
+ [
+ "▁tu",
+ "t"
+ ],
+ [
+ "\"",
+ "`"
+ ],
+ [
+ "▁im",
+ "possible"
+ ],
+ [
+ "▁r",
+ "ail"
+ ],
+ [
+ "▁ra",
+ "il"
+ ],
+ [
+ "▁fe",
+ "els"
+ ],
+ [
+ "▁feel",
+ "s"
+ ],
+ [
+ "▁fee",
+ "ls"
+ ],
+ [
+ "ic",
+ "as"
+ ],
+ [
+ "ica",
+ "s"
+ ],
+ [
+ "i",
+ "cas"
+ ],
+ [
+ "il",
+ "ling"
+ ],
+ [
+ "ill",
+ "ing"
+ ],
+ [
+ "▁acc",
+ "ident"
+ ],
+ [
+ "▁'",
+ "@"
+ ],
+ [
+ "____",
+ "____"
+ ],
+ [
+ "▁n",
+ "otes"
+ ],
+ [
+ "▁not",
+ "es"
+ ],
+ [
+ "▁no",
+ "tes"
+ ],
+ [
+ "▁note",
+ "s"
+ ],
+ [
+ "▁",
+ "notes"
+ ],
+ [
+ "om",
+ "an"
+ ],
+ [
+ "oma",
+ "n"
+ ],
+ [
+ "o",
+ "man"
+ ],
+ [
+ "Par",
+ "ser"
+ ],
+ [
+ "Parse",
+ "r"
+ ],
+ [
+ "Pars",
+ "er"
+ ],
+ [
+ "▁dis",
+ "covered"
+ ],
+ [
+ "▁discover",
+ "ed"
+ ],
+ [
+ "▁R",
+ "oman"
+ ],
+ [
+ "▁Rom",
+ "an"
+ ],
+ [
+ "▁Ro",
+ "man"
+ ],
+ [
+ "▁Roma",
+ "n"
+ ],
+ [
+ "▁bud",
+ "get"
+ ],
+ [
+ "▁gu",
+ "ide"
+ ],
+ [
+ "▁guid",
+ "e"
+ ],
+ [
+ "ki",
+ "ng"
+ ],
+ [
+ "kin",
+ "g"
+ ],
+ [
+ "k",
+ "ing"
+ ],
+ [
+ "▁in",
+ "cred"
+ ],
+ [
+ "▁inc",
+ "red"
+ ],
+ [
+ "▁incre",
+ "d"
+ ],
+ [
+ "ol",
+ "ar"
+ ],
+ [
+ "ola",
+ "r"
+ ],
+ [
+ "o",
+ "lar"
+ ],
+ [
+ "en",
+ "den"
+ ],
+ [
+ "end",
+ "en"
+ ],
+ [
+ "ende",
+ "n"
+ ],
+ [
+ "Des",
+ "c"
+ ],
+ [
+ "De",
+ "sc"
+ ],
+ [
+ "D",
+ "esc"
+ ],
+ [
+ "▁w",
+ "ave"
+ ],
+ [
+ "▁wa",
+ "ve"
+ ],
+ [
+ "▁",
+ "wave"
+ ],
+ [
+ "б",
+ "ли"
+ ],
+ [
+ "ig",
+ "t"
+ ],
+ [
+ "i",
+ "gt"
+ ],
+ [
+ "▁re",
+ "strict"
+ ],
+ [
+ "▁rest",
+ "rict"
+ ],
+ [
+ "▁restr",
+ "ict"
+ ],
+ [
+ "▁R",
+ "et"
+ ],
+ [
+ "▁Re",
+ "t"
+ ],
+ [
+ "▁",
+ "Ret"
+ ],
+ [
+ "▁m",
+ "ac"
+ ],
+ [
+ "▁ma",
+ "c"
+ ],
+ [
+ "▁",
+ "mac"
+ ],
+ [
+ "у",
+ "р"
+ ],
+ [
+ "B",
+ "S"
+ ],
+ [
+ "í",
+ "s"
+ ],
+ [
+ "▁gener",
+ "ation"
+ ],
+ [
+ "de",
+ "m"
+ ],
+ [
+ "d",
+ "em"
+ ],
+ [
+ "al",
+ "o"
+ ],
+ [
+ "a",
+ "lo"
+ ],
+ [
+ "б",
+ "ра"
+ ],
+ [
+ "▁order",
+ "ed"
+ ],
+ [
+ "▁ord",
+ "ered"
+ ],
+ [
+ "▁",
+ "ordered"
+ ],
+ [
+ "dr",
+ "op"
+ ],
+ [
+ "dro",
+ "p"
+ ],
+ [
+ "d",
+ "rop"
+ ],
+ [
+ "▁p",
+ "p"
+ ],
+ [
+ "▁",
+ "pp"
+ ],
+ [
+ "▁Re",
+ "view"
+ ],
+ [
+ "▁Rev",
+ "iew"
+ ],
+ [
+ "▁",
+ "Review"
+ ],
+ [
+ "▁liter",
+ "ally"
+ ],
+ [
+ "▁literal",
+ "ly"
+ ],
+ [
+ "▁S",
+ "ir"
+ ],
+ [
+ "▁Si",
+ "r"
+ ],
+ [
+ "▁",
+ "Sir"
+ ],
+ [
+ "▁Y",
+ "eah"
+ ],
+ [
+ "▁Ye",
+ "ah"
+ ],
+ [
+ "▁",
+ "Yeah"
+ ],
+ [
+ "▁d",
+ "ensity"
+ ],
+ [
+ "▁dens",
+ "ity"
+ ],
+ [
+ "▁",
+ "density"
+ ],
+ [
+ "ri",
+ "z"
+ ],
+ [
+ "r",
+ "iz"
+ ],
+ [
+ "in",
+ "de"
+ ],
+ [
+ "ind",
+ "e"
+ ],
+ [
+ "i",
+ "nde"
+ ],
+ [
+ "▁g",
+ "ain"
+ ],
+ [
+ "▁ga",
+ "in"
+ ],
+ [
+ "▁",
+ "gain"
+ ],
+ [
+ "▁p",
+ "anel"
+ ],
+ [
+ "▁pan",
+ "el"
+ ],
+ [
+ "▁pa",
+ "nel"
+ ],
+ [
+ "▁",
+ "panel"
+ ],
+ [
+ "je",
+ "t"
+ ],
+ [
+ "j",
+ "et"
+ ],
+ [
+ "▁T",
+ "imes"
+ ],
+ [
+ "▁Time",
+ "s"
+ ],
+ [
+ "▁Tim",
+ "es"
+ ],
+ [
+ "▁Ti",
+ "mes"
+ ],
+ [
+ "▁",
+ "Times"
+ ],
+ [
+ "▁n",
+ "ella"
+ ],
+ [
+ "▁ne",
+ "lla"
+ ],
+ [
+ "▁nel",
+ "la"
+ ],
+ [
+ "▁nell",
+ "a"
+ ],
+ [
+ "▁pre",
+ "viously"
+ ],
+ [
+ "▁previous",
+ "ly"
+ ],
+ [
+ "▁prev",
+ "iously"
+ ],
+ [
+ "point",
+ "s"
+ ],
+ [
+ "Se",
+ "nd"
+ ],
+ [
+ "S",
+ "end"
+ ],
+ [
+ "▁B",
+ "rown"
+ ],
+ [
+ "▁Br",
+ "own"
+ ],
+ [
+ "▁Bro",
+ "wn"
+ ],
+ [
+ "▁Brow",
+ "n"
+ ],
+ [
+ "ea",
+ "ch"
+ ],
+ [
+ "e",
+ "ach"
+ ],
+ [
+ "▁tr",
+ "igger"
+ ],
+ [
+ "▁",
+ "trigger"
+ ],
+ [
+ "ome",
+ "times"
+ ],
+ [
+ "omet",
+ "imes"
+ ],
+ [
+ "ic",
+ "os"
+ ],
+ [
+ "ico",
+ "s"
+ ],
+ [
+ "i",
+ "cos"
+ ],
+ [
+ "G",
+ "R"
+ ],
+ [
+ "Pane",
+ "l"
+ ],
+ [
+ "Pan",
+ "el"
+ ],
+ [
+ "P",
+ "anel"
+ ],
+ [
+ "og",
+ "en"
+ ],
+ [
+ "oge",
+ "n"
+ ],
+ [
+ "o",
+ "gen"
+ ],
+ [
+ "▁c",
+ "m"
+ ],
+ [
+ "▁",
+ "cm"
+ ],
+ [
+ "ru",
+ "ctions"
+ ],
+ [
+ "ruct",
+ "ions"
+ ],
+ [
+ "ruction",
+ "s"
+ ],
+ [
+ "▁k",
+ "iss"
+ ],
+ [
+ "▁ki",
+ "ss"
+ ],
+ [
+ "▁s",
+ "olo"
+ ],
+ [
+ "▁so",
+ "lo"
+ ],
+ [
+ "▁sol",
+ "o"
+ ],
+ [
+ "▁f",
+ "amous"
+ ],
+ [
+ "▁fam",
+ "ous"
+ ],
+ [
+ "ra",
+ "n"
+ ],
+ [
+ "r",
+ "an"
+ ],
+ [
+ "п",
+ "ро"
+ ],
+ [
+ "▁th",
+ "ro"
+ ],
+ [
+ "▁thr",
+ "o"
+ ],
+ [
+ "Gr",
+ "aph"
+ ],
+ [
+ "G",
+ "raph"
+ ],
+ [
+ "im",
+ "it"
+ ],
+ [
+ "imi",
+ "t"
+ ],
+ [
+ "i",
+ "mit"
+ ],
+ [
+ "▁V",
+ "alue"
+ ],
+ [
+ "▁Val",
+ "ue"
+ ],
+ [
+ "▁",
+ "Value"
+ ],
+ [
+ "▁st",
+ "arts"
+ ],
+ [
+ "▁start",
+ "s"
+ ],
+ [
+ "▁star",
+ "ts"
+ ],
+ [
+ "ip",
+ "eline"
+ ],
+ [
+ "ipe",
+ "line"
+ ],
+ [
+ "h",
+ "d"
+ ],
+ [
+ "T",
+ "C"
+ ],
+ [
+ "▁dis",
+ "cussion"
+ ],
+ [
+ "▁discuss",
+ "ion"
+ ],
+ [
+ "▁tr",
+ "uck"
+ ],
+ [
+ "ak",
+ "a"
+ ],
+ [
+ "a",
+ "ka"
+ ],
+ [
+ "On",
+ "ly"
+ ],
+ [
+ "▁E",
+ "qu"
+ ],
+ [
+ "▁Eq",
+ "u"
+ ],
+ [
+ "▁",
+ "Equ"
+ ],
+ [
+ "▁k",
+ "ö"
+ ],
+ [
+ "▁",
+ "kö"
+ ],
+ [
+ "▁B",
+ "es"
+ ],
+ [
+ "▁Be",
+ "s"
+ ],
+ [
+ "▁crit",
+ "ic"
+ ],
+ [
+ "▁pro",
+ "pos"
+ ],
+ [
+ "▁prop",
+ "os"
+ ],
+ [
+ "▁b",
+ "att"
+ ],
+ [
+ "▁bat",
+ "t"
+ ],
+ [
+ "▁ba",
+ "tt"
+ ],
+ [
+ "▁S",
+ "ection"
+ ],
+ [
+ "▁Se",
+ "ction"
+ ],
+ [
+ "▁",
+ "Section"
+ ],
+ [
+ "Sh",
+ "ow"
+ ],
+ [
+ "S",
+ "how"
+ ],
+ [
+ "g",
+ "p"
+ ],
+ [
+ "ST",
+ "ATE"
+ ],
+ [
+ "STAT",
+ "E"
+ ],
+ [
+ "PO",
+ "ST"
+ ],
+ [
+ "POS",
+ "T"
+ ],
+ [
+ "P",
+ "OST"
+ ],
+ [
+ "▁N",
+ "ord"
+ ],
+ [
+ "▁No",
+ "rd"
+ ],
+ [
+ "▁Nor",
+ "d"
+ ],
+ [
+ "▁in",
+ "nov"
+ ],
+ [
+ "▁inn",
+ "ov"
+ ],
+ [
+ "▁c",
+ "rim"
+ ],
+ [
+ "▁cr",
+ "im"
+ ],
+ [
+ "▁cri",
+ "m"
+ ],
+ [
+ "▁",
+ "crim"
+ ],
+ [
+ "ax",
+ "is"
+ ],
+ [
+ "a",
+ "xis"
+ ],
+ [
+ "▁T",
+ "urn"
+ ],
+ [
+ "▁Tur",
+ "n"
+ ],
+ [
+ "▁Tu",
+ "rn"
+ ],
+ [
+ "▁",
+ "Turn"
+ ],
+ [
+ "con",
+ "n"
+ ],
+ [
+ "co",
+ "nn"
+ ],
+ [
+ "Run",
+ "time"
+ ],
+ [
+ "▁rem",
+ "aining"
+ ],
+ [
+ "▁remain",
+ "ing"
+ ],
+ [
+ "os",
+ "ton"
+ ],
+ [
+ "ost",
+ "on"
+ ],
+ [
+ "osto",
+ "n"
+ ],
+ [
+ "o",
+ "ston"
+ ],
+ [
+ "▁",
+ "Э"
+ ],
+ [
+ "▁window",
+ "s"
+ ],
+ [
+ "▁wind",
+ "ows"
+ ],
+ [
+ "▁",
+ "windows"
+ ],
+ [
+ "▁R",
+ "oyal"
+ ],
+ [
+ "▁Ro",
+ "yal"
+ ],
+ [
+ "▁Roy",
+ "al"
+ ],
+ [
+ "▁v",
+ "ide"
+ ],
+ [
+ "▁vi",
+ "de"
+ ],
+ [
+ "▁vid",
+ "e"
+ ],
+ [
+ "P",
+ "P"
+ ],
+ [
+ "ch",
+ "ron"
+ ],
+ [
+ "chr",
+ "on"
+ ],
+ [
+ "▁s",
+ "an"
+ ],
+ [
+ "▁sa",
+ "n"
+ ],
+ [
+ "▁",
+ "san"
+ ],
+ [
+ "▁r",
+ "ise"
+ ],
+ [
+ "▁ri",
+ "se"
+ ],
+ [
+ "▁ris",
+ "e"
+ ],
+ [
+ "▁",
+ "rise"
+ ],
+ [
+ "▁d",
+ "elle"
+ ],
+ [
+ "▁de",
+ "lle"
+ ],
+ [
+ "▁del",
+ "le"
+ ],
+ [
+ "▁dell",
+ "e"
+ ],
+ [
+ "▁D",
+ "ur"
+ ],
+ [
+ "▁Du",
+ "r"
+ ],
+ [
+ "▁rap",
+ "id"
+ ],
+ [
+ "▁ra",
+ "pid"
+ ],
+ [
+ "ce",
+ "rt"
+ ],
+ [
+ "cer",
+ "t"
+ ],
+ [
+ "c",
+ "ert"
+ ],
+ [
+ "L",
+ "A"
+ ],
+ [
+ "ed",
+ "ge"
+ ],
+ [
+ "▁\\",
+ "]"
+ ],
+ [
+ "▁",
+ "\\]"
+ ],
+ [
+ "▁en",
+ "tered"
+ ],
+ [
+ "▁ent",
+ "ered"
+ ],
+ [
+ "▁enter",
+ "ed"
+ ],
+ [
+ "▁l",
+ "aws"
+ ],
+ [
+ "▁la",
+ "ws"
+ ],
+ [
+ "▁law",
+ "s"
+ ],
+ [
+ "▁ph",
+ "oto"
+ ],
+ [
+ "▁phot",
+ "o"
+ ],
+ [
+ "▁",
+ "photo"
+ ],
+ [
+ "▁ap",
+ "plications"
+ ],
+ [
+ "▁applic",
+ "ations"
+ ],
+ [
+ "▁application",
+ "s"
+ ],
+ [
+ "▁appl",
+ "ications"
+ ],
+ [
+ "▁Ber",
+ "lin"
+ ],
+ [
+ "▁ar",
+ "rest"
+ ],
+ [
+ "▁arr",
+ "est"
+ ],
+ [
+ "▁f",
+ "ederal"
+ ],
+ [
+ "▁fed",
+ "eral"
+ ],
+ [
+ "▁feder",
+ "al"
+ ],
+ [
+ "▁R",
+ "ussia"
+ ],
+ [
+ "▁Russ",
+ "ia"
+ ],
+ [
+ "▁us",
+ "ual"
+ ],
+ [
+ "▁r",
+ "aw"
+ ],
+ [
+ "▁ra",
+ "w"
+ ],
+ [
+ "▁",
+ "raw"
+ ],
+ [
+ "▁pi",
+ "ù"
+ ],
+ [
+ "êt",
+ "re"
+ ],
+ [
+ "ê",
+ "tre"
+ ],
+ [
+ "JS",
+ "ON"
+ ],
+ [
+ "J",
+ "SON"
+ ],
+ [
+ "SI",
+ "ON"
+ ],
+ [
+ "S",
+ "ION"
+ ],
+ [
+ "xt",
+ "ure"
+ ],
+ [
+ "ist",
+ "ent"
+ ],
+ [
+ "iste",
+ "nt"
+ ],
+ [
+ "isten",
+ "t"
+ ],
+ [
+ "▁P",
+ "ower"
+ ],
+ [
+ "▁Po",
+ "wer"
+ ],
+ [
+ "▁Pow",
+ "er"
+ ],
+ [
+ "▁",
+ "Power"
+ ],
+ [
+ "Bi",
+ "t"
+ ],
+ [
+ "B",
+ "it"
+ ],
+ [
+ "▁cap",
+ "acity"
+ ],
+ [
+ "▁capac",
+ "ity"
+ ],
+ [
+ "▁",
+ "capacity"
+ ],
+ [
+ "▁c",
+ "ards"
+ ],
+ [
+ "▁car",
+ "ds"
+ ],
+ [
+ "▁card",
+ "s"
+ ],
+ [
+ "▁",
+ "cards"
+ ],
+ [
+ "UI",
+ "D"
+ ],
+ [
+ "U",
+ "ID"
+ ],
+ [
+ "im",
+ "ents"
+ ],
+ [
+ "iment",
+ "s"
+ ],
+ [
+ "imen",
+ "ts"
+ ],
+ [
+ "i",
+ "ments"
+ ],
+ [
+ "▁d",
+ "ar"
+ ],
+ [
+ "▁da",
+ "r"
+ ],
+ [
+ "▁",
+ "dar"
+ ],
+ [
+ "▁Ch",
+ "icago"
+ ],
+ [
+ "▁comfort",
+ "able"
+ ],
+ [
+ "ti",
+ "p"
+ ],
+ [
+ "t",
+ "ip"
+ ],
+ [
+ "ba",
+ "s"
+ ],
+ [
+ "b",
+ "as"
+ ],
+ [
+ "▁m",
+ "u"
+ ],
+ [
+ "▁",
+ "mu"
+ ],
+ [
+ "▁en",
+ "emy"
+ ],
+ [
+ "▁enem",
+ "y"
+ ],
+ [
+ "ya",
+ "n"
+ ],
+ [
+ "y",
+ "an"
+ ],
+ [
+ "▁ф",
+ "и"
+ ],
+ [
+ "▁",
+ "фи"
+ ],
+ [
+ "▁up",
+ "dated"
+ ],
+ [
+ "▁update",
+ "d"
+ ],
+ [
+ "▁",
+ "updated"
+ ],
+ [
+ "an",
+ "go"
+ ],
+ [
+ "ang",
+ "o"
+ ],
+ [
+ "E",
+ "v"
+ ],
+ [
+ "E",
+ "ffect"
+ ],
+ [
+ "os",
+ "ing"
+ ],
+ [
+ "osi",
+ "ng"
+ ],
+ [
+ "o",
+ "sing"
+ ],
+ [
+ "ren",
+ "ce"
+ ],
+ [
+ "r",
+ "ence"
+ ],
+ [
+ "▁Con",
+ "gress"
+ ],
+ [
+ "▁Cong",
+ "ress"
+ ],
+ [
+ "▁d",
+ "efe"
+ ],
+ [
+ "▁de",
+ "fe"
+ ],
+ [
+ "▁def",
+ "e"
+ ],
+ [
+ "▁i",
+ "p"
+ ],
+ [
+ "▁",
+ "ip"
+ ],
+ [
+ "▁t",
+ "out"
+ ],
+ [
+ "▁to",
+ "ut"
+ ],
+ [
+ "▁tou",
+ "t"
+ ],
+ [
+ "▁f",
+ "reedom"
+ ],
+ [
+ "▁free",
+ "dom"
+ ],
+ [
+ "▁freed",
+ "om"
+ ],
+ [
+ "▁a",
+ "o"
+ ],
+ [
+ "▁",
+ "ao"
+ ],
+ [
+ "▁There",
+ "fore"
+ ],
+ [
+ "▁Ther",
+ "efore"
+ ],
+ [
+ "Ed",
+ "it"
+ ],
+ [
+ "E",
+ "dit"
+ ],
+ [
+ "▁Vir",
+ "gin"
+ ],
+ [
+ "RE",
+ "E"
+ ],
+ [
+ "R",
+ "EE"
+ ],
+ [
+ "ar",
+ "go"
+ ],
+ [
+ "arg",
+ "o"
+ ],
+ [
+ "▁D",
+ "am"
+ ],
+ [
+ "▁Da",
+ "m"
+ ],
+ [
+ "▁",
+ "Dam"
+ ],
+ [
+ "▁tra",
+ "ffic"
+ ],
+ [
+ "▁traff",
+ "ic"
+ ],
+ [
+ "ño",
+ "s"
+ ],
+ [
+ "ñ",
+ "os"
+ ],
+ [
+ "▁a",
+ "lle"
+ ],
+ [
+ "▁al",
+ "le"
+ ],
+ [
+ "▁all",
+ "e"
+ ],
+ [
+ "▁",
+ "alle"
+ ],
+ [
+ "▁dep",
+ "th"
+ ],
+ [
+ "▁",
+ "depth"
+ ],
+ [
+ "No",
+ "w"
+ ],
+ [
+ "N",
+ "ow"
+ ],
+ [
+ "▁s",
+ "ides"
+ ],
+ [
+ "▁side",
+ "s"
+ ],
+ [
+ "▁si",
+ "des"
+ ],
+ [
+ "▁sid",
+ "es"
+ ],
+ [
+ "▁го",
+ "ди"
+ ],
+ [
+ "▁год",
+ "и"
+ ],
+ [
+ "Des",
+ "criptor"
+ ],
+ [
+ "▁art",
+ "ikel"
+ ],
+ [
+ "▁n",
+ "arrow"
+ ],
+ [
+ "▁narr",
+ "ow"
+ ],
+ [
+ "▁nar",
+ "row"
+ ],
+ [
+ "__",
+ "_"
+ ],
+ [
+ "_",
+ "__"
+ ],
+ [
+ "k",
+ "w"
+ ],
+ [
+ "ut",
+ "o"
+ ],
+ [
+ "u",
+ "to"
+ ],
+ [
+ "▁Face",
+ "book"
+ ],
+ [
+ "▁Fac",
+ "ebook"
+ ],
+ [
+ "te",
+ "gr"
+ ],
+ [
+ "t",
+ "egr"
+ ],
+ [
+ "bo",
+ "olean"
+ ],
+ [
+ "ni",
+ "k"
+ ],
+ [
+ "n",
+ "ik"
+ ],
+ [
+ "b",
+ "d"
+ ],
+ [
+ "Tr",
+ "ack"
+ ],
+ [
+ "Tra",
+ "ck"
+ ],
+ [
+ "▁g",
+ "ran"
+ ],
+ [
+ "▁gr",
+ "an"
+ ],
+ [
+ "▁gra",
+ "n"
+ ],
+ [
+ "res",
+ "hold"
+ ],
+ [
+ "resh",
+ "old"
+ ],
+ [
+ "ве",
+ "т"
+ ],
+ [
+ "в",
+ "ет"
+ ],
+ [
+ "wr",
+ "ap"
+ ],
+ [
+ "w",
+ "rap"
+ ],
+ [
+ "▁n",
+ "oise"
+ ],
+ [
+ "▁no",
+ "ise"
+ ],
+ [
+ "ig",
+ "u"
+ ],
+ [
+ "i",
+ "gu"
+ ],
+ [
+ "▁B",
+ "on"
+ ],
+ [
+ "▁Bo",
+ "n"
+ ],
+ [
+ "▁",
+ "Bon"
+ ],
+ [
+ "▁w",
+ "y"
+ ],
+ [
+ "▁",
+ "wy"
+ ],
+ [
+ "lin",
+ "ux"
+ ],
+ [
+ "ck",
+ "s"
+ ],
+ [
+ "c",
+ "ks"
+ ],
+ [
+ "▁f",
+ "ans"
+ ],
+ [
+ "▁fa",
+ "ns"
+ ],
+ [
+ "▁fan",
+ "s"
+ ],
+ [
+ "▁m",
+ "ach"
+ ],
+ [
+ "▁ma",
+ "ch"
+ ],
+ [
+ "▁mac",
+ "h"
+ ],
+ [
+ "▁p",
+ "rices"
+ ],
+ [
+ "▁pr",
+ "ices"
+ ],
+ [
+ "▁pri",
+ "ces"
+ ],
+ [
+ "▁price",
+ "s"
+ ],
+ [
+ "é",
+ "v"
+ ],
+ [
+ "ou",
+ "ts"
+ ],
+ [
+ "out",
+ "s"
+ ],
+ [
+ "o",
+ "uts"
+ ],
+ [
+ "stand",
+ "ing"
+ ],
+ [
+ "stan",
+ "ding"
+ ],
+ [
+ "▁c",
+ "ateg"
+ ],
+ [
+ "▁cat",
+ "eg"
+ ],
+ [
+ ";",
+ "\\"
+ ],
+ [
+ "▁de",
+ "cre"
+ ],
+ [
+ "▁dec",
+ "re"
+ ],
+ [
+ "▁S",
+ "aturday"
+ ],
+ [
+ "▁m",
+ "enu"
+ ],
+ [
+ "▁me",
+ "nu"
+ ],
+ [
+ "▁men",
+ "u"
+ ],
+ [
+ "▁",
+ "menu"
+ ],
+ [
+ "▁N",
+ "ov"
+ ],
+ [
+ "▁No",
+ "v"
+ ],
+ [
+ "▁Y",
+ "et"
+ ],
+ [
+ "▁Ye",
+ "t"
+ ],
+ [
+ "▁та",
+ "к"
+ ],
+ [
+ "lic",
+ "he"
+ ],
+ [
+ "li",
+ "che"
+ ],
+ [
+ "lich",
+ "e"
+ ],
+ [
+ "l",
+ "iche"
+ ],
+ [
+ "▁Ac",
+ "adem"
+ ],
+ [
+ "▁commun",
+ "ication"
+ ],
+ [
+ "us",
+ "ing"
+ ],
+ [
+ "u",
+ "sing"
+ ],
+ [
+ "▁Soc",
+ "iety"
+ ],
+ [
+ "▁Soci",
+ "ety"
+ ],
+ [
+ "▁n",
+ "uc"
+ ],
+ [
+ "▁nu",
+ "c"
+ ],
+ [
+ "pect",
+ "ive"
+ ],
+ [
+ "or",
+ "ial"
+ ],
+ [
+ "oria",
+ "l"
+ ],
+ [
+ "ori",
+ "al"
+ ],
+ [
+ "o",
+ "rial"
+ ],
+ [
+ "▁af",
+ "raid"
+ ],
+ [
+ "▁an",
+ "imal"
+ ],
+ [
+ "▁anim",
+ "al"
+ ],
+ [
+ "▁turn",
+ "ing"
+ ],
+ [
+ "▁tur",
+ "ning"
+ ],
+ [
+ "ds",
+ "t"
+ ],
+ [
+ "d",
+ "st"
+ ],
+ [
+ "math",
+ "frak"
+ ],
+ [
+ "le",
+ "rs"
+ ],
+ [
+ "ler",
+ "s"
+ ],
+ [
+ "l",
+ "ers"
+ ],
+ [
+ "▁l",
+ "ots"
+ ],
+ [
+ "▁lo",
+ "ts"
+ ],
+ [
+ "▁lot",
+ "s"
+ ],
+ [
+ "▁",
+ "á"
+ ],
+ [
+ "▁T",
+ "ra"
+ ],
+ [
+ "▁Tr",
+ "a"
+ ],
+ [
+ "▁",
+ "Tra"
+ ],
+ [
+ "n",
+ "p"
+ ],
+ [
+ "▁r",
+ "ose"
+ ],
+ [
+ "▁ro",
+ "se"
+ ],
+ [
+ "▁",
+ "rose"
+ ],
+ [
+ "▁G",
+ "L"
+ ],
+ [
+ "▁",
+ "GL"
+ ],
+ [
+ "▁hel",
+ "ping"
+ ],
+ [
+ "▁help",
+ "ing"
+ ],
+ [
+ "▁w",
+ "inter"
+ ],
+ [
+ "▁win",
+ "ter"
+ ],
+ [
+ "▁ко",
+ "м"
+ ],
+ [
+ "▁",
+ "ком"
+ ],
+ [
+ "Mo",
+ "ck"
+ ],
+ [
+ "M",
+ "ock"
+ ],
+ [
+ "▁invest",
+ "ment"
+ ],
+ [
+ "Us",
+ "e"
+ ],
+ [
+ "U",
+ "se"
+ ],
+ [
+ "▁Can",
+ "ad"
+ ],
+ [
+ "н",
+ "д"
+ ],
+ [
+ "Co",
+ "py"
+ ],
+ [
+ "Cop",
+ "y"
+ ],
+ [
+ "C",
+ "opy"
+ ],
+ [
+ "▁f",
+ "ly"
+ ],
+ [
+ "▁fl",
+ "y"
+ ],
+ [
+ "▁",
+ "fly"
+ ],
+ [
+ "SE",
+ "R"
+ ],
+ [
+ "S",
+ "ER"
+ ],
+ [
+ "▁F",
+ "ar"
+ ],
+ [
+ "▁Fa",
+ "r"
+ ],
+ [
+ "▁R",
+ "os"
+ ],
+ [
+ "▁Ro",
+ "s"
+ ],
+ [
+ "am",
+ "il"
+ ],
+ [
+ "ami",
+ "l"
+ ],
+ [
+ "a",
+ "mil"
+ ],
+ [
+ "▁fight",
+ "ing"
+ ],
+ [
+ "▁rel",
+ "igious"
+ ],
+ [
+ "▁relig",
+ "ious"
+ ],
+ [
+ "su",
+ "per"
+ ],
+ [
+ "sup",
+ "er"
+ ],
+ [
+ "s",
+ "uper"
+ ],
+ [
+ "sc",
+ "reen"
+ ],
+ [
+ "scr",
+ "een"
+ ],
+ [
+ "s",
+ "creen"
+ ],
+ [
+ "▁f",
+ "urn"
+ ],
+ [
+ "▁fur",
+ "n"
+ ],
+ [
+ "▁fu",
+ "rn"
+ ],
+ [
+ "▁surpr",
+ "ised"
+ ],
+ [
+ "▁surprise",
+ "d"
+ ],
+ [
+ "▁re",
+ "plied"
+ ],
+ [
+ "▁repl",
+ "ied"
+ ],
+ [
+ "Act",
+ "ivity"
+ ],
+ [
+ "Activ",
+ "ity"
+ ],
+ [
+ "▁D",
+ "own"
+ ],
+ [
+ "▁Do",
+ "wn"
+ ],
+ [
+ "▁Dow",
+ "n"
+ ],
+ [
+ "▁",
+ "Down"
+ ],
+ [
+ "▁in",
+ "sert"
+ ],
+ [
+ "▁ins",
+ "ert"
+ ],
+ [
+ "▁",
+ "insert"
+ ],
+ [
+ "▁O",
+ "lymp"
+ ],
+ [
+ "▁point",
+ "ed"
+ ],
+ [
+ "▁po",
+ "inted"
+ ],
+ [
+ "▁C",
+ "ard"
+ ],
+ [
+ "▁Car",
+ "d"
+ ],
+ [
+ "▁Ca",
+ "rd"
+ ],
+ [
+ "▁",
+ "Card"
+ ],
+ [
+ "dr",
+ "iver"
+ ],
+ [
+ "drive",
+ "r"
+ ],
+ [
+ "d",
+ "river"
+ ],
+ [
+ "▁D",
+ "a"
+ ],
+ [
+ "▁",
+ "Da"
+ ],
+ [
+ "!",
+ "--"
+ ],
+ [
+ "ro",
+ "ud"
+ ],
+ [
+ "rou",
+ "d"
+ ],
+ [
+ "r",
+ "oud"
+ ],
+ [
+ "un",
+ "do"
+ ],
+ [
+ "und",
+ "o"
+ ],
+ [
+ "▁m",
+ "essages"
+ ],
+ [
+ "▁message",
+ "s"
+ ],
+ [
+ "▁mess",
+ "ages"
+ ],
+ [
+ "▁",
+ "messages"
+ ],
+ [
+ "▁P",
+ "oint"
+ ],
+ [
+ "▁Po",
+ "int"
+ ],
+ [
+ "▁",
+ "Point"
+ ],
+ [
+ "V",
+ "M"
+ ],
+ [
+ "▁p",
+ "lane"
+ ],
+ [
+ "▁pl",
+ "ane"
+ ],
+ [
+ "▁plan",
+ "e"
+ ],
+ [
+ "▁",
+ "plane"
+ ],
+ [
+ "x",
+ "c"
+ ],
+ [
+ "▁telev",
+ "ision"
+ ],
+ [
+ "▁tele",
+ "vision"
+ ],
+ [
+ "▁televis",
+ "ion"
+ ],
+ [
+ "ё",
+ "н"
+ ],
+ [
+ "▁thous",
+ "ands"
+ ],
+ [
+ "▁thousand",
+ "s"
+ ],
+ [
+ "▁c",
+ "ris"
+ ],
+ [
+ "▁cr",
+ "is"
+ ],
+ [
+ "▁cri",
+ "s"
+ ],
+ [
+ "▁de",
+ "lay"
+ ],
+ [
+ "▁del",
+ "ay"
+ ],
+ [
+ "▁",
+ "delay"
+ ],
+ [
+ "▁N",
+ "ext"
+ ],
+ [
+ "▁Ne",
+ "xt"
+ ],
+ [
+ "▁",
+ "Next"
+ ],
+ [
+ "▁no",
+ "mbre"
+ ],
+ [
+ "▁nom",
+ "bre"
+ ],
+ [
+ "▁t",
+ "u"
+ ],
+ [
+ "▁",
+ "tu"
+ ],
+ [
+ "▁sk",
+ "ip"
+ ],
+ [
+ "▁ski",
+ "p"
+ ],
+ [
+ "▁",
+ "skip"
+ ],
+ [
+ "ro",
+ "ad"
+ ],
+ [
+ "r",
+ "oad"
+ ],
+ [
+ "istr",
+ "ation"
+ ],
+ [
+ "▁t",
+ "ur"
+ ],
+ [
+ "▁tu",
+ "r"
+ ],
+ [
+ "▁De",
+ "velop"
+ ],
+ [
+ "▁Devel",
+ "op"
+ ],
+ [
+ "▁П",
+ "а"
+ ],
+ [
+ "▁д",
+ "ру"
+ ],
+ [
+ "▁др",
+ "у"
+ ],
+ [
+ "▁wonder",
+ "ful"
+ ],
+ [
+ ">",
+ "&"
+ ],
+ [
+ "▁L",
+ "iber"
+ ],
+ [
+ "▁Li",
+ "ber"
+ ],
+ [
+ "▁Lib",
+ "er"
+ ],
+ [
+ "▁s",
+ "cope"
+ ],
+ [
+ "▁sc",
+ "ope"
+ ],
+ [
+ "▁",
+ "scope"
+ ],
+ [
+ "▁man",
+ "age"
+ ],
+ [
+ "▁ma",
+ "nage"
+ ],
+ [
+ "▁d",
+ "ass"
+ ],
+ [
+ "▁da",
+ "ss"
+ ],
+ [
+ "▁das",
+ "s"
+ ],
+ [
+ "▁re",
+ "call"
+ ],
+ [
+ "▁rec",
+ "all"
+ ],
+ [
+ "P",
+ "M"
+ ],
+ [
+ "▁re",
+ "levant"
+ ],
+ [
+ "▁relev",
+ "ant"
+ ],
+ [
+ "▁E",
+ "arth"
+ ],
+ [
+ "▁ка",
+ "к"
+ ],
+ [
+ "▁a",
+ "pr"
+ ],
+ [
+ "▁ap",
+ "r"
+ ],
+ [
+ "▁A",
+ "SS"
+ ],
+ [
+ "▁AS",
+ "S"
+ ],
+ [
+ "▁",
+ "ASS"
+ ],
+ [
+ "ié",
+ "n"
+ ],
+ [
+ "i",
+ "én"
+ ],
+ [
+ "▁S",
+ "H"
+ ],
+ [
+ "▁",
+ "SH"
+ ],
+ [
+ "oo",
+ "m"
+ ],
+ [
+ "o",
+ "om"
+ ],
+ [
+ "it",
+ "et"
+ ],
+ [
+ "ite",
+ "t"
+ ],
+ [
+ "no",
+ "ne"
+ ],
+ [
+ "non",
+ "e"
+ ],
+ [
+ "n",
+ "one"
+ ],
+ [
+ "as",
+ "i"
+ ],
+ [
+ "a",
+ "si"
+ ],
+ [
+ "▁mot",
+ "or"
+ ],
+ [
+ "▁mo",
+ "tor"
+ ],
+ [
+ "▁S",
+ "how"
+ ],
+ [
+ "▁Sh",
+ "ow"
+ ],
+ [
+ "▁",
+ "Show"
+ ],
+ [
+ "n",
+ "b"
+ ],
+ [
+ "▁fact",
+ "ors"
+ ],
+ [
+ "▁fa",
+ "ctors"
+ ],
+ [
+ "▁factor",
+ "s"
+ ],
+ [
+ "▁f",
+ "orest"
+ ],
+ [
+ "▁for",
+ "est"
+ ],
+ [
+ "▁fore",
+ "st"
+ ],
+ [
+ "▁fo",
+ "rest"
+ ],
+ [
+ "▁в",
+ "ре"
+ ],
+ [
+ "th",
+ "m"
+ ],
+ [
+ "t",
+ "hm"
+ ],
+ [
+ "▁m",
+ "unicip"
+ ],
+ [
+ "▁turn",
+ "s"
+ ],
+ [
+ "▁tur",
+ "ns"
+ ],
+ [
+ "▁Div",
+ "ision"
+ ],
+ [
+ "▁Di",
+ "vision"
+ ],
+ [
+ "E",
+ "C"
+ ],
+ [
+ "▁dis",
+ "appe"
+ ],
+ [
+ "struct",
+ "or"
+ ],
+ [
+ "stru",
+ "ctor"
+ ],
+ [
+ "▁some",
+ "where"
+ ],
+ [
+ "▁Afr",
+ "ican"
+ ],
+ [
+ "▁Africa",
+ "n"
+ ],
+ [
+ "▁Inst",
+ "itute"
+ ],
+ [
+ "▁Institut",
+ "e"
+ ],
+ [
+ "Gr",
+ "id"
+ ],
+ [
+ "G",
+ "rid"
+ ],
+ [
+ "▁te",
+ "acher"
+ ],
+ [
+ "▁teach",
+ "er"
+ ],
+ [
+ "▁tea",
+ "cher"
+ ],
+ [
+ "ur",
+ "ies"
+ ],
+ [
+ "uri",
+ "es"
+ ],
+ [
+ "u",
+ "ries"
+ ],
+ [
+ "▁respect",
+ "ively"
+ ],
+ [
+ "▁respective",
+ "ly"
+ ],
+ [
+ "▁S",
+ "D"
+ ],
+ [
+ "▁",
+ "SD"
+ ],
+ [
+ "▁a",
+ "live"
+ ],
+ [
+ "▁al",
+ "ive"
+ ],
+ [
+ "▁ali",
+ "ve"
+ ],
+ [
+ "▁p",
+ "ou"
+ ],
+ [
+ "▁po",
+ "u"
+ ],
+ [
+ "▁W",
+ "ater"
+ ],
+ [
+ "▁Wat",
+ "er"
+ ],
+ [
+ "▁Wa",
+ "ter"
+ ],
+ [
+ "▁",
+ "Water"
+ ],
+ [
+ "ф",
+ "е"
+ ],
+ [
+ "▁ch",
+ "anging"
+ ],
+ [
+ "▁chang",
+ "ing"
+ ],
+ [
+ "▁",
+ "changing"
+ ],
+ [
+ "▁after",
+ "noon"
+ ],
+ [
+ "▁or",
+ "ders"
+ ],
+ [
+ "▁order",
+ "s"
+ ],
+ [
+ "▁ord",
+ "ers"
+ ],
+ [
+ "▁",
+ "orders"
+ ],
+ [
+ "Re",
+ "t"
+ ],
+ [
+ "R",
+ "et"
+ ],
+ [
+ "Point",
+ "er"
+ ],
+ [
+ "Po",
+ "inter"
+ ],
+ [
+ "▁s",
+ "av"
+ ],
+ [
+ "▁sa",
+ "v"
+ ],
+ [
+ "er",
+ "g"
+ ],
+ [
+ "e",
+ "rg"
+ ],
+ [
+ "ok",
+ "ed"
+ ],
+ [
+ "oke",
+ "d"
+ ],
+ [
+ "o",
+ "ked"
+ ],
+ [
+ "ess",
+ "ions"
+ ],
+ [
+ "ession",
+ "s"
+ ],
+ [
+ "▁F",
+ "ire"
+ ],
+ [
+ "▁Fi",
+ "re"
+ ],
+ [
+ "▁",
+ "Fire"
+ ],
+ [
+ "ar",
+ "et"
+ ],
+ [
+ "are",
+ "t"
+ ],
+ [
+ "a",
+ "ret"
+ ],
+ [
+ "im",
+ "m"
+ ],
+ [
+ "i",
+ "mm"
+ ],
+ [
+ "▁des",
+ "ire"
+ ],
+ [
+ "▁",
+ "що"
+ ],
+ [
+ "▁De",
+ "sign"
+ ],
+ [
+ "▁Des",
+ "ign"
+ ],
+ [
+ "▁",
+ "Design"
+ ],
+ [
+ "ut",
+ "ure"
+ ],
+ [
+ "▁Off",
+ "ice"
+ ],
+ [
+ "▁c",
+ "md"
+ ],
+ [
+ "▁cm",
+ "d"
+ ],
+ [
+ "▁",
+ "cmd"
+ ],
+ [
+ "▁e",
+ "ating"
+ ],
+ [
+ "▁eat",
+ "ing"
+ ],
+ [
+ "Net",
+ "work"
+ ],
+ [
+ "▁r",
+ "ough"
+ ],
+ [
+ "▁ro",
+ "ugh"
+ ],
+ [
+ "▁rou",
+ "gh"
+ ],
+ [
+ "▁",
+ "rough"
+ ],
+ [
+ "oper",
+ "ator"
+ ],
+ [
+ "IG",
+ "N"
+ ],
+ [
+ "I",
+ "GN"
+ ],
+ [
+ "▁s",
+ "ports"
+ ],
+ [
+ "▁sp",
+ "orts"
+ ],
+ [
+ "▁sport",
+ "s"
+ ],
+ [
+ "▁w",
+ "eren"
+ ],
+ [
+ "▁we",
+ "ren"
+ ],
+ [
+ "▁were",
+ "n"
+ ],
+ [
+ "▁wer",
+ "en"
+ ],
+ [
+ "▁n",
+ "oted"
+ ],
+ [
+ "▁not",
+ "ed"
+ ],
+ [
+ "▁no",
+ "ted"
+ ],
+ [
+ "▁note",
+ "d"
+ ],
+ [
+ "▁tw",
+ "ice"
+ ],
+ [
+ "II",
+ "I"
+ ],
+ [
+ "I",
+ "II"
+ ],
+ [
+ "▁a",
+ "nx"
+ ],
+ [
+ "▁an",
+ "x"
+ ],
+ [
+ "▁e",
+ "lim"
+ ],
+ [
+ "▁el",
+ "im"
+ ],
+ [
+ "▁а",
+ "в"
+ ],
+ [
+ "▁i",
+ "o"
+ ],
+ [
+ "▁",
+ "io"
+ ],
+ [
+ "▁spe",
+ "ech"
+ ],
+ [
+ "▁con",
+ "du"
+ ],
+ [
+ "▁cond",
+ "u"
+ ],
+ [
+ "el",
+ "les"
+ ],
+ [
+ "ell",
+ "es"
+ ],
+ [
+ "elle",
+ "s"
+ ],
+ [
+ "id",
+ "ade"
+ ],
+ [
+ "ida",
+ "de"
+ ],
+ [
+ "idad",
+ "e"
+ ],
+ [
+ "▁adv",
+ "ance"
+ ],
+ [
+ "R",
+ "I"
+ ],
+ [
+ "oc",
+ "a"
+ ],
+ [
+ "o",
+ "ca"
+ ],
+ [
+ "/",
+ "\\"
+ ],
+ [
+ "ap",
+ "shot"
+ ],
+ [
+ "aps",
+ "hot"
+ ],
+ [
+ "▁t",
+ "ail"
+ ],
+ [
+ "▁ta",
+ "il"
+ ],
+ [
+ "▁",
+ "tail"
+ ],
+ [
+ "mod",
+ "els"
+ ],
+ [
+ "model",
+ "s"
+ ],
+ [
+ "mode",
+ "ls"
+ ],
+ [
+ "og",
+ "y"
+ ],
+ [
+ "o",
+ "gy"
+ ],
+ [
+ "▁J",
+ "eff"
+ ],
+ [
+ "▁Je",
+ "ff"
+ ],
+ [
+ "ir",
+ "ation"
+ ],
+ [
+ "irat",
+ "ion"
+ ],
+ [
+ "▁K",
+ "ore"
+ ],
+ [
+ "▁Ko",
+ "re"
+ ],
+ [
+ "▁Kor",
+ "e"
+ ],
+ [
+ "▁le",
+ "ads"
+ ],
+ [
+ "▁lead",
+ "s"
+ ],
+ [
+ "ba",
+ "t"
+ ],
+ [
+ "b",
+ "at"
+ ],
+ [
+ "Ad",
+ "apter"
+ ],
+ [
+ "c",
+ "ategory"
+ ],
+ [
+ "ang",
+ "ular"
+ ],
+ [
+ "angu",
+ "lar"
+ ],
+ [
+ "▁s",
+ "aved"
+ ],
+ [
+ "▁sa",
+ "ved"
+ ],
+ [
+ "▁save",
+ "d"
+ ],
+ [
+ "▁sav",
+ "ed"
+ ],
+ [
+ "▁",
+ "saved"
+ ],
+ [
+ "▁un",
+ "iform"
+ ],
+ [
+ "▁",
+ "uniform"
+ ],
+ [
+ "▁n",
+ "é"
+ ],
+ [
+ "▁",
+ "né"
+ ],
+ [
+ "▁business",
+ "es"
+ ],
+ [
+ "His",
+ "t"
+ ],
+ [
+ "Hi",
+ "st"
+ ],
+ [
+ "H",
+ "ist"
+ ],
+ [
+ "▁а",
+ "р"
+ ],
+ [
+ "▁",
+ "ар"
+ ],
+ [
+ "do",
+ "main"
+ ],
+ [
+ "dom",
+ "ain"
+ ],
+ [
+ "▁S",
+ "i"
+ ],
+ [
+ "▁",
+ "Si"
+ ],
+ [
+ "ra",
+ "ise"
+ ],
+ [
+ "rais",
+ "e"
+ ],
+ [
+ "rai",
+ "se"
+ ],
+ [
+ "r",
+ "aise"
+ ],
+ [
+ "▁w",
+ "arn"
+ ],
+ [
+ "▁war",
+ "n"
+ ],
+ [
+ "▁wa",
+ "rn"
+ ],
+ [
+ "▁",
+ "warn"
+ ],
+ [
+ "het",
+ "ic"
+ ],
+ [
+ "h",
+ "etic"
+ ],
+ [
+ "▁G",
+ "ro"
+ ],
+ [
+ "▁Gr",
+ "o"
+ ],
+ [
+ "))",
+ "."
+ ],
+ [
+ ")",
+ ")."
+ ],
+ [
+ "}",
+ ">"
+ ],
+ [
+ "з",
+ "е"
+ ],
+ [
+ "▁Amaz",
+ "on"
+ ],
+ [
+ "▁Or",
+ "gan"
+ ],
+ [
+ "▁",
+ "Organ"
+ ],
+ [
+ "▁L",
+ "ake"
+ ],
+ [
+ "▁La",
+ "ke"
+ ],
+ [
+ "▁ag",
+ "reement"
+ ],
+ [
+ "▁agree",
+ "ment"
+ ],
+ [
+ "▁agre",
+ "ement"
+ ],
+ [
+ "x",
+ "a"
+ ],
+ [
+ "▁p",
+ "erman"
+ ],
+ [
+ "▁per",
+ "man"
+ ],
+ [
+ "▁perm",
+ "an"
+ ],
+ [
+ "▁cont",
+ "aining"
+ ],
+ [
+ "▁contain",
+ "ing"
+ ],
+ [
+ "▁st",
+ "range"
+ ],
+ [
+ "▁str",
+ "ange"
+ ],
+ [
+ "▁strang",
+ "e"
+ ],
+ [
+ "ст",
+ "і"
+ ],
+ [
+ "с",
+ "ті"
+ ],
+ [
+ "▁st",
+ "upid"
+ ],
+ [
+ "▁spe",
+ "aking"
+ ],
+ [
+ "▁speak",
+ "ing"
+ ],
+ [
+ "▁Intern",
+ "et"
+ ],
+ [
+ "▁Inter",
+ "net"
+ ],
+ [
+ "pre",
+ "fix"
+ ],
+ [
+ "pref",
+ "ix"
+ ],
+ [
+ "p",
+ "refix"
+ ],
+ [
+ "es",
+ "c"
+ ],
+ [
+ "e",
+ "sc"
+ ],
+ [
+ "As",
+ "sert"
+ ],
+ [
+ "Ass",
+ "ert"
+ ],
+ [
+ "pro",
+ "te"
+ ],
+ [
+ "pr",
+ "ote"
+ ],
+ [
+ "prot",
+ "e"
+ ],
+ [
+ "p",
+ "rote"
+ ],
+ [
+ "▁m",
+ "anner"
+ ],
+ [
+ "▁man",
+ "ner"
+ ],
+ [
+ "▁S",
+ "z"
+ ],
+ [
+ "un",
+ "te"
+ ],
+ [
+ "unt",
+ "e"
+ ],
+ [
+ "u",
+ "nte"
+ ],
+ [
+ "io",
+ "t"
+ ],
+ [
+ "i",
+ "ot"
+ ],
+ [
+ "Pro",
+ "file"
+ ],
+ [
+ "ov",
+ "en"
+ ],
+ [
+ "ove",
+ "n"
+ ],
+ [
+ "o",
+ "ven"
+ ],
+ [
+ "▁for",
+ "med"
+ ],
+ [
+ "▁form",
+ "ed"
+ ],
+ [
+ "▁forme",
+ "d"
+ ],
+ [
+ "▁",
+ "formed"
+ ],
+ [
+ "▁l",
+ "it"
+ ],
+ [
+ "▁li",
+ "t"
+ ],
+ [
+ "▁",
+ "lit"
+ ],
+ [
+ "▁econom",
+ "y"
+ ],
+ [
+ "▁ec",
+ "onomy"
+ ],
+ [
+ "▁c",
+ "z"
+ ],
+ [
+ "▁",
+ "cz"
+ ],
+ [
+ "wi",
+ "d"
+ ],
+ [
+ "w",
+ "id"
+ ],
+ [
+ "RE",
+ "Q"
+ ],
+ [
+ "R",
+ "EQ"
+ ],
+ [
+ "▁ch",
+ "osen"
+ ],
+ [
+ "▁cho",
+ "sen"
+ ],
+ [
+ "▁chose",
+ "n"
+ ],
+ [
+ "▁P",
+ "rodu"
+ ],
+ [
+ "▁Pro",
+ "du"
+ ],
+ [
+ "▁",
+ "Produ"
+ ],
+ [
+ "os",
+ "ter"
+ ],
+ [
+ "ost",
+ "er"
+ ],
+ [
+ "o",
+ "ster"
+ ],
+ [
+ "st",
+ "ances"
+ ],
+ [
+ "stance",
+ "s"
+ ],
+ [
+ "stan",
+ "ces"
+ ],
+ [
+ "aw",
+ "a"
+ ],
+ [
+ "a",
+ "wa"
+ ],
+ [
+ "▁R",
+ "en"
+ ],
+ [
+ "▁Re",
+ "n"
+ ],
+ [
+ "▁conf",
+ "irm"
+ ],
+ [
+ "▁",
+ "confirm"
+ ],
+ [
+ "▁Б",
+ "о"
+ ],
+ [
+ "▁b",
+ "illion"
+ ],
+ [
+ "▁bill",
+ "ion"
+ ],
+ [
+ "▁d",
+ "éc"
+ ],
+ [
+ "▁dé",
+ "c"
+ ],
+ [
+ "ý",
+ "ch"
+ ],
+ [
+ "▁ill",
+ "ustr"
+ ],
+ [
+ "TI",
+ "ES"
+ ],
+ [
+ "T",
+ "IES"
+ ],
+ [
+ "▁P",
+ "ub"
+ ],
+ [
+ "▁Pu",
+ "b"
+ ],
+ [
+ "▁",
+ "Pub"
+ ],
+ [
+ "▁b",
+ "an"
+ ],
+ [
+ "▁ba",
+ "n"
+ ],
+ [
+ "▁",
+ "ban"
+ ],
+ [
+ "ad",
+ "ed"
+ ],
+ [
+ "ade",
+ "d"
+ ],
+ [
+ "a",
+ "ded"
+ ],
+ [
+ "ah",
+ "n"
+ ],
+ [
+ "a",
+ "hn"
+ ],
+ [
+ "▁C",
+ "ath"
+ ],
+ [
+ "▁Cat",
+ "h"
+ ],
+ [
+ "▁Ca",
+ "th"
+ ],
+ [
+ "no",
+ "number"
+ ],
+ [
+ "non",
+ "umber"
+ ],
+ [
+ "▁wor",
+ "st"
+ ],
+ [
+ "▁М",
+ "е"
+ ],
+ [
+ "▁sugg",
+ "ested"
+ ],
+ [
+ "▁suggest",
+ "ed"
+ ],
+ [
+ "st",
+ "ats"
+ ],
+ [
+ "stat",
+ "s"
+ ],
+ [
+ "sta",
+ "ts"
+ ],
+ [
+ "▁c",
+ "ant"
+ ],
+ [
+ "▁can",
+ "t"
+ ],
+ [
+ "▁ca",
+ "nt"
+ ],
+ [
+ "▁al",
+ "ign"
+ ],
+ [
+ "▁ali",
+ "gn"
+ ],
+ [
+ "▁",
+ "align"
+ ],
+ [
+ "kap",
+ "pa"
+ ],
+ [
+ "k",
+ "appa"
+ ],
+ [
+ "▁h",
+ "en"
+ ],
+ [
+ "▁he",
+ "n"
+ ],
+ [
+ "▁",
+ "hen"
+ ],
+ [
+ "▁in",
+ "iti"
+ ],
+ [
+ "▁init",
+ "i"
+ ],
+ [
+ "']",
+ ")"
+ ],
+ [
+ "'",
+ "])"
+ ],
+ [
+ "B",
+ "I"
+ ],
+ [
+ "▁g",
+ "arden"
+ ],
+ [
+ "▁gar",
+ "den"
+ ],
+ [
+ "▁gard",
+ "en"
+ ],
+ [
+ "▁sec",
+ "ure"
+ ],
+ [
+ "▁secur",
+ "e"
+ ],
+ [
+ "▁",
+ "secure"
+ ],
+ [
+ "▁\\",
+ "["
+ ],
+ [
+ "▁",
+ "\\["
+ ],
+ [
+ "hand",
+ "ler"
+ ],
+ [
+ "handle",
+ "r"
+ ],
+ [
+ "el",
+ "li"
+ ],
+ [
+ "ell",
+ "i"
+ ],
+ [
+ "e",
+ "lli"
+ ],
+ [
+ "ld",
+ "ots"
+ ],
+ [
+ "l",
+ "dots"
+ ],
+ [
+ "se",
+ "cut"
+ ],
+ [
+ "sec",
+ "ut"
+ ],
+ [
+ "s",
+ "ecut"
+ ],
+ [
+ "▁ext",
+ "ended"
+ ],
+ [
+ "▁extend",
+ "ed"
+ ],
+ [
+ "}",
+ "-"
+ ],
+ [
+ "an",
+ "ie"
+ ],
+ [
+ "ani",
+ "e"
+ ],
+ [
+ "a",
+ "nie"
+ ],
+ [
+ "▁F",
+ "ind"
+ ],
+ [
+ "▁Fin",
+ "d"
+ ],
+ [
+ "▁Fi",
+ "nd"
+ ],
+ [
+ "▁",
+ "Find"
+ ],
+ [
+ "▁M",
+ "useum"
+ ],
+ [
+ "▁Muse",
+ "um"
+ ],
+ [
+ "▁C",
+ "onne"
+ ],
+ [
+ "▁Con",
+ "ne"
+ ],
+ [
+ "▁",
+ "Conne"
+ ],
+ [
+ "y",
+ "y"
+ ],
+ [
+ "▁pass",
+ "ion"
+ ],
+ [
+ "ak",
+ "ers"
+ ],
+ [
+ "ake",
+ "rs"
+ ],
+ [
+ "aker",
+ "s"
+ ],
+ [
+ "a",
+ "kers"
+ ],
+ [
+ "ah",
+ "r"
+ ],
+ [
+ "a",
+ "hr"
+ ],
+ [
+ "olog",
+ "ies"
+ ],
+ [
+ "ologie",
+ "s"
+ ],
+ [
+ "▁equ",
+ "ation"
+ ],
+ [
+ "▁eq",
+ "uation"
+ ],
+ [
+ "▁",
+ "equation"
+ ],
+ [
+ "▁occ",
+ "asion"
+ ],
+ [
+ "▁occas",
+ "ion"
+ ],
+ [
+ "Le",
+ "t"
+ ],
+ [
+ "L",
+ "et"
+ ],
+ [
+ "']",
+ "['"
+ ],
+ [
+ "'][",
+ "'"
+ ],
+ [
+ "'",
+ "]['"
+ ],
+ [
+ "Pr",
+ "int"
+ ],
+ [
+ "an",
+ "es"
+ ],
+ [
+ "ane",
+ "s"
+ ],
+ [
+ "a",
+ "nes"
+ ],
+ [
+ "ie",
+ "nte"
+ ],
+ [
+ "ient",
+ "e"
+ ],
+ [
+ "ien",
+ "te"
+ ],
+ [
+ "i",
+ "ente"
+ ],
+ [
+ "▁T",
+ "oday"
+ ],
+ [
+ "▁To",
+ "day"
+ ],
+ [
+ "▁Tod",
+ "ay"
+ ],
+ [
+ "LE",
+ "CT"
+ ],
+ [
+ "L",
+ "ECT"
+ ],
+ [
+ "▁A",
+ "f"
+ ],
+ [
+ "▁",
+ "Af"
+ ],
+ [
+ ",",
+ ","
+ ],
+ [
+ "▁Т",
+ "а"
+ ],
+ [
+ "▁`",
+ "``"
+ ],
+ [
+ "▁``",
+ "`"
+ ],
+ [
+ "ev",
+ "en"
+ ],
+ [
+ "eve",
+ "n"
+ ],
+ [
+ "e",
+ "ven"
+ ],
+ [
+ "si",
+ "n"
+ ],
+ [
+ "s",
+ "in"
+ ],
+ [
+ "ur",
+ "er"
+ ],
+ [
+ "ure",
+ "r"
+ ],
+ [
+ "u",
+ "rer"
+ ],
+ [
+ "▁",
+ "°"
+ ],
+ [
+ "ot",
+ "imes"
+ ],
+ [
+ "oti",
+ "mes"
+ ],
+ [
+ "o",
+ "times"
+ ],
+ [
+ "▁I",
+ "O"
+ ],
+ [
+ "▁",
+ "IO"
+ ],
+ [
+ "▁po",
+ "et"
+ ],
+ [
+ "()",
+ "));"
+ ],
+ [
+ "())",
+ ");"
+ ],
+ [
+ "()))",
+ ";"
+ ],
+ [
+ "(",
+ ")));"
+ ],
+ [
+ "▁",
+ "−"
+ ],
+ [
+ "▁ad",
+ "opt"
+ ],
+ [
+ "ph",
+ "ere"
+ ],
+ [
+ "pher",
+ "e"
+ ],
+ [
+ "p",
+ "here"
+ ],
+ [
+ "#",
+ "["
+ ],
+ [
+ "▁c",
+ "entre"
+ ],
+ [
+ "▁cent",
+ "re"
+ ],
+ [
+ "ov",
+ "es"
+ ],
+ [
+ "ove",
+ "s"
+ ],
+ [
+ "o",
+ "ves"
+ ],
+ [
+ "▁a",
+ "ns"
+ ],
+ [
+ "▁an",
+ "s"
+ ],
+ [
+ "▁",
+ "ans"
+ ],
+ [
+ "d",
+ "p"
+ ],
+ [
+ "▁K",
+ "ir"
+ ],
+ [
+ "▁Ki",
+ "r"
+ ],
+ [
+ "▁applic",
+ "able"
+ ],
+ [
+ "f",
+ "p"
+ ],
+ [
+ "▁vis",
+ "ual"
+ ],
+ [
+ "▁ok",
+ "ay"
+ ],
+ [
+ "or",
+ "o"
+ ],
+ [
+ "o",
+ "ro"
+ ],
+ [
+ "▁opportun",
+ "ities"
+ ],
+ [
+ "Re",
+ "pository"
+ ],
+ [
+ "Rep",
+ "ository"
+ ],
+ [
+ "▁l",
+ "l"
+ ],
+ [
+ "▁",
+ "ll"
+ ],
+ [
+ "▁R",
+ "od"
+ ],
+ [
+ "▁Ro",
+ "d"
+ ],
+ [
+ "▁s",
+ "hel"
+ ],
+ [
+ "▁sh",
+ "el"
+ ],
+ [
+ "▁she",
+ "l"
+ ],
+ [
+ "▁la",
+ "unch"
+ ],
+ [
+ "▁con",
+ "ven"
+ ],
+ [
+ "▁conv",
+ "en"
+ ],
+ [
+ "▁conve",
+ "n"
+ ],
+ [
+ "▁S",
+ "pe"
+ ],
+ [
+ "▁Sp",
+ "e"
+ ],
+ [
+ "▁",
+ "Spe"
+ ],
+ [
+ "Am",
+ "er"
+ ],
+ [
+ "A",
+ "mer"
+ ],
+ [
+ "▁c",
+ "ette"
+ ],
+ [
+ "▁cet",
+ "te"
+ ],
+ [
+ "Con",
+ "d"
+ ],
+ [
+ "Co",
+ "nd"
+ ],
+ [
+ "C",
+ "ond"
+ ],
+ [
+ "de",
+ "p"
+ ],
+ [
+ "d",
+ "ep"
+ ],
+ [
+ "O",
+ "wn"
+ ],
+ [
+ "▁h",
+ "ook"
+ ],
+ [
+ "▁ho",
+ "ok"
+ ],
+ [
+ "▁",
+ "hook"
+ ],
+ [
+ "▁d",
+ "ict"
+ ],
+ [
+ "▁di",
+ "ct"
+ ],
+ [
+ "▁dic",
+ "t"
+ ],
+ [
+ "▁",
+ "dict"
+ ],
+ [
+ "▁Th",
+ "ose"
+ ],
+ [
+ "▁f",
+ "ellow"
+ ],
+ [
+ "▁fell",
+ "ow"
+ ],
+ [
+ "▁fel",
+ "low"
+ ],
+ [
+ "▁phil",
+ "osoph"
+ ],
+ [
+ "▁philos",
+ "oph"
+ ],
+ [
+ "vi",
+ "n"
+ ],
+ [
+ "v",
+ "in"
+ ],
+ [
+ "fer",
+ "ences"
+ ],
+ [
+ "ference",
+ "s"
+ ],
+ [
+ "ha",
+ "v"
+ ],
+ [
+ "h",
+ "av"
+ ],
+ [
+ "▁ad",
+ "ding"
+ ],
+ [
+ "▁add",
+ "ing"
+ ],
+ [
+ "▁",
+ "adding"
+ ],
+ [
+ "ivers",
+ "e"
+ ],
+ [
+ "iver",
+ "se"
+ ],
+ [
+ "i",
+ "verse"
+ ],
+ [
+ "ga",
+ "me"
+ ],
+ [
+ "g",
+ "ame"
+ ],
+ [
+ "▁Bl",
+ "ue"
+ ],
+ [
+ "▁",
+ "Blue"
+ ],
+ [
+ "▁c",
+ "lin"
+ ],
+ [
+ "▁cl",
+ "in"
+ ],
+ [
+ "not",
+ "e"
+ ],
+ [
+ "no",
+ "te"
+ ],
+ [
+ "n",
+ "ote"
+ ],
+ [
+ "▁R",
+ "am"
+ ],
+ [
+ "▁Ra",
+ "m"
+ ],
+ [
+ "ме",
+ "р"
+ ],
+ [
+ "м",
+ "ер"
+ ],
+ [
+ "co",
+ "very"
+ ],
+ [
+ "cover",
+ "y"
+ ],
+ [
+ "cov",
+ "ery"
+ ],
+ [
+ "c",
+ "overy"
+ ],
+ [
+ "ñ",
+ "a"
+ ],
+ [
+ "▁б",
+ "и"
+ ],
+ [
+ "▁",
+ "би"
+ ],
+ [
+ "▁f",
+ "ashion"
+ ],
+ [
+ "▁b",
+ "roke"
+ ],
+ [
+ "▁br",
+ "oke"
+ ],
+ [
+ "▁bro",
+ "ke"
+ ],
+ [
+ "▁'",
+ "\\"
+ ],
+ [
+ "▁",
+ "'\\"
+ ],
+ [
+ "▁re",
+ "ader"
+ ],
+ [
+ "▁read",
+ "er"
+ ],
+ [
+ "▁",
+ "reader"
+ ],
+ [
+ "но",
+ "е"
+ ],
+ [
+ "но",
+ "сти"
+ ],
+ [
+ "ност",
+ "и"
+ ],
+ [
+ "▁pay",
+ "ment"
+ ],
+ [
+ "▁",
+ "payment"
+ ],
+ [
+ "▁L",
+ "ic"
+ ],
+ [
+ "▁Li",
+ "c"
+ ],
+ [
+ "▁l",
+ "ips"
+ ],
+ [
+ "▁li",
+ "ps"
+ ],
+ [
+ "▁lip",
+ "s"
+ ],
+ [
+ "▁ac",
+ "adem"
+ ],
+ [
+ "▁M",
+ "ot"
+ ],
+ [
+ "▁Mo",
+ "t"
+ ],
+ [
+ "el",
+ "ls"
+ ],
+ [
+ "ell",
+ "s"
+ ],
+ [
+ "C",
+ "HECK"
+ ],
+ [
+ "▁р",
+ "у"
+ ],
+ [
+ "▁",
+ "ру"
+ ],
+ [
+ "▁M",
+ "S"
+ ],
+ [
+ "▁",
+ "MS"
+ ],
+ [
+ "Ed",
+ "itor"
+ ],
+ [
+ "Edit",
+ "or"
+ ],
+ [
+ "▁z",
+ "one"
+ ],
+ [
+ "▁zo",
+ "ne"
+ ],
+ [
+ "▁",
+ "zone"
+ ],
+ [
+ "it",
+ "ure"
+ ],
+ [
+ "itu",
+ "re"
+ ],
+ [
+ "▁I",
+ "T"
+ ],
+ [
+ "▁",
+ "IT"
+ ],
+ [
+ "run",
+ "time"
+ ],
+ [
+ "▁pro",
+ "ceed"
+ ],
+ [
+ "▁proc",
+ "eed"
+ ],
+ [
+ "ло",
+ "в"
+ ],
+ [
+ "л",
+ "ов"
+ ],
+ [
+ "▁M",
+ "aria"
+ ],
+ [
+ "▁Mar",
+ "ia"
+ ],
+ [
+ "▁Ma",
+ "ria"
+ ],
+ [
+ "ol",
+ "ver"
+ ],
+ [
+ "olve",
+ "r"
+ ],
+ [
+ "olv",
+ "er"
+ ],
+ [
+ "▁Th",
+ "anks"
+ ],
+ [
+ "▁Thank",
+ "s"
+ ],
+ [
+ "▁",
+ "Thanks"
+ ],
+ [
+ "▁should",
+ "n"
+ ],
+ [
+ "▁J",
+ "oh"
+ ],
+ [
+ "▁Jo",
+ "h"
+ ],
+ [
+ "▁Mod",
+ "el"
+ ],
+ [
+ "▁Mo",
+ "del"
+ ],
+ [
+ "▁Mode",
+ "l"
+ ],
+ [
+ "▁",
+ "Model"
+ ],
+ [
+ "▁S",
+ "ov"
+ ],
+ [
+ "▁So",
+ "v"
+ ],
+ [
+ "!",
+ "'"
+ ],
+ [
+ "D",
+ "i"
+ ],
+ [
+ "▁c",
+ "ancer"
+ ],
+ [
+ "▁can",
+ "cer"
+ ],
+ [
+ "Id",
+ "ent"
+ ],
+ [
+ "▁ex",
+ "change"
+ ],
+ [
+ "il",
+ "ler"
+ ],
+ [
+ "ill",
+ "er"
+ ],
+ [
+ "ille",
+ "r"
+ ],
+ [
+ "in",
+ "f"
+ ],
+ [
+ "i",
+ "nf"
+ ],
+ [
+ "LE",
+ "N"
+ ],
+ [
+ "L",
+ "EN"
+ ],
+ [
+ "()",
+ "{"
+ ],
+ [
+ "(",
+ "){"
+ ],
+ [
+ "ag",
+ "a"
+ ],
+ [
+ "a",
+ "ga"
+ ],
+ [
+ "\"]",
+ ","
+ ],
+ [
+ "\"",
+ "],"
+ ],
+ [
+ "u",
+ "h"
+ ],
+ [
+ "▁K",
+ "en"
+ ],
+ [
+ "▁Ke",
+ "n"
+ ],
+ [
+ "▁ph",
+ "otos"
+ ],
+ [
+ "▁phot",
+ "os"
+ ],
+ [
+ "▁photo",
+ "s"
+ ],
+ [
+ "▁t",
+ "iny"
+ ],
+ [
+ "▁ti",
+ "ny"
+ ],
+ [
+ "▁tin",
+ "y"
+ ],
+ [
+ "▁",
+ "tiny"
+ ],
+ [
+ "▁g",
+ "ent"
+ ],
+ [
+ "▁gen",
+ "t"
+ ],
+ [
+ "▁ge",
+ "nt"
+ ],
+ [
+ "▁",
+ "gent"
+ ],
+ [
+ "ü",
+ "l"
+ ],
+ [
+ "▁T",
+ "ake"
+ ],
+ [
+ "▁Ta",
+ "ke"
+ ],
+ [
+ "▁Tak",
+ "e"
+ ],
+ [
+ "▁",
+ "Take"
+ ],
+ [
+ "id",
+ "el"
+ ],
+ [
+ "ide",
+ "l"
+ ],
+ [
+ "i",
+ "del"
+ ],
+ [
+ "ou",
+ "ting"
+ ],
+ [
+ "out",
+ "ing"
+ ],
+ [
+ "In",
+ "ternal"
+ ],
+ [
+ "Inter",
+ "nal"
+ ],
+ [
+ "Intern",
+ "al"
+ ],
+ [
+ "▁c",
+ "ells"
+ ],
+ [
+ "▁cell",
+ "s"
+ ],
+ [
+ "▁cel",
+ "ls"
+ ],
+ [
+ "ни",
+ "м"
+ ],
+ [
+ "н",
+ "им"
+ ],
+ [
+ "ha",
+ "rd"
+ ],
+ [
+ "har",
+ "d"
+ ],
+ [
+ "h",
+ "ard"
+ ],
+ [
+ "▁T",
+ "own"
+ ],
+ [
+ "▁To",
+ "wn"
+ ],
+ [
+ "▁Tow",
+ "n"
+ ],
+ [
+ "ob",
+ "e"
+ ],
+ [
+ "o",
+ "be"
+ ],
+ [
+ "pl",
+ "ex"
+ ],
+ [
+ "ple",
+ "x"
+ ],
+ [
+ "p",
+ "lex"
+ ],
+ [
+ "те",
+ "р"
+ ],
+ [
+ "т",
+ "ер"
+ ],
+ [
+ "to",
+ "ns"
+ ],
+ [
+ "ton",
+ "s"
+ ],
+ [
+ "t",
+ "ons"
+ ],
+ [
+ "▁conc",
+ "entr"
+ ],
+ [
+ "▁concent",
+ "r"
+ ],
+ [
+ "mo",
+ "ck"
+ ],
+ [
+ "m",
+ "ock"
+ ],
+ [
+ "v",
+ "c"
+ ],
+ [
+ "á",
+ "z"
+ ],
+ [
+ "▁Ch",
+ "ampionship"
+ ],
+ [
+ "▁Champion",
+ "ship"
+ ],
+ [
+ "▁Champions",
+ "hip"
+ ],
+ [
+ "▁б",
+ "е"
+ ],
+ [
+ "▁",
+ "бе"
+ ],
+ [
+ "?",
+ "?"
+ ],
+ [
+ "ér",
+ "i"
+ ],
+ [
+ "é",
+ "ri"
+ ],
+ [
+ "al",
+ "y"
+ ],
+ [
+ "a",
+ "ly"
+ ],
+ [
+ "▁",
+ "Ц"
+ ],
+ [
+ "ier",
+ "te"
+ ],
+ [
+ "iert",
+ "e"
+ ],
+ [
+ "▁tot",
+ "ally"
+ ],
+ [
+ "▁total",
+ "ly"
+ ],
+ [
+ "▁A",
+ "uf"
+ ],
+ [
+ "▁Au",
+ "f"
+ ],
+ [
+ "▁our",
+ "selves"
+ ],
+ [
+ "▁S",
+ "elf"
+ ],
+ [
+ "▁Sel",
+ "f"
+ ],
+ [
+ "▁",
+ "Self"
+ ],
+ [
+ "Form",
+ "s"
+ ],
+ [
+ "For",
+ "ms"
+ ],
+ [
+ "ight",
+ "er"
+ ],
+ [
+ "igh",
+ "ter"
+ ],
+ [
+ "▁is",
+ "land"
+ ],
+ [
+ "fm",
+ "t"
+ ],
+ [
+ "f",
+ "mt"
+ ],
+ [
+ "▁r",
+ "c"
+ ],
+ [
+ "▁",
+ "rc"
+ ],
+ [
+ "▁t",
+ "ells"
+ ],
+ [
+ "▁tell",
+ "s"
+ ],
+ [
+ "▁tel",
+ "ls"
+ ],
+ [
+ "B",
+ "B"
+ ],
+ [
+ "di",
+ "t"
+ ],
+ [
+ "d",
+ "it"
+ ],
+ [
+ "▁vari",
+ "ables"
+ ],
+ [
+ "▁variable",
+ "s"
+ ],
+ [
+ "▁",
+ "variables"
+ ],
+ [
+ "▁int",
+ "ended"
+ ],
+ [
+ "▁intend",
+ "ed"
+ ],
+ [
+ "iz",
+ "ont"
+ ],
+ [
+ "izon",
+ "t"
+ ],
+ [
+ "izo",
+ "nt"
+ ],
+ [
+ "▁pl",
+ "ays"
+ ],
+ [
+ "▁play",
+ "s"
+ ],
+ [
+ "da",
+ "m"
+ ],
+ [
+ "d",
+ "am"
+ ],
+ [
+ "se",
+ "q"
+ ],
+ [
+ "s",
+ "eq"
+ ],
+ [
+ "▁S",
+ "up"
+ ],
+ [
+ "▁Su",
+ "p"
+ ],
+ [
+ "▁",
+ "Sup"
+ ],
+ [
+ "▁c",
+ "ultural"
+ ],
+ [
+ "▁cult",
+ "ural"
+ ],
+ [
+ "▁sc",
+ "ream"
+ ],
+ [
+ "__",
+ ","
+ ],
+ [
+ "_",
+ "_,"
+ ],
+ [
+ "ci",
+ "pl"
+ ],
+ [
+ "cip",
+ "l"
+ ],
+ [
+ "Time",
+ "out"
+ ],
+ [
+ "▁",
+ "ж"
+ ],
+ [
+ "or",
+ "te"
+ ],
+ [
+ "ort",
+ "e"
+ ],
+ [
+ "▁repl",
+ "aced"
+ ],
+ [
+ "▁replace",
+ "d"
+ ],
+ [
+ "E",
+ "M"
+ ],
+ [
+ "▁ab",
+ "andon"
+ ],
+ [
+ "▁Spec",
+ "ial"
+ ],
+ [
+ "▁Spe",
+ "cial"
+ ],
+ [
+ "▁",
+ "Special"
+ ],
+ [
+ "el",
+ "len"
+ ],
+ [
+ "ell",
+ "en"
+ ],
+ [
+ "elle",
+ "n"
+ ],
+ [
+ "▁B",
+ "ru"
+ ],
+ [
+ "▁Br",
+ "u"
+ ],
+ [
+ "ir",
+ "med"
+ ],
+ [
+ "irm",
+ "ed"
+ ],
+ [
+ "T",
+ "e"
+ ],
+ [
+ "ol",
+ "t"
+ ],
+ [
+ "o",
+ "lt"
+ ],
+ [
+ "j",
+ "u"
+ ],
+ [
+ "Arg",
+ "ument"
+ ],
+ [
+ "▁ne",
+ "ut"
+ ],
+ [
+ "▁neu",
+ "t"
+ ],
+ [
+ "▁",
+ "neut"
+ ],
+ [
+ "sc",
+ "ape"
+ ],
+ [
+ "▁R",
+ "ay"
+ ],
+ [
+ "▁Ra",
+ "y"
+ ],
+ [
+ "▁",
+ "Ray"
+ ],
+ [
+ "▁Pol",
+ "it"
+ ],
+ [
+ "▁Po",
+ "lit"
+ ],
+ [
+ "▁crow",
+ "d"
+ ],
+ [
+ "▁cro",
+ "wd"
+ ],
+ [
+ "▁Window",
+ "s"
+ ],
+ [
+ "▁Wind",
+ "ows"
+ ],
+ [
+ "▁",
+ "Windows"
+ ],
+ [
+ "ie",
+ "go"
+ ],
+ [
+ "ieg",
+ "o"
+ ],
+ [
+ "i",
+ "ego"
+ ],
+ [
+ "▁e",
+ "scape"
+ ],
+ [
+ "▁esc",
+ "ape"
+ ],
+ [
+ "▁",
+ "escape"
+ ],
+ [
+ "▁Ap",
+ "ache"
+ ],
+ [
+ "sy",
+ "nc"
+ ],
+ [
+ "syn",
+ "c"
+ ],
+ [
+ "s",
+ "ync"
+ ],
+ [
+ "eb",
+ "en"
+ ],
+ [
+ "e",
+ "ben"
+ ],
+ [
+ "if",
+ "ies"
+ ],
+ [
+ "ifi",
+ "es"
+ ],
+ [
+ "et",
+ "her"
+ ],
+ [
+ "eth",
+ "er"
+ ],
+ [
+ "ethe",
+ "r"
+ ],
+ [
+ "e",
+ "ther"
+ ],
+ [
+ "Met",
+ "a"
+ ],
+ [
+ "Me",
+ "ta"
+ ],
+ [
+ "M",
+ "eta"
+ ],
+ [
+ "▁big",
+ "gest"
+ ],
+ [
+ "Ga",
+ "me"
+ ],
+ [
+ "G",
+ "ame"
+ ],
+ [
+ "▁trans",
+ "action"
+ ],
+ [
+ "▁",
+ "transaction"
+ ],
+ [
+ "En",
+ "v"
+ ],
+ [
+ "E",
+ "nv"
+ ],
+ [
+ "▁М",
+ "о"
+ ],
+ [
+ "▁pl",
+ "enty"
+ ],
+ [
+ "▁m",
+ "el"
+ ],
+ [
+ "▁me",
+ "l"
+ ],
+ [
+ "▁",
+ "mel"
+ ],
+ [
+ "п",
+ "ре"
+ ],
+ [
+ "▁mot",
+ "iv"
+ ],
+ [
+ "▁о",
+ "р"
+ ],
+ [
+ "▁",
+ "ор"
+ ],
+ [
+ "or",
+ "gan"
+ ],
+ [
+ "org",
+ "an"
+ ],
+ [
+ "▁m",
+ "ock"
+ ],
+ [
+ "▁mo",
+ "ck"
+ ],
+ [
+ "▁",
+ "mock"
+ ],
+ [
+ "▁$",
+ "_"
+ ],
+ [
+ "▁",
+ "$_"
+ ],
+ [
+ "ен",
+ "е"
+ ],
+ [
+ "е",
+ "не"
+ ],
+ [
+ "▁N",
+ "umber"
+ ],
+ [
+ "▁Num",
+ "ber"
+ ],
+ [
+ "▁Nu",
+ "mber"
+ ],
+ [
+ "▁",
+ "Number"
+ ],
+ [
+ "ck",
+ "now"
+ ],
+ [
+ "c",
+ "know"
+ ],
+ [
+ "▁Up",
+ "date"
+ ],
+ [
+ "▁",
+ "Update"
+ ],
+ [
+ "ze",
+ "ro"
+ ],
+ [
+ "zer",
+ "o"
+ ],
+ [
+ "z",
+ "ero"
+ ],
+ [
+ "▁sur",
+ "prise"
+ ],
+ [
+ "▁surpr",
+ "ise"
+ ],
+ [
+ "ce",
+ "an"
+ ],
+ [
+ "pd",
+ "f"
+ ],
+ [
+ "p",
+ "df"
+ ],
+ [
+ "Gl",
+ "obal"
+ ],
+ [
+ "▁att",
+ "end"
+ ],
+ [
+ "▁f",
+ "ond"
+ ],
+ [
+ "▁fo",
+ "nd"
+ ],
+ [
+ "▁fon",
+ "d"
+ ],
+ [
+ "▁under",
+ "stood"
+ ],
+ [
+ "Na",
+ "v"
+ ],
+ [
+ "N",
+ "av"
+ ],
+ [
+ "▁M",
+ "ic"
+ ],
+ [
+ "▁Mi",
+ "c"
+ ],
+ [
+ "▁",
+ "Mic"
+ ],
+ [
+ "=",
+ "$"
+ ],
+ [
+ "ok",
+ "ing"
+ ],
+ [
+ "oki",
+ "ng"
+ ],
+ [
+ "o",
+ "king"
+ ],
+ [
+ "▁Stad",
+ "ium"
+ ],
+ [
+ "Cl",
+ "ose"
+ ],
+ [
+ "▁compet",
+ "ition"
+ ],
+ [
+ "▁sold",
+ "iers"
+ ],
+ [
+ "▁soldier",
+ "s"
+ ],
+ [
+ "▁O",
+ "P"
+ ],
+ [
+ "▁",
+ "OP"
+ ],
+ [
+ "ag",
+ "ne"
+ ],
+ [
+ "agn",
+ "e"
+ ],
+ [
+ "▁An",
+ "ton"
+ ],
+ [
+ "▁Ant",
+ "on"
+ ],
+ [
+ "Ma",
+ "in"
+ ],
+ [
+ "M",
+ "ain"
+ ],
+ [
+ "á",
+ "k"
+ ],
+ [
+ "▁#",
+ "["
+ ],
+ [
+ "▁",
+ "#["
+ ],
+ [
+ "▁Com",
+ "mit"
+ ],
+ [
+ "▁Comm",
+ "it"
+ ],
+ [
+ "▁",
+ "Commit"
+ ],
+ [
+ "py",
+ "x"
+ ],
+ [
+ "▁e",
+ "ast"
+ ],
+ [
+ "▁eas",
+ "t"
+ ],
+ [
+ "▁",
+ "east"
+ ],
+ [
+ "▁Or",
+ "der"
+ ],
+ [
+ "▁Ord",
+ "er"
+ ],
+ [
+ "▁",
+ "Order"
+ ],
+ [
+ "F",
+ "loat"
+ ],
+ [
+ "▁accept",
+ "ed"
+ ],
+ [
+ "▁mon",
+ "itor"
+ ],
+ [
+ "▁",
+ "monitor"
+ ],
+ [
+ "▁p",
+ "ad"
+ ],
+ [
+ "▁pa",
+ "d"
+ ],
+ [
+ "▁",
+ "pad"
+ ],
+ [
+ "on",
+ "ic"
+ ],
+ [
+ "oni",
+ "c"
+ ],
+ [
+ "o",
+ "nic"
+ ],
+ [
+ "▁p",
+ "ushed"
+ ],
+ [
+ "▁push",
+ "ed"
+ ],
+ [
+ "▁re",
+ "place"
+ ],
+ [
+ "▁rep",
+ "lace"
+ ],
+ [
+ "▁repl",
+ "ace"
+ ],
+ [
+ "▁",
+ "replace"
+ ],
+ [
+ "CR",
+ "E"
+ ],
+ [
+ "C",
+ "RE"
+ ],
+ [
+ "▁r",
+ "ide"
+ ],
+ [
+ "▁ri",
+ "de"
+ ],
+ [
+ "▁rid",
+ "e"
+ ],
+ [
+ "▁",
+ "ride"
+ ],
+ [
+ "fo",
+ "und"
+ ],
+ [
+ "f",
+ "ound"
+ ],
+ [
+ "=",
+ "%"
+ ],
+ [
+ "во",
+ "й"
+ ],
+ [
+ "▁mat",
+ "ches"
+ ],
+ [
+ "▁match",
+ "es"
+ ],
+ [
+ "▁",
+ "matches"
+ ],
+ [
+ "▁L",
+ "ie"
+ ],
+ [
+ "▁Li",
+ "e"
+ ],
+ [
+ "▁exper",
+ "iences"
+ ],
+ [
+ "▁experience",
+ "s"
+ ],
+ [
+ "▁experi",
+ "ences"
+ ],
+ [
+ "Po",
+ "ol"
+ ],
+ [
+ "P",
+ "ool"
+ ],
+ [
+ "up",
+ "s"
+ ],
+ [
+ "u",
+ "ps"
+ ],
+ [
+ "A",
+ "V"
+ ],
+ [
+ "▁ex",
+ "istence"
+ ],
+ [
+ "▁exist",
+ "ence"
+ ],
+ [
+ "▁t",
+ "hin"
+ ],
+ [
+ "▁th",
+ "in"
+ ],
+ [
+ "▁m",
+ "agn"
+ ],
+ [
+ "▁mag",
+ "n"
+ ],
+ [
+ "▁ma",
+ "gn"
+ ],
+ [
+ "CO",
+ "MP"
+ ],
+ [
+ "COM",
+ "P"
+ ],
+ [
+ "ho",
+ "me"
+ ],
+ [
+ "hom",
+ "e"
+ ],
+ [
+ "h",
+ "ome"
+ ],
+ [
+ "▁n",
+ "i"
+ ],
+ [
+ "▁",
+ "ni"
+ ],
+ [
+ "▁wur",
+ "den"
+ ],
+ [
+ "▁wurde",
+ "n"
+ ],
+ [
+ "ла",
+ "в"
+ ],
+ [
+ "▁te",
+ "eth"
+ ],
+ [
+ "▁S",
+ "tan"
+ ],
+ [
+ "▁St",
+ "an"
+ ],
+ [
+ "▁Sta",
+ "n"
+ ],
+ [
+ "ap",
+ "pro"
+ ],
+ [
+ "app",
+ "ro"
+ ],
+ [
+ "an",
+ "ny"
+ ],
+ [
+ "ann",
+ "y"
+ ],
+ [
+ "if",
+ "ts"
+ ],
+ [
+ "ift",
+ "s"
+ ],
+ [
+ "▁un",
+ "known"
+ ],
+ [
+ "▁",
+ "unknown"
+ ],
+ [
+ "▁h",
+ "omes"
+ ],
+ [
+ "▁home",
+ "s"
+ ],
+ [
+ "▁hom",
+ "es"
+ ],
+ [
+ "▁ho",
+ "mes"
+ ],
+ [
+ "▁ent",
+ "ity"
+ ],
+ [
+ "▁",
+ "entity"
+ ],
+ [
+ "ci",
+ "e"
+ ],
+ [
+ "c",
+ "ie"
+ ],
+ [
+ "ле",
+ "ние"
+ ],
+ [
+ "ia",
+ "r"
+ ],
+ [
+ "i",
+ "ar"
+ ],
+ [
+ "▁compl",
+ "iance"
+ ],
+ [
+ "▁focus",
+ "ed"
+ ],
+ [
+ "uz",
+ "z"
+ ],
+ [
+ "u",
+ "zz"
+ ],
+ [
+ "=\\",
+ "\""
+ ],
+ [
+ "=",
+ "\\\""
+ ],
+ [
+ "com",
+ "ponents"
+ ],
+ [
+ "component",
+ "s"
+ ],
+ [
+ "Att",
+ "r"
+ ],
+ [
+ "At",
+ "tr"
+ ],
+ [
+ "all",
+ "ery"
+ ],
+ [
+ "alle",
+ "ry"
+ ],
+ [
+ "aller",
+ "y"
+ ],
+ [
+ "▁ident",
+ "ify"
+ ],
+ [
+ "O",
+ "k"
+ ],
+ [
+ "pi",
+ "e"
+ ],
+ [
+ "p",
+ "ie"
+ ],
+ [
+ "▁St",
+ "ill"
+ ],
+ [
+ "▁off",
+ "ering"
+ ],
+ [
+ "▁offer",
+ "ing"
+ ],
+ [
+ "▁bu",
+ "sy"
+ ],
+ [
+ "▁bus",
+ "y"
+ ],
+ [
+ "ct",
+ "l"
+ ],
+ [
+ "c",
+ "tl"
+ ],
+ [
+ "it",
+ "ors"
+ ],
+ [
+ "itor",
+ "s"
+ ],
+ [
+ "ito",
+ "rs"
+ ],
+ [
+ "▁concern",
+ "ed"
+ ],
+ [
+ "▁concer",
+ "ned"
+ ],
+ [
+ "▁b",
+ "rown"
+ ],
+ [
+ "▁br",
+ "own"
+ ],
+ [
+ "▁bro",
+ "wn"
+ ],
+ [
+ "▁brow",
+ "n"
+ ],
+ [
+ "cl",
+ "k"
+ ],
+ [
+ "Se",
+ "lected"
+ ],
+ [
+ "Select",
+ "ed"
+ ],
+ [
+ "▁B",
+ "lock"
+ ],
+ [
+ "▁Bl",
+ "ock"
+ ],
+ [
+ "▁Blo",
+ "ck"
+ ],
+ [
+ "▁",
+ "Block"
+ ],
+ [
+ "▁e",
+ "gy"
+ ],
+ [
+ "▁eg",
+ "y"
+ ],
+ [
+ "▁",
+ "egy"
+ ],
+ [
+ "ic",
+ "ing"
+ ],
+ [
+ "ici",
+ "ng"
+ ],
+ [
+ "i",
+ "cing"
+ ],
+ [
+ "▁U",
+ "RL"
+ ],
+ [
+ "▁",
+ "URL"
+ ],
+ [
+ "▁t",
+ "opic"
+ ],
+ [
+ "▁to",
+ "pic"
+ ],
+ [
+ "▁top",
+ "ic"
+ ],
+ [
+ "▁",
+ "topic"
+ ],
+ [
+ "▁Pro",
+ "duct"
+ ],
+ [
+ "▁Produ",
+ "ct"
+ ],
+ [
+ "▁",
+ "Product"
+ ],
+ [
+ "▁ч",
+ "и"
+ ],
+ [
+ "▁",
+ "чи"
+ ],
+ [
+ "▁t",
+ "rial"
+ ],
+ [
+ "▁tr",
+ "ial"
+ ],
+ [
+ "▁tri",
+ "al"
+ ],
+ [
+ "▁week",
+ "end"
+ ],
+ [
+ "l",
+ "u"
+ ],
+ [
+ "▁I",
+ "V"
+ ],
+ [
+ "▁",
+ "IV"
+ ],
+ [
+ "▁E",
+ "gy"
+ ],
+ [
+ "▁Eg",
+ "y"
+ ],
+ [
+ "x",
+ "C"
+ ],
+ [
+ "▁n",
+ "ove"
+ ],
+ [
+ "▁no",
+ "ve"
+ ],
+ [
+ "▁nov",
+ "e"
+ ],
+ [
+ "▁l",
+ "ett"
+ ],
+ [
+ "▁le",
+ "tt"
+ ],
+ [
+ "▁let",
+ "t"
+ ],
+ [
+ "▁",
+ "lett"
+ ],
+ [
+ "en",
+ "ne"
+ ],
+ [
+ "enn",
+ "e"
+ ],
+ [
+ "()",
+ ")."
+ ],
+ [
+ "())",
+ "."
+ ],
+ [
+ "(",
+ "))."
+ ],
+ [
+ ".*",
+ "*"
+ ],
+ [
+ ".",
+ "**"
+ ],
+ [
+ "▁p",
+ "romise"
+ ],
+ [
+ "▁prom",
+ "ise"
+ ],
+ [
+ "el",
+ "ection"
+ ],
+ [
+ "ele",
+ "ction"
+ ],
+ [
+ "elect",
+ "ion"
+ ],
+ [
+ "e",
+ "lection"
+ ],
+ [
+ "Aut",
+ "h"
+ ],
+ [
+ "A",
+ "uth"
+ ],
+ [
+ "r",
+ "v"
+ ],
+ [
+ "ri",
+ "l"
+ ],
+ [
+ "r",
+ "il"
+ ],
+ [
+ "▁con",
+ "duct"
+ ],
+ [
+ "▁cond",
+ "uct"
+ ],
+ [
+ "▁condu",
+ "ct"
+ ],
+ [
+ "▁",
+ "conduct"
+ ],
+ [
+ "▁main",
+ "tain"
+ ],
+ [
+ "▁maint",
+ "ain"
+ ],
+ [
+ "▁bo",
+ "at"
+ ],
+ [
+ "▁",
+ "boat"
+ ],
+ [
+ "▁op",
+ "posite"
+ ],
+ [
+ "▁oppos",
+ "ite"
+ ],
+ [
+ "sp",
+ "in"
+ ],
+ [
+ "spi",
+ "n"
+ ],
+ [
+ "s",
+ "pin"
+ ],
+ [
+ "web",
+ "pack"
+ ],
+ [
+ "an",
+ "ta"
+ ],
+ [
+ "ant",
+ "a"
+ ],
+ [
+ "▁o",
+ "rient"
+ ],
+ [
+ "▁or",
+ "ient"
+ ],
+ [
+ "▁",
+ "orient"
+ ],
+ [
+ "▁s",
+ "uc"
+ ],
+ [
+ "▁su",
+ "c"
+ ],
+ [
+ "▁ex",
+ "ercise"
+ ],
+ [
+ "▁exerc",
+ "ise"
+ ],
+ [
+ "▁eff",
+ "icient"
+ ],
+ [
+ "▁",
+ "efficient"
+ ],
+ [
+ "▁trad",
+ "ition"
+ ],
+ [
+ "▁z",
+ "w"
+ ],
+ [
+ "▁",
+ "zw"
+ ],
+ [
+ "▁S",
+ "ud"
+ ],
+ [
+ "▁Su",
+ "d"
+ ],
+ [
+ "go",
+ "ing"
+ ],
+ [
+ "▁P",
+ "ier"
+ ],
+ [
+ "▁Pi",
+ "er"
+ ],
+ [
+ "in",
+ "v"
+ ],
+ [
+ "i",
+ "nv"
+ ],
+ [
+ "ip",
+ "es"
+ ],
+ [
+ "ipe",
+ "s"
+ ],
+ [
+ "i",
+ "pes"
+ ],
+ [
+ "ensure",
+ "math"
+ ],
+ [
+ "▁con",
+ "ver"
+ ],
+ [
+ "▁conv",
+ "er"
+ ],
+ [
+ "▁conve",
+ "r"
+ ],
+ [
+ "cre",
+ "en"
+ ],
+ [
+ "cr",
+ "een"
+ ],
+ [
+ "c",
+ "reen"
+ ],
+ [
+ "▁t",
+ "error"
+ ],
+ [
+ "▁ter",
+ "ror"
+ ],
+ [
+ "▁terr",
+ "or"
+ ],
+ [
+ "▁D",
+ "ou"
+ ],
+ [
+ "▁Do",
+ "u"
+ ],
+ [
+ "▁in",
+ "valid"
+ ],
+ [
+ "▁",
+ "invalid"
+ ],
+ [
+ "ce",
+ "ived"
+ ],
+ [
+ "ceive",
+ "d"
+ ],
+ [
+ "▁A",
+ "rab"
+ ],
+ [
+ "▁Ar",
+ "ab"
+ ],
+ [
+ "▁w",
+ "ire"
+ ],
+ [
+ "▁wir",
+ "e"
+ ],
+ [
+ "▁",
+ "wire"
+ ],
+ [
+ "ap",
+ "plication"
+ ],
+ [
+ "sh",
+ "ift"
+ ],
+ [
+ "Gener",
+ "ic"
+ ],
+ [
+ "▁P",
+ "lan"
+ ],
+ [
+ "▁Pl",
+ "an"
+ ],
+ [
+ "▁",
+ "Plan"
+ ],
+ [
+ "▁W",
+ "all"
+ ],
+ [
+ "▁Wal",
+ "l"
+ ],
+ [
+ "▁Wa",
+ "ll"
+ ],
+ [
+ "▁",
+ "Wall"
+ ],
+ [
+ "▁direct",
+ "ory"
+ ],
+ [
+ "▁director",
+ "y"
+ ],
+ [
+ "▁",
+ "directory"
+ ],
+ [
+ "▁e",
+ "gg"
+ ],
+ [
+ "▁eg",
+ "g"
+ ],
+ [
+ "▁we",
+ "alth"
+ ],
+ [
+ "▁",
+ "wealth"
+ ],
+ [
+ "ran",
+ "dom"
+ ],
+ [
+ "rand",
+ "om"
+ ],
+ [
+ "r",
+ "andom"
+ ],
+ [
+ "att",
+ "ribute"
+ ],
+ [
+ "▁h",
+ "ide"
+ ],
+ [
+ "▁hi",
+ "de"
+ ],
+ [
+ "▁hid",
+ "e"
+ ],
+ [
+ "▁",
+ "hide"
+ ],
+ [
+ "Se",
+ "rial"
+ ],
+ [
+ "Ser",
+ "ial"
+ ],
+ [
+ "S",
+ "erial"
+ ],
+ [
+ "ca",
+ "m"
+ ],
+ [
+ "c",
+ "am"
+ ],
+ [
+ "▁it",
+ "al"
+ ],
+ [
+ "▁i",
+ "tal"
+ ],
+ [
+ "▁",
+ "ital"
+ ],
+ [
+ "▁L",
+ "ine"
+ ],
+ [
+ "▁Lin",
+ "e"
+ ],
+ [
+ "▁Li",
+ "ne"
+ ],
+ [
+ "▁",
+ "Line"
+ ],
+ [
+ "▁C",
+ "HECK"
+ ],
+ [
+ "▁",
+ "CHECK"
+ ],
+ [
+ "ploy",
+ "ment"
+ ],
+ [
+ "▁mass",
+ "ive"
+ ],
+ [
+ "▁ex",
+ "tract"
+ ],
+ [
+ "▁ext",
+ "ract"
+ ],
+ [
+ "▁extra",
+ "ct"
+ ],
+ [
+ "▁extr",
+ "act"
+ ],
+ [
+ "▁",
+ "extract"
+ ],
+ [
+ "ch",
+ "ain"
+ ],
+ [
+ "cha",
+ "in"
+ ],
+ [
+ "Res",
+ "t"
+ ],
+ [
+ "Re",
+ "st"
+ ],
+ [
+ "R",
+ "est"
+ ],
+ [
+ "▁L",
+ "as"
+ ],
+ [
+ "▁La",
+ "s"
+ ],
+ [
+ "▁b",
+ "ear"
+ ],
+ [
+ "▁be",
+ "ar"
+ ],
+ [
+ "▁",
+ "bear"
+ ],
+ [
+ "▁l",
+ "inks"
+ ],
+ [
+ "▁link",
+ "s"
+ ],
+ [
+ "▁lin",
+ "ks"
+ ],
+ [
+ "▁",
+ "links"
+ ],
+ [
+ "▁new",
+ "sp"
+ ],
+ [
+ "▁news",
+ "p"
+ ],
+ [
+ "▁F",
+ "C"
+ ],
+ [
+ "▁",
+ "FC"
+ ],
+ [
+ "Car",
+ "d"
+ ],
+ [
+ "C",
+ "ard"
+ ],
+ [
+ "ak",
+ "s"
+ ],
+ [
+ "a",
+ "ks"
+ ],
+ [
+ "▁v",
+ "isible"
+ ],
+ [
+ "▁vis",
+ "ible"
+ ],
+ [
+ "▁",
+ "visible"
+ ],
+ [
+ "▁M",
+ "arc"
+ ],
+ [
+ "▁Mar",
+ "c"
+ ],
+ [
+ "▁Ma",
+ "rc"
+ ],
+ [
+ "▁B",
+ "oston"
+ ],
+ [
+ "▁Bo",
+ "ston"
+ ],
+ [
+ "▁Bos",
+ "ton"
+ ],
+ [
+ "▁res",
+ "erved"
+ ],
+ [
+ "▁reserv",
+ "ed"
+ ],
+ [
+ "▁reserve",
+ "d"
+ ],
+ [
+ "▁ro",
+ "of"
+ ],
+ [
+ "lic",
+ "enses"
+ ],
+ [
+ "license",
+ "s"
+ ],
+ [
+ "d",
+ "c"
+ ],
+ [
+ "▁In",
+ "formation"
+ ],
+ [
+ "▁",
+ "Information"
+ ],
+ [
+ "▁w",
+ "itness"
+ ],
+ [
+ "S",
+ "k"
+ ],
+ [
+ "*)",
+ ","
+ ],
+ [
+ "*",
+ "),"
+ ],
+ [
+ "Sc",
+ "ope"
+ ],
+ [
+ "S",
+ "cope"
+ ],
+ [
+ "']",
+ ";"
+ ],
+ [
+ "'",
+ "];"
+ ],
+ [
+ "▁M",
+ "ir"
+ ],
+ [
+ "▁Mi",
+ "r"
+ ],
+ [
+ "▁",
+ "Mir"
+ ],
+ [
+ "ud",
+ "ing"
+ ],
+ [
+ "udi",
+ "ng"
+ ],
+ [
+ "u",
+ "ding"
+ ],
+ [
+ "▁t",
+ "rend"
+ ],
+ [
+ "▁tr",
+ "end"
+ ],
+ [
+ "▁tre",
+ "nd"
+ ],
+ [
+ "▁tren",
+ "d"
+ ],
+ [
+ "re",
+ "p"
+ ],
+ [
+ "r",
+ "ep"
+ ],
+ [
+ "▁mus",
+ "ical"
+ ],
+ [
+ "▁music",
+ "al"
+ ],
+ [
+ "▁ne",
+ "ither"
+ ],
+ [
+ "▁nei",
+ "ther"
+ ],
+ [
+ "▁C",
+ "reat"
+ ],
+ [
+ "▁Cre",
+ "at"
+ ],
+ [
+ "▁",
+ "Creat"
+ ],
+ [
+ "▁pos",
+ "itions"
+ ],
+ [
+ "▁position",
+ "s"
+ ],
+ [
+ "▁posit",
+ "ions"
+ ],
+ [
+ "L",
+ "C"
+ ],
+ [
+ "rid",
+ "ge"
+ ],
+ [
+ "r",
+ "idge"
+ ],
+ [
+ "▁offic",
+ "ers"
+ ],
+ [
+ "▁office",
+ "rs"
+ ],
+ [
+ "▁officer",
+ "s"
+ ],
+ [
+ "▁vi",
+ "olence"
+ ],
+ [
+ "▁viol",
+ "ence"
+ ],
+ [
+ "▁T",
+ "em"
+ ],
+ [
+ "▁Te",
+ "m"
+ ],
+ [
+ "▁S",
+ "us"
+ ],
+ [
+ "▁Su",
+ "s"
+ ],
+ [
+ "▁W",
+ "ay"
+ ],
+ [
+ "▁Wa",
+ "y"
+ ],
+ [
+ "Af",
+ "ter"
+ ],
+ [
+ "A",
+ "fter"
+ ],
+ [
+ "ac",
+ "ket"
+ ],
+ [
+ "ack",
+ "et"
+ ],
+ [
+ "▁S",
+ "ou"
+ ],
+ [
+ "▁So",
+ "u"
+ ],
+ [
+ "ac",
+ "er"
+ ],
+ [
+ "ace",
+ "r"
+ ],
+ [
+ "a",
+ "cer"
+ ],
+ [
+ "|",
+ "|"
+ ],
+ [
+ "▁re",
+ "mark"
+ ],
+ [
+ "▁r",
+ "emark"
+ ],
+ [
+ "▁rem",
+ "ark"
+ ],
+ [
+ "▁",
+ "remark"
+ ],
+ [
+ "wa",
+ "ter"
+ ],
+ [
+ "w",
+ "ater"
+ ],
+ [
+ "n",
+ "ě"
+ ],
+ [
+ "▁С",
+ "а"
+ ],
+ [
+ "▁s",
+ "ed"
+ ],
+ [
+ "▁se",
+ "d"
+ ],
+ [
+ "▁",
+ "sed"
+ ],
+ [
+ "E",
+ "ach"
+ ],
+ [
+ "▁phot",
+ "ograph"
+ ],
+ [
+ "▁photo",
+ "graph"
+ ],
+ [
+ "▁let",
+ "ters"
+ ],
+ [
+ "▁letter",
+ "s"
+ ],
+ [
+ "▁lett",
+ "ers"
+ ],
+ [
+ "▁in",
+ "vent"
+ ],
+ [
+ "▁inv",
+ "ent"
+ ],
+ [
+ "▁M",
+ "as"
+ ],
+ [
+ "▁Ma",
+ "s"
+ ],
+ [
+ "▁s",
+ "ongs"
+ ],
+ [
+ "▁son",
+ "gs"
+ ],
+ [
+ "▁song",
+ "s"
+ ],
+ [
+ "ó",
+ "l"
+ ],
+ [
+ "ki",
+ "nd"
+ ],
+ [
+ "kin",
+ "d"
+ ],
+ [
+ "k",
+ "ind"
+ ],
+ [
+ "▁N",
+ "on"
+ ],
+ [
+ "▁No",
+ "n"
+ ],
+ [
+ "▁",
+ "Non"
+ ],
+ [
+ "▁d",
+ "ust"
+ ],
+ [
+ "▁du",
+ "st"
+ ],
+ [
+ "**",
+ ":"
+ ],
+ [
+ "*",
+ "*:"
+ ],
+ [
+ "nab",
+ "la"
+ ],
+ [
+ ".\"",
+ ","
+ ],
+ [
+ ".",
+ "\","
+ ],
+ [
+ "Loc",
+ "k"
+ ],
+ [
+ "Lo",
+ "ck"
+ ],
+ [
+ "L",
+ "ock"
+ ],
+ [
+ "▁Д",
+ "о"
+ ],
+ [
+ "▁cl",
+ "uster"
+ ],
+ [
+ "▁",
+ "cluster"
+ ],
+ [
+ "lo",
+ "ss"
+ ],
+ [
+ "los",
+ "s"
+ ],
+ [
+ "l",
+ "oss"
+ ],
+ [
+ "▁ASS",
+ "ERT"
+ ],
+ [
+ "▁",
+ "ASSERT"
+ ],
+ [
+ "fa",
+ "ll"
+ ],
+ [
+ "f",
+ "all"
+ ],
+ [
+ "▁re",
+ "ject"
+ ],
+ [
+ "▁",
+ "reject"
+ ],
+ [
+ "▁Sp",
+ "ring"
+ ],
+ [
+ "▁Spr",
+ "ing"
+ ],
+ [
+ "▁",
+ "Spring"
+ ],
+ [
+ "▁wed",
+ "ding"
+ ],
+ [
+ "▁g",
+ "rav"
+ ],
+ [
+ "▁gr",
+ "av"
+ ],
+ [
+ "▁gra",
+ "v"
+ ],
+ [
+ "▁",
+ "grav"
+ ],
+ [
+ "ress",
+ "ion"
+ ],
+ [
+ "r",
+ "ession"
+ ],
+ [
+ "li",
+ "mit"
+ ],
+ [
+ "lim",
+ "it"
+ ],
+ [
+ "l",
+ "imit"
+ ],
+ [
+ "RE",
+ "S"
+ ],
+ [
+ "R",
+ "ES"
+ ],
+ [
+ "]",
+ "}"
+ ],
+ [
+ "▁l",
+ "isted"
+ ],
+ [
+ "▁li",
+ "sted"
+ ],
+ [
+ "▁list",
+ "ed"
+ ],
+ [
+ "▁",
+ "listed"
+ ],
+ [
+ "▁T",
+ "ele"
+ ],
+ [
+ "▁Te",
+ "le"
+ ],
+ [
+ "▁Tel",
+ "e"
+ ],
+ [
+ "▁",
+ "Tele"
+ ],
+ [
+ "hl",
+ "ine"
+ ],
+ [
+ "h",
+ "line"
+ ],
+ [
+ "▁ch",
+ "ief"
+ ],
+ [
+ "▁chi",
+ "ef"
+ ],
+ [
+ "ME",
+ "M"
+ ],
+ [
+ "M",
+ "EM"
+ ],
+ [
+ "да",
+ "р"
+ ],
+ [
+ "д",
+ "ар"
+ ],
+ [
+ "▁exp",
+ "ensive"
+ ],
+ [
+ "tr",
+ "ace"
+ ],
+ [
+ "tra",
+ "ce"
+ ],
+ [
+ "▁R",
+ "og"
+ ],
+ [
+ "▁Ro",
+ "g"
+ ],
+ [
+ "▁C",
+ "oll"
+ ],
+ [
+ "▁Col",
+ "l"
+ ],
+ [
+ "▁Co",
+ "ll"
+ ],
+ [
+ "▁",
+ "Coll"
+ ],
+ [
+ "▁Aut",
+ "hor"
+ ],
+ [
+ "▁Auth",
+ "or"
+ ],
+ [
+ "▁",
+ "Author"
+ ],
+ [
+ "▁B",
+ "oard"
+ ],
+ [
+ "▁Bo",
+ "ard"
+ ],
+ [
+ "▁",
+ "Board"
+ ],
+ [
+ "▁C",
+ "apt"
+ ],
+ [
+ "▁Cap",
+ "t"
+ ],
+ [
+ "▁Ca",
+ "pt"
+ ],
+ [
+ "▁",
+ "Capt"
+ ],
+ [
+ "TE",
+ "XT"
+ ],
+ [
+ "T",
+ "EXT"
+ ],
+ [
+ "▁re",
+ "con"
+ ],
+ [
+ "▁rec",
+ "on"
+ ],
+ [
+ "es",
+ "ta"
+ ],
+ [
+ "est",
+ "a"
+ ],
+ [
+ "e",
+ "sta"
+ ],
+ [
+ "▁proper",
+ "ly"
+ ],
+ [
+ "▁&",
+ "\\"
+ ],
+ [
+ "▁",
+ "&\\"
+ ],
+ [
+ "le",
+ "ton"
+ ],
+ [
+ "let",
+ "on"
+ ],
+ [
+ "l",
+ "eton"
+ ],
+ [
+ "ik",
+ "er"
+ ],
+ [
+ "ike",
+ "r"
+ ],
+ [
+ "i",
+ "ker"
+ ],
+ [
+ "G",
+ "u"
+ ],
+ [
+ "▁K",
+ "om"
+ ],
+ [
+ "▁Ko",
+ "m"
+ ],
+ [
+ "oc",
+ "o"
+ ],
+ [
+ "o",
+ "co"
+ ],
+ [
+ "▁any",
+ "more"
+ ],
+ [
+ "▁t",
+ "aste"
+ ],
+ [
+ "▁ta",
+ "ste"
+ ],
+ [
+ "▁tast",
+ "e"
+ ],
+ [
+ "▁S",
+ "anta"
+ ],
+ [
+ "▁San",
+ "ta"
+ ],
+ [
+ "▁Sant",
+ "a"
+ ],
+ [
+ "ge",
+ "x"
+ ],
+ [
+ "g",
+ "ex"
+ ],
+ [
+ "▁Se",
+ "cret"
+ ],
+ [
+ "▁Sec",
+ "ret"
+ ],
+ [
+ "▁",
+ "Secret"
+ ],
+ [
+ "▁tal",
+ "ent"
+ ],
+ [
+ "▁tale",
+ "nt"
+ ],
+ [
+ "▁mom",
+ "ents"
+ ],
+ [
+ "▁moment",
+ "s"
+ ],
+ [
+ "▁mo",
+ "ments"
+ ],
+ [
+ "▁B",
+ "a"
+ ],
+ [
+ "▁ex",
+ "tr"
+ ],
+ [
+ "▁ext",
+ "r"
+ ],
+ [
+ "▁",
+ "extr"
+ ],
+ [
+ "▁Com",
+ "mission"
+ ],
+ [
+ "▁Comm",
+ "ission"
+ ],
+ [
+ "▁mod",
+ "ify"
+ ],
+ [
+ "▁Fig",
+ "ure"
+ ],
+ [
+ "▁",
+ "Figure"
+ ],
+ [
+ "▁d",
+ "omin"
+ ],
+ [
+ "▁do",
+ "min"
+ ],
+ [
+ "▁dom",
+ "in"
+ ],
+ [
+ "▁",
+ "domin"
+ ],
+ [
+ "▁p",
+ "lot"
+ ],
+ [
+ "▁pl",
+ "ot"
+ ],
+ [
+ "▁",
+ "plot"
+ ],
+ [
+ "en",
+ "ger"
+ ],
+ [
+ "eng",
+ "er"
+ ],
+ [
+ "enge",
+ "r"
+ ],
+ [
+ "ut",
+ "ch"
+ ],
+ [
+ "▁c",
+ "ities"
+ ],
+ [
+ "▁cit",
+ "ies"
+ ],
+ [
+ "▁ci",
+ "ties"
+ ],
+ [
+ "▁n",
+ "ut"
+ ],
+ [
+ "▁nu",
+ "t"
+ ],
+ [
+ "▁",
+ "nut"
+ ],
+ [
+ "pro",
+ "file"
+ ],
+ [
+ "prof",
+ "ile"
+ ],
+ [
+ "▁S",
+ "tat"
+ ],
+ [
+ "▁St",
+ "at"
+ ],
+ [
+ "▁Sta",
+ "t"
+ ],
+ [
+ "▁",
+ "Stat"
+ ],
+ [
+ "▁n",
+ "odes"
+ ],
+ [
+ "▁no",
+ "des"
+ ],
+ [
+ "▁node",
+ "s"
+ ],
+ [
+ "▁nod",
+ "es"
+ ],
+ [
+ "▁",
+ "nodes"
+ ],
+ [
+ "▁n",
+ "s"
+ ],
+ [
+ "▁",
+ "ns"
+ ],
+ [
+ "ess",
+ "ages"
+ ],
+ [
+ "essage",
+ "s"
+ ],
+ [
+ "essa",
+ "ges"
+ ],
+ [
+ "im",
+ "pl"
+ ],
+ [
+ "imp",
+ "l"
+ ],
+ [
+ "ic",
+ "ker"
+ ],
+ [
+ "ick",
+ "er"
+ ],
+ [
+ "i",
+ "cker"
+ ],
+ [
+ "▁ex",
+ "amples"
+ ],
+ [
+ "▁example",
+ "s"
+ ],
+ [
+ "▁exam",
+ "ples"
+ ],
+ [
+ "ab",
+ "eth"
+ ],
+ [
+ "abe",
+ "th"
+ ],
+ [
+ "abet",
+ "h"
+ ],
+ [
+ "▁st",
+ "ated"
+ ],
+ [
+ "▁stat",
+ "ed"
+ ],
+ [
+ "▁state",
+ "d"
+ ],
+ [
+ "▁sta",
+ "ted"
+ ],
+ [
+ "fi",
+ "re"
+ ],
+ [
+ "f",
+ "ire"
+ ],
+ [
+ "bu",
+ "l"
+ ],
+ [
+ "b",
+ "ul"
+ ],
+ [
+ "▁danger",
+ "ous"
+ ],
+ [
+ "▁P",
+ "ay"
+ ],
+ [
+ "▁Pa",
+ "y"
+ ],
+ [
+ "▁",
+ "Pay"
+ ],
+ [
+ "▁G",
+ "re"
+ ],
+ [
+ "▁Gr",
+ "e"
+ ],
+ [
+ "▁",
+ "Gre"
+ ],
+ [
+ "▁Mon",
+ "day"
+ ],
+ [
+ "▁Mond",
+ "ay"
+ ],
+ [
+ "es",
+ "ome"
+ ],
+ [
+ "eso",
+ "me"
+ ],
+ [
+ "e",
+ "some"
+ ],
+ [
+ "ig",
+ "an"
+ ],
+ [
+ "iga",
+ "n"
+ ],
+ [
+ "i",
+ "gan"
+ ],
+ [
+ "ru",
+ "nd"
+ ],
+ [
+ "run",
+ "d"
+ ],
+ [
+ "r",
+ "und"
+ ],
+ [
+ "pr",
+ "ise"
+ ],
+ [
+ "p",
+ "rise"
+ ],
+ [
+ "fa",
+ "il"
+ ],
+ [
+ "f",
+ "ail"
+ ],
+ [
+ "▁N",
+ "ever"
+ ],
+ [
+ "▁Ne",
+ "ver"
+ ],
+ [
+ "▁Nev",
+ "er"
+ ],
+ [
+ "▁",
+ "Never"
+ ],
+ [
+ "A",
+ "v"
+ ],
+ [
+ "▁line",
+ "ar"
+ ],
+ [
+ "▁lin",
+ "ear"
+ ],
+ [
+ "▁",
+ "linear"
+ ],
+ [
+ "▁u",
+ "l"
+ ],
+ [
+ "▁",
+ "ul"
+ ],
+ [
+ "WA",
+ "R"
+ ],
+ [
+ "W",
+ "AR"
+ ],
+ [
+ "ре",
+ "н"
+ ],
+ [
+ "р",
+ "ен"
+ ],
+ [
+ "▁A",
+ "T"
+ ],
+ [
+ "▁",
+ "AT"
+ ],
+ [
+ "▁d",
+ "op"
+ ],
+ [
+ "▁do",
+ "p"
+ ],
+ [
+ "▁n",
+ "ou"
+ ],
+ [
+ "▁no",
+ "u"
+ ],
+ [
+ "Des",
+ "t"
+ ],
+ [
+ "De",
+ "st"
+ ],
+ [
+ "D",
+ "est"
+ ],
+ [
+ "▁claim",
+ "s"
+ ],
+ [
+ "en",
+ "da"
+ ],
+ [
+ "end",
+ "a"
+ ],
+ [
+ "▁c",
+ "razy"
+ ],
+ [
+ "▁cr",
+ "azy"
+ ],
+ [
+ "ge",
+ "l"
+ ],
+ [
+ "g",
+ "el"
+ ],
+ [
+ "og",
+ "gle"
+ ],
+ [
+ "ogg",
+ "le"
+ ],
+ [
+ "▁rep",
+ "resentation"
+ ],
+ [
+ "▁represent",
+ "ation"
+ ],
+ [
+ "in",
+ "en"
+ ],
+ [
+ "ine",
+ "n"
+ ],
+ [
+ "i",
+ "nen"
+ ],
+ [
+ "▁altern",
+ "ative"
+ ],
+ [
+ "▁alter",
+ "native"
+ ],
+ [
+ "D",
+ "M"
+ ],
+ [
+ "AB",
+ "ILITY"
+ ],
+ [
+ "face",
+ "s"
+ ],
+ [
+ "fa",
+ "ces"
+ ],
+ [
+ "fac",
+ "es"
+ ],
+ [
+ "f",
+ "aces"
+ ],
+ [
+ "▁do",
+ "ors"
+ ],
+ [
+ "▁door",
+ "s"
+ ],
+ [
+ "▁",
+ "doors"
+ ],
+ [
+ "at",
+ "iv"
+ ],
+ [
+ "ati",
+ "v"
+ ],
+ [
+ "Lo",
+ "ok"
+ ],
+ [
+ "L",
+ "ook"
+ ],
+ [
+ "▁J",
+ "SON"
+ ],
+ [
+ "▁JS",
+ "ON"
+ ],
+ [
+ "▁",
+ "JSON"
+ ],
+ [
+ "▁appe",
+ "arance"
+ ],
+ [
+ "▁appear",
+ "ance"
+ ],
+ [
+ "б",
+ "ря"
+ ],
+ [
+ "S",
+ "QL"
+ ],
+ [
+ "▁sil",
+ "ence"
+ ],
+ [
+ "ud",
+ "o"
+ ],
+ [
+ "u",
+ "do"
+ ],
+ [
+ "▁Direct",
+ "or"
+ ],
+ [
+ "▁Dire",
+ "ctor"
+ ],
+ [
+ "▁Dir",
+ "ector"
+ ],
+ [
+ "State",
+ "ment"
+ ],
+ [
+ "Stat",
+ "ement"
+ ],
+ [
+ "se",
+ "lected"
+ ],
+ [
+ "select",
+ "ed"
+ ],
+ [
+ "hi",
+ "gh"
+ ],
+ [
+ "h",
+ "igh"
+ ],
+ [
+ "pr",
+ "ime"
+ ],
+ [
+ "prim",
+ "e"
+ ],
+ [
+ "▁ign",
+ "ore"
+ ],
+ [
+ "▁ignor",
+ "e"
+ ],
+ [
+ "▁",
+ "ignore"
+ ],
+ [
+ "▁col",
+ "ors"
+ ],
+ [
+ "▁color",
+ "s"
+ ],
+ [
+ "▁",
+ "colors"
+ ],
+ [
+ "us",
+ "hing"
+ ],
+ [
+ "ush",
+ "ing"
+ ],
+ [
+ "▁v",
+ "irt"
+ ],
+ [
+ "▁vi",
+ "rt"
+ ],
+ [
+ "▁vir",
+ "t"
+ ],
+ [
+ "▁",
+ "virt"
+ ],
+ [
+ "man",
+ "ager"
+ ],
+ [
+ "▁rem",
+ "ote"
+ ],
+ [
+ "▁remot",
+ "e"
+ ],
+ [
+ "▁",
+ "remote"
+ ],
+ [
+ "ł",
+ "o"
+ ],
+ [
+ "sm",
+ "all"
+ ],
+ [
+ "▁cr",
+ "ime"
+ ],
+ [
+ "▁crim",
+ "e"
+ ],
+ [
+ "▁cri",
+ "me"
+ ],
+ [
+ "r",
+ "b"
+ ],
+ [
+ "▁c",
+ "reation"
+ ],
+ [
+ "▁cre",
+ "ation"
+ ],
+ [
+ "▁creat",
+ "ion"
+ ],
+ [
+ "▁f",
+ "light"
+ ],
+ [
+ "▁fl",
+ "ight"
+ ],
+ [
+ "▁S",
+ "ign"
+ ],
+ [
+ "▁Si",
+ "gn"
+ ],
+ [
+ "▁Sig",
+ "n"
+ ],
+ [
+ "▁",
+ "Sign"
+ ],
+ [
+ "IL",
+ "E"
+ ],
+ [
+ "I",
+ "LE"
+ ],
+ [
+ "▁D",
+ "O"
+ ],
+ [
+ "▁",
+ "DO"
+ ],
+ [
+ "com",
+ "ment"
+ ],
+ [
+ "comm",
+ "ent"
+ ],
+ [
+ "▁C",
+ "ost"
+ ],
+ [
+ "▁Co",
+ "st"
+ ],
+ [
+ "▁Cos",
+ "t"
+ ],
+ [
+ "▁",
+ "Cost"
+ ],
+ [
+ "._",
+ "_"
+ ],
+ [
+ ".",
+ "__"
+ ],
+ [
+ "▁C",
+ "op"
+ ],
+ [
+ "▁Co",
+ "p"
+ ],
+ [
+ "▁",
+ "Cop"
+ ],
+ [
+ "▁v",
+ "om"
+ ],
+ [
+ "▁vo",
+ "m"
+ ],
+ [
+ "▁Sc",
+ "ience"
+ ],
+ [
+ "▁Sci",
+ "ence"
+ ],
+ [
+ "ле",
+ "ния"
+ ],
+ [
+ "oo",
+ "p"
+ ],
+ [
+ "o",
+ "op"
+ ],
+ [
+ "inter",
+ "face"
+ ],
+ [
+ "▁WARRAN",
+ "TIES"
+ ],
+ [
+ "▁P",
+ "age"
+ ],
+ [
+ "▁Pa",
+ "ge"
+ ],
+ [
+ "▁",
+ "Page"
+ ],
+ [
+ "**",
+ "****"
+ ],
+ [
+ "****",
+ "**"
+ ],
+ [
+ "***",
+ "***"
+ ],
+ [
+ "ско",
+ "м"
+ ],
+ [
+ "с",
+ "ком"
+ ],
+ [
+ "TR",
+ "UE"
+ ],
+ [
+ "▁re",
+ "peated"
+ ],
+ [
+ "▁repe",
+ "ated"
+ ],
+ [
+ "▁repeat",
+ "ed"
+ ],
+ [
+ "▁е",
+ "го"
+ ],
+ [
+ "ш",
+ "о"
+ ],
+ [
+ "▁r",
+ "oz"
+ ],
+ [
+ "▁ro",
+ "z"
+ ],
+ [
+ "▁",
+ "roz"
+ ],
+ [
+ "P",
+ "e"
+ ],
+ [
+ "▁IS",
+ "BN"
+ ],
+ [
+ "ir",
+ "ts"
+ ],
+ [
+ "irt",
+ "s"
+ ],
+ [
+ "pos",
+ "es"
+ ],
+ [
+ "po",
+ "ses"
+ ],
+ [
+ "pose",
+ "s"
+ ],
+ [
+ "p",
+ "oses"
+ ],
+ [
+ "})",
+ "$"
+ ],
+ [
+ "}",
+ ")$"
+ ],
+ [
+ "▁",
+ "І"
+ ],
+ [
+ "child",
+ "ren"
+ ],
+ [
+ "ble",
+ "s"
+ ],
+ [
+ "bl",
+ "es"
+ ],
+ [
+ "b",
+ "les"
+ ],
+ [
+ "EC",
+ "T"
+ ],
+ [
+ "E",
+ "CT"
+ ],
+ [
+ "▁i",
+ "z"
+ ],
+ [
+ "▁",
+ "iz"
+ ],
+ [
+ "▁b",
+ "uilder"
+ ],
+ [
+ "▁build",
+ "er"
+ ],
+ [
+ "▁",
+ "builder"
+ ],
+ [
+ "▁M",
+ "edia"
+ ],
+ [
+ "▁Med",
+ "ia"
+ ],
+ [
+ "▁",
+ "Media"
+ ],
+ [
+ "ia",
+ "t"
+ ],
+ [
+ "i",
+ "at"
+ ],
+ [
+ "▁contr",
+ "ast"
+ ],
+ [
+ "▁contra",
+ "st"
+ ],
+ [
+ "”",
+ ","
+ ],
+ [
+ "▁L",
+ "ink"
+ ],
+ [
+ "▁Lin",
+ "k"
+ ],
+ [
+ "▁",
+ "Link"
+ ],
+ [
+ "▁Educ",
+ "ation"
+ ],
+ [
+ "▁j",
+ "oint"
+ ],
+ [
+ "▁join",
+ "t"
+ ],
+ [
+ "▁jo",
+ "int"
+ ],
+ [
+ "▁",
+ "joint"
+ ],
+ [
+ "▁ex",
+ "ternal"
+ ],
+ [
+ "▁extern",
+ "al"
+ ],
+ [
+ "▁",
+ "external"
+ ],
+ [
+ "▁ро",
+ "з"
+ ],
+ [
+ "▁b",
+ "its"
+ ],
+ [
+ "▁bit",
+ "s"
+ ],
+ [
+ "▁bi",
+ "ts"
+ ],
+ [
+ "▁",
+ "bits"
+ ],
+ [
+ "FO",
+ "RM"
+ ],
+ [
+ "FOR",
+ "M"
+ ],
+ [
+ "F",
+ "ORM"
+ ],
+ [
+ "er",
+ "man"
+ ],
+ [
+ "erm",
+ "an"
+ ],
+ [
+ "w",
+ "p"
+ ],
+ [
+ "▁M",
+ "ike"
+ ],
+ [
+ "▁Mi",
+ "ke"
+ ],
+ [
+ "▁Mik",
+ "e"
+ ],
+ [
+ "▁M",
+ "aster"
+ ],
+ [
+ "▁Ma",
+ "ster"
+ ],
+ [
+ "▁Mas",
+ "ter"
+ ],
+ [
+ "▁",
+ "Master"
+ ],
+ [
+ "▁sen",
+ "ior"
+ ],
+ [
+ "▁N",
+ "av"
+ ],
+ [
+ "▁Na",
+ "v"
+ ],
+ [
+ "▁",
+ "Nav"
+ ],
+ [
+ "▁record",
+ "ed"
+ ],
+ [
+ "el",
+ "ing"
+ ],
+ [
+ "eli",
+ "ng"
+ ],
+ [
+ "elin",
+ "g"
+ ],
+ [
+ "e",
+ "ling"
+ ],
+ [
+ "es",
+ "h"
+ ],
+ [
+ "e",
+ "sh"
+ ],
+ [
+ "f",
+ "x"
+ ],
+ [
+ "ка",
+ "н"
+ ],
+ [
+ "к",
+ "ан"
+ ],
+ [
+ "▁t",
+ "all"
+ ],
+ [
+ "▁tal",
+ "l"
+ ],
+ [
+ "▁ta",
+ "ll"
+ ],
+ [
+ "▁John",
+ "son"
+ ],
+ [
+ "▁s",
+ "ono"
+ ],
+ [
+ "▁so",
+ "no"
+ ],
+ [
+ "▁son",
+ "o"
+ ],
+ [
+ "▁an",
+ "che"
+ ],
+ [
+ "▁anc",
+ "he"
+ ],
+ [
+ "▁anch",
+ "e"
+ ],
+ [
+ "▁",
+ "anche"
+ ],
+ [
+ "ic",
+ "ken"
+ ],
+ [
+ "ick",
+ "en"
+ ],
+ [
+ "i",
+ "cken"
+ ],
+ [
+ "lo",
+ "op"
+ ],
+ [
+ "l",
+ "oop"
+ ],
+ [
+ "ici",
+ "ency"
+ ],
+ [
+ "empor",
+ "ary"
+ ],
+ [
+ "▁D",
+ "oes"
+ ],
+ [
+ "▁Do",
+ "es"
+ ],
+ [
+ "▁",
+ "Does"
+ ],
+ [
+ "▁re",
+ "lation"
+ ],
+ [
+ "▁rel",
+ "ation"
+ ],
+ [
+ "▁",
+ "relation"
+ ],
+ [
+ "м",
+ "ы"
+ ],
+ [
+ "wa",
+ "s"
+ ],
+ [
+ "w",
+ "as"
+ ],
+ [
+ "lo",
+ "w"
+ ],
+ [
+ "l",
+ "ow"
+ ],
+ [
+ "ich",
+ "te"
+ ],
+ [
+ "icht",
+ "e"
+ ],
+ [
+ "i",
+ "chte"
+ ],
+ [
+ "▁J",
+ "ones"
+ ],
+ [
+ "▁Jo",
+ "nes"
+ ],
+ [
+ "▁Jon",
+ "es"
+ ],
+ [
+ "▁bed",
+ "room"
+ ],
+ [
+ "DI",
+ "S"
+ ],
+ [
+ "D",
+ "IS"
+ ],
+ [
+ "▁mag",
+ "net"
+ ],
+ [
+ "▁magn",
+ "et"
+ ],
+ [
+ "▁Eng",
+ "ine"
+ ],
+ [
+ "▁",
+ "Engine"
+ ],
+ [
+ "▁feel",
+ "ings"
+ ],
+ [
+ "▁feeling",
+ "s"
+ ],
+ [
+ "▁fee",
+ "lings"
+ ],
+ [
+ "G",
+ "C"
+ ],
+ [
+ "▁t",
+ "orn"
+ ],
+ [
+ "▁to",
+ "rn"
+ ],
+ [
+ "▁tor",
+ "n"
+ ],
+ [
+ "▁relationship",
+ "s"
+ ],
+ [
+ "▁relation",
+ "ships"
+ ],
+ [
+ "▁Р",
+ "е"
+ ],
+ [
+ "▁p",
+ "roud"
+ ],
+ [
+ "▁pro",
+ "ud"
+ ],
+ [
+ "▁pr",
+ "oud"
+ ],
+ [
+ "▁t",
+ "we"
+ ],
+ [
+ "▁tw",
+ "e"
+ ],
+ [
+ "ov",
+ "al"
+ ],
+ [
+ "ova",
+ "l"
+ ],
+ [
+ "o",
+ "val"
+ ],
+ [
+ "▁w",
+ "aste"
+ ],
+ [
+ "▁was",
+ "te"
+ ],
+ [
+ "▁wa",
+ "ste"
+ ],
+ [
+ "▁red",
+ "uced"
+ ],
+ [
+ "▁redu",
+ "ced"
+ ],
+ [
+ "▁reduce",
+ "d"
+ ],
+ [
+ "il",
+ "ton"
+ ],
+ [
+ "ilt",
+ "on"
+ ],
+ [
+ "B",
+ "P"
+ ],
+ [
+ "▁for",
+ "got"
+ ],
+ [
+ "▁forg",
+ "ot"
+ ],
+ [
+ "▁bod",
+ "ies"
+ ],
+ [
+ "▁H",
+ "aw"
+ ],
+ [
+ "▁Ha",
+ "w"
+ ],
+ [
+ "la",
+ "g"
+ ],
+ [
+ "l",
+ "ag"
+ ],
+ [
+ "▁w",
+ "ww"
+ ],
+ [
+ "▁",
+ "www"
+ ],
+ [
+ "do",
+ "or"
+ ],
+ [
+ "d",
+ "oor"
+ ],
+ [
+ "▁s",
+ "ufficient"
+ ],
+ [
+ "▁suff",
+ "icient"
+ ],
+ [
+ "▁doll",
+ "ars"
+ ],
+ [
+ "▁dollar",
+ "s"
+ ],
+ [
+ "Le",
+ "n"
+ ],
+ [
+ "L",
+ "en"
+ ],
+ [
+ "▁talk",
+ "ed"
+ ],
+ [
+ "▁tal",
+ "ked"
+ ],
+ [
+ "▁b",
+ "ond"
+ ],
+ [
+ "▁bo",
+ "nd"
+ ],
+ [
+ "▁bon",
+ "d"
+ ],
+ [
+ "▁B",
+ "or"
+ ],
+ [
+ "▁Bo",
+ "r"
+ ],
+ [
+ "}}",
+ "{"
+ ],
+ [
+ "}",
+ "}{"
+ ],
+ [
+ "ro",
+ "d"
+ ],
+ [
+ "r",
+ "od"
+ ],
+ [
+ "Pass",
+ "word"
+ ],
+ [
+ "qu",
+ "are"
+ ],
+ [
+ "▁l",
+ "ights"
+ ],
+ [
+ "▁light",
+ "s"
+ ],
+ [
+ "▁",
+ "lights"
+ ],
+ [
+ "er",
+ "en"
+ ],
+ [
+ "ere",
+ "n"
+ ],
+ [
+ "e",
+ "ren"
+ ],
+ [
+ "▁th",
+ "irty"
+ ],
+ [
+ "N",
+ "C"
+ ],
+ [
+ "▁T",
+ "ODO"
+ ],
+ [
+ "▁TO",
+ "DO"
+ ],
+ [
+ "▁res",
+ "pond"
+ ],
+ [
+ "▁respon",
+ "d"
+ ],
+ [
+ "▁resp",
+ "ond"
+ ],
+ [
+ "▁",
+ "respond"
+ ],
+ [
+ "ки",
+ "х"
+ ],
+ [
+ "dir",
+ "ect"
+ ],
+ [
+ "di",
+ "rect"
+ ],
+ [
+ "dire",
+ "ct"
+ ],
+ [
+ "d",
+ "irect"
+ ],
+ [
+ "a",
+ "ção"
+ ],
+ [
+ "▁he",
+ "av"
+ ],
+ [
+ "Med",
+ "ia"
+ ],
+ [
+ "M",
+ "edia"
+ ],
+ [
+ "ex",
+ "it"
+ ],
+ [
+ "e",
+ "xit"
+ ],
+ [
+ "L",
+ "icense"
+ ],
+ [
+ "`",
+ "."
+ ],
+ [
+ "▁m",
+ "ixed"
+ ],
+ [
+ "▁mix",
+ "ed"
+ ],
+ [
+ "▁d",
+ "esk"
+ ],
+ [
+ "▁de",
+ "sk"
+ ],
+ [
+ "▁des",
+ "k"
+ ],
+ [
+ "▁te",
+ "aching"
+ ],
+ [
+ "▁teach",
+ "ing"
+ ],
+ [
+ "▁tea",
+ "ching"
+ ],
+ [
+ "▁m",
+ "aj"
+ ],
+ [
+ "▁ma",
+ "j"
+ ],
+ [
+ "▁n",
+ "erv"
+ ],
+ [
+ "▁ne",
+ "rv"
+ ],
+ [
+ "▁ner",
+ "v"
+ ],
+ [
+ "in",
+ "ations"
+ ],
+ [
+ "ination",
+ "s"
+ ],
+ [
+ "type",
+ "of"
+ ],
+ [
+ "▁co",
+ "ast"
+ ],
+ [
+ "▁ж",
+ "е"
+ ],
+ [
+ "▁",
+ "же"
+ ],
+ [
+ "▁be",
+ "side"
+ ],
+ [
+ "▁bes",
+ "ide"
+ ],
+ [
+ "um",
+ "my"
+ ],
+ [
+ "umm",
+ "y"
+ ],
+ [
+ "Do",
+ "c"
+ ],
+ [
+ "D",
+ "oc"
+ ],
+ [
+ "▁sche",
+ "dule"
+ ],
+ [
+ "▁schedul",
+ "e"
+ ],
+ [
+ "▁sched",
+ "ule"
+ ],
+ [
+ "▁",
+ "schedule"
+ ],
+ [
+ "▁re",
+ "cover"
+ ],
+ [
+ "▁rec",
+ "over"
+ ],
+ [
+ "▁Fur",
+ "ther"
+ ],
+ [
+ "▁ste",
+ "el"
+ ],
+ [
+ "bo",
+ "ot"
+ ],
+ [
+ "b",
+ "oot"
+ ],
+ [
+ "▁Per",
+ "haps"
+ ],
+ [
+ "▁с",
+ "ъ"
+ ],
+ [
+ "▁O",
+ "s"
+ ],
+ [
+ "▁",
+ "Os"
+ ],
+ [
+ "ri",
+ "ck"
+ ],
+ [
+ "ric",
+ "k"
+ ],
+ [
+ "r",
+ "ick"
+ ],
+ [
+ "▁В",
+ "и"
+ ],
+ [
+ "Supp",
+ "ort"
+ ],
+ [
+ "Sup",
+ "port"
+ ],
+ [
+ "S",
+ "upport"
+ ],
+ [
+ "▁(",
+ "_"
+ ],
+ [
+ "▁",
+ "(_"
+ ],
+ [
+ "ni",
+ "l"
+ ],
+ [
+ "n",
+ "il"
+ ],
+ [
+ "pi",
+ "s"
+ ],
+ [
+ "p",
+ "is"
+ ],
+ [
+ "x",
+ "pected"
+ ],
+ [
+ "▁process",
+ "ing"
+ ],
+ [
+ "▁proces",
+ "sing"
+ ],
+ [
+ "▁",
+ "processing"
+ ],
+ [
+ "Bu",
+ "ild"
+ ],
+ [
+ "B",
+ "uild"
+ ],
+ [
+ "ar",
+ "ian"
+ ],
+ [
+ "ari",
+ "an"
+ ],
+ [
+ "aria",
+ "n"
+ ],
+ [
+ "a",
+ "rian"
+ ],
+ [
+ "▁i",
+ "con"
+ ],
+ [
+ "▁ic",
+ "on"
+ ],
+ [
+ "▁",
+ "icon"
+ ],
+ [
+ "▁C",
+ "A"
+ ],
+ [
+ "▁",
+ "CA"
+ ],
+ [
+ "wi",
+ "ck"
+ ],
+ [
+ "w",
+ "ick"
+ ],
+ [
+ "=",
+ "("
+ ],
+ [
+ "▁al",
+ "gorithm"
+ ],
+ [
+ "▁",
+ "algorithm"
+ ],
+ [
+ "▁You",
+ "ng"
+ ],
+ [
+ "▁Man",
+ "agement"
+ ],
+ [
+ "▁",
+ "Management"
+ ],
+ [
+ "▁anc",
+ "ient"
+ ],
+ [
+ "▁anci",
+ "ent"
+ ],
+ [
+ "но",
+ "сть"
+ ],
+ [
+ "ност",
+ "ь"
+ ],
+ [
+ "ot",
+ "i"
+ ],
+ [
+ "o",
+ "ti"
+ ],
+ [
+ "▁comb",
+ "ination"
+ ],
+ [
+ "wor",
+ "ld"
+ ],
+ [
+ "w",
+ "orld"
+ ],
+ [
+ "n",
+ "n"
+ ],
+ [
+ "▁d",
+ "ram"
+ ],
+ [
+ "▁dr",
+ "am"
+ ],
+ [
+ "en",
+ "abled"
+ ],
+ [
+ "ena",
+ "bled"
+ ],
+ [
+ "enable",
+ "d"
+ ],
+ [
+ "A",
+ "c"
+ ],
+ [
+ "C",
+ "CESS"
+ ],
+ [
+ "ar",
+ "ation"
+ ],
+ [
+ "▁bl",
+ "ocks"
+ ],
+ [
+ "▁block",
+ "s"
+ ],
+ [
+ "▁blo",
+ "cks"
+ ],
+ [
+ "▁",
+ "blocks"
+ ],
+ [
+ "▁Ang",
+ "eles"
+ ],
+ [
+ "▁Angel",
+ "es"
+ ],
+ [
+ "▁Q",
+ "ual"
+ ],
+ [
+ "▁Qu",
+ "al"
+ ],
+ [
+ "▁",
+ "Qual"
+ ],
+ [
+ "▁suc",
+ "ceed"
+ ],
+ [
+ "▁succ",
+ "eed"
+ ],
+ [
+ "net",
+ "work"
+ ],
+ [
+ "▁ob",
+ "lig"
+ ],
+ [
+ "spring",
+ "framework"
+ ],
+ [
+ "▁T",
+ "re"
+ ],
+ [
+ "▁Tr",
+ "e"
+ ],
+ [
+ "ok",
+ "es"
+ ],
+ [
+ "oke",
+ "s"
+ ],
+ [
+ "o",
+ "kes"
+ ],
+ [
+ "mu",
+ "n"
+ ],
+ [
+ "m",
+ "un"
+ ],
+ [
+ "▁Net",
+ "work"
+ ],
+ [
+ "▁",
+ "Network"
+ ],
+ [
+ "De",
+ "l"
+ ],
+ [
+ "D",
+ "el"
+ ],
+ [
+ "▁e",
+ "state"
+ ],
+ [
+ "▁est",
+ "ate"
+ ],
+ [
+ "▁esta",
+ "te"
+ ],
+ [
+ "▁l",
+ "iqu"
+ ],
+ [
+ "▁li",
+ "qu"
+ ],
+ [
+ "▁p",
+ "ob"
+ ],
+ [
+ "▁po",
+ "b"
+ ],
+ [
+ "▁d",
+ "ad"
+ ],
+ [
+ "▁da",
+ "d"
+ ],
+ [
+ "▁dist",
+ "inct"
+ ],
+ [
+ "▁T",
+ "it"
+ ],
+ [
+ "▁Ti",
+ "t"
+ ],
+ [
+ "▁L",
+ "ear"
+ ],
+ [
+ "▁Le",
+ "ar"
+ ],
+ [
+ "fer",
+ "red"
+ ],
+ [
+ "and",
+ "roid"
+ ],
+ [
+ "andro",
+ "id"
+ ],
+ [
+ "▁sub",
+ "sequ"
+ ],
+ [
+ "▁subs",
+ "equ"
+ ],
+ [
+ "▁Flor",
+ "ida"
+ ],
+ [
+ "sub",
+ "set"
+ ],
+ [
+ "▁whis",
+ "per"
+ ],
+ [
+ "Vo",
+ "l"
+ ],
+ [
+ "V",
+ "ol"
+ ],
+ [
+ "ul",
+ "ous"
+ ],
+ [
+ "ulo",
+ "us"
+ ],
+ [
+ "▁c",
+ "rew"
+ ],
+ [
+ "▁cre",
+ "w"
+ ],
+ [
+ "▁cr",
+ "ew"
+ ],
+ [
+ "▁l",
+ "ug"
+ ],
+ [
+ "▁lu",
+ "g"
+ ],
+ [
+ "pi",
+ "d"
+ ],
+ [
+ "p",
+ "id"
+ ],
+ [
+ "oc",
+ "ity"
+ ],
+ [
+ "oci",
+ "ty"
+ ],
+ [
+ "o",
+ "city"
+ ],
+ [
+ "sk",
+ "b"
+ ],
+ [
+ "s",
+ "kb"
+ ],
+ [
+ "▁t",
+ "ea"
+ ],
+ [
+ "▁te",
+ "a"
+ ],
+ [
+ "у",
+ "н"
+ ],
+ [
+ "▁hon",
+ "or"
+ ],
+ [
+ "▁ho",
+ "nor"
+ ],
+ [
+ "▁I",
+ "ns"
+ ],
+ [
+ "▁In",
+ "s"
+ ],
+ [
+ "▁",
+ "Ins"
+ ],
+ [
+ "▁g",
+ "ew"
+ ],
+ [
+ "▁ge",
+ "w"
+ ],
+ [
+ "▁",
+ "gew"
+ ],
+ [
+ "Det",
+ "ails"
+ ],
+ [
+ "Detail",
+ "s"
+ ],
+ [
+ "ene",
+ "ath"
+ ],
+ [
+ "e",
+ "neath"
+ ],
+ [
+ "at",
+ "ar"
+ ],
+ [
+ "ata",
+ "r"
+ ],
+ [
+ "a",
+ "tar"
+ ],
+ [
+ "▁_",
+ "{"
+ ],
+ [
+ "▁",
+ "_{"
+ ],
+ [
+ "am",
+ "en"
+ ],
+ [
+ "ame",
+ "n"
+ ],
+ [
+ "a",
+ "men"
+ ],
+ [
+ "▁set",
+ "up"
+ ],
+ [
+ "▁",
+ "setup"
+ ],
+ [
+ "Trans",
+ "action"
+ ],
+ [
+ "▁bl",
+ "ank"
+ ],
+ [
+ "▁",
+ "blank"
+ ],
+ [
+ "Fail",
+ "ed"
+ ],
+ [
+ "F",
+ "ailed"
+ ],
+ [
+ "jo",
+ "b"
+ ],
+ [
+ "j",
+ "ob"
+ ],
+ [
+ "▁p",
+ "ret"
+ ],
+ [
+ "▁pre",
+ "t"
+ ],
+ [
+ "▁pr",
+ "et"
+ ],
+ [
+ "▁",
+ "pret"
+ ],
+ [
+ "ß",
+ "e"
+ ],
+ [
+ "lo",
+ "or"
+ ],
+ [
+ "l",
+ "oor"
+ ],
+ [
+ "ř",
+ "í"
+ ],
+ [
+ "nc",
+ "ia"
+ ],
+ [
+ "n",
+ "cia"
+ ],
+ [
+ "▁any",
+ "where"
+ ],
+ [
+ "▁L",
+ "ight"
+ ],
+ [
+ "▁Li",
+ "ght"
+ ],
+ [
+ "▁",
+ "Light"
+ ],
+ [
+ "▁A",
+ "k"
+ ],
+ [
+ "B",
+ "D"
+ ],
+ [
+ "▁exc",
+ "ited"
+ ],
+ [
+ "▁excit",
+ "ed"
+ ],
+ [
+ "ag",
+ "ers"
+ ],
+ [
+ "age",
+ "rs"
+ ],
+ [
+ "ager",
+ "s"
+ ],
+ [
+ "a",
+ "gers"
+ ],
+ [
+ "▁w",
+ "arning"
+ ],
+ [
+ "▁war",
+ "ning"
+ ],
+ [
+ "▁warn",
+ "ing"
+ ],
+ [
+ "▁",
+ "warning"
+ ],
+ [
+ "▁process",
+ "es"
+ ],
+ [
+ "▁proces",
+ "ses"
+ ],
+ [
+ "h",
+ "u"
+ ],
+ [
+ "▁y",
+ "outh"
+ ],
+ [
+ "▁you",
+ "th"
+ ],
+ [
+ "▁yo",
+ "uth"
+ ],
+ [
+ "▁d",
+ "ogs"
+ ],
+ [
+ "▁do",
+ "gs"
+ ],
+ [
+ "▁dog",
+ "s"
+ ],
+ [
+ "▁o",
+ "ct"
+ ],
+ [
+ "▁oc",
+ "t"
+ ],
+ [
+ "▁",
+ "oct"
+ ],
+ [
+ "▁n",
+ "ine"
+ ],
+ [
+ "▁ni",
+ "ne"
+ ],
+ [
+ "▁nin",
+ "e"
+ ],
+ [
+ "Write",
+ "r"
+ ],
+ [
+ "Wr",
+ "iter"
+ ],
+ [
+ "Writ",
+ "er"
+ ],
+ [
+ "W",
+ "riter"
+ ],
+ [
+ "gr",
+ "id"
+ ],
+ [
+ "g",
+ "rid"
+ ],
+ [
+ "▁import",
+ "ance"
+ ],
+ [
+ "est",
+ "ic"
+ ],
+ [
+ "▁care",
+ "fully"
+ ],
+ [
+ "▁careful",
+ "ly"
+ ],
+ [
+ "ma",
+ "ster"
+ ],
+ [
+ "mas",
+ "ter"
+ ],
+ [
+ "m",
+ "aster"
+ ],
+ [
+ "▁dec",
+ "isions"
+ ],
+ [
+ "▁decision",
+ "s"
+ ],
+ [
+ "▁decis",
+ "ions"
+ ],
+ [
+ "▁p",
+ "in"
+ ],
+ [
+ "▁pi",
+ "n"
+ ],
+ [
+ "▁",
+ "pin"
+ ],
+ [
+ "▁cr",
+ "ack"
+ ],
+ [
+ "TE",
+ "ST"
+ ],
+ [
+ "TES",
+ "T"
+ ],
+ [
+ "T",
+ "EST"
+ ],
+ [
+ "▁L",
+ "ocal"
+ ],
+ [
+ "▁Loc",
+ "al"
+ ],
+ [
+ "▁Lo",
+ "cal"
+ ],
+ [
+ "▁",
+ "Local"
+ ],
+ [
+ "▁R",
+ "ight"
+ ],
+ [
+ "▁",
+ "Right"
+ ],
+ [
+ "▁v",
+ "ast"
+ ],
+ [
+ "▁va",
+ "st"
+ ],
+ [
+ "▁vas",
+ "t"
+ ],
+ [
+ "▁f",
+ "aster"
+ ],
+ [
+ "▁fa",
+ "ster"
+ ],
+ [
+ "▁fast",
+ "er"
+ ],
+ [
+ "▁inst",
+ "itut"
+ ],
+ [
+ "▁ann",
+ "ual"
+ ],
+ [
+ "LA",
+ "N"
+ ],
+ [
+ "L",
+ "AN"
+ ],
+ [
+ "▁e",
+ "pisode"
+ ],
+ [
+ "▁epis",
+ "ode"
+ ],
+ [
+ "▁X",
+ "V"
+ ],
+ [
+ "▁del",
+ "ivery"
+ ],
+ [
+ "▁deliver",
+ "y"
+ ],
+ [
+ "t",
+ "l"
+ ],
+ [
+ "F",
+ "P"
+ ],
+ [
+ "ci",
+ "rc"
+ ],
+ [
+ "cir",
+ "c"
+ ],
+ [
+ "▁typ",
+ "ically"
+ ],
+ [
+ "▁typical",
+ "ly"
+ ],
+ [
+ "ig",
+ "o"
+ ],
+ [
+ "i",
+ "go"
+ ],
+ [
+ "▁int",
+ "el"
+ ],
+ [
+ "▁inte",
+ "l"
+ ],
+ [
+ "▁",
+ "intel"
+ ],
+ [
+ "na",
+ "t"
+ ],
+ [
+ "n",
+ "at"
+ ],
+ [
+ "x",
+ "b"
+ ],
+ [
+ "ст",
+ "ро"
+ ],
+ [
+ "с",
+ "тро"
+ ],
+ [
+ ")",
+ "-"
+ ],
+ [
+ "▁B",
+ "al"
+ ],
+ [
+ "▁Ba",
+ "l"
+ ],
+ [
+ "▁",
+ "Bal"
+ ],
+ [
+ "▁J",
+ "os"
+ ],
+ [
+ "▁Jo",
+ "s"
+ ],
+ [
+ "▁g",
+ "onna"
+ ],
+ [
+ "▁R",
+ "est"
+ ],
+ [
+ "▁Re",
+ "st"
+ ],
+ [
+ "▁Res",
+ "t"
+ ],
+ [
+ "▁",
+ "Rest"
+ ],
+ [
+ "jo",
+ "r"
+ ],
+ [
+ "j",
+ "or"
+ ],
+ [
+ "on",
+ "ia"
+ ],
+ [
+ "oni",
+ "a"
+ ],
+ [
+ "o",
+ "nia"
+ ],
+ [
+ "or",
+ "ship"
+ ],
+ [
+ "ors",
+ "hip"
+ ],
+ [
+ "ov",
+ "ery"
+ ],
+ [
+ "ove",
+ "ry"
+ ],
+ [
+ "over",
+ "y"
+ ],
+ [
+ "o",
+ "very"
+ ],
+ [
+ "LI",
+ "NE"
+ ],
+ [
+ "LIN",
+ "E"
+ ],
+ [
+ "L",
+ "INE"
+ ],
+ [
+ "]",
+ ":"
+ ],
+ [
+ "Que",
+ "ue"
+ ],
+ [
+ "▁com",
+ "pare"
+ ],
+ [
+ "▁comp",
+ "are"
+ ],
+ [
+ "▁compar",
+ "e"
+ ],
+ [
+ "▁",
+ "compare"
+ ],
+ [
+ "▁ap",
+ "artment"
+ ],
+ [
+ "▁apart",
+ "ment"
+ ],
+ [
+ "▁r",
+ "ul"
+ ],
+ [
+ "▁ru",
+ "l"
+ ],
+ [
+ "D",
+ "r"
+ ],
+ [
+ "gen",
+ "cy"
+ ],
+ [
+ "g",
+ "ency"
+ ],
+ [
+ "▁ob",
+ "viously"
+ ],
+ [
+ "▁obvious",
+ "ly"
+ ],
+ [
+ "zi",
+ "e"
+ ],
+ [
+ "z",
+ "ie"
+ ],
+ [
+ "yc",
+ "l"
+ ],
+ [
+ "y",
+ "cl"
+ ],
+ [
+ "fort",
+ "unately"
+ ],
+ [
+ "fortun",
+ "ately"
+ ],
+ [
+ "fortunate",
+ "ly"
+ ],
+ [
+ "▁ste",
+ "pped"
+ ],
+ [
+ "▁step",
+ "ped"
+ ],
+ [
+ "▁S",
+ "eg"
+ ],
+ [
+ "▁Se",
+ "g"
+ ],
+ [
+ "▁",
+ "Seg"
+ ],
+ [
+ "▁Wh",
+ "ich"
+ ],
+ [
+ "▁",
+ "Which"
+ ],
+ [
+ "▁P",
+ "C"
+ ],
+ [
+ "▁",
+ "PC"
+ ],
+ [
+ "▁a",
+ "st"
+ ],
+ [
+ "▁as",
+ "t"
+ ],
+ [
+ "▁",
+ "ast"
+ ],
+ [
+ "end",
+ "or"
+ ],
+ [
+ "endo",
+ "r"
+ ],
+ [
+ "▁per",
+ "mission"
+ ],
+ [
+ "▁perm",
+ "ission"
+ ],
+ [
+ "▁",
+ "permission"
+ ],
+ [
+ "CO",
+ "L"
+ ],
+ [
+ "C",
+ "OL"
+ ],
+ [
+ "▁T",
+ "EST"
+ ],
+ [
+ "▁TE",
+ "ST"
+ ],
+ [
+ "▁",
+ "TEST"
+ ],
+ [
+ "P",
+ "ay"
+ ],
+ [
+ "ère",
+ "s"
+ ],
+ [
+ "è",
+ "res"
+ ],
+ [
+ "▁stud",
+ "ied"
+ ],
+ [
+ "▁accom",
+ "pl"
+ ],
+ [
+ "▁accomp",
+ "l"
+ ],
+ [
+ "ro",
+ "le"
+ ],
+ [
+ "rol",
+ "e"
+ ],
+ [
+ "r",
+ "ole"
+ ],
+ [
+ "Wh",
+ "ere"
+ ],
+ [
+ "Whe",
+ "re"
+ ],
+ [
+ "W",
+ "here"
+ ],
+ [
+ "proto",
+ "buf"
+ ],
+ [
+ "met",
+ "adata"
+ ],
+ [
+ "meta",
+ "data"
+ ],
+ [
+ "Jo",
+ "b"
+ ],
+ [
+ "J",
+ "ob"
+ ],
+ [
+ "▁F",
+ "our"
+ ],
+ [
+ "▁Fou",
+ "r"
+ ],
+ [
+ "▁Fo",
+ "ur"
+ ],
+ [
+ "pl",
+ "ements"
+ ],
+ [
+ "ple",
+ "ments"
+ ],
+ [
+ "plement",
+ "s"
+ ],
+ [
+ "dis",
+ "able"
+ ],
+ [
+ "▁l",
+ "oud"
+ ],
+ [
+ "▁lo",
+ "ud"
+ ],
+ [
+ "▁lou",
+ "d"
+ ],
+ [
+ "▁happ",
+ "ening"
+ ],
+ [
+ "▁happen",
+ "ing"
+ ],
+ [
+ "▁U",
+ "sing"
+ ],
+ [
+ "▁Us",
+ "ing"
+ ],
+ [
+ "▁",
+ "Using"
+ ],
+ [
+ "ro",
+ "g"
+ ],
+ [
+ "r",
+ "og"
+ ],
+ [
+ "▁depend",
+ "s"
+ ],
+ [
+ "▁dep",
+ "ends"
+ ],
+ [
+ "í",
+ "m"
+ ],
+ [
+ "'",
+ "\\"
+ ],
+ [
+ "▁t",
+ "aught"
+ ],
+ [
+ "sh",
+ "ared"
+ ],
+ [
+ "sha",
+ "red"
+ ],
+ [
+ "share",
+ "d"
+ ],
+ [
+ "▁att",
+ "ributes"
+ ],
+ [
+ "▁attribute",
+ "s"
+ ],
+ [
+ "▁attribut",
+ "es"
+ ],
+ [
+ "▁",
+ "attributes"
+ ],
+ [
+ "▁A",
+ "ction"
+ ],
+ [
+ "▁Act",
+ "ion"
+ ],
+ [
+ "▁",
+ "Action"
+ ],
+ [
+ "▁d",
+ "ess"
+ ],
+ [
+ "▁de",
+ "ss"
+ ],
+ [
+ "▁des",
+ "s"
+ ],
+ [
+ "▁",
+ "dess"
+ ],
+ [
+ "▁h",
+ "ouses"
+ ],
+ [
+ "▁house",
+ "s"
+ ],
+ [
+ "▁hous",
+ "es"
+ ],
+ [
+ "▁ho",
+ "uses"
+ ],
+ [
+ "▁re",
+ "set"
+ ],
+ [
+ "▁res",
+ "et"
+ ],
+ [
+ "▁",
+ "reset"
+ ],
+ [
+ "▁b",
+ "ien"
+ ],
+ [
+ "▁bi",
+ "en"
+ ],
+ [
+ "▁ex",
+ "plicit"
+ ],
+ [
+ "▁expl",
+ "icit"
+ ],
+ [
+ "LO",
+ "W"
+ ],
+ [
+ "->",
+ "_"
+ ],
+ [
+ "▁P",
+ "M"
+ ],
+ [
+ "▁",
+ "PM"
+ ],
+ [
+ "C",
+ "ategory"
+ ],
+ [
+ "oi",
+ "ce"
+ ],
+ [
+ "o",
+ "ice"
+ ],
+ [
+ "in",
+ "to"
+ ],
+ [
+ "int",
+ "o"
+ ],
+ [
+ "▁m",
+ "ail"
+ ],
+ [
+ "▁ma",
+ "il"
+ ],
+ [
+ "▁mai",
+ "l"
+ ],
+ [
+ "▁",
+ "mail"
+ ],
+ [
+ "▁author",
+ "ity"
+ ],
+ [
+ "▁un",
+ "able"
+ ],
+ [
+ "▁una",
+ "ble"
+ ],
+ [
+ "file",
+ "name"
+ ],
+ [
+ "fil",
+ "ename"
+ ],
+ [
+ "é",
+ "k"
+ ],
+ [
+ "ле",
+ "й"
+ ],
+ [
+ "л",
+ "ей"
+ ],
+ [
+ "▁s",
+ "ector"
+ ],
+ [
+ "▁se",
+ "ctor"
+ ],
+ [
+ "▁sec",
+ "tor"
+ ],
+ [
+ "▁sect",
+ "or"
+ ],
+ [
+ "ap",
+ "point"
+ ],
+ [
+ "app",
+ "oint"
+ ],
+ [
+ "▁h",
+ "ang"
+ ],
+ [
+ "▁ha",
+ "ng"
+ ],
+ [
+ "▁han",
+ "g"
+ ],
+ [
+ "▁",
+ "hang"
+ ],
+ [
+ "▁c",
+ "el"
+ ],
+ [
+ "▁ce",
+ "l"
+ ],
+ [
+ "▁",
+ "cel"
+ ],
+ [
+ "rel",
+ "ated"
+ ],
+ [
+ "it",
+ "ate"
+ ],
+ [
+ "ita",
+ "te"
+ ],
+ [
+ "itat",
+ "e"
+ ],
+ [
+ "▁'",
+ "<"
+ ],
+ [
+ "am",
+ "ber"
+ ],
+ [
+ "amb",
+ "er"
+ ],
+ [
+ "a",
+ "mber"
+ ],
+ [
+ "▁c",
+ "heap"
+ ],
+ [
+ "▁che",
+ "ap"
+ ],
+ [
+ "▁en",
+ "abled"
+ ],
+ [
+ "▁enable",
+ "d"
+ ],
+ [
+ "▁",
+ "enabled"
+ ],
+ [
+ "▁di",
+ "vision"
+ ],
+ [
+ "▁div",
+ "ision"
+ ],
+ [
+ "▁divis",
+ "ion"
+ ],
+ [
+ "An",
+ "y"
+ ],
+ [
+ "A",
+ "ny"
+ ],
+ [
+ "▁h",
+ "ier"
+ ],
+ [
+ "▁hi",
+ "er"
+ ],
+ [
+ "▁H",
+ "ead"
+ ],
+ [
+ "▁He",
+ "ad"
+ ],
+ [
+ "▁",
+ "Head"
+ ],
+ [
+ "nt",
+ "ax"
+ ],
+ [
+ "n",
+ "tax"
+ ],
+ [
+ "ud",
+ "a"
+ ],
+ [
+ "u",
+ "da"
+ ],
+ [
+ "▁lim",
+ "itations"
+ ],
+ [
+ "▁limit",
+ "ations"
+ ],
+ [
+ "▁limitation",
+ "s"
+ ],
+ [
+ "▁st",
+ "udio"
+ ],
+ [
+ "▁stud",
+ "io"
+ ],
+ [
+ "med",
+ "ia"
+ ],
+ [
+ "medi",
+ "a"
+ ],
+ [
+ "m",
+ "edia"
+ ],
+ [
+ "▁cir",
+ "cle"
+ ],
+ [
+ "▁circ",
+ "le"
+ ],
+ [
+ "▁",
+ "circle"
+ ],
+ [
+ "но",
+ "ва"
+ ],
+ [
+ "нов",
+ "а"
+ ],
+ [
+ "▁l",
+ "aug"
+ ],
+ [
+ "▁la",
+ "ug"
+ ],
+ [
+ "ac",
+ "ts"
+ ],
+ [
+ "act",
+ "s"
+ ],
+ [
+ "▁В",
+ "о"
+ ],
+ [
+ "ó",
+ "d"
+ ],
+ [
+ "pl",
+ "ed"
+ ],
+ [
+ "ple",
+ "d"
+ ],
+ [
+ "p",
+ "led"
+ ],
+ [
+ "LO",
+ "C"
+ ],
+ [
+ "L",
+ "OC"
+ ],
+ [
+ "Ex",
+ "pr"
+ ],
+ [
+ "Exp",
+ "r"
+ ],
+ [
+ ">",
+ ":"
+ ],
+ [
+ "▁pr",
+ "és"
+ ],
+ [
+ "▁pré",
+ "s"
+ ],
+ [
+ "▁",
+ "prés"
+ ],
+ [
+ "▁laugh",
+ "ed"
+ ],
+ [
+ "▁laug",
+ "hed"
+ ],
+ [
+ "▁Th",
+ "ree"
+ ],
+ [
+ "▁",
+ "Three"
+ ],
+ [
+ "л",
+ "ы"
+ ],
+ [
+ "▁en",
+ "ds"
+ ],
+ [
+ "▁end",
+ "s"
+ ],
+ [
+ "▁",
+ "ends"
+ ],
+ [
+ "▁fund",
+ "ament"
+ ],
+ [
+ "▁in",
+ "her"
+ ],
+ [
+ "▁",
+ "inher"
+ ],
+ [
+ "▁l",
+ "iv"
+ ],
+ [
+ "▁li",
+ "v"
+ ],
+ [
+ "▁",
+ "liv"
+ ],
+ [
+ "bi",
+ "d"
+ ],
+ [
+ "b",
+ "id"
+ ],
+ [
+ "▁respons",
+ "ibility"
+ ],
+ [
+ "▁check",
+ "ed"
+ ],
+ [
+ "▁",
+ "checked"
+ ],
+ [
+ "▁P",
+ "ac"
+ ],
+ [
+ "▁Pa",
+ "c"
+ ],
+ [
+ "▁f",
+ "ault"
+ ],
+ [
+ "▁fa",
+ "ult"
+ ],
+ [
+ "▁y",
+ "ellow"
+ ],
+ [
+ "▁s",
+ "alt"
+ ],
+ [
+ "▁sa",
+ "lt"
+ ],
+ [
+ "▁sal",
+ "t"
+ ],
+ [
+ "▁Franc",
+ "isco"
+ ],
+ [
+ "▁Francis",
+ "co"
+ ],
+ [
+ "▁",
+ "^"
+ ],
+ [
+ "▁O",
+ "N"
+ ],
+ [
+ "▁",
+ "ON"
+ ],
+ [
+ "▁beaut",
+ "y"
+ ],
+ [
+ "y",
+ "g"
+ ],
+ [
+ "▁A",
+ "ff"
+ ],
+ [
+ "▁Af",
+ "f"
+ ],
+ [
+ "▁",
+ "Aff"
+ ],
+ [
+ "▁E",
+ "q"
+ ],
+ [
+ "▁",
+ "Eq"
+ ],
+ [
+ "▁mag",
+ "ic"
+ ],
+ [
+ "▁hand",
+ "ler"
+ ],
+ [
+ "▁handle",
+ "r"
+ ],
+ [
+ "▁",
+ "handler"
+ ],
+ [
+ "x",
+ "E"
+ ],
+ [
+ "▁numer",
+ "ous"
+ ],
+ [
+ "▁numero",
+ "us"
+ ],
+ [
+ "▁h",
+ "ole"
+ ],
+ [
+ "▁hol",
+ "e"
+ ],
+ [
+ "▁ho",
+ "le"
+ ],
+ [
+ "▁",
+ "hole"
+ ],
+ [
+ "▁ro",
+ "oms"
+ ],
+ [
+ "▁room",
+ "s"
+ ],
+ [
+ "▁",
+ "rooms"
+ ],
+ [
+ "cc",
+ "ión"
+ ],
+ [
+ "cció",
+ "n"
+ ],
+ [
+ "c",
+ "ción"
+ ],
+ [
+ "▁A",
+ "rm"
+ ],
+ [
+ "▁Ar",
+ "m"
+ ],
+ [
+ "▁",
+ "Arm"
+ ],
+ [
+ "per",
+ "son"
+ ],
+ [
+ "pers",
+ "on"
+ ],
+ [
+ "p",
+ "erson"
+ ],
+ [
+ "▁build",
+ "ings"
+ ],
+ [
+ "▁building",
+ "s"
+ ],
+ [
+ "▁p",
+ "late"
+ ],
+ [
+ "▁pl",
+ "ate"
+ ],
+ [
+ "▁plat",
+ "e"
+ ],
+ [
+ "ble",
+ "d"
+ ],
+ [
+ "bl",
+ "ed"
+ ],
+ [
+ "b",
+ "led"
+ ],
+ [
+ "er",
+ "rors"
+ ],
+ [
+ "err",
+ "ors"
+ ],
+ [
+ "error",
+ "s"
+ ],
+ [
+ "▁A",
+ "gain"
+ ],
+ [
+ "▁Ag",
+ "ain"
+ ],
+ [
+ "▁Def",
+ "ault"
+ ],
+ [
+ "▁",
+ "Default"
+ ],
+ [
+ "▁H",
+ "ard"
+ ],
+ [
+ "▁Har",
+ "d"
+ ],
+ [
+ "▁Ha",
+ "rd"
+ ],
+ [
+ "▁",
+ "Hard"
+ ],
+ [
+ "t",
+ "ó"
+ ],
+ [
+ "hu",
+ "s"
+ ],
+ [
+ "h",
+ "us"
+ ],
+ [
+ "▁dim",
+ "ension"
+ ],
+ [
+ "ial",
+ "e"
+ ],
+ [
+ "ia",
+ "le"
+ ],
+ [
+ "i",
+ "ale"
+ ],
+ [
+ "▁M",
+ "ult"
+ ],
+ [
+ "▁Mu",
+ "lt"
+ ],
+ [
+ "▁Mul",
+ "t"
+ ],
+ [
+ "▁",
+ "Mult"
+ ],
+ [
+ "▁Govern",
+ "ment"
+ ],
+ [
+ "Fun",
+ "c"
+ ],
+ [
+ "F",
+ "unc"
+ ],
+ [
+ "▁b",
+ "low"
+ ],
+ [
+ "▁bl",
+ "ow"
+ ],
+ [
+ "▁blo",
+ "w"
+ ],
+ [
+ "▁re",
+ "ct"
+ ],
+ [
+ "▁r",
+ "ect"
+ ],
+ [
+ "▁rec",
+ "t"
+ ],
+ [
+ "▁",
+ "rect"
+ ],
+ [
+ "er",
+ "ra"
+ ],
+ [
+ "err",
+ "a"
+ ],
+ [
+ "conne",
+ "ction"
+ ],
+ [
+ "connect",
+ "ion"
+ ],
+ [
+ "conn",
+ "ection"
+ ],
+ [
+ "▁pass",
+ "ing"
+ ],
+ [
+ "▁pas",
+ "sing"
+ ],
+ [
+ "ße",
+ "n"
+ ],
+ [
+ "ß",
+ "en"
+ ],
+ [
+ "ph",
+ "as"
+ ],
+ [
+ "pha",
+ "s"
+ ],
+ [
+ "p",
+ "has"
+ ],
+ [
+ "ens",
+ "ional"
+ ],
+ [
+ "ension",
+ "al"
+ ],
+ [
+ "re",
+ "cord"
+ ],
+ [
+ "rec",
+ "ord"
+ ],
+ [
+ "co",
+ "hol"
+ ],
+ [
+ "▁H",
+ "arry"
+ ],
+ [
+ "▁Har",
+ "ry"
+ ],
+ [
+ "▁Harr",
+ "y"
+ ],
+ [
+ "izont",
+ "al"
+ ],
+ [
+ "izon",
+ "tal"
+ ],
+ [
+ "▁f",
+ "inger"
+ ],
+ [
+ "▁fin",
+ "ger"
+ ],
+ [
+ "▁fing",
+ "er"
+ ],
+ [
+ "▁young",
+ "er"
+ ],
+ [
+ "▁S",
+ "C"
+ ],
+ [
+ "▁",
+ "SC"
+ ],
+ [
+ "oper",
+ "ation"
+ ],
+ [
+ "B",
+ "Y"
+ ],
+ [
+ "he",
+ "im"
+ ],
+ [
+ "▁B",
+ "ad"
+ ],
+ [
+ "▁Ba",
+ "d"
+ ],
+ [
+ "▁",
+ "Bad"
+ ],
+ [
+ "▁st",
+ "orm"
+ ],
+ [
+ "▁stor",
+ "m"
+ ],
+ [
+ "▁sto",
+ "rm"
+ ],
+ [
+ "▁",
+ "storm"
+ ],
+ [
+ "▁N",
+ "at"
+ ],
+ [
+ "▁Na",
+ "t"
+ ],
+ [
+ "▁bu",
+ "ying"
+ ],
+ [
+ "▁buy",
+ "ing"
+ ],
+ [
+ "▁S",
+ "ometimes"
+ ],
+ [
+ "▁Some",
+ "times"
+ ],
+ [
+ "▁С",
+ "та"
+ ],
+ [
+ "es",
+ "sed"
+ ],
+ [
+ "ess",
+ "ed"
+ ],
+ [
+ "esse",
+ "d"
+ ],
+ [
+ "▁da",
+ "mn"
+ ],
+ [
+ "▁dam",
+ "n"
+ ],
+ [
+ "▁m",
+ "eg"
+ ],
+ [
+ "▁me",
+ "g"
+ ],
+ [
+ "um",
+ "es"
+ ],
+ [
+ "ume",
+ "s"
+ ],
+ [
+ "u",
+ "mes"
+ ],
+ [
+ "ün",
+ "d"
+ ],
+ [
+ "ü",
+ "nd"
+ ],
+ [
+ "т",
+ "ра"
+ ],
+ [
+ "▁sil",
+ "ver"
+ ],
+ [
+ "w",
+ "d"
+ ],
+ [
+ "hid",
+ "den"
+ ],
+ [
+ "h",
+ "idden"
+ ],
+ [
+ "ar",
+ "do"
+ ],
+ [
+ "ard",
+ "o"
+ ],
+ [
+ "▁commun",
+ "ities"
+ ],
+ [
+ "▁d",
+ "iet"
+ ],
+ [
+ "▁di",
+ "et"
+ ],
+ [
+ "▁die",
+ "t"
+ ],
+ [
+ "ot",
+ "ted"
+ ],
+ [
+ "ott",
+ "ed"
+ ],
+ [
+ "otte",
+ "d"
+ ],
+ [
+ "▁b",
+ "at"
+ ],
+ [
+ "▁ba",
+ "t"
+ ],
+ [
+ "▁",
+ "bat"
+ ],
+ [
+ "an",
+ "cer"
+ ],
+ [
+ "ance",
+ "r"
+ ],
+ [
+ "anc",
+ "er"
+ ],
+ [
+ "▁f",
+ "mt"
+ ],
+ [
+ "▁",
+ "fmt"
+ ],
+ [
+ "▁P",
+ "en"
+ ],
+ [
+ "▁Pe",
+ "n"
+ ],
+ [
+ "▁",
+ "Pen"
+ ],
+ [
+ "▁t",
+ "il"
+ ],
+ [
+ "▁ti",
+ "l"
+ ],
+ [
+ "▁",
+ "til"
+ ],
+ [
+ "En",
+ "um"
+ ],
+ [
+ "E",
+ "num"
+ ],
+ [
+ "PA",
+ "TH"
+ ],
+ [
+ "P",
+ "ATH"
+ ],
+ [
+ "▁mat",
+ "ters"
+ ],
+ [
+ "▁matter",
+ "s"
+ ],
+ [
+ "▁matt",
+ "ers"
+ ],
+ [
+ "time",
+ "out"
+ ],
+ [
+ "--",
+ "----------"
+ ],
+ [
+ "----",
+ "--------"
+ ],
+ [
+ "--------",
+ "----"
+ ],
+ [
+ "---",
+ "---------"
+ ],
+ [
+ "-----",
+ "-------"
+ ],
+ [
+ "----------",
+ "--"
+ ],
+ [
+ "------",
+ "------"
+ ],
+ [
+ "---------",
+ "---"
+ ],
+ [
+ "-------",
+ "-----"
+ ],
+ [
+ "-----------",
+ "-"
+ ],
+ [
+ "-",
+ "-----------"
+ ],
+ [
+ "ka",
+ "n"
+ ],
+ [
+ "k",
+ "an"
+ ],
+ [
+ "▁Cor",
+ "por"
+ ],
+ [
+ "=\"",
+ "../../"
+ ],
+ [
+ "=\"../",
+ "../"
+ ],
+ [
+ "▁A",
+ "le"
+ ],
+ [
+ "▁Al",
+ "e"
+ ],
+ [
+ "hent",
+ "ication"
+ ],
+ [
+ "hentic",
+ "ation"
+ ],
+ [
+ "▁com",
+ "plic"
+ ],
+ [
+ "▁comp",
+ "lic"
+ ],
+ [
+ "▁compl",
+ "ic"
+ ],
+ [
+ "▁Se",
+ "curity"
+ ],
+ [
+ "▁Sec",
+ "urity"
+ ],
+ [
+ "▁",
+ "Security"
+ ],
+ [
+ "OF",
+ "F"
+ ],
+ [
+ "O",
+ "FF"
+ ],
+ [
+ "R",
+ "ad"
+ ],
+ [
+ "ap",
+ "se"
+ ],
+ [
+ "aps",
+ "e"
+ ],
+ [
+ "a",
+ "pse"
+ ],
+ [
+ "▁d",
+ "ance"
+ ],
+ [
+ "▁dan",
+ "ce"
+ ],
+ [
+ "▁perm",
+ "issions"
+ ],
+ [
+ "▁permission",
+ "s"
+ ],
+ [
+ "▁war",
+ "rant"
+ ],
+ [
+ "▁l",
+ "ad"
+ ],
+ [
+ "▁la",
+ "d"
+ ],
+ [
+ "▁",
+ "lad"
+ ],
+ [
+ "▁is",
+ "ol"
+ ],
+ [
+ "▁i",
+ "sol"
+ ],
+ [
+ "d",
+ "l"
+ ],
+ [
+ "▁A",
+ "u"
+ ],
+ [
+ "ye",
+ "s"
+ ],
+ [
+ "y",
+ "es"
+ ],
+ [
+ "▁t",
+ "v"
+ ],
+ [
+ "▁",
+ "tv"
+ ],
+ [
+ "▁pro",
+ "vider"
+ ],
+ [
+ "▁prov",
+ "ider"
+ ],
+ [
+ "▁provide",
+ "r"
+ ],
+ [
+ "▁",
+ "provider"
+ ],
+ [
+ "▁ter",
+ "rible"
+ ],
+ [
+ "▁terr",
+ "ible"
+ ],
+ [
+ "▁dep",
+ "artment"
+ ],
+ [
+ "▁depart",
+ "ment"
+ ],
+ [
+ "er",
+ "al"
+ ],
+ [
+ "era",
+ "l"
+ ],
+ [
+ "e",
+ "ral"
+ ],
+ [
+ "▁implement",
+ "ation"
+ ],
+ [
+ "S",
+ "R"
+ ],
+ [
+ "▁h",
+ "earing"
+ ],
+ [
+ "▁he",
+ "aring"
+ ],
+ [
+ "▁hear",
+ "ing"
+ ],
+ [
+ "▁K",
+ "n"
+ ],
+ [
+ "F",
+ "R"
+ ],
+ [
+ "t",
+ "v"
+ ],
+ [
+ "▁d",
+ "iss"
+ ],
+ [
+ "▁dis",
+ "s"
+ ],
+ [
+ "▁di",
+ "ss"
+ ],
+ [
+ "F",
+ "UN"
+ ],
+ [
+ "▁dur",
+ "ante"
+ ],
+ [
+ "▁durant",
+ "e"
+ ],
+ [
+ "os",
+ "is"
+ ],
+ [
+ "osi",
+ "s"
+ ],
+ [
+ "o",
+ "sis"
+ ],
+ [
+ "▁task",
+ "s"
+ ],
+ [
+ "▁",
+ "tasks"
+ ],
+ [
+ "▁B",
+ "lo"
+ ],
+ [
+ "▁Bl",
+ "o"
+ ],
+ [
+ "▁",
+ "Blo"
+ ],
+ [
+ "во",
+ "д"
+ ],
+ [
+ "▁br",
+ "anch"
+ ],
+ [
+ "▁",
+ "branch"
+ ],
+ [
+ "▁polit",
+ "ics"
+ ],
+ [
+ "▁E",
+ "lle"
+ ],
+ [
+ "▁El",
+ "le"
+ ],
+ [
+ "▁Ell",
+ "e"
+ ],
+ [
+ "▁lead",
+ "ership"
+ ],
+ [
+ "▁leader",
+ "ship"
+ ],
+ [
+ "▁leaders",
+ "hip"
+ ],
+ [
+ "ex",
+ "pr"
+ ],
+ [
+ "exp",
+ "r"
+ ],
+ [
+ "▁techn",
+ "iques"
+ ],
+ [
+ "▁technique",
+ "s"
+ ],
+ [
+ "pr",
+ "ec"
+ ],
+ [
+ "pre",
+ "c"
+ ],
+ [
+ "p",
+ "rec"
+ ],
+ [
+ "Sig",
+ "ma"
+ ],
+ [
+ "S",
+ "igma"
+ ],
+ [
+ "im",
+ "ately"
+ ],
+ [
+ "imate",
+ "ly"
+ ],
+ [
+ "imat",
+ "ely"
+ ],
+ [
+ "t",
+ "k"
+ ],
+ [
+ "ach",
+ "ment"
+ ],
+ [
+ "▁En",
+ "ter"
+ ],
+ [
+ "▁Ent",
+ "er"
+ ],
+ [
+ "▁",
+ "Enter"
+ ],
+ [
+ "▁cre",
+ "ative"
+ ],
+ [
+ "▁creat",
+ "ive"
+ ],
+ [
+ "▁з",
+ "на"
+ ],
+ [
+ "▁",
+ "зна"
+ ],
+ [
+ "ap",
+ "py"
+ ],
+ [
+ "app",
+ "y"
+ ],
+ [
+ "un",
+ "ched"
+ ],
+ [
+ "unch",
+ "ed"
+ ],
+ [
+ "unc",
+ "hed"
+ ],
+ [
+ "▁'",
+ "',"
+ ],
+ [
+ "▁''",
+ ","
+ ],
+ [
+ "on",
+ "der"
+ ],
+ [
+ "ond",
+ "er"
+ ],
+ [
+ "onde",
+ "r"
+ ],
+ [
+ "o",
+ "nder"
+ ],
+ [
+ "{",
+ "-"
+ ],
+ [
+ "NU",
+ "M"
+ ],
+ [
+ "N",
+ "UM"
+ ],
+ [
+ "▁n",
+ "arr"
+ ],
+ [
+ "▁na",
+ "rr"
+ ],
+ [
+ "▁nar",
+ "r"
+ ],
+ [
+ "Mem",
+ "ory"
+ ],
+ [
+ "▁win",
+ "ning"
+ ],
+ [
+ "▁",
+ "winning"
+ ],
+ [
+ "▁F",
+ "ollow"
+ ],
+ [
+ "▁Fol",
+ "low"
+ ],
+ [
+ "▁",
+ "Follow"
+ ],
+ [
+ "*/",
+ "\r"
+ ],
+ [
+ "vis",
+ "ion"
+ ],
+ [
+ "v",
+ "ision"
+ ],
+ [
+ "res",
+ "ents"
+ ],
+ [
+ "resent",
+ "s"
+ ],
+ [
+ "zi",
+ "one"
+ ],
+ [
+ "z",
+ "ione"
+ ],
+ [
+ "▁l",
+ "atter"
+ ],
+ [
+ "▁lat",
+ "ter"
+ ],
+ [
+ "▁requ",
+ "ests"
+ ],
+ [
+ "▁request",
+ "s"
+ ],
+ [
+ "▁",
+ "requests"
+ ],
+ [
+ "▁m",
+ "argin"
+ ],
+ [
+ "▁mar",
+ "gin"
+ ],
+ [
+ "▁marg",
+ "in"
+ ],
+ [
+ "▁",
+ "margin"
+ ],
+ [
+ "▁{",
+ "\""
+ ],
+ [
+ "▁",
+ "{\""
+ ],
+ [
+ "v",
+ "ideo"
+ ],
+ [
+ "c",
+ "n"
+ ],
+ [
+ "▁Im",
+ "age"
+ ],
+ [
+ "▁",
+ "Image"
+ ],
+ [
+ "T",
+ "im"
+ ],
+ [
+ "CON",
+ "FIG"
+ ],
+ [
+ "CONF",
+ "IG"
+ ],
+ [
+ "▁all",
+ "owing"
+ ],
+ [
+ "▁allow",
+ "ing"
+ ],
+ [
+ "▁comb",
+ "ined"
+ ],
+ [
+ "▁combine",
+ "d"
+ ],
+ [
+ "PU",
+ "T"
+ ],
+ [
+ "P",
+ "UT"
+ ],
+ [
+ "▁instance",
+ "of"
+ ],
+ [
+ "ig",
+ "in"
+ ],
+ [
+ "igi",
+ "n"
+ ],
+ [
+ "i",
+ "gin"
+ ],
+ [
+ "▁p",
+ "ero"
+ ],
+ [
+ "▁per",
+ "o"
+ ],
+ [
+ "▁pe",
+ "ro"
+ ],
+ [
+ "▁'",
+ "'"
+ ],
+ [
+ "▁",
+ "''"
+ ],
+ [
+ "▁conf",
+ "idence"
+ ],
+ [
+ "▁equ",
+ "ivalent"
+ ],
+ [
+ "▁equival",
+ "ent"
+ ],
+ [
+ "pa",
+ "d"
+ ],
+ [
+ "p",
+ "ad"
+ ],
+ [
+ "ef",
+ "fect"
+ ],
+ [
+ "eff",
+ "ect"
+ ],
+ [
+ "e",
+ "ffect"
+ ],
+ [
+ "R",
+ "X"
+ ],
+ [
+ "▁l",
+ "ang"
+ ],
+ [
+ "▁la",
+ "ng"
+ ],
+ [
+ "▁lan",
+ "g"
+ ],
+ [
+ "▁",
+ "lang"
+ ],
+ [
+ "str",
+ "ong"
+ ],
+ [
+ "▁b",
+ "ridge"
+ ],
+ [
+ "▁br",
+ "idge"
+ ],
+ [
+ "▁",
+ "bridge"
+ ],
+ [
+ "ay",
+ "a"
+ ],
+ [
+ "a",
+ "ya"
+ ],
+ [
+ "▁t",
+ "reated"
+ ],
+ [
+ "▁tre",
+ "ated"
+ ],
+ [
+ "▁treat",
+ "ed"
+ ],
+ [
+ "▁f",
+ "orth"
+ ],
+ [
+ "▁for",
+ "th"
+ ],
+ [
+ "▁fort",
+ "h"
+ ],
+ [
+ "S",
+ "W"
+ ],
+ [
+ "▁account",
+ "s"
+ ],
+ [
+ "▁P",
+ "O"
+ ],
+ [
+ "▁",
+ "PO"
+ ],
+ [
+ "▁list",
+ "ening"
+ ],
+ [
+ "▁listen",
+ "ing"
+ ],
+ [
+ "Ro",
+ "ute"
+ ],
+ [
+ "R",
+ "oute"
+ ],
+ [
+ "()",
+ "))"
+ ],
+ [
+ "())",
+ ")"
+ ],
+ [
+ "(",
+ ")))"
+ ],
+ [
+ "cp",
+ "y"
+ ],
+ [
+ "c",
+ "py"
+ ],
+ [
+ "▁re",
+ "form"
+ ],
+ [
+ "▁ref",
+ "orm"
+ ],
+ [
+ "▁g",
+ "ate"
+ ],
+ [
+ "▁ga",
+ "te"
+ ],
+ [
+ "▁",
+ "gate"
+ ],
+ [
+ "▁W",
+ "alk"
+ ],
+ [
+ "▁Wal",
+ "k"
+ ],
+ [
+ "▁",
+ "Walk"
+ ],
+ [
+ "▁some",
+ "how"
+ ],
+ [
+ "t",
+ "f"
+ ],
+ [
+ "▁l",
+ "ayout"
+ ],
+ [
+ "▁la",
+ "yout"
+ ],
+ [
+ "▁lay",
+ "out"
+ ],
+ [
+ "▁",
+ "layout"
+ ],
+ [
+ "um",
+ "in"
+ ],
+ [
+ "umi",
+ "n"
+ ],
+ [
+ "u",
+ "min"
+ ],
+ [
+ "▁consider",
+ "ing"
+ ],
+ [
+ "▁consid",
+ "ering"
+ ],
+ [
+ "▁pre",
+ "mi"
+ ],
+ [
+ "▁pr",
+ "emi"
+ ],
+ [
+ "▁prem",
+ "i"
+ ],
+ [
+ "▁M",
+ "om"
+ ],
+ [
+ "▁Mo",
+ "m"
+ ],
+ [
+ "at",
+ "han"
+ ],
+ [
+ "ath",
+ "an"
+ ],
+ [
+ "a",
+ "than"
+ ],
+ [
+ "Ge",
+ "n"
+ ],
+ [
+ "G",
+ "en"
+ ],
+ [
+ "▁plan",
+ "et"
+ ],
+ [
+ "▁plane",
+ "t"
+ ],
+ [
+ "am",
+ "ples"
+ ],
+ [
+ "amp",
+ "les"
+ ],
+ [
+ "ample",
+ "s"
+ ],
+ [
+ "▁M",
+ "O"
+ ],
+ [
+ "▁",
+ "MO"
+ ],
+ [
+ "sh",
+ "op"
+ ],
+ [
+ "s",
+ "hop"
+ ],
+ [
+ "▁prem",
+ "ier"
+ ],
+ [
+ "▁premi",
+ "er"
+ ],
+ [
+ "▁s",
+ "impl"
+ ],
+ [
+ "▁sim",
+ "pl"
+ ],
+ [
+ "▁s",
+ "egu"
+ ],
+ [
+ "▁se",
+ "gu"
+ ],
+ [
+ "▁seg",
+ "u"
+ ],
+ [
+ "L",
+ "Y"
+ ],
+ [
+ "Su",
+ "m"
+ ],
+ [
+ "S",
+ "um"
+ ],
+ [
+ "▁t",
+ "ables"
+ ],
+ [
+ "▁table",
+ "s"
+ ],
+ [
+ "▁tab",
+ "les"
+ ],
+ [
+ "▁ta",
+ "bles"
+ ],
+ [
+ "▁",
+ "tables"
+ ],
+ [
+ "sk",
+ "a"
+ ],
+ [
+ "s",
+ "ka"
+ ],
+ [
+ "▁",
+ "ž"
+ ],
+ [
+ "p",
+ "d"
+ ],
+ [
+ "▁s",
+ "ous"
+ ],
+ [
+ "▁so",
+ "us"
+ ],
+ [
+ "▁sou",
+ "s"
+ ],
+ [
+ "▁con",
+ "ference"
+ ],
+ [
+ "▁confer",
+ "ence"
+ ],
+ [
+ "▁D",
+ "at"
+ ],
+ [
+ "▁Da",
+ "t"
+ ],
+ [
+ "▁",
+ "Dat"
+ ],
+ [
+ "Sc",
+ "roll"
+ ],
+ [
+ "▁stand",
+ "ards"
+ ],
+ [
+ "▁standard",
+ "s"
+ ],
+ [
+ "▁г",
+ "ру"
+ ],
+ [
+ "es",
+ "se"
+ ],
+ [
+ "ess",
+ "e"
+ ],
+ [
+ "▁citiz",
+ "ens"
+ ],
+ [
+ "▁citizen",
+ "s"
+ ],
+ [
+ "▁occur",
+ "red"
+ ],
+ [
+ "▁dem",
+ "ocr"
+ ],
+ [
+ "▁demo",
+ "cr"
+ ],
+ [
+ "▁e",
+ "lev"
+ ],
+ [
+ "▁el",
+ "ev"
+ ],
+ [
+ "▁ele",
+ "v"
+ ],
+ [
+ "▁S",
+ "em"
+ ],
+ [
+ "▁Se",
+ "m"
+ ],
+ [
+ "▁",
+ "Sem"
+ ],
+ [
+ "ens",
+ "us"
+ ],
+ [
+ "he",
+ "aders"
+ ],
+ [
+ "head",
+ "ers"
+ ],
+ [
+ "header",
+ "s"
+ ],
+ [
+ "▁Ch",
+ "ris"
+ ],
+ [
+ "im",
+ "ento"
+ ],
+ [
+ "iment",
+ "o"
+ ],
+ [
+ "imen",
+ "to"
+ ],
+ [
+ "ko",
+ "m"
+ ],
+ [
+ "k",
+ "om"
+ ],
+ [
+ "Co",
+ "r"
+ ],
+ [
+ "C",
+ "or"
+ ],
+ [
+ "MI",
+ "N"
+ ],
+ [
+ "M",
+ "IN"
+ ],
+ [
+ "us",
+ "her"
+ ],
+ [
+ "ush",
+ "er"
+ ],
+ [
+ "Data",
+ "base"
+ ],
+ [
+ "Dat",
+ "abase"
+ ],
+ [
+ "▁f",
+ "ormal"
+ ],
+ [
+ "▁for",
+ "mal"
+ ],
+ [
+ "▁form",
+ "al"
+ ],
+ [
+ "▁forma",
+ "l"
+ ],
+ [
+ "ig",
+ "ne"
+ ],
+ [
+ "ign",
+ "e"
+ ],
+ [
+ "▁organ",
+ "izations"
+ ],
+ [
+ "▁organiz",
+ "ations"
+ ],
+ [
+ "▁organization",
+ "s"
+ ],
+ [
+ "▁I",
+ "re"
+ ],
+ [
+ "▁Ir",
+ "e"
+ ],
+ [
+ "X",
+ "ml"
+ ],
+ [
+ "и",
+ "з"
+ ],
+ [
+ "▁p",
+ "ray"
+ ],
+ [
+ "▁pr",
+ "ay"
+ ],
+ [
+ "▁pra",
+ "y"
+ ],
+ [
+ "▁b",
+ "omb"
+ ],
+ [
+ "▁bo",
+ "mb"
+ ],
+ [
+ "▁bom",
+ "b"
+ ],
+ [
+ "▁m",
+ "and"
+ ],
+ [
+ "▁man",
+ "d"
+ ],
+ [
+ "▁ma",
+ "nd"
+ ],
+ [
+ "▁",
+ "mand"
+ ],
+ [
+ "er",
+ "ts"
+ ],
+ [
+ "ert",
+ "s"
+ ],
+ [
+ "▁c",
+ "lock"
+ ],
+ [
+ "▁cl",
+ "ock"
+ ],
+ [
+ "▁clo",
+ "ck"
+ ],
+ [
+ "▁",
+ "clock"
+ ],
+ [
+ "▁b",
+ "uck"
+ ],
+ [
+ "▁bu",
+ "ck"
+ ],
+ [
+ "ва",
+ "ли"
+ ],
+ [
+ "вал",
+ "и"
+ ],
+ [
+ "в",
+ "али"
+ ],
+ [
+ "en",
+ "sch"
+ ],
+ [
+ "ens",
+ "ch"
+ ],
+ [
+ "▁v",
+ "olt"
+ ],
+ [
+ "▁vo",
+ "lt"
+ ],
+ [
+ "▁vol",
+ "t"
+ ],
+ [
+ "▁",
+ "volt"
+ ],
+ [
+ "▁fil",
+ "ms"
+ ],
+ [
+ "▁film",
+ "s"
+ ],
+ [
+ "▁pl",
+ "ants"
+ ],
+ [
+ "▁plan",
+ "ts"
+ ],
+ [
+ "▁plant",
+ "s"
+ ],
+ [
+ "in",
+ "ode"
+ ],
+ [
+ "ino",
+ "de"
+ ],
+ [
+ "i",
+ "node"
+ ],
+ [
+ "Bo",
+ "olean"
+ ],
+ [
+ "▁restaur",
+ "ant"
+ ],
+ [
+ "ía",
+ "n"
+ ],
+ [
+ "í",
+ "an"
+ ],
+ [
+ "▁de",
+ "but"
+ ],
+ [
+ "▁deb",
+ "ut"
+ ],
+ [
+ "page",
+ "s"
+ ],
+ [
+ "pa",
+ "ges"
+ ],
+ [
+ "pag",
+ "es"
+ ],
+ [
+ "p",
+ "ages"
+ ],
+ [
+ "▁wor",
+ "dt"
+ ],
+ [
+ "▁word",
+ "t"
+ ],
+ [
+ "▁Б",
+ "а"
+ ],
+ [
+ "▁great",
+ "est"
+ ],
+ [
+ "(\"",
+ "/"
+ ],
+ [
+ "▁c",
+ "opyright"
+ ],
+ [
+ "▁copy",
+ "right"
+ ],
+ [
+ "▁",
+ "copyright"
+ ],
+ [
+ "▁r",
+ "it"
+ ],
+ [
+ "▁ri",
+ "t"
+ ],
+ [
+ "▁",
+ "rit"
+ ],
+ [
+ "size",
+ "of"
+ ],
+ [
+ "Tr",
+ "ace"
+ ],
+ [
+ "Tra",
+ "ce"
+ ],
+ [
+ "ue",
+ "nt"
+ ],
+ [
+ "uen",
+ "t"
+ ],
+ [
+ "u",
+ "ent"
+ ],
+ [
+ "ту",
+ "р"
+ ],
+ [
+ "т",
+ "ур"
+ ],
+ [
+ "▁k",
+ "o"
+ ],
+ [
+ "▁",
+ "ko"
+ ],
+ [
+ ":",
+ "\\"
+ ],
+ [
+ "▁b",
+ "igger"
+ ],
+ [
+ "▁big",
+ "ger"
+ ],
+ [
+ "▁perfect",
+ "ly"
+ ],
+ [
+ "ten",
+ "ance"
+ ],
+ [
+ "MA",
+ "SK"
+ ],
+ [
+ "M",
+ "ASK"
+ ],
+ [
+ "r",
+ "é"
+ ],
+ [
+ "▁e",
+ "tt"
+ ],
+ [
+ "▁et",
+ "t"
+ ],
+ [
+ "▁",
+ "ett"
+ ],
+ [
+ "▁n",
+ "ose"
+ ],
+ [
+ "▁no",
+ "se"
+ ],
+ [
+ "▁nos",
+ "e"
+ ],
+ [
+ "▁c",
+ "raft"
+ ],
+ [
+ "▁cr",
+ "aft"
+ ],
+ [
+ "▁",
+ "craft"
+ ],
+ [
+ "it",
+ "eral"
+ ],
+ [
+ "ite",
+ "ral"
+ ],
+ [
+ "iter",
+ "al"
+ ],
+ [
+ "▁discuss",
+ "ed"
+ ],
+ [
+ "▁Jew",
+ "ish"
+ ],
+ [
+ "C",
+ "ap"
+ ],
+ [
+ "▁Un",
+ "less"
+ ],
+ [
+ "▁Jack",
+ "son"
+ ],
+ [
+ "Att",
+ "ributes"
+ ],
+ [
+ "Attribute",
+ "s"
+ ],
+ [
+ "Attrib",
+ "utes"
+ ],
+ [
+ "▁l",
+ "unch"
+ ],
+ [
+ "▁lun",
+ "ch"
+ ],
+ [
+ "ö",
+ "l"
+ ],
+ [
+ "at",
+ "r"
+ ],
+ [
+ "a",
+ "tr"
+ ],
+ [
+ "▁pay",
+ "ing"
+ ],
+ [
+ "▁pa",
+ "ying"
+ ],
+ [
+ "Par",
+ "se"
+ ],
+ [
+ "Pars",
+ "e"
+ ],
+ [
+ "P",
+ "arse"
+ ],
+ [
+ "()",
+ "\r"
+ ],
+ [
+ "(",
+ ")\r"
+ ],
+ [
+ "la",
+ "d"
+ ],
+ [
+ "l",
+ "ad"
+ ],
+ [
+ "▁r",
+ "are"
+ ],
+ [
+ "▁ra",
+ "re"
+ ],
+ [
+ "▁[",
+ "];"
+ ],
+ [
+ "▁[]",
+ ";"
+ ],
+ [
+ "▁",
+ "[];"
+ ],
+ [
+ "st",
+ "one"
+ ],
+ [
+ "ston",
+ "e"
+ ],
+ [
+ "sto",
+ "ne"
+ ],
+ [
+ "▁u",
+ "nc"
+ ],
+ [
+ "▁un",
+ "c"
+ ],
+ [
+ "▁",
+ "unc"
+ ],
+ [
+ "▁def",
+ "ense"
+ ],
+ [
+ "▁defens",
+ "e"
+ ],
+ [
+ "}",
+ "+"
+ ],
+ [
+ "▁Gl",
+ "obal"
+ ],
+ [
+ "▁",
+ "Global"
+ ],
+ [
+ "▁Sov",
+ "iet"
+ ],
+ [
+ "▁Austral",
+ "ian"
+ ],
+ [
+ "▁Australia",
+ "n"
+ ],
+ [
+ "▁g",
+ "li"
+ ],
+ [
+ "▁gl",
+ "i"
+ ],
+ [
+ "var",
+ "iant"
+ ],
+ [
+ "vari",
+ "ant"
+ ],
+ [
+ "▁R",
+ "on"
+ ],
+ [
+ "▁Ro",
+ "n"
+ ],
+ [
+ "▁lo",
+ "an"
+ ],
+ [
+ "St",
+ "ep"
+ ],
+ [
+ "Ste",
+ "p"
+ ],
+ [
+ "me",
+ "mber"
+ ],
+ [
+ "mem",
+ "ber"
+ ],
+ [
+ "m",
+ "ember"
+ ],
+ [
+ "Sc",
+ "h"
+ ],
+ [
+ "S",
+ "ch"
+ ],
+ [
+ "▁Commit",
+ "tee"
+ ],
+ [
+ "▁s",
+ "pending"
+ ],
+ [
+ "▁sp",
+ "ending"
+ ],
+ [
+ "▁spend",
+ "ing"
+ ],
+ [
+ "▁T",
+ "ri"
+ ],
+ [
+ "▁Tr",
+ "i"
+ ],
+ [
+ "▁",
+ "Tri"
+ ],
+ [
+ "▁J",
+ "ournal"
+ ],
+ [
+ "▁Jour",
+ "nal"
+ ],
+ [
+ "▁",
+ "Journal"
+ ],
+ [
+ "▁s",
+ "ugar"
+ ],
+ [
+ "▁su",
+ "gar"
+ ],
+ [
+ "▁sug",
+ "ar"
+ ],
+ [
+ "el",
+ "ly"
+ ],
+ [
+ "ell",
+ "y"
+ ],
+ [
+ "HT",
+ "ML"
+ ],
+ [
+ "▁ad",
+ "vent"
+ ],
+ [
+ "▁adv",
+ "ent"
+ ],
+ [
+ "win",
+ "g"
+ ],
+ [
+ "wi",
+ "ng"
+ ],
+ [
+ "w",
+ "ing"
+ ],
+ [
+ "▁Wh",
+ "ether"
+ ],
+ [
+ "▁Whe",
+ "ther"
+ ],
+ [
+ "or",
+ "ation"
+ ],
+ [
+ "▁N",
+ "E"
+ ],
+ [
+ "▁",
+ "NE"
+ ],
+ [
+ "iv",
+ "eness"
+ ],
+ [
+ "ive",
+ "ness"
+ ],
+ [
+ "iven",
+ "ess"
+ ],
+ [
+ "▁h",
+ "av"
+ ],
+ [
+ "▁ha",
+ "v"
+ ],
+ [
+ "▁",
+ "hav"
+ ],
+ [
+ "▁con",
+ "scious"
+ ],
+ [
+ "▁",
+ "conscious"
+ ],
+ [
+ "ee",
+ "n"
+ ],
+ [
+ "e",
+ "en"
+ ],
+ [
+ "Sym",
+ "bol"
+ ],
+ [
+ "S",
+ "ymbol"
+ ],
+ [
+ "▁к",
+ "у"
+ ],
+ [
+ "▁",
+ "ку"
+ ],
+ [
+ "Log",
+ "ger"
+ ],
+ [
+ "▁L",
+ "ittle"
+ ],
+ [
+ "▁Lit",
+ "tle"
+ ],
+ [
+ "wide",
+ "t"
+ ],
+ [
+ "wi",
+ "det"
+ ],
+ [
+ "wid",
+ "et"
+ ],
+ [
+ "oc",
+ "ation"
+ ],
+ [
+ "pi",
+ "n"
+ ],
+ [
+ "p",
+ "in"
+ ],
+ [
+ "▁sym",
+ "met"
+ ],
+ [
+ "▁A",
+ "D"
+ ],
+ [
+ "▁",
+ "AD"
+ ],
+ [
+ "▁pos",
+ "ts"
+ ],
+ [
+ "▁po",
+ "sts"
+ ],
+ [
+ "▁post",
+ "s"
+ ],
+ [
+ "▁",
+ "posts"
+ ],
+ [
+ "sh",
+ "al"
+ ],
+ [
+ "sha",
+ "l"
+ ],
+ [
+ "s",
+ "hal"
+ ],
+ [
+ "▁Con",
+ "f"
+ ],
+ [
+ "▁Co",
+ "nf"
+ ],
+ [
+ "▁",
+ "Conf"
+ ],
+ [
+ "▁ch",
+ "ose"
+ ],
+ [
+ "▁cho",
+ "se"
+ ],
+ [
+ "ma",
+ "l"
+ ],
+ [
+ "m",
+ "al"
+ ],
+ [
+ "ul",
+ "o"
+ ],
+ [
+ "u",
+ "lo"
+ ],
+ [
+ "▁M",
+ "ethod"
+ ],
+ [
+ "▁",
+ "Method"
+ ],
+ [
+ "▁miss",
+ "ed"
+ ],
+ [
+ "▁mis",
+ "sed"
+ ],
+ [
+ "Re",
+ "move"
+ ],
+ [
+ "Rem",
+ "ove"
+ ],
+ [
+ "Aut",
+ "o"
+ ],
+ [
+ "A",
+ "uto"
+ ],
+ [
+ "VAL",
+ "UE"
+ ],
+ [
+ "th",
+ "let"
+ ],
+ [
+ "▁For",
+ "ce"
+ ],
+ [
+ "▁",
+ "Force"
+ ],
+ [
+ "p",
+ "f"
+ ],
+ [
+ "▁",
+ "Я"
+ ],
+ [
+ "la",
+ "te"
+ ],
+ [
+ "lat",
+ "e"
+ ],
+ [
+ "l",
+ "ate"
+ ],
+ [
+ "▁p",
+ "ul"
+ ],
+ [
+ "▁pu",
+ "l"
+ ],
+ [
+ "▁",
+ "pul"
+ ],
+ [
+ "Po",
+ "p"
+ ],
+ [
+ "P",
+ "op"
+ ],
+ [
+ "▁adv",
+ "anced"
+ ],
+ [
+ "▁advance",
+ "d"
+ ],
+ [
+ "air",
+ "es"
+ ],
+ [
+ "ai",
+ "res"
+ ],
+ [
+ "aire",
+ "s"
+ ],
+ [
+ "a",
+ "ires"
+ ],
+ [
+ "res",
+ "sed"
+ ],
+ [
+ "ress",
+ "ed"
+ ],
+ [
+ "resse",
+ "d"
+ ],
+ [
+ "r",
+ "essed"
+ ],
+ [
+ "AM",
+ "E"
+ ],
+ [
+ "A",
+ "ME"
+ ],
+ [
+ "be",
+ "ll"
+ ],
+ [
+ "bel",
+ "l"
+ ],
+ [
+ "b",
+ "ell"
+ ],
+ [
+ "ac",
+ "hing"
+ ],
+ [
+ "ach",
+ "ing"
+ ],
+ [
+ "achi",
+ "ng"
+ ],
+ [
+ "a",
+ "ching"
+ ],
+ [
+ "i",
+ "ć"
+ ],
+ [
+ "ec",
+ "ho"
+ ],
+ [
+ "ech",
+ "o"
+ ],
+ [
+ "e",
+ "cho"
+ ],
+ [
+ "H",
+ "S"
+ ],
+ [
+ "▁fun",
+ "ny"
+ ],
+ [
+ "ри",
+ "и"
+ ],
+ [
+ "▁e",
+ "er"
+ ],
+ [
+ "▁ve",
+ "get"
+ ],
+ [
+ "▁four",
+ "th"
+ ],
+ [
+ "c",
+ "f"
+ ],
+ [
+ "trans",
+ "form"
+ ],
+ [
+ "▁g",
+ "rown"
+ ],
+ [
+ "▁gr",
+ "own"
+ ],
+ [
+ "▁grow",
+ "n"
+ ],
+ [
+ "▁gro",
+ "wn"
+ ],
+ [
+ "▁Mc",
+ "C"
+ ],
+ [
+ "si",
+ "te"
+ ],
+ [
+ "s",
+ "ite"
+ ],
+ [
+ "▁b",
+ "eneath"
+ ],
+ [
+ "▁be",
+ "neath"
+ ],
+ [
+ "▁s",
+ "hell"
+ ],
+ [
+ "▁sh",
+ "ell"
+ ],
+ [
+ "▁she",
+ "ll"
+ ],
+ [
+ "▁shel",
+ "l"
+ ],
+ [
+ "▁",
+ "shell"
+ ],
+ [
+ "x",
+ "d"
+ ],
+ [
+ "Pl",
+ "ay"
+ ],
+ [
+ "P",
+ "lay"
+ ],
+ [
+ "sh",
+ "ort"
+ ],
+ [
+ "Ro",
+ "le"
+ ],
+ [
+ "R",
+ "ole"
+ ],
+ [
+ "▁relig",
+ "ion"
+ ],
+ [
+ "in",
+ "ator"
+ ],
+ [
+ "ina",
+ "tor"
+ ],
+ [
+ "}",
+ ""
+ ],
+ [
+ "▁El",
+ "iz"
+ ],
+ [
+ "▁Eli",
+ "z"
+ ],
+ [
+ "M",
+ "icrosoft"
+ ],
+ [
+ "▁v",
+ "ez"
+ ],
+ [
+ "▁ve",
+ "z"
+ ],
+ [
+ "▁",
+ "vez"
+ ],
+ [
+ "▁ра",
+ "бо"
+ ],
+ [
+ "▁",
+ "рабо"
+ ],
+ [
+ "re",
+ "ich"
+ ],
+ [
+ "rei",
+ "ch"
+ ],
+ [
+ "ve",
+ "t"
+ ],
+ [
+ "v",
+ "et"
+ ],
+ [
+ "en",
+ "um"
+ ],
+ [
+ "enu",
+ "m"
+ ],
+ [
+ "e",
+ "num"
+ ],
+ [
+ "▁w",
+ "elcome"
+ ],
+ [
+ "▁wel",
+ "come"
+ ],
+ [
+ "name",
+ "nt"
+ ],
+ [
+ "na",
+ "ment"
+ ],
+ [
+ "nam",
+ "ent"
+ ],
+ [
+ "n",
+ "ament"
+ ],
+ [
+ "▁j",
+ "an"
+ ],
+ [
+ "▁ja",
+ "n"
+ ],
+ [
+ "▁",
+ "jan"
+ ],
+ [
+ "▁c",
+ "ycle"
+ ],
+ [
+ "▁cy",
+ "cle"
+ ],
+ [
+ "▁cycl",
+ "e"
+ ],
+ [
+ "▁",
+ "cycle"
+ ],
+ [
+ "▁a",
+ "cknow"
+ ],
+ [
+ "▁ac",
+ "know"
+ ],
+ [
+ "▁w",
+ "ound"
+ ],
+ [
+ "▁wo",
+ "und"
+ ],
+ [
+ "id",
+ "i"
+ ],
+ [
+ "i",
+ "di"
+ ],
+ [
+ "▁poss",
+ "ibility"
+ ],
+ [
+ "an",
+ "notation"
+ ],
+ [
+ "annot",
+ "ation"
+ ],
+ [
+ "▁techn",
+ "ical"
+ ],
+ [
+ "▁f",
+ "old"
+ ],
+ [
+ "▁fol",
+ "d"
+ ],
+ [
+ "▁fo",
+ "ld"
+ ],
+ [
+ "▁",
+ "fold"
+ ],
+ [
+ "e",
+ "h"
+ ],
+ [
+ "ist",
+ "ence"
+ ],
+ [
+ "isten",
+ "ce"
+ ],
+ [
+ "▁re",
+ "ply"
+ ],
+ [
+ "▁rep",
+ "ly"
+ ],
+ [
+ "▁repl",
+ "y"
+ ],
+ [
+ "▁",
+ "reply"
+ ],
+ [
+ "et",
+ "es"
+ ],
+ [
+ "ete",
+ "s"
+ ],
+ [
+ "e",
+ "tes"
+ ],
+ [
+ "▁dec",
+ "ades"
+ ],
+ [
+ "▁decade",
+ "s"
+ ],
+ [
+ "wa",
+ "n"
+ ],
+ [
+ "w",
+ "an"
+ ],
+ [
+ "▁к",
+ "ра"
+ ],
+ [
+ "▁",
+ "кра"
+ ],
+ [
+ "▁L",
+ "ab"
+ ],
+ [
+ "▁La",
+ "b"
+ ],
+ [
+ "▁u",
+ "nf"
+ ],
+ [
+ "▁un",
+ "f"
+ ],
+ [
+ "▁im",
+ "per"
+ ],
+ [
+ "▁imp",
+ "er"
+ ],
+ [
+ "▁",
+ "imper"
+ ],
+ [
+ "▁b",
+ "ug"
+ ],
+ [
+ "▁bu",
+ "g"
+ ],
+ [
+ "▁",
+ "bug"
+ ],
+ [
+ "▁Th",
+ "ough"
+ ],
+ [
+ "th",
+ "rows"
+ ],
+ [
+ "throw",
+ "s"
+ ],
+ [
+ "Vis",
+ "ible"
+ ],
+ [
+ "V",
+ "isible"
+ ],
+ [
+ "pr",
+ "ev"
+ ],
+ [
+ "pre",
+ "v"
+ ],
+ [
+ "p",
+ "rev"
+ ],
+ [
+ "▁T",
+ "y"
+ ],
+ [
+ "▁",
+ "Ty"
+ ],
+ [
+ "▁de",
+ "pending"
+ ],
+ [
+ "▁depend",
+ "ing"
+ ],
+ [
+ "▁dep",
+ "ending"
+ ],
+ [
+ "▁pol",
+ "icies"
+ ],
+ [
+ "▁polic",
+ "ies"
+ ],
+ [
+ "an",
+ "dy"
+ ],
+ [
+ "and",
+ "y"
+ ],
+ [
+ "▁Ital",
+ "ian"
+ ],
+ [
+ "▁Italia",
+ "n"
+ ],
+ [
+ "um",
+ "a"
+ ],
+ [
+ "u",
+ "ma"
+ ],
+ [
+ "▁sign",
+ "s"
+ ],
+ [
+ "▁sig",
+ "ns"
+ ],
+ [
+ "▁Th",
+ "rough"
+ ],
+ [
+ "б",
+ "ы"
+ ],
+ [
+ "bo",
+ "t"
+ ],
+ [
+ "b",
+ "ot"
+ ],
+ [
+ "▁pub",
+ "lish"
+ ],
+ [
+ "▁publi",
+ "sh"
+ ],
+ [
+ "▁",
+ "publish"
+ ],
+ [
+ ")*",
+ "*"
+ ],
+ [
+ ")",
+ "**"
+ ],
+ [
+ "AT",
+ "TR"
+ ],
+ [
+ "ATT",
+ "R"
+ ],
+ [
+ "ir",
+ "al"
+ ],
+ [
+ "ira",
+ "l"
+ ],
+ [
+ "i",
+ "ral"
+ ],
+ [
+ "V",
+ "T"
+ ],
+ [
+ "▁recogn",
+ "ized"
+ ],
+ [
+ "▁recognize",
+ "d"
+ ],
+ [
+ "▁L",
+ "ind"
+ ],
+ [
+ "▁Lin",
+ "d"
+ ],
+ [
+ "▁Li",
+ "nd"
+ ],
+ [
+ "ect",
+ "ion"
+ ],
+ [
+ "e",
+ "ction"
+ ],
+ [
+ "▁rel",
+ "atively"
+ ],
+ [
+ "▁relative",
+ "ly"
+ ],
+ [
+ "▁relativ",
+ "ely"
+ ],
+ [
+ "▁A",
+ "h"
+ ],
+ [
+ "▁",
+ "Ah"
+ ],
+ [
+ "▁D",
+ "ig"
+ ],
+ [
+ "▁Di",
+ "g"
+ ],
+ [
+ "▁",
+ "Dig"
+ ],
+ [
+ "ц",
+ "ь"
+ ],
+ [
+ "ic",
+ "ket"
+ ],
+ [
+ "ick",
+ "et"
+ ],
+ [
+ "▁specific",
+ "ally"
+ ],
+ [
+ "no",
+ "st"
+ ],
+ [
+ "nos",
+ "t"
+ ],
+ [
+ "n",
+ "ost"
+ ],
+ [
+ "▁g",
+ "rass"
+ ],
+ [
+ "▁gr",
+ "ass"
+ ],
+ [
+ "▁gra",
+ "ss"
+ ],
+ [
+ "▁gras",
+ "s"
+ ],
+ [
+ "▁c",
+ "auses"
+ ],
+ [
+ "▁caus",
+ "es"
+ ],
+ [
+ "▁cause",
+ "s"
+ ],
+ [
+ "▁ca",
+ "uses"
+ ],
+ [
+ "т",
+ "во"
+ ],
+ [
+ "ut",
+ "ter"
+ ],
+ [
+ "utt",
+ "er"
+ ],
+ [
+ "▁F",
+ "estival"
+ ],
+ [
+ "▁Fest",
+ "ival"
+ ],
+ [
+ "gr",
+ "eg"
+ ],
+ [
+ "gre",
+ "g"
+ ],
+ [
+ "g",
+ "reg"
+ ],
+ [
+ "▁weap",
+ "ons"
+ ],
+ [
+ "▁weapon",
+ "s"
+ ],
+ [
+ "▁s",
+ "ir"
+ ],
+ [
+ "▁si",
+ "r"
+ ],
+ [
+ "▁Virgin",
+ "ia"
+ ],
+ [
+ "lo",
+ "gin"
+ ],
+ [
+ "log",
+ "in"
+ ],
+ [
+ "▁s",
+ "chedul"
+ ],
+ [
+ "▁sched",
+ "ul"
+ ],
+ [
+ "сь",
+ "кого"
+ ],
+ [
+ "сько",
+ "го"
+ ],
+ [
+ "▁l",
+ "osing"
+ ],
+ [
+ "▁lo",
+ "sing"
+ ],
+ [
+ "▁los",
+ "ing"
+ ],
+ [
+ "▁E",
+ "urop"
+ ],
+ [
+ "▁Euro",
+ "p"
+ ],
+ [
+ "▁Eu",
+ "rop"
+ ],
+ [
+ "\">",
+ "<"
+ ],
+ [
+ "\"",
+ "><"
+ ],
+ [
+ "as",
+ "p"
+ ],
+ [
+ "a",
+ "sp"
+ ],
+ [
+ "aj",
+ "o"
+ ],
+ [
+ "a",
+ "jo"
+ ],
+ [
+ "ex",
+ "ports"
+ ],
+ [
+ "exp",
+ "orts"
+ ],
+ [
+ "export",
+ "s"
+ ],
+ [
+ "▁N",
+ "ode"
+ ],
+ [
+ "▁No",
+ "de"
+ ],
+ [
+ "▁",
+ "Node"
+ ],
+ [
+ "▁j",
+ "ako"
+ ],
+ [
+ "▁ja",
+ "ko"
+ ],
+ [
+ "▁jak",
+ "o"
+ ],
+ [
+ "▁y",
+ "a"
+ ],
+ [
+ "▁",
+ "ya"
+ ],
+ [
+ "▁success",
+ "fully"
+ ],
+ [
+ "▁successful",
+ "ly"
+ ],
+ [
+ "▁friend",
+ "ly"
+ ],
+ [
+ "▁",
+ "friendly"
+ ],
+ [
+ "buf",
+ "f"
+ ],
+ [
+ "bu",
+ "ff"
+ ],
+ [
+ "b",
+ "uff"
+ ],
+ [
+ "DE",
+ "FAULT"
+ ],
+ [
+ "▁pre",
+ "gn"
+ ],
+ [
+ "▁preg",
+ "n"
+ ],
+ [
+ "Requ",
+ "ired"
+ ],
+ [
+ "Require",
+ "d"
+ ],
+ [
+ "▁b",
+ "inary"
+ ],
+ [
+ "▁bin",
+ "ary"
+ ],
+ [
+ "▁",
+ "binary"
+ ],
+ [
+ "is",
+ "ting"
+ ],
+ [
+ "ist",
+ "ing"
+ ],
+ [
+ "isti",
+ "ng"
+ ],
+ [
+ "▁st",
+ "ared"
+ ],
+ [
+ "▁star",
+ "ed"
+ ],
+ [
+ "▁stare",
+ "d"
+ ],
+ [
+ "▁sta",
+ "red"
+ ],
+ [
+ "▁circum",
+ "stances"
+ ],
+ [
+ "▁х",
+ "о"
+ ],
+ [
+ "▁",
+ "хо"
+ ],
+ [
+ "re",
+ "i"
+ ],
+ [
+ "r",
+ "ei"
+ ],
+ [
+ "▁Г",
+ "о"
+ ],
+ [
+ "Trans",
+ "form"
+ ],
+ [
+ "cn",
+ "t"
+ ],
+ [
+ "c",
+ "nt"
+ ],
+ [
+ "▁E",
+ "xt"
+ ],
+ [
+ "▁Ex",
+ "t"
+ ],
+ [
+ "▁",
+ "Ext"
+ ],
+ [
+ "re",
+ "port"
+ ],
+ [
+ "rep",
+ "ort"
+ ],
+ [
+ "repo",
+ "rt"
+ ],
+ [
+ "VER",
+ "SION"
+ ],
+ [
+ "▁an",
+ "aly"
+ ],
+ [
+ "▁anal",
+ "y"
+ ],
+ [
+ "▁",
+ "analy"
+ ],
+ [
+ "▁M",
+ "arg"
+ ],
+ [
+ "▁Mar",
+ "g"
+ ],
+ [
+ "▁Ma",
+ "rg"
+ ],
+ [
+ "▁al",
+ "leg"
+ ],
+ [
+ "▁all",
+ "eg"
+ ],
+ [
+ "▁alle",
+ "g"
+ ],
+ [
+ "build",
+ "er"
+ ],
+ [
+ "b",
+ "uilder"
+ ],
+ [
+ "To",
+ "String"
+ ],
+ [
+ "La",
+ "yer"
+ ],
+ [
+ "L",
+ "ayer"
+ ],
+ [
+ "ís",
+ "t"
+ ],
+ [
+ "í",
+ "st"
+ ],
+ [
+ "Pro",
+ "p"
+ ],
+ [
+ "Pr",
+ "op"
+ ],
+ [
+ "P",
+ "rop"
+ ],
+ [
+ "▁E",
+ "mp"
+ ],
+ [
+ "▁Em",
+ "p"
+ ],
+ [
+ "▁",
+ "Emp"
+ ],
+ [
+ "}",
+ "]"
+ ],
+ [
+ "▁s",
+ "elling"
+ ],
+ [
+ "▁sell",
+ "ing"
+ ],
+ [
+ "▁sel",
+ "ling"
+ ],
+ [
+ "▁",
+ "selling"
+ ],
+ [
+ "▁que",
+ "ue"
+ ],
+ [
+ "▁",
+ "queue"
+ ],
+ [
+ "▁ser",
+ "iously"
+ ],
+ [
+ "▁serious",
+ "ly"
+ ],
+ [
+ "▁L",
+ "ead"
+ ],
+ [
+ "▁Le",
+ "ad"
+ ],
+ [
+ "▁",
+ "Lead"
+ ],
+ [
+ "text",
+ "it"
+ ],
+ [
+ "tex",
+ "tit"
+ ],
+ [
+ "test",
+ "ing"
+ ],
+ [
+ "tes",
+ "ting"
+ ],
+ [
+ "▁П",
+ "ре"
+ ],
+ [
+ "se",
+ "curity"
+ ],
+ [
+ "sec",
+ "urity"
+ ],
+ [
+ "ia",
+ "ł"
+ ],
+ [
+ "i",
+ "ał"
+ ],
+ [
+ "ú",
+ "n"
+ ],
+ [
+ "ch",
+ "ip"
+ ],
+ [
+ "chi",
+ "p"
+ ],
+ [
+ "c",
+ "hip"
+ ],
+ [
+ "▁c",
+ "andidate"
+ ],
+ [
+ "▁candid",
+ "ate"
+ ],
+ [
+ "▁min",
+ "ister"
+ ],
+ [
+ "▁mini",
+ "ster"
+ ],
+ [
+ "▁minist",
+ "er"
+ ],
+ [
+ "▁",
+ "minister"
+ ],
+ [
+ "er",
+ "ia"
+ ],
+ [
+ "eri",
+ "a"
+ ],
+ [
+ "e",
+ "ria"
+ ],
+ [
+ "▁H",
+ "et"
+ ],
+ [
+ "▁He",
+ "t"
+ ],
+ [
+ "ди",
+ "н"
+ ],
+ [
+ "д",
+ "ин"
+ ],
+ [
+ "▁Brit",
+ "ain"
+ ],
+ [
+ "▁b",
+ "arely"
+ ],
+ [
+ "▁bar",
+ "ely"
+ ],
+ [
+ "▁bare",
+ "ly"
+ ],
+ [
+ "▁s",
+ "ty"
+ ],
+ [
+ "▁st",
+ "y"
+ ],
+ [
+ "▁",
+ "sty"
+ ],
+ [
+ "▁Span",
+ "ish"
+ ],
+ [
+ "▁V",
+ "en"
+ ],
+ [
+ "▁Ve",
+ "n"
+ ],
+ [
+ "time",
+ "r"
+ ],
+ [
+ "ti",
+ "mer"
+ ],
+ [
+ "tim",
+ "er"
+ ],
+ [
+ "t",
+ "imer"
+ ],
+ [
+ "кі",
+ "в"
+ ],
+ [
+ "к",
+ "ів"
+ ],
+ [
+ "▁document",
+ "s"
+ ],
+ [
+ "▁doc",
+ "uments"
+ ],
+ [
+ "('",
+ "."
+ ],
+ [
+ "(",
+ "'."
+ ],
+ [
+ "▁d",
+ "ebug"
+ ],
+ [
+ "▁de",
+ "bug"
+ ],
+ [
+ "▁deb",
+ "ug"
+ ],
+ [
+ "▁",
+ "debug"
+ ],
+ [
+ "▁cont",
+ "ro"
+ ],
+ [
+ "▁contr",
+ "o"
+ ],
+ [
+ "сто",
+ "я"
+ ],
+ [
+ "▁j",
+ "oy"
+ ],
+ [
+ "▁jo",
+ "y"
+ ],
+ [
+ "▁",
+ "joy"
+ ],
+ [
+ "S",
+ "n"
+ ],
+ [
+ "In",
+ "v"
+ ],
+ [
+ "I",
+ "nv"
+ ],
+ [
+ "▁pro",
+ "tocol"
+ ],
+ [
+ "▁proto",
+ "col"
+ ],
+ [
+ "▁prot",
+ "ocol"
+ ],
+ [
+ "▁",
+ "protocol"
+ ],
+ [
+ "▁f",
+ "aces"
+ ],
+ [
+ "▁face",
+ "s"
+ ],
+ [
+ "▁fac",
+ "es"
+ ],
+ [
+ "▁fa",
+ "ces"
+ ],
+ [
+ "▁",
+ "faces"
+ ],
+ [
+ "▁Des",
+ "pite"
+ ],
+ [
+ "se",
+ "d"
+ ],
+ [
+ "s",
+ "ed"
+ ],
+ [
+ "Con",
+ "f"
+ ],
+ [
+ "Co",
+ "nf"
+ ],
+ [
+ "AR",
+ "G"
+ ],
+ [
+ "A",
+ "RG"
+ ],
+ [
+ "▁e",
+ "volution"
+ ],
+ [
+ "▁ev",
+ "olution"
+ ],
+ [
+ "▁t",
+ "od"
+ ],
+ [
+ "▁to",
+ "d"
+ ],
+ [
+ "▁P",
+ "romise"
+ ],
+ [
+ "▁Prom",
+ "ise"
+ ],
+ [
+ "▁",
+ "Promise"
+ ],
+ [
+ "▁pos",
+ "ted"
+ ],
+ [
+ "▁po",
+ "sted"
+ ],
+ [
+ "▁post",
+ "ed"
+ ],
+ [
+ "Per",
+ "m"
+ ],
+ [
+ "Pe",
+ "rm"
+ ],
+ [
+ "P",
+ "erm"
+ ],
+ [
+ "be",
+ "t"
+ ],
+ [
+ "b",
+ "et"
+ ],
+ [
+ "An",
+ "g"
+ ],
+ [
+ "A",
+ "ng"
+ ],
+ [
+ "J",
+ "ust"
+ ],
+ [
+ "▁r",
+ "um"
+ ],
+ [
+ "▁ru",
+ "m"
+ ],
+ [
+ "▁",
+ "rum"
+ ],
+ [
+ "la",
+ "yer"
+ ],
+ [
+ "lay",
+ "er"
+ ],
+ [
+ "l",
+ "ayer"
+ ],
+ [
+ "▁beh",
+ "avi"
+ ],
+ [
+ "▁behav",
+ "i"
+ ],
+ [
+ "ip",
+ "ping"
+ ],
+ [
+ "ipp",
+ "ing"
+ ],
+ [
+ "ippi",
+ "ng"
+ ],
+ [
+ "i",
+ "pping"
+ ],
+ [
+ "▁d",
+ "ynam"
+ ],
+ [
+ "▁dy",
+ "nam"
+ ],
+ [
+ "▁dyn",
+ "am"
+ ],
+ [
+ "▁sch",
+ "eme"
+ ],
+ [
+ "▁sche",
+ "me"
+ ],
+ [
+ "▁",
+ "scheme"
+ ],
+ [
+ "▁pro",
+ "to"
+ ],
+ [
+ "▁pr",
+ "oto"
+ ],
+ [
+ "▁prot",
+ "o"
+ ],
+ [
+ "▁",
+ "proto"
+ ],
+ [
+ ")",
+ "/"
+ ],
+ [
+ "Col",
+ "lections"
+ ],
+ [
+ "Collection",
+ "s"
+ ],
+ [
+ "Collect",
+ "ions"
+ ],
+ [
+ "ri",
+ "ev"
+ ],
+ [
+ "rie",
+ "v"
+ ],
+ [
+ "r",
+ "iev"
+ ],
+ [
+ "▁C",
+ "lick"
+ ],
+ [
+ "▁Cl",
+ "ick"
+ ],
+ [
+ "▁",
+ "Click"
+ ],
+ [
+ "▁u",
+ "ns"
+ ],
+ [
+ "▁un",
+ "s"
+ ],
+ [
+ "▁",
+ "uns"
+ ],
+ [
+ "wide",
+ "tilde"
+ ],
+ [
+ "widet",
+ "ilde"
+ ],
+ [
+ "▁remember",
+ "ed"
+ ],
+ [
+ "г",
+ "і"
+ ],
+ [
+ "in",
+ "ates"
+ ],
+ [
+ "ina",
+ "tes"
+ ],
+ [
+ "inate",
+ "s"
+ ],
+ [
+ "▁incor",
+ "por"
+ ],
+ [
+ "▁De",
+ "scription"
+ ],
+ [
+ "▁Des",
+ "cription"
+ ],
+ [
+ "▁",
+ "Description"
+ ],
+ [
+ "▁pre",
+ "pare"
+ ],
+ [
+ "▁prep",
+ "are"
+ ],
+ [
+ "▁prepar",
+ "e"
+ ],
+ [
+ "▁",
+ "prepare"
+ ],
+ [
+ "▁F",
+ "inal"
+ ],
+ [
+ "▁Fin",
+ "al"
+ ],
+ [
+ "▁Fi",
+ "nal"
+ ],
+ [
+ "▁",
+ "Final"
+ ],
+ [
+ "u",
+ "ation"
+ ],
+ [
+ "▁Qu",
+ "een"
+ ],
+ [
+ "▁Que",
+ "en"
+ ],
+ [
+ ">",
+ ";"
+ ],
+ [
+ "▁autom",
+ "atically"
+ ],
+ [
+ "▁automatic",
+ "ally"
+ ],
+ [
+ "▁sh",
+ "arp"
+ ],
+ [
+ "▁shar",
+ "p"
+ ],
+ [
+ "▁sha",
+ "rp"
+ ],
+ [
+ "▁me",
+ "at"
+ ],
+ [
+ "at",
+ "eur"
+ ],
+ [
+ "ate",
+ "ur"
+ ],
+ [
+ "as",
+ "tern"
+ ],
+ [
+ "ast",
+ "ern"
+ ],
+ [
+ "aster",
+ "n"
+ ],
+ [
+ "aste",
+ "rn"
+ ],
+ [
+ "▁st",
+ "uck"
+ ],
+ [
+ "ASS",
+ "ERT"
+ ],
+ [
+ "▁pl",
+ "anned"
+ ],
+ [
+ "▁plan",
+ "ned"
+ ],
+ [
+ "do",
+ "ts"
+ ],
+ [
+ "dot",
+ "s"
+ ],
+ [
+ "d",
+ "ots"
+ ],
+ [
+ "ook",
+ "ie"
+ ],
+ [
+ "oo",
+ "kie"
+ ],
+ [
+ "▁His",
+ "tor"
+ ],
+ [
+ "▁Hist",
+ "or"
+ ],
+ [
+ "▁re",
+ "views"
+ ],
+ [
+ "▁review",
+ "s"
+ ],
+ [
+ "IM",
+ "P"
+ ],
+ [
+ "I",
+ "MP"
+ ],
+ [
+ "▁answ",
+ "ered"
+ ],
+ [
+ "▁answer",
+ "ed"
+ ],
+ [
+ "To",
+ "tal"
+ ],
+ [
+ "T",
+ "otal"
+ ],
+ [
+ "▁s",
+ "au"
+ ],
+ [
+ "▁sa",
+ "u"
+ ],
+ [
+ "▁Me",
+ "xico"
+ ],
+ [
+ "▁Mex",
+ "ico"
+ ],
+ [
+ "contin",
+ "ue"
+ ],
+ [
+ "▁App",
+ "le"
+ ],
+ [
+ "▁Ap",
+ "ple"
+ ],
+ [
+ "like",
+ "ly"
+ ],
+ [
+ "lik",
+ "ely"
+ ],
+ [
+ "з",
+ "ва"
+ ],
+ [
+ "us",
+ "ers"
+ ],
+ [
+ "use",
+ "rs"
+ ],
+ [
+ "user",
+ "s"
+ ],
+ [
+ "▁ident",
+ "ified"
+ ],
+ [
+ "▁L",
+ "ev"
+ ],
+ [
+ "▁Le",
+ "v"
+ ],
+ [
+ "▁m",
+ "ol"
+ ],
+ [
+ "▁mo",
+ "l"
+ ],
+ [
+ "▁Is",
+ "lam"
+ ],
+ [
+ "▁com",
+ "mitted"
+ ],
+ [
+ "▁comm",
+ "itted"
+ ],
+ [
+ "▁commit",
+ "ted"
+ ],
+ [
+ "wr",
+ "it"
+ ],
+ [
+ "w",
+ "rit"
+ ],
+ [
+ "бе",
+ "р"
+ ],
+ [
+ "б",
+ "ер"
+ ],
+ [
+ "ri",
+ "ft"
+ ],
+ [
+ "rif",
+ "t"
+ ],
+ [
+ "r",
+ "ift"
+ ],
+ [
+ "▁inter",
+ "rupt"
+ ],
+ [
+ "▁",
+ "interrupt"
+ ],
+ [
+ "▁read",
+ "only"
+ ],
+ [
+ "sch",
+ "ema"
+ ],
+ [
+ "sche",
+ "ma"
+ ],
+ [
+ "s",
+ "chema"
+ ],
+ [
+ "S",
+ "m"
+ ],
+ [
+ "D",
+ "ouble"
+ ],
+ [
+ "az",
+ "a"
+ ],
+ [
+ "a",
+ "za"
+ ],
+ [
+ "▁H",
+ "al"
+ ],
+ [
+ "▁Ha",
+ "l"
+ ],
+ [
+ "▁",
+ "Hal"
+ ],
+ [
+ "Mo",
+ "ve"
+ ],
+ [
+ "M",
+ "ove"
+ ],
+ [
+ "▁S",
+ "eries"
+ ],
+ [
+ "▁Se",
+ "ries"
+ ],
+ [
+ "▁Ser",
+ "ies"
+ ],
+ [
+ "▁Serie",
+ "s"
+ ],
+ [
+ "▁",
+ "Series"
+ ],
+ [
+ "in",
+ "line"
+ ],
+ [
+ "▁кото",
+ "ры"
+ ],
+ [
+ "so",
+ "c"
+ ],
+ [
+ "s",
+ "oc"
+ ],
+ [
+ "▁t",
+ "ent"
+ ],
+ [
+ "▁te",
+ "nt"
+ ],
+ [
+ "▁ten",
+ "t"
+ ],
+ [
+ "▁a",
+ "mer"
+ ],
+ [
+ "▁am",
+ "er"
+ ],
+ [
+ "▁",
+ "amer"
+ ],
+ [
+ "ak",
+ "i"
+ ],
+ [
+ "a",
+ "ki"
+ ],
+ [
+ "▁l",
+ "ady"
+ ],
+ [
+ "▁la",
+ "dy"
+ ],
+ [
+ "▁lad",
+ "y"
+ ],
+ [
+ "▁t",
+ "ired"
+ ],
+ [
+ "▁ti",
+ "red"
+ ],
+ [
+ "▁tire",
+ "d"
+ ],
+ [
+ "▁tir",
+ "ed"
+ ],
+ [
+ "if",
+ "i"
+ ],
+ [
+ "i",
+ "fi"
+ ],
+ [
+ "▁m",
+ "ême"
+ ],
+ [
+ "▁",
+ "même"
+ ],
+ [
+ "ou",
+ "ver"
+ ],
+ [
+ "▁a",
+ "side"
+ ],
+ [
+ "▁as",
+ "ide"
+ ],
+ [
+ "Di",
+ "d"
+ ],
+ [
+ "D",
+ "id"
+ ],
+ [
+ "',",
+ "\r"
+ ],
+ [
+ "'",
+ ",\r"
+ ],
+ [
+ "▁br",
+ "inging"
+ ],
+ [
+ "▁bring",
+ "ing"
+ ],
+ [
+ "Draw",
+ "ing"
+ ],
+ [
+ "ar",
+ "o"
+ ],
+ [
+ "a",
+ "ro"
+ ],
+ [
+ "▁R",
+ "h"
+ ],
+ [
+ "▁N",
+ "az"
+ ],
+ [
+ "▁Na",
+ "z"
+ ],
+ [
+ "es",
+ "so"
+ ],
+ [
+ "ess",
+ "o"
+ ],
+ [
+ "▁re",
+ "action"
+ ],
+ [
+ "▁react",
+ "ion"
+ ],
+ [
+ "mit",
+ "ted"
+ ],
+ [
+ "mitt",
+ "ed"
+ ],
+ [
+ "m",
+ "itted"
+ ],
+ [
+ "▁abs",
+ "olute"
+ ],
+ [
+ "▁absolut",
+ "e"
+ ],
+ [
+ "▁",
+ "absolute"
+ ],
+ [
+ "ha",
+ "ust"
+ ],
+ [
+ "haus",
+ "t"
+ ],
+ [
+ "((",
+ ")"
+ ],
+ [
+ "(",
+ "()"
+ ],
+ [
+ "▁T",
+ "ask"
+ ],
+ [
+ "▁Ta",
+ "sk"
+ ],
+ [
+ "▁",
+ "Task"
+ ],
+ [
+ "ER",
+ "S"
+ ],
+ [
+ "E",
+ "RS"
+ ],
+ [
+ "▁^",
+ "{"
+ ],
+ [
+ "▁",
+ "^{"
+ ],
+ [
+ "V",
+ "D"
+ ],
+ [
+ "▁t",
+ "one"
+ ],
+ [
+ "▁to",
+ "ne"
+ ],
+ [
+ "▁ton",
+ "e"
+ ],
+ [
+ "dis",
+ "t"
+ ],
+ [
+ "di",
+ "st"
+ ],
+ [
+ "d",
+ "ist"
+ ],
+ [
+ "v",
+ "s"
+ ],
+ [
+ "▁whe",
+ "el"
+ ],
+ [
+ "▁",
+ "wheel"
+ ],
+ [
+ "▁administr",
+ "ation"
+ ],
+ [
+ "▁admin",
+ "istration"
+ ],
+ [
+ "▁inter",
+ "ests"
+ ],
+ [
+ "▁interest",
+ "s"
+ ],
+ [
+ "▁point",
+ "er"
+ ],
+ [
+ "▁po",
+ "inter"
+ ],
+ [
+ "▁",
+ "pointer"
+ ],
+ [
+ "▁en",
+ "counter"
+ ],
+ [
+ "▁enc",
+ "ounter"
+ ],
+ [
+ "ave",
+ "r"
+ ],
+ [
+ "av",
+ "er"
+ ],
+ [
+ "a",
+ "ver"
+ ],
+ [
+ "▁n",
+ "ord"
+ ],
+ [
+ "▁no",
+ "rd"
+ ],
+ [
+ "▁nor",
+ "d"
+ ],
+ [
+ "ke",
+ "t"
+ ],
+ [
+ "k",
+ "et"
+ ],
+ [
+ "▁b",
+ "each"
+ ],
+ [
+ "▁be",
+ "ach"
+ ],
+ [
+ "▁enjoy",
+ "ed"
+ ],
+ [
+ "cont",
+ "ains"
+ ],
+ [
+ "▁app",
+ "end"
+ ],
+ [
+ "▁ap",
+ "pend"
+ ],
+ [
+ "▁appe",
+ "nd"
+ ],
+ [
+ "▁",
+ "append"
+ ],
+ [
+ "W",
+ "ait"
+ ],
+ [
+ "▁s",
+ "quad"
+ ],
+ [
+ "▁squ",
+ "ad"
+ ],
+ [
+ "ze",
+ "l"
+ ],
+ [
+ "z",
+ "el"
+ ],
+ [
+ "▁med",
+ "ium"
+ ],
+ [
+ "▁medi",
+ "um"
+ ],
+ [
+ "▁",
+ "medium"
+ ],
+ [
+ "▁s",
+ "ending"
+ ],
+ [
+ "▁send",
+ "ing"
+ ],
+ [
+ "▁sen",
+ "ding"
+ ],
+ [
+ "▁L",
+ "ady"
+ ],
+ [
+ "▁La",
+ "dy"
+ ],
+ [
+ "▁Lad",
+ "y"
+ ],
+ [
+ "ç",
+ "ões"
+ ],
+ [
+ "▁dest",
+ "ination"
+ ],
+ [
+ "▁destin",
+ "ation"
+ ],
+ [
+ "▁",
+ "destination"
+ ],
+ [
+ "ny",
+ "ch"
+ ],
+ [
+ "n",
+ "ych"
+ ],
+ [
+ "▁conf",
+ "lict"
+ ],
+ [
+ "▁conflic",
+ "t"
+ ],
+ [
+ "▁L",
+ "y"
+ ],
+ [
+ "▁v",
+ "ul"
+ ],
+ [
+ "▁vu",
+ "l"
+ ],
+ [
+ "▁bas",
+ "ically"
+ ],
+ [
+ "▁basic",
+ "ally"
+ ],
+ [
+ "re",
+ "ated"
+ ],
+ [
+ "reat",
+ "ed"
+ ],
+ [
+ "reate",
+ "d"
+ ],
+ [
+ "rea",
+ "ted"
+ ],
+ [
+ "bl",
+ "ack"
+ ],
+ [
+ "ug",
+ "ins"
+ ],
+ [
+ "ugin",
+ "s"
+ ],
+ [
+ "▁cal",
+ "m"
+ ],
+ [
+ "▁ca",
+ "lm"
+ ],
+ [
+ "ér",
+ "ie"
+ ],
+ [
+ "éri",
+ "e"
+ ],
+ [
+ "é",
+ "rie"
+ ],
+ [
+ "ha",
+ "r"
+ ],
+ [
+ "h",
+ "ar"
+ ],
+ [
+ "ла",
+ "н"
+ ],
+ [
+ "л",
+ "ан"
+ ],
+ [
+ "▁С",
+ "е"
+ ],
+ [
+ "w",
+ "atch"
+ ],
+ [
+ "▁P",
+ "ut"
+ ],
+ [
+ "▁Pu",
+ "t"
+ ],
+ [
+ "▁",
+ "Put"
+ ],
+ [
+ "▁d",
+ "ump"
+ ],
+ [
+ "▁du",
+ "mp"
+ ],
+ [
+ "▁",
+ "dump"
+ ],
+ [
+ "ac",
+ "her"
+ ],
+ [
+ "ach",
+ "er"
+ ],
+ [
+ "ache",
+ "r"
+ ],
+ [
+ "a",
+ "cher"
+ ],
+ [
+ "sc",
+ "roll"
+ ],
+ [
+ "scr",
+ "oll"
+ ],
+ [
+ "▁cl",
+ "aimed"
+ ],
+ [
+ "▁claim",
+ "ed"
+ ],
+ [
+ "▁",
+ "claimed"
+ ],
+ [
+ "▁Cont",
+ "rol"
+ ],
+ [
+ "▁",
+ "Control"
+ ],
+ [
+ "▁bl",
+ "ind"
+ ],
+ [
+ "en",
+ "ti"
+ ],
+ [
+ "ent",
+ "i"
+ ],
+ [
+ "▁Ke",
+ "ep"
+ ],
+ [
+ "▁",
+ "Keep"
+ ],
+ [
+ "▁Develop",
+ "ment"
+ ],
+ [
+ "im",
+ "ages"
+ ],
+ [
+ "image",
+ "s"
+ ],
+ [
+ "ima",
+ "ges"
+ ],
+ [
+ "imag",
+ "es"
+ ],
+ [
+ "▁t",
+ "ough"
+ ],
+ [
+ "▁to",
+ "ugh"
+ ],
+ [
+ "▁tou",
+ "gh"
+ ],
+ [
+ "ge",
+ "bra"
+ ],
+ [
+ "geb",
+ "ra"
+ ],
+ [
+ "▁se",
+ "pt"
+ ],
+ [
+ "▁sep",
+ "t"
+ ],
+ [
+ "he",
+ "w"
+ ],
+ [
+ "h",
+ "ew"
+ ],
+ [
+ "▁s",
+ "kill"
+ ],
+ [
+ "▁sk",
+ "ill"
+ ],
+ [
+ "▁ski",
+ "ll"
+ ],
+ [
+ "▁",
+ "skill"
+ ],
+ [
+ "▁T",
+ "ay"
+ ],
+ [
+ "▁Ta",
+ "y"
+ ],
+ [
+ "▁k",
+ "tó"
+ ],
+ [
+ "ow",
+ "ner"
+ ],
+ [
+ "own",
+ "er"
+ ],
+ [
+ "par",
+ "e"
+ ],
+ [
+ "pa",
+ "re"
+ ],
+ [
+ "p",
+ "are"
+ ],
+ [
+ "▁f",
+ "ee"
+ ],
+ [
+ "▁fe",
+ "e"
+ ],
+ [
+ "▁",
+ "fee"
+ ],
+ [
+ "▁contin",
+ "ues"
+ ],
+ [
+ "▁continue",
+ "s"
+ ],
+ [
+ "▁continu",
+ "es"
+ ],
+ [
+ "▁k",
+ "an"
+ ],
+ [
+ "▁ka",
+ "n"
+ ],
+ [
+ "▁",
+ "kan"
+ ],
+ [
+ "be",
+ "s"
+ ],
+ [
+ "b",
+ "es"
+ ],
+ [
+ "▁c",
+ "ha"
+ ],
+ [
+ "▁ch",
+ "a"
+ ],
+ [
+ "▁",
+ "cha"
+ ],
+ [
+ "ov",
+ "o"
+ ],
+ [
+ "o",
+ "vo"
+ ],
+ [
+ "▁N",
+ "ight"
+ ],
+ [
+ "▁Ni",
+ "ght"
+ ],
+ [
+ "ict",
+ "ure"
+ ],
+ [
+ "sh",
+ "ire"
+ ],
+ [
+ "s",
+ "hire"
+ ],
+ [
+ "▁es",
+ "say"
+ ],
+ [
+ "▁ess",
+ "ay"
+ ],
+ [
+ "▁sup",
+ "pose"
+ ],
+ [
+ "▁supp",
+ "ose"
+ ],
+ [
+ "et",
+ "ic"
+ ],
+ [
+ "eti",
+ "c"
+ ],
+ [
+ "Ar",
+ "t"
+ ],
+ [
+ "A",
+ "rt"
+ ],
+ [
+ "ac",
+ "on"
+ ],
+ [
+ "aco",
+ "n"
+ ],
+ [
+ "a",
+ "con"
+ ],
+ [
+ "ll",
+ "a"
+ ],
+ [
+ "l",
+ "la"
+ ],
+ [
+ "word",
+ "s"
+ ],
+ [
+ "wor",
+ "ds"
+ ],
+ [
+ "w",
+ "ords"
+ ],
+ [
+ "▁compar",
+ "ison"
+ ],
+ [
+ "▁B",
+ "E"
+ ],
+ [
+ "▁",
+ "BE"
+ ],
+ [
+ "▁challeng",
+ "es"
+ ],
+ [
+ "▁challenge",
+ "s"
+ ],
+ [
+ "▁o",
+ "l"
+ ],
+ [
+ "▁",
+ "ol"
+ ],
+ [
+ "cite",
+ "p"
+ ],
+ [
+ "cit",
+ "ep"
+ ],
+ [
+ "▁F",
+ "oot"
+ ],
+ [
+ "▁Fo",
+ "ot"
+ ],
+ [
+ "▁",
+ "Foot"
+ ],
+ [
+ "▁S",
+ "uch"
+ ],
+ [
+ "▁Su",
+ "ch"
+ ],
+ [
+ "▁",
+ "Such"
+ ],
+ [
+ "▁p",
+ "apers"
+ ],
+ [
+ "▁paper",
+ "s"
+ ],
+ [
+ "▁pa",
+ "pers"
+ ],
+ [
+ "▁pap",
+ "ers"
+ ],
+ [
+ "act",
+ "iv"
+ ],
+ [
+ "qu",
+ "er"
+ ],
+ [
+ "que",
+ "r"
+ ],
+ [
+ "q",
+ "uer"
+ ],
+ [
+ "т",
+ "я"
+ ],
+ [
+ "▁Т",
+ "о"
+ ],
+ [
+ "сь",
+ "кий"
+ ],
+ [
+ "th",
+ "ur"
+ ],
+ [
+ "do",
+ "ne"
+ ],
+ [
+ "don",
+ "e"
+ ],
+ [
+ "d",
+ "one"
+ ],
+ [
+ "▁sh",
+ "ock"
+ ],
+ [
+ "▁ded",
+ "icated"
+ ],
+ [
+ "▁dedic",
+ "ated"
+ ],
+ [
+ "▁cor",
+ "respond"
+ ],
+ [
+ "▁correspon",
+ "d"
+ ],
+ [
+ "Se",
+ "cond"
+ ],
+ [
+ "Sec",
+ "ond"
+ ],
+ [
+ "▁b",
+ "ull"
+ ],
+ [
+ "▁bu",
+ "ll"
+ ],
+ [
+ "▁bul",
+ "l"
+ ],
+ [
+ "li",
+ "fe"
+ ],
+ [
+ "lif",
+ "e"
+ ],
+ [
+ "l",
+ "ife"
+ ],
+ [
+ "ind",
+ "ent"
+ ],
+ [
+ "inde",
+ "nt"
+ ],
+ [
+ "inden",
+ "t"
+ ],
+ [
+ "▁fig",
+ "ures"
+ ],
+ [
+ "▁figure",
+ "s"
+ ],
+ [
+ "▁And",
+ "rew"
+ ],
+ [
+ "▁Andre",
+ "w"
+ ],
+ [
+ "▁Andr",
+ "ew"
+ ],
+ [
+ "is",
+ "p"
+ ],
+ [
+ "i",
+ "sp"
+ ],
+ [
+ "▁fav",
+ "our"
+ ],
+ [
+ "зд",
+ "а"
+ ],
+ [
+ "з",
+ "да"
+ ],
+ [
+ "▁E",
+ "lect"
+ ],
+ [
+ "▁El",
+ "ect"
+ ],
+ [
+ "▁Ele",
+ "ct"
+ ],
+ [
+ "F",
+ "ull"
+ ],
+ [
+ "▁near",
+ "by"
+ ],
+ [
+ "▁Reg",
+ "ister"
+ ],
+ [
+ "▁",
+ "Register"
+ ],
+ [
+ "Sc",
+ "ale"
+ ],
+ [
+ "Scal",
+ "e"
+ ],
+ [
+ "ic",
+ "ations"
+ ],
+ [
+ "ication",
+ "s"
+ ],
+ [
+ "и",
+ "н"
+ ],
+ [
+ "▁A",
+ "M"
+ ],
+ [
+ "▁",
+ "AM"
+ ],
+ [
+ "pa",
+ "ir"
+ ],
+ [
+ "p",
+ "air"
+ ],
+ [
+ "▁pers",
+ "pective"
+ ],
+ [
+ "▁n",
+ "os"
+ ],
+ [
+ "▁no",
+ "s"
+ ],
+ [
+ "▁",
+ "nos"
+ ],
+ [
+ "ap",
+ "a"
+ ],
+ [
+ "a",
+ "pa"
+ ],
+ [
+ "ost",
+ "ał"
+ ],
+ [
+ "osta",
+ "ł"
+ ],
+ [
+ "▁P",
+ "ers"
+ ],
+ [
+ "▁Per",
+ "s"
+ ],
+ [
+ "▁Pe",
+ "rs"
+ ],
+ [
+ "▁",
+ "Pers"
+ ],
+ [
+ "ic",
+ "er"
+ ],
+ [
+ "ice",
+ "r"
+ ],
+ [
+ "i",
+ "cer"
+ ],
+ [
+ "▁pl",
+ "astic"
+ ],
+ [
+ "до",
+ "в"
+ ],
+ [
+ "д",
+ "ов"
+ ],
+ [
+ "ci",
+ "ples"
+ ],
+ [
+ "cipl",
+ "es"
+ ],
+ [
+ "cip",
+ "les"
+ ],
+ [
+ "z",
+ "ą"
+ ],
+ [
+ "cl",
+ "os"
+ ],
+ [
+ "c",
+ "los"
+ ],
+ [
+ "▁у",
+ "ча"
+ ],
+ [
+ "▁",
+ "Á"
+ ],
+ [
+ "pl",
+ "ugin"
+ ],
+ [
+ "plug",
+ "in"
+ ],
+ [
+ "▁an",
+ "gle"
+ ],
+ [
+ "▁ang",
+ "le"
+ ],
+ [
+ "▁angl",
+ "e"
+ ],
+ [
+ "▁",
+ "angle"
+ ],
+ [
+ "▁com",
+ "mission"
+ ],
+ [
+ "▁comm",
+ "ission"
+ ],
+ [
+ "▁fun",
+ "ds"
+ ],
+ [
+ "▁fund",
+ "s"
+ ],
+ [
+ "▁in",
+ "du"
+ ],
+ [
+ "▁ind",
+ "u"
+ ],
+ [
+ "▁d",
+ "rawn"
+ ],
+ [
+ "▁dr",
+ "awn"
+ ],
+ [
+ "▁draw",
+ "n"
+ ],
+ [
+ "á",
+ "m"
+ ],
+ [
+ "▁develop",
+ "ing"
+ ],
+ [
+ "▁seg",
+ "ment"
+ ],
+ [
+ "▁",
+ "segment"
+ ],
+ [
+ "is",
+ "me"
+ ],
+ [
+ "ism",
+ "e"
+ ],
+ [
+ "sc",
+ "r"
+ ],
+ [
+ "s",
+ "cr"
+ ],
+ [
+ "▁l",
+ "ies"
+ ],
+ [
+ "▁li",
+ "es"
+ ],
+ [
+ "▁lie",
+ "s"
+ ],
+ [
+ "▁I",
+ "L"
+ ],
+ [
+ "▁",
+ "IL"
+ ],
+ [
+ "▁a",
+ "pi"
+ ],
+ [
+ "▁ap",
+ "i"
+ ],
+ [
+ "▁",
+ "api"
+ ],
+ [
+ "Ext",
+ "ension"
+ ],
+ [
+ "▁s",
+ "cal"
+ ],
+ [
+ "▁sc",
+ "al"
+ ],
+ [
+ "▁",
+ "scal"
+ ],
+ [
+ "inst",
+ "all"
+ ],
+ [
+ "▁We",
+ "ek"
+ ],
+ [
+ "▁",
+ "Week"
+ ],
+ [
+ "▁gen",
+ "tle"
+ ],
+ [
+ "▁gent",
+ "le"
+ ],
+ [
+ "▁Canad",
+ "ian"
+ ],
+ [
+ "▁d",
+ "ialog"
+ ],
+ [
+ "▁dial",
+ "og"
+ ],
+ [
+ "▁dia",
+ "log"
+ ],
+ [
+ "▁",
+ "dialog"
+ ],
+ [
+ "▁art",
+ "icles"
+ ],
+ [
+ "▁article",
+ "s"
+ ],
+ [
+ "▁artic",
+ "les"
+ ],
+ [
+ "The",
+ "me"
+ ],
+ [
+ "Th",
+ "eme"
+ ],
+ [
+ "S",
+ "M"
+ ],
+ [
+ "▁B",
+ "ul"
+ ],
+ [
+ "▁Bu",
+ "l"
+ ],
+ [
+ "▁",
+ "Bul"
+ ],
+ [
+ "▁l",
+ "eur"
+ ],
+ [
+ "▁le",
+ "ur"
+ ],
+ [
+ "▁s",
+ "tom"
+ ],
+ [
+ "▁st",
+ "om"
+ ],
+ [
+ "▁sto",
+ "m"
+ ],
+ [
+ "Pl",
+ "ugin"
+ ],
+ [
+ "▁по",
+ "сле"
+ ],
+ [
+ "▁пос",
+ "ле"
+ ],
+ [
+ "▁st",
+ "ead"
+ ],
+ [
+ "▁ste",
+ "ad"
+ ],
+ [
+ "▁",
+ "stead"
+ ],
+ [
+ "▁",
+ "ś"
+ ],
+ [
+ "ip",
+ "her"
+ ],
+ [
+ "iph",
+ "er"
+ ],
+ [
+ "i",
+ "pher"
+ ],
+ [
+ "▁pr",
+ "ze"
+ ],
+ [
+ "▁prz",
+ "e"
+ ],
+ [
+ "▁d",
+ "raft"
+ ],
+ [
+ "▁dr",
+ "aft"
+ ],
+ [
+ "▁",
+ "draft"
+ ],
+ [
+ "bot",
+ "tom"
+ ],
+ [
+ "b",
+ "ottom"
+ ],
+ [
+ "▁{",
+ "};"
+ ],
+ [
+ "▁{}",
+ ";"
+ ],
+ [
+ "▁stay",
+ "ed"
+ ],
+ [
+ "fe",
+ "ature"
+ ],
+ [
+ "feat",
+ "ure"
+ ],
+ [
+ "▁v",
+ "ot"
+ ],
+ [
+ "▁vo",
+ "t"
+ ],
+ [
+ "▁fab",
+ "ric"
+ ],
+ [
+ "ç",
+ "a"
+ ],
+ [
+ "('",
+ "#"
+ ],
+ [
+ "re",
+ "a"
+ ],
+ [
+ "r",
+ "ea"
+ ],
+ [
+ "▁re",
+ "put"
+ ],
+ [
+ "▁rep",
+ "ut"
+ ],
+ [
+ "▁C",
+ "ir"
+ ],
+ [
+ "▁Ci",
+ "r"
+ ],
+ [
+ "▁",
+ "Cir"
+ ],
+ [
+ "▁A",
+ "L"
+ ],
+ [
+ "▁",
+ "AL"
+ ],
+ [
+ "▁assert",
+ "Equals"
+ ],
+ [
+ "▁",
+ "assertEquals"
+ ],
+ [
+ "result",
+ "s"
+ ],
+ [
+ "▁C",
+ "ross"
+ ],
+ [
+ "▁Cr",
+ "oss"
+ ],
+ [
+ "▁Cro",
+ "ss"
+ ],
+ [
+ "▁",
+ "Cross"
+ ],
+ [
+ "urs",
+ "day"
+ ],
+ [
+ "▁a",
+ "udio"
+ ],
+ [
+ "▁aud",
+ "io"
+ ],
+ [
+ "▁",
+ "audio"
+ ],
+ [
+ "▁g",
+ "ap"
+ ],
+ [
+ "▁ga",
+ "p"
+ ],
+ [
+ "▁stre",
+ "ets"
+ ],
+ [
+ "▁street",
+ "s"
+ ],
+ [
+ "▁scient",
+ "ific"
+ ],
+ [
+ "pl",
+ "atform"
+ ],
+ [
+ "▁a",
+ "uss"
+ ],
+ [
+ "▁au",
+ "ss"
+ ],
+ [
+ "▁aus",
+ "s"
+ ],
+ [
+ "▁C",
+ "ro"
+ ],
+ [
+ "▁Cr",
+ "o"
+ ],
+ [
+ "▁part",
+ "ial"
+ ],
+ [
+ "▁parti",
+ "al"
+ ],
+ [
+ "▁",
+ "partial"
+ ],
+ [
+ "un",
+ "c"
+ ],
+ [
+ "u",
+ "nc"
+ ],
+ [
+ "▁cho",
+ "ices"
+ ],
+ [
+ "▁choice",
+ "s"
+ ],
+ [
+ "▁и",
+ "ли"
+ ],
+ [
+ "pr",
+ "ed"
+ ],
+ [
+ "pre",
+ "d"
+ ],
+ [
+ "p",
+ "red"
+ ],
+ [
+ "▁he",
+ "ads"
+ ],
+ [
+ "▁head",
+ "s"
+ ],
+ [
+ "▁",
+ "heads"
+ ],
+ [
+ "ter",
+ "day"
+ ],
+ [
+ "▁N",
+ "ick"
+ ],
+ [
+ "▁Nic",
+ "k"
+ ],
+ [
+ "▁Ni",
+ "ck"
+ ],
+ [
+ "▁we",
+ "ird"
+ ],
+ [
+ "as",
+ "ant"
+ ],
+ [
+ "asa",
+ "nt"
+ ],
+ [
+ "▁represent",
+ "ed"
+ ],
+ [
+ "▁п",
+ "и"
+ ],
+ [
+ "▁",
+ "пи"
+ ],
+ [
+ "D",
+ "P"
+ ],
+ [
+ "or",
+ "ders"
+ ],
+ [
+ "ord",
+ "ers"
+ ],
+ [
+ "order",
+ "s"
+ ],
+ [
+ "cl",
+ "ock"
+ ],
+ [
+ "c",
+ "lock"
+ ],
+ [
+ "▁H",
+ "o"
+ ],
+ [
+ "ar",
+ "ters"
+ ],
+ [
+ "art",
+ "ers"
+ ],
+ [
+ "arter",
+ "s"
+ ],
+ [
+ "arte",
+ "rs"
+ ],
+ [
+ "C",
+ "md"
+ ],
+ [
+ "og",
+ "a"
+ ],
+ [
+ "o",
+ "ga"
+ ],
+ [
+ "Key",
+ "s"
+ ],
+ [
+ "Ke",
+ "ys"
+ ],
+ [
+ "Re",
+ "port"
+ ],
+ [
+ "Rep",
+ "ort"
+ ],
+ [
+ "Repo",
+ "rt"
+ ],
+ [
+ "▁V",
+ "ill"
+ ],
+ [
+ "▁Vi",
+ "ll"
+ ],
+ [
+ "▁Vil",
+ "l"
+ ],
+ [
+ "▁M",
+ "u"
+ ],
+ [
+ "▁",
+ "Mu"
+ ],
+ [
+ "▁own",
+ "ed"
+ ],
+ [
+ "▁",
+ "owned"
+ ],
+ [
+ "SU",
+ "CCESS"
+ ],
+ [
+ "▁type",
+ "of"
+ ],
+ [
+ "▁",
+ "typeof"
+ ],
+ [
+ "hd",
+ "r"
+ ],
+ [
+ "h",
+ "dr"
+ ],
+ [
+ "ua",
+ "ble"
+ ],
+ [
+ "u",
+ "able"
+ ],
+ [
+ "▁neighbor",
+ "hood"
+ ],
+ [
+ "▁A",
+ "P"
+ ],
+ [
+ "▁",
+ "AP"
+ ],
+ [
+ "▁result",
+ "ing"
+ ],
+ [
+ "▁sh",
+ "adow"
+ ],
+ [
+ "▁",
+ "shadow"
+ ],
+ [
+ "STR",
+ "ING"
+ ],
+ [
+ "▁video",
+ "s"
+ ],
+ [
+ "▁vide",
+ "os"
+ ],
+ [
+ "ле",
+ "ння"
+ ],
+ [
+ "лен",
+ "ня"
+ ],
+ [
+ "ex",
+ "pect"
+ ],
+ [
+ "exp",
+ "ect"
+ ],
+ [
+ "▁Val",
+ "ley"
+ ],
+ [
+ "▁Vall",
+ "ey"
+ ],
+ [
+ "▁g",
+ "oto"
+ ],
+ [
+ "▁go",
+ "to"
+ ],
+ [
+ "▁got",
+ "o"
+ ],
+ [
+ "▁",
+ "goto"
+ ],
+ [
+ "▁S",
+ "her"
+ ],
+ [
+ "▁She",
+ "r"
+ ],
+ [
+ "▁Sh",
+ "er"
+ ],
+ [
+ "fr",
+ "astr"
+ ],
+ [
+ "▁oper",
+ "ating"
+ ],
+ [
+ "▁opera",
+ "ting"
+ ],
+ [
+ "▁э",
+ "то"
+ ],
+ [
+ "▁License",
+ "d"
+ ],
+ [
+ "▁Lic",
+ "ensed"
+ ],
+ [
+ "Var",
+ "iable"
+ ],
+ [
+ "Vari",
+ "able"
+ ],
+ [
+ "▁P",
+ "R"
+ ],
+ [
+ "▁",
+ "PR"
+ ],
+ [
+ "▁H",
+ "ans"
+ ],
+ [
+ "▁Ha",
+ "ns"
+ ],
+ [
+ "▁Han",
+ "s"
+ ],
+ [
+ "cl",
+ "one"
+ ],
+ [
+ "▁G",
+ "esch"
+ ],
+ [
+ "▁Ge",
+ "sch"
+ ],
+ [
+ "▁Ges",
+ "ch"
+ ],
+ [
+ "▁B",
+ "and"
+ ],
+ [
+ "▁Ba",
+ "nd"
+ ],
+ [
+ "▁Ban",
+ "d"
+ ],
+ [
+ "▁",
+ "Band"
+ ],
+ [
+ "...",
+ "....."
+ ],
+ [
+ "....",
+ "...."
+ ],
+ [
+ ".....",
+ "..."
+ ],
+ [
+ "ui",
+ "ng"
+ ],
+ [
+ "u",
+ "ing"
+ ],
+ [
+ "▁hundred",
+ "s"
+ ],
+ [
+ "▁о",
+ "к"
+ ],
+ [
+ "▁emot",
+ "ional"
+ ],
+ [
+ "▁emotion",
+ "al"
+ ],
+ [
+ "▁Ind",
+ "ust"
+ ],
+ [
+ ")",
+ "+"
+ ],
+ [
+ "▁Egy",
+ "pt"
+ ],
+ [
+ "▁fr",
+ "anç"
+ ],
+ [
+ "▁",
+ "š"
+ ],
+ [
+ "▁f",
+ "asc"
+ ],
+ [
+ "▁fa",
+ "sc"
+ ],
+ [
+ "on",
+ "to"
+ ],
+ [
+ "ont",
+ "o"
+ ],
+ [
+ "▁A",
+ "dam"
+ ],
+ [
+ "▁Ad",
+ "am"
+ ],
+ [
+ "▁l",
+ "aid"
+ ],
+ [
+ "▁la",
+ "id"
+ ],
+ [
+ "▁r",
+ "ig"
+ ],
+ [
+ "▁ri",
+ "g"
+ ],
+ [
+ "▁",
+ "rig"
+ ],
+ [
+ "▁det",
+ "ailed"
+ ],
+ [
+ "▁detail",
+ "ed"
+ ],
+ [
+ "▁im",
+ "plements"
+ ],
+ [
+ "▁implement",
+ "s"
+ ],
+ [
+ "▁impl",
+ "ements"
+ ],
+ [
+ "▁univers",
+ "ity"
+ ],
+ [
+ "▁H",
+ "y"
+ ],
+ [
+ "▁",
+ "Hy"
+ ],
+ [
+ "▁g",
+ "rid"
+ ],
+ [
+ "▁gr",
+ "id"
+ ],
+ [
+ "▁gri",
+ "d"
+ ],
+ [
+ "▁",
+ "grid"
+ ],
+ [
+ "▁reg",
+ "ions"
+ ],
+ [
+ "▁region",
+ "s"
+ ],
+ [
+ "St",
+ "op"
+ ],
+ [
+ "S",
+ "top"
+ ],
+ [
+ "▁s",
+ "lot"
+ ],
+ [
+ "▁sl",
+ "ot"
+ ],
+ [
+ "▁",
+ "slot"
+ ],
+ [
+ "▁ang",
+ "ry"
+ ],
+ [
+ "▁-",
+ "="
+ ],
+ [
+ "▁wait",
+ "ed"
+ ],
+ [
+ "▁wa",
+ "ited"
+ ],
+ [
+ "Ver",
+ "t"
+ ],
+ [
+ "V",
+ "ert"
+ ],
+ [
+ "\":",
+ "\""
+ ],
+ [
+ "\"",
+ ":\""
+ ],
+ [
+ "▁e",
+ "lem"
+ ],
+ [
+ "▁el",
+ "em"
+ ],
+ [
+ "▁ele",
+ "m"
+ ],
+ [
+ "▁",
+ "elem"
+ ],
+ [
+ "▁r",
+ "ég"
+ ],
+ [
+ "▁ré",
+ "g"
+ ],
+ [
+ "ow",
+ "ed"
+ ],
+ [
+ "owe",
+ "d"
+ ],
+ [
+ "o",
+ "wed"
+ ],
+ [
+ "Mem",
+ "ber"
+ ],
+ [
+ "Me",
+ "mber"
+ ],
+ [
+ "M",
+ "ember"
+ ],
+ [
+ "▁r",
+ "atio"
+ ],
+ [
+ "▁rat",
+ "io"
+ ],
+ [
+ "▁",
+ "ratio"
+ ],
+ [
+ "is",
+ "en"
+ ],
+ [
+ "ise",
+ "n"
+ ],
+ [
+ "i",
+ "sen"
+ ],
+ [
+ "▁L",
+ "em"
+ ],
+ [
+ "▁Le",
+ "m"
+ ],
+ [
+ "ge",
+ "ry"
+ ],
+ [
+ "ger",
+ "y"
+ ],
+ [
+ "g",
+ "ery"
+ ],
+ [
+ "▁c",
+ "ream"
+ ],
+ [
+ "▁cre",
+ "am"
+ ],
+ [
+ "▁ét",
+ "ait"
+ ],
+ [
+ "▁",
+ "était"
+ ],
+ [
+ "▁g",
+ "eb"
+ ],
+ [
+ "▁ge",
+ "b"
+ ],
+ [
+ "▁",
+ "geb"
+ ],
+ [
+ "un",
+ "ique"
+ ],
+ [
+ "uni",
+ "que"
+ ],
+ [
+ "▁D",
+ "eb"
+ ],
+ [
+ "▁De",
+ "b"
+ ],
+ [
+ "▁f",
+ "actory"
+ ],
+ [
+ "▁fact",
+ "ory"
+ ],
+ [
+ "▁factor",
+ "y"
+ ],
+ [
+ "▁",
+ "factory"
+ ],
+ [
+ "ż",
+ "e"
+ ],
+ [
+ "d",
+ "ialog"
+ ],
+ [
+ "▁Con",
+ "fig"
+ ],
+ [
+ "▁Conf",
+ "ig"
+ ],
+ [
+ "▁",
+ "Config"
+ ],
+ [
+ "Sy",
+ "nc"
+ ],
+ [
+ "S",
+ "ync"
+ ],
+ [
+ "an",
+ "gers"
+ ],
+ [
+ "ang",
+ "ers"
+ ],
+ [
+ "ange",
+ "rs"
+ ],
+ [
+ "anger",
+ "s"
+ ],
+ [
+ "▁gover",
+ "ning"
+ ],
+ [
+ "▁govern",
+ "ing"
+ ],
+ [
+ "▁H",
+ "un"
+ ],
+ [
+ "▁Hu",
+ "n"
+ ],
+ [
+ "Sp",
+ "ace"
+ ],
+ [
+ "S",
+ "pace"
+ ],
+ [
+ "▁j",
+ "est"
+ ],
+ [
+ "▁je",
+ "st"
+ ],
+ [
+ "ic",
+ "ious"
+ ],
+ [
+ "ici",
+ "ous"
+ ],
+ [
+ "icio",
+ "us"
+ ],
+ [
+ "▁em",
+ "phas"
+ ],
+ [
+ "▁emp",
+ "has"
+ ],
+ [
+ "um",
+ "ps"
+ ],
+ [
+ "ump",
+ "s"
+ ],
+ [
+ "▁E",
+ "sp"
+ ],
+ [
+ "▁Es",
+ "p"
+ ],
+ [
+ "▁",
+ "Esp"
+ ],
+ [
+ "▁s",
+ "ul"
+ ],
+ [
+ "▁su",
+ "l"
+ ],
+ [
+ "▁histor",
+ "ical"
+ ],
+ [
+ "▁historic",
+ "al"
+ ],
+ [
+ "ij",
+ "a"
+ ],
+ [
+ "i",
+ "ja"
+ ],
+ [
+ "▁l",
+ "ying"
+ ],
+ [
+ "▁ly",
+ "ing"
+ ],
+ [
+ "▁",
+ "lying"
+ ],
+ [
+ "▁St",
+ "eve"
+ ],
+ [
+ "▁Ste",
+ "ve"
+ ],
+ [
+ "▁me",
+ "asures"
+ ],
+ [
+ "▁measure",
+ "s"
+ ],
+ [
+ "▁meas",
+ "ures"
+ ],
+ [
+ "os",
+ "to"
+ ],
+ [
+ "ost",
+ "o"
+ ],
+ [
+ "o",
+ "sto"
+ ],
+ [
+ "?",
+ "”"
+ ],
+ [
+ "▁p",
+ "ocket"
+ ],
+ [
+ "▁poc",
+ "ket"
+ ],
+ [
+ "▁S",
+ "at"
+ ],
+ [
+ "▁Sa",
+ "t"
+ ],
+ [
+ "▁p",
+ "itch"
+ ],
+ [
+ "▁pit",
+ "ch"
+ ],
+ [
+ "▁n",
+ "atur"
+ ],
+ [
+ "▁nat",
+ "ur"
+ ],
+ [
+ "▁hum",
+ "ans"
+ ],
+ [
+ "▁human",
+ "s"
+ ],
+ [
+ "▁Sim",
+ "on"
+ ],
+ [
+ "▁Si",
+ "mon"
+ ],
+ [
+ "ad",
+ "ores"
+ ],
+ [
+ "ado",
+ "res"
+ ],
+ [
+ "ador",
+ "es"
+ ],
+ [
+ "(\"",
+ "\\"
+ ],
+ [
+ "(",
+ "\"\\"
+ ],
+ [
+ "in",
+ "king"
+ ],
+ [
+ "ink",
+ "ing"
+ ],
+ [
+ "▁ex",
+ "pos"
+ ],
+ [
+ "▁exp",
+ "os"
+ ],
+ [
+ "mat",
+ "erial"
+ ],
+ [
+ "mate",
+ "rial"
+ ],
+ [
+ "m",
+ "aterial"
+ ],
+ [
+ "▁app",
+ "arently"
+ ],
+ [
+ "▁apparent",
+ "ly"
+ ],
+ [
+ "▁appar",
+ "ently"
+ ],
+ [
+ "▁C",
+ "amb"
+ ],
+ [
+ "▁Cam",
+ "b"
+ ],
+ [
+ "▁Ca",
+ "mb"
+ ],
+ [
+ "▁B",
+ "ox"
+ ],
+ [
+ "▁Bo",
+ "x"
+ ],
+ [
+ "▁",
+ "Box"
+ ],
+ [
+ "▁s",
+ "paces"
+ ],
+ [
+ "▁sp",
+ "aces"
+ ],
+ [
+ "▁space",
+ "s"
+ ],
+ [
+ "ex",
+ "ists"
+ ],
+ [
+ "exist",
+ "s"
+ ],
+ [
+ "▁act",
+ "ing"
+ ],
+ [
+ "▁ac",
+ "ting"
+ ],
+ [
+ "OR",
+ "Y"
+ ],
+ [
+ "зо",
+ "ва"
+ ],
+ [
+ "Go",
+ "od"
+ ],
+ [
+ "G",
+ "ood"
+ ],
+ [
+ "ien",
+ "ne"
+ ],
+ [
+ "i",
+ "enne"
+ ],
+ [
+ "▁William",
+ "s"
+ ],
+ [
+ "▁f",
+ "ruit"
+ ],
+ [
+ "▁fr",
+ "uit"
+ ],
+ [
+ "▁fru",
+ "it"
+ ],
+ [
+ "ie",
+ "ra"
+ ],
+ [
+ "ier",
+ "a"
+ ],
+ [
+ "i",
+ "era"
+ ],
+ [
+ "▁L",
+ "im"
+ ],
+ [
+ "▁Li",
+ "m"
+ ],
+ [
+ "▁",
+ "Lim"
+ ],
+ [
+ "▁t",
+ "rait"
+ ],
+ [
+ "▁tr",
+ "ait"
+ ],
+ [
+ "▁tra",
+ "it"
+ ],
+ [
+ "▁",
+ "trait"
+ ],
+ [
+ "▁art",
+ "ists"
+ ],
+ [
+ "▁artist",
+ "s"
+ ],
+ [
+ "▁ab",
+ "sor"
+ ],
+ [
+ "▁abs",
+ "or"
+ ],
+ [
+ "ra",
+ "it"
+ ],
+ [
+ "rai",
+ "t"
+ ],
+ [
+ "r",
+ "ait"
+ ],
+ [
+ "LO",
+ "AD"
+ ],
+ [
+ "▁mov",
+ "ies"
+ ],
+ [
+ "▁movie",
+ "s"
+ ],
+ [
+ "▁d",
+ "ynamic"
+ ],
+ [
+ "▁dynam",
+ "ic"
+ ],
+ [
+ "▁dyn",
+ "amic"
+ ],
+ [
+ "▁",
+ "dynamic"
+ ],
+ [
+ "as",
+ "ts"
+ ],
+ [
+ "ast",
+ "s"
+ ],
+ [
+ "a",
+ "sts"
+ ],
+ [
+ "▁In",
+ "teger"
+ ],
+ [
+ "▁",
+ "Integer"
+ ],
+ [
+ "▁sm",
+ "oke"
+ ],
+ [
+ "п",
+ "і"
+ ],
+ [
+ "an",
+ "gel"
+ ],
+ [
+ "ang",
+ "el"
+ ],
+ [
+ "ange",
+ "l"
+ ],
+ [
+ ">(",
+ "\""
+ ],
+ [
+ ">",
+ "(\""
+ ],
+ [
+ "▁in",
+ "strument"
+ ],
+ [
+ "▁instr",
+ "ument"
+ ],
+ [
+ "▁f",
+ "uel"
+ ],
+ [
+ "▁fue",
+ "l"
+ ],
+ [
+ "▁fu",
+ "el"
+ ],
+ [
+ "но",
+ "ї"
+ ],
+ [
+ "atal",
+ "ogue"
+ ],
+ [
+ "atalog",
+ "ue"
+ ],
+ [
+ "▁s",
+ "erial"
+ ],
+ [
+ "▁se",
+ "rial"
+ ],
+ [
+ "▁ser",
+ "ial"
+ ],
+ [
+ "▁",
+ "serial"
+ ],
+ [
+ "File",
+ "s"
+ ],
+ [
+ "Fil",
+ "es"
+ ],
+ [
+ "Fi",
+ "les"
+ ],
+ [
+ "F",
+ "iles"
+ ],
+ [
+ "▁bath",
+ "room"
+ ],
+ [
+ "il",
+ "o"
+ ],
+ [
+ "i",
+ "lo"
+ ],
+ [
+ "es",
+ "to"
+ ],
+ [
+ "est",
+ "o"
+ ],
+ [
+ "e",
+ "sto"
+ ],
+ [
+ "▁p",
+ "m"
+ ],
+ [
+ "▁",
+ "pm"
+ ],
+ [
+ "ent",
+ "ials"
+ ],
+ [
+ "ential",
+ "s"
+ ],
+ [
+ "enti",
+ "als"
+ ],
+ [
+ "▁On",
+ "line"
+ ],
+ [
+ "wh",
+ "ite"
+ ],
+ [
+ "▁t",
+ "ips"
+ ],
+ [
+ "▁tip",
+ "s"
+ ],
+ [
+ "▁ti",
+ "ps"
+ ],
+ [
+ "▁cap",
+ "able"
+ ],
+ [
+ "Fi",
+ "g"
+ ],
+ [
+ "F",
+ "ig"
+ ],
+ [
+ "T",
+ "V"
+ ],
+ [
+ "▁о",
+ "н"
+ ],
+ [
+ "▁",
+ "он"
+ ],
+ [
+ "k",
+ "é"
+ ],
+ [
+ "bit",
+ "r"
+ ],
+ [
+ "bi",
+ "tr"
+ ],
+ [
+ "b",
+ "itr"
+ ],
+ [
+ "Map",
+ "ping"
+ ],
+ [
+ "Ma",
+ "pping"
+ ],
+ [
+ "M",
+ "apping"
+ ],
+ [
+ "▁t",
+ "ak"
+ ],
+ [
+ "▁ta",
+ "k"
+ ],
+ [
+ "ю",
+ "щи"
+ ],
+ [
+ "в",
+ "ля"
+ ],
+ [
+ ")\"",
+ ","
+ ],
+ [
+ ")",
+ "\","
+ ],
+ [
+ "▁K",
+ "arl"
+ ],
+ [
+ "▁Kar",
+ "l"
+ ],
+ [
+ "▁Ka",
+ "rl"
+ ],
+ [
+ "▁H",
+ "uman"
+ ],
+ [
+ "▁Hu",
+ "man"
+ ],
+ [
+ "▁Hum",
+ "an"
+ ],
+ [
+ "▁P",
+ "ot"
+ ],
+ [
+ "▁Po",
+ "t"
+ ],
+ [
+ "▁rep",
+ "resents"
+ ],
+ [
+ "▁represent",
+ "s"
+ ],
+ [
+ "▁cons",
+ "istent"
+ ],
+ [
+ "▁consist",
+ "ent"
+ ],
+ [
+ "_",
+ "("
+ ],
+ [
+ "we",
+ "n"
+ ],
+ [
+ "w",
+ "en"
+ ],
+ [
+ "▁R",
+ "ose"
+ ],
+ [
+ "▁Ro",
+ "se"
+ ],
+ [
+ "▁Ros",
+ "e"
+ ],
+ [
+ "la",
+ "w"
+ ],
+ [
+ "l",
+ "aw"
+ ],
+ [
+ "▁F",
+ "ROM"
+ ],
+ [
+ "▁FR",
+ "OM"
+ ],
+ [
+ "▁",
+ "FROM"
+ ],
+ [
+ "▁beg",
+ "ins"
+ ],
+ [
+ "▁begin",
+ "s"
+ ],
+ [
+ "▁e",
+ "dit"
+ ],
+ [
+ "▁ed",
+ "it"
+ ],
+ [
+ "▁",
+ "edit"
+ ],
+ [
+ "▁mount",
+ "ain"
+ ],
+ [
+ "▁ch",
+ "apter"
+ ],
+ [
+ "▁chap",
+ "ter"
+ ],
+ [
+ "▁wonder",
+ "ed"
+ ],
+ [
+ "▁indust",
+ "rial"
+ ],
+ [
+ "▁M",
+ "ajor"
+ ],
+ [
+ "▁Ma",
+ "jor"
+ ],
+ [
+ "▁Maj",
+ "or"
+ ],
+ [
+ "▁g",
+ "es"
+ ],
+ [
+ "▁ge",
+ "s"
+ ],
+ [
+ "▁",
+ "ges"
+ ],
+ [
+ "▁direct",
+ "ed"
+ ],
+ [
+ "▁dire",
+ "cted"
+ ],
+ [
+ "er",
+ "os"
+ ],
+ [
+ "ero",
+ "s"
+ ],
+ [
+ "e",
+ "ros"
+ ],
+ [
+ "▁W",
+ "ild"
+ ],
+ [
+ "▁Wil",
+ "d"
+ ],
+ [
+ "▁Wi",
+ "ld"
+ ],
+ [
+ "li",
+ "ament"
+ ],
+ [
+ "lia",
+ "ment"
+ ],
+ [
+ "Bo",
+ "ok"
+ ],
+ [
+ "B",
+ "ook"
+ ],
+ [
+ "user",
+ "name"
+ ],
+ [
+ "ho",
+ "t"
+ ],
+ [
+ "h",
+ "ot"
+ ],
+ [
+ "▁n",
+ "am"
+ ],
+ [
+ "▁na",
+ "m"
+ ],
+ [
+ "▁",
+ "nam"
+ ],
+ [
+ "▁le",
+ "ague"
+ ],
+ [
+ "br",
+ "a"
+ ],
+ [
+ "b",
+ "ra"
+ ],
+ [
+ "ко",
+ "н"
+ ],
+ [
+ "к",
+ "он"
+ ],
+ [
+ "▁T",
+ "al"
+ ],
+ [
+ "▁Ta",
+ "l"
+ ],
+ [
+ "▁В",
+ "а"
+ ],
+ [
+ "▁ex",
+ "ports"
+ ],
+ [
+ "▁exp",
+ "orts"
+ ],
+ [
+ "▁export",
+ "s"
+ ],
+ [
+ "▁",
+ "exports"
+ ],
+ [
+ "(",
+ "@"
+ ],
+ [
+ "▁sh",
+ "aring"
+ ],
+ [
+ "▁shar",
+ "ing"
+ ],
+ [
+ "▁sha",
+ "ring"
+ ],
+ [
+ "▁T",
+ "ro"
+ ],
+ [
+ "▁Tr",
+ "o"
+ ],
+ [
+ "ś",
+ "ć"
+ ],
+ [
+ "ues",
+ "day"
+ ],
+ [
+ "yl",
+ "v"
+ ],
+ [
+ "y",
+ "lv"
+ ],
+ [
+ "▁gu",
+ "itar"
+ ],
+ [
+ "el",
+ "en"
+ ],
+ [
+ "ele",
+ "n"
+ ],
+ [
+ "e",
+ "len"
+ ],
+ [
+ "Se",
+ "lection"
+ ],
+ [
+ "Select",
+ "ion"
+ ],
+ [
+ "S",
+ "election"
+ ],
+ [
+ "▁conf",
+ "ident"
+ ],
+ [
+ "ry",
+ "pto"
+ ],
+ [
+ "rypt",
+ "o"
+ ],
+ [
+ "▁h",
+ "ors"
+ ],
+ [
+ "▁hor",
+ "s"
+ ],
+ [
+ "▁ho",
+ "rs"
+ ],
+ [
+ "ed",
+ "itor"
+ ],
+ [
+ "edit",
+ "or"
+ ],
+ [
+ "edi",
+ "tor"
+ ],
+ [
+ "▁should",
+ "ers"
+ ],
+ [
+ "▁shoulder",
+ "s"
+ ],
+ [
+ "get",
+ "Name"
+ ],
+ [
+ "en",
+ "cing"
+ ],
+ [
+ "enc",
+ "ing"
+ ],
+ [
+ "enci",
+ "ng"
+ ],
+ [
+ "SE",
+ "LECT"
+ ],
+ [
+ "SEL",
+ "ECT"
+ ],
+ [
+ "в",
+ "ши"
+ ],
+ [
+ "▁kind",
+ "s"
+ ],
+ [
+ "▁kin",
+ "ds"
+ ],
+ [
+ "▁W",
+ "el"
+ ],
+ [
+ "▁We",
+ "l"
+ ],
+ [
+ "▁pur",
+ "poses"
+ ],
+ [
+ "▁purpose",
+ "s"
+ ],
+ [
+ "Mat",
+ "rix"
+ ],
+ [
+ "in",
+ "valid"
+ ],
+ [
+ "▁own",
+ "ers"
+ ],
+ [
+ "▁owner",
+ "s"
+ ],
+ [
+ "▁",
+ "owners"
+ ],
+ [
+ "▁Rec",
+ "ords"
+ ],
+ [
+ "▁Record",
+ "s"
+ ],
+ [
+ "▁",
+ "Records"
+ ],
+ [
+ "▁Pro",
+ "cess"
+ ],
+ [
+ "▁",
+ "Process"
+ ],
+ [
+ "▁c",
+ "hat"
+ ],
+ [
+ "▁ch",
+ "at"
+ ],
+ [
+ "▁cha",
+ "t"
+ ],
+ [
+ "▁",
+ "chat"
+ ],
+ [
+ "▁D",
+ "or"
+ ],
+ [
+ "▁Do",
+ "r"
+ ],
+ [
+ "▁b",
+ "in"
+ ],
+ [
+ "▁bi",
+ "n"
+ ],
+ [
+ "▁",
+ "bin"
+ ],
+ [
+ "re",
+ "dit"
+ ],
+ [
+ "red",
+ "it"
+ ],
+ [
+ "r",
+ "edit"
+ ],
+ [
+ "oi",
+ "re"
+ ],
+ [
+ "oir",
+ "e"
+ ],
+ [
+ "o",
+ "ire"
+ ],
+ [
+ "▁T",
+ "otal"
+ ],
+ [
+ "▁To",
+ "tal"
+ ],
+ [
+ "▁Tot",
+ "al"
+ ],
+ [
+ "▁",
+ "Total"
+ ],
+ [
+ "▁F",
+ "amily"
+ ],
+ [
+ "▁Famil",
+ "y"
+ ],
+ [
+ "▁",
+ "Family"
+ ],
+ [
+ "AR",
+ "Y"
+ ],
+ [
+ "▁b",
+ "read"
+ ],
+ [
+ "▁br",
+ "ead"
+ ],
+ [
+ "▁bre",
+ "ad"
+ ],
+ [
+ "▁",
+ "bread"
+ ],
+ [
+ "▁com",
+ "pre"
+ ],
+ [
+ "▁comp",
+ "re"
+ ],
+ [
+ "▁compr",
+ "e"
+ ],
+ [
+ "▁sh",
+ "oes"
+ ],
+ [
+ "▁shoe",
+ "s"
+ ],
+ [
+ "▁r",
+ "az"
+ ],
+ [
+ "▁ra",
+ "z"
+ ],
+ [
+ "▁",
+ "raz"
+ ],
+ [
+ "▁tr",
+ "ace"
+ ],
+ [
+ "▁tra",
+ "ce"
+ ],
+ [
+ "▁",
+ "trace"
+ ],
+ [
+ "ne",
+ "j"
+ ],
+ [
+ "n",
+ "ej"
+ ],
+ [
+ "or",
+ "ted"
+ ],
+ [
+ "ort",
+ "ed"
+ ],
+ [
+ "orte",
+ "d"
+ ],
+ [
+ "h",
+ "n"
+ ],
+ [
+ "▁pro",
+ "cedure"
+ ],
+ [
+ "▁proced",
+ "ure"
+ ],
+ [
+ "pro",
+ "perties"
+ ],
+ [
+ "pl",
+ "ier"
+ ],
+ [
+ "▁h",
+ "ero"
+ ],
+ [
+ "▁he",
+ "ro"
+ ],
+ [
+ "▁her",
+ "o"
+ ],
+ [
+ "▁",
+ "hero"
+ ],
+ [
+ "pan",
+ "el"
+ ],
+ [
+ "pa",
+ "nel"
+ ],
+ [
+ "p",
+ "anel"
+ ],
+ [
+ "▁mark",
+ "ed"
+ ],
+ [
+ "▁mar",
+ "ked"
+ ],
+ [
+ "▁wor",
+ "ried"
+ ],
+ [
+ "\\",
+ "|"
+ ],
+ [
+ "pt",
+ "s"
+ ],
+ [
+ "p",
+ "ts"
+ ],
+ [
+ "▁S",
+ "upport"
+ ],
+ [
+ "▁Sup",
+ "port"
+ ],
+ [
+ "▁Supp",
+ "ort"
+ ],
+ [
+ "▁",
+ "Support"
+ ],
+ [
+ "▁ser",
+ "ving"
+ ],
+ [
+ "▁serv",
+ "ing"
+ ],
+ [
+ "F",
+ "ail"
+ ],
+ [
+ "▁dis",
+ "appoint"
+ ],
+ [
+ "▁Sc",
+ "ot"
+ ],
+ [
+ "▁ple",
+ "asure"
+ ],
+ [
+ "▁j",
+ "udge"
+ ],
+ [
+ "▁jud",
+ "ge"
+ ],
+ [
+ "▁judg",
+ "e"
+ ],
+ [
+ "ze",
+ "ich"
+ ],
+ [
+ "▁for",
+ "ever"
+ ],
+ [
+ "▁fore",
+ "ver"
+ ],
+ [
+ "▁Ze",
+ "it"
+ ],
+ [
+ "uo",
+ "us"
+ ],
+ [
+ "u",
+ "ous"
+ ],
+ [
+ "in",
+ "ent"
+ ],
+ [
+ "ine",
+ "nt"
+ ],
+ [
+ "inen",
+ "t"
+ ],
+ [
+ "i",
+ "nent"
+ ],
+ [
+ "▁d",
+ "w"
+ ],
+ [
+ "▁",
+ "dw"
+ ],
+ [
+ "▁w",
+ "aren"
+ ],
+ [
+ "▁war",
+ "en"
+ ],
+ [
+ "▁wa",
+ "ren"
+ ],
+ [
+ "▁ware",
+ "n"
+ ],
+ [
+ "▁fl",
+ "ash"
+ ],
+ [
+ "▁",
+ "flash"
+ ],
+ [
+ "▁tro",
+ "ops"
+ ],
+ [
+ "▁dr",
+ "ugs"
+ ],
+ [
+ "▁dru",
+ "gs"
+ ],
+ [
+ "▁drug",
+ "s"
+ ],
+ [
+ "▁d",
+ "iam"
+ ],
+ [
+ "▁di",
+ "am"
+ ],
+ [
+ "▁dia",
+ "m"
+ ],
+ [
+ ".",
+ "~"
+ ],
+ [
+ "im",
+ "p"
+ ],
+ [
+ "i",
+ "mp"
+ ],
+ [
+ "in",
+ "ned"
+ ],
+ [
+ "inn",
+ "ed"
+ ],
+ [
+ "▁E",
+ "V"
+ ],
+ [
+ "▁",
+ "EV"
+ ],
+ [
+ "St",
+ "ruct"
+ ],
+ [
+ "Str",
+ "uct"
+ ],
+ [
+ "▁just",
+ "ice"
+ ],
+ [
+ "▁offic",
+ "ials"
+ ],
+ [
+ "▁official",
+ "s"
+ ],
+ [
+ "ff",
+ "ff"
+ ],
+ [
+ "fff",
+ "f"
+ ],
+ [
+ "f",
+ "fff"
+ ],
+ [
+ "▁Com",
+ "mon"
+ ],
+ [
+ "▁Comm",
+ "on"
+ ],
+ [
+ "▁",
+ "Common"
+ ],
+ [
+ "▁C",
+ "at"
+ ],
+ [
+ "▁Ca",
+ "t"
+ ],
+ [
+ "▁",
+ "Cat"
+ ],
+ [
+ "▁tom",
+ "orrow"
+ ],
+ [
+ "▁é",
+ "l"
+ ],
+ [
+ "▁",
+ "él"
+ ],
+ [
+ "Text",
+ "ure"
+ ],
+ [
+ "Te",
+ "xture"
+ ],
+ [
+ "qp",
+ "oint"
+ ],
+ [
+ "q",
+ "point"
+ ],
+ [
+ "▁F",
+ "ried"
+ ],
+ [
+ "▁Fr",
+ "ied"
+ ],
+ [
+ "▁T",
+ "erm"
+ ],
+ [
+ "▁Te",
+ "rm"
+ ],
+ [
+ "▁Ter",
+ "m"
+ ],
+ [
+ "▁",
+ "Term"
+ ],
+ [
+ "pgf",
+ "qpoint"
+ ],
+ [
+ "▁n",
+ "em"
+ ],
+ [
+ "▁ne",
+ "m"
+ ],
+ [
+ "▁",
+ "nem"
+ ],
+ [
+ "no",
+ "rm"
+ ],
+ [
+ "nor",
+ "m"
+ ],
+ [
+ "n",
+ "orm"
+ ],
+ [
+ "▁hard",
+ "ly"
+ ],
+ [
+ "od",
+ "a"
+ ],
+ [
+ "o",
+ "da"
+ ],
+ [
+ "ze",
+ "ta"
+ ],
+ [
+ "zet",
+ "a"
+ ],
+ [
+ "z",
+ "eta"
+ ],
+ [
+ "em",
+ "ic"
+ ],
+ [
+ "emi",
+ "c"
+ ],
+ [
+ "e",
+ "mic"
+ ],
+ [
+ "▁по",
+ "лу"
+ ],
+ [
+ "▁пол",
+ "у"
+ ],
+ [
+ "▁lo",
+ "aded"
+ ],
+ [
+ "▁load",
+ "ed"
+ ],
+ [
+ "▁",
+ "loaded"
+ ],
+ [
+ "ke",
+ "s"
+ ],
+ [
+ "k",
+ "es"
+ ],
+ [
+ "ci",
+ "ó"
+ ],
+ [
+ "c",
+ "ió"
+ ],
+ [
+ "▁f",
+ "ool"
+ ],
+ [
+ "▁fo",
+ "ol"
+ ],
+ [
+ "▁foo",
+ "l"
+ ],
+ [
+ "▁t",
+ "rick"
+ ],
+ [
+ "▁tr",
+ "ick"
+ ],
+ [
+ "▁tri",
+ "ck"
+ ],
+ [
+ "▁d",
+ "st"
+ ],
+ [
+ "▁ds",
+ "t"
+ ],
+ [
+ "▁",
+ "dst"
+ ],
+ [
+ "Fin",
+ "d"
+ ],
+ [
+ "Fi",
+ "nd"
+ ],
+ [
+ "F",
+ "ind"
+ ],
+ [
+ "▁в",
+ "се"
+ ],
+ [
+ "}}",
+ ","
+ ],
+ [
+ "}",
+ "},"
+ ],
+ [
+ "▁frame",
+ "work"
+ ],
+ [
+ "▁",
+ "framework"
+ ],
+ [
+ "▁mer",
+ "ely"
+ ],
+ [
+ "▁mere",
+ "ly"
+ ],
+ [
+ "▁un",
+ "ion"
+ ],
+ [
+ "▁",
+ "union"
+ ],
+ [
+ "▁Ed",
+ "ward"
+ ],
+ [
+ "ri",
+ "f"
+ ],
+ [
+ "r",
+ "if"
+ ],
+ [
+ "Fl",
+ "ag"
+ ],
+ [
+ "F",
+ "lag"
+ ],
+ [
+ "▁cris",
+ "is"
+ ],
+ [
+ "▁cri",
+ "sis"
+ ],
+ [
+ "▁fin",
+ "ite"
+ ],
+ [
+ "▁",
+ "finite"
+ ],
+ [
+ "▁l",
+ "ol"
+ ],
+ [
+ "▁lo",
+ "l"
+ ],
+ [
+ "▁K",
+ "im"
+ ],
+ [
+ "▁Ki",
+ "m"
+ ],
+ [
+ "на",
+ "та"
+ ],
+ [
+ "sin",
+ "ce"
+ ],
+ [
+ "s",
+ "ince"
+ ],
+ [
+ "▁com",
+ "pat"
+ ],
+ [
+ "▁comp",
+ "at"
+ ],
+ [
+ "▁",
+ "compat"
+ ],
+ [
+ "▁p",
+ "ert"
+ ],
+ [
+ "▁per",
+ "t"
+ ],
+ [
+ "▁pe",
+ "rt"
+ ],
+ [
+ "▁",
+ "pert"
+ ],
+ [
+ "ib",
+ "ilities"
+ ],
+ [
+ "ibil",
+ "ities"
+ ],
+ [
+ "▁tamb",
+ "ién"
+ ],
+ [
+ "ib",
+ "li"
+ ],
+ [
+ "▁t",
+ "een"
+ ],
+ [
+ "▁te",
+ "en"
+ ],
+ [
+ "▁",
+ "teen"
+ ],
+ [
+ "▁sym",
+ "pt"
+ ],
+ [
+ "or",
+ "al"
+ ],
+ [
+ "ora",
+ "l"
+ ],
+ [
+ "o",
+ "ral"
+ ],
+ [
+ "de",
+ "rs"
+ ],
+ [
+ "der",
+ "s"
+ ],
+ [
+ "d",
+ "ers"
+ ],
+ [
+ "ot",
+ "te"
+ ],
+ [
+ "ott",
+ "e"
+ ],
+ [
+ "п",
+ "ри"
+ ],
+ [
+ "▁J",
+ "ane"
+ ],
+ [
+ "▁Jan",
+ "e"
+ ],
+ [
+ "▁Ja",
+ "ne"
+ ],
+ [
+ "▁original",
+ "ly"
+ ],
+ [
+ "▁origin",
+ "ally"
+ ],
+ [
+ "▁thro",
+ "at"
+ ],
+ [
+ "ma",
+ "g"
+ ],
+ [
+ "m",
+ "ag"
+ ],
+ [
+ "su",
+ "p"
+ ],
+ [
+ "s",
+ "up"
+ ],
+ [
+ "un",
+ "i"
+ ],
+ [
+ "u",
+ "ni"
+ ],
+ [
+ "$",
+ "$"
+ ],
+ [
+ "▁L",
+ "ibrary"
+ ],
+ [
+ "▁",
+ "Library"
+ ],
+ [
+ "▁att",
+ "acks"
+ ],
+ [
+ "▁attack",
+ "s"
+ ],
+ [
+ "in",
+ "gen"
+ ],
+ [
+ "ing",
+ "en"
+ ],
+ [
+ "inge",
+ "n"
+ ],
+ [
+ "('",
+ "/"
+ ],
+ [
+ "▁h",
+ "es"
+ ],
+ [
+ "▁he",
+ "s"
+ ],
+ [
+ "▁",
+ "hes"
+ ],
+ [
+ "co",
+ "in"
+ ],
+ [
+ "c",
+ "oin"
+ ],
+ [
+ "oun",
+ "ce"
+ ],
+ [
+ "▁Academ",
+ "y"
+ ],
+ [
+ "MOD",
+ "ULE"
+ ],
+ [
+ "is",
+ "ms"
+ ],
+ [
+ "ism",
+ "s"
+ ],
+ [
+ "▁A",
+ "dv"
+ ],
+ [
+ "▁Ad",
+ "v"
+ ],
+ [
+ "▁",
+ "Adv"
+ ],
+ [
+ "▁B",
+ "ol"
+ ],
+ [
+ "▁Bo",
+ "l"
+ ],
+ [
+ "▁inc",
+ "ident"
+ ],
+ [
+ ")^",
+ "{"
+ ],
+ [
+ ")",
+ "^{"
+ ],
+ [
+ "▁b",
+ "ij"
+ ],
+ [
+ "▁bi",
+ "j"
+ ],
+ [
+ "▁R",
+ "ome"
+ ],
+ [
+ "▁Rom",
+ "e"
+ ],
+ [
+ "▁Ro",
+ "me"
+ ],
+ [
+ "▁It",
+ "aly"
+ ],
+ [
+ "▁Ital",
+ "y"
+ ],
+ [
+ "ev",
+ "ents"
+ ],
+ [
+ "event",
+ "s"
+ ],
+ [
+ "even",
+ "ts"
+ ],
+ [
+ "▁F",
+ "ern"
+ ],
+ [
+ "▁Fe",
+ "rn"
+ ],
+ [
+ "▁Fer",
+ "n"
+ ],
+ [
+ "▁b",
+ "er"
+ ],
+ [
+ "▁be",
+ "r"
+ ],
+ [
+ "▁",
+ "ber"
+ ],
+ [
+ "▁sil",
+ "ent"
+ ],
+ [
+ "▁p",
+ "ier"
+ ],
+ [
+ "▁pie",
+ "r"
+ ],
+ [
+ "▁pi",
+ "er"
+ ],
+ [
+ "▁Y",
+ "O"
+ ],
+ [
+ "▁pl",
+ "ain"
+ ],
+ [
+ "▁",
+ "plain"
+ ],
+ [
+ "B",
+ "as"
+ ],
+ [
+ "▁p",
+ "ill"
+ ],
+ [
+ "▁pi",
+ "ll"
+ ],
+ [
+ "▁pil",
+ "l"
+ ],
+ [
+ "ra",
+ "se"
+ ],
+ [
+ "ras",
+ "e"
+ ],
+ [
+ "r",
+ "ase"
+ ],
+ [
+ "▁car",
+ "rying"
+ ],
+ [
+ "▁carry",
+ "ing"
+ ],
+ [
+ "▁re",
+ "sp"
+ ],
+ [
+ "▁r",
+ "esp"
+ ],
+ [
+ "▁res",
+ "p"
+ ],
+ [
+ "▁",
+ "resp"
+ ],
+ [
+ "ну",
+ "ю"
+ ],
+ [
+ "▁typ",
+ "ical"
+ ],
+ [
+ "Wrap",
+ "per"
+ ],
+ [
+ "W",
+ "rapper"
+ ],
+ [
+ "▁g",
+ "au"
+ ],
+ [
+ "▁ga",
+ "u"
+ ],
+ [
+ "▁chem",
+ "ical"
+ ],
+ [
+ "▁h",
+ "al"
+ ],
+ [
+ "▁ha",
+ "l"
+ ],
+ [
+ "▁",
+ "hal"
+ ],
+ [
+ "th",
+ "row"
+ ],
+ [
+ "Cl",
+ "uster"
+ ],
+ [
+ "▁G",
+ "ab"
+ ],
+ [
+ "▁Ga",
+ "b"
+ ],
+ [
+ "▁G",
+ "irl"
+ ],
+ [
+ "▁Gi",
+ "rl"
+ ],
+ [
+ "▁Gir",
+ "l"
+ ],
+ [
+ "qu",
+ "ir"
+ ],
+ [
+ "▁A",
+ "rg"
+ ],
+ [
+ "▁Ar",
+ "g"
+ ],
+ [
+ "▁",
+ "Arg"
+ ],
+ [
+ "▁rel",
+ "ief"
+ ],
+ [
+ "▁relie",
+ "f"
+ ],
+ [
+ "▁reli",
+ "ef"
+ ],
+ [
+ "▁В",
+ "е"
+ ],
+ [
+ "d",
+ "m"
+ ],
+ [
+ "▁fr",
+ "ustr"
+ ],
+ [
+ "▁fru",
+ "str"
+ ],
+ [
+ "\\",
+ "%"
+ ],
+ [
+ "▁st",
+ "ores"
+ ],
+ [
+ "▁store",
+ "s"
+ ],
+ [
+ "▁stor",
+ "es"
+ ],
+ [
+ "▁sto",
+ "res"
+ ],
+ [
+ "▁bott",
+ "le"
+ ],
+ [
+ "▁bot",
+ "tle"
+ ],
+ [
+ "▁L",
+ "ew"
+ ],
+ [
+ "▁Le",
+ "w"
+ ],
+ [
+ "tw",
+ "o"
+ ],
+ [
+ "t",
+ "wo"
+ ],
+ [
+ "st",
+ "ad"
+ ],
+ [
+ "sta",
+ "d"
+ ],
+ [
+ "▁che",
+ "ek"
+ ],
+ [
+ "▁concern",
+ "s"
+ ],
+ [
+ "▁concer",
+ "ns"
+ ],
+ [
+ "▁help",
+ "ful"
+ ],
+ [
+ "▁co",
+ "verage"
+ ],
+ [
+ "▁cover",
+ "age"
+ ],
+ [
+ "is",
+ "i"
+ ],
+ [
+ "i",
+ "si"
+ ],
+ [
+ "AD",
+ "D"
+ ],
+ [
+ "A",
+ "DD"
+ ],
+ [
+ "as",
+ "ync"
+ ],
+ [
+ "asy",
+ "nc"
+ ],
+ [
+ "a",
+ "sync"
+ ],
+ [
+ "▁approxim",
+ "ately"
+ ],
+ [
+ "▁approx",
+ "imately"
+ ],
+ [
+ "▁approximate",
+ "ly"
+ ],
+ [
+ "if",
+ "fer"
+ ],
+ [
+ "iff",
+ "er"
+ ],
+ [
+ "iffe",
+ "r"
+ ],
+ [
+ "ho",
+ "ok"
+ ],
+ [
+ "h",
+ "ook"
+ ],
+ [
+ "▁e",
+ "num"
+ ],
+ [
+ "▁en",
+ "um"
+ ],
+ [
+ "▁",
+ "enum"
+ ],
+ [
+ "ov",
+ "á"
+ ],
+ [
+ "o",
+ "vá"
+ ],
+ [
+ "▁e",
+ "vil"
+ ],
+ [
+ "▁ev",
+ "il"
+ ],
+ [
+ "▁const",
+ "antly"
+ ],
+ [
+ "▁constant",
+ "ly"
+ ],
+ [
+ "ap",
+ "ply"
+ ],
+ [
+ "app",
+ "ly"
+ ],
+ [
+ "▁si",
+ "è"
+ ],
+ [
+ "▁pract",
+ "ices"
+ ],
+ [
+ "▁practice",
+ "s"
+ ],
+ [
+ "▁te",
+ "achers"
+ ],
+ [
+ "▁teach",
+ "ers"
+ ],
+ [
+ "▁teacher",
+ "s"
+ ],
+ [
+ "▁S",
+ "n"
+ ],
+ [
+ "▁",
+ "Sn"
+ ],
+ [
+ "▁A",
+ "wards"
+ ],
+ [
+ "▁Award",
+ "s"
+ ],
+ [
+ "▁Aw",
+ "ards"
+ ],
+ [
+ "▁sub",
+ "stant"
+ ],
+ [
+ "▁subst",
+ "ant"
+ ],
+ [
+ "▁$",
+ "."
+ ],
+ [
+ "▁",
+ "$."
+ ],
+ [
+ "d",
+ "k"
+ ],
+ [
+ "▁m",
+ "ob"
+ ],
+ [
+ "▁mo",
+ "b"
+ ],
+ [
+ "▁",
+ "mob"
+ ],
+ [
+ "▁ing",
+ "red"
+ ],
+ [
+ "ve",
+ "re"
+ ],
+ [
+ "ver",
+ "e"
+ ],
+ [
+ "v",
+ "ere"
+ ],
+ [
+ "Mult",
+ "i"
+ ],
+ [
+ "пе",
+ "р"
+ ],
+ [
+ "п",
+ "ер"
+ ],
+ [
+ "st",
+ "al"
+ ],
+ [
+ "sta",
+ "l"
+ ],
+ [
+ "s",
+ "tal"
+ ],
+ [
+ "ya",
+ "rd"
+ ],
+ [
+ "yar",
+ "d"
+ ],
+ [
+ "y",
+ "ard"
+ ],
+ [
+ "requ",
+ "ired"
+ ],
+ [
+ "require",
+ "d"
+ ],
+ [
+ "ve",
+ "ment"
+ ],
+ [
+ "v",
+ "ement"
+ ],
+ [
+ "▁int",
+ "elligence"
+ ],
+ [
+ "▁intellig",
+ "ence"
+ ],
+ [
+ "▁th",
+ "inks"
+ ],
+ [
+ "▁think",
+ "s"
+ ],
+ [
+ "▁thin",
+ "ks"
+ ],
+ [
+ "▁person",
+ "ally"
+ ],
+ [
+ "▁personal",
+ "ly"
+ ],
+ [
+ "▁tr",
+ "ained"
+ ],
+ [
+ "▁tra",
+ "ined"
+ ],
+ [
+ "▁train",
+ "ed"
+ ],
+ [
+ "▁",
+ "trained"
+ ],
+ [
+ "or",
+ "ney"
+ ],
+ [
+ "orn",
+ "ey"
+ ],
+ [
+ "orne",
+ "y"
+ ],
+ [
+ ")",
+ ""
+ ],
+ [
+ "gg",
+ "ed"
+ ],
+ [
+ "g",
+ "ged"
+ ],
+ [
+ "E",
+ "INVAL"
+ ],
+ [
+ "ar",
+ "na"
+ ],
+ [
+ "arn",
+ "a"
+ ],
+ [
+ "▁Ham",
+ "ilton"
+ ],
+ [
+ "mer",
+ "ce"
+ ],
+ [
+ "ek",
+ "t"
+ ],
+ [
+ "e",
+ "kt"
+ ],
+ [
+ "O",
+ "F"
+ ],
+ [
+ ")",
+ "["
+ ],
+ [
+ "ru",
+ "g"
+ ],
+ [
+ "r",
+ "ug"
+ ],
+ [
+ "ic",
+ "ión"
+ ],
+ [
+ "ici",
+ "ón"
+ ],
+ [
+ "ició",
+ "n"
+ ],
+ [
+ "i",
+ "ción"
+ ],
+ [
+ "▁sur",
+ "vey"
+ ],
+ [
+ "▁surv",
+ "ey"
+ ],
+ [
+ "▁surve",
+ "y"
+ ],
+ [
+ "nes",
+ "day"
+ ],
+ [
+ "▁p",
+ "ag"
+ ],
+ [
+ "▁pa",
+ "g"
+ ],
+ [
+ "▁",
+ "pag"
+ ],
+ [
+ "▁bound",
+ "ary"
+ ],
+ [
+ "▁quant",
+ "um"
+ ],
+ [
+ "▁draw",
+ "ing"
+ ],
+ [
+ "▁vol",
+ "unte"
+ ],
+ [
+ "▁volunt",
+ "e"
+ ],
+ [
+ "▁W",
+ "ord"
+ ],
+ [
+ "▁Wo",
+ "rd"
+ ],
+ [
+ "▁Wor",
+ "d"
+ ],
+ [
+ "▁",
+ "Word"
+ ],
+ [
+ "sk",
+ "y"
+ ],
+ [
+ "s",
+ "ky"
+ ],
+ [
+ "▁G",
+ "reg"
+ ],
+ [
+ "▁Gr",
+ "eg"
+ ],
+ [
+ "▁Gre",
+ "g"
+ ],
+ [
+ "co",
+ "ll"
+ ],
+ [
+ "col",
+ "l"
+ ],
+ [
+ "c",
+ "oll"
+ ],
+ [
+ "hi",
+ "de"
+ ],
+ [
+ "hid",
+ "e"
+ ],
+ [
+ "h",
+ "ide"
+ ],
+ [
+ "▁sw",
+ "im"
+ ],
+ [
+ "▁reve",
+ "aled"
+ ],
+ [
+ "▁reveal",
+ "ed"
+ ],
+ [
+ "ad",
+ "v"
+ ],
+ [
+ "a",
+ "dv"
+ ],
+ [
+ "д",
+ "я"
+ ],
+ [
+ ".\"",
+ ");"
+ ],
+ [
+ ".\")",
+ ";"
+ ],
+ [
+ ".",
+ "\");"
+ ],
+ [
+ "▁ex",
+ "plan"
+ ],
+ [
+ "▁expl",
+ "an"
+ ],
+ [
+ "▁exp",
+ "lan"
+ ],
+ [
+ "▁Cur",
+ "rent"
+ ],
+ [
+ "▁",
+ "Current"
+ ],
+ [
+ "▁got",
+ "ten"
+ ],
+ [
+ "▁f",
+ "alling"
+ ],
+ [
+ "▁fall",
+ "ing"
+ ],
+ [
+ "▁fal",
+ "ling"
+ ],
+ [
+ "▁cont",
+ "ained"
+ ],
+ [
+ "▁contain",
+ "ed"
+ ],
+ [
+ "UN",
+ "D"
+ ],
+ [
+ "U",
+ "ND"
+ ],
+ [
+ "▁Sh",
+ "ould"
+ ],
+ [
+ "▁",
+ "Should"
+ ],
+ [
+ "▁k",
+ "illing"
+ ],
+ [
+ "▁kill",
+ "ing"
+ ],
+ [
+ "▁kil",
+ "ling"
+ ],
+ [
+ "▁aspect",
+ "s"
+ ],
+ [
+ "ic",
+ "ted"
+ ],
+ [
+ "ict",
+ "ed"
+ ],
+ [
+ "i",
+ "cted"
+ ],
+ [
+ "▁P",
+ "aram"
+ ],
+ [
+ "▁Par",
+ "am"
+ ],
+ [
+ "▁Pa",
+ "ram"
+ ],
+ [
+ "▁Para",
+ "m"
+ ],
+ [
+ "▁",
+ "Param"
+ ],
+ [
+ "\",",
+ "\r"
+ ],
+ [
+ "\"",
+ ",\r"
+ ],
+ [
+ "TI",
+ "ON"
+ ],
+ [
+ "T",
+ "ION"
+ ],
+ [
+ "))",
+ ";\r"
+ ],
+ [
+ "));",
+ "\r"
+ ],
+ [
+ ")",
+ ");\r"
+ ],
+ [
+ "▁I",
+ "ran"
+ ],
+ [
+ "▁Ir",
+ "an"
+ ],
+ [
+ "▁Ira",
+ "n"
+ ],
+ [
+ "be",
+ "it"
+ ],
+ [
+ "▁B",
+ "u"
+ ],
+ [
+ "▁",
+ "Bu"
+ ],
+ [
+ "▁[",
+ "],"
+ ],
+ [
+ "▁[]",
+ ","
+ ],
+ [
+ "▁",
+ "[],"
+ ],
+ [
+ "SS",
+ "ION"
+ ],
+ [
+ "S",
+ "SION"
+ ],
+ [
+ "▁M",
+ "ah"
+ ],
+ [
+ "▁Ma",
+ "h"
+ ],
+ [
+ "▁res",
+ "olution"
+ ],
+ [
+ "▁b",
+ "oss"
+ ],
+ [
+ "▁bo",
+ "ss"
+ ],
+ [
+ "▁bos",
+ "s"
+ ],
+ [
+ "l",
+ "g"
+ ],
+ [
+ "ch",
+ "or"
+ ],
+ [
+ "cho",
+ "r"
+ ],
+ [
+ "c",
+ "hor"
+ ],
+ [
+ "▁Un",
+ "ter"
+ ],
+ [
+ "▁de",
+ "bt"
+ ],
+ [
+ "▁deb",
+ "t"
+ ],
+ [
+ "▁v",
+ "id"
+ ],
+ [
+ "▁vi",
+ "d"
+ ],
+ [
+ "▁",
+ "vid"
+ ],
+ [
+ "gi",
+ "e"
+ ],
+ [
+ "g",
+ "ie"
+ ],
+ [
+ "▁u",
+ "no"
+ ],
+ [
+ "▁un",
+ "o"
+ ],
+ [
+ "▁",
+ "uno"
+ ],
+ [
+ "C",
+ "B"
+ ],
+ [
+ "pl",
+ "om"
+ ],
+ [
+ "plo",
+ "m"
+ ],
+ [
+ "LIC",
+ "ENSE"
+ ],
+ [
+ "L",
+ "ICENSE"
+ ],
+ [
+ "▁K",
+ "enn"
+ ],
+ [
+ "▁Ke",
+ "nn"
+ ],
+ [
+ "▁Ken",
+ "n"
+ ],
+ [
+ "▁fin",
+ "ns"
+ ],
+ [
+ "ON",
+ "G"
+ ],
+ [
+ "O",
+ "NG"
+ ],
+ [
+ "▁some",
+ "what"
+ ],
+ [
+ "▁a",
+ "ctor"
+ ],
+ [
+ "▁act",
+ "or"
+ ],
+ [
+ "▁ac",
+ "tor"
+ ],
+ [
+ "▁",
+ "actor"
+ ],
+ [
+ "▁St",
+ "atus"
+ ],
+ [
+ "▁Stat",
+ "us"
+ ],
+ [
+ "▁",
+ "Status"
+ ],
+ [
+ "▁prob",
+ "ability"
+ ],
+ [
+ "f",
+ "b"
+ ],
+ [
+ "▁ch",
+ "art"
+ ],
+ [
+ "▁char",
+ "t"
+ ],
+ [
+ "▁cha",
+ "rt"
+ ],
+ [
+ "▁",
+ "chart"
+ ],
+ [
+ "▁st",
+ "ands"
+ ],
+ [
+ "▁stand",
+ "s"
+ ],
+ [
+ "▁stan",
+ "ds"
+ ],
+ [
+ "pol",
+ "icy"
+ ],
+ [
+ "▁o",
+ "nder"
+ ],
+ [
+ "▁on",
+ "der"
+ ],
+ [
+ "▁onde",
+ "r"
+ ],
+ [
+ "▁",
+ "onder"
+ ],
+ [
+ "tab",
+ "ular"
+ ],
+ [
+ "▁A",
+ "sh"
+ ],
+ [
+ "▁As",
+ "h"
+ ],
+ [
+ "▁bo",
+ "ost"
+ ],
+ [
+ "▁",
+ "boost"
+ ],
+ [
+ "▁des",
+ "per"
+ ],
+ [
+ "▁desp",
+ "er"
+ ],
+ [
+ "mon",
+ "th"
+ ],
+ [
+ "mont",
+ "h"
+ ],
+ [
+ "▁al",
+ "ert"
+ ],
+ [
+ "▁ale",
+ "rt"
+ ],
+ [
+ "▁",
+ "alert"
+ ],
+ [
+ "▁su",
+ "ite"
+ ],
+ [
+ "▁suit",
+ "e"
+ ],
+ [
+ "▁",
+ "suite"
+ ],
+ [
+ "▁g",
+ "én"
+ ],
+ [
+ "▁gé",
+ "n"
+ ],
+ [
+ "▁v",
+ "acc"
+ ],
+ [
+ "▁va",
+ "cc"
+ ],
+ [
+ "▁vac",
+ "c"
+ ],
+ [
+ "▁H",
+ "as"
+ ],
+ [
+ "▁Ha",
+ "s"
+ ],
+ [
+ "▁",
+ "Has"
+ ],
+ [
+ "Ma",
+ "sk"
+ ],
+ [
+ "M",
+ "ask"
+ ],
+ [
+ "▁Th",
+ "ursday"
+ ],
+ [
+ "▁pro",
+ "ved"
+ ],
+ [
+ "▁pr",
+ "oved"
+ ],
+ [
+ "▁prov",
+ "ed"
+ ],
+ [
+ "▁prove",
+ "d"
+ ],
+ [
+ "▁N",
+ "el"
+ ],
+ [
+ "▁Ne",
+ "l"
+ ],
+ [
+ "▁m",
+ "oral"
+ ],
+ [
+ "▁mor",
+ "al"
+ ],
+ [
+ "▁mo",
+ "ral"
+ ],
+ [
+ "▁j",
+ "a"
+ ],
+ [
+ "▁",
+ "ja"
+ ],
+ [
+ "au",
+ "er"
+ ],
+ [
+ "a",
+ "uer"
+ ],
+ [
+ "co",
+ "dec"
+ ],
+ [
+ "code",
+ "c"
+ ],
+ [
+ "cod",
+ "ec"
+ ],
+ [
+ "▁in",
+ "stant"
+ ],
+ [
+ "▁inst",
+ "ant"
+ ],
+ [
+ "am",
+ "ps"
+ ],
+ [
+ "amp",
+ "s"
+ ],
+ [
+ "▁mil",
+ "k"
+ ],
+ [
+ "WO",
+ "RD"
+ ],
+ [
+ "W",
+ "ORD"
+ ],
+ [
+ "▁",
+ "Ö"
+ ],
+ [
+ "Em",
+ "ail"
+ ],
+ [
+ "E",
+ "mail"
+ ],
+ [
+ "Element",
+ "s"
+ ],
+ [
+ "El",
+ "ements"
+ ],
+ [
+ "Elem",
+ "ents"
+ ],
+ [
+ "▁for",
+ "ma"
+ ],
+ [
+ "▁form",
+ "a"
+ ],
+ [
+ "Fr",
+ "ee"
+ ],
+ [
+ "F",
+ "ree"
+ ],
+ [
+ "MA",
+ "P"
+ ],
+ [
+ "M",
+ "AP"
+ ],
+ [
+ "▁",
+ "Ж"
+ ],
+ [
+ "sy",
+ "m"
+ ],
+ [
+ "s",
+ "ym"
+ ],
+ [
+ "▁т",
+ "и"
+ ],
+ [
+ "▁",
+ "ти"
+ ],
+ [
+ "▁E",
+ "conom"
+ ],
+ [
+ "▁Ec",
+ "onom"
+ ],
+ [
+ "▁V",
+ "i"
+ ],
+ [
+ "▁",
+ "Vi"
+ ],
+ [
+ "▁Col",
+ "umb"
+ ],
+ [
+ "▁_",
+ ","
+ ],
+ [
+ "▁",
+ "_,"
+ ],
+ [
+ "or",
+ "et"
+ ],
+ [
+ "ore",
+ "t"
+ ],
+ [
+ "o",
+ "ret"
+ ],
+ [
+ "Se",
+ "qu"
+ ],
+ [
+ "Seq",
+ "u"
+ ],
+ [
+ "S",
+ "equ"
+ ],
+ [
+ "pl",
+ "an"
+ ],
+ [
+ "p",
+ "lan"
+ ],
+ [
+ "▁f",
+ "requency"
+ ],
+ [
+ "▁frequ",
+ "ency"
+ ],
+ [
+ "▁",
+ "frequency"
+ ],
+ [
+ "ir",
+ "ement"
+ ],
+ [
+ "ire",
+ "ment"
+ ],
+ [
+ "▁ass",
+ "umed"
+ ],
+ [
+ "▁assum",
+ "ed"
+ ],
+ [
+ "▁assume",
+ "d"
+ ],
+ [
+ "▁C",
+ "a"
+ ],
+ [
+ "▁B",
+ "it"
+ ],
+ [
+ "▁Bi",
+ "t"
+ ],
+ [
+ "▁",
+ "Bit"
+ ],
+ [
+ "▁ко",
+ "ман"
+ ],
+ [
+ "▁ком",
+ "ан"
+ ],
+ [
+ "▁sm",
+ "ell"
+ ],
+ [
+ "Se",
+ "curity"
+ ],
+ [
+ "Sec",
+ "urity"
+ ],
+ [
+ "▁a",
+ "qu"
+ ],
+ [
+ "▁",
+ "aqu"
+ ],
+ [
+ "oo",
+ "r"
+ ],
+ [
+ "o",
+ "or"
+ ],
+ [
+ "pr",
+ "ice"
+ ],
+ [
+ "p",
+ "rice"
+ ],
+ [
+ "in",
+ "ity"
+ ],
+ [
+ "init",
+ "y"
+ ],
+ [
+ "ini",
+ "ty"
+ ],
+ [
+ "▁a",
+ "xis"
+ ],
+ [
+ "▁ax",
+ "is"
+ ],
+ [
+ "▁",
+ "axis"
+ ],
+ [
+ "re",
+ "lease"
+ ],
+ [
+ "▁res",
+ "olve"
+ ],
+ [
+ "▁",
+ "resolve"
+ ],
+ [
+ "▁t",
+ "ears"
+ ],
+ [
+ "▁te",
+ "ars"
+ ],
+ [
+ "▁tea",
+ "rs"
+ ],
+ [
+ "▁tear",
+ "s"
+ ],
+ [
+ "▁b",
+ "other"
+ ],
+ [
+ "▁bo",
+ "ther"
+ ],
+ [
+ "▁both",
+ "er"
+ ],
+ [
+ "▁bot",
+ "her"
+ ],
+ [
+ "▁Comm",
+ "unity"
+ ],
+ [
+ "▁Commun",
+ "ity"
+ ],
+ [
+ "▁register",
+ "ed"
+ ],
+ [
+ "▁re",
+ "volution"
+ ],
+ [
+ "▁rev",
+ "olution"
+ ],
+ [
+ "▁revol",
+ "ution"
+ ],
+ [
+ "?",
+ "."
+ ],
+ [
+ "▁version",
+ "s"
+ ],
+ [
+ "▁vers",
+ "ions"
+ ],
+ [
+ "▁",
+ "versions"
+ ],
+ [
+ "%%",
+ "%%"
+ ],
+ [
+ "yd",
+ "ro"
+ ],
+ [
+ "y",
+ "dro"
+ ],
+ [
+ "Su",
+ "ccess"
+ ],
+ [
+ "▁W",
+ "in"
+ ],
+ [
+ "▁Wi",
+ "n"
+ ],
+ [
+ "▁",
+ "Win"
+ ],
+ [
+ "▁B",
+ "oy"
+ ],
+ [
+ "▁Bo",
+ "y"
+ ],
+ [
+ "▁D",
+ "ub"
+ ],
+ [
+ "▁Du",
+ "b"
+ ],
+ [
+ "▁k",
+ "w"
+ ],
+ [
+ "▁",
+ "kw"
+ ],
+ [
+ "▁n",
+ "och"
+ ],
+ [
+ "▁no",
+ "ch"
+ ],
+ [
+ "▁char",
+ "ges"
+ ],
+ [
+ "▁charg",
+ "es"
+ ],
+ [
+ "▁charge",
+ "s"
+ ],
+ [
+ "ar",
+ "ios"
+ ],
+ [
+ "ari",
+ "os"
+ ],
+ [
+ "ario",
+ "s"
+ ],
+ [
+ "a",
+ "rios"
+ ],
+ [
+ "ua",
+ "r"
+ ],
+ [
+ "u",
+ "ar"
+ ],
+ [
+ ";",
+ "&"
+ ],
+ [
+ "▁hab",
+ "ía"
+ ],
+ [
+ "(",
+ "`"
+ ],
+ [
+ "▁t",
+ "x"
+ ],
+ [
+ "▁",
+ "tx"
+ ],
+ [
+ "el",
+ "ve"
+ ],
+ [
+ "▁a",
+ "ños"
+ ],
+ [
+ "▁año",
+ "s"
+ ],
+ [
+ "▁m",
+ "ath"
+ ],
+ [
+ "▁mat",
+ "h"
+ ],
+ [
+ "▁ma",
+ "th"
+ ],
+ [
+ "▁",
+ "math"
+ ],
+ [
+ "▁Al",
+ "f"
+ ],
+ [
+ "▁F",
+ "und"
+ ],
+ [
+ "▁Fun",
+ "d"
+ ],
+ [
+ "▁Fu",
+ "nd"
+ ],
+ [
+ "▁man",
+ "ifest"
+ ],
+ [
+ "▁manif",
+ "est"
+ ],
+ [
+ "▁att",
+ "ached"
+ ],
+ [
+ "▁attach",
+ "ed"
+ ],
+ [
+ "▁spirit",
+ "ual"
+ ],
+ [
+ "▁Alex",
+ "ander"
+ ],
+ [
+ "▁Alexand",
+ "er"
+ ],
+ [
+ "un",
+ "es"
+ ],
+ [
+ "une",
+ "s"
+ ],
+ [
+ "u",
+ "nes"
+ ],
+ [
+ "▁s",
+ "eed"
+ ],
+ [
+ "▁se",
+ "ed"
+ ],
+ [
+ "▁see",
+ "d"
+ ],
+ [
+ "▁",
+ "seed"
+ ],
+ [
+ "▁Н",
+ "о"
+ ],
+ [
+ "▁mag",
+ "azine"
+ ],
+ [
+ "▁magaz",
+ "ine"
+ ],
+ [
+ "▁e",
+ "igen"
+ ],
+ [
+ "▁о",
+ "бра"
+ ],
+ [
+ "▁об",
+ "ра"
+ ],
+ [
+ "▁",
+ "обра"
+ ],
+ [
+ "e",
+ "a"
+ ],
+ [
+ "▁P",
+ "H"
+ ],
+ [
+ "▁",
+ "PH"
+ ],
+ [
+ "sw",
+ "ing"
+ ],
+ [
+ "s",
+ "wing"
+ ],
+ [
+ "▁As",
+ "ia"
+ ],
+ [
+ "ј",
+ "у"
+ ],
+ [
+ "▁K",
+ "IND"
+ ],
+ [
+ "Ident",
+ "ifier"
+ ],
+ [
+ "on",
+ "ce"
+ ],
+ [
+ "▁al",
+ "cohol"
+ ],
+ [
+ "ці",
+ "ї"
+ ],
+ [
+ "st",
+ "yles"
+ ],
+ [
+ "style",
+ "s"
+ ],
+ [
+ "sty",
+ "les"
+ ],
+ [
+ "assert",
+ "Equal"
+ ],
+ [
+ "▁R",
+ "a"
+ ],
+ [
+ "гра",
+ "фи"
+ ],
+ [
+ "▁mill",
+ "ions"
+ ],
+ [
+ "▁million",
+ "s"
+ ],
+ [
+ "▁ch",
+ "unk"
+ ],
+ [
+ "▁",
+ "chunk"
+ ],
+ [
+ "де",
+ "р"
+ ],
+ [
+ "д",
+ "ер"
+ ],
+ [
+ "Pack",
+ "age"
+ ],
+ [
+ "US",
+ "T"
+ ],
+ [
+ "U",
+ "ST"
+ ],
+ [
+ "▁N",
+ "othing"
+ ],
+ [
+ "▁No",
+ "thing"
+ ],
+ [
+ "▁Not",
+ "hing"
+ ],
+ [
+ "▁",
+ "Nothing"
+ ],
+ [
+ "(\"",
+ "#"
+ ],
+ [
+ "▁M",
+ "id"
+ ],
+ [
+ "▁Mi",
+ "d"
+ ],
+ [
+ "▁на",
+ "ча"
+ ],
+ [
+ "▁",
+ "нача"
+ ],
+ [
+ "ł",
+ "y"
+ ],
+ [
+ "AA",
+ "AA"
+ ],
+ [
+ "▁la",
+ "unched"
+ ],
+ [
+ "▁launch",
+ "ed"
+ ],
+ [
+ "▁w",
+ "ake"
+ ],
+ [
+ "▁wa",
+ "ke"
+ ],
+ [
+ "▁",
+ "wake"
+ ],
+ [
+ "▁gu",
+ "ests"
+ ],
+ [
+ "▁guest",
+ "s"
+ ],
+ [
+ "▁dif",
+ "ferences"
+ ],
+ [
+ "▁differ",
+ "ences"
+ ],
+ [
+ "▁difference",
+ "s"
+ ],
+ [
+ "ud",
+ "i"
+ ],
+ [
+ "u",
+ "di"
+ ],
+ [
+ "▁a",
+ "id"
+ ],
+ [
+ "▁ai",
+ "d"
+ ],
+ [
+ "▁",
+ "aid"
+ ],
+ [
+ "▁S",
+ "port"
+ ],
+ [
+ "▁Sp",
+ "ort"
+ ],
+ [
+ "ul",
+ "ator"
+ ],
+ [
+ "ula",
+ "tor"
+ ],
+ [
+ "ex",
+ "ecute"
+ ],
+ [
+ "exec",
+ "ute"
+ ],
+ [
+ "execut",
+ "e"
+ ],
+ [
+ "pl",
+ "ot"
+ ],
+ [
+ "plo",
+ "t"
+ ],
+ [
+ "p",
+ "lot"
+ ],
+ [
+ "ch",
+ "ing"
+ ],
+ [
+ "chi",
+ "ng"
+ ],
+ [
+ "c",
+ "hing"
+ ],
+ [
+ "▁N",
+ "orm"
+ ],
+ [
+ "▁No",
+ "rm"
+ ],
+ [
+ "▁Nor",
+ "m"
+ ],
+ [
+ "▁",
+ "Norm"
+ ],
+ [
+ "t",
+ "m"
+ ],
+ [
+ "\\",
+ "+"
+ ],
+ [
+ "AR",
+ "D"
+ ],
+ [
+ "A",
+ "RD"
+ ],
+ [
+ "▁be",
+ "er"
+ ],
+ [
+ "▁п",
+ "ід"
+ ],
+ [
+ "▁пі",
+ "д"
+ ],
+ [
+ "IA",
+ "L"
+ ],
+ [
+ "I",
+ "AL"
+ ],
+ [
+ "st",
+ "orage"
+ ],
+ [
+ "sto",
+ "rage"
+ ],
+ [
+ "▁An",
+ "na"
+ ],
+ [
+ "▁Ann",
+ "a"
+ ],
+ [
+ "▁y",
+ "ards"
+ ],
+ [
+ "▁yard",
+ "s"
+ ],
+ [
+ "▁techn",
+ "ique"
+ ],
+ [
+ "▁o",
+ "ù"
+ ],
+ [
+ "at",
+ "ten"
+ ],
+ [
+ "att",
+ "en"
+ ],
+ [
+ "atte",
+ "n"
+ ],
+ [
+ "UN",
+ "T"
+ ],
+ [
+ "U",
+ "NT"
+ ],
+ [
+ "do",
+ "n"
+ ],
+ [
+ "d",
+ "on"
+ ],
+ [
+ "фо",
+ "р"
+ ],
+ [
+ "ф",
+ "ор"
+ ],
+ [
+ "▁h",
+ "oping"
+ ],
+ [
+ "▁hop",
+ "ing"
+ ],
+ [
+ "▁ho",
+ "ping"
+ ],
+ [
+ "▁vict",
+ "ory"
+ ],
+ [
+ "it",
+ "at"
+ ],
+ [
+ "ita",
+ "t"
+ ],
+ [
+ "i",
+ "tat"
+ ],
+ [
+ "▁signific",
+ "antly"
+ ],
+ [
+ "▁significant",
+ "ly"
+ ],
+ [
+ "▁pract",
+ "ical"
+ ],
+ [
+ "ij",
+ "e"
+ ],
+ [
+ "i",
+ "je"
+ ],
+ [
+ "▁exp",
+ "ansion"
+ ],
+ [
+ "▁expans",
+ "ion"
+ ],
+ [
+ "J",
+ "S"
+ ],
+ [
+ "ix",
+ "els"
+ ],
+ [
+ "ixel",
+ "s"
+ ],
+ [
+ "US",
+ "ER"
+ ],
+ [
+ "USE",
+ "R"
+ ],
+ [
+ "U",
+ "SER"
+ ],
+ [
+ "Sh",
+ "ape"
+ ],
+ [
+ "▁ext",
+ "ent"
+ ],
+ [
+ "li",
+ "o"
+ ],
+ [
+ "l",
+ "io"
+ ],
+ [
+ "▁p",
+ "ued"
+ ],
+ [
+ "▁pu",
+ "ed"
+ ],
+ [
+ "ol",
+ "id"
+ ],
+ [
+ "oli",
+ "d"
+ ],
+ [
+ "▁g",
+ "am"
+ ],
+ [
+ "▁ga",
+ "m"
+ ],
+ [
+ "▁s",
+ "event"
+ ],
+ [
+ "▁se",
+ "vent"
+ ],
+ [
+ "▁seven",
+ "t"
+ ],
+ [
+ "▁G",
+ "a"
+ ],
+ [
+ "▁",
+ "Ga"
+ ],
+ [
+ "angu",
+ "ages"
+ ],
+ [
+ "anguage",
+ "s"
+ ],
+ [
+ "((",
+ "("
+ ],
+ [
+ "(",
+ "(("
+ ],
+ [
+ "ъ",
+ "л"
+ ],
+ [
+ "▁Ex",
+ "per"
+ ],
+ [
+ "▁Exp",
+ "er"
+ ],
+ [
+ "▁",
+ "Exper"
+ ],
+ [
+ "as",
+ "ty"
+ ],
+ [
+ "ast",
+ "y"
+ ],
+ [
+ "a",
+ "sty"
+ ],
+ [
+ "ri",
+ "eg"
+ ],
+ [
+ "rie",
+ "g"
+ ],
+ [
+ "r",
+ "ieg"
+ ],
+ [
+ "gi",
+ "o"
+ ],
+ [
+ "g",
+ "io"
+ ],
+ [
+ "od",
+ "o"
+ ],
+ [
+ "o",
+ "do"
+ ],
+ [
+ "▁col",
+ "le"
+ ],
+ [
+ "▁co",
+ "lle"
+ ],
+ [
+ "▁coll",
+ "e"
+ ],
+ [
+ "▁st",
+ "ored"
+ ],
+ [
+ "▁store",
+ "d"
+ ],
+ [
+ "▁stor",
+ "ed"
+ ],
+ [
+ "▁sto",
+ "red"
+ ],
+ [
+ "▁S",
+ "che"
+ ],
+ [
+ "▁Sch",
+ "e"
+ ],
+ [
+ "▁Sc",
+ "he"
+ ],
+ [
+ "▁",
+ "Sche"
+ ],
+ [
+ "ist",
+ "ant"
+ ],
+ [
+ "ista",
+ "nt"
+ ],
+ [
+ "istan",
+ "t"
+ ],
+ [
+ "i",
+ "stant"
+ ],
+ [
+ "▁l",
+ "ip"
+ ],
+ [
+ "▁li",
+ "p"
+ ],
+ [
+ "B",
+ "R"
+ ],
+ [
+ "▁a",
+ "ug"
+ ],
+ [
+ "▁au",
+ "g"
+ ],
+ [
+ "▁",
+ "aug"
+ ],
+ [
+ "▁S",
+ "earch"
+ ],
+ [
+ "▁Se",
+ "arch"
+ ],
+ [
+ "▁",
+ "Search"
+ ],
+ [
+ ")=",
+ "\\"
+ ],
+ [
+ ")",
+ "=\\"
+ ],
+ [
+ "▁U",
+ "r"
+ ],
+ [
+ "▁s",
+ "ole"
+ ],
+ [
+ "▁so",
+ "le"
+ ],
+ [
+ "▁sol",
+ "e"
+ ],
+ [
+ "▁",
+ "sole"
+ ],
+ [
+ "il",
+ "lo"
+ ],
+ [
+ "ill",
+ "o"
+ ],
+ [
+ "▁me",
+ "hr"
+ ],
+ [
+ "ki",
+ "t"
+ ],
+ [
+ "k",
+ "it"
+ ],
+ [
+ "▁in",
+ "terior"
+ ],
+ [
+ "▁inter",
+ "ior"
+ ],
+ [
+ "▁inte",
+ "rior"
+ ],
+ [
+ "LI",
+ "ST"
+ ],
+ [
+ "L",
+ "IST"
+ ],
+ [
+ "ad",
+ "el"
+ ],
+ [
+ "ade",
+ "l"
+ ],
+ [
+ "a",
+ "del"
+ ],
+ [
+ "▁shop",
+ "ping"
+ ],
+ [
+ "▁s",
+ "lä"
+ ],
+ [
+ "▁sl",
+ "ä"
+ ],
+ [
+ "You",
+ "r"
+ ],
+ [
+ "Y",
+ "our"
+ ],
+ [
+ "DI",
+ "TION"
+ ],
+ [
+ "D",
+ "ITION"
+ ],
+ [
+ "▁H",
+ "ttp"
+ ],
+ [
+ "▁",
+ "Http"
+ ],
+ [
+ "ra",
+ "ham"
+ ],
+ [
+ "rah",
+ "am"
+ ],
+ [
+ "т",
+ "ри"
+ ],
+ [
+ "▁b",
+ "rings"
+ ],
+ [
+ "▁br",
+ "ings"
+ ],
+ [
+ "▁bring",
+ "s"
+ ],
+ [
+ "Re",
+ "v"
+ ],
+ [
+ "R",
+ "ev"
+ ],
+ [
+ "▁pro",
+ "pag"
+ ],
+ [
+ "▁prop",
+ "ag"
+ ],
+ [
+ "ity",
+ "Engine"
+ ],
+ [
+ "()",
+ "),"
+ ],
+ [
+ "())",
+ ","
+ ],
+ [
+ "(",
+ ")),"
+ ],
+ [
+ "▁ing",
+ "år"
+ ],
+ [
+ "▁Ir",
+ "eland"
+ ],
+ [
+ "▁Ire",
+ "land"
+ ],
+ [
+ "▁\"",
+ "./"
+ ],
+ [
+ "▁\".",
+ "/"
+ ],
+ [
+ "▁H",
+ "arr"
+ ],
+ [
+ "▁Har",
+ "r"
+ ],
+ [
+ "▁Ha",
+ "rr"
+ ],
+ [
+ "▁ad",
+ "min"
+ ],
+ [
+ "▁adm",
+ "in"
+ ],
+ [
+ "▁",
+ "admin"
+ ],
+ [
+ "en",
+ "o"
+ ],
+ [
+ "e",
+ "no"
+ ],
+ [
+ "▁k",
+ "r"
+ ],
+ [
+ "▁",
+ "kr"
+ ],
+ [
+ "▁est",
+ "á"
+ ],
+ [
+ "▁pro",
+ "ps"
+ ],
+ [
+ "▁pr",
+ "ops"
+ ],
+ [
+ "▁prop",
+ "s"
+ ],
+ [
+ "▁",
+ "props"
+ ],
+ [
+ "to",
+ "k"
+ ],
+ [
+ "t",
+ "ok"
+ ],
+ [
+ "om",
+ "orph"
+ ],
+ [
+ "▁affect",
+ "ed"
+ ],
+ [
+ "Ph",
+ "one"
+ ],
+ [
+ "▁deg",
+ "rees"
+ ],
+ [
+ "▁degree",
+ "s"
+ ],
+ [
+ "so",
+ "me"
+ ],
+ [
+ "som",
+ "e"
+ ],
+ [
+ "s",
+ "ome"
+ ],
+ [
+ "▁n",
+ "in"
+ ],
+ [
+ "▁ni",
+ "n"
+ ],
+ [
+ "EV",
+ "ENT"
+ ],
+ [
+ "▁inter",
+ "action"
+ ],
+ [
+ "▁inte",
+ "raction"
+ ],
+ [
+ "▁interact",
+ "ion"
+ ],
+ [
+ "▁T",
+ "uesday"
+ ],
+ [
+ "iter",
+ "ator"
+ ],
+ [
+ "▁N",
+ "ob"
+ ],
+ [
+ "▁No",
+ "b"
+ ],
+ [
+ "▁sc",
+ "atter"
+ ],
+ [
+ "uck",
+ "et"
+ ],
+ [
+ "uc",
+ "ket"
+ ],
+ [
+ "com",
+ "plete"
+ ],
+ [
+ "comp",
+ "lete"
+ ],
+ [
+ "▁d",
+ "uty"
+ ],
+ [
+ "▁du",
+ "ty"
+ ],
+ [
+ "▁dut",
+ "y"
+ ],
+ [
+ "▁answ",
+ "ers"
+ ],
+ [
+ "▁answer",
+ "s"
+ ],
+ [
+ "Pro",
+ "gress"
+ ],
+ [
+ "ee",
+ "d"
+ ],
+ [
+ "e",
+ "ed"
+ ],
+ [
+ "ро",
+ "н"
+ ],
+ [
+ "р",
+ "он"
+ ],
+ [
+ "▁v",
+ "ie"
+ ],
+ [
+ "▁vi",
+ "e"
+ ],
+ [
+ "▁de",
+ "pos"
+ ],
+ [
+ "▁dep",
+ "os"
+ ],
+ [
+ "▁p",
+ "acket"
+ ],
+ [
+ "▁pack",
+ "et"
+ ],
+ [
+ "▁pac",
+ "ket"
+ ],
+ [
+ "▁",
+ "packet"
+ ],
+ [
+ "▁t",
+ "ow"
+ ],
+ [
+ "▁to",
+ "w"
+ ],
+ [
+ "▁de",
+ "leg"
+ ],
+ [
+ "▁del",
+ "eg"
+ ],
+ [
+ "▁",
+ "deleg"
+ ],
+ [
+ "aud",
+ "io"
+ ],
+ [
+ "a",
+ "udio"
+ ],
+ [
+ "▁v",
+ "ary"
+ ],
+ [
+ "▁var",
+ "y"
+ ],
+ [
+ "▁va",
+ "ry"
+ ],
+ [
+ "▁m",
+ "igr"
+ ],
+ [
+ "▁mi",
+ "gr"
+ ],
+ [
+ "▁mig",
+ "r"
+ ],
+ [
+ "▁",
+ "migr"
+ ],
+ [
+ "ф",
+ "і"
+ ],
+ [
+ "es",
+ "a"
+ ],
+ [
+ "e",
+ "sa"
+ ],
+ [
+ "Event",
+ "s"
+ ],
+ [
+ "Ev",
+ "ents"
+ ],
+ [
+ "Even",
+ "ts"
+ ],
+ [
+ "ha",
+ "us"
+ ],
+ [
+ "h",
+ "aus"
+ ],
+ [
+ "▁S",
+ "av"
+ ],
+ [
+ "▁Sa",
+ "v"
+ ],
+ [
+ "▁Port",
+ "ug"
+ ],
+ [
+ "▁с",
+ "то"
+ ],
+ [
+ "▁ст",
+ "о"
+ ],
+ [
+ "▁",
+ "сто"
+ ],
+ [
+ "il",
+ "ation"
+ ],
+ [
+ "i",
+ "lation"
+ ],
+ [
+ "▁met",
+ "adata"
+ ],
+ [
+ "▁meta",
+ "data"
+ ],
+ [
+ "▁",
+ "metadata"
+ ],
+ [
+ "la",
+ "s"
+ ],
+ [
+ "l",
+ "as"
+ ],
+ [
+ "▁a",
+ "i"
+ ],
+ [
+ "▁",
+ "ai"
+ ],
+ [
+ "▁an",
+ "ger"
+ ],
+ [
+ "▁ang",
+ "er"
+ ],
+ [
+ "▁ange",
+ "r"
+ ],
+ [
+ "▁",
+ "anger"
+ ],
+ [
+ "▁h",
+ "am"
+ ],
+ [
+ "▁ha",
+ "m"
+ ],
+ [
+ "▁",
+ "ham"
+ ],
+ [
+ "▁A",
+ "nal"
+ ],
+ [
+ "▁An",
+ "al"
+ ],
+ [
+ "▁Ana",
+ "l"
+ ],
+ [
+ "▁",
+ "Anal"
+ ],
+ [
+ "▁frequ",
+ "ently"
+ ],
+ [
+ "▁frequent",
+ "ly"
+ ],
+ [
+ "▁F",
+ "ALSE"
+ ],
+ [
+ "▁",
+ "FALSE"
+ ],
+ [
+ "oc",
+ "he"
+ ],
+ [
+ "och",
+ "e"
+ ],
+ [
+ "o",
+ "che"
+ ],
+ [
+ "re",
+ "z"
+ ],
+ [
+ "r",
+ "ez"
+ ],
+ [
+ "▁V",
+ "iet"
+ ],
+ [
+ "▁Vi",
+ "et"
+ ],
+ [
+ "qu",
+ "is"
+ ],
+ [
+ "q",
+ "uis"
+ ],
+ [
+ "▁char",
+ "ged"
+ ],
+ [
+ "▁charg",
+ "ed"
+ ],
+ [
+ "▁charge",
+ "d"
+ ],
+ [
+ "ä",
+ "s"
+ ],
+ [
+ "▁P",
+ "ath"
+ ],
+ [
+ "▁Pat",
+ "h"
+ ],
+ [
+ "▁Pa",
+ "th"
+ ],
+ [
+ "▁",
+ "Path"
+ ],
+ [
+ "▁accur",
+ "ate"
+ ],
+ [
+ "▁Pl",
+ "us"
+ ],
+ [
+ "▁",
+ "Plus"
+ ],
+ [
+ "ke",
+ "it"
+ ],
+ [
+ "▁In",
+ "put"
+ ],
+ [
+ "▁",
+ "Input"
+ ],
+ [
+ "wh",
+ "en"
+ ],
+ [
+ "whe",
+ "n"
+ ],
+ [
+ "w",
+ "hen"
+ ],
+ [
+ "er",
+ "as"
+ ],
+ [
+ "era",
+ "s"
+ ],
+ [
+ "e",
+ "ras"
+ ],
+ [
+ "▁во",
+ "з"
+ ],
+ [
+ "▁de",
+ "rived"
+ ],
+ [
+ "▁der",
+ "ived"
+ ],
+ [
+ "▁deriv",
+ "ed"
+ ],
+ [
+ "▁derive",
+ "d"
+ ],
+ [
+ "aj",
+ "e"
+ ],
+ [
+ "a",
+ "je"
+ ],
+ [
+ "▁H",
+ "ad"
+ ],
+ [
+ "▁Ha",
+ "d"
+ ],
+ [
+ "ur",
+ "en"
+ ],
+ [
+ "ure",
+ "n"
+ ],
+ [
+ "u",
+ "ren"
+ ],
+ [
+ "ó",
+ "r"
+ ],
+ [
+ "}=",
+ "\\"
+ ],
+ [
+ "}",
+ "=\\"
+ ],
+ [
+ "ur",
+ "eau"
+ ],
+ [
+ "ure",
+ "au"
+ ],
+ [
+ "al",
+ "and"
+ ],
+ [
+ "ala",
+ "nd"
+ ],
+ [
+ "a",
+ "land"
+ ],
+ [
+ "Execut",
+ "ion"
+ ],
+ [
+ "Exec",
+ "ution"
+ ],
+ [
+ "ed",
+ "en"
+ ],
+ [
+ "ede",
+ "n"
+ ],
+ [
+ "e",
+ "den"
+ ],
+ [
+ "▁se",
+ "eking"
+ ],
+ [
+ "▁see",
+ "king"
+ ],
+ [
+ "▁seek",
+ "ing"
+ ],
+ [
+ "ch",
+ "anged"
+ ],
+ [
+ "change",
+ "d"
+ ],
+ [
+ "chan",
+ "ged"
+ ],
+ [
+ "▁t",
+ "rem"
+ ],
+ [
+ "▁tr",
+ "em"
+ ],
+ [
+ "▁tre",
+ "m"
+ ],
+ [
+ "ск",
+ "у"
+ ],
+ [
+ "с",
+ "ку"
+ ],
+ [
+ "▁G",
+ "eme"
+ ],
+ [
+ "▁Ge",
+ "me"
+ ],
+ [
+ "▁Gem",
+ "e"
+ ],
+ [
+ "in",
+ "ating"
+ ],
+ [
+ "ina",
+ "ting"
+ ],
+ [
+ "▁column",
+ "s"
+ ],
+ [
+ "▁",
+ "columns"
+ ],
+ [
+ "E",
+ "P"
+ ],
+ [
+ "▁inj",
+ "ury"
+ ],
+ [
+ "end",
+ "ent"
+ ],
+ [
+ "ende",
+ "nt"
+ ],
+ [
+ "enden",
+ "t"
+ ],
+ [
+ "▁he",
+ "aded"
+ ],
+ [
+ "▁head",
+ "ed"
+ ],
+ [
+ "▁",
+ "headed"
+ ],
+ [
+ "AS",
+ "E"
+ ],
+ [
+ "A",
+ "SE"
+ ],
+ [
+ "▁Mus",
+ "lim"
+ ],
+ [
+ "▁cl",
+ "imate"
+ ],
+ [
+ "▁clim",
+ "ate"
+ ],
+ [
+ "▁f",
+ "ake"
+ ],
+ [
+ "▁fa",
+ "ke"
+ ],
+ [
+ "▁",
+ "fake"
+ ],
+ [
+ "CM",
+ "D"
+ ],
+ [
+ "C",
+ "MD"
+ ],
+ [
+ "ј",
+ "и"
+ ],
+ [
+ "▁Ar",
+ "ts"
+ ],
+ [
+ "▁Art",
+ "s"
+ ],
+ [
+ "fe",
+ "ction"
+ ],
+ [
+ "fect",
+ "ion"
+ ],
+ [
+ "f",
+ "ection"
+ ],
+ [
+ "▁p",
+ "it"
+ ],
+ [
+ "▁pi",
+ "t"
+ ],
+ [
+ "▁",
+ "pit"
+ ],
+ [
+ ">",
+ "\\"
+ ],
+ [
+ "an",
+ "al"
+ ],
+ [
+ "ana",
+ "l"
+ ],
+ [
+ "a",
+ "nal"
+ ],
+ [
+ "Se",
+ "ction"
+ ],
+ [
+ "S",
+ "ection"
+ ],
+ [
+ "pl",
+ "us"
+ ],
+ [
+ "ü",
+ "t"
+ ],
+ [
+ "▁em",
+ "bed"
+ ],
+ [
+ "▁emb",
+ "ed"
+ ],
+ [
+ "▁",
+ "embed"
+ ],
+ [
+ "▁st",
+ "rings"
+ ],
+ [
+ "▁str",
+ "ings"
+ ],
+ [
+ "▁string",
+ "s"
+ ],
+ [
+ "▁",
+ "strings"
+ ],
+ [
+ "Be",
+ "fore"
+ ],
+ [
+ "B",
+ "efore"
+ ],
+ [
+ "pro",
+ "c"
+ ],
+ [
+ "pr",
+ "oc"
+ ],
+ [
+ "p",
+ "roc"
+ ],
+ [
+ "▁с",
+ "по"
+ ],
+ [
+ "▁сп",
+ "о"
+ ],
+ [
+ "▁",
+ "спо"
+ ],
+ [
+ "tr",
+ "l"
+ ],
+ [
+ "t",
+ "rl"
+ ],
+ [
+ "v",
+ "r"
+ ],
+ [
+ "Back",
+ "ground"
+ ],
+ [
+ "log",
+ "ger"
+ ],
+ [
+ "ag",
+ "raph"
+ ],
+ [
+ "agr",
+ "aph"
+ ],
+ [
+ "agra",
+ "ph"
+ ],
+ [
+ "a",
+ "graph"
+ ],
+ [
+ "ie",
+ "st"
+ ],
+ [
+ "ies",
+ "t"
+ ],
+ [
+ "i",
+ "est"
+ ],
+ [
+ "▁good",
+ "s"
+ ],
+ [
+ "bat",
+ "ch"
+ ],
+ [
+ "b",
+ "atch"
+ ],
+ [
+ "▁opt",
+ "ional"
+ ],
+ [
+ "▁option",
+ "al"
+ ],
+ [
+ "▁",
+ "optional"
+ ],
+ [
+ "▁Tay",
+ "lor"
+ ],
+ [
+ "▁recogn",
+ "ize"
+ ],
+ [
+ "wal",
+ "k"
+ ],
+ [
+ "w",
+ "alk"
+ ],
+ [
+ "▁H",
+ "it"
+ ],
+ [
+ "▁Hi",
+ "t"
+ ],
+ [
+ "▁",
+ "Hit"
+ ],
+ [
+ "▁Eliz",
+ "abeth"
+ ],
+ [
+ "}",
+ ":"
+ ],
+ [
+ "▁care",
+ "ful"
+ ],
+ [
+ "кра",
+ "ї"
+ ],
+ [
+ "▁loc",
+ "ations"
+ ],
+ [
+ "▁location",
+ "s"
+ ],
+ [
+ "▁struct",
+ "ures"
+ ],
+ [
+ "▁structure",
+ "s"
+ ],
+ [
+ "▁d",
+ "isk"
+ ],
+ [
+ "▁dis",
+ "k"
+ ],
+ [
+ "▁di",
+ "sk"
+ ],
+ [
+ "▁",
+ "disk"
+ ],
+ [
+ "▁sh",
+ "ips"
+ ],
+ [
+ "▁ship",
+ "s"
+ ],
+ [
+ "▁",
+ "ships"
+ ],
+ [
+ "▁s",
+ "uo"
+ ],
+ [
+ "▁su",
+ "o"
+ ],
+ [
+ "▁s",
+ "owie"
+ ],
+ [
+ "▁so",
+ "wie"
+ ],
+ [
+ "▁sow",
+ "ie"
+ ],
+ [
+ "▁E",
+ "ss"
+ ],
+ [
+ "▁Es",
+ "s"
+ ],
+ [
+ "▁H",
+ "ash"
+ ],
+ [
+ "▁Ha",
+ "sh"
+ ],
+ [
+ "▁Has",
+ "h"
+ ],
+ [
+ "▁",
+ "Hash"
+ ],
+ [
+ "▁reason",
+ "able"
+ ],
+ [
+ "▁More",
+ "over"
+ ],
+ [
+ "▁form",
+ "ula"
+ ],
+ [
+ "▁C",
+ "entre"
+ ],
+ [
+ "▁Cent",
+ "re"
+ ],
+ [
+ "▁res",
+ "idents"
+ ],
+ [
+ "▁resident",
+ "s"
+ ],
+ [
+ "▁resid",
+ "ents"
+ ],
+ [
+ "R",
+ "S"
+ ],
+ [
+ "Id",
+ "s"
+ ],
+ [
+ "I",
+ "ds"
+ ],
+ [
+ "▁K",
+ "now"
+ ],
+ [
+ "▁Kn",
+ "ow"
+ ],
+ [
+ "▁t",
+ "rib"
+ ],
+ [
+ "▁tr",
+ "ib"
+ ],
+ [
+ "▁tri",
+ "b"
+ ],
+ [
+ "▁r",
+ "és"
+ ],
+ [
+ "▁ré",
+ "s"
+ ],
+ [
+ "▁s",
+ "table"
+ ],
+ [
+ "▁st",
+ "able"
+ ],
+ [
+ "▁sta",
+ "ble"
+ ],
+ [
+ "▁stab",
+ "le"
+ ],
+ [
+ "▁",
+ "stable"
+ ],
+ [
+ "▁W",
+ "ould"
+ ],
+ [
+ "▁Wo",
+ "uld"
+ ],
+ [
+ "▁",
+ "Would"
+ ],
+ [
+ "▁break",
+ "ing"
+ ],
+ [
+ "▁bre",
+ "aking"
+ ],
+ [
+ "▁",
+ "breaking"
+ ],
+ [
+ "▁me",
+ "al"
+ ],
+ [
+ "▁p",
+ "hen"
+ ],
+ [
+ "▁ph",
+ "en"
+ ],
+ [
+ "▁f",
+ "el"
+ ],
+ [
+ "▁fe",
+ "l"
+ ],
+ [
+ "▁",
+ "fel"
+ ],
+ [
+ "▁F",
+ "red"
+ ],
+ [
+ "▁Fr",
+ "ed"
+ ],
+ [
+ "▁Fre",
+ "d"
+ ],
+ [
+ "Aut",
+ "hor"
+ ],
+ [
+ "Auth",
+ "or"
+ ],
+ [
+ "▁c",
+ "apture"
+ ],
+ [
+ "▁capt",
+ "ure"
+ ],
+ [
+ "▁",
+ "capture"
+ ],
+ [
+ "op",
+ "ts"
+ ],
+ [
+ "opt",
+ "s"
+ ],
+ [
+ "o",
+ "pts"
+ ],
+ [
+ "▁every",
+ "where"
+ ],
+ [
+ "▁s",
+ "que"
+ ],
+ [
+ "▁squ",
+ "e"
+ ],
+ [
+ "▁sq",
+ "ue"
+ ],
+ [
+ "▁m",
+ "oder"
+ ],
+ [
+ "▁mod",
+ "er"
+ ],
+ [
+ "▁mo",
+ "der"
+ ],
+ [
+ "▁mode",
+ "r"
+ ],
+ [
+ "set",
+ "up"
+ ],
+ [
+ "▁S",
+ "upp"
+ ],
+ [
+ "▁Su",
+ "pp"
+ ],
+ [
+ "▁Sup",
+ "p"
+ ],
+ [
+ "▁",
+ "Supp"
+ ],
+ [
+ "▁when",
+ "ever"
+ ],
+ [
+ "▁whe",
+ "never"
+ ],
+ [
+ "{",
+ "("
+ ],
+ [
+ "wa",
+ "rt"
+ ],
+ [
+ "war",
+ "t"
+ ],
+ [
+ "w",
+ "art"
+ ],
+ [
+ "▁t",
+ "oe"
+ ],
+ [
+ "▁to",
+ "e"
+ ],
+ [
+ "Pre",
+ "fix"
+ ],
+ [
+ "Pref",
+ "ix"
+ ],
+ [
+ "P",
+ "refix"
+ ],
+ [
+ "ho",
+ "u"
+ ],
+ [
+ "h",
+ "ou"
+ ],
+ [
+ "ga",
+ "ge"
+ ],
+ [
+ "g",
+ "age"
+ ],
+ [
+ ">",
+ "\""
+ ],
+ [
+ "▁f",
+ "rag"
+ ],
+ [
+ "▁fr",
+ "ag"
+ ],
+ [
+ "▁fra",
+ "g"
+ ],
+ [
+ "▁",
+ "frag"
+ ],
+ [
+ "▁The",
+ "orem"
+ ],
+ [
+ "mem",
+ "ory"
+ ],
+ [
+ "▁cont",
+ "ents"
+ ],
+ [
+ "▁content",
+ "s"
+ ],
+ [
+ "▁conten",
+ "ts"
+ ],
+ [
+ "▁",
+ "contents"
+ ],
+ [
+ "do",
+ "cs"
+ ],
+ [
+ "doc",
+ "s"
+ ],
+ [
+ "}",
+ "'"
+ ],
+ [
+ "▁Ir",
+ "ish"
+ ],
+ [
+ "The",
+ "n"
+ ],
+ [
+ "Th",
+ "en"
+ ],
+ [
+ "T",
+ "hen"
+ ],
+ [
+ "aa",
+ "ts"
+ ],
+ [
+ "aat",
+ "s"
+ ],
+ [
+ "a",
+ "ats"
+ ],
+ [
+ "Sa",
+ "ve"
+ ],
+ [
+ "S",
+ "ave"
+ ],
+ [
+ "▁a",
+ "gency"
+ ],
+ [
+ "▁ag",
+ "ency"
+ ],
+ [
+ "▁и",
+ "ме"
+ ],
+ [
+ "▁им",
+ "е"
+ ],
+ [
+ "до",
+ "ва"
+ ],
+ [
+ "дов",
+ "а"
+ ],
+ [
+ "▁F",
+ "unction"
+ ],
+ [
+ "▁Fun",
+ "ction"
+ ],
+ [
+ "▁",
+ "Function"
+ ],
+ [
+ "N",
+ "N"
+ ],
+ [
+ "dest",
+ "roy"
+ ],
+ [
+ "▁M",
+ "essage"
+ ],
+ [
+ "▁Mess",
+ "age"
+ ],
+ [
+ "▁",
+ "Message"
+ ],
+ [
+ "▁c",
+ "ancel"
+ ],
+ [
+ "▁can",
+ "cel"
+ ],
+ [
+ "▁",
+ "cancel"
+ ],
+ [
+ "▁super",
+ "ior"
+ ],
+ [
+ "▁e",
+ "c"
+ ],
+ [
+ "▁",
+ "ec"
+ ],
+ [
+ "▁liter",
+ "ature"
+ ],
+ [
+ "▁P",
+ "ART"
+ ],
+ [
+ "▁PA",
+ "RT"
+ ],
+ [
+ "▁PAR",
+ "T"
+ ],
+ [
+ "▁",
+ "PART"
+ ],
+ [
+ "I",
+ "l"
+ ],
+ [
+ "▁C",
+ "ab"
+ ],
+ [
+ "▁Ca",
+ "b"
+ ],
+ [
+ "eng",
+ "ine"
+ ],
+ [
+ "▁b",
+ "asket"
+ ],
+ [
+ "▁bas",
+ "ket"
+ ],
+ [
+ "wor",
+ "th"
+ ],
+ [
+ "wort",
+ "h"
+ ],
+ [
+ "w",
+ "orth"
+ ],
+ [
+ "▁S",
+ "el"
+ ],
+ [
+ "▁Se",
+ "l"
+ ],
+ [
+ "f",
+ "etch"
+ ],
+ [
+ "▁St",
+ "adt"
+ ],
+ [
+ "▁Stad",
+ "t"
+ ],
+ [
+ "▁Sta",
+ "dt"
+ ],
+ [
+ "▁К",
+ "и"
+ ],
+ [
+ "▁con",
+ "j"
+ ],
+ [
+ "▁se",
+ "iner"
+ ],
+ [
+ "▁sein",
+ "er"
+ ],
+ [
+ "▁seine",
+ "r"
+ ],
+ [
+ "▁sei",
+ "ner"
+ ],
+ [
+ "▁conf",
+ "irmed"
+ ],
+ [
+ "▁confirm",
+ "ed"
+ ],
+ [
+ "▁Ar",
+ "gent"
+ ],
+ [
+ "▁Arg",
+ "ent"
+ ],
+ [
+ "am",
+ "ar"
+ ],
+ [
+ "ama",
+ "r"
+ ],
+ [
+ "a",
+ "mar"
+ ],
+ [
+ "pgf",
+ "path"
+ ],
+ [
+ "▁strugg",
+ "le"
+ ],
+ [
+ "Pat",
+ "tern"
+ ],
+ [
+ "▁M",
+ "iddle"
+ ],
+ [
+ "it",
+ "an"
+ ],
+ [
+ "ita",
+ "n"
+ ],
+ [
+ "i",
+ "tan"
+ ],
+ [
+ "▁m",
+ "oon"
+ ],
+ [
+ "▁mo",
+ "on"
+ ],
+ [
+ "or",
+ "ough"
+ ],
+ [
+ "oro",
+ "ugh"
+ ],
+ [
+ "o",
+ "rough"
+ ],
+ [
+ "▁Cath",
+ "olic"
+ ],
+ [
+ "▁str",
+ "uck"
+ ],
+ [
+ "▁stru",
+ "ck"
+ ],
+ [
+ "]",
+ "->"
+ ],
+ [
+ "▁we",
+ "apon"
+ ],
+ [
+ "▁weap",
+ "on"
+ ],
+ [
+ "▁su",
+ "bst"
+ ],
+ [
+ "▁sub",
+ "st"
+ ],
+ [
+ "▁subs",
+ "t"
+ ],
+ [
+ "▁inst",
+ "ructions"
+ ],
+ [
+ "▁instruct",
+ "ions"
+ ],
+ [
+ "▁instruction",
+ "s"
+ ],
+ [
+ "▁occ",
+ "as"
+ ],
+ [
+ "▁oc",
+ "cas"
+ ],
+ [
+ "prote",
+ "cted"
+ ],
+ [
+ "▁L",
+ "ess"
+ ],
+ [
+ "▁Le",
+ "ss"
+ ],
+ [
+ "▁Les",
+ "s"
+ ],
+ [
+ "▁",
+ "Less"
+ ],
+ [
+ "▁b",
+ "atch"
+ ],
+ [
+ "▁bat",
+ "ch"
+ ],
+ [
+ "▁",
+ "batch"
+ ],
+ [
+ "▁con",
+ "tra"
+ ],
+ [
+ "▁cont",
+ "ra"
+ ],
+ [
+ "▁contr",
+ "a"
+ ],
+ [
+ "▁de",
+ "ck"
+ ],
+ [
+ "▁dec",
+ "k"
+ ],
+ [
+ "▁",
+ "deck"
+ ],
+ [
+ "▁ign",
+ "ored"
+ ],
+ [
+ "▁ignore",
+ "d"
+ ],
+ [
+ "▁ignor",
+ "ed"
+ ],
+ [
+ "▁ref",
+ "used"
+ ],
+ [
+ "▁refuse",
+ "d"
+ ],
+ [
+ "tr",
+ "igger"
+ ],
+ [
+ "▁crim",
+ "inal"
+ ],
+ [
+ "G",
+ "A"
+ ],
+ [
+ "ol",
+ "ly"
+ ],
+ [
+ "oll",
+ "y"
+ ],
+ [
+ "▁B",
+ "ell"
+ ],
+ [
+ "▁Be",
+ "ll"
+ ],
+ [
+ "▁Bel",
+ "l"
+ ],
+ [
+ "▁",
+ "Ю"
+ ],
+ [
+ "for",
+ "ward"
+ ],
+ [
+ "▁p",
+ "refix"
+ ],
+ [
+ "▁pre",
+ "fix"
+ ],
+ [
+ "▁pref",
+ "ix"
+ ],
+ [
+ "▁",
+ "prefix"
+ ],
+ [
+ "▁im",
+ "mediate"
+ ],
+ [
+ "▁immedi",
+ "ate"
+ ],
+ [
+ "▁as",
+ "signed"
+ ],
+ [
+ "▁ass",
+ "igned"
+ ],
+ [
+ "▁assign",
+ "ed"
+ ],
+ [
+ "▁e",
+ "lected"
+ ],
+ [
+ "▁elect",
+ "ed"
+ ],
+ [
+ "▁ele",
+ "cted"
+ ],
+ [
+ "▁to",
+ "night"
+ ],
+ [
+ "▁ton",
+ "ight"
+ ],
+ [
+ "▁D",
+ "ies"
+ ],
+ [
+ "▁Die",
+ "s"
+ ],
+ [
+ "▁Di",
+ "es"
+ ],
+ [
+ "▁B",
+ "each"
+ ],
+ [
+ "▁Be",
+ "ach"
+ ],
+ [
+ "▁pre",
+ "ced"
+ ],
+ [
+ "▁prec",
+ "ed"
+ ],
+ [
+ "ow",
+ "ał"
+ ],
+ [
+ "owa",
+ "ł"
+ ],
+ [
+ "▁gal",
+ "ax"
+ ],
+ [
+ "▁log",
+ "ic"
+ ],
+ [
+ "en",
+ "za"
+ ],
+ [
+ "enz",
+ "a"
+ ],
+ [
+ "▁Cap",
+ "tain"
+ ],
+ [
+ "▁Capt",
+ "ain"
+ ],
+ [
+ "▁H",
+ "ay"
+ ],
+ [
+ "▁Ha",
+ "y"
+ ],
+ [
+ "▁f",
+ "acts"
+ ],
+ [
+ "▁fact",
+ "s"
+ ],
+ [
+ "▁fac",
+ "ts"
+ ],
+ [
+ "▁н",
+ "и"
+ ],
+ [
+ "▁",
+ "ни"
+ ],
+ [
+ "t",
+ "é"
+ ],
+ [
+ "▁s",
+ "b"
+ ],
+ [
+ "▁",
+ "sb"
+ ],
+ [
+ "op",
+ "ed"
+ ],
+ [
+ "ope",
+ "d"
+ ],
+ [
+ "o",
+ "ped"
+ ],
+ [
+ "▁com",
+ "bat"
+ ],
+ [
+ "▁comb",
+ "at"
+ ],
+ [
+ "▁expl",
+ "ore"
+ ],
+ [
+ "▁explo",
+ "re"
+ ],
+ [
+ "▁(",
+ "-"
+ ],
+ [
+ "▁",
+ "(-"
+ ],
+ [
+ "Load",
+ "er"
+ ],
+ [
+ "Lo",
+ "ader"
+ ],
+ [
+ "▁Wil",
+ "son"
+ ],
+ [
+ "▁l",
+ "ocked"
+ ],
+ [
+ "▁loc",
+ "ked"
+ ],
+ [
+ "▁lock",
+ "ed"
+ ],
+ [
+ "▁",
+ "locked"
+ ],
+ [
+ ":",
+ ""
+ ],
+ [
+ "▁O",
+ "d"
+ ],
+ [
+ "▁P",
+ "rote"
+ ],
+ [
+ "▁Pro",
+ "te"
+ ],
+ [
+ "▁Pr",
+ "ote"
+ ],
+ [
+ "▁",
+ "Prote"
+ ],
+ [
+ "▁dis",
+ "abled"
+ ],
+ [
+ "▁disable",
+ "d"
+ ],
+ [
+ "▁",
+ "disabled"
+ ],
+ [
+ "▁h",
+ "atte"
+ ],
+ [
+ "▁hat",
+ "te"
+ ],
+ [
+ "▁sh",
+ "out"
+ ],
+ [
+ "▁con",
+ "structor"
+ ],
+ [
+ "▁construct",
+ "or"
+ ],
+ [
+ "▁constru",
+ "ctor"
+ ],
+ [
+ "▁",
+ "constructor"
+ ],
+ [
+ "б",
+ "і"
+ ],
+ [
+ "▁t",
+ "ras"
+ ],
+ [
+ "▁tr",
+ "as"
+ ],
+ [
+ "▁tra",
+ "s"
+ ],
+ [
+ "▁",
+ "tras"
+ ],
+ [
+ "▁F",
+ "ather"
+ ],
+ [
+ "▁Fa",
+ "ther"
+ ],
+ [
+ "▁Fat",
+ "her"
+ ],
+ [
+ "▁ad",
+ "j"
+ ],
+ [
+ "▁",
+ "adj"
+ ],
+ [
+ "▁Carol",
+ "ina"
+ ],
+ [
+ "▁F",
+ "ood"
+ ],
+ [
+ "▁Fo",
+ "od"
+ ],
+ [
+ "ba",
+ "d"
+ ],
+ [
+ "b",
+ "ad"
+ ],
+ [
+ "at",
+ "ore"
+ ],
+ [
+ "ator",
+ "e"
+ ],
+ [
+ "ato",
+ "re"
+ ],
+ [
+ "param",
+ "eters"
+ ],
+ [
+ "parameter",
+ "s"
+ ],
+ [
+ "▁F",
+ "ull"
+ ],
+ [
+ "▁Fu",
+ "ll"
+ ],
+ [
+ "▁",
+ "Full"
+ ],
+ [
+ "[",
+ "-"
+ ],
+ [
+ "▁\"",
+ "#"
+ ],
+ [
+ "▁T",
+ "ry"
+ ],
+ [
+ "▁Tr",
+ "y"
+ ],
+ [
+ "▁",
+ "Try"
+ ],
+ [
+ "сь",
+ "кої"
+ ],
+ [
+ "сько",
+ "ї"
+ ],
+ [
+ "▁ex",
+ "haust"
+ ],
+ [
+ "▁sc",
+ "roll"
+ ],
+ [
+ "▁scr",
+ "oll"
+ ],
+ [
+ "▁",
+ "scroll"
+ ],
+ [
+ "_",
+ ";"
+ ],
+ [
+ "Wh",
+ "o"
+ ],
+ [
+ "W",
+ "ho"
+ ],
+ [
+ "▁deliver",
+ "ed"
+ ],
+ [
+ "▁re",
+ "ferred"
+ ],
+ [
+ "▁refer",
+ "red"
+ ],
+ [
+ "▁pro",
+ "spect"
+ ],
+ [
+ "▁pros",
+ "pect"
+ ],
+ [
+ "sc",
+ "an"
+ ],
+ [
+ "s",
+ "can"
+ ],
+ [
+ "▁mod",
+ "ified"
+ ],
+ [
+ "▁",
+ "modified"
+ ],
+ [
+ "Gener",
+ "ator"
+ ],
+ [
+ "▁ex",
+ "cess"
+ ],
+ [
+ "▁exc",
+ "ess"
+ ],
+ [
+ "▁k",
+ "g"
+ ],
+ [
+ "▁",
+ "kg"
+ ],
+ [
+ "ze",
+ "t"
+ ],
+ [
+ "z",
+ "et"
+ ],
+ [
+ "ic",
+ "z"
+ ],
+ [
+ "i",
+ "cz"
+ ],
+ [
+ "clip",
+ "se"
+ ],
+ [
+ "cli",
+ "pse"
+ ],
+ [
+ "▁t",
+ "ank"
+ ],
+ [
+ "▁tan",
+ "k"
+ ],
+ [
+ "▁g",
+ "uns"
+ ],
+ [
+ "▁gu",
+ "ns"
+ ],
+ [
+ "▁gun",
+ "s"
+ ],
+ [
+ "▁G",
+ "es"
+ ],
+ [
+ "▁Ge",
+ "s"
+ ],
+ [
+ "in",
+ "ton"
+ ],
+ [
+ "int",
+ "on"
+ ],
+ [
+ "into",
+ "n"
+ ],
+ [
+ "▁Wed",
+ "nesday"
+ ],
+ [
+ "▁main",
+ "ly"
+ ],
+ [
+ "par",
+ "ser"
+ ],
+ [
+ "parse",
+ "r"
+ ],
+ [
+ "pars",
+ "er"
+ ],
+ [
+ "▁effect",
+ "ively"
+ ],
+ [
+ "▁effective",
+ "ly"
+ ],
+ [
+ "▁К",
+ "у"
+ ],
+ [
+ "▁res",
+ "ident"
+ ],
+ [
+ "▁resid",
+ "ent"
+ ],
+ [
+ "▁L",
+ "i"
+ ],
+ [
+ "▁",
+ "Li"
+ ],
+ [
+ "▁f",
+ "lying"
+ ],
+ [
+ "▁fl",
+ "ying"
+ ],
+ [
+ "▁fly",
+ "ing"
+ ],
+ [
+ "▁may",
+ "or"
+ ],
+ [
+ "▁mayo",
+ "r"
+ ],
+ [
+ "ü",
+ "h"
+ ],
+ [
+ "ut",
+ "a"
+ ],
+ [
+ "u",
+ "ta"
+ ],
+ [
+ "▁col",
+ "our"
+ ],
+ [
+ "▁air",
+ "craft"
+ ],
+ [
+ "ter",
+ "ior"
+ ],
+ [
+ "te",
+ "rior"
+ ],
+ [
+ "n",
+ "r"
+ ],
+ [
+ "▁ke",
+ "eps"
+ ],
+ [
+ "▁keep",
+ "s"
+ ],
+ [
+ "fa",
+ "n"
+ ],
+ [
+ "f",
+ "an"
+ ],
+ [
+ "▁sh",
+ "irt"
+ ],
+ [
+ "▁",
+ "shirt"
+ ],
+ [
+ "Com",
+ "par"
+ ],
+ [
+ "Comp",
+ "ar"
+ ],
+ [
+ "▁E",
+ "th"
+ ],
+ [
+ "▁Et",
+ "h"
+ ],
+ [
+ "Ma",
+ "c"
+ ],
+ [
+ "M",
+ "ac"
+ ],
+ [
+ "cle",
+ "an"
+ ],
+ [
+ "c",
+ "lean"
+ ],
+ [
+ "sl",
+ "ice"
+ ],
+ [
+ "cz",
+ "y"
+ ],
+ [
+ "c",
+ "zy"
+ ],
+ [
+ "▁g",
+ "ender"
+ ],
+ [
+ "▁gen",
+ "der"
+ ],
+ [
+ "▁ge",
+ "nder"
+ ],
+ [
+ "▁",
+ "gender"
+ ],
+ [
+ "▁b",
+ "utter"
+ ],
+ [
+ "▁but",
+ "ter"
+ ],
+ [
+ "▁butt",
+ "er"
+ ],
+ [
+ "AU",
+ "T"
+ ],
+ [
+ "A",
+ "UT"
+ ],
+ [
+ "▁E",
+ "lement"
+ ],
+ [
+ "▁El",
+ "ement"
+ ],
+ [
+ "▁Ele",
+ "ment"
+ ],
+ [
+ "▁",
+ "Element"
+ ],
+ [
+ "Fi",
+ "n"
+ ],
+ [
+ "F",
+ "in"
+ ],
+ [
+ "dm",
+ "a"
+ ],
+ [
+ "d",
+ "ma"
+ ],
+ [
+ "sam",
+ "ple"
+ ],
+ [
+ "s",
+ "ample"
+ ],
+ [
+ "Reg",
+ "istry"
+ ],
+ [
+ "▁class",
+ "ic"
+ ],
+ [
+ "▁dr",
+ "ove"
+ ],
+ [
+ "▁dro",
+ "ve"
+ ],
+ [
+ "p",
+ "b"
+ ],
+ [
+ "def",
+ "ined"
+ ],
+ [
+ "define",
+ "d"
+ ],
+ [
+ "d",
+ "efined"
+ ],
+ [
+ "▁re",
+ "ward"
+ ],
+ [
+ "▁r",
+ "eward"
+ ],
+ [
+ "ya",
+ "l"
+ ],
+ [
+ "y",
+ "al"
+ ],
+ [
+ "])",
+ ","
+ ],
+ [
+ "]",
+ "),"
+ ],
+ [
+ "▁B",
+ "AS"
+ ],
+ [
+ "▁BA",
+ "S"
+ ],
+ [
+ "▁hy",
+ "per"
+ ],
+ [
+ "▁hyp",
+ "er"
+ ],
+ [
+ "▁",
+ "hyper"
+ ],
+ [
+ "▁Н",
+ "и"
+ ],
+ [
+ "▁)",
+ "."
+ ],
+ [
+ "▁",
+ ")."
+ ],
+ [
+ "Ps",
+ "i"
+ ],
+ [
+ "P",
+ "si"
+ ],
+ [
+ "▁ent",
+ "ries"
+ ],
+ [
+ "▁entr",
+ "ies"
+ ],
+ [
+ "▁",
+ "entries"
+ ],
+ [
+ "▁King",
+ "dom"
+ ],
+ [
+ "▁S",
+ "ong"
+ ],
+ [
+ "▁So",
+ "ng"
+ ],
+ [
+ "▁Son",
+ "g"
+ ],
+ [
+ "▁prom",
+ "pt"
+ ],
+ [
+ "cent",
+ "ering"
+ ],
+ [
+ "center",
+ "ing"
+ ],
+ [
+ "▁H",
+ "olly"
+ ],
+ [
+ "▁Hol",
+ "ly"
+ ],
+ [
+ "▁Holl",
+ "y"
+ ],
+ [
+ "em",
+ "an"
+ ],
+ [
+ "ema",
+ "n"
+ ],
+ [
+ "e",
+ "man"
+ ],
+ [
+ "▁pain",
+ "ting"
+ ],
+ [
+ "▁paint",
+ "ing"
+ ],
+ [
+ "▁form",
+ "ation"
+ ],
+ [
+ "▁format",
+ "ion"
+ ],
+ [
+ "▁",
+ "formation"
+ ],
+ [
+ "▁Re",
+ "quest"
+ ],
+ [
+ "▁Requ",
+ "est"
+ ],
+ [
+ "▁",
+ "Request"
+ ],
+ [
+ "cont",
+ "roller"
+ ],
+ [
+ "control",
+ "ler"
+ ],
+ [
+ "Reg",
+ "ion"
+ ],
+ [
+ "P",
+ "Y"
+ ],
+ [
+ "id",
+ "ades"
+ ],
+ [
+ "ida",
+ "des"
+ ],
+ [
+ "idad",
+ "es"
+ ],
+ [
+ "idade",
+ "s"
+ ],
+ [
+ "T",
+ "L"
+ ],
+ [
+ "▁dis",
+ "able"
+ ],
+ [
+ "▁",
+ "disable"
+ ],
+ [
+ "▁re",
+ "in"
+ ],
+ [
+ "ri",
+ "cal"
+ ],
+ [
+ "ric",
+ "al"
+ ],
+ [
+ "r",
+ "ical"
+ ],
+ [
+ "\"",
+ "\r"
+ ],
+ [
+ "%",
+ ")"
+ ],
+ [
+ "▁S",
+ "ab"
+ ],
+ [
+ "▁Sa",
+ "b"
+ ],
+ [
+ "▁With",
+ "out"
+ ],
+ [
+ "▁",
+ "Without"
+ ],
+ [
+ "Se",
+ "rv"
+ ],
+ [
+ "Ser",
+ "v"
+ ],
+ [
+ "S",
+ "erv"
+ ],
+ [
+ "▁Sh",
+ "ort"
+ ],
+ [
+ "▁",
+ "Short"
+ ],
+ [
+ "▁",
+ "ю"
+ ],
+ [
+ "▁re",
+ "sc"
+ ],
+ [
+ "▁r",
+ "esc"
+ ],
+ [
+ "▁res",
+ "c"
+ ],
+ [
+ "▁",
+ "resc"
+ ],
+ [
+ "▁pattern",
+ "s"
+ ],
+ [
+ "▁Array",
+ "List"
+ ],
+ [
+ "▁",
+ "ArrayList"
+ ],
+ [
+ "sym",
+ "bol"
+ ],
+ [
+ "s",
+ "ymbol"
+ ],
+ [
+ "ac",
+ "o"
+ ],
+ [
+ "a",
+ "co"
+ ],
+ [
+ "▁H",
+ "om"
+ ],
+ [
+ "▁Ho",
+ "m"
+ ],
+ [
+ "▁",
+ "Hom"
+ ],
+ [
+ "he",
+ "lp"
+ ],
+ [
+ "hel",
+ "p"
+ ],
+ [
+ "▁h",
+ "asta"
+ ],
+ [
+ "▁has",
+ "ta"
+ ],
+ [
+ "▁ha",
+ "sta"
+ ],
+ [
+ "▁hast",
+ "a"
+ ],
+ [
+ "▁inst",
+ "alled"
+ ],
+ [
+ "▁install",
+ "ed"
+ ],
+ [
+ "at",
+ "ie"
+ ],
+ [
+ "ati",
+ "e"
+ ],
+ [
+ "▁vis",
+ "ited"
+ ],
+ [
+ "▁visit",
+ "ed"
+ ],
+ [
+ "▁Б",
+ "е"
+ ],
+ [
+ "){",
+ "\\"
+ ],
+ [
+ ")",
+ "{\\"
+ ],
+ [
+ "▁des",
+ "de"
+ ],
+ [
+ "J",
+ "ECT"
+ ],
+ [
+ "▁d",
+ "rew"
+ ],
+ [
+ "▁dr",
+ "ew"
+ ],
+ [
+ "▁dre",
+ "w"
+ ],
+ [
+ "▁St",
+ "ock"
+ ],
+ [
+ "▁Sto",
+ "ck"
+ ],
+ [
+ "▁C",
+ "ru"
+ ],
+ [
+ "▁Cr",
+ "u"
+ ],
+ [
+ "DE",
+ "F"
+ ],
+ [
+ "D",
+ "EF"
+ ],
+ [
+ "ob",
+ "by"
+ ],
+ [
+ "obb",
+ "y"
+ ],
+ [
+ "iz",
+ "able"
+ ],
+ [
+ "iza",
+ "ble"
+ ],
+ [
+ "og",
+ "ether"
+ ],
+ [
+ "oge",
+ "ther"
+ ],
+ [
+ "▁a",
+ "ber"
+ ],
+ [
+ "▁ab",
+ "er"
+ ],
+ [
+ "▁d",
+ "an"
+ ],
+ [
+ "▁da",
+ "n"
+ ],
+ [
+ "▁",
+ "dan"
+ ],
+ [
+ "al",
+ "is"
+ ],
+ [
+ "ali",
+ "s"
+ ],
+ [
+ "ta",
+ "il"
+ ],
+ [
+ "t",
+ "ail"
+ ],
+ [
+ "▁ex",
+ "pressed"
+ ],
+ [
+ "▁exp",
+ "ressed"
+ ],
+ [
+ "▁express",
+ "ed"
+ ],
+ [
+ "▁expr",
+ "essed"
+ ],
+ [
+ "▁A",
+ "ccess"
+ ],
+ [
+ "▁Acc",
+ "ess"
+ ],
+ [
+ "▁Ac",
+ "cess"
+ ],
+ [
+ "▁",
+ "Access"
+ ],
+ [
+ "Se",
+ "g"
+ ],
+ [
+ "S",
+ "eg"
+ ],
+ [
+ "▁L",
+ "ib"
+ ],
+ [
+ "▁Li",
+ "b"
+ ],
+ [
+ "▁",
+ "Lib"
+ ],
+ [
+ "▁sup",
+ "ports"
+ ],
+ [
+ "▁support",
+ "s"
+ ],
+ [
+ "▁supp",
+ "orts"
+ ],
+ [
+ "back",
+ "ground"
+ ],
+ [
+ "▁comm",
+ "une"
+ ],
+ [
+ "▁commun",
+ "e"
+ ],
+ [
+ "cal",
+ "led"
+ ],
+ [
+ "call",
+ "ed"
+ ],
+ [
+ "c",
+ "alled"
+ ],
+ [
+ "▁print",
+ "f"
+ ],
+ [
+ "▁prin",
+ "tf"
+ ],
+ [
+ "▁",
+ "printf"
+ ],
+ [
+ "▁Pr",
+ "ince"
+ ],
+ [
+ "▁Prin",
+ "ce"
+ ],
+ [
+ "ни",
+ "те"
+ ],
+ [
+ "de",
+ "pend"
+ ],
+ [
+ "dep",
+ "end"
+ ],
+ [
+ "▁d",
+ "els"
+ ],
+ [
+ "▁de",
+ "ls"
+ ],
+ [
+ "▁del",
+ "s"
+ ],
+ [
+ "ne",
+ "ur"
+ ],
+ [
+ "n",
+ "eur"
+ ],
+ [
+ "▁recomm",
+ "ended"
+ ],
+ [
+ "▁recommend",
+ "ed"
+ ],
+ [
+ "▁found",
+ "ed"
+ ],
+ [
+ "▁mark",
+ "ets"
+ ],
+ [
+ "▁market",
+ "s"
+ ],
+ [
+ "▁destroy",
+ "ed"
+ ],
+ [
+ "▁ab",
+ "stract"
+ ],
+ [
+ "▁abs",
+ "tract"
+ ],
+ [
+ "▁",
+ "abstract"
+ ],
+ [
+ "▁s",
+ "erie"
+ ],
+ [
+ "▁se",
+ "rie"
+ ],
+ [
+ "▁ser",
+ "ie"
+ ],
+ [
+ "▁",
+ "serie"
+ ],
+ [
+ "▁D",
+ "un"
+ ],
+ [
+ "▁Du",
+ "n"
+ ],
+ [
+ "Te",
+ "rm"
+ ],
+ [
+ "T",
+ "erm"
+ ],
+ [
+ "▁p",
+ "ortion"
+ ],
+ [
+ "▁port",
+ "ion"
+ ],
+ [
+ "ad",
+ "apter"
+ ],
+ [
+ "is",
+ "set"
+ ],
+ [
+ "iss",
+ "et"
+ ],
+ [
+ "isse",
+ "t"
+ ],
+ [
+ "че",
+ "ски"
+ ],
+ [
+ "▁in",
+ "teger"
+ ],
+ [
+ "▁inte",
+ "ger"
+ ],
+ [
+ "▁",
+ "integer"
+ ],
+ [
+ "▁return",
+ "ing"
+ ],
+ [
+ "en",
+ "ties"
+ ],
+ [
+ "ent",
+ "ies"
+ ],
+ [
+ "enti",
+ "es"
+ ],
+ [
+ "▁F",
+ "air"
+ ],
+ [
+ "▁Fa",
+ "ir"
+ ],
+ [
+ "▁U",
+ "SB"
+ ],
+ [
+ "▁US",
+ "B"
+ ],
+ [
+ "▁",
+ "USB"
+ ],
+ [
+ "▁P",
+ "rice"
+ ],
+ [
+ "▁Pr",
+ "ice"
+ ],
+ [
+ "▁Pri",
+ "ce"
+ ],
+ [
+ "▁",
+ "Price"
+ ],
+ [
+ "ig",
+ "ate"
+ ],
+ [
+ "iga",
+ "te"
+ ],
+ [
+ "i",
+ "gate"
+ ],
+ [
+ "▁sett",
+ "led"
+ ],
+ [
+ "▁settle",
+ "d"
+ ],
+ [
+ "({",
+ "\\"
+ ],
+ [
+ "(",
+ "{\\"
+ ],
+ [
+ "ne",
+ "k"
+ ],
+ [
+ "n",
+ "ek"
+ ],
+ [
+ "▁the",
+ "rm"
+ ],
+ [
+ "▁th",
+ "erm"
+ ],
+ [
+ "▁ther",
+ "m"
+ ],
+ [
+ "▁c",
+ "ig"
+ ],
+ [
+ "▁ci",
+ "g"
+ ],
+ [
+ "án",
+ "y"
+ ],
+ [
+ "á",
+ "ny"
+ ],
+ [
+ "▁invest",
+ "igation"
+ ],
+ [
+ "▁investig",
+ "ation"
+ ],
+ [
+ "om",
+ "eter"
+ ],
+ [
+ "ome",
+ "ter"
+ ],
+ [
+ "omet",
+ "er"
+ ],
+ [
+ "SU",
+ "P"
+ ],
+ [
+ "S",
+ "UP"
+ ],
+ [
+ "So",
+ "me"
+ ],
+ [
+ "Som",
+ "e"
+ ],
+ [
+ "S",
+ "ome"
+ ],
+ [
+ "si",
+ "ng"
+ ],
+ [
+ "sin",
+ "g"
+ ],
+ [
+ "s",
+ "ing"
+ ],
+ [
+ "Con",
+ "stant"
+ ],
+ [
+ "Const",
+ "ant"
+ ],
+ [
+ "▁re",
+ "tail"
+ ],
+ [
+ "▁ret",
+ "ail"
+ ],
+ [
+ "ż",
+ "y"
+ ],
+ [
+ "▁dr",
+ "inking"
+ ],
+ [
+ "▁drink",
+ "ing"
+ ],
+ [
+ "▁In",
+ "vest"
+ ],
+ [
+ "▁Inv",
+ "est"
+ ],
+ [
+ "S",
+ "V"
+ ],
+ [
+ "ig",
+ "inal"
+ ],
+ [
+ "igin",
+ "al"
+ ],
+ [
+ "igi",
+ "nal"
+ ],
+ [
+ "▁B",
+ "ow"
+ ],
+ [
+ "▁Bo",
+ "w"
+ ],
+ [
+ "{{",
+ "\\"
+ ],
+ [
+ "{",
+ "{\\"
+ ],
+ [
+ "▁ass",
+ "istance"
+ ],
+ [
+ "▁assist",
+ "ance"
+ ],
+ [
+ "▁intel",
+ "lect"
+ ],
+ [
+ "IN",
+ "IT"
+ ],
+ [
+ "au",
+ "g"
+ ],
+ [
+ "a",
+ "ug"
+ ],
+ [
+ "▁Le",
+ "on"
+ ],
+ [
+ "▁Leo",
+ "n"
+ ],
+ [
+ "Su",
+ "r"
+ ],
+ [
+ "S",
+ "ur"
+ ],
+ [
+ "▁ad",
+ "mit"
+ ],
+ [
+ "▁adm",
+ "it"
+ ],
+ [
+ "▁Com",
+ "mand"
+ ],
+ [
+ "▁Comm",
+ "and"
+ ],
+ [
+ "▁",
+ "Command"
+ ],
+ [
+ "il",
+ "les"
+ ],
+ [
+ "ill",
+ "es"
+ ],
+ [
+ "ille",
+ "s"
+ ],
+ [
+ "ro",
+ "v"
+ ],
+ [
+ "r",
+ "ov"
+ ],
+ [
+ "▁o",
+ "h"
+ ],
+ [
+ "▁",
+ "oh"
+ ],
+ [
+ "▁n",
+ "ão"
+ ],
+ [
+ "▁mat",
+ "ching"
+ ],
+ [
+ "▁match",
+ "ing"
+ ],
+ [
+ "▁g",
+ "enu"
+ ],
+ [
+ "▁gen",
+ "u"
+ ],
+ [
+ "▁ge",
+ "nu"
+ ],
+ [
+ "▁O",
+ "x"
+ ],
+ [
+ "т",
+ "ся"
+ ],
+ [
+ "not",
+ "ation"
+ ],
+ [
+ "G",
+ "O"
+ ],
+ [
+ "▁N",
+ "ap"
+ ],
+ [
+ "▁Na",
+ "p"
+ ],
+ [
+ "▁ver",
+ "ify"
+ ],
+ [
+ "▁",
+ "verify"
+ ],
+ [
+ "▁aus",
+ "si"
+ ],
+ [
+ "▁auss",
+ "i"
+ ],
+ [
+ "Date",
+ "Time"
+ ],
+ [
+ "▁su",
+ "itable"
+ ],
+ [
+ "▁suit",
+ "able"
+ ],
+ [
+ "▁ind",
+ "icate"
+ ],
+ [
+ "▁indic",
+ "ate"
+ ],
+ [
+ "▁L",
+ "ive"
+ ],
+ [
+ "▁Li",
+ "ve"
+ ],
+ [
+ "▁Liv",
+ "e"
+ ],
+ [
+ "▁",
+ "Live"
+ ],
+ [
+ "Fe",
+ "ature"
+ ],
+ [
+ "▁tr",
+ "acks"
+ ],
+ [
+ "▁track",
+ "s"
+ ],
+ [
+ "▁tra",
+ "cks"
+ ],
+ [
+ "▁has",
+ "n"
+ ],
+ [
+ "▁ha",
+ "sn"
+ ],
+ [
+ "▁J",
+ "ava"
+ ],
+ [
+ "▁Ja",
+ "va"
+ ],
+ [
+ "▁",
+ "Java"
+ ],
+ [
+ "▁close",
+ "ly"
+ ],
+ [
+ "▁clos",
+ "ely"
+ ],
+ [
+ "▁D",
+ "ad"
+ ],
+ [
+ "▁Da",
+ "d"
+ ],
+ [
+ "ce",
+ "ive"
+ ],
+ [
+ "▁Mar",
+ "ket"
+ ],
+ [
+ "▁Mark",
+ "et"
+ ],
+ [
+ "ag",
+ "y"
+ ],
+ [
+ "a",
+ "gy"
+ ],
+ [
+ "▁\"",
+ "-"
+ ],
+ [
+ "aw",
+ "n"
+ ],
+ [
+ "a",
+ "wn"
+ ],
+ [
+ "st",
+ "ell"
+ ],
+ [
+ "ste",
+ "ll"
+ ],
+ [
+ "pt",
+ "on"
+ ],
+ [
+ "pto",
+ "n"
+ ],
+ [
+ "p",
+ "ton"
+ ],
+ [
+ "ze",
+ "it"
+ ],
+ [
+ "▁V",
+ "ector"
+ ],
+ [
+ "▁Ve",
+ "ctor"
+ ],
+ [
+ "▁Vec",
+ "tor"
+ ],
+ [
+ "▁",
+ "Vector"
+ ],
+ [
+ "▁M",
+ "AX"
+ ],
+ [
+ "▁MA",
+ "X"
+ ],
+ [
+ "▁",
+ "MAX"
+ ],
+ [
+ "▁F",
+ "ederal"
+ ],
+ [
+ "▁Feder",
+ "al"
+ ],
+ [
+ "▁Fed",
+ "eral"
+ ],
+ [
+ "wa",
+ "ll"
+ ],
+ [
+ "wal",
+ "l"
+ ],
+ [
+ "w",
+ "all"
+ ],
+ [
+ "▁J",
+ "en"
+ ],
+ [
+ "▁Je",
+ "n"
+ ],
+ [
+ "de",
+ "lay"
+ ],
+ [
+ "del",
+ "ay"
+ ],
+ [
+ "▁lim",
+ "its"
+ ],
+ [
+ "▁limit",
+ "s"
+ ],
+ [
+ "▁",
+ "limits"
+ ],
+ [
+ "▁Q",
+ "uest"
+ ],
+ [
+ "▁Qu",
+ "est"
+ ],
+ [
+ "▁Que",
+ "st"
+ ],
+ [
+ "▁",
+ "Quest"
+ ],
+ [
+ "C",
+ "am"
+ ],
+ [
+ "▁F",
+ "el"
+ ],
+ [
+ "▁Fe",
+ "l"
+ ],
+ [
+ "write",
+ "r"
+ ],
+ [
+ "wr",
+ "iter"
+ ],
+ [
+ "writ",
+ "er"
+ ],
+ [
+ "w",
+ "riter"
+ ],
+ [
+ "L",
+ "P"
+ ],
+ [
+ "▁m",
+ "oves"
+ ],
+ [
+ "▁mov",
+ "es"
+ ],
+ [
+ "▁move",
+ "s"
+ ],
+ [
+ "▁mo",
+ "ves"
+ ],
+ [
+ "▁Ex",
+ "ecut"
+ ],
+ [
+ "▁",
+ "Execut"
+ ],
+ [
+ "▁D",
+ "B"
+ ],
+ [
+ "▁",
+ "DB"
+ ],
+ [
+ "ok",
+ "er"
+ ],
+ [
+ "oke",
+ "r"
+ ],
+ [
+ "o",
+ "ker"
+ ],
+ [
+ "sc",
+ "ribe"
+ ],
+ [
+ "scri",
+ "be"
+ ],
+ [
+ "scr",
+ "ibe"
+ ],
+ [
+ "scrib",
+ "e"
+ ],
+ [
+ "el",
+ "ijk"
+ ],
+ [
+ "elij",
+ "k"
+ ],
+ [
+ "eli",
+ "jk"
+ ],
+ [
+ "Const",
+ "ants"
+ ],
+ [
+ "Constant",
+ "s"
+ ],
+ [
+ "Add",
+ "r"
+ ],
+ [
+ "Ad",
+ "dr"
+ ],
+ [
+ "▁}",
+ "}"
+ ],
+ [
+ "▁",
+ "}}"
+ ],
+ [
+ "▁ch",
+ "annels"
+ ],
+ [
+ "▁channel",
+ "s"
+ ],
+ [
+ "▁",
+ "channels"
+ ],
+ [
+ "i",
+ "y"
+ ],
+ [
+ "rior",
+ "ity"
+ ],
+ [
+ "▁tr",
+ "ading"
+ ],
+ [
+ "▁trad",
+ "ing"
+ ],
+ [
+ "▁tra",
+ "ding"
+ ],
+ [
+ "▁fac",
+ "ilities"
+ ],
+ [
+ "▁facil",
+ "ities"
+ ],
+ [
+ "▁P",
+ "ack"
+ ],
+ [
+ "▁Pa",
+ "ck"
+ ],
+ [
+ "▁Pac",
+ "k"
+ ],
+ [
+ "▁",
+ "Pack"
+ ],
+ [
+ "▁s",
+ "ys"
+ ],
+ [
+ "▁sy",
+ "s"
+ ],
+ [
+ "▁",
+ "sys"
+ ],
+ [
+ "▁m",
+ "eta"
+ ],
+ [
+ "▁me",
+ "ta"
+ ],
+ [
+ "▁met",
+ "a"
+ ],
+ [
+ "▁",
+ "meta"
+ ],
+ [
+ "▁est",
+ "imate"
+ ],
+ [
+ "▁estim",
+ "ate"
+ ],
+ [
+ "▁L",
+ "ater"
+ ],
+ [
+ "▁La",
+ "ter"
+ ],
+ [
+ "▁Lat",
+ "er"
+ ],
+ [
+ "▁Late",
+ "r"
+ ],
+ [
+ "iss",
+ "ue"
+ ],
+ [
+ "▁H",
+ "aving"
+ ],
+ [
+ "▁Ha",
+ "ving"
+ ],
+ [
+ "▁Hav",
+ "ing"
+ ],
+ [
+ "▁g",
+ "uest"
+ ],
+ [
+ "▁gu",
+ "est"
+ ],
+ [
+ "▁no",
+ "body"
+ ],
+ [
+ "▁nob",
+ "ody"
+ ],
+ [
+ "dep",
+ "th"
+ ],
+ [
+ "▁z",
+ "ostał"
+ ],
+ [
+ "пе",
+ "ра"
+ ],
+ [
+ "пер",
+ "а"
+ ],
+ [
+ ")}",
+ "\\"
+ ],
+ [
+ ")",
+ "}\\"
+ ],
+ [
+ "b",
+ "g"
+ ],
+ [
+ "▁Tw",
+ "itter"
+ ],
+ [
+ "▁dark",
+ "ness"
+ ],
+ [
+ "j",
+ "pg"
+ ],
+ [
+ "con",
+ "tr"
+ ],
+ [
+ "cont",
+ "r"
+ ],
+ [
+ "ker",
+ "nel"
+ ],
+ [
+ "kern",
+ "el"
+ ],
+ [
+ "k",
+ "ernel"
+ ],
+ [
+ "]",
+ "\\"
+ ],
+ [
+ "▁ext",
+ "end"
+ ],
+ [
+ "▁",
+ "extend"
+ ],
+ [
+ "ro",
+ "c"
+ ],
+ [
+ "r",
+ "oc"
+ ],
+ [
+ "NE",
+ "T"
+ ],
+ [
+ "N",
+ "ET"
+ ],
+ [
+ "MS",
+ "G"
+ ],
+ [
+ "M",
+ "SG"
+ ],
+ [
+ "▁b",
+ "urst"
+ ],
+ [
+ "▁bur",
+ "st"
+ ],
+ [
+ "▁re",
+ "pair"
+ ],
+ [
+ "▁rep",
+ "air"
+ ],
+ [
+ "▁f",
+ "etch"
+ ],
+ [
+ "▁fet",
+ "ch"
+ ],
+ [
+ "▁",
+ "fetch"
+ ],
+ [
+ "ie",
+ "g"
+ ],
+ [
+ "i",
+ "eg"
+ ],
+ [
+ "ú",
+ "s"
+ ],
+ [
+ "Sc",
+ "reen"
+ ],
+ [
+ "S",
+ "creen"
+ ],
+ [
+ "ble",
+ "m"
+ ],
+ [
+ "bl",
+ "em"
+ ],
+ [
+ "b",
+ "lem"
+ ],
+ [
+ "App",
+ "Compat"
+ ],
+ [
+ "▁ch",
+ "ap"
+ ],
+ [
+ "▁cha",
+ "p"
+ ],
+ [
+ "▁",
+ "chap"
+ ],
+ [
+ "EL",
+ "D"
+ ],
+ [
+ "E",
+ "LD"
+ ],
+ [
+ "▁P",
+ "enn"
+ ],
+ [
+ "▁Pe",
+ "nn"
+ ],
+ [
+ "▁Pen",
+ "n"
+ ],
+ [
+ "▁prom",
+ "ote"
+ ],
+ [
+ "▁promot",
+ "e"
+ ],
+ [
+ "▁U",
+ "kr"
+ ],
+ [
+ "ar",
+ "est"
+ ],
+ [
+ "are",
+ "st"
+ ],
+ [
+ "ares",
+ "t"
+ ],
+ [
+ "a",
+ "rest"
+ ],
+ [
+ "▁s",
+ "amples"
+ ],
+ [
+ "▁sam",
+ "ples"
+ ],
+ [
+ "▁sample",
+ "s"
+ ],
+ [
+ "▁",
+ "samples"
+ ],
+ [
+ "▁G",
+ "reek"
+ ],
+ [
+ "▁Gre",
+ "ek"
+ ],
+ [
+ "▁Gree",
+ "k"
+ ],
+ [
+ "▁con",
+ "stru"
+ ],
+ [
+ "▁const",
+ "ru"
+ ],
+ [
+ "▁constr",
+ "u"
+ ],
+ [
+ "▁un",
+ "iverse"
+ ],
+ [
+ "▁univers",
+ "e"
+ ],
+ [
+ "elij",
+ "ke"
+ ],
+ [
+ "elijk",
+ "e"
+ ],
+ [
+ "▁pre",
+ "ferred"
+ ],
+ [
+ "▁prefer",
+ "red"
+ ],
+ [
+ "▁Д",
+ "е"
+ ],
+ [
+ "▁I",
+ "ra"
+ ],
+ [
+ "▁Ir",
+ "a"
+ ],
+ [
+ "▁d",
+ "ow"
+ ],
+ [
+ "▁do",
+ "w"
+ ],
+ [
+ "ag",
+ "ues"
+ ],
+ [
+ "ague",
+ "s"
+ ],
+ [
+ "agu",
+ "es"
+ ],
+ [
+ "HE",
+ "RE"
+ ],
+ [
+ "HER",
+ "E"
+ ],
+ [
+ "H",
+ "ERE"
+ ],
+ [
+ "▁exper",
+ "ts"
+ ],
+ [
+ "▁exp",
+ "erts"
+ ],
+ [
+ "▁expert",
+ "s"
+ ],
+ [
+ "Pro",
+ "tocol"
+ ],
+ [
+ "Proto",
+ "col"
+ ],
+ [
+ "PI",
+ "O"
+ ],
+ [
+ "P",
+ "IO"
+ ],
+ [
+ "▁n",
+ "az"
+ ],
+ [
+ "▁na",
+ "z"
+ ],
+ [
+ "▁K",
+ "h"
+ ],
+ [
+ "hö",
+ "r"
+ ],
+ [
+ "h",
+ "ör"
+ ],
+ [
+ "▁dist",
+ "ingu"
+ ],
+ [
+ "▁B",
+ "Y"
+ ],
+ [
+ "▁",
+ "BY"
+ ],
+ [
+ "▁se",
+ "ine"
+ ],
+ [
+ "▁sein",
+ "e"
+ ],
+ [
+ "▁sei",
+ "ne"
+ ],
+ [
+ "ep",
+ "ing"
+ ],
+ [
+ "e",
+ "ping"
+ ],
+ [
+ "▁fair",
+ "ly"
+ ],
+ [
+ "▁Me",
+ "an"
+ ],
+ [
+ "ix",
+ "er"
+ ],
+ [
+ "in",
+ "si"
+ ],
+ [
+ "ins",
+ "i"
+ ],
+ [
+ "▁author",
+ "s"
+ ],
+ [
+ "▁auth",
+ "ors"
+ ],
+ [
+ "**",
+ "."
+ ],
+ [
+ "*",
+ "*."
+ ],
+ [
+ "A",
+ "I"
+ ],
+ [
+ "▁ed",
+ "ges"
+ ],
+ [
+ "▁edge",
+ "s"
+ ],
+ [
+ "▁",
+ "edges"
+ ],
+ [
+ "▁shoot",
+ "ing"
+ ],
+ [
+ "Ad",
+ "min"
+ ],
+ [
+ "▁m",
+ "aps"
+ ],
+ [
+ "▁map",
+ "s"
+ ],
+ [
+ "▁ma",
+ "ps"
+ ],
+ [
+ "▁",
+ "maps"
+ ],
+ [
+ "ch",
+ "ant"
+ ],
+ [
+ "chan",
+ "t"
+ ],
+ [
+ "cha",
+ "nt"
+ ],
+ [
+ "▁CO",
+ "VID"
+ ],
+ [
+ "▁link",
+ "ed"
+ ],
+ [
+ "▁lin",
+ "ked"
+ ],
+ [
+ "▁",
+ "linked"
+ ],
+ [
+ "▁s",
+ "ke"
+ ],
+ [
+ "▁sk",
+ "e"
+ ],
+ [
+ "▁",
+ "ske"
+ ],
+ [
+ "▁power",
+ "s"
+ ],
+ [
+ "▁pow",
+ "ers"
+ ],
+ [
+ "á",
+ "d"
+ ],
+ [
+ "▁stom",
+ "ach"
+ ],
+ [
+ "▁us",
+ "age"
+ ],
+ [
+ "▁",
+ "usage"
+ ],
+ [
+ "▁def",
+ "end"
+ ],
+ [
+ "▁defe",
+ "nd"
+ ],
+ [
+ "▁s",
+ "ustain"
+ ],
+ [
+ "▁sus",
+ "tain"
+ ],
+ [
+ "▁sust",
+ "ain"
+ ],
+ [
+ "▁up",
+ "dates"
+ ],
+ [
+ "▁update",
+ "s"
+ ],
+ [
+ "▁as",
+ "sign"
+ ],
+ [
+ "▁ass",
+ "ign"
+ ],
+ [
+ "▁",
+ "assign"
+ ],
+ [
+ "H",
+ "L"
+ ],
+ [
+ "▁S",
+ "ea"
+ ],
+ [
+ "▁Se",
+ "a"
+ ],
+ [
+ "▁dis",
+ "cipl"
+ ],
+ [
+ "V",
+ "ideo"
+ ],
+ [
+ "▁Ch",
+ "ief"
+ ],
+ [
+ "▁Chi",
+ "ef"
+ ],
+ [
+ "▁b",
+ "unch"
+ ],
+ [
+ "▁Ob",
+ "ama"
+ ],
+ [
+ "ni",
+ "s"
+ ],
+ [
+ "n",
+ "is"
+ ],
+ [
+ "vo",
+ "r"
+ ],
+ [
+ "v",
+ "or"
+ ],
+ [
+ "▁ag",
+ "ents"
+ ],
+ [
+ "▁agent",
+ "s"
+ ],
+ [
+ "ca",
+ "s"
+ ],
+ [
+ "c",
+ "as"
+ ],
+ [
+ "ch",
+ "ter"
+ ],
+ [
+ "cht",
+ "er"
+ ],
+ [
+ "chte",
+ "r"
+ ],
+ [
+ "▁gl",
+ "anced"
+ ],
+ [
+ "▁glance",
+ "d"
+ ],
+ [
+ "support",
+ "ed"
+ ],
+ [
+ "supp",
+ "orted"
+ ],
+ [
+ "▁Cons",
+ "ider"
+ ],
+ [
+ "▁Every",
+ "one"
+ ],
+ [
+ "▁l",
+ "ect"
+ ],
+ [
+ "▁le",
+ "ct"
+ ],
+ [
+ "▁",
+ "lect"
+ ],
+ [
+ "▁St",
+ "one"
+ ],
+ [
+ "▁Sto",
+ "ne"
+ ],
+ [
+ "▁J",
+ "am"
+ ],
+ [
+ "▁Ja",
+ "m"
+ ],
+ [
+ "og",
+ "ram"
+ ],
+ [
+ "o",
+ "gram"
+ ],
+ [
+ "form",
+ "ance"
+ ],
+ [
+ "▁\\",
+ "\""
+ ],
+ [
+ "▁",
+ "\\\""
+ ],
+ [
+ "▁p",
+ "atch"
+ ],
+ [
+ "▁pat",
+ "ch"
+ ],
+ [
+ "▁",
+ "patch"
+ ],
+ [
+ "▁v",
+ "it"
+ ],
+ [
+ "▁vi",
+ "t"
+ ],
+ [
+ "Po",
+ "wer"
+ ],
+ [
+ "P",
+ "ower"
+ ],
+ [
+ "▁hard",
+ "er"
+ ],
+ [
+ "▁har",
+ "der"
+ ],
+ [
+ "An",
+ "al"
+ ],
+ [
+ "A",
+ "nal"
+ ],
+ [
+ "▁des",
+ "ired"
+ ],
+ [
+ "▁desire",
+ "d"
+ ],
+ [
+ "▁j",
+ "ug"
+ ],
+ [
+ "▁ju",
+ "g"
+ ],
+ [
+ "▁support",
+ "ing"
+ ],
+ [
+ "D",
+ "U"
+ ],
+ [
+ "]]",
+ ","
+ ],
+ [
+ "]",
+ "],"
+ ],
+ [
+ "▁Ad",
+ "ministr"
+ ],
+ [
+ "▁Admin",
+ "istr"
+ ],
+ [
+ "uck",
+ "y"
+ ],
+ [
+ "uc",
+ "ky"
+ ],
+ [
+ "▁cont",
+ "roller"
+ ],
+ [
+ "▁control",
+ "ler"
+ ],
+ [
+ "▁",
+ "controller"
+ ],
+ [
+ "▁iss",
+ "ued"
+ ],
+ [
+ "▁issue",
+ "d"
+ ],
+ [
+ "▁S",
+ "in"
+ ],
+ [
+ "▁Si",
+ "n"
+ ],
+ [
+ "▁aff",
+ "ili"
+ ],
+ [
+ "▁part",
+ "ners"
+ ],
+ [
+ "▁partner",
+ "s"
+ ],
+ [
+ "cd",
+ "ots"
+ ],
+ [
+ "cdot",
+ "s"
+ ],
+ [
+ "c",
+ "dots"
+ ],
+ [
+ "ct",
+ "ic"
+ ],
+ [
+ "C",
+ "ar"
+ ],
+ [
+ "▁N",
+ "Y"
+ ],
+ [
+ "▁",
+ "NY"
+ ],
+ [
+ "▁p",
+ "riority"
+ ],
+ [
+ "▁prior",
+ "ity"
+ ],
+ [
+ "▁",
+ "priority"
+ ],
+ [
+ "or",
+ "iginal"
+ ],
+ [
+ "orig",
+ "inal"
+ ],
+ [
+ "origin",
+ "al"
+ ],
+ [
+ "S",
+ "ql"
+ ],
+ [
+ "▁decl",
+ "ared"
+ ],
+ [
+ "▁declare",
+ "d"
+ ],
+ [
+ "▁declar",
+ "ed"
+ ],
+ [
+ "▁Hot",
+ "el"
+ ],
+ [
+ "▁b",
+ "rowser"
+ ],
+ [
+ "▁brow",
+ "ser"
+ ],
+ [
+ "▁brows",
+ "er"
+ ],
+ [
+ "▁",
+ "browser"
+ ],
+ [
+ "▁gr",
+ "ande"
+ ],
+ [
+ "▁grand",
+ "e"
+ ],
+ [
+ "▁gran",
+ "de"
+ ],
+ [
+ "▁gra",
+ "nde"
+ ],
+ [
+ "}^",
+ "\\"
+ ],
+ [
+ "}",
+ "^\\"
+ ],
+ [
+ "bo",
+ "w"
+ ],
+ [
+ "b",
+ "ow"
+ ],
+ [
+ "▁accom",
+ "mod"
+ ],
+ [
+ "Direct",
+ "ory"
+ ],
+ [
+ "▁suff",
+ "ering"
+ ],
+ [
+ "▁suffer",
+ "ing"
+ ],
+ [
+ "▁log",
+ "ger"
+ ],
+ [
+ "▁",
+ "logger"
+ ],
+ [
+ "▁break",
+ "fast"
+ ],
+ [
+ "ul",
+ "i"
+ ],
+ [
+ "u",
+ "li"
+ ],
+ [
+ "▁b",
+ "oot"
+ ],
+ [
+ "▁bo",
+ "ot"
+ ],
+ [
+ "▁",
+ "boot"
+ ],
+ [
+ "▁contribut",
+ "ion"
+ ],
+ [
+ "NE",
+ "SS"
+ ],
+ [
+ "▁T",
+ "en"
+ ],
+ [
+ "▁Te",
+ "n"
+ ],
+ [
+ "▁",
+ "Ten"
+ ],
+ [
+ "sem",
+ "ble"
+ ],
+ [
+ "semb",
+ "le"
+ ],
+ [
+ "sembl",
+ "e"
+ ],
+ [
+ "▁h",
+ "ousing"
+ ],
+ [
+ "▁hous",
+ "ing"
+ ],
+ [
+ "▁ho",
+ "using"
+ ],
+ [
+ "R",
+ "aw"
+ ],
+ [
+ "AN",
+ "CE"
+ ],
+ [
+ "▁П",
+ "ри"
+ ],
+ [
+ "▁b",
+ "rit"
+ ],
+ [
+ "▁br",
+ "it"
+ ],
+ [
+ "▁",
+ "brit"
+ ],
+ [
+ "es",
+ "sa"
+ ],
+ [
+ "ess",
+ "a"
+ ],
+ [
+ "in",
+ "son"
+ ],
+ [
+ "ins",
+ "on"
+ ],
+ [
+ "▁B",
+ "all"
+ ],
+ [
+ "▁Ba",
+ "ll"
+ ],
+ [
+ "▁Bal",
+ "l"
+ ],
+ [
+ "en",
+ "tes"
+ ],
+ [
+ "ent",
+ "es"
+ ],
+ [
+ "ente",
+ "s"
+ ],
+ [
+ "▁B",
+ "ra"
+ ],
+ [
+ "▁Br",
+ "a"
+ ],
+ [
+ "sc",
+ "ore"
+ ],
+ [
+ "s",
+ "core"
+ ],
+ [
+ "GE",
+ "R"
+ ],
+ [
+ "G",
+ "ER"
+ ],
+ [
+ "ro",
+ "ute"
+ ],
+ [
+ "rou",
+ "te"
+ ],
+ [
+ "r",
+ "oute"
+ ],
+ [
+ "ap",
+ "sed"
+ ],
+ [
+ "aps",
+ "ed"
+ ],
+ [
+ "apse",
+ "d"
+ ],
+ [
+ "ро",
+ "й"
+ ],
+ [
+ "di",
+ "ff"
+ ],
+ [
+ "d",
+ "iff"
+ ],
+ [
+ "▁broad",
+ "cast"
+ ],
+ [
+ "▁t",
+ "ar"
+ ],
+ [
+ "▁ta",
+ "r"
+ ],
+ [
+ "▁",
+ "tar"
+ ],
+ [
+ "▁de",
+ "light"
+ ],
+ [
+ "▁del",
+ "ight"
+ ],
+ [
+ ")",
+ "?"
+ ],
+ [
+ "ch",
+ "ester"
+ ],
+ [
+ "che",
+ "ster"
+ ],
+ [
+ "ches",
+ "ter"
+ ],
+ [
+ "Pl",
+ "atform"
+ ],
+ [
+ "▁emer",
+ "gency"
+ ],
+ [
+ "▁c",
+ "es"
+ ],
+ [
+ "▁ce",
+ "s"
+ ],
+ [
+ "▁",
+ "ces"
+ ],
+ [
+ "ner",
+ "ship"
+ ],
+ [
+ "ners",
+ "hip"
+ ],
+ [
+ "n",
+ "ership"
+ ],
+ [
+ "▁sit",
+ "uations"
+ ],
+ [
+ "▁situ",
+ "ations"
+ ],
+ [
+ "▁situation",
+ "s"
+ ],
+ [
+ "▁famil",
+ "jen"
+ ],
+ [
+ "▁G",
+ "eb"
+ ],
+ [
+ "▁Ge",
+ "b"
+ ],
+ [
+ "en",
+ "ta"
+ ],
+ [
+ "ent",
+ "a"
+ ],
+ [
+ "ú",
+ "blic"
+ ],
+ [
+ "▁P",
+ "lace"
+ ],
+ [
+ "▁Pl",
+ "ace"
+ ],
+ [
+ "▁",
+ "Place"
+ ],
+ [
+ "IL",
+ "L"
+ ],
+ [
+ "I",
+ "LL"
+ ],
+ [
+ "▁m",
+ "arch"
+ ],
+ [
+ "▁mar",
+ "ch"
+ ],
+ [
+ "▁fundament",
+ "al"
+ ],
+ [
+ "att",
+ "ributes"
+ ],
+ [
+ "attribute",
+ "s"
+ ],
+ [
+ "кт",
+ "и"
+ ],
+ [
+ "к",
+ "ти"
+ ],
+ [
+ "▁F",
+ "u"
+ ],
+ [
+ "F",
+ "D"
+ ],
+ [
+ "▁ра",
+ "с"
+ ],
+ [
+ "▁academ",
+ "ic"
+ ],
+ [
+ "pr",
+ "es"
+ ],
+ [
+ "pre",
+ "s"
+ ],
+ [
+ "p",
+ "res"
+ ],
+ [
+ "▁r",
+ "ising"
+ ],
+ [
+ "▁ri",
+ "sing"
+ ],
+ [
+ "▁ris",
+ "ing"
+ ],
+ [
+ "▁B",
+ "raz"
+ ],
+ [
+ "▁Br",
+ "az"
+ ],
+ [
+ "▁Bra",
+ "z"
+ ],
+ [
+ "▁rece",
+ "iving"
+ ],
+ [
+ "WAR",
+ "N"
+ ],
+ [
+ "▁jud",
+ "g"
+ ],
+ [
+ "▁necess",
+ "arily"
+ ],
+ [
+ "]",
+ "="
+ ],
+ [
+ "▁deep",
+ "ly"
+ ],
+ [
+ "▁g",
+ "ray"
+ ],
+ [
+ "▁gr",
+ "ay"
+ ],
+ [
+ "▁gra",
+ "y"
+ ],
+ [
+ "▁",
+ "gray"
+ ],
+ [
+ "He",
+ "aders"
+ ],
+ [
+ "Head",
+ "ers"
+ ],
+ [
+ "Header",
+ "s"
+ ],
+ [
+ "▁co",
+ "al"
+ ],
+ [
+ "\\",
+ "{"
+ ],
+ [
+ "Mu",
+ "t"
+ ],
+ [
+ "M",
+ "ut"
+ ],
+ [
+ "ba",
+ "ch"
+ ],
+ [
+ "b",
+ "ach"
+ ],
+ [
+ "▁pro",
+ "fit"
+ ],
+ [
+ "▁prof",
+ "it"
+ ],
+ [
+ "▁",
+ "profit"
+ ],
+ [
+ "во",
+ "го"
+ ],
+ [
+ "в",
+ "ого"
+ ],
+ [
+ "ig",
+ "s"
+ ],
+ [
+ "i",
+ "gs"
+ ],
+ [
+ "og",
+ "rap"
+ ],
+ [
+ "\";",
+ "\r"
+ ],
+ [
+ "\"",
+ ";\r"
+ ],
+ [
+ "▁adv",
+ "oc"
+ ],
+ [
+ "Gener",
+ "ated"
+ ],
+ [
+ "Generate",
+ "d"
+ ],
+ [
+ "ме",
+ "ри"
+ ],
+ [
+ "мер",
+ "и"
+ ],
+ [
+ "▁C",
+ "ond"
+ ],
+ [
+ "▁Con",
+ "d"
+ ],
+ [
+ "▁Co",
+ "nd"
+ ],
+ [
+ "▁",
+ "Cond"
+ ],
+ [
+ "▁ag",
+ "ric"
+ ],
+ [
+ "BA",
+ "SE"
+ ],
+ [
+ "B",
+ "ASE"
+ ],
+ [
+ "▁arr",
+ "ang"
+ ],
+ [
+ "▁flow",
+ "ers"
+ ],
+ [
+ "▁flower",
+ "s"
+ ],
+ [
+ "i",
+ "w"
+ ],
+ [
+ "▁]",
+ ";"
+ ],
+ [
+ "▁",
+ "];"
+ ],
+ [
+ "▁во",
+ "й"
+ ],
+ [
+ "▁",
+ "вой"
+ ],
+ [
+ "ume",
+ "rate"
+ ],
+ [
+ "umer",
+ "ate"
+ ],
+ [
+ "▁i",
+ "hr"
+ ],
+ [
+ "▁ih",
+ "r"
+ ],
+ [
+ "▁п",
+ "ар"
+ ],
+ [
+ "▁па",
+ "р"
+ ],
+ [
+ "▁",
+ "пар"
+ ],
+ [
+ "▁m",
+ "ont"
+ ],
+ [
+ "▁mon",
+ "t"
+ ],
+ [
+ "▁mo",
+ "nt"
+ ],
+ [
+ "▁",
+ "mont"
+ ],
+ [
+ "wide",
+ "hat"
+ ],
+ [
+ "m",
+ "g"
+ ],
+ [
+ "▁b",
+ "tn"
+ ],
+ [
+ "▁bt",
+ "n"
+ ],
+ [
+ "▁",
+ "btn"
+ ],
+ [
+ "▁b",
+ "esk"
+ ],
+ [
+ "▁be",
+ "sk"
+ ],
+ [
+ "▁bes",
+ "k"
+ ],
+ [
+ "▁act",
+ "s"
+ ],
+ [
+ "▁ac",
+ "ts"
+ ],
+ [
+ "▁",
+ "acts"
+ ],
+ [
+ "ó",
+ "s"
+ ],
+ [
+ "~~",
+ "~~"
+ ],
+ [
+ "▁cur",
+ "ve"
+ ],
+ [
+ "▁curv",
+ "e"
+ ],
+ [
+ "l",
+ "anguage"
+ ],
+ [
+ "▁TR",
+ "UE"
+ ],
+ [
+ "▁",
+ "TRUE"
+ ],
+ [
+ "▁cle",
+ "aning"
+ ],
+ [
+ "▁clean",
+ "ing"
+ ],
+ [
+ "Mat",
+ "h"
+ ],
+ [
+ "Ma",
+ "th"
+ ],
+ [
+ "M",
+ "ath"
+ ],
+ [
+ "▁reg",
+ "ional"
+ ],
+ [
+ "▁region",
+ "al"
+ ],
+ [
+ "▁est",
+ "imated"
+ ],
+ [
+ "▁estim",
+ "ated"
+ ],
+ [
+ "▁estimate",
+ "d"
+ ],
+ [
+ "ar",
+ "ity"
+ ],
+ [
+ "ari",
+ "ty"
+ ],
+ [
+ "ier",
+ "ung"
+ ],
+ [
+ "/",
+ "{"
+ ],
+ [
+ "jan",
+ "go"
+ ],
+ [
+ "j",
+ "ango"
+ ],
+ [
+ "$",
+ "_"
+ ],
+ [
+ "▁th",
+ "rew"
+ ],
+ [
+ "▁thr",
+ "ew"
+ ],
+ [
+ "r",
+ "q"
+ ],
+ [
+ "co",
+ "p"
+ ],
+ [
+ "c",
+ "op"
+ ],
+ [
+ "ner",
+ "gy"
+ ],
+ [
+ "▁Acc",
+ "ount"
+ ],
+ [
+ "▁Ac",
+ "count"
+ ],
+ [
+ "▁",
+ "Account"
+ ],
+ [
+ "pa",
+ "l"
+ ],
+ [
+ "p",
+ "al"
+ ],
+ [
+ "▁N",
+ "ic"
+ ],
+ [
+ "▁Ni",
+ "c"
+ ],
+ [
+ "])",
+ ")"
+ ],
+ [
+ "]",
+ "))"
+ ],
+ [
+ "▁aw",
+ "esome"
+ ],
+ [
+ "▁L",
+ "oad"
+ ],
+ [
+ "▁Lo",
+ "ad"
+ ],
+ [
+ "▁",
+ "Load"
+ ],
+ [
+ "un",
+ "nel"
+ ],
+ [
+ "unn",
+ "el"
+ ],
+ [
+ "▁r",
+ "ows"
+ ],
+ [
+ "▁ro",
+ "ws"
+ ],
+ [
+ "▁row",
+ "s"
+ ],
+ [
+ "▁",
+ "rows"
+ ],
+ [
+ "▁for",
+ "each"
+ ],
+ [
+ "▁fore",
+ "ach"
+ ],
+ [
+ "▁fo",
+ "reach"
+ ],
+ [
+ "▁",
+ "foreach"
+ ],
+ [
+ "▁P",
+ "od"
+ ],
+ [
+ "▁Po",
+ "d"
+ ],
+ [
+ "▁",
+ "Pod"
+ ],
+ [
+ "▁E",
+ "N"
+ ],
+ [
+ "▁",
+ "EN"
+ ],
+ [
+ "▁.",
+ "="
+ ],
+ [
+ "ua",
+ "te"
+ ],
+ [
+ "u",
+ "ate"
+ ],
+ [
+ "frastr",
+ "ucture"
+ ],
+ [
+ "▁W",
+ "atch"
+ ],
+ [
+ "▁Wat",
+ "ch"
+ ],
+ [
+ "▁",
+ "Watch"
+ ],
+ [
+ "St",
+ "and"
+ ],
+ [
+ "▁r",
+ "outine"
+ ],
+ [
+ "▁rout",
+ "ine"
+ ],
+ [
+ "▁p",
+ "ic"
+ ],
+ [
+ "▁pi",
+ "c"
+ ],
+ [
+ "▁",
+ "pic"
+ ],
+ [
+ "hel",
+ "per"
+ ],
+ [
+ "help",
+ "er"
+ ],
+ [
+ "▁hor",
+ "ses"
+ ],
+ [
+ "▁horse",
+ "s"
+ ],
+ [
+ "▁hors",
+ "es"
+ ],
+ [
+ "▁requ",
+ "ested"
+ ],
+ [
+ "▁request",
+ "ed"
+ ],
+ [
+ "▁-",
+ "--"
+ ],
+ [
+ "▁--",
+ "-"
+ ],
+ [
+ "▁",
+ "---"
+ ],
+ [
+ "bor",
+ "der"
+ ],
+ [
+ "b",
+ "order"
+ ],
+ [
+ "▁lif",
+ "ted"
+ ],
+ [
+ "▁lift",
+ "ed"
+ ],
+ [
+ "▁P",
+ "ed"
+ ],
+ [
+ "▁Pe",
+ "d"
+ ],
+ [
+ "Im",
+ "port"
+ ],
+ [
+ "Imp",
+ "ort"
+ ],
+ [
+ "љ",
+ "е"
+ ],
+ [
+ "▁Л",
+ "и"
+ ],
+ [
+ "▁m",
+ "yst"
+ ],
+ [
+ "▁my",
+ "st"
+ ],
+ [
+ "TH",
+ "ER"
+ ],
+ [
+ "THE",
+ "R"
+ ],
+ [
+ "T",
+ "HER"
+ ],
+ [
+ "▁A",
+ "C"
+ ],
+ [
+ "▁",
+ "AC"
+ ],
+ [
+ "Pro",
+ "xy"
+ ],
+ [
+ "Pr",
+ "oxy"
+ ],
+ [
+ "pro",
+ "v"
+ ],
+ [
+ "pr",
+ "ov"
+ ],
+ [
+ "p",
+ "rov"
+ ],
+ [
+ "▁N",
+ "ik"
+ ],
+ [
+ "▁Ni",
+ "k"
+ ],
+ [
+ "he",
+ "mat"
+ ],
+ [
+ "hem",
+ "at"
+ ],
+ [
+ "h",
+ "emat"
+ ],
+ [
+ "он",
+ "аль"
+ ],
+ [
+ "она",
+ "ль"
+ ],
+ [
+ "о",
+ "наль"
+ ],
+ [
+ "▁\"",
+ "."
+ ],
+ [
+ "▁",
+ "\"."
+ ],
+ [
+ "ul",
+ "ui"
+ ],
+ [
+ "ulu",
+ "i"
+ ],
+ [
+ "▁impro",
+ "ved"
+ ],
+ [
+ "▁improve",
+ "d"
+ ],
+ [
+ "ie",
+ "ren"
+ ],
+ [
+ "ier",
+ "en"
+ ],
+ [
+ "iere",
+ "n"
+ ],
+ [
+ "i",
+ "eren"
+ ],
+ [
+ "oc",
+ "olate"
+ ],
+ [
+ "ocol",
+ "ate"
+ ],
+ [
+ "oco",
+ "late"
+ ],
+ [
+ "Sc",
+ "he"
+ ],
+ [
+ "Sch",
+ "e"
+ ],
+ [
+ "S",
+ "che"
+ ],
+ [
+ "un",
+ "ic"
+ ],
+ [
+ "uni",
+ "c"
+ ],
+ [
+ "u",
+ "nic"
+ ],
+ [
+ "▁Profess",
+ "or"
+ ],
+ [
+ "ie",
+ "ler"
+ ],
+ [
+ "iel",
+ "er"
+ ],
+ [
+ "iele",
+ "r"
+ ],
+ [
+ "i",
+ "eler"
+ ],
+ [
+ "▁d",
+ "uration"
+ ],
+ [
+ "▁dur",
+ "ation"
+ ],
+ [
+ "▁",
+ "duration"
+ ],
+ [
+ "▁time",
+ "out"
+ ],
+ [
+ "▁",
+ "timeout"
+ ],
+ [
+ "ho",
+ "m"
+ ],
+ [
+ "h",
+ "om"
+ ],
+ [
+ "▁l",
+ "ux"
+ ],
+ [
+ "▁lu",
+ "x"
+ ],
+ [
+ "▁t",
+ "rab"
+ ],
+ [
+ "▁tr",
+ "ab"
+ ],
+ [
+ "▁tra",
+ "b"
+ ],
+ [
+ "it",
+ "ary"
+ ],
+ [
+ "ita",
+ "ry"
+ ],
+ [
+ "itar",
+ "y"
+ ],
+ [
+ "њ",
+ "е"
+ ],
+ [
+ "▁insp",
+ "ired"
+ ],
+ [
+ "▁inspir",
+ "ed"
+ ],
+ [
+ "▁inspire",
+ "d"
+ ],
+ [
+ "})",
+ "\\"
+ ],
+ [
+ "}",
+ ")\\"
+ ],
+ [
+ "is",
+ "ely"
+ ],
+ [
+ "ise",
+ "ly"
+ ],
+ [
+ "ial",
+ "s"
+ ],
+ [
+ "ia",
+ "ls"
+ ],
+ [
+ "i",
+ "als"
+ ],
+ [
+ "▁V",
+ "or"
+ ],
+ [
+ "▁Vo",
+ "r"
+ ],
+ [
+ "▁enh",
+ "ance"
+ ],
+ [
+ "▁l",
+ "ucky"
+ ],
+ [
+ "▁luck",
+ "y"
+ ],
+ [
+ "▁luc",
+ "ky"
+ ],
+ [
+ "W",
+ "orld"
+ ],
+ [
+ "el",
+ "o"
+ ],
+ [
+ "e",
+ "lo"
+ ],
+ [
+ "if",
+ "iers"
+ ],
+ [
+ "ifier",
+ "s"
+ ],
+ [
+ "ifi",
+ "ers"
+ ],
+ [
+ "▁f",
+ "acing"
+ ],
+ [
+ "▁fac",
+ "ing"
+ ],
+ [
+ "▁fa",
+ "cing"
+ ],
+ [
+ "▁appreci",
+ "ate"
+ ],
+ [
+ "▁",
+ "être"
+ ],
+ [
+ "▁ben",
+ "ch"
+ ],
+ [
+ "▁",
+ "bench"
+ ],
+ [
+ "at",
+ "ted"
+ ],
+ [
+ "att",
+ "ed"
+ ],
+ [
+ "atte",
+ "d"
+ ],
+ [
+ "gen",
+ "ce"
+ ],
+ [
+ "g",
+ "ence"
+ ],
+ [
+ "c",
+ "ourse"
+ ],
+ [
+ "▁t",
+ "ub"
+ ],
+ [
+ "▁tu",
+ "b"
+ ],
+ [
+ "▁l",
+ "ors"
+ ],
+ [
+ "▁lo",
+ "rs"
+ ],
+ [
+ "▁mis",
+ "take"
+ ],
+ [
+ "▁mist",
+ "ake"
+ ],
+ [
+ "no",
+ "m"
+ ],
+ [
+ "n",
+ "om"
+ ],
+ [
+ "▁p",
+ "aus"
+ ],
+ [
+ "▁pa",
+ "us"
+ ],
+ [
+ "▁\"",
+ "\";"
+ ],
+ [
+ "▁\"\"",
+ ";"
+ ],
+ [
+ "▁su",
+ "bs"
+ ],
+ [
+ "▁sub",
+ "s"
+ ],
+ [
+ "▁st",
+ "ato"
+ ],
+ [
+ "▁stat",
+ "o"
+ ],
+ [
+ "▁sta",
+ "to"
+ ],
+ [
+ "$",
+ ")"
+ ],
+ [
+ "▁g",
+ "ay"
+ ],
+ [
+ "▁ga",
+ "y"
+ ],
+ [
+ "or",
+ "ry"
+ ],
+ [
+ "orr",
+ "y"
+ ],
+ [
+ "▁veh",
+ "icles"
+ ],
+ [
+ "▁vehicle",
+ "s"
+ ],
+ [
+ "▁br",
+ "ill"
+ ],
+ [
+ "ma",
+ "y"
+ ],
+ [
+ "m",
+ "ay"
+ ],
+ [
+ "re",
+ "sp"
+ ],
+ [
+ "res",
+ "p"
+ ],
+ [
+ "r",
+ "esp"
+ ],
+ [
+ "▁w",
+ "ore"
+ ],
+ [
+ "▁wor",
+ "e"
+ ],
+ [
+ "▁wo",
+ "re"
+ ],
+ [
+ "j",
+ "ą"
+ ],
+ [
+ "b",
+ "p"
+ ],
+ [
+ "on",
+ "el"
+ ],
+ [
+ "one",
+ "l"
+ ],
+ [
+ "o",
+ "nel"
+ ],
+ [
+ "▁C",
+ "R"
+ ],
+ [
+ "▁",
+ "CR"
+ ],
+ [
+ "▁di",
+ "agn"
+ ],
+ [
+ "▁dia",
+ "gn"
+ ],
+ [
+ "math",
+ "sf"
+ ],
+ [
+ "▁hol",
+ "iday"
+ ],
+ [
+ "▁achie",
+ "ved"
+ ],
+ [
+ "▁achieve",
+ "d"
+ ],
+ [
+ "▁{",
+ "'"
+ ],
+ [
+ "▁",
+ "{'"
+ ],
+ [
+ "▁Re",
+ "source"
+ ],
+ [
+ "▁Res",
+ "ource"
+ ],
+ [
+ "▁",
+ "Resource"
+ ],
+ [
+ "▁h",
+ "i"
+ ],
+ [
+ "▁",
+ "hi"
+ ],
+ [
+ "▁b",
+ "ra"
+ ],
+ [
+ "▁br",
+ "a"
+ ],
+ [
+ "▁",
+ "bra"
+ ],
+ [
+ "▁CON",
+ "DITION"
+ ],
+ [
+ "ct",
+ "r"
+ ],
+ [
+ "c",
+ "tr"
+ ],
+ [
+ "▁W",
+ "rite"
+ ],
+ [
+ "▁Writ",
+ "e"
+ ],
+ [
+ "▁Wr",
+ "ite"
+ ],
+ [
+ "▁",
+ "Write"
+ ],
+ [
+ "is",
+ "hop"
+ ],
+ [
+ "ish",
+ "op"
+ ],
+ [
+ "i",
+ "shop"
+ ],
+ [
+ "OL",
+ "D"
+ ],
+ [
+ "O",
+ "LD"
+ ],
+ [
+ "▁c",
+ "pu"
+ ],
+ [
+ "▁cp",
+ "u"
+ ],
+ [
+ "▁",
+ "cpu"
+ ],
+ [
+ "▁occ",
+ "urs"
+ ],
+ [
+ "▁occur",
+ "s"
+ ],
+ [
+ "▁oc",
+ "curs"
+ ],
+ [
+ "ó",
+ "ł"
+ ],
+ [
+ "str",
+ "aint"
+ ],
+ [
+ "stra",
+ "int"
+ ],
+ [
+ "▁nu",
+ "clear"
+ ],
+ [
+ "▁nuc",
+ "lear"
+ ],
+ [
+ "▁nucle",
+ "ar"
+ ],
+ [
+ "Ar",
+ "ea"
+ ],
+ [
+ "Are",
+ "a"
+ ],
+ [
+ "A",
+ "rea"
+ ],
+ [
+ "cl",
+ "uster"
+ ],
+ [
+ "▁surround",
+ "ing"
+ ],
+ [
+ "▁J",
+ "uan"
+ ],
+ [
+ "▁Ju",
+ "an"
+ ],
+ [
+ "▁pr",
+ "ima"
+ ],
+ [
+ "▁prim",
+ "a"
+ ],
+ [
+ "▁pri",
+ "ma"
+ ],
+ [
+ "▁South",
+ "ern"
+ ],
+ [
+ "▁Sou",
+ "thern"
+ ],
+ [
+ "it",
+ "ty"
+ ],
+ [
+ "itt",
+ "y"
+ ],
+ [
+ "i",
+ "tty"
+ ],
+ [
+ "▁As",
+ "sembly"
+ ],
+ [
+ "▁",
+ "Assembly"
+ ],
+ [
+ "el",
+ "em"
+ ],
+ [
+ "ele",
+ "m"
+ ],
+ [
+ "e",
+ "lem"
+ ],
+ [
+ "ad",
+ "i"
+ ],
+ [
+ "a",
+ "di"
+ ],
+ [
+ "ér",
+ "al"
+ ],
+ [
+ "éra",
+ "l"
+ ],
+ [
+ "é",
+ "ral"
+ ],
+ [
+ "▁W",
+ "at"
+ ],
+ [
+ "▁Wa",
+ "t"
+ ],
+ [
+ "▁R",
+ "adio"
+ ],
+ [
+ "▁Rad",
+ "io"
+ ],
+ [
+ "▁",
+ "Radio"
+ ],
+ [
+ "▁g",
+ "egen"
+ ],
+ [
+ "▁ge",
+ "gen"
+ ],
+ [
+ "▁T",
+ "ony"
+ ],
+ [
+ "▁To",
+ "ny"
+ ],
+ [
+ "▁Ton",
+ "y"
+ ],
+ [
+ "pr",
+ "essed"
+ ],
+ [
+ "press",
+ "ed"
+ ],
+ [
+ "pres",
+ "sed"
+ ],
+ [
+ "p",
+ "ressed"
+ ],
+ [
+ "▁An",
+ "ne"
+ ],
+ [
+ "▁Ann",
+ "e"
+ ],
+ [
+ "▁N",
+ "S"
+ ],
+ [
+ "▁",
+ "NS"
+ ],
+ [
+ "▁P",
+ "ak"
+ ],
+ [
+ "▁Pa",
+ "k"
+ ],
+ [
+ "▁C",
+ "ivil"
+ ],
+ [
+ "▁Ci",
+ "vil"
+ ],
+ [
+ "▁th",
+ "rown"
+ ],
+ [
+ "▁throw",
+ "n"
+ ],
+ [
+ "▁thr",
+ "own"
+ ],
+ [
+ "▁thro",
+ "wn"
+ ],
+ [
+ "NO",
+ "NE"
+ ],
+ [
+ "NON",
+ "E"
+ ],
+ [
+ "N",
+ "ONE"
+ ],
+ [
+ "▁p",
+ "ump"
+ ],
+ [
+ "▁pu",
+ "mp"
+ ],
+ [
+ "▁s",
+ "olve"
+ ],
+ [
+ "▁sol",
+ "ve"
+ ],
+ [
+ "EN",
+ "ABLE"
+ ],
+ [
+ "▁Ph",
+ "ys"
+ ],
+ [
+ "▁",
+ "Phys"
+ ],
+ [
+ "▁]",
+ ","
+ ],
+ [
+ "▁",
+ "],"
+ ],
+ [
+ "PO",
+ "SE"
+ ],
+ [
+ "POS",
+ "E"
+ ],
+ [
+ "kt",
+ "et"
+ ],
+ [
+ "kte",
+ "t"
+ ],
+ [
+ "▁F",
+ "ab"
+ ],
+ [
+ "▁Fa",
+ "b"
+ ],
+ [
+ "valid",
+ "ate"
+ ],
+ [
+ "Iter",
+ "ator"
+ ],
+ [
+ "cond",
+ "ition"
+ ],
+ [
+ "re",
+ "du"
+ ],
+ [
+ "red",
+ "u"
+ ],
+ [
+ "r",
+ "edu"
+ ],
+ [
+ "▁neg",
+ "oti"
+ ],
+ [
+ "an",
+ "no"
+ ],
+ [
+ "ann",
+ "o"
+ ],
+ [
+ "▁s",
+ "ans"
+ ],
+ [
+ "▁sa",
+ "ns"
+ ],
+ [
+ "▁san",
+ "s"
+ ],
+ [
+ "▁U",
+ "l"
+ ],
+ [
+ "CH",
+ "AR"
+ ],
+ [
+ "▁ed",
+ "ition"
+ ],
+ [
+ "▁edit",
+ "ion"
+ ],
+ [
+ "▁spect",
+ "rum"
+ ],
+ [
+ "or",
+ "ie"
+ ],
+ [
+ "ori",
+ "e"
+ ],
+ [
+ "o",
+ "rie"
+ ],
+ [
+ "▁execut",
+ "ion"
+ ],
+ [
+ "▁exec",
+ "ution"
+ ],
+ [
+ "P",
+ "lease"
+ ],
+ [
+ "▁B",
+ "O"
+ ],
+ [
+ "▁",
+ "BO"
+ ],
+ [
+ "UR",
+ "N"
+ ],
+ [
+ "▁c",
+ "ow"
+ ],
+ [
+ "▁co",
+ "w"
+ ],
+ [
+ "▁",
+ "cow"
+ ],
+ [
+ "ст",
+ "ан"
+ ],
+ [
+ "ста",
+ "н"
+ ],
+ [
+ "с",
+ "тан"
+ ],
+ [
+ "istribut",
+ "ion"
+ ],
+ [
+ "Do",
+ "main"
+ ],
+ [
+ "Dom",
+ "ain"
+ ],
+ [
+ "▁re",
+ "aders"
+ ],
+ [
+ "▁read",
+ "ers"
+ ],
+ [
+ "▁reader",
+ "s"
+ ],
+ [
+ "▁cons",
+ "umer"
+ ],
+ [
+ "▁consum",
+ "er"
+ ],
+ [
+ "▁consume",
+ "r"
+ ],
+ [
+ "▁st",
+ "yles"
+ ],
+ [
+ "▁style",
+ "s"
+ ],
+ [
+ "▁sty",
+ "les"
+ ],
+ [
+ "▁",
+ "styles"
+ ],
+ [
+ "en",
+ "code"
+ ],
+ [
+ "enc",
+ "ode"
+ ],
+ [
+ "▁C",
+ "y"
+ ],
+ [
+ "Com",
+ "mon"
+ ],
+ [
+ "Comm",
+ "on"
+ ],
+ [
+ "▁P",
+ "rop"
+ ],
+ [
+ "▁Pro",
+ "p"
+ ],
+ [
+ "▁Pr",
+ "op"
+ ],
+ [
+ "▁",
+ "Prop"
+ ],
+ [
+ "▁ex",
+ "ecute"
+ ],
+ [
+ "▁execut",
+ "e"
+ ],
+ [
+ "▁exec",
+ "ute"
+ ],
+ [
+ "▁",
+ "execute"
+ ],
+ [
+ "▁e",
+ "q"
+ ],
+ [
+ "▁",
+ "eq"
+ ],
+ [
+ "▁vis",
+ "itors"
+ ],
+ [
+ "▁visit",
+ "ors"
+ ],
+ [
+ "▁visitor",
+ "s"
+ ],
+ [
+ "▁A",
+ "mb"
+ ],
+ [
+ "▁Am",
+ "b"
+ ],
+ [
+ "ud",
+ "ad"
+ ],
+ [
+ "uda",
+ "d"
+ ],
+ [
+ "q",
+ "quad"
+ ],
+ [
+ "▁C",
+ "ert"
+ ],
+ [
+ "▁Ce",
+ "rt"
+ ],
+ [
+ "▁Cer",
+ "t"
+ ],
+ [
+ "▁",
+ "Cert"
+ ],
+ [
+ "▁t",
+ "rop"
+ ],
+ [
+ "▁tr",
+ "op"
+ ],
+ [
+ "▁tro",
+ "p"
+ ],
+ [
+ "▁yes",
+ "terday"
+ ],
+ [
+ "ta",
+ "in"
+ ],
+ [
+ "t",
+ "ain"
+ ],
+ [
+ "L",
+ "D"
+ ],
+ [
+ "at",
+ "ro"
+ ],
+ [
+ "atr",
+ "o"
+ ],
+ [
+ "▁incre",
+ "ases"
+ ],
+ [
+ "▁increase",
+ "s"
+ ],
+ [
+ "▁W",
+ "ars"
+ ],
+ [
+ "▁War",
+ "s"
+ ],
+ [
+ "▁Wa",
+ "rs"
+ ],
+ [
+ "ne",
+ "d"
+ ],
+ [
+ "n",
+ "ed"
+ ],
+ [
+ "be",
+ "fore"
+ ],
+ [
+ "b",
+ "efore"
+ ],
+ [
+ "au",
+ "pt"
+ ],
+ [
+ "a",
+ "upt"
+ ],
+ [
+ "▁E",
+ "RR"
+ ],
+ [
+ "▁ER",
+ "R"
+ ],
+ [
+ "▁",
+ "ERR"
+ ],
+ [
+ "▁F",
+ "ord"
+ ],
+ [
+ "▁For",
+ "d"
+ ],
+ [
+ "▁Fo",
+ "rd"
+ ],
+ [
+ "▁d",
+ "alla"
+ ],
+ [
+ "▁da",
+ "lla"
+ ],
+ [
+ "▁dal",
+ "la"
+ ],
+ [
+ "▁dall",
+ "a"
+ ],
+ [
+ "UL",
+ "AR"
+ ],
+ [
+ "▁st",
+ "rike"
+ ],
+ [
+ "▁str",
+ "ike"
+ ],
+ [
+ "▁stri",
+ "ke"
+ ],
+ [
+ "Ar",
+ "r"
+ ],
+ [
+ "A",
+ "rr"
+ ],
+ [
+ "▁re",
+ "covery"
+ ],
+ [
+ "▁rec",
+ "overy"
+ ],
+ [
+ "▁recover",
+ "y"
+ ],
+ [
+ "▁Res",
+ "ponse"
+ ],
+ [
+ "▁",
+ "Response"
+ ],
+ [
+ "▁strateg",
+ "ies"
+ ],
+ [
+ "▁і",
+ "н"
+ ],
+ [
+ "▁",
+ "ін"
+ ],
+ [
+ "▁re",
+ "ar"
+ ],
+ [
+ "▁r",
+ "ear"
+ ],
+ [
+ "▁adult",
+ "s"
+ ],
+ [
+ "▁Н",
+ "е"
+ ],
+ [
+ "window",
+ "s"
+ ],
+ [
+ "wind",
+ "ows"
+ ],
+ [
+ "de",
+ "cl"
+ ],
+ [
+ "dec",
+ "l"
+ ],
+ [
+ "ol",
+ "en"
+ ],
+ [
+ "ole",
+ "n"
+ ],
+ [
+ "o",
+ "len"
+ ],
+ [
+ "▁J",
+ "ord"
+ ],
+ [
+ "▁Jo",
+ "rd"
+ ],
+ [
+ "▁K",
+ "al"
+ ],
+ [
+ "▁Ka",
+ "l"
+ ],
+ [
+ "▁c",
+ "ui"
+ ],
+ [
+ "▁cu",
+ "i"
+ ],
+ [
+ "▁П",
+ "ро"
+ ],
+ [
+ "▁S",
+ "ever"
+ ],
+ [
+ "▁Se",
+ "ver"
+ ],
+ [
+ "▁Sev",
+ "er"
+ ],
+ [
+ "▁a",
+ "le"
+ ],
+ [
+ "▁al",
+ "e"
+ ],
+ [
+ "▁",
+ "ale"
+ ],
+ [
+ "▁pe",
+ "ut"
+ ],
+ [
+ "▁peu",
+ "t"
+ ],
+ [
+ "St",
+ "ats"
+ ],
+ [
+ "Stat",
+ "s"
+ ],
+ [
+ "▁R",
+ "oss"
+ ],
+ [
+ "▁Ro",
+ "ss"
+ ],
+ [
+ "▁Ros",
+ "s"
+ ],
+ [
+ "ar",
+ "ten"
+ ],
+ [
+ "art",
+ "en"
+ ],
+ [
+ "arte",
+ "n"
+ ],
+ [
+ "sh",
+ "all"
+ ],
+ [
+ "shal",
+ "l"
+ ],
+ [
+ "sha",
+ "ll"
+ ],
+ [
+ "s",
+ "hall"
+ ],
+ [
+ "▁ent",
+ "ertain"
+ ],
+ [
+ "▁enter",
+ "tain"
+ ],
+ [
+ "▁entert",
+ "ain"
+ ],
+ [
+ "▁par",
+ "king"
+ ],
+ [
+ "▁park",
+ "ing"
+ ],
+ [
+ "но",
+ "ви"
+ ],
+ [
+ "нов",
+ "и"
+ ],
+ [
+ "er",
+ "re"
+ ],
+ [
+ "err",
+ "e"
+ ],
+ [
+ "▁fun",
+ "ding"
+ ],
+ [
+ "▁fund",
+ "ing"
+ ],
+ [
+ "▁C",
+ "le"
+ ],
+ [
+ "▁Cl",
+ "e"
+ ],
+ [
+ "▁O",
+ "t"
+ ],
+ [
+ "un",
+ "st"
+ ],
+ [
+ "uns",
+ "t"
+ ],
+ [
+ "assert",
+ "Equals"
+ ],
+ [
+ "assertEqual",
+ "s"
+ ],
+ [
+ "▁c",
+ "ancell"
+ ],
+ [
+ "▁can",
+ "cell"
+ ],
+ [
+ "▁cancel",
+ "l"
+ ],
+ [
+ "TA",
+ "G"
+ ],
+ [
+ "T",
+ "AG"
+ ],
+ [
+ "▁E",
+ "arly"
+ ],
+ [
+ "▁Earl",
+ "y"
+ ],
+ [
+ "▁feed",
+ "back"
+ ],
+ [
+ "▁p",
+ "and"
+ ],
+ [
+ "▁pan",
+ "d"
+ ],
+ [
+ "▁pa",
+ "nd"
+ ],
+ [
+ "y",
+ "o"
+ ],
+ [
+ "▁mir",
+ "ror"
+ ],
+ [
+ "▁ver",
+ "b"
+ ],
+ [
+ "▁ve",
+ "rb"
+ ],
+ [
+ "▁",
+ "verb"
+ ],
+ [
+ "▁high",
+ "light"
+ ],
+ [
+ "er",
+ "ialize"
+ ],
+ [
+ "erial",
+ "ize"
+ ],
+ [
+ "▁g",
+ "rade"
+ ],
+ [
+ "▁gr",
+ "ade"
+ ],
+ [
+ "▁grad",
+ "e"
+ ],
+ [
+ "▁gra",
+ "de"
+ ],
+ [
+ "▁",
+ "grade"
+ ],
+ [
+ "ла",
+ "сь"
+ ],
+ [
+ "▁Br",
+ "ook"
+ ],
+ [
+ "▁Bro",
+ "ok"
+ ],
+ [
+ "▁L",
+ "I"
+ ],
+ [
+ "▁",
+ "LI"
+ ],
+ [
+ "▁im",
+ "plies"
+ ],
+ [
+ "▁impl",
+ "ies"
+ ],
+ [
+ "▁e",
+ "norm"
+ ],
+ [
+ "▁en",
+ "orm"
+ ],
+ [
+ "aj",
+ "ą"
+ ],
+ [
+ "a",
+ "ją"
+ ],
+ [
+ "▁W",
+ "er"
+ ],
+ [
+ "▁We",
+ "r"
+ ],
+ [
+ "aw",
+ "ay"
+ ],
+ [
+ "awa",
+ "y"
+ ],
+ [
+ "a",
+ "way"
+ ],
+ [
+ "▁machine",
+ "s"
+ ],
+ [
+ "▁mach",
+ "ines"
+ ],
+ [
+ "▁d",
+ "ent"
+ ],
+ [
+ "▁de",
+ "nt"
+ ],
+ [
+ "▁den",
+ "t"
+ ],
+ [
+ "Id",
+ "x"
+ ],
+ [
+ "I",
+ "dx"
+ ],
+ [
+ "▁t",
+ "id"
+ ],
+ [
+ "▁ti",
+ "d"
+ ],
+ [
+ "▁",
+ "tid"
+ ],
+ [
+ ")",
+ "\""
+ ],
+ [
+ "▁m",
+ "ole"
+ ],
+ [
+ "▁mo",
+ "le"
+ ],
+ [
+ "▁mol",
+ "e"
+ ],
+ [
+ "bo",
+ "ld"
+ ],
+ [
+ "bol",
+ "d"
+ ],
+ [
+ "b",
+ "old"
+ ],
+ [
+ "CO",
+ "NT"
+ ],
+ [
+ "CON",
+ "T"
+ ],
+ [
+ "C",
+ "ONT"
+ ],
+ [
+ "▁é",
+ "p"
+ ],
+ [
+ "▁",
+ "ép"
+ ],
+ [
+ "▁cut",
+ "ting"
+ ],
+ [
+ "▁N",
+ "eg"
+ ],
+ [
+ "▁Ne",
+ "g"
+ ],
+ [
+ "▁",
+ "Neg"
+ ],
+ [
+ "▁t",
+ "ong"
+ ],
+ [
+ "▁to",
+ "ng"
+ ],
+ [
+ "▁ton",
+ "g"
+ ],
+ [
+ "▁net",
+ "works"
+ ],
+ [
+ "▁network",
+ "s"
+ ],
+ [
+ "▁F",
+ "all"
+ ],
+ [
+ "▁Fa",
+ "ll"
+ ],
+ [
+ "▁Fal",
+ "l"
+ ],
+ [
+ "▁",
+ "Fall"
+ ],
+ [
+ "gener",
+ "ated"
+ ],
+ [
+ "generate",
+ "d"
+ ],
+ [
+ "▁P",
+ "ri"
+ ],
+ [
+ "▁Pr",
+ "i"
+ ],
+ [
+ "UE",
+ "ST"
+ ],
+ [
+ "UES",
+ "T"
+ ],
+ [
+ "U",
+ "EST"
+ ],
+ [
+ "▁Be",
+ "lg"
+ ],
+ [
+ "▁Bel",
+ "g"
+ ],
+ [
+ "▁s",
+ "heet"
+ ],
+ [
+ "▁she",
+ "et"
+ ],
+ [
+ "▁",
+ "sheet"
+ ],
+ [
+ "кс",
+ "и"
+ ],
+ [
+ "к",
+ "си"
+ ],
+ [
+ "▁",
+ "†"
+ ],
+ [
+ "▁y",
+ "eah"
+ ],
+ [
+ "▁ye",
+ "ah"
+ ],
+ [
+ "▁Vict",
+ "or"
+ ],
+ [
+ "▁Vi",
+ "ctor"
+ ],
+ [
+ "▁Vic",
+ "tor"
+ ],
+ [
+ "▁R",
+ "ub"
+ ],
+ [
+ "▁Ru",
+ "b"
+ ],
+ [
+ "▁candid",
+ "ates"
+ ],
+ [
+ "▁candidate",
+ "s"
+ ],
+ [
+ "pr",
+ "és"
+ ],
+ [
+ "▁E",
+ "U"
+ ],
+ [
+ "et",
+ "r"
+ ],
+ [
+ "e",
+ "tr"
+ ],
+ [
+ "▁roll",
+ "ed"
+ ],
+ [
+ "▁",
+ "rolled"
+ ],
+ [
+ "▁P",
+ "as"
+ ],
+ [
+ "▁Pa",
+ "s"
+ ],
+ [
+ "▁Ar",
+ "thur"
+ ],
+ [
+ "Ar",
+ "ch"
+ ],
+ [
+ "Arc",
+ "h"
+ ],
+ [
+ "▁M",
+ "ann"
+ ],
+ [
+ "▁Man",
+ "n"
+ ],
+ [
+ "▁Ma",
+ "nn"
+ ],
+ [
+ "Amer",
+ "ican"
+ ],
+ [
+ "America",
+ "n"
+ ],
+ [
+ "ze",
+ "s"
+ ],
+ [
+ "z",
+ "es"
+ ],
+ [
+ "in",
+ "ners"
+ ],
+ [
+ "inn",
+ "ers"
+ ],
+ [
+ "inner",
+ "s"
+ ],
+ [
+ "▁A",
+ "uto"
+ ],
+ [
+ "▁Aut",
+ "o"
+ ],
+ [
+ "▁Au",
+ "to"
+ ],
+ [
+ "▁",
+ "Auto"
+ ],
+ [
+ "▁profess",
+ "or"
+ ],
+ [
+ "▁profes",
+ "sor"
+ ],
+ [
+ "▁)",
+ ";\r"
+ ],
+ [
+ "▁);",
+ "\r"
+ ],
+ [
+ "▁",
+ ");\r"
+ ],
+ [
+ "▁ad",
+ "dr"
+ ],
+ [
+ "▁add",
+ "r"
+ ],
+ [
+ "▁",
+ "addr"
+ ],
+ [
+ "▁Med",
+ "ical"
+ ],
+ [
+ "▁Medic",
+ "al"
+ ],
+ [
+ "▁f",
+ "ired"
+ ],
+ [
+ "▁fire",
+ "d"
+ ],
+ [
+ "▁fi",
+ "red"
+ ],
+ [
+ "▁fir",
+ "ed"
+ ],
+ [
+ "▁C",
+ "ore"
+ ],
+ [
+ "▁Co",
+ "re"
+ ],
+ [
+ "▁Cor",
+ "e"
+ ],
+ [
+ "▁",
+ "Core"
+ ],
+ [
+ "▁CON",
+ "FIG"
+ ],
+ [
+ "▁",
+ "CONFIG"
+ ],
+ [
+ "▁s",
+ "ql"
+ ],
+ [
+ "▁sq",
+ "l"
+ ],
+ [
+ "▁",
+ "sql"
+ ],
+ [
+ "▁Con",
+ "serv"
+ ],
+ [
+ "▁Cons",
+ "erv"
+ ],
+ [
+ "▁Conse",
+ "rv"
+ ],
+ [
+ "ic",
+ "hen"
+ ],
+ [
+ "ich",
+ "en"
+ ],
+ [
+ "iche",
+ "n"
+ ],
+ [
+ "i",
+ "chen"
+ ],
+ [
+ "Ver",
+ "tex"
+ ],
+ [
+ "Vert",
+ "ex"
+ ],
+ [
+ "▁H",
+ "O"
+ ],
+ [
+ "▁",
+ "HO"
+ ],
+ [
+ "Y",
+ "eah"
+ ],
+ [
+ "No",
+ "te"
+ ],
+ [
+ "Not",
+ "e"
+ ],
+ [
+ "N",
+ "ote"
+ ],
+ [
+ "▁O",
+ "K"
+ ],
+ [
+ "▁",
+ "OK"
+ ],
+ [
+ "mu",
+ "s"
+ ],
+ [
+ "m",
+ "us"
+ ],
+ [
+ "f",
+ "ocus"
+ ],
+ [
+ "aj",
+ "a"
+ ],
+ [
+ "a",
+ "ja"
+ ],
+ [
+ "r",
+ "á"
+ ],
+ [
+ "▁h",
+ "ence"
+ ],
+ [
+ "▁hen",
+ "ce"
+ ],
+ [
+ "▁execut",
+ "ive"
+ ],
+ [
+ "▁liqu",
+ "id"
+ ],
+ [
+ "uj",
+ "e"
+ ],
+ [
+ "u",
+ "je"
+ ],
+ [
+ "▁d",
+ "riven"
+ ],
+ [
+ "▁dr",
+ "iven"
+ ],
+ [
+ "▁dri",
+ "ven"
+ ],
+ [
+ "▁driv",
+ "en"
+ ],
+ [
+ "▁drive",
+ "n"
+ ],
+ [
+ "▁",
+ "driven"
+ ],
+ [
+ "ig",
+ "ue"
+ ],
+ [
+ "igu",
+ "e"
+ ],
+ [
+ "i",
+ "gue"
+ ],
+ [
+ "▁W",
+ "ik"
+ ],
+ [
+ "▁Wi",
+ "k"
+ ],
+ [
+ "R",
+ "ate"
+ ],
+ [
+ "ra",
+ "nd"
+ ],
+ [
+ "ran",
+ "d"
+ ],
+ [
+ "r",
+ "and"
+ ],
+ [
+ "Result",
+ "s"
+ ],
+ [
+ "▁cop",
+ "ies"
+ ],
+ [
+ "▁t",
+ "an"
+ ],
+ [
+ "▁ta",
+ "n"
+ ],
+ [
+ "▁",
+ "tan"
+ ],
+ [
+ "rit",
+ "eria"
+ ],
+ [
+ "rite",
+ "ria"
+ ],
+ [
+ "riter",
+ "ia"
+ ],
+ [
+ "en",
+ "en"
+ ],
+ [
+ "ene",
+ "n"
+ ],
+ [
+ "e",
+ "nen"
+ ],
+ [
+ "}_",
+ "\\"
+ ],
+ [
+ "}",
+ "_\\"
+ ],
+ [
+ "▁po",
+ "bl"
+ ],
+ [
+ "▁pob",
+ "l"
+ ],
+ [
+ "▁sou",
+ "thern"
+ ],
+ [
+ "▁south",
+ "ern"
+ ],
+ [
+ "el",
+ "n"
+ ],
+ [
+ "e",
+ "ln"
+ ],
+ [
+ "▁z",
+ "wei"
+ ],
+ [
+ "▁zwe",
+ "i"
+ ],
+ [
+ "▁zw",
+ "ei"
+ ],
+ [
+ "▁con",
+ "crete"
+ ],
+ [
+ "▁CONDITION",
+ "S"
+ ],
+ [
+ "▁dream",
+ "s"
+ ],
+ [
+ "▁dre",
+ "ams"
+ ],
+ [
+ "▁min",
+ "im"
+ ],
+ [
+ "▁mi",
+ "nim"
+ ],
+ [
+ "▁mini",
+ "m"
+ ],
+ [
+ "▁em",
+ "ployee"
+ ],
+ [
+ "▁employ",
+ "ee"
+ ],
+ [
+ "▁n",
+ "ap"
+ ],
+ [
+ "▁na",
+ "p"
+ ],
+ [
+ "▁su",
+ "spect"
+ ],
+ [
+ "▁sus",
+ "pect"
+ ],
+ [
+ "▁susp",
+ "ect"
+ ],
+ [
+ "Mo",
+ "use"
+ ],
+ [
+ "M",
+ "ouse"
+ ],
+ [
+ "▁ther",
+ "apy"
+ ],
+ [
+ "▁therap",
+ "y"
+ ],
+ [
+ "av",
+ "al"
+ ],
+ [
+ "ava",
+ "l"
+ ],
+ [
+ "a",
+ "val"
+ ],
+ [
+ "▁An",
+ "th"
+ ],
+ [
+ "▁Ant",
+ "h"
+ ],
+ [
+ "ST",
+ "ART"
+ ],
+ [
+ "st",
+ "ers"
+ ],
+ [
+ "ster",
+ "s"
+ ],
+ [
+ "ste",
+ "rs"
+ ],
+ [
+ "s",
+ "ters"
+ ],
+ [
+ "ish",
+ "ment"
+ ],
+ [
+ "fin",
+ "ite"
+ ],
+ [
+ "W",
+ "A"
+ ],
+ [
+ "v",
+ "y"
+ ],
+ [
+ "▁m",
+ "ood"
+ ],
+ [
+ "▁mo",
+ "od"
+ ],
+ [
+ "com",
+ "fort"
+ ],
+ [
+ "▁s",
+ "hr"
+ ],
+ [
+ "▁sh",
+ "r"
+ ],
+ [
+ "▁dec",
+ "ade"
+ ],
+ [
+ "я",
+ "бря"
+ ],
+ [
+ "▁'",
+ "#"
+ ],
+ [
+ "▁d",
+ "ot"
+ ],
+ [
+ "▁do",
+ "t"
+ ],
+ [
+ "▁",
+ "dot"
+ ],
+ [
+ "▁h",
+ "ill"
+ ],
+ [
+ "▁hi",
+ "ll"
+ ],
+ [
+ "▁",
+ "hill"
+ ],
+ [
+ "ar",
+ "ry"
+ ],
+ [
+ "arr",
+ "y"
+ ],
+ [
+ "cat",
+ "ch"
+ ],
+ [
+ "c",
+ "atch"
+ ],
+ [
+ "▁j",
+ "Query"
+ ],
+ [
+ "▁",
+ "jQuery"
+ ],
+ [
+ "▁corpor",
+ "ate"
+ ],
+ [
+ "▁BAS",
+ "IS"
+ ],
+ [
+ "▁appoint",
+ "ed"
+ ],
+ [
+ "▁em",
+ "bar"
+ ],
+ [
+ "▁emb",
+ "ar"
+ ],
+ [
+ "ograph",
+ "ie"
+ ],
+ [
+ "▁p",
+ "ressed"
+ ],
+ [
+ "▁pr",
+ "essed"
+ ],
+ [
+ "▁pres",
+ "sed"
+ ],
+ [
+ "▁press",
+ "ed"
+ ],
+ [
+ "▁",
+ "pressed"
+ ],
+ [
+ "▁ch",
+ "ampion"
+ ],
+ [
+ "▁champ",
+ "ion"
+ ],
+ [
+ "em",
+ "it"
+ ],
+ [
+ "emi",
+ "t"
+ ],
+ [
+ "e",
+ "mit"
+ ],
+ [
+ "▁B",
+ "ed"
+ ],
+ [
+ "▁Be",
+ "d"
+ ],
+ [
+ "ва",
+ "ння"
+ ],
+ [
+ "ван",
+ "ня"
+ ],
+ [
+ "Gu",
+ "i"
+ ],
+ [
+ "G",
+ "ui"
+ ],
+ [
+ "▁P",
+ "UR"
+ ],
+ [
+ "▁ur",
+ "ban"
+ ],
+ [
+ "▁urb",
+ "an"
+ ],
+ [
+ "▁sent",
+ "ence"
+ ],
+ [
+ "bu",
+ "ry"
+ ],
+ [
+ "bur",
+ "y"
+ ],
+ [
+ "b",
+ "ury"
+ ],
+ [
+ "▁V",
+ "ideo"
+ ],
+ [
+ "▁",
+ "Video"
+ ],
+ [
+ "▁regular",
+ "ly"
+ ],
+ [
+ "▁regul",
+ "arly"
+ ],
+ [
+ "v",
+ "l"
+ ],
+ [
+ "▁с",
+ "лу"
+ ],
+ [
+ "▁",
+ "слу"
+ ],
+ [
+ "oc",
+ "key"
+ ],
+ [
+ "ock",
+ "ey"
+ ],
+ [
+ "ev",
+ "in"
+ ],
+ [
+ "e",
+ "vin"
+ ],
+ [
+ "ult",
+ "ural"
+ ],
+ [
+ "ultur",
+ "al"
+ ],
+ [
+ "▁pass",
+ "age"
+ ],
+ [
+ "▁со",
+ "став"
+ ],
+ [
+ "▁соста",
+ "в"
+ ],
+ [
+ "▁large",
+ "ly"
+ ],
+ [
+ "▁larg",
+ "ely"
+ ],
+ [
+ "or",
+ "ters"
+ ],
+ [
+ "ort",
+ "ers"
+ ],
+ [
+ "orter",
+ "s"
+ ],
+ [
+ "orte",
+ "rs"
+ ],
+ [
+ "▁conne",
+ "ctions"
+ ],
+ [
+ "▁connection",
+ "s"
+ ],
+ [
+ "▁connect",
+ "ions"
+ ],
+ [
+ "▁surpr",
+ "ising"
+ ],
+ [
+ "b",
+ "c"
+ ],
+ [
+ "▁strong",
+ "ly"
+ ],
+ [
+ "ans",
+ "as"
+ ],
+ [
+ "▁s",
+ "ist"
+ ],
+ [
+ "▁si",
+ "st"
+ ],
+ [
+ "▁ext",
+ "reme"
+ ],
+ [
+ "▁extrem",
+ "e"
+ ],
+ [
+ "▁extr",
+ "eme"
+ ],
+ [
+ "wh",
+ "el"
+ ],
+ [
+ "whe",
+ "l"
+ ],
+ [
+ "w",
+ "hel"
+ ],
+ [
+ "▁de",
+ "aling"
+ ],
+ [
+ "▁deal",
+ "ing"
+ ],
+ [
+ "ograph",
+ "ic"
+ ],
+ [
+ "▁Republic",
+ "an"
+ ],
+ [
+ "▁gr",
+ "anted"
+ ],
+ [
+ "▁gran",
+ "ted"
+ ],
+ [
+ "▁grant",
+ "ed"
+ ],
+ [
+ "▁C",
+ "L"
+ ],
+ [
+ "▁",
+ "CL"
+ ],
+ [
+ "▁H",
+ "ope"
+ ],
+ [
+ "▁Ho",
+ "pe"
+ ],
+ [
+ "▁Hop",
+ "e"
+ ],
+ [
+ "less",
+ "ly"
+ ],
+ [
+ "▁u",
+ "pload"
+ ],
+ [
+ "▁up",
+ "load"
+ ],
+ [
+ "▁",
+ "upload"
+ ],
+ [
+ "▁-",
+ "\\"
+ ],
+ [
+ "▁",
+ "-\\"
+ ],
+ [
+ "ни",
+ "ю"
+ ],
+ [
+ "▁val",
+ "uable"
+ ],
+ [
+ "=",
+ "["
+ ],
+ [
+ "Pr",
+ "ice"
+ ],
+ [
+ "P",
+ "rice"
+ ],
+ [
+ "iss",
+ "ance"
+ ],
+ [
+ "ie",
+ "ns"
+ ],
+ [
+ "ien",
+ "s"
+ ],
+ [
+ "i",
+ "ens"
+ ],
+ [
+ "he",
+ "it"
+ ],
+ [
+ "▁sugg",
+ "ests"
+ ],
+ [
+ "▁suggest",
+ "s"
+ ],
+ [
+ "с",
+ "ло"
+ ],
+ [
+ "▁j",
+ "ur"
+ ],
+ [
+ "▁ju",
+ "r"
+ ],
+ [
+ "}",
+ "|"
+ ],
+ [
+ "l",
+ "p"
+ ],
+ [
+ "▁inv",
+ "ited"
+ ],
+ [
+ "▁invite",
+ "d"
+ ],
+ [
+ "▁de",
+ "riv"
+ ],
+ [
+ "▁der",
+ "iv"
+ ],
+ [
+ "IM",
+ "IT"
+ ],
+ [
+ "I",
+ "MIT"
+ ],
+ [
+ "ra",
+ "ss"
+ ],
+ [
+ "ras",
+ "s"
+ ],
+ [
+ "r",
+ "ass"
+ ],
+ [
+ "▁in",
+ "struct"
+ ],
+ [
+ "▁inst",
+ "ruct"
+ ],
+ [
+ "▁instr",
+ "uct"
+ ],
+ [
+ "▁c",
+ "ourses"
+ ],
+ [
+ "▁cour",
+ "ses"
+ ],
+ [
+ "▁course",
+ "s"
+ ],
+ [
+ "▁cours",
+ "es"
+ ],
+ [
+ "ä",
+ "ch"
+ ],
+ [
+ "▁fif",
+ "ty"
+ ],
+ [
+ "▁fi",
+ "fty"
+ ],
+ [
+ "DE",
+ "VICE"
+ ],
+ [
+ "DEV",
+ "ICE"
+ ],
+ [
+ "AS",
+ "H"
+ ],
+ [
+ "A",
+ "SH"
+ ],
+ [
+ "▁h",
+ "ip"
+ ],
+ [
+ "▁hi",
+ "p"
+ ],
+ [
+ "▁",
+ "hip"
+ ],
+ [
+ "Un",
+ "known"
+ ],
+ [
+ "▁C",
+ "atalogue"
+ ],
+ [
+ "▁Catal",
+ "ogue"
+ ],
+ [
+ "▁R",
+ "oll"
+ ],
+ [
+ "▁Ro",
+ "ll"
+ ],
+ [
+ "▁Rol",
+ "l"
+ ],
+ [
+ "▁",
+ "Roll"
+ ],
+ [
+ "▁t",
+ "ensor"
+ ],
+ [
+ "▁ten",
+ "sor"
+ ],
+ [
+ "▁tens",
+ "or"
+ ],
+ [
+ "▁",
+ "tensor"
+ ],
+ [
+ "be",
+ "c"
+ ],
+ [
+ "b",
+ "ec"
+ ],
+ [
+ "ét",
+ "é"
+ ],
+ [
+ "é",
+ "té"
+ ],
+ [
+ "Id",
+ "entity"
+ ],
+ [
+ "Ident",
+ "ity"
+ ],
+ [
+ "&",
+ "\\"
+ ],
+ [
+ "▁Step",
+ "hen"
+ ],
+ [
+ "▁Steph",
+ "en"
+ ],
+ [
+ "no",
+ "des"
+ ],
+ [
+ "node",
+ "s"
+ ],
+ [
+ "nod",
+ "es"
+ ],
+ [
+ "n",
+ "odes"
+ ],
+ [
+ "Di",
+ "m"
+ ],
+ [
+ "D",
+ "im"
+ ],
+ [
+ "▁cons",
+ "ists"
+ ],
+ [
+ "▁consist",
+ "s"
+ ],
+ [
+ "▁normal",
+ "ly"
+ ],
+ [
+ "▁norm",
+ "ally"
+ ],
+ [
+ "ub",
+ "l"
+ ],
+ [
+ "u",
+ "bl"
+ ],
+ [
+ "▁Pol",
+ "ice"
+ ],
+ [
+ "▁G",
+ "ames"
+ ],
+ [
+ "▁Game",
+ "s"
+ ],
+ [
+ "▁Ga",
+ "mes"
+ ],
+ [
+ "▁Gam",
+ "es"
+ ],
+ [
+ "fi",
+ "ve"
+ ],
+ [
+ "f",
+ "ive"
+ ],
+ [
+ "Ha",
+ "ve"
+ ],
+ [
+ "H",
+ "ave"
+ ],
+ [
+ "▁p",
+ "adding"
+ ],
+ [
+ "▁pad",
+ "ding"
+ ],
+ [
+ "▁",
+ "padding"
+ ],
+ [
+ "er",
+ "es"
+ ],
+ [
+ "ere",
+ "s"
+ ],
+ [
+ "e",
+ "res"
+ ],
+ [
+ "an",
+ "th"
+ ],
+ [
+ "ant",
+ "h"
+ ],
+ [
+ "▁p",
+ "uts"
+ ],
+ [
+ "▁put",
+ "s"
+ ],
+ [
+ "▁pu",
+ "ts"
+ ],
+ [
+ "um",
+ "inate"
+ ],
+ [
+ "umin",
+ "ate"
+ ],
+ [
+ "umi",
+ "nate"
+ ],
+ [
+ "ov",
+ "ie"
+ ],
+ [
+ "ovi",
+ "e"
+ ],
+ [
+ "▁In",
+ "dex"
+ ],
+ [
+ "▁Ind",
+ "ex"
+ ],
+ [
+ "▁",
+ "Index"
+ ],
+ [
+ "bl",
+ "ue"
+ ],
+ [
+ "Sc",
+ "al"
+ ],
+ [
+ "S",
+ "cal"
+ ],
+ [
+ "▁g",
+ "iant"
+ ],
+ [
+ "▁gi",
+ "ant"
+ ],
+ [
+ "T",
+ "F"
+ ],
+ [
+ "ps",
+ "on"
+ ],
+ [
+ "p",
+ "son"
+ ],
+ [
+ "▁vict",
+ "im"
+ ],
+ [
+ "▁vic",
+ "tim"
+ ],
+ [
+ "se",
+ "rial"
+ ],
+ [
+ "ser",
+ "ial"
+ ],
+ [
+ "s",
+ "erial"
+ ],
+ [
+ "▁S",
+ "ym"
+ ],
+ [
+ "▁Sy",
+ "m"
+ ],
+ [
+ "▁",
+ "Sym"
+ ],
+ [
+ "Sing",
+ "le"
+ ],
+ [
+ "S",
+ "ingle"
+ ],
+ [
+ "▁m",
+ "d"
+ ],
+ [
+ "▁",
+ "md"
+ ],
+ [
+ "▁att",
+ "ended"
+ ],
+ [
+ "▁attend",
+ "ed"
+ ],
+ [
+ "▁S",
+ "tra"
+ ],
+ [
+ "▁St",
+ "ra"
+ ],
+ [
+ "▁Str",
+ "a"
+ ],
+ [
+ "▁D",
+ "ark"
+ ],
+ [
+ "▁Dar",
+ "k"
+ ],
+ [
+ "▁",
+ "Dark"
+ ],
+ [
+ ")",
+ "|"
+ ],
+ [
+ "▁s",
+ "pan"
+ ],
+ [
+ "▁sp",
+ "an"
+ ],
+ [
+ "▁",
+ "span"
+ ],
+ [
+ "▁main",
+ "tenance"
+ ],
+ [
+ "▁b",
+ "ind"
+ ],
+ [
+ "▁bi",
+ "nd"
+ ],
+ [
+ "▁bin",
+ "d"
+ ],
+ [
+ "▁",
+ "bind"
+ ],
+ [
+ "Be",
+ "an"
+ ],
+ [
+ "il",
+ "arly"
+ ],
+ [
+ "ilar",
+ "ly"
+ ],
+ [
+ "▁con",
+ "vent"
+ ],
+ [
+ "▁conv",
+ "ent"
+ ],
+ [
+ "▁conven",
+ "t"
+ ],
+ [
+ "▁conve",
+ "nt"
+ ],
+ [
+ "▁Jos",
+ "é"
+ ],
+ [
+ "ud",
+ "d"
+ ],
+ [
+ "u",
+ "dd"
+ ],
+ [
+ "▁p",
+ "oly"
+ ],
+ [
+ "▁pol",
+ "y"
+ ],
+ [
+ "▁po",
+ "ly"
+ ],
+ [
+ "▁",
+ "poly"
+ ],
+ [
+ "▁i",
+ "dx"
+ ],
+ [
+ "▁id",
+ "x"
+ ],
+ [
+ "▁",
+ "idx"
+ ],
+ [
+ "▁as",
+ "ks"
+ ],
+ [
+ "▁ask",
+ "s"
+ ],
+ [
+ "▁ent",
+ "hus"
+ ],
+ [
+ "▁s",
+ "uck"
+ ],
+ [
+ "▁su",
+ "ck"
+ ],
+ [
+ "▁suc",
+ "k"
+ ],
+ [
+ "▁C",
+ "ou"
+ ],
+ [
+ "▁Co",
+ "u"
+ ],
+ [
+ "▁Corpor",
+ "ation"
+ ],
+ [
+ "us",
+ "ions"
+ ],
+ [
+ "usion",
+ "s"
+ ],
+ [
+ "op",
+ "her"
+ ],
+ [
+ "oph",
+ "er"
+ ],
+ [
+ "o",
+ "pher"
+ ],
+ [
+ "▁sympt",
+ "oms"
+ ],
+ [
+ "▁Joh",
+ "ann"
+ ],
+ [
+ "▁п",
+ "у"
+ ],
+ [
+ "▁",
+ "пу"
+ ],
+ [
+ "▁h",
+ "tml"
+ ],
+ [
+ "▁",
+ "html"
+ ],
+ [
+ "▁p",
+ "s"
+ ],
+ [
+ "▁",
+ "ps"
+ ],
+ [
+ "ear",
+ "ing"
+ ],
+ [
+ "ea",
+ "ring"
+ ],
+ [
+ "e",
+ "aring"
+ ],
+ [
+ "ge",
+ "sch"
+ ],
+ [
+ "ges",
+ "ch"
+ ],
+ [
+ "g",
+ "esch"
+ ],
+ [
+ "▁M",
+ "other"
+ ],
+ [
+ "▁Mo",
+ "ther"
+ ],
+ [
+ "▁Mot",
+ "her"
+ ],
+ [
+ "RE",
+ "T"
+ ],
+ [
+ "R",
+ "ET"
+ ],
+ [
+ "▁furn",
+ "iture"
+ ],
+ [
+ "P",
+ "F"
+ ],
+ [
+ "▁Gu",
+ "ard"
+ ],
+ [
+ "▁",
+ "Guard"
+ ],
+ [
+ "pat",
+ "tern"
+ ],
+ [
+ "▁love",
+ "ly"
+ ],
+ [
+ "▁lov",
+ "ely"
+ ],
+ [
+ "al",
+ "g"
+ ],
+ [
+ "a",
+ "lg"
+ ],
+ [
+ "ed",
+ "ly"
+ ],
+ [
+ "se",
+ "x"
+ ],
+ [
+ "s",
+ "ex"
+ ],
+ [
+ "▁fin",
+ "ds"
+ ],
+ [
+ "▁find",
+ "s"
+ ],
+ [
+ "Bu",
+ "f"
+ ],
+ [
+ "B",
+ "uf"
+ ],
+ [
+ "▁на",
+ "д"
+ ],
+ [
+ "▁",
+ "над"
+ ],
+ [
+ "▁к",
+ "м"
+ ],
+ [
+ "▁P",
+ "or"
+ ],
+ [
+ "▁Po",
+ "r"
+ ],
+ [
+ "С",
+ "Р"
+ ],
+ [
+ "En",
+ "ter"
+ ],
+ [
+ "Ent",
+ "er"
+ ],
+ [
+ "▁e",
+ "sta"
+ ],
+ [
+ "▁est",
+ "a"
+ ],
+ [
+ "▁es",
+ "ta"
+ ],
+ [
+ "▁",
+ "esta"
+ ],
+ [
+ "▁т",
+ "ре"
+ ],
+ [
+ "▁",
+ "тре"
+ ],
+ [
+ "▁\"",
+ "*"
+ ],
+ [
+ "▁F",
+ "ox"
+ ],
+ [
+ "▁Fo",
+ "x"
+ ],
+ [
+ "▁c",
+ "ock"
+ ],
+ [
+ "▁co",
+ "ck"
+ ],
+ [
+ "▁coc",
+ "k"
+ ],
+ [
+ "▁",
+ "cock"
+ ],
+ [
+ "B",
+ "undle"
+ ],
+ [
+ "▁p",
+ "uis"
+ ],
+ [
+ "▁pu",
+ "is"
+ ],
+ [
+ "▁",
+ "puis"
+ ],
+ [
+ "▁ann",
+ "ounce"
+ ],
+ [
+ "▁announ",
+ "ce"
+ ],
+ [
+ "▁g",
+ "uid"
+ ],
+ [
+ "▁gu",
+ "id"
+ ],
+ [
+ "▁",
+ "guid"
+ ],
+ [
+ "check",
+ "ed"
+ ],
+ [
+ "ic",
+ "ide"
+ ],
+ [
+ "ici",
+ "de"
+ ],
+ [
+ "ne",
+ "g"
+ ],
+ [
+ "n",
+ "eg"
+ ],
+ [
+ "▁G",
+ "il"
+ ],
+ [
+ "▁Gi",
+ "l"
+ ],
+ [
+ "sc",
+ "hen"
+ ],
+ [
+ "sch",
+ "en"
+ ],
+ [
+ "sche",
+ "n"
+ ],
+ [
+ "s",
+ "chen"
+ ],
+ [
+ "olog",
+ "ist"
+ ],
+ [
+ "is",
+ "o"
+ ],
+ [
+ "i",
+ "so"
+ ],
+ [
+ "group",
+ "s"
+ ],
+ [
+ "gro",
+ "ups"
+ ],
+ [
+ "g",
+ "roups"
+ ],
+ [
+ "▁some",
+ "body"
+ ],
+ [
+ "Da",
+ "y"
+ ],
+ [
+ "D",
+ "ay"
+ ],
+ [
+ "tr",
+ "as"
+ ],
+ [
+ "tra",
+ "s"
+ ],
+ [
+ "t",
+ "ras"
+ ],
+ [
+ "▁comp",
+ "act"
+ ],
+ [
+ "▁organ",
+ "ized"
+ ],
+ [
+ "▁organiz",
+ "ed"
+ ],
+ [
+ "▁organize",
+ "d"
+ ],
+ [
+ "▁r",
+ "oles"
+ ],
+ [
+ "▁ro",
+ "les"
+ ],
+ [
+ "▁role",
+ "s"
+ ],
+ [
+ "▁h",
+ "int"
+ ],
+ [
+ "▁hi",
+ "nt"
+ ],
+ [
+ "▁",
+ "hint"
+ ],
+ [
+ "▁s",
+ "å"
+ ],
+ [
+ "▁p",
+ "ays"
+ ],
+ [
+ "▁pay",
+ "s"
+ ],
+ [
+ "▁pa",
+ "ys"
+ ],
+ [
+ "▁С",
+ "и"
+ ],
+ [
+ "▁h",
+ "oped"
+ ],
+ [
+ "▁hope",
+ "d"
+ ],
+ [
+ "▁hop",
+ "ed"
+ ],
+ [
+ "▁ho",
+ "ped"
+ ],
+ [
+ "▁s",
+ "ail"
+ ],
+ [
+ "▁sa",
+ "il"
+ ],
+ [
+ "▁V",
+ "ers"
+ ],
+ [
+ "▁Ver",
+ "s"
+ ],
+ [
+ "▁Ve",
+ "rs"
+ ],
+ [
+ "▁",
+ "Vers"
+ ],
+ [
+ "▁em",
+ "br"
+ ],
+ [
+ "▁emb",
+ "r"
+ ],
+ [
+ "▁b",
+ "ot"
+ ],
+ [
+ "▁bo",
+ "t"
+ ],
+ [
+ "▁",
+ "bot"
+ ],
+ [
+ "▁ex",
+ "ceed"
+ ],
+ [
+ "▁exc",
+ "eed"
+ ],
+ [
+ "BA",
+ "CK"
+ ],
+ [
+ "B",
+ "ACK"
+ ],
+ [
+ "▁g",
+ "aze"
+ ],
+ [
+ "▁gaz",
+ "e"
+ ],
+ [
+ "▁ga",
+ "ze"
+ ],
+ [
+ "▁s",
+ "pons"
+ ],
+ [
+ "▁sp",
+ "ons"
+ ],
+ [
+ "▁spo",
+ "ns"
+ ],
+ [
+ "AS",
+ "T"
+ ],
+ [
+ "A",
+ "ST"
+ ],
+ [
+ "▁tor",
+ "ch"
+ ],
+ [
+ "▁",
+ "torch"
+ ],
+ [
+ "▁news",
+ "paper"
+ ],
+ [
+ "▁newsp",
+ "aper"
+ ],
+ [
+ "▁D",
+ "ist"
+ ],
+ [
+ "▁Dis",
+ "t"
+ ],
+ [
+ "▁Di",
+ "st"
+ ],
+ [
+ "▁",
+ "Dist"
+ ],
+ [
+ "▁b",
+ "ass"
+ ],
+ [
+ "▁bas",
+ "s"
+ ],
+ [
+ "▁ba",
+ "ss"
+ ],
+ [
+ "▁h",
+ "anging"
+ ],
+ [
+ "▁han",
+ "ging"
+ ],
+ [
+ "▁hang",
+ "ing"
+ ],
+ [
+ "▁e",
+ "ars"
+ ],
+ [
+ "▁ear",
+ "s"
+ ],
+ [
+ "▁",
+ "ears"
+ ],
+ [
+ "ń",
+ "sk"
+ ],
+ [
+ "get",
+ "Value"
+ ],
+ [
+ "▁un",
+ "us"
+ ],
+ [
+ "▁E",
+ "le"
+ ],
+ [
+ "▁El",
+ "e"
+ ],
+ [
+ "serv",
+ "ices"
+ ],
+ [
+ "service",
+ "s"
+ ],
+ [
+ "s",
+ "ervices"
+ ],
+ [
+ "▁d",
+ "ressed"
+ ],
+ [
+ "▁dr",
+ "essed"
+ ],
+ [
+ "▁dress",
+ "ed"
+ ],
+ [
+ "la",
+ "v"
+ ],
+ [
+ "l",
+ "av"
+ ],
+ [
+ "▁п",
+ "ла"
+ ],
+ [
+ "▁",
+ "пла"
+ ],
+ [
+ "Priv",
+ "ate"
+ ],
+ [
+ "P",
+ "rivate"
+ ],
+ [
+ "mi",
+ "c"
+ ],
+ [
+ "m",
+ "ic"
+ ],
+ [
+ "▁par",
+ "ser"
+ ],
+ [
+ "▁parse",
+ "r"
+ ],
+ [
+ "▁",
+ "parser"
+ ],
+ [
+ "▁se",
+ "ctions"
+ ],
+ [
+ "▁section",
+ "s"
+ ],
+ [
+ "▁sect",
+ "ions"
+ ],
+ [
+ "▁",
+ "sections"
+ ],
+ [
+ "▁f",
+ "o"
+ ],
+ [
+ "▁",
+ "fo"
+ ],
+ [
+ "Err",
+ "orf"
+ ],
+ [
+ "Error",
+ "f"
+ ],
+ [
+ "in",
+ "z"
+ ],
+ [
+ "ör",
+ "d"
+ ],
+ [
+ "ö",
+ "rd"
+ ],
+ [
+ "▁m",
+ "etric"
+ ],
+ [
+ "▁met",
+ "ric"
+ ],
+ [
+ "▁",
+ "metric"
+ ],
+ [
+ "UR",
+ "I"
+ ],
+ [
+ "U",
+ "RI"
+ ],
+ [
+ "▁v",
+ "ice"
+ ],
+ [
+ "▁vi",
+ "ce"
+ ],
+ [
+ "▁vic",
+ "e"
+ ],
+ [
+ "RE",
+ "D"
+ ],
+ [
+ "R",
+ "ED"
+ ],
+ [
+ "▁n",
+ "ue"
+ ],
+ [
+ "▁nu",
+ "e"
+ ],
+ [
+ "re",
+ "vs"
+ ],
+ [
+ "rev",
+ "s"
+ ],
+ [
+ "▁col",
+ "lected"
+ ],
+ [
+ "▁collect",
+ "ed"
+ ],
+ [
+ "▁colle",
+ "cted"
+ ],
+ [
+ "oo",
+ "se"
+ ],
+ [
+ "o",
+ "ose"
+ ],
+ [
+ "▁m",
+ "ond"
+ ],
+ [
+ "▁mon",
+ "d"
+ ],
+ [
+ "▁mo",
+ "nd"
+ ],
+ [
+ "▁",
+ "mond"
+ ],
+ [
+ "▁n",
+ "as"
+ ],
+ [
+ "▁na",
+ "s"
+ ],
+ [
+ "▁",
+ "nas"
+ ],
+ [
+ "▁На",
+ "се"
+ ],
+ [
+ "▁",
+ "å"
+ ],
+ [
+ "Dr",
+ "op"
+ ],
+ [
+ "D",
+ "rop"
+ ],
+ [
+ "▁ab",
+ "use"
+ ],
+ [
+ "▁s",
+ "ees"
+ ],
+ [
+ "▁se",
+ "es"
+ ],
+ [
+ "▁see",
+ "s"
+ ],
+ [
+ "▁H",
+ "ence"
+ ],
+ [
+ "▁Hen",
+ "ce"
+ ],
+ [
+ "ex",
+ "ec"
+ ],
+ [
+ "}\\",
+ ","
+ ],
+ [
+ "}",
+ "\\,"
+ ],
+ [
+ "▁ar",
+ "bitr"
+ ],
+ [
+ "▁Ap",
+ "plication"
+ ],
+ [
+ "▁",
+ "Application"
+ ],
+ [
+ "f",
+ "amily"
+ ],
+ [
+ "ü",
+ "d"
+ ],
+ [
+ "▁mag",
+ "netic"
+ ],
+ [
+ "▁magn",
+ "etic"
+ ],
+ [
+ "▁magnet",
+ "ic"
+ ],
+ [
+ "▁new",
+ "ly"
+ ],
+ [
+ "▁re",
+ "produ"
+ ],
+ [
+ "▁rep",
+ "rodu"
+ ],
+ [
+ "▁writ",
+ "ers"
+ ],
+ [
+ "▁write",
+ "rs"
+ ],
+ [
+ "▁writer",
+ "s"
+ ],
+ [
+ "▁he",
+ "aders"
+ ],
+ [
+ "▁head",
+ "ers"
+ ],
+ [
+ "▁header",
+ "s"
+ ],
+ [
+ "▁",
+ "headers"
+ ],
+ [
+ "š",
+ "í"
+ ],
+ [
+ "р",
+ "т"
+ ],
+ [
+ "YP",
+ "E"
+ ],
+ [
+ "Y",
+ "PE"
+ ],
+ [
+ "▁s",
+ "chema"
+ ],
+ [
+ "▁sch",
+ "ema"
+ ],
+ [
+ "▁sche",
+ "ma"
+ ],
+ [
+ "▁",
+ "schema"
+ ],
+ [
+ "▁C",
+ "e"
+ ],
+ [
+ "▁Je",
+ "ws"
+ ],
+ [
+ "▁Jew",
+ "s"
+ ],
+ [
+ "▁Re",
+ "cord"
+ ],
+ [
+ "▁Rec",
+ "ord"
+ ],
+ [
+ "▁",
+ "Record"
+ ],
+ [
+ "pre",
+ "sent"
+ ],
+ [
+ "pres",
+ "ent"
+ ],
+ [
+ "p",
+ "resent"
+ ],
+ [
+ "▁так",
+ "же"
+ ],
+ [
+ "▁label",
+ "s"
+ ],
+ [
+ "▁lab",
+ "els"
+ ],
+ [
+ "▁",
+ "labels"
+ ],
+ [
+ "S",
+ "ocket"
+ ],
+ [
+ "▁equ",
+ "ations"
+ ],
+ [
+ "▁equation",
+ "s"
+ ],
+ [
+ "▁eq",
+ "uations"
+ ],
+ [
+ "▁medic",
+ "ine"
+ ],
+ [
+ "▁author",
+ "ities"
+ ],
+ [
+ "}",
+ "`"
+ ],
+ [
+ "ст",
+ "ви"
+ ],
+ [
+ "ств",
+ "и"
+ ],
+ [
+ "▁C",
+ "orn"
+ ],
+ [
+ "▁Co",
+ "rn"
+ ],
+ [
+ "▁Cor",
+ "n"
+ ],
+ [
+ "▁environment",
+ "al"
+ ],
+ [
+ "WAR",
+ "E"
+ ],
+ [
+ "WA",
+ "RE"
+ ],
+ [
+ "W",
+ "ARE"
+ ],
+ [
+ "Me",
+ "r"
+ ],
+ [
+ "M",
+ "er"
+ ],
+ [
+ "▁са",
+ "мо"
+ ],
+ [
+ "▁Techn",
+ "ology"
+ ],
+ [
+ "▁S",
+ "af"
+ ],
+ [
+ "▁Sa",
+ "f"
+ ],
+ [
+ "▁con",
+ "n"
+ ],
+ [
+ "▁co",
+ "nn"
+ ],
+ [
+ "▁",
+ "conn"
+ ],
+ [
+ "▁U",
+ "m"
+ ],
+ [
+ "▁Pac",
+ "ific"
+ ],
+ [
+ "те",
+ "л"
+ ],
+ [
+ "ja",
+ "n"
+ ],
+ [
+ "j",
+ "an"
+ ],
+ [
+ "▁unc",
+ "ertain"
+ ],
+ [
+ "▁bel",
+ "ief"
+ ],
+ [
+ "▁belie",
+ "f"
+ ],
+ [
+ "co",
+ "unter"
+ ],
+ [
+ "count",
+ "er"
+ ],
+ [
+ "c",
+ "ounter"
+ ],
+ [
+ "to",
+ "Be"
+ ],
+ [
+ "IN",
+ "S"
+ ],
+ [
+ "I",
+ "NS"
+ ],
+ [
+ "we",
+ "et"
+ ],
+ [
+ "Li",
+ "ght"
+ ],
+ [
+ "L",
+ "ight"
+ ],
+ [
+ "pr",
+ "imary"
+ ],
+ [
+ "prim",
+ "ary"
+ ],
+ [
+ "▁feature",
+ "d"
+ ],
+ [
+ "▁feat",
+ "ured"
+ ],
+ [
+ "▁touch",
+ "ed"
+ ],
+ [
+ "▁tou",
+ "ched"
+ ],
+ [
+ "HT",
+ "TP"
+ ],
+ [
+ "▁t",
+ "act"
+ ],
+ [
+ "▁ta",
+ "ct"
+ ],
+ [
+ "pos",
+ "itory"
+ ],
+ [
+ "p",
+ "ository"
+ ],
+ [
+ "▁e",
+ "ines"
+ ],
+ [
+ "▁ein",
+ "es"
+ ],
+ [
+ "▁eine",
+ "s"
+ ],
+ [
+ "la",
+ "ss"
+ ],
+ [
+ "las",
+ "s"
+ ],
+ [
+ "l",
+ "ass"
+ ],
+ [
+ "сь",
+ "ка"
+ ],
+ [
+ "▁prz",
+ "ez"
+ ],
+ [
+ "▁prze",
+ "z"
+ ],
+ [
+ "▁f",
+ "uer"
+ ],
+ [
+ "▁fue",
+ "r"
+ ],
+ [
+ "▁fu",
+ "er"
+ ],
+ [
+ "▁exc",
+ "iting"
+ ],
+ [
+ "▁excit",
+ "ing"
+ ],
+ [
+ "▁C",
+ "ub"
+ ],
+ [
+ "▁Cu",
+ "b"
+ ],
+ [
+ "ag",
+ "an"
+ ],
+ [
+ "aga",
+ "n"
+ ],
+ [
+ "a",
+ "gan"
+ ],
+ [
+ "V",
+ "O"
+ ],
+ [
+ "▁'",
+ "%"
+ ],
+ [
+ "▁\\",
+ "{"
+ ],
+ [
+ "▁",
+ "\\{"
+ ],
+ [
+ "ub",
+ "ble"
+ ],
+ [
+ "▁F",
+ "ol"
+ ],
+ [
+ "▁Fo",
+ "l"
+ ],
+ [
+ "▁K",
+ "ong"
+ ],
+ [
+ "▁Kon",
+ "g"
+ ],
+ [
+ "▁Ko",
+ "ng"
+ ],
+ [
+ "▁ver",
+ "sch"
+ ],
+ [
+ "▁vers",
+ "ch"
+ ],
+ [
+ "FA",
+ "IL"
+ ],
+ [
+ "F",
+ "AIL"
+ ],
+ [
+ "▁na",
+ "ar"
+ ],
+ [
+ "ö",
+ "s"
+ ],
+ [
+ "sp",
+ "eed"
+ ],
+ [
+ "spe",
+ "ed"
+ ],
+ [
+ "s",
+ "peed"
+ ],
+ [
+ "▁terr",
+ "itor"
+ ],
+ [
+ "▁territo",
+ "r"
+ ],
+ [
+ "▁w",
+ "rap"
+ ],
+ [
+ "▁wr",
+ "ap"
+ ],
+ [
+ "▁",
+ "wrap"
+ ],
+ [
+ "▁Jah",
+ "re"
+ ],
+ [
+ "▁Jahr",
+ "e"
+ ],
+ [
+ "▁Ja",
+ "hre"
+ ],
+ [
+ "le",
+ "e"
+ ],
+ [
+ "l",
+ "ee"
+ ],
+ [
+ "▁cross",
+ "ed"
+ ],
+ [
+ "res",
+ "olve"
+ ],
+ [
+ "▁s",
+ "tim"
+ ],
+ [
+ "▁st",
+ "im"
+ ],
+ [
+ "N",
+ "ative"
+ ],
+ [
+ "ur",
+ "sor"
+ ],
+ [
+ "urs",
+ "or"
+ ],
+ [
+ "Not",
+ "Null"
+ ],
+ [
+ "▁Al",
+ "bert"
+ ],
+ [
+ "▁Alber",
+ "t"
+ ],
+ [
+ "▁Alb",
+ "ert"
+ ],
+ [
+ "▁sign",
+ "ature"
+ ],
+ [
+ "▁",
+ "signature"
+ ],
+ [
+ "▁R",
+ "u"
+ ],
+ [
+ "id",
+ "as"
+ ],
+ [
+ "ida",
+ "s"
+ ],
+ [
+ "i",
+ "das"
+ ],
+ [
+ "▁de",
+ "cent"
+ ],
+ [
+ "▁dec",
+ "ent"
+ ],
+ [
+ "▁dece",
+ "nt"
+ ],
+ [
+ "▁f",
+ "aced"
+ ],
+ [
+ "▁face",
+ "d"
+ ],
+ [
+ "▁fac",
+ "ed"
+ ],
+ [
+ "▁fa",
+ "ced"
+ ],
+ [
+ "▁",
+ "faced"
+ ],
+ [
+ "▁",
+ "лю"
+ ],
+ [
+ "▁Sp",
+ "ain"
+ ],
+ [
+ "▁res",
+ "istance"
+ ],
+ [
+ "▁resist",
+ "ance"
+ ],
+ [
+ "▁B",
+ "rian"
+ ],
+ [
+ "▁Br",
+ "ian"
+ ],
+ [
+ "kw",
+ "args"
+ ],
+ [
+ "▁inter",
+ "val"
+ ],
+ [
+ "▁",
+ "interval"
+ ],
+ [
+ "▁Л",
+ "е"
+ ],
+ [
+ "▁ex",
+ "plo"
+ ],
+ [
+ "▁expl",
+ "o"
+ ],
+ [
+ "▁exp",
+ "lo"
+ ],
+ [
+ "▁s",
+ "emi"
+ ],
+ [
+ "▁se",
+ "mi"
+ ],
+ [
+ "▁sem",
+ "i"
+ ],
+ [
+ "▁wide",
+ "ly"
+ ],
+ [
+ "▁wid",
+ "ely"
+ ],
+ [
+ "d",
+ "x"
+ ],
+ [
+ "ko",
+ "v"
+ ],
+ [
+ "k",
+ "ov"
+ ],
+ [
+ "▁C",
+ "ome"
+ ],
+ [
+ "▁Com",
+ "e"
+ ],
+ [
+ "▁Co",
+ "me"
+ ],
+ [
+ "▁",
+ "Come"
+ ],
+ [
+ "▁kn",
+ "ife"
+ ],
+ [
+ "As",
+ "p"
+ ],
+ [
+ "A",
+ "sp"
+ ],
+ [
+ "un",
+ "o"
+ ],
+ [
+ "u",
+ "no"
+ ],
+ [
+ "line",
+ "to"
+ ],
+ [
+ "lin",
+ "eto"
+ ],
+ [
+ "▁B",
+ "und"
+ ],
+ [
+ "▁Bu",
+ "nd"
+ ],
+ [
+ "▁Bun",
+ "d"
+ ],
+ [
+ "C",
+ "ert"
+ ],
+ [
+ "▁t",
+ "odo"
+ ],
+ [
+ "▁to",
+ "do"
+ ],
+ [
+ "▁tod",
+ "o"
+ ],
+ [
+ "ta",
+ "gs"
+ ],
+ [
+ "tag",
+ "s"
+ ],
+ [
+ "t",
+ "ags"
+ ],
+ [
+ "▁guarante",
+ "e"
+ ],
+ [
+ "▁v",
+ "ital"
+ ],
+ [
+ "▁vi",
+ "tal"
+ ],
+ [
+ "▁vit",
+ "al"
+ ],
+ [
+ "▁vita",
+ "l"
+ ],
+ [
+ "▁f",
+ "ought"
+ ],
+ [
+ "▁fou",
+ "ght"
+ ],
+ [
+ "▁E",
+ "nv"
+ ],
+ [
+ "▁En",
+ "v"
+ ],
+ [
+ "▁",
+ "Env"
+ ],
+ [
+ "H",
+ "D"
+ ],
+ [
+ "Lo",
+ "wer"
+ ],
+ [
+ "Low",
+ "er"
+ ],
+ [
+ "L",
+ "ower"
+ ],
+ [
+ "T",
+ "x"
+ ],
+ [
+ "▁F",
+ "a"
+ ],
+ [
+ "▁ant",
+ "icip"
+ ],
+ [
+ "▁anti",
+ "cip"
+ ],
+ [
+ "Time",
+ "r"
+ ],
+ [
+ "Tim",
+ "er"
+ ],
+ [
+ "T",
+ "imer"
+ ],
+ [
+ "med",
+ "iate"
+ ],
+ [
+ "medi",
+ "ate"
+ ],
+ [
+ "media",
+ "te"
+ ],
+ [
+ "▁pro",
+ "ven"
+ ],
+ [
+ "▁pr",
+ "oven"
+ ],
+ [
+ "▁prov",
+ "en"
+ ],
+ [
+ "▁prove",
+ "n"
+ ],
+ [
+ "▁part",
+ "ir"
+ ],
+ [
+ "▁parti",
+ "r"
+ ],
+ [
+ "A",
+ "E"
+ ],
+ [
+ "cur",
+ "sor"
+ ],
+ [
+ "curs",
+ "or"
+ ],
+ [
+ "c",
+ "ursor"
+ ],
+ [
+ "▁wood",
+ "en"
+ ],
+ [
+ "▁wo",
+ "oden"
+ ],
+ [
+ "▁Cont",
+ "act"
+ ],
+ [
+ "▁",
+ "Contact"
+ ],
+ [
+ "re",
+ "gs"
+ ],
+ [
+ "reg",
+ "s"
+ ],
+ [
+ "▁prov",
+ "inc"
+ ],
+ [
+ "▁provin",
+ "c"
+ ],
+ [
+ "▁D",
+ "C"
+ ],
+ [
+ "▁",
+ "DC"
+ ],
+ [
+ "▁mem",
+ "ories"
+ ],
+ [
+ "▁memor",
+ "ies"
+ ],
+ [
+ "▁memo",
+ "ries"
+ ],
+ [
+ "▁f",
+ "t"
+ ],
+ [
+ "▁",
+ "ft"
+ ],
+ [
+ "▁b",
+ "attery"
+ ],
+ [
+ "▁batter",
+ "y"
+ ],
+ [
+ "▁batt",
+ "ery"
+ ],
+ [
+ "▁bat",
+ "tery"
+ ],
+ [
+ "ute",
+ "nant"
+ ],
+ [
+ "uten",
+ "ant"
+ ],
+ [
+ "u",
+ "tenant"
+ ],
+ [
+ "Log",
+ "in"
+ ],
+ [
+ "Lo",
+ "gin"
+ ],
+ [
+ "ount",
+ "ry"
+ ],
+ [
+ "oun",
+ "try"
+ ],
+ [
+ "▁comp",
+ "ens"
+ ],
+ [
+ "operator",
+ "name"
+ ],
+ [
+ "▁Jac",
+ "ob"
+ ],
+ [
+ "ze",
+ "d"
+ ],
+ [
+ "z",
+ "ed"
+ ],
+ [
+ "AD",
+ "DR"
+ ],
+ [
+ "ADD",
+ "R"
+ ],
+ [
+ "▁qu",
+ "ad"
+ ],
+ [
+ "▁",
+ "quad"
+ ],
+ [
+ "*)",
+ "."
+ ],
+ [
+ "*",
+ ")."
+ ],
+ [
+ "▁co",
+ "at"
+ ],
+ [
+ "▁f",
+ "ir"
+ ],
+ [
+ "▁fi",
+ "r"
+ ],
+ [
+ "▁Mich",
+ "el"
+ ],
+ [
+ "▁Mic",
+ "hel"
+ ],
+ [
+ "▁Mi",
+ "chel"
+ ],
+ [
+ "▁Miche",
+ "l"
+ ],
+ [
+ "▁Stand",
+ "ard"
+ ],
+ [
+ "▁",
+ "Standard"
+ ],
+ [
+ "r",
+ "f"
+ ],
+ [
+ "me",
+ "l"
+ ],
+ [
+ "m",
+ "el"
+ ],
+ [
+ "▁co",
+ "eff"
+ ],
+ [
+ "▁Ira",
+ "q"
+ ],
+ [
+ "▁G",
+ "iven"
+ ],
+ [
+ "▁Gi",
+ "ven"
+ ],
+ [
+ "▁Give",
+ "n"
+ ],
+ [
+ "ни",
+ "ма"
+ ],
+ [
+ "ним",
+ "а"
+ ],
+ [
+ "▁F",
+ "IT"
+ ],
+ [
+ "▁FI",
+ "T"
+ ],
+ [
+ "▁p",
+ "eu"
+ ],
+ [
+ "▁pe",
+ "u"
+ ],
+ [
+ "▁i",
+ "g"
+ ],
+ [
+ "▁",
+ "ig"
+ ],
+ [
+ "▁C",
+ "ase"
+ ],
+ [
+ "▁Cas",
+ "e"
+ ],
+ [
+ "▁Ca",
+ "se"
+ ],
+ [
+ "▁",
+ "Case"
+ ],
+ [
+ "m",
+ "é"
+ ],
+ [
+ "▁par",
+ "allel"
+ ],
+ [
+ "▁",
+ "parallel"
+ ],
+ [
+ "ci",
+ "o"
+ ],
+ [
+ "c",
+ "io"
+ ],
+ [
+ "ko",
+ "w"
+ ],
+ [
+ "k",
+ "ow"
+ ],
+ [
+ "▁institut",
+ "ions"
+ ],
+ [
+ "▁institution",
+ "s"
+ ],
+ [
+ "í",
+ "cul"
+ ],
+ [
+ "ab",
+ "an"
+ ],
+ [
+ "aba",
+ "n"
+ ],
+ [
+ "a",
+ "ban"
+ ],
+ [
+ "U",
+ "X"
+ ],
+ [
+ "▁Sa",
+ "rah"
+ ],
+ [
+ "▁Sar",
+ "ah"
+ ],
+ [
+ "▁Sara",
+ "h"
+ ],
+ [
+ "▁m",
+ "és"
+ ],
+ [
+ "▁mé",
+ "s"
+ ],
+ [
+ "▁at",
+ "mos"
+ ],
+ [
+ "▁atm",
+ "os"
+ ],
+ [
+ "▁slä",
+ "ktet"
+ ],
+ [
+ "▁br",
+ "others"
+ ],
+ [
+ "▁bro",
+ "thers"
+ ],
+ [
+ "▁brother",
+ "s"
+ ],
+ [
+ "▁want",
+ "ing"
+ ],
+ [
+ "aa",
+ "aa"
+ ],
+ [
+ "▁f",
+ "est"
+ ],
+ [
+ "▁fe",
+ "st"
+ ],
+ [
+ "=",
+ "-"
+ ],
+ [
+ "▁for",
+ "ty"
+ ],
+ [
+ "▁fort",
+ "y"
+ ],
+ [
+ "▁cre",
+ "ates"
+ ],
+ [
+ "▁create",
+ "s"
+ ],
+ [
+ "▁creat",
+ "es"
+ ],
+ [
+ "h",
+ "h"
+ ],
+ [
+ "▁And",
+ "roid"
+ ],
+ [
+ "▁Andr",
+ "oid"
+ ],
+ [
+ "▁",
+ "Android"
+ ],
+ [
+ "an",
+ "ches"
+ ],
+ [
+ "anc",
+ "hes"
+ ],
+ [
+ "anch",
+ "es"
+ ],
+ [
+ "anche",
+ "s"
+ ],
+ [
+ "B",
+ "T"
+ ],
+ [
+ "up",
+ "load"
+ ],
+ [
+ "u",
+ "pload"
+ ],
+ [
+ "xi",
+ "s"
+ ],
+ [
+ "x",
+ "is"
+ ],
+ [
+ "H",
+ "z"
+ ],
+ [
+ "бо",
+ "р"
+ ],
+ [
+ "б",
+ "ор"
+ ],
+ [
+ "RA",
+ "Y"
+ ],
+ [
+ "R",
+ "AY"
+ ],
+ [
+ "nt",
+ "il"
+ ],
+ [
+ "n",
+ "til"
+ ],
+ [
+ "▁le",
+ "aned"
+ ],
+ [
+ "▁lean",
+ "ed"
+ ],
+ [
+ "un",
+ "da"
+ ],
+ [
+ "und",
+ "a"
+ ],
+ [
+ "▁ult",
+ "imately"
+ ],
+ [
+ "▁ultimate",
+ "ly"
+ ],
+ [
+ "▁t",
+ "ok"
+ ],
+ [
+ "▁to",
+ "k"
+ ],
+ [
+ "▁",
+ "tok"
+ ],
+ [
+ "ne",
+ "h"
+ ],
+ [
+ "n",
+ "eh"
+ ],
+ [
+ "▁law",
+ "yer"
+ ],
+ [
+ "he",
+ "nd"
+ ],
+ [
+ "hen",
+ "d"
+ ],
+ [
+ "h",
+ "end"
+ ],
+ [
+ "▁V",
+ "in"
+ ],
+ [
+ "▁Vi",
+ "n"
+ ],
+ [
+ "▁fac",
+ "ility"
+ ],
+ [
+ "▁facil",
+ "ity"
+ ],
+ [
+ "▁l",
+ "ikes"
+ ],
+ [
+ "▁li",
+ "kes"
+ ],
+ [
+ "▁like",
+ "s"
+ ],
+ [
+ "▁lik",
+ "es"
+ ],
+ [
+ "en",
+ "to"
+ ],
+ [
+ "ent",
+ "o"
+ ],
+ [
+ "Node",
+ "s"
+ ],
+ [
+ "No",
+ "des"
+ ],
+ [
+ "N",
+ "odes"
+ ],
+ [
+ "▁entr",
+ "ance"
+ ],
+ [
+ "at",
+ "to"
+ ],
+ [
+ "att",
+ "o"
+ ],
+ [
+ "a",
+ "tto"
+ ],
+ [
+ "re",
+ "tt"
+ ],
+ [
+ "ret",
+ "t"
+ ],
+ [
+ "r",
+ "ett"
+ ],
+ [
+ "ac",
+ "cept"
+ ],
+ [
+ "th",
+ "eme"
+ ],
+ [
+ "the",
+ "me"
+ ],
+ [
+ "та",
+ "н"
+ ],
+ [
+ "т",
+ "ан"
+ ],
+ [
+ "os",
+ "i"
+ ],
+ [
+ "o",
+ "si"
+ ],
+ [
+ "▁{",
+ "},"
+ ],
+ [
+ "▁{}",
+ ","
+ ],
+ [
+ "▁",
+ "{},"
+ ],
+ [
+ "pgfpath",
+ "lineto"
+ ],
+ [
+ "go",
+ "od"
+ ],
+ [
+ "g",
+ "ood"
+ ],
+ [
+ "sl",
+ "ot"
+ ],
+ [
+ "s",
+ "lot"
+ ],
+ [
+ "▁in",
+ "noc"
+ ],
+ [
+ "▁inn",
+ "oc"
+ ],
+ [
+ "▁pro",
+ "port"
+ ],
+ [
+ "▁pr",
+ "oport"
+ ],
+ [
+ "▁prop",
+ "ort"
+ ],
+ [
+ "▁ar",
+ "rive"
+ ],
+ [
+ "▁arriv",
+ "e"
+ ],
+ [
+ "▁arr",
+ "ive"
+ ],
+ [
+ "é",
+ "ho"
+ ],
+ [
+ "▁p",
+ "airs"
+ ],
+ [
+ "▁pa",
+ "irs"
+ ],
+ [
+ "▁pair",
+ "s"
+ ],
+ [
+ "▁wr",
+ "apped"
+ ],
+ [
+ "▁wrap",
+ "ped"
+ ],
+ [
+ "▁un",
+ "w"
+ ],
+ [
+ "▁expl",
+ "os"
+ ],
+ [
+ "▁exp",
+ "los"
+ ],
+ [
+ "▁explo",
+ "s"
+ ],
+ [
+ "▁g",
+ "el"
+ ],
+ [
+ "▁ge",
+ "l"
+ ],
+ [
+ "▁",
+ "gel"
+ ],
+ [
+ "W",
+ "ill"
+ ],
+ [
+ "▁Ze",
+ "aland"
+ ],
+ [
+ "ía",
+ "s"
+ ],
+ [
+ "í",
+ "as"
+ ],
+ [
+ "▁J",
+ "r"
+ ],
+ [
+ "▁F",
+ "ra"
+ ],
+ [
+ "▁Fr",
+ "a"
+ ],
+ [
+ "▁le",
+ "git"
+ ],
+ [
+ "▁leg",
+ "it"
+ ],
+ [
+ "▁il",
+ "legal"
+ ],
+ [
+ "к",
+ "лю"
+ ],
+ [
+ "▁t",
+ "ort"
+ ],
+ [
+ "▁to",
+ "rt"
+ ],
+ [
+ "▁tor",
+ "t"
+ ],
+ [
+ "▁p",
+ "ron"
+ ],
+ [
+ "▁pro",
+ "n"
+ ],
+ [
+ "▁pr",
+ "on"
+ ],
+ [
+ "F",
+ "i"
+ ],
+ [
+ "▁f",
+ "org"
+ ],
+ [
+ "▁for",
+ "g"
+ ],
+ [
+ "▁fo",
+ "rg"
+ ],
+ [
+ "ex",
+ "port"
+ ],
+ [
+ "exp",
+ "ort"
+ ],
+ [
+ "▁Child",
+ "ren"
+ ],
+ [
+ "▁",
+ "Children"
+ ],
+ [
+ "▁A",
+ "bs"
+ ],
+ [
+ "▁Ab",
+ "s"
+ ],
+ [
+ "▁",
+ "Abs"
+ ],
+ [
+ "▁S",
+ "end"
+ ],
+ [
+ "▁Se",
+ "nd"
+ ],
+ [
+ "▁Sen",
+ "d"
+ ],
+ [
+ "▁",
+ "Send"
+ ],
+ [
+ "▁dis",
+ "count"
+ ],
+ [
+ "▁disc",
+ "ount"
+ ],
+ [
+ "▁disco",
+ "unt"
+ ],
+ [
+ "▁p",
+ "oster"
+ ],
+ [
+ "▁pos",
+ "ter"
+ ],
+ [
+ "▁po",
+ "ster"
+ ],
+ [
+ "▁post",
+ "er"
+ ],
+ [
+ "en",
+ "ted"
+ ],
+ [
+ "ent",
+ "ed"
+ ],
+ [
+ "ente",
+ "d"
+ ],
+ [
+ "an",
+ "im"
+ ],
+ [
+ "ani",
+ "m"
+ ],
+ [
+ "a",
+ "nim"
+ ],
+ [
+ "ve",
+ "rb"
+ ],
+ [
+ "ver",
+ "b"
+ ],
+ [
+ "st",
+ "o"
+ ],
+ [
+ "s",
+ "to"
+ ],
+ [
+ "▁B",
+ "ible"
+ ],
+ [
+ "▁Bi",
+ "ble"
+ ],
+ [
+ "pend",
+ "ing"
+ ],
+ [
+ "pen",
+ "ding"
+ ],
+ [
+ "p",
+ "ending"
+ ],
+ [
+ "▁P",
+ "hot"
+ ],
+ [
+ "▁Ph",
+ "ot"
+ ],
+ [
+ "st",
+ "rap"
+ ],
+ [
+ "str",
+ "ap"
+ ],
+ [
+ "stra",
+ "p"
+ ],
+ [
+ "ie",
+ "ron"
+ ],
+ [
+ "ier",
+ "on"
+ ],
+ [
+ "iero",
+ "n"
+ ],
+ [
+ "i",
+ "eron"
+ ],
+ [
+ "P",
+ "G"
+ ],
+ [
+ "cul",
+ "ar"
+ ],
+ [
+ "cu",
+ "lar"
+ ],
+ [
+ "c",
+ "ular"
+ ],
+ [
+ "cri",
+ "t"
+ ],
+ [
+ "cr",
+ "it"
+ ],
+ [
+ "c",
+ "rit"
+ ],
+ [
+ "ur",
+ "d"
+ ],
+ [
+ "u",
+ "rd"
+ ],
+ [
+ "EN",
+ "O"
+ ],
+ [
+ "E",
+ "NO"
+ ],
+ [
+ "▁nor",
+ "thern"
+ ],
+ [
+ "▁north",
+ "ern"
+ ],
+ [
+ "▁natural",
+ "ly"
+ ],
+ [
+ "▁natur",
+ "ally"
+ ],
+ [
+ "<",
+ "'"
+ ],
+ [
+ "we",
+ "g"
+ ],
+ [
+ "w",
+ "eg"
+ ],
+ [
+ "▁dr",
+ "unk"
+ ],
+ [
+ "▁D",
+ "al"
+ ],
+ [
+ "▁Da",
+ "l"
+ ],
+ [
+ "▁m",
+ "ouse"
+ ],
+ [
+ "▁mo",
+ "use"
+ ],
+ [
+ "▁mou",
+ "se"
+ ],
+ [
+ "▁",
+ "mouse"
+ ],
+ [
+ "▁contin",
+ "uous"
+ ],
+ [
+ "▁continu",
+ "ous"
+ ],
+ [
+ "▁init",
+ "ially"
+ ],
+ [
+ "▁initial",
+ "ly"
+ ],
+ [
+ "▁initi",
+ "ally"
+ ],
+ [
+ "ag",
+ "u"
+ ],
+ [
+ "a",
+ "gu"
+ ],
+ [
+ "м",
+ "пи"
+ ],
+ [
+ "AN",
+ "T"
+ ],
+ [
+ "A",
+ "NT"
+ ],
+ [
+ "Di",
+ "v"
+ ],
+ [
+ "D",
+ "iv"
+ ],
+ [
+ "▁rec",
+ "ording"
+ ],
+ [
+ "▁record",
+ "ing"
+ ],
+ [
+ "Bin",
+ "d"
+ ],
+ [
+ "Bi",
+ "nd"
+ ],
+ [
+ "B",
+ "ind"
+ ],
+ [
+ "▁correct",
+ "ly"
+ ],
+ [
+ "init",
+ "ial"
+ ],
+ [
+ "▁R",
+ "ights"
+ ],
+ [
+ "▁Right",
+ "s"
+ ],
+ [
+ "▁deb",
+ "ate"
+ ],
+ [
+ "WR",
+ "ITE"
+ ],
+ [
+ "bu",
+ "ilt"
+ ],
+ [
+ "▁per",
+ "mit"
+ ],
+ [
+ "▁perm",
+ "it"
+ ],
+ [
+ "▁professional",
+ "s"
+ ],
+ [
+ "▁profession",
+ "als"
+ ],
+ [
+ "c",
+ "v"
+ ],
+ [
+ "▁D",
+ "I"
+ ],
+ [
+ "▁",
+ "DI"
+ ],
+ [
+ "▁h",
+ "anded"
+ ],
+ [
+ "▁hand",
+ "ed"
+ ],
+ [
+ "▁han",
+ "ded"
+ ],
+ [
+ "▁",
+ "handed"
+ ],
+ [
+ "▁C",
+ "u"
+ ],
+ [
+ "▁H",
+ "ospital"
+ ],
+ [
+ "▁besk",
+ "revs"
+ ],
+ [
+ "не",
+ "й"
+ ],
+ [
+ "н",
+ "ей"
+ ],
+ [
+ "но",
+ "ст"
+ ],
+ [
+ "▁anx",
+ "iety"
+ ],
+ [
+ "▁heav",
+ "ily"
+ ],
+ [
+ "▁V",
+ "ar"
+ ],
+ [
+ "▁Va",
+ "r"
+ ],
+ [
+ "▁",
+ "Var"
+ ],
+ [
+ "▁dis",
+ "pos"
+ ],
+ [
+ "▁disp",
+ "os"
+ ],
+ [
+ "+",
+ "\""
+ ],
+ [
+ "▁E",
+ "ver"
+ ],
+ [
+ "▁Ev",
+ "er"
+ ],
+ [
+ "▁Eve",
+ "r"
+ ],
+ [
+ "iz",
+ "on"
+ ],
+ [
+ "izo",
+ "n"
+ ],
+ [
+ "i",
+ "zon"
+ ],
+ [
+ "▁oper",
+ "ators"
+ ],
+ [
+ "▁operator",
+ "s"
+ ],
+ [
+ "ne",
+ "go"
+ ],
+ [
+ "neg",
+ "o"
+ ],
+ [
+ "n",
+ "ego"
+ ],
+ [
+ "▁B",
+ "ry"
+ ],
+ [
+ "▁Br",
+ "y"
+ ],
+ [
+ "▁v",
+ "otes"
+ ],
+ [
+ "▁vo",
+ "tes"
+ ],
+ [
+ "▁vote",
+ "s"
+ ],
+ [
+ "▁vot",
+ "es"
+ ],
+ [
+ "iz",
+ "ione"
+ ],
+ [
+ "izi",
+ "one"
+ ],
+ [
+ "izio",
+ "ne"
+ ],
+ [
+ "i",
+ "zione"
+ ],
+ [
+ "▁ра",
+ "й"
+ ],
+ [
+ "▁fe",
+ "at"
+ ],
+ [
+ "▁",
+ "feat"
+ ],
+ [
+ "▁w",
+ "estern"
+ ],
+ [
+ "▁west",
+ "ern"
+ ],
+ [
+ "▁",
+ "western"
+ ],
+ [
+ "▁con",
+ "front"
+ ],
+ [
+ "▁strong",
+ "er"
+ ],
+ [
+ "▁ф",
+ "а"
+ ],
+ [
+ "▁",
+ "фа"
+ ],
+ [
+ "st",
+ "re"
+ ],
+ [
+ "str",
+ "e"
+ ],
+ [
+ "s",
+ "tre"
+ ],
+ [
+ "▁Val",
+ "id"
+ ],
+ [
+ "▁",
+ "Valid"
+ ],
+ [
+ "▁n",
+ "ad"
+ ],
+ [
+ "▁na",
+ "d"
+ ],
+ [
+ "▁check",
+ "ing"
+ ],
+ [
+ "▁bird",
+ "s"
+ ],
+ [
+ "▁North",
+ "ern"
+ ],
+ [
+ "▁Nor",
+ "thern"
+ ],
+ [
+ "▁int",
+ "ention"
+ ],
+ [
+ "▁intent",
+ "ion"
+ ],
+ [
+ "uc",
+ "e"
+ ],
+ [
+ "u",
+ "ce"
+ ],
+ [
+ "▁co",
+ "vers"
+ ],
+ [
+ "▁cover",
+ "s"
+ ],
+ [
+ "▁cov",
+ "ers"
+ ],
+ [
+ "▁wonder",
+ "ing"
+ ],
+ [
+ "▁Option",
+ "al"
+ ],
+ [
+ "▁Opt",
+ "ional"
+ ],
+ [
+ "▁",
+ "Optional"
+ ],
+ [
+ "pro",
+ "tocol"
+ ],
+ [
+ "proto",
+ "col"
+ ],
+ [
+ "prot",
+ "ocol"
+ ],
+ [
+ "▁ag",
+ "gress"
+ ],
+ [
+ "—",
+ "—"
+ ],
+ [
+ "V",
+ "ec"
+ ],
+ [
+ "▁d",
+ "ates"
+ ],
+ [
+ "▁da",
+ "tes"
+ ],
+ [
+ "▁dat",
+ "es"
+ ],
+ [
+ "▁date",
+ "s"
+ ],
+ [
+ "▁",
+ "dates"
+ ],
+ [
+ "qu",
+ "ot"
+ ],
+ [
+ "▁b",
+ "om"
+ ],
+ [
+ "▁bo",
+ "m"
+ ],
+ [
+ "▁s",
+ "can"
+ ],
+ [
+ "▁sc",
+ "an"
+ ],
+ [
+ "▁",
+ "scan"
+ ],
+ [
+ "▁I",
+ "tem"
+ ],
+ [
+ "▁It",
+ "em"
+ ],
+ [
+ "▁",
+ "Item"
+ ],
+ [
+ "▁N",
+ "avy"
+ ],
+ [
+ "▁Na",
+ "vy"
+ ],
+ [
+ "▁Nav",
+ "y"
+ ],
+ [
+ "▁G",
+ "ran"
+ ],
+ [
+ "▁Gr",
+ "an"
+ ],
+ [
+ "▁Gra",
+ "n"
+ ],
+ [
+ "▁every",
+ "body"
+ ],
+ [
+ "▁un",
+ "expected"
+ ],
+ [
+ "▁une",
+ "xpected"
+ ],
+ [
+ "▁di",
+ "vor"
+ ],
+ [
+ "▁div",
+ "or"
+ ],
+ [
+ "▁e",
+ "ase"
+ ],
+ [
+ "▁eas",
+ "e"
+ ],
+ [
+ "um",
+ "bled"
+ ],
+ [
+ "umb",
+ "led"
+ ],
+ [
+ "umble",
+ "d"
+ ],
+ [
+ "^",
+ "+"
+ ],
+ [
+ "cu",
+ "ss"
+ ],
+ [
+ "c",
+ "uss"
+ ],
+ [
+ "▁p",
+ "ale"
+ ],
+ [
+ "▁pal",
+ "e"
+ ],
+ [
+ "▁pa",
+ "le"
+ ],
+ [
+ "▁In",
+ "ga"
+ ],
+ [
+ "▁Ing",
+ "a"
+ ],
+ [
+ "▁B",
+ "road"
+ ],
+ [
+ "▁Br",
+ "oad"
+ ],
+ [
+ "▁Bro",
+ "ad"
+ ],
+ [
+ "▁",
+ "Broad"
+ ],
+ [
+ "▁Med",
+ "ic"
+ ],
+ [
+ "▁R",
+ "oy"
+ ],
+ [
+ "▁Ro",
+ "y"
+ ],
+ [
+ "▁I",
+ "nn"
+ ],
+ [
+ "▁In",
+ "n"
+ ],
+ [
+ "▁p",
+ "ens"
+ ],
+ [
+ "▁pe",
+ "ns"
+ ],
+ [
+ "▁pen",
+ "s"
+ ],
+ [
+ "P",
+ "N"
+ ],
+ [
+ ".",
+ ":"
+ ],
+ [
+ "▁princip",
+ "le"
+ ],
+ [
+ "▁let",
+ "ting"
+ ],
+ [
+ "▁lett",
+ "ing"
+ ],
+ [
+ "▁condu",
+ "cted"
+ ],
+ [
+ "▁conduct",
+ "ed"
+ ],
+ [
+ "F",
+ "ALSE"
+ ],
+ [
+ "▁O",
+ "S"
+ ],
+ [
+ "▁",
+ "OS"
+ ],
+ [
+ "F",
+ "ocus"
+ ],
+ [
+ "▁measure",
+ "d"
+ ],
+ [
+ "▁meas",
+ "ured"
+ ],
+ [
+ "▁Dem",
+ "ocratic"
+ ],
+ [
+ "▁Democr",
+ "atic"
+ ],
+ [
+ "▁Democrat",
+ "ic"
+ ],
+ [
+ "Hi",
+ "gh"
+ ],
+ [
+ "H",
+ "igh"
+ ],
+ [
+ "▁p",
+ "ré"
+ ],
+ [
+ "▁pr",
+ "é"
+ ],
+ [
+ "en",
+ "nes"
+ ],
+ [
+ "enn",
+ "es"
+ ],
+ [
+ "enne",
+ "s"
+ ],
+ [
+ "▁ind",
+ "icates"
+ ],
+ [
+ "▁indic",
+ "ates"
+ ],
+ [
+ "▁indicate",
+ "s"
+ ],
+ [
+ "▁en",
+ "ding"
+ ],
+ [
+ "▁end",
+ "ing"
+ ],
+ [
+ "▁",
+ "ending"
+ ],
+ [
+ "▁Sm",
+ "all"
+ ],
+ [
+ "▁",
+ "Small"
+ ],
+ [
+ "▁<",
+ "!--"
+ ],
+ [
+ "▁",
+ "