2022-04-08 22:02:12,850 INFO [decode.py:583] Decoding started
2022-04-08 22:02:12,851 INFO [decode.py:584] {'subsampling_factor': 4, 'vgg_frontend': False, 'use_feat_batchnorm': True, 'feature_dim': 80, 'nhead': 8, 'attention_dim': 512, 'num_decoder_layers': 6, 'search_beam': 20, 'output_beam': 8, 'min_active_states': 30, 'max_active_states': 10000, 'use_double_scores': True, 'env_info': {'k2-version': '1.14', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '6833270cb228aba7bf9681fccd41e2b52f7d984c', 'k2-git-date': 'Wed Mar 16 11:16:05 2022', 'lhotse-version': '1.0.0.dev+git.d917411.clean', 'torch-cuda-available': True, 'torch-cuda-version': '11.1', 'python-version': '3.7', 'icefall-git-branch': 'gigaspeech_recipe', 'icefall-git-sha1': 'c3993a5-dirty', 'icefall-git-date': 'Mon Mar 21 13:49:39 2022', 'icefall-path': '/userhome/user/guanbo/icefall_decode', 'k2-path': '/opt/conda/lib/python3.7/site-packages/k2-1.14.dev20220408+cuda11.1.torch1.10.0-py3.7-linux-x86_64.egg/k2/__init__.py', 'lhotse-path': '/userhome/user/guanbo/lhotse/lhotse/__init__.py', 'hostname': 'd7b02ab00b70c011ec0a3ee069db84328338-chenx8564-0', 'IP address': '10.9.150.18'}, 'epoch': 18, 'avg': 6, 'method': 'attention-decoder', 'num_paths': 1000, 'nbest_scale': 0.5, 'exp_dir': PosixPath('conformer_ctc/exp_500_8_2'), 'lang_dir': PosixPath('data/lang_bpe_500'), 'lm_dir': PosixPath('data/lm'), 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 20, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True, 'num_workers': 1, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'subset': 'XL', 'lazy_load': True, 'small_dev': False}
2022-04-08 22:02:13,611 INFO [lexicon.py:176] Loading pre-compiled data/lang_bpe_500/Linv.pt
2022-04-08 22:02:13,897 INFO [decode.py:594] device: cuda:0
2022-04-08 22:02:19,463 INFO [decode.py:656] Loading pre-compiled G_4_gram.pt
2022-04-08 22:02:23,064 INFO [decode.py:692] averaging ['conformer_ctc/exp_500_8_2/epoch-13.pt', 'conformer_ctc/exp_500_8_2/epoch-14.pt', 'conformer_ctc/exp_500_8_2/epoch-15.pt', 'conformer_ctc/exp_500_8_2/epoch-16.pt', 'conformer_ctc/exp_500_8_2/epoch-17.pt', 'conformer_ctc/exp_500_8_2/epoch-18.pt']
2022-04-08 22:04:17,302 INFO [decode.py:699] Number of model parameters: 109226120
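The `averaging [...]` line above is icefall's checkpoint averaging: the parameters of epochs 13-18 are averaged element-wise into a single model before decoding. Below is a minimal sketch of the idea, not icefall's exact `average_checkpoints`; it assumes each `epoch-*.pt` file stores the weights under a `"model"` key, as icefall training checkpoints do.

```python
# Sketch: element-wise average of the six epoch checkpoints listed above.
import torch

def average_checkpoints(paths):
    """Average the tensors of several state_dicts; illustration only."""
    avg = None
    for path in paths:
        # Assumption: the model weights live under the "model" key.
        state = torch.load(path, map_location="cpu")["model"]
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

paths = [f"conformer_ctc/exp_500_8_2/epoch-{e}.pt" for e in range(13, 19)]
# model.load_state_dict(average_checkpoints(paths))  # hypothetical usage
```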
2022-04-08 22:04:17,303 INFO [asr_datamodule.py:372] About to get dev cuts
2022-04-08 22:04:21,114 INFO [decode.py:497] batch 0/?, cuts processed until now is 3
2022-04-08 22:06:56,367 INFO [decode.py:497] batch 100/?, cuts processed until now is 243
2022-04-08 22:09:33,967 INFO [decode.py:497] batch 200/?, cuts processed until now is 464
2022-04-08 22:12:05,730 INFO [decode.py:497] batch 300/?, cuts processed until now is 665
2022-04-08 22:13:23,989 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 4.93 GiB (GPU 0; 31.75 GiB total capacity; 24.54 GiB already allocated; 3.87 GiB free; 26.53 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:13:23,989 INFO [decode.py:743] num_arcs before pruning: 333034
2022-04-08 22:13:23,989 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:13:24,010 INFO [decode.py:757] num_arcs after pruning: 7258
2022-04-08 22:14:38,171 INFO [decode.py:497] batch 400/?, cuts processed until now is 891
2022-04-08 22:17:05,640 INFO [decode.py:497] batch 500/?, cuts processed until now is 1098
2022-04-08 22:19:29,901 INFO [decode.py:497] batch 600/?, cuts processed until now is 1363
2022-04-08 22:20:05,953 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 31.75 GiB total capacity; 19.51 GiB already allocated; 7.07 GiB free; 23.32 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:20:05,954 INFO [decode.py:743] num_arcs before pruning: 514392
2022-04-08 22:20:05,954 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:20:05,966 INFO [decode.py:757] num_arcs after pruning: 13888
2022-04-08 22:22:02,765 INFO [decode.py:497] batch 700/?, cuts processed until now is 1626
2022-04-08 22:24:05,393 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 31.75 GiB total capacity; 14.24 GiB already allocated; 7.07 GiB free; 23.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:24:05,393 INFO [decode.py:743] num_arcs before pruning: 164808
2022-04-08 22:24:05,393 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:24:05,404 INFO [decode.py:757] num_arcs after pruning: 8771
2022-04-08 22:24:40,652 INFO [decode.py:497] batch 800/?, cuts processed until now is 1870
2022-04-08 22:25:03,574 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 31.75 GiB total capacity; 14.28 GiB already allocated; 7.07 GiB free; 23.32 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:25:03,575 INFO [decode.py:743] num_arcs before pruning: 267824
2022-04-08 22:25:03,575 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:25:03,582 INFO [decode.py:757] num_arcs after pruning: 9250
2022-04-08 22:27:25,872 INFO [decode.py:497] batch 900/?, cuts processed until now is 2134
2022-04-08 22:29:45,824 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 31.75 GiB total capacity; 14.45 GiB already allocated; 7.06 GiB free; 23.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:29:45,825 INFO [decode.py:743] num_arcs before pruning: 236799
2022-04-08 22:29:45,825 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:29:45,837 INFO [decode.py:757] num_arcs after pruning: 7885
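The `Caught exception` / `num_arcs before pruning` / `num_arcs after pruning` triples that recur through this run show how decode.py survives CUDA OOM during whole-lattice rescoring with G_4_gram: it catches the error, prunes low-posterior arcs from the lattice, and retries. A rough sketch of that retry loop; `rescore_fn` and `prune_fn` are hypothetical stand-ins for the real k2 operations, not icefall's exact code:

```python
# Sketch of the catch-OOM-and-prune retry loop behind the log lines above.
import torch

def rescore_with_retries(lattice, rescore_fn, prune_fn,
                         thresholds=(1e-9, 1e-8, 1e-7, 1e-6, 1e-5)):
    """Try rescoring; on failure, prune the lattice harder and retry."""
    for threshold in (None,) + tuple(thresholds):
        if threshold is not None:
            print(f"num_arcs before pruning: {lattice.num_arcs}")
            lattice = prune_fn(lattice, threshold)  # drop low-posterior arcs
            print(f"num_arcs after pruning: {lattice.num_arcs}")
        try:
            return rescore_fn(lattice)
        except RuntimeError as e:  # CUDA OOM, or a k2 runtime error
            print(f"Caught exception: {e}")
            torch.cuda.empty_cache()  # release cached blocks before retrying
    raise RuntimeError("Rescoring failed even after pruning the lattice")
```

Note that the same handler also catches k2's generic "Some bad things happened" runtime error, which is why that message appears below followed by the same "This OOM is not an error" text.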
2022-04-08 22:30:03,747 INFO [decode.py:497] batch 1000/?, cuts processed until now is 2380
2022-04-08 22:30:44,532 INFO [decode.py:736] Caught exception: Some bad things happened. Please read the above error messages and stack trace. If you are using Python, the following command may be helpful: gdb --args python /path/to/your/code.py (You can use `gdb` to debug the code. Please consider compiling a debug version of k2.). If you are unable to fix it, please open an issue at: https://github.com/k2-fsa/k2/issues/new
2022-04-08 22:30:44,532 INFO [decode.py:743] num_arcs before pruning: 632546
2022-04-08 22:30:44,533 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:30:44,585 INFO [decode.py:757] num_arcs after pruning: 10602
2022-04-08 22:32:41,978 INFO [decode.py:497] batch 1100/?, cuts processed until now is 2624
2022-04-08 22:34:54,199 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 31.75 GiB total capacity; 19.67 GiB already allocated; 5.68 GiB free; 24.72 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:34:54,200 INFO [decode.py:743] num_arcs before pruning: 227558
2022-04-08 22:34:54,200 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:34:54,218 INFO [decode.py:757] num_arcs after pruning: 8505
2022-04-08 22:35:25,806 INFO [decode.py:497] batch 1200/?, cuts processed until now is 2889
2022-04-08 22:38:28,827 INFO [decode.py:497] batch 1300/?, cuts processed until now is 3182
2022-04-08 22:39:35,318 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 2.65 GiB (GPU 0; 31.75 GiB total capacity; 27.28 GiB already allocated; 1.20 GiB free; 29.19 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:39:35,318 INFO [decode.py:743] num_arcs before pruning: 348294
2022-04-08 22:39:35,318 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:39:35,324 INFO [decode.py:757] num_arcs after pruning: 4422
2022-04-08 22:41:48,886 INFO [decode.py:497] batch 1400/?, cuts processed until now is 3491
2022-04-08 22:42:03,583 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 4.53 GiB (GPU 0; 31.75 GiB total capacity; 24.43 GiB already allocated; 1.20 GiB free; 29.19 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:42:03,584 INFO [decode.py:743] num_arcs before pruning: 446338
2022-04-08 22:42:03,584 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:42:03,592 INFO [decode.py:757] num_arcs after pruning: 13422
2022-04-08 22:44:41,081 INFO [decode.py:497] batch 1500/?, cuts processed until now is 3738
2022-04-08 22:44:48,819 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 1.94 GiB (GPU 0; 31.75 GiB total capacity; 29.06 GiB already allocated; 231.75 MiB free; 30.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:44:48,820 INFO [decode.py:743] num_arcs before pruning: 263598
2022-04-08 22:44:48,820 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:44:48,833 INFO [decode.py:757] num_arcs after pruning: 7847
2022-04-08 22:47:10,728 INFO [decode.py:497] batch 1600/?, cuts processed until now is 3970
2022-04-08 22:47:52,235 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 5.20 GiB (GPU 0; 31.75 GiB total capacity; 24.71 GiB already allocated; 231.75 MiB free; 30.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:47:52,236 INFO [decode.py:743] num_arcs before pruning: 317009
2022-04-08 22:47:52,236 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:47:52,252 INFO [decode.py:757] num_arcs after pruning: 9354
2022-04-08 22:49:32,370 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 4.55 GiB (GPU 0; 31.75 GiB total capacity; 24.05 GiB already allocated; 231.75 MiB free; 30.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:49:32,371 INFO [decode.py:743] num_arcs before pruning: 136624
2022-04-08 22:49:32,371 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:49:32,402 INFO [decode.py:757] num_arcs after pruning: 5456
2022-04-08 22:49:36,398 INFO [decode.py:497] batch 1700/?, cuts processed until now is 4192
2022-04-08 22:50:50,382 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 31.75 GiB total capacity; 19.56 GiB already allocated; 2.10 GiB free; 28.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:50:50,383 INFO [decode.py:743] num_arcs before pruning: 303893
2022-04-08 22:50:50,383 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:50:50,400 INFO [decode.py:757] num_arcs after pruning: 9312
2022-04-08 22:52:09,335 INFO [decode.py:497] batch 1800/?, cuts processed until now is 4416
2022-04-08 22:52:51,744 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 5.02 GiB (GPU 0; 31.75 GiB total capacity; 26.25 GiB already allocated; 2.10 GiB free; 28.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:52:51,745 INFO [decode.py:743] num_arcs before pruning: 379292
2022-04-08 22:52:51,745 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:52:51,751 INFO [decode.py:757] num_arcs after pruning: 14317
2022-04-08 22:54:33,478 INFO [decode.py:497] batch 1900/?, cuts processed until now is 4619
2022-04-08 22:56:34,371 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 31.75 GiB total capacity; 19.32 GiB already allocated; 3.07 GiB free; 27.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:56:34,372 INFO [decode.py:743] num_arcs before pruning: 294097
2022-04-08 22:56:34,372 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:56:34,389 INFO [decode.py:757] num_arcs after pruning: 5895
2022-04-08 22:56:47,967 INFO [decode.py:497] batch 2000/?, cuts processed until now is 4816
2022-04-08 22:58:06,236 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 31.75 GiB total capacity; 19.41 GiB already allocated; 3.06 GiB free; 27.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:58:06,236 INFO [decode.py:743] num_arcs before pruning: 253855
2022-04-08 22:58:06,236 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:58:06,253 INFO [decode.py:757] num_arcs after pruning: 9191
2022-04-08 22:58:17,534 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 2.17 GiB (GPU 0; 31.75 GiB total capacity; 26.06 GiB already allocated; 1.56 GiB free; 28.83 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:58:17,535 INFO [decode.py:743] num_arcs before pruning: 242689
2022-04-08 22:58:17,535 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:58:17,549 INFO [decode.py:757] num_arcs after pruning: 4733
2022-04-08 22:58:32,154 INFO [decode.py:736] Caught exception: CUDA out of memory. Tried to allocate 2.38 GiB (GPU 0; 31.75 GiB total capacity; 26.65 GiB already allocated; 1.57 GiB free; 28.82 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 22:58:32,155 INFO [decode.py:743] num_arcs before pruning: 288302
2022-04-08 22:58:32,155 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 22:58:32,164 INFO [decode.py:757] num_arcs after pruning: 5472
2022-04-08 22:59:15,988 INFO [decode.py:497] batch 2100/?, cuts processed until now is 4981
2022-04-08 23:00:31,937 INFO [decode.py:736] Caught exception: Some bad things happened. Please read the above error messages and stack trace. If you are using Python, the following command may be helpful: gdb --args python /path/to/your/code.py (You can use `gdb` to debug the code. Please consider compiling a debug version of k2.). If you are unable to fix it, please open an issue at: https://github.com/k2-fsa/k2/issues/new
2022-04-08 23:00:31,937 INFO [decode.py:743] num_arcs before pruning: 745182
2022-04-08 23:00:31,937 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 23:00:31,989 INFO [decode.py:757] num_arcs after pruning: 13933
2022-04-08 23:01:49,408 INFO [decode.py:497] batch 2200/?, cuts processed until now is 5132
2022-04-08 23:04:08,911 INFO [decode.py:497] batch 2300/?, cuts processed until now is 5273
2022-04-08 23:06:50,854 INFO [decode.py:497] batch 2400/?, cuts processed until now is 5388
2022-04-08 23:06:53,493 INFO [decode.py:736] Caught exception: Some bad things happened. Please read the above error messages and stack trace. If you are using Python, the following command may be helpful: gdb --args python /path/to/your/code.py (You can use `gdb` to debug the code. Please consider compiling a debug version of k2.). If you are unable to fix it, please open an issue at: https://github.com/k2-fsa/k2/issues/new
2022-04-08 23:06:53,493 INFO [decode.py:743] num_arcs before pruning: 203946
2022-04-08 23:06:53,493 INFO [decode.py:746] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 23:06:53,545 INFO [decode.py:757] num_arcs after pruning: 7172
2022-04-08 23:09:08,764 INFO [decode.py:497] batch 2500/?, cuts processed until now is 5488
2022-04-08 23:10:26,345 INFO [decode.py:841] Caught exception: CUDA out of memory. Tried to allocate 5.79 GiB (GPU 0; 31.75 GiB total capacity; 24.31 GiB already allocated; 1.58 GiB free; 28.82 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
2022-04-08 23:10:26,346 INFO [decode.py:843] num_paths before decreasing: 1000
2022-04-08 23:10:26,346 INFO [decode.py:852] This OOM is not an error. You can ignore it. If your model does not converge well, or --max-duration is too large, or the input sound file is difficult to decode, you will meet this exception.
2022-04-08 23:10:26,346 INFO [decode.py:858] num_paths after decreasing: 500
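The `decode.py:841-858` lines above show the second fallback used during attention-decoder rescoring: when rescoring the n-best list runs out of memory, the number of sampled paths is halved and the batch is retried. A sketch of that back-off; `rescore_nbest_fn` is a hypothetical stand-in for icefall's attention-decoder rescoring call:

```python
# Sketch of the num_paths back-off logged above (1000 -> 500).
import torch

def rescore_with_backoff(lattice, rescore_nbest_fn, num_paths=1000, min_paths=10):
    while num_paths >= min_paths:
        try:
            # Hypothetical: rescore the num_paths best paths with the
            # attention decoder and return the best hypothesis per cut.
            return rescore_nbest_fn(lattice, num_paths=num_paths)
        except RuntimeError as e:  # typically CUDA OOM
            print(f"Caught exception: {e}")
            print(f"num_paths before decreasing: {num_paths}")
            num_paths //= 2
            print(f"num_paths after decreasing: {num_paths}")
            torch.cuda.empty_cache()
    raise RuntimeError("Rescoring failed even with a reduced num_paths")
```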
2022-04-08 23:11:31,973 INFO [decode.py:497] batch 2600/?, cuts processed until now is 5588
2022-04-08 23:13:41,208 INFO [decode.py:497] batch 2700/?, cuts processed until now is 5688
2022-04-08 23:20:49,158 INFO [decode.py:567] For dev, WER of different settings are:
ngram_lm_scale_0.6_attention_scale_1.5 10.46 best for dev
ngram_lm_scale_0.6_attention_scale_1.7 10.46
ngram_lm_scale_0.5_attention_scale_0.9 10.47
ngram_lm_scale_0.5_attention_scale_1.0 10.47
ngram_lm_scale_0.5_attention_scale_1.1 10.47
ngram_lm_scale_0.5_attention_scale_1.2 10.47
ngram_lm_scale_0.5_attention_scale_1.3 10.47
ngram_lm_scale_0.5_attention_scale_1.5 10.47
ngram_lm_scale_0.5_attention_scale_1.7 10.47
ngram_lm_scale_0.6_attention_scale_1.3 10.47
ngram_lm_scale_0.6_attention_scale_1.9 10.47
ngram_lm_scale_0.6_attention_scale_2.0 10.47
ngram_lm_scale_0.6_attention_scale_2.1 10.47
ngram_lm_scale_0.7_attention_scale_1.9 10.47
ngram_lm_scale_0.7_attention_scale_2.0 10.47
ngram_lm_scale_0.7_attention_scale_2.1 10.47
ngram_lm_scale_0.7_attention_scale_2.2 10.47
ngram_lm_scale_0.5_attention_scale_1.9 10.48
ngram_lm_scale_0.6_attention_scale_1.1 10.48
ngram_lm_scale_0.6_attention_scale_1.2 10.48
ngram_lm_scale_0.6_attention_scale_2.2 10.48
ngram_lm_scale_0.6_attention_scale_2.3 10.48
ngram_lm_scale_0.7_attention_scale_1.5 10.48
ngram_lm_scale_0.7_attention_scale_1.7 10.48
ngram_lm_scale_0.7_attention_scale_2.3 10.48
ngram_lm_scale_0.7_attention_scale_2.5 10.48
ngram_lm_scale_0.9_attention_scale_4.0 10.48
ngram_lm_scale_0.3_attention_scale_1.1 10.49
ngram_lm_scale_0.5_attention_scale_0.6 10.49
ngram_lm_scale_0.5_attention_scale_0.7 10.49
ngram_lm_scale_0.5_attention_scale_2.0 10.49
ngram_lm_scale_0.5_attention_scale_2.1 10.49
ngram_lm_scale_0.5_attention_scale_2.5 10.49
ngram_lm_scale_0.5_attention_scale_3.0 10.49
ngram_lm_scale_0.6_attention_scale_1.0 10.49
ngram_lm_scale_0.6_attention_scale_2.5 10.49
ngram_lm_scale_0.6_attention_scale_3.0 10.49
ngram_lm_scale_0.7_attention_scale_1.3 10.49
ngram_lm_scale_0.7_attention_scale_3.0 10.49
ngram_lm_scale_0.7_attention_scale_4.0 10.49
ngram_lm_scale_0.9_attention_scale_3.0 10.49
ngram_lm_scale_0.9_attention_scale_5.0 10.49
ngram_lm_scale_1.0_attention_scale_4.0 10.49
ngram_lm_scale_1.0_attention_scale_5.0 10.49
ngram_lm_scale_1.1_attention_scale_4.0 10.49
ngram_lm_scale_1.1_attention_scale_5.0 10.49
ngram_lm_scale_1.2_attention_scale_4.0 10.49
ngram_lm_scale_1.2_attention_scale_5.0 10.49
ngram_lm_scale_1.3_attention_scale_5.0 10.49
ngram_lm_scale_1.5_attention_scale_5.0 10.49
ngram_lm_scale_0.3_attention_scale_0.7 10.5
ngram_lm_scale_0.3_attention_scale_0.9 10.5
ngram_lm_scale_0.3_attention_scale_1.0 10.5
ngram_lm_scale_0.3_attention_scale_1.2 10.5
ngram_lm_scale_0.3_attention_scale_1.3 10.5
ngram_lm_scale_0.3_attention_scale_1.5 10.5
ngram_lm_scale_0.5_attention_scale_2.2 10.5
ngram_lm_scale_0.5_attention_scale_2.3 10.5
ngram_lm_scale_0.6_attention_scale_0.7 10.5
ngram_lm_scale_0.6_attention_scale_0.9 10.5
ngram_lm_scale_0.7_attention_scale_1.0 10.5
ngram_lm_scale_0.7_attention_scale_1.1 10.5
ngram_lm_scale_0.7_attention_scale_5.0 10.5
ngram_lm_scale_0.9_attention_scale_2.1 10.5
ngram_lm_scale_1.0_attention_scale_3.0 10.5
ngram_lm_scale_1.3_attention_scale_4.0 10.5
ngram_lm_scale_1.5_attention_scale_4.0 10.5
ngram_lm_scale_0.3_attention_scale_1.7 10.51
ngram_lm_scale_0.3_attention_scale_1.9 10.51
ngram_lm_scale_0.3_attention_scale_2.0 10.51
ngram_lm_scale_0.3_attention_scale_2.1 10.51
ngram_lm_scale_0.3_attention_scale_2.2 10.51
ngram_lm_scale_0.3_attention_scale_2.3 10.51
ngram_lm_scale_0.3_attention_scale_2.5 10.51
ngram_lm_scale_0.3_attention_scale_3.0 10.51
ngram_lm_scale_0.3_attention_scale_4.0 10.51
ngram_lm_scale_0.5_attention_scale_0.5 10.51
ngram_lm_scale_0.5_attention_scale_4.0 10.51
ngram_lm_scale_0.5_attention_scale_5.0 10.51
ngram_lm_scale_0.6_attention_scale_4.0 10.51
ngram_lm_scale_0.6_attention_scale_5.0 10.51
ngram_lm_scale_0.7_attention_scale_1.2 10.51
ngram_lm_scale_0.9_attention_scale_2.0 10.51
ngram_lm_scale_0.9_attention_scale_2.2 10.51
ngram_lm_scale_0.9_attention_scale_2.3 10.51
ngram_lm_scale_0.9_attention_scale_2.5 10.51
ngram_lm_scale_1.0_attention_scale_2.2 10.51
ngram_lm_scale_1.0_attention_scale_2.3 10.51
ngram_lm_scale_1.0_attention_scale_2.5 10.51
ngram_lm_scale_1.1_attention_scale_2.5 10.51
ngram_lm_scale_1.2_attention_scale_3.0 10.51
ngram_lm_scale_1.7_attention_scale_5.0 10.51
ngram_lm_scale_0.05_attention_scale_2.5 10.52
ngram_lm_scale_0.05_attention_scale_3.0 10.52
ngram_lm_scale_0.08_attention_scale_2.5 10.52
ngram_lm_scale_0.08_attention_scale_4.0 10.52
ngram_lm_scale_0.08_attention_scale_5.0 10.52
ngram_lm_scale_0.1_attention_scale_2.5 10.52
ngram_lm_scale_0.1_attention_scale_3.0 10.52
ngram_lm_scale_0.1_attention_scale_4.0 10.52
ngram_lm_scale_0.1_attention_scale_5.0 10.52
ngram_lm_scale_0.3_attention_scale_0.5 10.52
ngram_lm_scale_0.3_attention_scale_0.6 10.52
ngram_lm_scale_0.3_attention_scale_5.0 10.52
ngram_lm_scale_0.6_attention_scale_0.6 10.52
ngram_lm_scale_0.7_attention_scale_0.9 10.52
ngram_lm_scale_0.9_attention_scale_1.7 10.52
ngram_lm_scale_0.9_attention_scale_1.9 10.52
ngram_lm_scale_1.0_attention_scale_2.0 10.52
ngram_lm_scale_1.0_attention_scale_2.1 10.52
ngram_lm_scale_1.1_attention_scale_2.3 10.52
ngram_lm_scale_1.1_attention_scale_3.0 10.52
ngram_lm_scale_1.9_attention_scale_5.0 10.52
ngram_lm_scale_0.01_attention_scale_2.5 10.53
ngram_lm_scale_0.01_attention_scale_3.0 10.53
ngram_lm_scale_0.01_attention_scale_4.0 10.53
ngram_lm_scale_0.01_attention_scale_5.0 10.53
ngram_lm_scale_0.05_attention_scale_1.9 10.53
ngram_lm_scale_0.05_attention_scale_2.1 10.53
ngram_lm_scale_0.05_attention_scale_2.3 10.53
ngram_lm_scale_0.05_attention_scale_4.0 10.53
ngram_lm_scale_0.05_attention_scale_5.0 10.53
ngram_lm_scale_0.08_attention_scale_1.9 10.53
ngram_lm_scale_0.08_attention_scale_2.1 10.53
ngram_lm_scale_0.08_attention_scale_2.2 10.53
ngram_lm_scale_0.08_attention_scale_2.3 10.53
ngram_lm_scale_0.08_attention_scale_3.0 10.53
ngram_lm_scale_0.1_attention_scale_2.2 10.53
ngram_lm_scale_0.1_attention_scale_2.3 10.53
ngram_lm_scale_0.3_attention_scale_0.3 10.53
ngram_lm_scale_0.9_attention_scale_1.5 10.53
ngram_lm_scale_1.0_attention_scale_1.9 10.53
ngram_lm_scale_1.1_attention_scale_2.1 10.53
ngram_lm_scale_1.1_attention_scale_2.2 10.53
ngram_lm_scale_1.2_attention_scale_2.5 10.53
ngram_lm_scale_1.3_attention_scale_3.0 10.53
ngram_lm_scale_1.7_attention_scale_4.0 10.53
ngram_lm_scale_2.0_attention_scale_5.0 10.53
ngram_lm_scale_0.01_attention_scale_2.2 10.54
ngram_lm_scale_0.01_attention_scale_2.3 10.54
ngram_lm_scale_0.05_attention_scale_1.7 10.54
ngram_lm_scale_0.05_attention_scale_2.0 10.54
ngram_lm_scale_0.05_attention_scale_2.2 10.54
ngram_lm_scale_0.08_attention_scale_1.2 10.54
ngram_lm_scale_0.08_attention_scale_1.3 10.54
ngram_lm_scale_0.08_attention_scale_1.7 10.54
ngram_lm_scale_0.08_attention_scale_2.0 10.54
ngram_lm_scale_0.1_attention_scale_1.5 10.54
ngram_lm_scale_0.1_attention_scale_1.7 10.54
ngram_lm_scale_0.1_attention_scale_1.9 10.54
ngram_lm_scale_0.1_attention_scale_2.0 10.54
ngram_lm_scale_0.1_attention_scale_2.1 10.54
ngram_lm_scale_0.9_attention_scale_1.2 10.54
ngram_lm_scale_1.0_attention_scale_1.7 10.54
ngram_lm_scale_1.2_attention_scale_2.3 10.54
ngram_lm_scale_1.3_attention_scale_2.3 10.54
ngram_lm_scale_1.5_attention_scale_3.0 10.54
ngram_lm_scale_0.01_attention_scale_1.9 10.55
ngram_lm_scale_0.01_attention_scale_2.0 10.55
ngram_lm_scale_0.01_attention_scale_2.1 10.55
ngram_lm_scale_0.05_attention_scale_1.2 10.55
ngram_lm_scale_0.05_attention_scale_1.3 10.55
ngram_lm_scale_0.08_attention_scale_1.1 10.55
ngram_lm_scale_0.08_attention_scale_1.5 10.55
ngram_lm_scale_0.1_attention_scale_1.1 10.55
ngram_lm_scale_0.1_attention_scale_1.2 10.55
ngram_lm_scale_0.1_attention_scale_1.3 10.55
ngram_lm_scale_0.6_attention_scale_0.5 10.55
ngram_lm_scale_0.7_attention_scale_0.7 10.55
ngram_lm_scale_0.9_attention_scale_1.3 10.55
ngram_lm_scale_1.0_attention_scale_1.5 10.55
ngram_lm_scale_1.1_attention_scale_2.0 10.55
ngram_lm_scale_1.2_attention_scale_2.0 10.55
ngram_lm_scale_1.2_attention_scale_2.1 10.55
ngram_lm_scale_1.2_attention_scale_2.2 10.55
ngram_lm_scale_1.3_attention_scale_2.2 10.55
ngram_lm_scale_1.3_attention_scale_2.5 10.55
ngram_lm_scale_2.1_attention_scale_5.0 10.55
ngram_lm_scale_0.01_attention_scale_1.1 10.56
ngram_lm_scale_0.01_attention_scale_1.3 10.56
ngram_lm_scale_0.01_attention_scale_1.7 10.56
ngram_lm_scale_0.05_attention_scale_1.1 10.56
ngram_lm_scale_0.05_attention_scale_1.5 10.56
ngram_lm_scale_0.08_attention_scale_1.0 10.56
ngram_lm_scale_0.1_attention_scale_1.0 10.56
ngram_lm_scale_0.7_attention_scale_0.6 10.56
ngram_lm_scale_0.9_attention_scale_1.1 10.56
ngram_lm_scale_1.0_attention_scale_1.3 10.56
ngram_lm_scale_1.1_attention_scale_1.7 10.56
ngram_lm_scale_1.1_attention_scale_1.9 10.56
ngram_lm_scale_1.2_attention_scale_1.9 10.56
ngram_lm_scale_1.3_attention_scale_2.0 10.56
ngram_lm_scale_1.9_attention_scale_4.0 10.56
ngram_lm_scale_2.2_attention_scale_5.0 10.56
ngram_lm_scale_0.01_attention_scale_1.2 10.57
ngram_lm_scale_0.01_attention_scale_1.5 10.57
ngram_lm_scale_0.05_attention_scale_1.0 10.57
ngram_lm_scale_0.1_attention_scale_0.5 10.57
ngram_lm_scale_0.1_attention_scale_0.7 10.57
ngram_lm_scale_0.1_attention_scale_0.9 10.57
ngram_lm_scale_0.5_attention_scale_0.3 10.57
ngram_lm_scale_0.9_attention_scale_1.0 10.57
ngram_lm_scale_1.1_attention_scale_1.5 10.57
ngram_lm_scale_1.2_attention_scale_1.7 10.57
ngram_lm_scale_1.3_attention_scale_2.1 10.57
ngram_lm_scale_0.01_attention_scale_1.0 10.58
ngram_lm_scale_0.05_attention_scale_0.9 10.58
ngram_lm_scale_0.08_attention_scale_0.7 10.58
ngram_lm_scale_0.08_attention_scale_0.9 10.58
ngram_lm_scale_0.1_attention_scale_0.6 10.58
ngram_lm_scale_0.3_attention_scale_0.1 10.58
ngram_lm_scale_0.9_attention_scale_0.9 10.58
ngram_lm_scale_1.0_attention_scale_1.2 10.58
ngram_lm_scale_1.3_attention_scale_1.9 10.58
ngram_lm_scale_1.5_attention_scale_2.5 10.58
ngram_lm_scale_2.0_attention_scale_4.0 10.58
ngram_lm_scale_0.01_attention_scale_0.9 10.59
ngram_lm_scale_0.08_attention_scale_0.5 10.59
ngram_lm_scale_0.08_attention_scale_0.6 10.59
ngram_lm_scale_0.1_attention_scale_0.3 10.59
ngram_lm_scale_0.3_attention_scale_0.08 10.59
ngram_lm_scale_0.6_attention_scale_0.3 10.59
ngram_lm_scale_0.7_attention_scale_0.5 10.59
ngram_lm_scale_1.7_attention_scale_3.0 10.59
ngram_lm_scale_2.3_attention_scale_5.0 10.59
ngram_lm_scale_0.05_attention_scale_0.6 10.6
ngram_lm_scale_0.05_attention_scale_0.7 10.6
ngram_lm_scale_0.08_attention_scale_0.3 10.6
ngram_lm_scale_0.3_attention_scale_0.05 10.6
ngram_lm_scale_1.0_attention_scale_1.1 10.6
ngram_lm_scale_1.1_attention_scale_1.3 10.6
ngram_lm_scale_1.2_attention_scale_1.5 10.6
ngram_lm_scale_1.5_attention_scale_2.3 10.6
ngram_lm_scale_0.01_attention_scale_0.7 10.61
ngram_lm_scale_1.3_attention_scale_1.7 10.61
ngram_lm_scale_0.01_attention_scale_0.6 10.62
ngram_lm_scale_0.05_attention_scale_0.3 10.62
ngram_lm_scale_0.05_attention_scale_0.5 10.62
ngram_lm_scale_0.1_attention_scale_0.1 10.62
ngram_lm_scale_2.1_attention_scale_4.0 10.62
ngram_lm_scale_0.01_attention_scale_0.5 10.63
ngram_lm_scale_1.0_attention_scale_1.0 10.63
ngram_lm_scale_1.5_attention_scale_2.2 10.63
ngram_lm_scale_2.5_attention_scale_5.0 10.63
ngram_lm_scale_0.08_attention_scale_0.1 10.64
ngram_lm_scale_0.1_attention_scale_0.08 10.64
ngram_lm_scale_0.3_attention_scale_0.01 10.64
ngram_lm_scale_1.1_attention_scale_1.2 10.64
ngram_lm_scale_0.01_attention_scale_0.3 10.65
ngram_lm_scale_0.5_attention_scale_0.1 10.65
ngram_lm_scale_0.7_attention_scale_0.3 10.65
ngram_lm_scale_1.5_attention_scale_2.1 10.65
ngram_lm_scale_0.08_attention_scale_0.08 10.66
ngram_lm_scale_0.1_attention_scale_0.05 10.66
ngram_lm_scale_0.5_attention_scale_0.08 10.66
ngram_lm_scale_0.9_attention_scale_0.7 10.66
ngram_lm_scale_2.2_attention_scale_4.0 10.66
ngram_lm_scale_0.1_attention_scale_0.01 10.67
ngram_lm_scale_1.0_attention_scale_0.9 10.67
ngram_lm_scale_1.1_attention_scale_1.1 10.67
ngram_lm_scale_1.7_attention_scale_2.5 10.67
ngram_lm_scale_0.05_attention_scale_0.1 10.68
ngram_lm_scale_0.5_attention_scale_0.05 10.68
ngram_lm_scale_1.5_attention_scale_2.0 10.68
ngram_lm_scale_0.05_attention_scale_0.08 10.69
ngram_lm_scale_0.08_attention_scale_0.05 10.69
ngram_lm_scale_1.2_attention_scale_1.3 10.69
ngram_lm_scale_1.9_attention_scale_3.0 10.69
ngram_lm_scale_0.08_attention_scale_0.01 10.7
ngram_lm_scale_0.6_attention_scale_0.1 10.7
ngram_lm_scale_1.3_attention_scale_1.5 10.7
ngram_lm_scale_2.3_attention_scale_4.0 10.7
ngram_lm_scale_0.05_attention_scale_0.05 10.71
ngram_lm_scale_0.5_attention_scale_0.01 10.71
ngram_lm_scale_0.9_attention_scale_0.6 10.71
ngram_lm_scale_1.1_attention_scale_1.0 10.71
ngram_lm_scale_1.5_attention_scale_1.9 10.71
ngram_lm_scale_0.01_attention_scale_0.1 10.72
ngram_lm_scale_0.01_attention_scale_0.08 10.73
ngram_lm_scale_0.05_attention_scale_0.01 10.73
ngram_lm_scale_0.6_attention_scale_0.08 10.73
ngram_lm_scale_1.2_attention_scale_1.2 10.73
ngram_lm_scale_0.01_attention_scale_0.05 10.75
ngram_lm_scale_0.9_attention_scale_0.5 10.75
ngram_lm_scale_1.0_attention_scale_0.7 10.75
ngram_lm_scale_1.1_attention_scale_0.9 10.75
ngram_lm_scale_1.2_attention_scale_1.1 10.75
ngram_lm_scale_1.3_attention_scale_1.3 10.76
ngram_lm_scale_1.7_attention_scale_2.3 10.76
ngram_lm_scale_2.0_attention_scale_3.0 10.77
ngram_lm_scale_0.6_attention_scale_0.05 10.78
ngram_lm_scale_0.01_attention_scale_0.01 10.79
ngram_lm_scale_1.5_attention_scale_1.7 10.79
ngram_lm_scale_1.7_attention_scale_2.2 10.79
ngram_lm_scale_1.2_attention_scale_1.0 10.8
ngram_lm_scale_1.3_attention_scale_1.2 10.8
ngram_lm_scale_2.5_attention_scale_4.0 10.81
ngram_lm_scale_1.7_attention_scale_2.1 10.82
ngram_lm_scale_1.0_attention_scale_0.6 10.83
ngram_lm_scale_2.1_attention_scale_3.0 10.84
ngram_lm_scale_0.6_attention_scale_0.01 10.85
ngram_lm_scale_1.7_attention_scale_2.0 10.85
ngram_lm_scale_1.9_attention_scale_2.5 10.85
ngram_lm_scale_3.0_attention_scale_5.0 10.86
ngram_lm_scale_1.3_attention_scale_1.1 10.87
ngram_lm_scale_0.7_attention_scale_0.1 10.88
ngram_lm_scale_1.5_attention_scale_1.5 10.88
ngram_lm_scale_1.2_attention_scale_0.9 10.89
ngram_lm_scale_1.7_attention_scale_1.9 10.89
ngram_lm_scale_2.2_attention_scale_3.0 10.9
ngram_lm_scale_1.1_attention_scale_0.7 10.91
ngram_lm_scale_1.9_attention_scale_2.3 10.91
ngram_lm_scale_2.0_attention_scale_2.5 10.91
ngram_lm_scale_0.7_attention_scale_0.08 10.92
ngram_lm_scale_0.7_attention_scale_0.05 10.96
ngram_lm_scale_1.0_attention_scale_0.5 10.96
ngram_lm_scale_1.9_attention_scale_2.2 10.97
ngram_lm_scale_2.3_attention_scale_3.0 10.97
ngram_lm_scale_1.3_attention_scale_1.0 10.99
ngram_lm_scale_1.7_attention_scale_1.7 11.01
ngram_lm_scale_2.1_attention_scale_2.5 11.02
ngram_lm_scale_0.9_attention_scale_0.3 11.03
ngram_lm_scale_1.9_attention_scale_2.1 11.03
ngram_lm_scale_0.7_attention_scale_0.01 11.04
ngram_lm_scale_1.5_attention_scale_1.3 11.04
ngram_lm_scale_2.0_attention_scale_2.3 11.04
ngram_lm_scale_1.1_attention_scale_0.6 11.05
ngram_lm_scale_1.9_attention_scale_2.0 11.1
ngram_lm_scale_2.0_attention_scale_2.2 11.1
ngram_lm_scale_1.3_attention_scale_0.9 11.11
ngram_lm_scale_1.2_attention_scale_0.7 11.14
ngram_lm_scale_1.5_attention_scale_1.2 11.15
ngram_lm_scale_2.2_attention_scale_2.5 11.16
ngram_lm_scale_2.1_attention_scale_2.3 11.17
ngram_lm_scale_3.0_attention_scale_4.0 11.17
ngram_lm_scale_1.9_attention_scale_1.9 11.18
ngram_lm_scale_2.0_attention_scale_2.1 11.18
ngram_lm_scale_1.1_attention_scale_0.5 11.19
ngram_lm_scale_2.5_attention_scale_3.0 11.19
ngram_lm_scale_1.7_attention_scale_1.5 11.21
ngram_lm_scale_2.1_attention_scale_2.2 11.25
ngram_lm_scale_1.2_attention_scale_0.6 11.26
ngram_lm_scale_1.5_attention_scale_1.1 11.26
ngram_lm_scale_2.0_attention_scale_2.0 11.26
ngram_lm_scale_1.0_attention_scale_0.3 11.29
ngram_lm_scale_2.3_attention_scale_2.5 11.3
ngram_lm_scale_2.2_attention_scale_2.3 11.31
ngram_lm_scale_2.1_attention_scale_2.1 11.32
ngram_lm_scale_2.0_attention_scale_1.9 11.34
ngram_lm_scale_1.3_attention_scale_0.7 11.36
ngram_lm_scale_1.9_attention_scale_1.7 11.37
ngram_lm_scale_1.5_attention_scale_1.0 11.4
ngram_lm_scale_2.2_attention_scale_2.2 11.4
ngram_lm_scale_2.1_attention_scale_2.0 11.41
ngram_lm_scale_0.9_attention_scale_0.1 11.42
ngram_lm_scale_1.7_attention_scale_1.3 11.44
ngram_lm_scale_1.2_attention_scale_0.5 11.45
ngram_lm_scale_0.9_attention_scale_0.08 11.47
ngram_lm_scale_2.3_attention_scale_2.3 11.48
ngram_lm_scale_2.2_attention_scale_2.1 11.51
ngram_lm_scale_2.1_attention_scale_1.9 11.54
ngram_lm_scale_1.3_attention_scale_0.6 11.55
ngram_lm_scale_1.5_attention_scale_0.9 11.56
ngram_lm_scale_0.9_attention_scale_0.05 11.57
ngram_lm_scale_2.0_attention_scale_1.7 11.57
ngram_lm_scale_2.3_attention_scale_2.2 11.58
ngram_lm_scale_1.1_attention_scale_0.3 11.59
ngram_lm_scale_1.7_attention_scale_1.2 11.59
ngram_lm_scale_1.9_attention_scale_1.5 11.63
ngram_lm_scale_2.2_attention_scale_2.0 11.63
ngram_lm_scale_2.5_attention_scale_2.5 11.63
ngram_lm_scale_4.0_attention_scale_5.0 11.67
ngram_lm_scale_2.3_attention_scale_2.1 11.7
ngram_lm_scale_0.9_attention_scale_0.01 11.71
ngram_lm_scale_2.2_attention_scale_1.9 11.73
ngram_lm_scale_1.3_attention_scale_0.5 11.76
ngram_lm_scale_1.7_attention_scale_1.1 11.76
ngram_lm_scale_1.0_attention_scale_0.1 11.78
ngram_lm_scale_2.1_attention_scale_1.7 11.8
ngram_lm_scale_2.3_attention_scale_2.0 11.8
ngram_lm_scale_2.5_attention_scale_2.3 11.83
ngram_lm_scale_2.0_attention_scale_1.5 11.86
ngram_lm_scale_1.0_attention_scale_0.08 11.89
ngram_lm_scale_1.9_attention_scale_1.3 11.93
ngram_lm_scale_3.0_attention_scale_3.0 11.94
ngram_lm_scale_1.2_attention_scale_0.3 11.95
ngram_lm_scale_1.7_attention_scale_1.0 11.95
ngram_lm_scale_2.3_attention_scale_1.9 11.95
ngram_lm_scale_2.5_attention_scale_2.2 11.96
ngram_lm_scale_1.5_attention_scale_0.7 11.98
ngram_lm_scale_1.0_attention_scale_0.05 12.0
ngram_lm_scale_2.2_attention_scale_1.7 12.02
ngram_lm_scale_2.1_attention_scale_1.5 12.09
ngram_lm_scale_2.5_attention_scale_2.1 12.09
ngram_lm_scale_1.9_attention_scale_1.2 12.12
ngram_lm_scale_1.7_attention_scale_0.9 12.16
ngram_lm_scale_1.0_attention_scale_0.01 12.19
ngram_lm_scale_2.0_attention_scale_1.3 12.2
ngram_lm_scale_2.5_attention_scale_2.0 12.22
ngram_lm_scale_1.5_attention_scale_0.6 12.24
ngram_lm_scale_2.3_attention_scale_1.7 12.24
ngram_lm_scale_1.1_attention_scale_0.1 12.27
ngram_lm_scale_1.9_attention_scale_1.1 12.3
ngram_lm_scale_4.0_attention_scale_4.0 12.31
ngram_lm_scale_2.2_attention_scale_1.5 12.32
ngram_lm_scale_2.5_attention_scale_1.9 12.35
ngram_lm_scale_1.1_attention_scale_0.08 12.36
ngram_lm_scale_2.0_attention_scale_1.2 12.37
ngram_lm_scale_1.3_attention_scale_0.3 12.4
ngram_lm_scale_2.1_attention_scale_1.3 12.43
ngram_lm_scale_3.0_attention_scale_2.5 12.46
ngram_lm_scale_1.1_attention_scale_0.05 12.51
ngram_lm_scale_1.9_attention_scale_1.0 12.52
ngram_lm_scale_2.3_attention_scale_1.5 12.53
ngram_lm_scale_1.5_attention_scale_0.5 12.54
ngram_lm_scale_2.0_attention_scale_1.1 12.58
ngram_lm_scale_5.0_attention_scale_5.0 12.62
ngram_lm_scale_2.1_attention_scale_1.2 12.63
ngram_lm_scale_2.5_attention_scale_1.7 12.64
ngram_lm_scale_1.7_attention_scale_0.7 12.68
ngram_lm_scale_2.2_attention_scale_1.3 12.68
ngram_lm_scale_1.1_attention_scale_0.01 12.72
ngram_lm_scale_3.0_attention_scale_2.3 12.72
ngram_lm_scale_1.9_attention_scale_0.9 12.78
ngram_lm_scale_1.2_attention_scale_0.1 12.79
ngram_lm_scale_2.0_attention_scale_1.0 12.82
ngram_lm_scale_2.1_attention_scale_1.1 12.86
ngram_lm_scale_3.0_attention_scale_2.2 12.87
ngram_lm_scale_1.2_attention_scale_0.08 12.88
ngram_lm_scale_2.2_attention_scale_1.2 12.92
ngram_lm_scale_2.3_attention_scale_1.3 12.97
ngram_lm_scale_1.7_attention_scale_0.6 12.98
ngram_lm_scale_3.0_attention_scale_2.1 13.03
ngram_lm_scale_2.5_attention_scale_1.5 13.04
ngram_lm_scale_1.2_attention_scale_0.05 13.05
ngram_lm_scale_2.0_attention_scale_0.9 13.11
ngram_lm_scale_2.1_attention_scale_1.0 13.17
ngram_lm_scale_2.2_attention_scale_1.1 13.2
ngram_lm_scale_3.0_attention_scale_2.0 13.2
ngram_lm_scale_2.3_attention_scale_1.2 13.24
ngram_lm_scale_1.2_attention_scale_0.01 13.27
ngram_lm_scale_1.3_attention_scale_0.1 13.3
ngram_lm_scale_1.5_attention_scale_0.3 13.32
ngram_lm_scale_1.7_attention_scale_0.5 13.33
ngram_lm_scale_1.3_attention_scale_0.08 13.4
ngram_lm_scale_4.0_attention_scale_3.0 13.41
ngram_lm_scale_1.9_attention_scale_0.7 13.42
ngram_lm_scale_3.0_attention_scale_1.9 13.42
ngram_lm_scale_2.1_attention_scale_0.9 13.45
ngram_lm_scale_2.2_attention_scale_1.0 13.46
ngram_lm_scale_2.3_attention_scale_1.1 13.47
ngram_lm_scale_2.5_attention_scale_1.3 13.53
ngram_lm_scale_1.3_attention_scale_0.05 13.56
ngram_lm_scale_5.0_attention_scale_4.0 13.57
ngram_lm_scale_2.0_attention_scale_0.7 13.73
ngram_lm_scale_2.2_attention_scale_0.9 13.74
ngram_lm_scale_1.9_attention_scale_0.6 13.75
ngram_lm_scale_2.3_attention_scale_1.0 13.75
ngram_lm_scale_2.5_attention_scale_1.2 13.78
ngram_lm_scale_1.3_attention_scale_0.01 13.81
ngram_lm_scale_3.0_attention_scale_1.7 13.84
ngram_lm_scale_2.5_attention_scale_1.1 14.05
ngram_lm_scale_2.1_attention_scale_0.7 14.07
ngram_lm_scale_2.3_attention_scale_0.9 14.07
ngram_lm_scale_2.0_attention_scale_0.6 14.1
ngram_lm_scale_1.9_attention_scale_0.5 14.14
ngram_lm_scale_1.7_attention_scale_0.3 14.18
ngram_lm_scale_4.0_attention_scale_2.5 14.2
ngram_lm_scale_3.0_attention_scale_1.5 14.28
ngram_lm_scale_1.5_attention_scale_0.1 14.3
ngram_lm_scale_2.5_attention_scale_1.0 14.35
ngram_lm_scale_1.5_attention_scale_0.08 14.41
ngram_lm_scale_2.2_attention_scale_0.7 14.42
ngram_lm_scale_2.1_attention_scale_0.6 14.47
ngram_lm_scale_2.0_attention_scale_0.5 14.51
ngram_lm_scale_4.0_attention_scale_2.3 14.56
ngram_lm_scale_1.5_attention_scale_0.05 14.57
ngram_lm_scale_2.5_attention_scale_0.9 14.66
ngram_lm_scale_2.3_attention_scale_0.7 14.72
ngram_lm_scale_4.0_attention_scale_2.2 14.75
ngram_lm_scale_2.2_attention_scale_0.6 14.76
ngram_lm_scale_3.0_attention_scale_1.3 14.76
ngram_lm_scale_2.1_attention_scale_0.5 14.8
ngram_lm_scale_1.5_attention_scale_0.01 14.82
ngram_lm_scale_5.0_attention_scale_3.0 14.84
ngram_lm_scale_4.0_attention_scale_2.1 14.9
ngram_lm_scale_1.9_attention_scale_0.3 14.93
ngram_lm_scale_3.0_attention_scale_1.2 14.98
ngram_lm_scale_2.3_attention_scale_0.6 15.04
ngram_lm_scale_4.0_attention_scale_2.0 15.07
ngram_lm_scale_2.2_attention_scale_0.5 15.13
ngram_lm_scale_1.7_attention_scale_0.1 15.2
ngram_lm_scale_3.0_attention_scale_1.1 15.24
ngram_lm_scale_4.0_attention_scale_1.9 15.25
ngram_lm_scale_2.5_attention_scale_0.7 15.26
ngram_lm_scale_1.7_attention_scale_0.08 15.3
ngram_lm_scale_2.0_attention_scale_0.3 15.31
ngram_lm_scale_2.3_attention_scale_0.5 15.41
ngram_lm_scale_1.7_attention_scale_0.05 15.48
ngram_lm_scale_3.0_attention_scale_1.0 15.54
ngram_lm_scale_2.5_attention_scale_0.6 15.59
ngram_lm_scale_5.0_attention_scale_2.5 15.61
ngram_lm_scale_2.1_attention_scale_0.3 15.62
ngram_lm_scale_4.0_attention_scale_1.7 15.66
ngram_lm_scale_1.7_attention_scale_0.01 15.73
ngram_lm_scale_3.0_attention_scale_0.9 15.8
ngram_lm_scale_5.0_attention_scale_2.3 15.9
ngram_lm_scale_1.9_attention_scale_0.1 15.91
ngram_lm_scale_2.2_attention_scale_0.3 15.93
ngram_lm_scale_2.5_attention_scale_0.5 15.96
ngram_lm_scale_1.9_attention_scale_0.08 16.02
ngram_lm_scale_4.0_attention_scale_1.5 16.04
ngram_lm_scale_5.0_attention_scale_2.2 16.04
ngram_lm_scale_1.9_attention_scale_0.05 16.18
ngram_lm_scale_5.0_attention_scale_2.1 16.2
ngram_lm_scale_2.3_attention_scale_0.3 16.21
ngram_lm_scale_2.0_attention_scale_0.1 16.25
ngram_lm_scale_3.0_attention_scale_0.7 16.34
ngram_lm_scale_2.0_attention_scale_0.08 16.35
ngram_lm_scale_5.0_attention_scale_2.0 16.37
ngram_lm_scale_1.9_attention_scale_0.01 16.42
ngram_lm_scale_4.0_attention_scale_1.3 16.45
ngram_lm_scale_2.0_attention_scale_0.05 16.5
ngram_lm_scale_5.0_attention_scale_1.9 16.52
ngram_lm_scale_2.1_attention_scale_0.1 16.55
ngram_lm_scale_4.0_attention_scale_1.2 16.62
ngram_lm_scale_2.1_attention_scale_0.08 16.64
ngram_lm_scale_3.0_attention_scale_0.6 16.64
ngram_lm_scale_2.5_attention_scale_0.3 16.67
ngram_lm_scale_2.0_attention_scale_0.01 16.71
ngram_lm_scale_2.1_attention_scale_0.05 16.77
ngram_lm_scale_2.2_attention_scale_0.1 16.8
ngram_lm_scale_5.0_attention_scale_1.7 16.82
ngram_lm_scale_4.0_attention_scale_1.1 16.84
ngram_lm_scale_2.2_attention_scale_0.08 16.89
ngram_lm_scale_3.0_attention_scale_0.5 16.95
ngram_lm_scale_2.1_attention_scale_0.01 16.99
ngram_lm_scale_2.2_attention_scale_0.05 17.02
ngram_lm_scale_2.3_attention_scale_0.1 17.02
ngram_lm_scale_4.0_attention_scale_1.0 17.07
ngram_lm_scale_2.3_attention_scale_0.08 17.09
ngram_lm_scale_5.0_attention_scale_1.5 17.16
ngram_lm_scale_2.2_attention_scale_0.01 17.18
ngram_lm_scale_2.3_attention_scale_0.05 17.2
ngram_lm_scale_4.0_attention_scale_0.9 17.24
ngram_lm_scale_2.3_attention_scale_0.01 17.38
ngram_lm_scale_2.5_attention_scale_0.1 17.4
ngram_lm_scale_5.0_attention_scale_1.3 17.45
ngram_lm_scale_2.5_attention_scale_0.08 17.47
ngram_lm_scale_3.0_attention_scale_0.3 17.53
ngram_lm_scale_2.5_attention_scale_0.05 17.58
ngram_lm_scale_5.0_attention_scale_1.2 17.63
ngram_lm_scale_2.5_attention_scale_0.01 17.7
ngram_lm_scale_4.0_attention_scale_0.7 17.7
ngram_lm_scale_5.0_attention_scale_1.1 17.8
ngram_lm_scale_4.0_attention_scale_0.6 17.89
ngram_lm_scale_5.0_attention_scale_1.0 17.94
ngram_lm_scale_3.0_attention_scale_0.1 18.09
ngram_lm_scale_4.0_attention_scale_0.5 18.09
ngram_lm_scale_5.0_attention_scale_0.9 18.09
ngram_lm_scale_3.0_attention_scale_0.08 18.14
ngram_lm_scale_3.0_attention_scale_0.05 18.21
ngram_lm_scale_3.0_attention_scale_0.01 18.31
ngram_lm_scale_5.0_attention_scale_0.7 18.41
ngram_lm_scale_4.0_attention_scale_0.3 18.49
ngram_lm_scale_5.0_attention_scale_0.6 18.57
ngram_lm_scale_5.0_attention_scale_0.5 18.71
ngram_lm_scale_4.0_attention_scale_0.1 18.85
ngram_lm_scale_4.0_attention_scale_0.08 18.88
ngram_lm_scale_4.0_attention_scale_0.05 18.95
ngram_lm_scale_5.0_attention_scale_0.3 19.01
ngram_lm_scale_4.0_attention_scale_0.01 19.02
ngram_lm_scale_5.0_attention_scale_0.1 19.3
ngram_lm_scale_5.0_attention_scale_0.08 19.32
ngram_lm_scale_5.0_attention_scale_0.05 19.37
ngram_lm_scale_5.0_attention_scale_0.01 19.43
2022-04-08 23:20:49,165 INFO [decode.py:730] Done!
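Each row above is one point on a grid search over the two rescoring weights: every n-best path receives a combined score, and the (ngram_lm_scale, attention_scale) pair with the lowest dev WER (here 0.6/1.5 at 10.46) is the one you would reuse for the test sets. A sketch of the combination, under the assumption that per-path acoustic, 4-gram LM, and attention-decoder scores are already available as tensors:

```python
import torch

def combined_score(am_score, ngram_lm_score, attention_score,
                   ngram_lm_scale, attention_scale):
    # Per-path total score; the best-scoring path in each lattice becomes
    # the hypothesis for this (ngram_lm_scale, attention_scale) setting.
    return am_score + ngram_lm_scale * ngram_lm_score + attention_scale * attention_score

# The grid swept in the table above.
scales = [0.01, 0.05, 0.08, 0.1, 0.3, 0.5, 0.6, 0.7, 0.9, 1.0, 1.1, 1.2,
          1.3, 1.5, 1.7, 1.9, 2.0, 2.1, 2.2, 2.3, 2.5, 3.0, 4.0, 5.0]
# for lm_s in scales:
#     for att_s in scales:
#         decode with combined_score(..., lm_s, att_s) and record the WER
```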