import streamlit as st
from streamlit_chat import message
import pandas as pd
# AutoModelWithLMHead is deprecated in newer transformers releases
# (AutoModelForCausalLM is its replacement), but it still works on the
# transformers version this script targets, so it is kept throughout.
from transformers import AutoModelWithLMHead, AutoTokenizer

st.set_page_config(
    page_title="SAI",
    page_icon=":robot:"
)

#########################

# Question/answer pairs about the film, used below to fine-tune DialoGPT.

data = {'Question': ['What did Conan and Heiji see at the museum?',
 'What did Natsumi Kosaka ask for?',
 'What did Heiji say about the time?',
 "What did Conan figure out about Kid's message?",
 'What did Heiji realize about the "Shining Sky chamber"?',
 'What did Kid do at Osaka Castle?',
 'What did Conan ask Nishino?',
 'Where did Nakamori take the egg?',
 'What did Kid do at the Haginochaya electric substation?',
 'How did Kid get to the warehouse?',
 'Who did Conan find in the warehouse?',
 'What did Kid do to escape from the warehouse?',
 'What happened to Kid at the end?','What is the Memories Egg?',
 'How many of the Fabergé eggs have been found so far?',
 'Who currently owns the Memories Egg?',
 'Where is the Suzuki corporation planning to display the Memories Egg?',
 'Who is Kaito Kid?',
 'What do the police believe "between the dusk of the Lion and the dawn of the Virgin" means?',
 'Who did the police ask to help protect the egg?',
 'Where do the Detective Boys go while Kogoro, Ran, and Conan go to Osaka?',
 'What riddle does Professor Agasa give the Detective Boys?',
 'Who is Sergei Ovchinnikov?',
 'Who is Shoichi Inui?',
 'What is the value of the Memories Egg?',
 'What is inside the Memories Egg?',
 'What is the significance of the lock on the front of the Memories Egg?',
 'Why is the Tsarina missing from the Memories Egg?',
 'Who is planning to steal the Memories Egg and what is their message?','Who is the thief?',
 'What is the target of the theft?',
 'Who is the owner of the Imperial Easter Egg?',
 'Where is the Imperial Easter Egg located?',
 'When is the theft planned to take place?',
 'Is there an advanced notice of the theft?',
 'What is the Memories Egg?',
 'Who is Kaitou Kid?',
 'Has Kaitou Kid stolen anything before?',
 'Is there anyone trying to stop Kaitou Kid from stealing the Imperial Easter Egg?','Who is Conan Edogawa?',
 'What happened to Shinichi Kudo?',
 "What is Conan's goal?",
 'How does Conan plan to achieve his goal?',
 'Who is Kogoro Mouri?',
 'Who is Ran Mouri?',
 'Who is Juzo Megure?',
 'Who is Ninzaburo Shiratori?',
 'Who is Wataru Takagi?',
 'Who is Hiroshi Agasa?',
 'Who is Sonoko Suzuki?',
 'Who is Ai Haibara?',
 'Who is Ayumi Yoshida?',
 'Who is Mitsuhiko Tsuburaya?',
 'Who is Kaitou Kid?',
 'Who is Heiji Hattori?',
 'Who is Kazuha Toyama?',
 'Who is Ginzo Nakamori?',
 'Who is Shintaro Chaki?',
 'Who is Shiro Suzuki?',
 'What was Ayumi doing when her mother told her to go to bed?',
 'What happened when Ayumi went to bed?',
 'Did Ayumi ask Kaito Kid if he was Dracula?',
 'What did Kaito Kid tell Ayumi?',
 'What did Kaito Kid do before he left Ayumi?',
 'What happened when the police helicopter arrived?',
 'Did Ayumi tell her friends about meeting Kaito Kid?',
 'What did Conan think about Kaito Kid?',
 'What is the Memories Egg?',
 'What did Kaito Kid announce in his cryptic fashion?'],
'Answer': ["They saw Natsumi Kosaka, the owner of a sweetshop and her family's assistant Kuranosuke Sawabe argue with Nishino.",
 'She asked for an urgent meeting with Chairman Suzuki because she saw the picture of the Imperial Easter egg on the leaflets announcing the exhibition and realized that this egg looked different from the sketch her great-grandfather made.',
 'Heiji said "that\'s interesting. 3 AM looks like a "L" and now we\'ll almost have a "he"."',
 'Conan figured out the second part of Kid\'s message - "he" is the twelfth character of the phrase. Kid didn\'t mean 3 AM, but 7:20 PM.',
 'Heiji realized that "Shining Sky chamber" was meant to refer to the Tsuuten Tower because there is a weather station on top of Tsuuten Tower that "shines".',
 'Kid set off fireworks at Osaka Castle to divert attention away from Tsuuten Tower.',
 'Conan asked Nishino if he knows where the egg is.',
 'Nakamori took the egg to a different secret location.',
 'Kid planted bombs at the station to watch where emergency power is turned back on after the blackout, helping him figure out where the egg actually is.',
 'Kid flew to the warehouse on his hang glider, chased by Conan on his skateboard.',
 'Conan found Kid, who had already taken the egg and had knocked Nakamori and his officers out.',
 'Kid activated his car gun, filling the room with smoke and used the confusion to fly away.',
 'Kid was shot in the right eye and crashed, falling into the sea.','The Memories Egg is a rare Fabergé egg that was originally a gift from the Russian czar to his wife on Easter Sunday.',
 'Fifty Fabergé eggs have been found worldwide, meaning the Memories Egg will be the fifty-first piece.',
 'The Suzuki corporation currently owns the Memories Egg.',
 'The Suzuki corporation plans to display the Memories Egg in the Modern Arts museum in Osaka beginning on August 23rd.',
 'Kaito Kid is a notorious thief who is planning to steal the Memories Egg.',
 'The police believe that "between the dusk of the Lion and the dawn of the Virgin" indicates the day Kaito Kid plans to commit his heist. The astrological sign Leo ends on August 23 and the astrological sign Virgo begins on August 23, meaning Kaito Kid wants to steal the egg between dusk on August 22 and dawn on August 23.',
 "Following Director Suzuki's wishes, the police asked Kogoro Mori to help protect the egg.",
 'The Detective Boys visit Professor Agasa.',
 'Professor Agasa\'s riddle is: "I (meaning Agasa) have many grandchildren, how old are they?"',
 'Sergei Ovchinnikov is a high-level representative of the Russian Embassy.',
 'Shoichi Inui is an arts dealer.',
 'The Memories Egg is valued at 600 million yen.',
 'The Memories Egg contains several figures, Tsar Nicholas II and his family crowded together around a book, all made of gold.',
 "The lock on the front of the Memories Egg is for a key, and when a key is inserted there, the pages of the golden book on the Tsar's knees begin to turn.",
 'The Tsarina is missing from the Memories Egg, which is odd to Conan as the egg was supposed to be a gift to her. Chairman Suzuki says that the egg was created in a time of great financial hardship for Russia.',
 'Kaito Kid is planning to steal the Memories Egg, and his message includes a reference to the "Shining Sky Chamber," which Heiji and Kazuha try to decipher.','The thief is Kaitou Kid.',
 'The target of the theft is the Imperial Easter Egg.',
 'The owner of the Imperial Easter Egg is the Suzuki Financial Group.',
 'The Imperial Easter Egg is located at the Suzuki Modern Art Museum in Osaka.',
 'The theft is planned to take place on August 22.',
 'Yes, there is an advanced notice of the theft. The notice says, "Between the dusk of the Lion and the dawn of the Virgin, when the second hand on the clock indicates the twelfth symbol, I will take the Memories Egg from Shining Sky Chamber."',
 'The Memories Egg is another name for the Imperial Easter Egg.',
 'Kaitou Kid is a master thief who often targets valuable items and leaves challenging clues for the authorities to solve.',
 'Yes, Kaitou Kid has a reputation for being a skilled and daring thief who has stolen many valuable items in the past.',
 'Yes, there are likely to be security measures in place at the museum to prevent the theft, and the police may also be involved in trying to catch Kaitou Kid.','Conan Edogawa is the alias used by Shinichi Kudo in his shrunken form after being exposed to the poison APTX 4869.',
 'Shinichi Kudo was forced to swallow the poison APTX 4869 by two men in black, which de-aged his body but left his nervous system intact.',
 "Conan's goal is to hunt down the Black Organization and have them arrested for their crimes, as well as find an antidote to the APTX 4869.",
 'Conan plans to make the washout detective Kogoro Mouri famous in hopes of attracting cases related to the Black Organization.',
 'Kogoro Mouri is a private detective and the father of Ran Mouri, who is also a childhood friend of Shinichi Kudo.',
 'Ran Mouri is the daughter of Kogoro Mouri and a childhood friend of Shinichi Kudo.',
 'Juzo Megure is a Police Inspector in Division 1 of the Tokyo Metropolitan Police Department.',
 'Ninzaburo Shiratori is the boyfriend of Sumiko Kobayashi.',
 "Wataru Takagi is a police sergeant and detective from the Tokyo Metropolitan Police District's Criminal Investigation First Division, and the love interest of Miwako Sato.",
 "Hiroshi Agasa is Shinichi Kudo's next door neighbor and family friend.",
 "Sonoko Suzuki is Ran Mouri's best friend and the girlfriend of Makoto Kyogoku, with whom she is currently in a long-distance relationship.",
 'Ai Haibara is a former member of the Black Organization, known as Sherry, who is now on the run from them and lives with Professor Agasa.',
 'Ayumi Yoshida is a student in Teitan Elementary School.',
 'Mitsuhiko Tsuburaya is also a student in Teitan Elementary School.',
 'Kaitou Kid is a master thief who first appeared 18 years ago in Paris.',
 'Heiji Hattori is an Osakan high school detective and a childhood friend and romantic interest of Kazuha Toyama.',
 'Kazuha Toyama is a childhood friend and the romantic interest of Heiji Hattori.',
 'Ginzo Nakamori is an inspector for the Tokyo district who is nominally devoted to fraud cases, but spends most of his time and energy capturing Kaitou Kid.',
 "Shintaro Chaki is the superintendent of the Tokyo Metropolitan Police 2nd Division and Ginzo Nakamori's direct superior.",
 "Shiro Suzuki is the chairman and CEO of the Suzuki family and Sonoko's father, with the second wealthiest family after Renya Karasuma.",
 'Ayumi was watching a vampire movie on TV.',
 'Ayumi saw a strange shadow on her balcony, which turned out to be Kaito Kid, the master thief.',
 'Yes, Ayumi asked Kaito Kid if he was Dracula because she was still impressed by the vampire movie.',
 "Kaito Kid told Ayumi that he wasn't Dracula, he just needed a little rest, and that she shouldn't tell anyone she had seen him.",
 "Kaito Kid gently pressed a kiss to the back of Ayumi's hand before he left.",
 'The police helicopter arrived, and Inspector Ginzo Nakamori shouted that he had spotted Kaito Kid and they needed to catch him.',
 'Yes, Ayumi excitedly told her friends at school that she had met "handsome" Kaito Kid.',
 'Conan thought that he would catch the master thief one day.',
 'The Memories Egg is a priceless egg made by famous jeweler Fabergé and belonged to the Russian Imperial family, the Romanovs.',
 "Kaito Kid recently announced his new heist in his usual cryptic fashion: Between the dusk of the Lion and the dawn of the Virgin, when the second hand on the clock indicates the twelfth symbol, I will take the Memories Egg from Shining Sky chamber. The last wizard of the century, Kaito Kid."]}


df = pd.DataFrame(data)

# ! pip -q install transformers

import torch
import os

"""
Fine-tuning the library models for language modeling on a text file (GPT, GPT-2, BERT, RoBERTa).
GPT and GPT-2 are fine-tuned using a causal language modeling (CLM) loss while BERT and RoBERTa are fine-tuned
using a masked language modeling (MLM) loss.
"""

import glob
import logging
import os
import pickle
import random
import re
import shutil
from typing import Dict, List, Tuple

import pandas as pd
import numpy as np
import torch

from sklearn.model_selection import train_test_split

from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader, Dataset, RandomSampler, SequentialSampler
from torch.utils.data.distributed import DistributedSampler
from tqdm.auto import tqdm, trange  # tqdm.notebook only works inside Jupyter


from transformers import (
    MODEL_WITH_LM_HEAD_MAPPING,
    WEIGHTS_NAME,
    AdamW,
    AutoConfig,
    AutoModelWithLMHead,
    AutoTokenizer,
    PreTrainedModel,
    PreTrainedTokenizer,
    get_linear_schedule_with_warmup,
)


try:
    from torch.utils.tensorboard import SummaryWriter
except ImportError:
    from tensorboardX import SummaryWriter

# Configs
logger = logging.getLogger(__name__)

MODEL_CONFIG_CLASSES = list(MODEL_WITH_LM_HEAD_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)

# Args to allow for easy conversion of python script to notebook
class Args():
    def __init__(self):
        self.output_dir = 'output-small-save'
        self.model_type = 'gpt2'
        self.model_name_or_path = 'microsoft/DialoGPT-small'
        self.config_name = 'microsoft/DialoGPT-small'
        self.tokenizer_name = 'microsoft/DialoGPT-small'
        self.cache_dir = 'cached'
        self.block_size = 120
        self.do_train = True
        self.do_eval = True
        self.evaluate_during_training = False
        self.per_gpu_train_batch_size = 4
        self.per_gpu_eval_batch_size = 4
        self.gradient_accumulation_steps = 1
        self.learning_rate = 5e-5
        self.weight_decay = 0.0
        self.adam_epsilon = 1e-8
        self.max_grad_norm = 1.0
        self.num_train_epochs = 5
        self.max_steps = -1
        self.warmup_steps = 0
        self.logging_steps = 1000
        self.save_steps = 3500
        self.save_total_limit = None
        self.eval_all_checkpoints = False
        self.no_cuda = False
        self.overwrite_output_dir = True
        self.overwrite_cache = True
        self.should_continue = False
        self.seed = 42
        self.local_rank = -1
        self.fp16 = False
        self.fp16_opt_level = 'O1'

args = Args()


def construct_conv(row, tokenizer, eos=True):
    """Flatten one dataframe row into a single token sequence, oldest turn
    first, with each turn terminated by the EOS token."""
    flatten = lambda l: [item for sublist in l for item in sublist]
    conv = list(reversed([tokenizer.encode(x) + ([tokenizer.eos_token_id] if eos else []) for x in row]))
    return flatten(conv)
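
# For illustration: with the response-first column order set up before main()
# is called below, a row like ("Kaitou Kid.", "Who is the thief?") becomes roughly
#   encode("Who is the thief?") + [EOS] + encode("Kaitou Kid.") + [EOS]
# so the context precedes the response the model must learn to generate.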

class ConversationDataset(Dataset):
    def __init__(self, tokenizer: PreTrainedTokenizer, args, df, block_size=512):

        block_size = block_size - (tokenizer.model_max_length - tokenizer.max_len_single_sentence)

        directory = args.cache_dir
        os.makedirs(directory, exist_ok=True)  # the cache dir may not exist yet
        cached_features_file = os.path.join(
            directory, args.model_type + "_cached_lm_" + str(block_size)
        )

        if os.path.exists(cached_features_file) and not args.overwrite_cache:
            logger.info("Loading features from cached file %s", cached_features_file)
            with open(cached_features_file, "rb") as handle:
                self.examples = pickle.load(handle)
        else:
            logger.info("Creating features from dataset file at %s", directory)

            self.examples = []
            for _, row in df.iterrows():
                conv = construct_conv(row, tokenizer)
                self.examples.append(conv)

            logger.info("Saving features into cached file %s", cached_features_file)
            with open(cached_features_file, "wb") as handle:
                pickle.dump(self.examples, handle, protocol=pickle.HIGHEST_PROTOCOL)

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, item):
        return torch.tensor(self.examples[item], dtype=torch.long)
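
# Each dataset item is a 1-D LongTensor of token ids; the collate functions
# defined in train()/evaluate() pad items into rectangular batches (with
# zeros, since the GPT-2 tokenizer defines no pad token).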

# Caching and storing of data/checkpoints

def load_and_cache_examples(args, tokenizer, df_trn, df_val, evaluate=False):
    return ConversationDataset(tokenizer, args, df_val if evaluate else df_trn)


def set_seed(args):
    random.seed(args.seed)
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    if args.n_gpu > 0:
        torch.cuda.manual_seed_all(args.seed)


def _sorted_checkpoints(args, checkpoint_prefix="checkpoint", use_mtime=False) -> List[str]:
    ordering_and_checkpoint_path = []

    glob_checkpoints = glob.glob(os.path.join(args.output_dir, "{}-*".format(checkpoint_prefix)))

    for path in glob_checkpoints:
        if use_mtime:
            ordering_and_checkpoint_path.append((os.path.getmtime(path), path))
        else:
            regex_match = re.match(".*{}-([0-9]+)".format(checkpoint_prefix), path)
            if regex_match and regex_match.groups():
                ordering_and_checkpoint_path.append((int(regex_match.groups()[0]), path))

    checkpoints_sorted = sorted(ordering_and_checkpoint_path)
    checkpoints_sorted = [checkpoint[1] for checkpoint in checkpoints_sorted]
    return checkpoints_sorted


def _rotate_checkpoints(args, checkpoint_prefix="checkpoint", use_mtime=False) -> None:
    if not args.save_total_limit:
        return
    if args.save_total_limit <= 0:
        return

    # Check if we should delete older checkpoint(s)
    checkpoints_sorted = _sorted_checkpoints(args, checkpoint_prefix, use_mtime)
    if len(checkpoints_sorted) <= args.save_total_limit:
        return

    number_of_checkpoints_to_delete = max(0, len(checkpoints_sorted) - args.save_total_limit)
    checkpoints_to_be_deleted = checkpoints_sorted[:number_of_checkpoints_to_delete]
    for checkpoint in checkpoints_to_be_deleted:
        logger.info("Deleting older checkpoint [{}] due to args.save_total_limit".format(checkpoint))
        shutil.rmtree(checkpoint)

def train(args, train_dataset, model: PreTrainedModel, tokenizer: PreTrainedTokenizer) -> Tuple[int, float]:
    """ Train the model """
    if args.local_rank in [-1, 0]:
        tb_writer = SummaryWriter()

    args.train_batch_size = args.per_gpu_train_batch_size * max(1, args.n_gpu)

    def collate(examples: List[torch.Tensor]):
        if tokenizer._pad_token is None:
            return pad_sequence(examples, batch_first=True)
        return pad_sequence(examples, batch_first=True, padding_value=tokenizer.pad_token_id)

    train_sampler = RandomSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset)
    train_dataloader = DataLoader(
        train_dataset, sampler=train_sampler, batch_size=args.train_batch_size, collate_fn=collate, drop_last = True
    )

    if args.max_steps > 0:
        t_total = args.max_steps
        args.num_train_epochs = args.max_steps // (len(train_dataloader) // args.gradient_accumulation_steps) + 1
    else:
        t_total = len(train_dataloader) // args.gradient_accumulation_steps * args.num_train_epochs

    model = model.module if hasattr(model, "module") else model  # Take care of distributed/parallel training
    model.resize_token_embeddings(len(tokenizer))
    # add_special_tokens_(model, tokenizer)


    # Prepare optimizer and schedule (linear warmup and decay)
    no_decay = ["bias", "LayerNorm.weight"]
    optimizer_grouped_parameters = [
        {
            "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
            "weight_decay": args.weight_decay,
        },
        {"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0},
    ]
    optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=args.adam_epsilon)
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=args.warmup_steps, num_training_steps=t_total
    )

    # Check if saved optimizer or scheduler states exist
    if (
        args.model_name_or_path
        and os.path.isfile(os.path.join(args.model_name_or_path, "optimizer.pt"))
        and os.path.isfile(os.path.join(args.model_name_or_path, "scheduler.pt"))
    ):
        # Load in optimizer and scheduler states
        optimizer.load_state_dict(torch.load(os.path.join(args.model_name_or_path, "optimizer.pt")))
        scheduler.load_state_dict(torch.load(os.path.join(args.model_name_or_path, "scheduler.pt")))

    if args.fp16:
        try:
            from apex import amp
        except ImportError:
            raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.")
        model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level)

    # multi-gpu training (should be after apex fp16 initialization)
    if args.n_gpu > 1:
        model = torch.nn.DataParallel(model)

    # Distributed training (should be after apex fp16 initialization)
    if args.local_rank != -1:
        model = torch.nn.parallel.DistributedDataParallel(
            model, device_ids=[args.local_rank], output_device=args.local_rank, find_unused_parameters=True
        )

    # Train!
    logger.info("***** Running training *****")
    logger.info("  Num examples = %d", len(train_dataset))
    logger.info("  Num Epochs = %d", args.num_train_epochs)
    logger.info("  Instantaneous batch size per GPU = %d", args.per_gpu_train_batch_size)
    logger.info(
        "  Total train batch size (w. parallel, distributed & accumulation) = %d",
        args.train_batch_size
        * args.gradient_accumulation_steps
        * (torch.distributed.get_world_size() if args.local_rank != -1 else 1),
    )
    logger.info("  Gradient Accumulation steps = %d", args.gradient_accumulation_steps)
    logger.info("  Total optimization steps = %d", t_total)

    global_step = 0
    epochs_trained = 0
    steps_trained_in_current_epoch = 0
    # Check if continuing training from a checkpoint
    if args.model_name_or_path and os.path.exists(args.model_name_or_path):
        try:
            # set global_step to global_step of last saved checkpoint from model path
            checkpoint_suffix = args.model_name_or_path.split("-")[-1].split("/")[0]
            global_step = int(checkpoint_suffix)
            epochs_trained = global_step // (len(train_dataloader) // args.gradient_accumulation_steps)
            steps_trained_in_current_epoch = global_step % (len(train_dataloader) // args.gradient_accumulation_steps)

            logger.info("  Continuing training from checkpoint, will skip to saved global_step")
            logger.info("  Continuing training from epoch %d", epochs_trained)
            logger.info("  Continuing training from global step %d", global_step)
            logger.info("  Will skip the first %d steps in the first epoch", steps_trained_in_current_epoch)
        except ValueError:
            logger.info("  Starting fine-tuning.")

    tr_loss, logging_loss = 0.0, 0.0

    model.zero_grad()
    train_iterator = trange(
        epochs_trained, int(args.num_train_epochs), desc="Epoch", disable=args.local_rank not in [-1, 0]
    )
    set_seed(args)  # Added here for reproducibility
    for _ in train_iterator:
        epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0])
        for step, batch in enumerate(epoch_iterator):

            # Skip past any already trained steps if resuming training
            if steps_trained_in_current_epoch > 0:
                steps_trained_in_current_epoch -= 1
                continue

            inputs, labels = (batch, batch)
            if inputs.shape[1] > 1024: continue
            inputs = inputs.to(args.device)
            labels = labels.to(args.device)
            model.train()
            outputs = model(inputs, labels=labels)
            loss = outputs[0]  # model outputs are always tuple in transformers (see doc)

            if args.n_gpu > 1:
                loss = loss.mean()  # mean() to average on multi-gpu parallel training
            if args.gradient_accumulation_steps > 1:
                loss = loss / args.gradient_accumulation_steps

            if args.fp16:
                with amp.scale_loss(loss, optimizer) as scaled_loss:
                    scaled_loss.backward()
            else:
                loss.backward()

            tr_loss += loss.item()
            if (step + 1) % args.gradient_accumulation_steps == 0:
                if args.fp16:
                    torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.max_grad_norm)
                else:
                    torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
                optimizer.step()
                scheduler.step()  # Update learning rate schedule
                model.zero_grad()
                global_step += 1

                if args.local_rank in [-1, 0] and args.logging_steps > 0 and global_step % args.logging_steps == 0:
                    # Log metrics
                    if (
                        args.local_rank == -1 and args.evaluate_during_training
                    ):  # Only evaluate when single GPU otherwise metrics may not average well
                        # NB: evaluate() also expects df_trn/df_val; wire them through
                        # before enabling evaluate_during_training (off by default).
                        results = evaluate(args, model, tokenizer)
                        for key, value in results.items():
                            tb_writer.add_scalar("eval_{}".format(key), value, global_step)
                    tb_writer.add_scalar("lr", scheduler.get_lr()[0], global_step)
                    tb_writer.add_scalar("loss", (tr_loss - logging_loss) / args.logging_steps, global_step)
                    logging_loss = tr_loss

                if args.local_rank in [-1, 0] and args.save_steps > 0 and global_step % args.save_steps == 0:
                    checkpoint_prefix = "checkpoint"
                    # Save model checkpoint
                    output_dir = os.path.join(args.output_dir, "{}-{}".format(checkpoint_prefix, global_step))
                    os.makedirs(output_dir, exist_ok=True)
                    model_to_save = (
                        model.module if hasattr(model, "module") else model
                    )  # Take care of distributed/parallel training
                    model_to_save.save_pretrained(output_dir)
                    tokenizer.save_pretrained(output_dir)

                    torch.save(args, os.path.join(output_dir, "training_args.bin"))
                    logger.info("Saving model checkpoint to %s", output_dir)

                    _rotate_checkpoints(args, checkpoint_prefix)

                    torch.save(optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
                    torch.save(scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt"))
                    logger.info("Saving optimizer and scheduler states to %s", output_dir)

            if args.max_steps > 0 and global_step > args.max_steps:
                epoch_iterator.close()
                break
        if args.max_steps > 0 and global_step > args.max_steps:
            train_iterator.close()
            break

    if args.local_rank in [-1, 0]:
        tb_writer.close()

    return global_step, tr_loss / global_step

# Evaluation of some model

def evaluate(args, model: PreTrainedModel, tokenizer: PreTrainedTokenizer, df_trn, df_val, prefix="") -> Dict:
    # Loop to handle MNLI double evaluation (matched, mis-matched)
    eval_output_dir = args.output_dir

    eval_dataset = load_and_cache_examples(args, tokenizer, df_trn, df_val, evaluate=True)
    os.makedirs(eval_output_dir, exist_ok=True)
    args.eval_batch_size = args.per_gpu_eval_batch_size * max(1, args.n_gpu)
    # Note that DistributedSampler samples randomly

    def collate(examples: List[torch.Tensor]):
        if tokenizer._pad_token is None:
            return pad_sequence(examples, batch_first=True)
        return pad_sequence(examples, batch_first=True, padding_value=tokenizer.pad_token_id)

    eval_sampler = SequentialSampler(eval_dataset)
    eval_dataloader = DataLoader(
        eval_dataset, sampler=eval_sampler, batch_size=args.eval_batch_size, collate_fn=collate, drop_last = True
    )

    # multi-gpu evaluate
    if args.n_gpu > 1:
        model = torch.nn.DataParallel(model)

    # Eval!
    logger.info("***** Running evaluation {} *****".format(prefix))
    logger.info("  Num examples = %d", len(eval_dataset))
    logger.info("  Batch size = %d", args.eval_batch_size)
    eval_loss = 0.0
    nb_eval_steps = 0
    model.eval()

    for batch in tqdm(eval_dataloader, desc="Evaluating"):
        inputs, labels = (batch, batch)
        inputs = inputs.to(args.device)
        labels = labels.to(args.device)

        with torch.no_grad():
            outputs = model(inputs, labels=labels)
            lm_loss = outputs[0]
            eval_loss += lm_loss.mean().item()
        nb_eval_steps += 1

    eval_loss = eval_loss / nb_eval_steps
    perplexity = torch.exp(torch.tensor(eval_loss))

    result = {"perplexity": perplexity}

    output_eval_file = os.path.join(eval_output_dir, prefix, "eval_results.txt")
    os.makedirs(os.path.dirname(output_eval_file), exist_ok=True)
    with open(output_eval_file, "w") as writer:
        logger.info("***** Eval results {} *****".format(prefix))
        for key in sorted(result.keys()):
            logger.info("  %s = %s", key, str(result[key]))
            writer.write("%s = %s\n" % (key, str(result[key])))

    return result

def main(df_trn, df_val):
    args = Args()
    
    if args.should_continue:
        sorted_checkpoints = _sorted_checkpoints(args)
        if len(sorted_checkpoints) == 0:
            raise ValueError("Used --should_continue but no checkpoint was found in --output_dir.")
        else:
            args.model_name_or_path = sorted_checkpoints[-1]

    if (
        os.path.exists(args.output_dir)
        and os.listdir(args.output_dir)
        and args.do_train
        and not args.overwrite_output_dir
        and not args.should_continue
    ):
        raise ValueError(
            "Output directory ({}) already exists and is not empty. Use --overwrite_output_dir to overcome.".format(
                args.output_dir
            )
        )

    # Setup CUDA, GPU & distributed training
    device = torch.device("cuda")
    args.n_gpu = torch.cuda.device_count()
    args.device = device

    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s -   %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        level=logging.INFO if args.local_rank in [-1, 0] else logging.WARN,
    )
    logger.warning(
        "Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
        args.local_rank,
        device,
        args.n_gpu,
        bool(args.local_rank != -1),
        args.fp16,
    )

    # Set seed
    set_seed(args)

    config = AutoConfig.from_pretrained(args.config_name, cache_dir=args.cache_dir)
    tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, cache_dir=args.cache_dir)
    model = AutoModelWithLMHead.from_pretrained(
        args.model_name_or_path,
        from_tf=False,
        config=config,
        cache_dir=args.cache_dir,
    )
    model.to(args.device)
    
    logger.info("Training/evaluation parameters %s", args)

    # Training
    if args.do_train:
        train_dataset = load_and_cache_examples(args, tokenizer, df_trn, df_val, evaluate=False)

        global_step, tr_loss = train(args, train_dataset, model, tokenizer)
        logger.info(" global_step = %s, average loss = %s", global_step, tr_loss)

    # Saving best-practices: if you use save_pretrained for the model and tokenizer, you can reload them using from_pretrained()
    if args.do_train:
        # Create output directory if needed
        os.makedirs(args.output_dir, exist_ok=True)

        logger.info("Saving model checkpoint to %s", args.output_dir)
        # Save a trained model, configuration and tokenizer using `save_pretrained()`.
        # They can then be reloaded using `from_pretrained()`
        model_to_save = (
            model.module if hasattr(model, "module") else model
        )  # Take care of distributed/parallel training
        model_to_save.save_pretrained(args.output_dir)
        tokenizer.save_pretrained(args.output_dir)

        # Good practice: save your training arguments together with the trained model
        torch.save(args, os.path.join(args.output_dir, "training_args.bin"))

        # Load a trained model and vocabulary that you have fine-tuned
        model = AutoModelWithLMHead.from_pretrained(args.output_dir)
        tokenizer = AutoTokenizer.from_pretrained(args.output_dir)
        model.to(args.device)

    # Evaluation
    results = {}
    if args.do_eval and args.local_rank in [-1, 0]:
        checkpoints = [args.output_dir]
        if args.eval_all_checkpoints:
            checkpoints = list(
                os.path.dirname(c) for c in sorted(glob.glob(args.output_dir + "/**/" + WEIGHTS_NAME, recursive=True))
            )
            logging.getLogger("transformers.modeling_utils").setLevel(logging.WARN)  # Reduce logging
        logger.info("Evaluate the following checkpoints: %s", checkpoints)
        for checkpoint in checkpoints:
            global_step = checkpoint.split("-")[-1] if len(checkpoints) > 1 else ""
            prefix = checkpoint.split("/")[-1] if checkpoint.find("checkpoint") != -1 else ""

            model = AutoModelWithLMHead.from_pretrained(checkpoint)
            model.to(args.device)
            result = evaluate(args, model, tokenizer, df_trn, df_val, prefix=prefix)
            result = dict((k + "_{}".format(global_step), v) for k, v in result.items())
            results.update(result)

    return results

df = df.rename(columns={'Question': 'context', 'Answer': 'response'})
# Put the response column first: construct_conv() reverses each row, so this
# ordering yields "context ... response" training sequences.
df = df[['response', 'context']]

# Note: the same dataframe serves as both the train and validation split here,
# so the reported perplexity is measured on seen data.
main(df, df)
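
# Helper wrapping the generate-and-decode pattern used in the smoke tests
# below (a convenience refactor; the sampling parameters match the original
# calls, and 'output-small-save' is where main() saved the fine-tuned model).
def generate_reply(text, tokenizer, model, max_length=100):
    # Encode the user turn terminated by EOS, mirroring the training format.
    bot_input_ids = tokenizer.encode(text + tokenizer.eos_token, return_tensors='pt')
    # Sample a continuation of up to max_length total tokens.
    chat_history_ids = model.generate(
        bot_input_ids, max_length=max_length,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=10,
        top_p=0.7,
        temperature=0.8,
    )
    # Decode only the newly generated tokens (everything past the prompt).
    return tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)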

# Quick smoke tests of the fine-tuned model.
tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-small')
model = AutoModelWithLMHead.from_pretrained('output-small-save')

test_chatbot = []
text = "Hello"
print("User: {} \n".format(text))
reply = generate_reply(text, tokenizer, model)
st.write("Predict: {} \n\n".format(reply))
test_chatbot.append(reply)

print(len(test_chatbot))




text = 'Who is the thief'
print("User: {} \n".format(text))
reply = generate_reply(text, tokenizer, model)
st.write("Predict: {} \n\n".format(reply))



#######################""
#########################""
st.header("Hello - Welcome to SAI")
st.write("""The Phantom Thief Kid sends a heist notice, warning of another heist. The police deduce that his next target is a recently discovered Fabergé egg, which Suzuki Modern Art Museum in Osaka will display on August 22. The night of the heist, Kid steals the egg and flies off, and Conan and Heiji give chase. However, in the middle of the chase, an unknown assailant shoots Kid in the right eye, and Kid apparently falls into the sea to his death. After recovering the egg, the police fruitlessly search for Kid's body.

The next day, Conan, Ran, and Kogoro board a boat to Tokyo. They meet Natsumi Kousaka, whose great-grandfather worked in Fabergé's factory. She shows them a part of a sketch of two eggs and a key, which were found among her late grandmother's mementos. Conan suspects that the person who shot Kid is on the ship. That night, Ryu Sagawa, a freelance photographer covering the press with news of the egg, is murdered, shot in the right eye in the same fashion as Kid. Soon after his body is discovered, Inspector Megure, along with officers Takagi and Shiratori, arrive by helicopter to inspect the crime scene. At first, they suspect Sonoko's father's servant, Mr. Nishino, but the police and Conan conclude the culprit is Scorpion - a mysterious killer who always shoots his victims in the right eye. A missing lifeboat hints that Scorpion has escaped, and the boat's passengers go to Yokosuka Castle, which holds Scorpion's next target: the second egg.

While exploring the castle, the group stumbles across secret passages beneath the castle. As they traverse the tunnels, Inui, an art dealer, pursues a shadowy figure he sees in one of the tunnels, and is shot by a silenced handgun. Delving farther into the tunnel, they find a coffin with a corpse clutching the second egg. Suddenly, the two eggs are snatched away.
""")

if 'generated' not in st.session_state:
    st.session_state['generated'] = []

if 'past' not in st.session_state:
    st.session_state['past'] = []

def get_text():
    input_text = st.text_input("You: "," ", key="input")
    return input_text 


if st.session_state['generated']:

    for i in range(len(st.session_state['generated'])-1, -1, -1):
        message(st.session_state["generated"][i], key=str(i))
        message(st.session_state['past'][i], is_user=True, key=str(i) + '_user')

message("Tell me what happened?",is_user=True)
user_input = get_text()
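
# A minimal sketch of how the chat loop could continue from here (a
# hypothetical continuation): feed the user's input to the fine-tuned model
# and store both turns in session state so the loop above renders them on
# the next rerun.
if user_input and user_input.strip():
    output = generate_reply(user_input, tokenizer, model)
    st.session_state['past'].append(user_input)
    st.session_state['generated'].append(output)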