2024-05-28 12:27 - Cuda check
2024-05-28 12:27 - True
2024-05-28 12:27 - 2
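The three entries above read as a CUDA availability check followed by the visible GPU count (True, 2 devices). A minimal sketch of logging that would produce this timestamp format, assuming PyTorch and Python's standard logging module (the logger configuration itself is an assumption, not recorded in the log):

```python
import logging

import torch

# Match the "<YYYY-MM-DD HH:MM> - <message>" format seen in this log.
logging.basicConfig(format="%(asctime)s - %(message)s",
                    datefmt="%Y-%m-%d %H:%M",
                    level=logging.INFO)
logger = logging.getLogger(__name__)

logger.info("Cuda check")
logger.info(torch.cuda.is_available())   # -> True
logger.info(torch.cuda.device_count())   # -> 2
```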
2024-05-28 12:27 - Configure Model and tokenizer
2024-05-28 12:38 - Memory usage: 2.13 GB
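Model and tokenizer configuration is logged next, together with a memory figure for the loaded weights. The base checkpoint is never named in the log, so the one below is purely a placeholder; computing the footprint from parameter sizes is one plausible way to arrive at a "Memory usage" line:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: the actual base checkpoint is not recorded anywhere in this log.
base_model = "meta-llama/Llama-2-7b-hf"

logger.info("Configure Model and tokenizer")
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)

# Size of the loaded weights (parameters only, ignoring buffers).
mem_gb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1024**3
logger.info(f"Memory usage: {mem_gb:.2f} GB")
```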
2024-05-28 12:38 - Dataset loaded successfully:
 train - Jingmei/Pandemic_Wiki
 test  - Jingmei/Pandemic_WHO
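The two corpora are identified by their Hub repository names. A sketch of loading them into one DatasetDict, under the assumption that both repos expose a `train` split:

```python
from datasets import DatasetDict, load_dataset

# Split names are assumptions; only the repo names appear in the log.
data = DatasetDict({
    "train": load_dataset("Jingmei/Pandemic_Wiki", split="train"),
    "test": load_dataset("Jingmei/Pandemic_WHO", split="train"),
})
logger.info("Dataset loaded successfully:\n train - Jingmei/Pandemic_Wiki\n test  - Jingmei/Pandemic_WHO")
```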
2024-05-28 12:39 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
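Row counts after tokenization (2,152 train / 8,264 test) match the raw document counts, and only `input_ids` and `attention_mask` remain, so the text columns were dropped during mapping. A hedged sketch, assuming the raw-text column is called `text`:

```python
def tokenize_fn(batch):
    # No truncation here: long documents are split into fixed-length chunks next.
    return tokenizer(batch["text"])

# Column names ("text") are assumed to be identical across both splits.
tokenized = data.map(tokenize_fn, batched=True,
                     remove_columns=data["train"].column_names)
logger.info(f"Tokenize data: {tokenized}")
```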
2024-05-28 12:40 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24856
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
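The jump in row counts (2,152 to 24,856 train, 8,264 to 198,964 test) indicates the tokenized documents were concatenated and re-split into fixed-length blocks. A sketch of that standard chunking step; the block size is an assumption, since the log does not record it:

```python
block_size = 512  # assumed chunk length; not recorded in the log

def split_into_chunks(batch):
    # Concatenate every tokenized document, then slice into fixed-length blocks.
    concatenated = {k: sum(batch[k], []) for k in batch.keys()}
    total_len = (len(concatenated["input_ids"]) // block_size) * block_size
    return {
        k: [v[i:i + block_size] for i in range(0, total_len, block_size)]
        for k, v in concatenated.items()
    }

chunked = tokenized.map(split_into_chunks, batched=True)
logger.info(f"Split data into chunks: {chunked}")
```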
2024-05-28 12:40 - Setup PEFT
2024-05-28 12:40 - Setup optimizer
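PEFT setup is followed by the optimizer. The log gives no adapter or optimizer hyperparameters, so everything below (LoRA rank, alpha, dropout, learning rate) is an assumed illustration of the pattern rather than the values actually used:

```python
from peft import LoraConfig, get_peft_model
from torch.optim import AdamW

logger.info("Setup PEFT")
# LoRA hyperparameters are assumptions; the log does not record them.
lora_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(model, lora_config)

logger.info("Setup optimizer")
optimizer = AdamW(model.parameters(), lr=2e-4)
```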
2024-05-28 12:47 - Cuda check
2024-05-28 12:47 - True
2024-05-28 12:47 - 2
2024-05-28 12:47 - Configure Model and tokenizer
2024-05-28 12:47 - Memory usage: 2.13 GB
2024-05-28 12:47 - Dataset loaded successfully:
 train - Jingmei/Pandemic_Wiki
 test  - Jingmei/Pandemic_WHO
2024-05-28 12:47 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-05-28 12:47 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24856
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-05-28 12:47 - Setup PEFT
2024-05-28 12:47 - Setup optimizer
2024-05-28 13:15 - Cuda check
2024-05-28 13:15 - True
2024-05-28 13:15 - 2
2024-05-28 13:15 - Configure Model and tokenizer
2024-05-28 13:15 - Memory usage: 2.13 GB
2024-05-28 13:15 - Dataset loaded successfully:
 train - Jingmei/Pandemic_Wiki
 test  - Jingmei/Pandemic_WHO
2024-05-28 13:15 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-05-28 13:15 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24856
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-05-28 13:15 - Setup PEFT
2024-05-28 13:15 - Setup optimizer
2024-05-28 13:15 - Start training
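The final entry marks the start of training. A minimal hand-rolled loop consistent with the earlier steps; the batch size, device placement, and use of DataCollatorForLanguageModeling are assumptions, and the actual script may equally well use the Hugging Face Trainer:

```python
import torch
from torch.utils.data import DataLoader
from transformers import DataCollatorForLanguageModeling

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Causal-LM collation: pads each batch and copies input_ids into labels.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
train_loader = DataLoader(chunked["train"], batch_size=4, shuffle=True,
                          collate_fn=collator)

logger.info("Start training")
model.train()
for batch in train_loader:
    batch = {k: v.to(device) for k, v in batch.items()}
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```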