PMC_LLAMA2_7B_trainer_lora / trainer_peft.log
End of training
6e53a75 verified
2024-06-01 14:49 - Cuda check
2024-06-01 14:49 - True
2024-06-01 14:49 - 2
2024-06-01 14:49 - Configure Model and tokenizer
2024-06-01 14:49 - Memory usage: 0.00 GB
2024-06-01 14:49 - Dataset loaded successfully:
    train: Jingmei/Pandemic_Wiki
    test:  Jingmei/Pandemic
2024-06-01 14:49 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 14:49 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24863
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 14:49 - Setup PEFT
2024-06-01 14:49 - Setup optimizer
2024-06-01 14:49 - Start training!!
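The "Split data into chunks" step explains why `num_rows` jumps (e.g. 2152 tokenized documents become 24863 training rows): token sequences are concatenated and re-split into fixed-length blocks. A minimal sketch of that regrouping; the block size below is a hypothetical value, since the log does not record the one actually used.

```python
from itertools import chain

def split_into_chunks(batch, block_size=512):
    """Concatenate tokenized examples, then re-split into fixed-length blocks.

    `batch` maps feature names ('input_ids', 'attention_mask') to lists of
    lists, as produced by a batched tokenizer. The trailing remainder shorter
    than block_size is dropped. block_size=512 is an assumption.
    """
    concatenated = {k: list(chain.from_iterable(v)) for k, v in batch.items()}
    total = (len(concatenated["input_ids"]) // block_size) * block_size
    return {
        k: [seq[i:i + block_size] for i in range(0, total, block_size)]
        for k, seq in concatenated.items()
    }

# Two short "documents" (5 and 6 tokens) regrouped into blocks of 4:
batch = {
    "input_ids": [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11]],
    "attention_mask": [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1]],
}
chunks = split_into_chunks(batch, block_size=4)
```

In a script like this one, such a function would typically be applied with `dataset.map(split_into_chunks, batched=True)`, which is what turns the tokenized `DatasetDict` into the larger chunked one.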
2024-06-01 14:51 - Cuda check
2024-06-01 14:51 - True
2024-06-01 14:51 - 2
2024-06-01 14:51 - Configure Model and tokenizer
2024-06-01 14:51 - Memory usage: 0.00 GB
2024-06-01 14:51 - Dataset loaded successfully:
    train: Jingmei/Pandemic_Wiki
    test:  Jingmei/Pandemic
2024-06-01 14:51 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 14:51 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24863
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 14:51 - Setup PEFT
2024-06-01 14:51 - Setup optimizer
2024-06-01 14:51 - Start training!!
2024-06-01 15:49 - Training complete!!!
2024-06-01 20:49 - Cuda check
2024-06-01 20:49 - True
2024-06-01 20:49 - 2
2024-06-01 20:49 - Configure Model and tokenizer
2024-06-01 20:49 - Memory usage: 0.00 GB
2024-06-01 20:49 - Dataset loaded successfully:
    train: Jingmei/Pandemic_Wiki
    test:  Jingmei/Pandemic
2024-06-01 20:49 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 20:49 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24863
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 20:49 - Setup PEFT
2024-06-01 20:49 - Setup optimizer
2024-06-01 20:49 - Start training!!
2024-06-01 20:55 - Cuda check
2024-06-01 20:55 - True
2024-06-01 20:55 - 2
2024-06-01 20:55 - Configure Model and tokenizer
2024-06-01 20:55 - Memory usage: 0.00 GB
2024-06-01 20:55 - Dataset loaded successfully:
    train: Jingmei/Pandemic_Wiki
    test:  Jingmei/Pandemic
2024-06-01 20:55 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 20:55 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24863
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 20:55 - Setup PEFT
2024-06-01 20:55 - Setup optimizer
2024-06-01 20:55 - Continue training!!
2024-06-01 20:56 - Training complete!!!
2024-06-01 20:58 - Cuda check
2024-06-01 20:58 - True
2024-06-01 20:58 - 2
2024-06-01 20:58 - Configure Model and tokenizer
2024-06-01 20:58 - Memory usage: 0.00 GB
2024-06-01 20:58 - Dataset loaded successfully:
    train: Jingmei/Pandemic_Wiki
    test:  Jingmei/Pandemic
2024-06-01 20:58 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 20:58 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24863
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 20:58 - Setup PEFT
2024-06-01 20:58 - Setup optimizer
2024-06-01 20:58 - Continue training!!
2024-06-01 20:59 - Training complete!!!
2024-06-01 21:04 - Cuda check
2024-06-01 21:04 - True
2024-06-01 21:04 - 2
2024-06-01 21:04 - Configure Model and tokenizer
2024-06-01 21:04 - Memory usage: 0.00 GB
2024-06-01 21:04 - Dataset loaded successfully:
    train: Jingmei/Pandemic_Wiki
    test:  Jingmei/Pandemic
2024-06-01 21:04 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 21:04 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24863
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 21:04 - Setup PEFT
2024-06-01 21:04 - Setup optimizer
2024-06-01 21:05 - Continue training!!
2024-06-01 21:05 - Training complete!!!
2024-06-01 21:07 - Cuda check
2024-06-01 21:07 - True
2024-06-01 21:07 - 2
2024-06-01 21:07 - Configure Model and tokenizer
2024-06-01 21:07 - Memory usage: 0.00 GB
2024-06-01 21:07 - Dataset loaded successfully:
    train: Jingmei/Pandemic_ACDC
    test:  Jingmei/Pandemic
2024-06-01 21:07 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 625
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 21:07 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 3938
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 21:07 - Setup PEFT
2024-06-01 21:07 - Setup optimizer
2024-06-01 21:07 - Continue training!!
2024-06-01 21:08 - Training complete!!!
2024-06-01 21:09 - Cuda check
2024-06-01 21:09 - True
2024-06-01 21:09 - 2
2024-06-01 21:09 - Configure Model and tokenizer
2024-06-01 21:09 - Memory usage: 0.00 GB
2024-06-01 21:09 - Dataset loaded successfully:
    train: Jingmei/Pandemic_ACDC
    test:  Jingmei/Pandemic
2024-06-01 21:09 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 625
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 21:09 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 3938
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 21:09 - Setup PEFT
2024-06-01 21:09 - Setup optimizer
2024-06-01 21:09 - Continue training!!
2024-06-01 21:19 - Training complete!!!
2024-06-01 21:20 - Cuda check
2024-06-01 21:20 - True
2024-06-01 21:20 - 2
2024-06-01 21:20 - Configure Model and tokenizer
2024-06-01 21:20 - Memory usage: 0.00 GB
2024-06-01 21:20 - Dataset loaded successfully:
    train: Jingmei/Pandemic_ACDC
    test:  Jingmei/Pandemic
2024-06-01 21:20 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 625
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 21:20 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 3938
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 21:20 - Setup PEFT
2024-06-01 21:20 - Setup optimizer
2024-06-01 21:20 - Continue training!!
2024-06-01 21:21 - Training complete!!!
2024-06-01 21:22 - Cuda check
2024-06-01 21:22 - True
2024-06-01 21:22 - 2
2024-06-01 21:22 - Configure Model and tokenizer
2024-06-01 21:23 - Memory usage: 0.00 GB
2024-06-01 21:23 - Dataset loaded successfully:
    train: Jingmei/Pandemic_ECDC
    test:  Jingmei/Pandemic
2024-06-01 21:23 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 7008
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 21:25 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 103936
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 21:25 - Setup PEFT
2024-06-01 21:25 - Setup optimizer
2024-06-01 21:25 - Continue training!!
2024-06-02 08:00 - Cuda check
2024-06-02 08:00 - True
2024-06-02 08:00 - 2
2024-06-02 08:00 - Configure Model and tokenizer
2024-06-02 08:00 - Memory usage: 0.00 GB
2024-06-02 08:00 - Dataset loaded successfully:
    train: Jingmei/Pandemic_ECDC
    test:  Jingmei/Pandemic
2024-06-02 08:00 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 7008
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-02 08:00 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 103936
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-02 08:00 - Setup PEFT
2024-06-02 08:00 - Setup optimizer
2024-06-02 08:00 - Continue training!!
2024-06-02 08:28 - Training complete!!!
2024-06-02 10:07 - Cuda check
2024-06-02 10:07 - True
2024-06-02 10:07 - 2
2024-06-02 10:07 - Configure Model and tokenizer
2024-06-02 10:07 - Memory usage: 0.00 GB
2024-06-02 10:07 - Dataset loaded successfully:
    train: Jingmei/Pandemic_CDC
    test:  Jingmei/Pandemic
2024-06-02 10:09 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 15208
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-02 10:14 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 364678
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-02 10:14 - Setup PEFT
2024-06-02 10:14 - Setup optimizer
2024-06-02 10:14 - Continue training!!
2024-06-02 20:22 - Training complete!!!
2024-06-04 12:41 - Cuda check
2024-06-04 12:41 - True
2024-06-04 12:41 - 3
2024-06-04 12:41 - Configure Model and tokenizer
2024-06-04 12:42 - Memory usage: 0.00 GB
2024-06-04 12:42 - Dataset loaded successfully:
    train: Jingmei/Pandemic_Books
    test:  Jingmei/Pandemic
2024-06-04 12:44 - Cuda check
2024-06-04 12:44 - True
2024-06-04 12:44 - 3
2024-06-04 12:44 - Configure Model and tokenizer
2024-06-04 12:44 - Memory usage: 0.00 GB
2024-06-04 12:44 - Dataset loaded successfully:
    train: Jingmei/Pandemic_Books
    test:  Jingmei/Pandemic_WHO
2024-06-04 12:46 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 5966
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-04 12:51 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 388202
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198960
    })
})
2024-06-04 12:51 - Setup PEFT
2024-06-04 12:51 - Setup optimizer
2024-06-04 12:51 - Continue training!!
2024-06-04 12:52 - Training complete!!!
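In the raw log, every message appears once per distributed process: twice on the 2-GPU runs, three times from 2024-06-04 when the "3" lines show a third rank. Logging only from the main rank removes those duplicates. A minimal stdlib sketch, assuming the launcher (e.g. torchrun) exports a RANK environment variable; single-process runs fall back to rank 0 and keep logging.

```python
import logging
import os

def get_rank0_logger(name="trainer_peft"):
    """Logger that stays silent on non-zero ranks.

    RANK is assumed to be set by the distributed launcher; when it is
    absent (single-process run) the process is treated as rank 0.
    """
    rank = int(os.environ.get("RANK", 0))
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO if rank == 0 else logging.CRITICAL)
    if rank == 0 and not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s - %(message)s",
                              datefmt="%Y-%m-%d %H:%M"))
        logger.addHandler(handler)
    return logger

log = get_rank0_logger()
log.info("Cuda check")  # emitted once, on rank 0 only
```

With this guard, each pipeline step ("Configure Model and tokenizer", "Setup PEFT", and so on) would be written to the log file exactly once per run regardless of how many GPUs participate.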