Jingmei committed on
Commit
fa407c9
1 Parent(s): 71e1e6b

End of training

Files changed (4)
  1. README.md +1 -1
  2. adapter_model.safetensors +1 -1
  3. trainer_peft.log +64 -0
  4. training_args.bin +2 -2
README.md CHANGED
@@ -12,7 +12,7 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/noc-lab/PMC_LLAMA2_7B_trainer_lora/runs/326cgjks)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/noc-lab/PMC_LLAMA2_7B_trainer_lora/runs/0svox411)
 
 # PMC_LLAMA2_7B_trainer_lora
 
 This model is a fine-tuned version of [chaoyi-wu/PMC_LLAMA_7B](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B) on an unknown dataset.
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ac7c0e2a74f3e011f89fe704618fac2802f699b0096420c9e04e3b5777ca0a8a
+oid sha256:6de47217183878a8a9f061967affb701f9eccc19ffed21be5d7dcca3cf4043d8
 size 16794200
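This entry is a Git LFS pointer: only the SHA-256 of the LoRA adapter weights changes, while the size (about 16 MB) stays the same. As a minimal usage sketch, the updated adapter could be applied on top of the base model named in the README; note that the adapter repo id below is an assumption, only the base model chaoyi-wu/PMC_LLAMA_7B is confirmed by this commit.

```python
# Hypothetical loading sketch; the adapter repo id is an assumption, not confirmed here.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "chaoyi-wu/PMC_LLAMA_7B"                 # base model named in the README
adapter_id = "Jingmei/PMC_LLAMA2_7B_trainer_lora"  # assumed repo holding adapter_model.safetensors

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter weights stored in adapter_model.safetensors.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```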
trainer_peft.log CHANGED
@@ -505,3 +505,67 @@
 2024-06-01 21:09 - Continue training!!
 2024-06-01 21:09 - Continue training!!
 2024-06-01 21:19 - Training complete!!!
+2024-06-01 21:19 - Training complete!!!
+2024-06-01 21:20 - Cuda check
+2024-06-01 21:20 - True
+2024-06-01 21:20 - 2
+2024-06-01 21:20 - Configue Model and tokenizer
+2024-06-01 21:20 - Cuda check
+2024-06-01 21:20 - True
+2024-06-01 21:20 - 2
+2024-06-01 21:20 - Configue Model and tokenizer
+2024-06-01 21:20 - Memory usage in 0.00 GB
+2024-06-01 21:20 - Memory usage in 0.00 GB
+2024-06-01 21:20 - Dataset loaded successfully:
+    train-Jingmei/Pandemic_ACDC
+    test -Jingmei/Pandemic
+2024-06-01 21:20 - Tokenize data: DatasetDict({
+    train: Dataset({
+        features: ['input_ids', 'attention_mask'],
+        num_rows: 625
+    })
+    test: Dataset({
+        features: ['input_ids', 'attention_mask'],
+        num_rows: 8264
+    })
+})
+2024-06-01 21:20 - Split data into chunks:DatasetDict({
+    train: Dataset({
+        features: ['input_ids', 'attention_mask'],
+        num_rows: 3938
+    })
+    test: Dataset({
+        features: ['input_ids', 'attention_mask'],
+        num_rows: 198964
+    })
+})
+2024-06-01 21:20 - Setup PEFT
+2024-06-01 21:20 - Dataset loaded successfully:
+    train-Jingmei/Pandemic_ACDC
+    test -Jingmei/Pandemic
+2024-06-01 21:20 - Tokenize data: DatasetDict({
+    train: Dataset({
+        features: ['input_ids', 'attention_mask'],
+        num_rows: 625
+    })
+    test: Dataset({
+        features: ['input_ids', 'attention_mask'],
+        num_rows: 8264
+    })
+})
+2024-06-01 21:20 - Split data into chunks:DatasetDict({
+    train: Dataset({
+        features: ['input_ids', 'attention_mask'],
+        num_rows: 3938
+    })
+    test: Dataset({
+        features: ['input_ids', 'attention_mask'],
+        num_rows: 198964
+    })
+})
+2024-06-01 21:20 - Setup PEFT
+2024-06-01 21:20 - Setup optimizer
+2024-06-01 21:20 - Setup optimizer
+2024-06-01 21:20 - Continue training!!
+2024-06-01 21:20 - Continue training!!
+2024-06-01 21:21 - Training complete!!!
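The appended log lines trace a short continuation run: a CUDA check, model and tokenizer setup, loading Jingmei/Pandemic_ACDC (train) and Jingmei/Pandemic (test), tokenization, regrouping the token streams into fixed-length chunks (625 train rows become 3938, 8264 test rows become 198964), PEFT setup, optimizer setup, and a final training pass. The sketch below reconstructs that pipeline under stated assumptions (chunk size, LoRA hyperparameters, and the text column name are guesses); it is not the author's actual script.

```python
# Reconstruction sketch of the steps logged above; hyperparameters, the "text"
# column, and the block size are assumptions, not taken from this commit.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_id = "chaoyi-wu/PMC_LLAMA_7B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
if tokenizer.pad_token is None:           # common idiom for LLaMA-family tokenizers
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_id)

# Dataset names appear in the log; the split and column names are assumptions.
train_raw = load_dataset("Jingmei/Pandemic_ACDC", split="train")

def tokenize(batch):
    return tokenizer(batch["text"])

tokenized = train_raw.map(tokenize, batched=True, remove_columns=train_raw.column_names)

# "Split data into chunks": regroup token streams into fixed-length blocks,
# which is what turns 625 documents into roughly 3938 training rows.
block_size = 512  # assumed

def group_texts(batch):
    ids = sum(batch["input_ids"], [])
    mask = sum(batch["attention_mask"], [])
    total = (len(ids) // block_size) * block_size
    return {
        "input_ids": [ids[i:i + block_size] for i in range(0, total, block_size)],
        "attention_mask": [mask[i:i + block_size] for i in range(0, total, block_size)],
    }

chunked = tokenized.map(group_texts, batched=True)

# "Setup PEFT": wrap the base model with a LoRA adapter (r/alpha are assumed).
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
args = TrainingArguments(output_dir="PMC_LLAMA2_7B_trainer_lora", report_to="wandb")
trainer = Trainer(model=model, args=args, train_dataset=chunked, data_collator=collator)
trainer.train()
```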
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a231d5967b772f8233812684530610afb10acb8fb0bb11265063035356d04fca
-size 5240
+oid sha256:679debbaaad76aea9c6e7042804b230c1810abd3863bab16edb081229855e2aa
+size 5176
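training_args.bin is another Git LFS pointer; both the hash and the size change, so the serialized TrainingArguments differ between the two runs. To see which hyperparameters changed, the file can be deserialized with torch, as in the short sketch below (it assumes the file has already been downloaded locally).

```python
# Inspection sketch, assuming training_args.bin has been downloaded locally.
import torch

# TrainingArguments is a pickled dataclass, so weights_only must be disabled.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```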