pkbiswas committed
Commit: 9d4188e
Parent: f9676e9

End of training

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,71 +1,57 @@
 ---
 base_model: meta-llama/Llama-3.2-1B
-datasets:
-- scitldr
-library_name: peft
-license: llama3.2
+library_name: transformers
+model_name: Llama-3.2-1B-Summarization-QLoRa
 tags:
 - generated_from_trainer
-model-index:
-- name: Llama-3.2-1B-Summarization-QLoRa
-  results: []
+- trl
+- sft
+licence: license
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+# Model Card for Llama-3.2-1B-Summarization-QLoRa
 
-# Llama-3.2-1B-Summarization-QLoRa
+This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B).
+It has been trained using [TRL](https://github.com/huggingface/trl).
 
-This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the scitldr dataset.
-It achieves the following results on the evaluation set:
-- Loss: 2.5948
+## Quick start
 
-## Model description
+```python
+from transformers import pipeline
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
+question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+generator = pipeline("text-generation", model="pkbiswas/Llama-3.2-1B-Summarization-QLoRa", device="cuda")
+output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+print(output["generated_text"])
+```
 
 ## Training procedure
 
-### Training hyperparameters
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/pkbiswas-verizon/huggingface/runs/tnejvjab)
 
-The following hyperparameters were used during training:
-- learning_rate: 0.0002
-- train_batch_size: 2
-- eval_batch_size: 2
-- seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 2
-- num_epochs: 2
-- mixed_precision_training: Native AMP
+This model was trained with SFT.
 
-### Training results
-
-| Training Loss | Epoch  | Step | Validation Loss |
-|:-------------:|:------:|:----:|:---------------:|
-| 2.5121        | 0.2008 | 200  | 2.5718          |
-| 2.485         | 0.4016 | 400  | 2.5652          |
-| 2.4865        | 0.6024 | 600  | 2.5660          |
-| 2.4767        | 0.8032 | 800  | 2.5562          |
-| 2.4744        | 1.0040 | 1000 | 2.5508          |
-| 2.137         | 1.2048 | 1200 | 2.5899          |
-| 2.1268        | 1.4056 | 1400 | 2.5914          |
-| 2.108         | 1.6064 | 1600 | 2.5874          |
-| 2.0804        | 1.8072 | 1800 | 2.5948          |
-
-
 ### Framework versions
 
-- PEFT 0.13.2
-- Transformers 4.44.2
-- Pytorch 2.5.0+cu121
-- Datasets 3.0.2
-- Tokenizers 0.19.1
+- TRL: 0.12.1
+- Transformers: 4.46.2
+- Pytorch: 2.5.1+cu121
+- Datasets: 3.1.0
+- Tokenizers: 0.20.3
+
+## Citations
+
+Cite TRL as:
+
+```bibtex
+@misc{vonwerra2022trl,
+    title        = {{TRL: Transformer Reinforcement Learning}},
+    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+    year         = 2020,
+    journal      = {GitHub repository},
+    publisher    = {GitHub},
+    howpublished = {\url{https://github.com/huggingface/trl}}
+}
+```
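
Editor's note: the new card says only that training used TRL's SFT; the training script itself is not part of this commit. Below is a minimal sketch of what such a run may have looked like, assuming the scitldr dataset and hyperparameters carried over from the old card — the dataset name, formatting, and every argument value are assumptions, not the author's actual configuration.

```python
# Hypothetical reconstruction of the SFT run, not the author's actual script.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Assumed from the old card; scitldr examples have source/target fields,
# so a step mapping them into a single text field is needed but omitted here.
dataset = load_dataset("allenai/scitldr", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B",  # base model named in the card
    train_dataset=dataset,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # LoRA adapter; see adapter_config.json below
    args=SFTConfig(
        output_dir="Llama-3.2-1B-Summarization-QLoRa",
        learning_rate=2e-4,              # values echo the old card's hyperparameters
        per_device_train_batch_size=2,
        num_train_epochs=2,
        report_to="wandb",               # the card links a W&B run
    ),
)
trainer.train()
```

QLoRA's 4-bit quantization setup (a BitsAndBytesConfig on the base model) is also omitted from this sketch.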
adapter_config.json CHANGED
@@ -20,13 +20,13 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "v_proj",
+    "q_proj",
     "up_proj",
-    "gate_proj",
     "down_proj",
     "k_proj",
+    "v_proj",
     "o_proj",
-    "q_proj"
+    "gate_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1bbaf748ce2a57db9c6185530e8f90efe9c3ef68965f1ccbb44cf55cba675f75
+oid sha256:a1d35ddbd3ffcbce10f5013d65d880eeba4ecfc93e1068a728798e096516dd45
 size 45118424
runs/Nov17_03-26-00_510a8698afae/events.out.tfevents.1731814003.510a8698afae.247.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8bdc5eac1bafc5bfd89405ce357b899c116f9433b4ac3ebdc03b9072e8d4f3cd
+size 10347
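
Editor's note: the added tfevents file is the TensorBoard log for this run, stored as an LFS pointer. A minimal sketch for inspecting it locally, assuming the file has been downloaded into a local runs/ directory; the scalar tag name below is an assumption, so list the available tags first.

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point at the directory that contains the downloaded event file.
acc = EventAccumulator("runs/Nov17_03-26-00_510a8698afae")
acc.Reload()

print(acc.Tags()["scalars"])  # shows which scalar series were logged
for event in acc.Scalars("train/loss"):  # tag name assumed; pick one from the list above
    print(event.step, event.value)
```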
tokenizer.json CHANGED
The diff for this file is too large to render.
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bdaf7306f331551cde9f4cc36f7fd53fbb6d57feefbfaed6342bde0284410cad
-size 5240
+oid sha256:a590901d37a372144280fe551b3905773c523adf287405c06d1a2f08946d8f65
+size 5560