---
language: en
license: apache-2.0
---

# Shears Model Card: shears-llama-7b-50-cs-heuristic-adapter

The heuristic adapter discovered from the [super-adapter](https://huggingface.co/IntelLabs/shears-llama-7b-50-cs-super-adapter) that Shears fine-tuned on a 50%-sparsified LLaMA-7B using unified commonsense reasoning datasets.

## Model Details

### Information

- **Model name:** shears-llama-7b-50-cs-heuristic-adapter
- **Base model:** [IntelLabs/Llama-1-7B-sparsity50](https://huggingface.co/IntelLabs/Llama-1-7B-sparsity50)
- **Sparsity:** 50%
- **Domain:** Commonsense
- **Subnetwork version:** Heuristic
- **NNCF Configuration:** [nncf_shears_llama.json](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears/nncf_config/nncf_shears_llama.json)

### Adapter Configuration

- **LoRA rank:** 32
- **LoRA alpha:** 64
- **LoRA target modules:** q_proj, k_proj, v_proj, up_proj, down_proj
- **LoRA rank search space:** [32, 24, 16] (for each LoRA module)
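
For reference, the fixed parts of this adapter configuration map onto a standard `peft` `LoraConfig` roughly as sketched below. Note that the per-module rank search space ([32, 24, 16]) is driven by the NNCF configuration during super-adapter training, not by `LoraConfig`; the `lora_dropout` and `task_type` values here are typical assumptions, not taken from this card.

```python
from peft import LoraConfig

# Minimal sketch of the fixed adapter hyperparameters listed above.
# The [32, 24, 16] rank search space is handled by Shears' NNCF config,
# not by this object.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,       # assumption: a common default, not stated in this card
    task_type="CAUSAL_LM",   # assumption: causal LM fine-tuning
)
```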

### Training Hyperparameters

- **Batch size:** 16
- **Learning rate:** 3e-4
- **Epochs:** 5
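
If you want to reproduce fine-tuning with the standard `transformers` Trainer, these hyperparameters translate as below. This is a hedged sketch only: the Shears repository (see Model Sources) provides its own training scripts, and details such as the optimizer, scheduler, or gradient accumulation are not specified in this card.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters above onto the standard
# Hugging Face Trainer API; the actual Shears scripts may configure more.
training_args = TrainingArguments(
    output_dir="shears-llama-7b-50-cs",  # hypothetical output path
    per_device_train_batch_size=16,
    learning_rate=3e-4,
    num_train_epochs=5,
)
```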

### Training Data

Unified commonsense reasoning dataset: [commonsense_170k.json](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/ft-training_set/commonsense_170k.json).

### Evaluation Data

[BoolQ](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/boolq/test.json), [PIQA](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/piqa/test.json), [SIQA](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/social_i_qa/test.json), [HellaSwag](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/hellaswag/test.json), [WinoGrande](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/winogrande/test.json), [ARC-e](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/ARC-Easy/test.json), [ARC-c](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/ARC-Challenge/test.json), [OBQA](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/openbookqa/test.json).

## How to use

Use our modified PEFT library (download and apply this [patch](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears/patches/peft-modifications-for-shears-inference-usage.patch)):
```bash
git clone https://github.com/huggingface/peft.git
cd peft && git checkout v0.5.0 && git apply --ignore-space-change --ignore-whitespace peft-modifications-for-shears-inference-usage.patch && pip install -e . && cd ..
```
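
To confirm the patched editable install is the one being picked up (a quick sanity check, not part of the original instructions), verify that the reported version matches the `v0.5.0` checkout:

```python
import peft

# The patched checkout above is v0.5.0; a different version here means
# another PEFT installation is shadowing the editable install.
print(peft.__version__)
```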

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_prompt(instruction):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""

# Load the sparsified base model, then attach the heuristic adapter.
base_model = AutoModelForCausalLM.from_pretrained("IntelLabs/Llama-1-7B-sparsity50")
model = PeftModel.from_pretrained(base_model, "IntelLabs/shears-llama-7b-50-cs-heuristic-adapter")
model.eval()

# Count non-zero weights to see the effect of 50% unstructured sparsity.
non_zero_params = sum((param.data != 0).sum().item() for _, param in model.named_parameters())
print(f"Number of all non-zero parameters: {non_zero_params}")

tokenizer = AutoTokenizer.from_pretrained("IntelLabs/Llama-1-7B-sparsity50")

instruction = (
    "Please choose the correct answer to the question: A cactus stem is used to store\n\nAnswer1: fruit "
    "Answer2: liquid Answer3: food Answer4: spines\n\nAnswer format: answer1/answer2/answer3/answer4"
)
prompt = generate_prompt(instruction)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(model.device)
with torch.no_grad():
    generation_output = model.generate(
        input_ids=input_ids,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
        use_cache=True,
        num_beams=4,  # beam search for more stable multiple-choice answers
    )
s = generation_output.sequences[0]
output = tokenizer.decode(s)
print(output)
```
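
The decoded output contains the full prompt as well as the generated text. To keep only the model's answer, one can split on the response marker (a small convenience step, not part of the original example):

```python
# Everything after "### Response:" is the generated answer.
response = output.split("### Response:")[-1].strip()
print(response)
```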

## Evaluation Results

| Model | Sparsity | BoolQ | PIQA | SIQA | HellaSwag | WinoG | ARC-e | ARC-c | OBQA | Average |
|-------|----------|-------|------|------|-----------|-------|-------|-------|------|---------|
| ChatGPT | - | 73.1 | 85.4 | 68.5 | 78.5 | 66.1 | 89.8 | 79.9 | 74.8 | 77.0 |
| LLaMA-7B-LoRA | - | 68.9 | 80.7 | 77.4 | 78.1 | 78.8 | 77.8 | 61.3 | 74.8 | 74.7 |
| [**LLaMA-7B-Shears**](https://huggingface.co/IntelLabs/shears-llama-7b-50-cs-heuristic-adapter) | **50%** | 67.3 | 79.1 | 77.5 | 73.3 | 77.7 | 74.4 | 57.9 | 72.8 | 72.5 |

At 50% sparsity, the heuristic adapter stays within about two points of the dense LLaMA-7B-LoRA baseline on average (72.5 vs. 74.7).

## Model Sources

- **Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears)
- **Paper:** [Shears: Unstructured Sparsity with Neural Low-rank Adapter Search](https://arxiv.org/abs/2404.10934)

## Citation

```bibtex
@inproceedings{munoz2024shears,
  title={Shears: Unstructured Sparsity with Neural Low-rank Adapter Search},
  author={J. Pablo Munoz and Jinjie Yuan and Nilesh Jain},
  booktitle={The 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-2024)},
  year={2024}
}
```

## License

Apache-2.0