Arnav0400 committed
Commit 08243e7
1 Parent(s): df937da

removed references

Files changed (2)
  1. README.md +2 -127
  2. config.json +1 -1
README.md CHANGED
@@ -1,131 +1,6 @@
- ---
- library_name: peft
- datasets:
- - shareGPT
- tags:
- - llama
- inference: false
- pipeline_tag: text-generation
- ---
  # llama-7b-glora 🦙

  This model was built via parameter-efficient GLoRA finetuning of [llama-7b](https://huggingface.co/huggyllama/llama-7b) on the shareGPT dataset. We adapt only the attention layers using GLoRA.

- * Model license: This model is under a non-commercial license (see the LICENSE file), the same as LLaMA.
- * GLoRA implementation: [script](https://github.com/Arnav0400/peft/blob/main/src/peft/tuners/glora.py)
-
- ## Model Description
-
- The architecture is the same as LLaMA-7B, except that bias is enabled for the attention layers.
-
- ## Limitations and Biases
- _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
-
- This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
- This model was trained on various public datasets.
- While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
-
- ## How to Use
-
- Install and import the package dependencies:
-
- ```python
- !pip install -q -U huggingface_hub transformers torch accelerate
- ```
-
- ```python
- import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
- ```
-
- Basic model loading:
-
- ```python
- model = AutoModelForCausalLM.from_pretrained(
-     "MBZUAI-LLM/LLaMA-7B-GLoRA-ShareGPT",
-     use_auth_token=True,
-     torch_dtype=torch.bfloat16,
-     device_map="auto",
- )
- tokenizer = AutoTokenizer.from_pretrained("MBZUAI-LLM/LLaMA-7B-GLoRA-ShareGPT")
- ```
-
- Once loaded, the model and tokenizer can be used with the following code:
-
- ```python
- def llama_generate(
-     model: AutoModelForCausalLM,
-     tokenizer: AutoTokenizer,
-     prompt: str,
-     max_new_tokens: int = 128,
-     temperature: float = 0.92,
- ) -> str:
-     """
-     Generate a response to `prompt` with the loaded model.
-     Uses Hugging Face GenerationConfig defaults:
-     https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/text_generation#transformers.GenerationConfig
-     Args:
-         model (transformers.AutoModelForCausalLM): LLaMA model for text generation
-         tokenizer (transformers.AutoTokenizer): Tokenizer for model
-         prompt (str): Prompt for text generation
-         max_new_tokens (int, optional): Max new tokens after the prompt to generate. Defaults to 128.
-         temperature (float, optional): The value used to modulate the next token probabilities.
-             Defaults to 0.92.
-     """
-     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-     inputs = tokenizer(
-         [prompt],
-         return_tensors="pt",
-         return_token_type_ids=False,
-     ).to(
-         device
-     )  # tokenize inputs, load on device
-     # when running Torch modules in lower precision, it is best practice to use the torch.autocast context manager
-     with torch.autocast("cuda", dtype=torch.bfloat16):
-         response = model.generate(
-             **inputs,
-             max_new_tokens=max_new_tokens,
-             do_sample=True,  # sample so the temperature setting takes effect
-             temperature=temperature,
-             return_dict_in_generate=True,
-             eos_token_id=tokenizer.eos_token_id,
-             pad_token_id=tokenizer.pad_token_id,
-         )
-     decoded_output = tokenizer.decode(
-         response["sequences"][0],
-         skip_special_tokens=True,
-     )  # grab output in natural language
-     return decoded_output[len(prompt) :]  # remove prompt from output
- ```
-
- We can now generate text! For example:
-
- ```python
- prompt = "You are a helpful assistant. Tell me a recipe for vegan banana bread.\n"
- response = llama_generate(
-     model,
-     tokenizer,
-     prompt,
-     max_new_tokens=500,
-     temperature=0.92,
- )
- print(response)
- ```
-
- ## Disclaimer
-
- The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
-
- ## Citation for GLoRA
-
- ```
- @misc{chavan2023oneforall,
-       title={One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning},
-       author={Arnav Chavan and Zhuang Liu and Deepak Gupta and Eric Xing and Zhiqiang Shen},
-       year={2023},
-       eprint={2306.07967},
-       archivePrefix={arXiv},
-       primaryClass={cs.LG}
- }
- ```
-
- ---
 
+
  # llama-7b-glora 🦙

  This model was built via parameter-efficient GLoRA finetuning of [llama-7b](https://huggingface.co/huggyllama/llama-7b) on the shareGPT dataset. We adapt only the attention layers using GLoRA.

+ Model license: This model is under a non-commercial license (see the LICENSE file), the same as LLaMA.
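
Since the trimmed card above no longer includes any usage code, a minimal loading sketch follows for reference. It is an illustration only: the repo id and bfloat16 dtype are carried over from the removed snippet, and the bias check mirrors the removed Model Description note that the attention layers carry bias terms (recent transformers releases expose this through `LlamaConfig.attention_bias`).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: repo id and dtype are copied from the removed README snippet.
model = AutoModelForCausalLM.from_pretrained(
    "MBZUAI-LLM/LLaMA-7B-GLoRA-ShareGPT",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("MBZUAI-LLM/LLaMA-7B-GLoRA-ShareGPT")

# GLoRA adapted only the attention layers here; after merging, the q/k/v/o
# projections are plain nn.Linear modules. If the checkpoint keeps the attention
# bias described in the removed Model Description, these bias tensors are populated.
attn = model.model.layers[0].self_attn
for name in ("q_proj", "k_proj", "v_proj", "o_proj"):
    print(name, "bias loaded:", getattr(attn, name).bias is not None)
```

If the checks print False, the bias tensors were not picked up by the installed transformers version, and a release whose `LlamaConfig` supports `attention_bias` may be required.
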
config.json CHANGED
@@ -1,5 +1,5 @@
  {
-   "_name_or_path": "/l/users/arnav.chavan/merge_glora",
    "architectures": [
      "LlamaForCausalLM"
    ],

  {
+   "_name_or_path": "glora_llama2",
    "architectures": [
      "LlamaForCausalLM"
    ],