alokabhishek committed on
Commit 63dc88a
1 Parent(s): 67f9ebc

added code snippet for how to use

Files changed (1)
README.md +37 -142
README.md CHANGED
@@ -24,30 +24,52 @@ Hugging Face Blog post on 4-bit quantization using bitsandbytes: [Making LLMs ev
 
 bitsandbytes github repo: [bitsandbytes github repo](https://github.com/TimDettmers/bitsandbytes)
 
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
 
  ## Uses
 
@@ -77,134 +99,7 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 
 [More Information Needed]
 
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
 
  ## Model Card Authors [optional]
 
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ ## How to run from Python code
+
+ ### First install the packages
+ ```shell
+ pip install -q -U bitsandbytes accelerate torch huggingface_hub
+ pip install -q -U git+https://github.com/huggingface/transformers.git # install the latest transformers from source
+ pip install -q -U git+https://github.com/huggingface/peft.git
+ pip install flash-attn --no-build-isolation # optional; the pipeline example below does not require it
+ ```
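+
+ Note: the bitsandbytes 4-bit kernels (and flash-attn) run on CUDA GPUs. Before loading the model, a quick sanity check can save a confusing error later; this is a minimal sketch assuming a standard PyTorch install:
+
+ ```python
+ import torch
+
+ # 4-bit bitsandbytes inference needs a CUDA device;
+ # this should print True before you try to load the model.
+ print(torch.cuda.is_available())
+ ```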
+
+ ### Import
+
+ ```python
+ # BitsAndBytesConfig is only needed if you quantize a float checkpoint yourself
+ # (see the sketch further below); the pipeline example loads an already-quantized model.
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, BitsAndBytesConfig
+ ```
+
+ ### Use a pipeline as a high-level helper
+
+ ```python
+ model_id_llama = "alokabhishek/Llama-2-7b-chat-hf-bnb-4bit"
+
+ tokenizer_llama = AutoTokenizer.from_pretrained(model_id_llama, use_fast=True)
+
+ # The checkpoint is already quantized to 4-bit with bitsandbytes, so no
+ # quantization_config is needed; device_map="auto" places the weights on
+ # the available GPU(s).
+ model_llama = AutoModelForCausalLM.from_pretrained(
+     model_id_llama,
+     device_map="auto"
+ )
+
+ pipe_llama = pipeline(model=model_llama, tokenizer=tokenizer_llama, task="text-generation")
+
+ prompt_llama = "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
+
+ output_llama = pipe_llama(prompt_llama, max_new_tokens=512)
+
+ print(output_llama[0]["generated_text"])
+ ```
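+
+ ### Quantize the original checkpoint yourself (optional)
+
+ The repo above already ships 4-bit weights, so the pipeline example needs no quantization config. If you would rather quantize the float checkpoint on the fly, a minimal sketch follows; the NF4/double-quantization/bfloat16 settings are common bitsandbytes choices, not necessarily the exact configuration used for this model, and the base model id is assumed to be the gated meta-llama checkpoint:
+
+ ```python
+ quant_config = BitsAndBytesConfig(
+     load_in_4bit=True,                       # store weights in 4-bit
+     bnb_4bit_quant_type="nf4",               # assumed: NF4 quantization
+     bnb_4bit_use_double_quant=True,          # assumed: nested quantization
+     bnb_4bit_compute_dtype=torch.bfloat16,   # assumed: bf16 compute dtype
+ )
+
+ model_4bit = AutoModelForCausalLM.from_pretrained(
+     "meta-llama/Llama-2-7b-chat-hf",         # assumed base checkpoint (gated; requires access)
+     quantization_config=quant_config,
+     device_map="auto",
+ )
+ ```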