Spaces: Norod78/OpenELM_3B_Demo (Running on Zero)
README.md CHANGED
@@ -1,12 +1,51 @@
 ---
-title: OpenELM
-emoji:
-colorFrom:
-colorTo:
+title: Apple OpenELM-3B
+emoji: 🍎
+colorFrom: green
+colorTo: red
 sdk: gradio
 sdk_version: 4.28.2
 app_file: app.py
 pinned: false
+license: other
+suggested_hardware: t4-small
 ---
 
-
+# Apple OpenELM Models
+
+OpenELM was introduced in [this paper](https://arxiv.org/abs/2404.14619v1).
+
+This Space demonstrates [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) from Apple. Please check the original model card for details.
+You can see the other models of the OpenELM family [here](https://huggingface.co/apple/OpenELM).
+
+# The following information was taken "as is" from the original model card
+
+## Bias, Risks, and Limitations
+
+The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
+
+## Citation
+
+If you find our work useful, please cite:
+
+```bibtex
+@article{mehtaOpenELMEfficientLanguage2024,
+  title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open}-source {Training} and {Inference} {Framework}},
+  shorttitle = {{OpenELM}},
+  url = {https://arxiv.org/abs/2404.14619v1},
+  language = {en},
+  urldate = {2024-04-24},
+  journal = {arXiv.org},
+  author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
+  month = apr,
+  year = {2024},
+}
+
+@inproceedings{mehta2022cvnets,
+  author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
+  title = {CVNets: High Performance Library for Computer Vision},
+  year = {2022},
+  booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
+  series = {MM '22}
+}
+```
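
For readers who want to try the model outside this Space, here is a minimal loading sketch based on the app.py added in this commit; it assumes a CUDA GPU and access to the gated [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) repository, which OpenELM uses as its tokenizer:

```python
# Minimal sketch mirroring app.py below; assumes a CUDA GPU and access to the
# gated meta-llama/Llama-2-7b-hf repo, which provides OpenELM's tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-3B",
    device_map="auto",
    trust_remote_code=True,   # OpenELM ships custom modeling code
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "Once upon a time there was"  # illustrative prompt
input_ids = tokenizer([prompt], return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```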
app.py ADDED
@@ -0,0 +1,148 @@
+import os
+from threading import Thread
+from typing import Iterator
+
+import gradio as gr
+import spaces
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
+
+MAX_MAX_NEW_TOKENS = 1024
+DEFAULT_MAX_NEW_TOKENS = 256
+MAX_INPUT_TOKEN_LENGTH = 512  # prompts longer than this are trimmed from the left
+
+DESCRIPTION = """\
+# OpenELM-3B
+
+This Space demonstrates [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) by Apple. Please check the original model card for details.
+You can see the other models of the OpenELM family [here](https://huggingface.co/apple/OpenELM).
+The following Colab notebooks are available:
+* [OpenELM-3B (GPU)](https://gist.github.com/Norod/4f11bb36bea5c548d18f10f9d7ec09b0)
+* [OpenELM-270M (CPU)](https://gist.github.com/Norod/5a311a8e0a774b5c35919913545b7af4)
+
+You might also be interested in checking out Apple's [CoreNet GitHub page](https://github.com/apple/corenet?tab=readme-ov-file).
+
+If you duplicate this Space, make sure you have access to [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf),
+because this model uses it as a tokenizer.
+
+Note: While the user interface is that of a chatbot for convenience, this model is the base
+model and is not fine-tuned for chatbot or instruction-following tasks. As such,
+the model is not given the chat history and generates text based only on the last prompt.
+"""
+
+LICENSE = """
+<p/>
+
+---
+As a derivative work of [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) by Apple,
+this demo is governed by the original [license](https://huggingface.co/apple/OpenELM-3B/blob/main/LICENSE).
+"""
+
+if not torch.cuda.is_available():
+    DESCRIPTION += "\n<p>Running on CPU 🥶 This demo does not work on CPU.</p>"
+
+
+if torch.cuda.is_available():
+    model_id = "apple/OpenELM-3B"
+    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True, low_cpu_mem_usage=True)
+    tokenizer_id = "meta-llama/Llama-2-7b-hf"  # OpenELM reuses the Llama-2 tokenizer
+    tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
+    if tokenizer.pad_token is None:
+        tokenizer.pad_token = tokenizer.eos_token
+        tokenizer.pad_token_id = tokenizer.eos_token_id
+
+@spaces.GPU
+def generate(
+    message: str,
+    chat_history: list[tuple[str, str]],
+    max_new_tokens: int = 1024,
+    temperature: float = 0.6,
+    top_p: float = 0.9,
+    top_k: int = 50,
+    repetition_penalty: float = 1.4,
+) -> Iterator[str]:
+
+    input_ids = tokenizer([message], return_tensors="pt").input_ids
+    if input_ids.shape[1] > MAX_INPUT_TOKEN_LENGTH:
+        input_ids = input_ids[:, -MAX_INPUT_TOKEN_LENGTH:]  # keep only the most recent tokens
+        gr.Warning(f"Trimmed input from conversation as it was longer than {MAX_INPUT_TOKEN_LENGTH} tokens.")
+    input_ids = input_ids.to(model.device)
+
+    streamer = TextIteratorStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)
+    generate_kwargs = dict(
+        input_ids=input_ids,
+        streamer=streamer,
+        max_new_tokens=max_new_tokens,
+        do_sample=True,
+        top_p=top_p,
+        top_k=top_k,
+        temperature=temperature,
+        num_beams=1,
+        repetition_penalty=repetition_penalty,
+    )
+    t = Thread(target=model.generate, kwargs=generate_kwargs)  # run generation in the background
+    t.start()
+
+    outputs = []
+    for text in streamer:  # stream partial output as it is decoded
+        outputs.append(text)
+        yield "".join(outputs)
+
+
+chat_interface = gr.ChatInterface(
+    fn=generate,
+    additional_inputs=[
+        gr.Slider(
+            label="Max new tokens",
+            minimum=1,
+            maximum=MAX_MAX_NEW_TOKENS,
+            step=1,
+            value=DEFAULT_MAX_NEW_TOKENS,
+        ),
+        gr.Slider(
+            label="Temperature",
+            minimum=0.1,
+            maximum=4.0,
+            step=0.1,
+            value=0.6,
+        ),
+        gr.Slider(
+            label="Top-p (nucleus sampling)",
+            minimum=0.05,
+            maximum=1.0,
+            step=0.05,
+            value=0.9,
+        ),
+        gr.Slider(
+            label="Top-k",
+            minimum=1,
+            maximum=1000,
+            step=1,
+            value=50,
+        ),
+        gr.Slider(
+            label="Repetition penalty",
+            minimum=1.0,
+            maximum=2.0,
+            step=0.05,
+            value=1.4,
+        ),
+    ],
+    stop_btn=None,
+    examples=[
+        ["A recipe for a chocolate cake:"],
+        ["Can you briefly explain what the Python programming language is?"],
+        ["Explain the plot of Cinderella in a sentence."],
+        ["Question: What is the capital of France?\nAnswer:"],
+        ["Question: I am very tired, what should I do?\nAnswer:"],
+    ],
+)
+
+with gr.Blocks(css="style.css") as demo:
+    gr.Markdown(DESCRIPTION)
+    gr.DuplicateButton(value="Duplicate Space for private use", elem_id="duplicate-button")
+    chat_interface.render()
+    gr.Markdown(LICENSE)
+
+if __name__ == "__main__":
+    demo.queue(max_size=20).launch()
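
The pattern worth noting in `generate` above: `model.generate` is launched on a background `Thread` while `TextIteratorStreamer` yields decoded text on the caller's thread, which is what lets the Gradio UI render tokens as they arrive. A stripped-down sketch of just that pattern (using `gpt2` as a small placeholder model rather than OpenELM, so the example runs anywhere):

```python
# Standalone sketch of the streaming pattern used in app.py; "gpt2" is a small
# placeholder model chosen only so this example runs even on CPU.
from threading import Thread

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok(["The quick brown fox"], return_tensors="pt")
streamer = TextIteratorStreamer(tok, skip_prompt=True, skip_special_tokens=True)

# generate() blocks, so it runs in the background; the streamer is the channel
# through which decoded chunks reach the consuming loop below.
Thread(target=lm.generate, kwargs=dict(**inputs, streamer=streamer, max_new_tokens=32)).start()

for chunk in streamer:  # iteration blocks until the next decoded chunk arrives
    print(chunk, end="", flush=True)
```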
requirements.txt ADDED
@@ -0,0 +1,9 @@
+accelerate==0.28.0
+bitsandbytes==0.43.0
+gradio==4.28.2
+scipy==1.12.0
+sentencepiece==0.2.0
+spaces==0.26.2
+torch==2.1.1
+transformers==4.40.1
+tokenizers==0.19.1
style.css ADDED
@@ -0,0 +1,17 @@
+h1 {
+  text-align: center;
+  display: block;
+}
+
+#duplicate-button {
+  margin: auto;
+  color: white;
+  background: #1565c0;
+  border-radius: 100vh;
+}
+
+.contain {
+  max-width: 900px;
+  margin: auto;
+  padding-top: 1.5rem;
+}