kael558 committed on
Commit ec2c4c5
Parent: c14c3b9

Add application file

Files changed (11)
  1. .gitignore +32 -0
  2. Dockerfile +35 -0
  3. LICENSE +201 -0
  4. README.md +66 -12
  5. app.py +709 -0
  6. lama_predict.py +103 -0
  7. lama_server.py +84 -0
  8. llava_interactive.py +705 -0
  9. requirements.txt +51 -0
  10. run_demo.sh +38 -0
  11. setup.sh +35 -0
.gitignore ADDED
@@ -0,0 +1,32 @@
+ # Python
+ __pycache__
+ *.pyc
+ *.egg-info
+ dist
+
+ # Log
+ *.log
+ *.log.*
+ *.json
+ *.jsonl
+
+ # Data
+ !**/alpaca-data-conversation.json
+ *.png
+ *.jpg
+
+ # Editor
+ .idea
+ *.swp
+
+ # Other
+ .DS_Store
+ wandb
+ output
+
+ checkpoints
+ ckpts*
+ *.pt
+
+ .ipynb_checkpoints
+ *.ipynb
Dockerfile ADDED
@@ -0,0 +1,35 @@
+ # Use the NVIDIA CUDA image as the base image
+ FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04
+
+ # Install dependencies
+ RUN apt-get update && apt-get install -y wget git
+
+ # Download and install Miniconda
+ RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
+     bash Miniconda3-latest-Linux-x86_64.sh -b -p /opt/conda && \
+     rm Miniconda3-latest-Linux-x86_64.sh
+
+ # Add conda to PATH
+ ENV PATH /opt/conda/bin:$PATH
+
+ # Clone the LLaVA Interactive Demo repository
+ RUN git clone https://github.com/LLaVA-VL/LLaVA-Interactive-Demo.git
+
+ # Create a conda environment for the LLaVA Interactive Demo
+ RUN conda create -n llava_int -c conda-forge -c pytorch python=3.10.8 pytorch=2.0.1 -y
+
+ # Run all subsequent commands inside the conda environment
+ SHELL ["conda", "run", "-n", "llava_int", "/bin/bash", "-c"]
+
+ # Work from the LLaVA Interactive Demo directory
+ WORKDIR /LLaVA-Interactive-Demo
+
+ # Install Python dependencies
+ RUN pip install -r requirements.txt
+
+ # Run the setup script (uses `source`, which works because SHELL above is bash)
+ RUN source setup.sh
+
+ # Run the demo by default when the container starts
+ CMD ["./run_demo.sh"]
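
A hedged usage sketch for this Dockerfile. The image tag is hypothetical, GPU passthrough assumes the NVIDIA container toolkit is installed, and port 7860 is only Gradio's default (`demo.launch()` in app.py does not pin a port):

```bash
# Build the image; the tag name "llava-interactive" is arbitrary
docker build -t llava-interactive .

# Run with GPU access; 7860 assumes the Gradio default port
docker run --gpus all -p 7860:7860 llava-interactive
```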
LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
README.md CHANGED
@@ -1,12 +1,66 @@
- ---
- title: Llava
- emoji: 🐠
- colorFrom: blue
- colorTo: gray
- sdk: gradio
- sdk_version: 4.1.2
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+
+ # 🌋 LLaVA-Interactive
+
+ *An All-in-One Demo for Image Chat, Segmentation and Generation/Editing.*
+
+ [[Project Page](https://llava-vl.github.io/llava-interactive/)] [[Demo](https://llavainteractive.ngrok.io/)] [[Paper](https://arxiv.org/abs/2311.00571)]
+
+ <p align="center">
+ <img src="https://github.com/LLaVA-VL/llava-interactive/blob/main/images/llava_interactive_logo.png" width="45%">
+ <br>
+ </p>
+
+ # Install
+
+ Installing this project requires CUDA 11.7 or above. Follow the steps below:
+
+ ```bash
+ git clone https://github.com/LLaVA-VL/LLaVA-Interactive-Demo.git
+ conda create -n llava_int -c conda-forge -c pytorch python=3.10.8 pytorch=2.0.1 -y
+ conda activate llava_int
+ cd LLaVA-Interactive-Demo
+ pip install -r requirements.txt
+ source setup.sh
+ ```
+
+ # Run the demo
+
+ To run the demo, simply run the shell script:
+
+ ```bash
+ ./run_demo.sh
+ ```
+
+ <p align="center">
+ <img src="https://github.com/LLaVA-VL/llava-interactive/blob/main/images/llava_interactive_workflow.png" width="50%">
+ <br>
+ </p>
+
+ # Citation
+
+ If you find LLaVA-Interactive useful for your research and applications, please cite using this BibTeX:
+
+ ```bibtex
+ @article{chen2023llava_interactive,
+   author    = {Chen, Wei-Ge and Spiridonova, Irina and Yang, Jianwei and Gao, Jianfeng and Li, Chunyuan},
+   title     = {LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing},
+   publisher = {arXiv:2311.00571},
+   year      = {2023}
+ }
+ ```
+
+ # Related Projects
+
+ - [LLaVA: Large Language and Vision Assistant](https://github.com/haotian-liu/LLaVA)
+ - [SEEM: Segment Everything Everywhere All at Once](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)
+ - [GLIGEN: Open-Set Grounded Text-to-Image Generation](https://github.com/gligen/GLIGEN)
+
+ # Acknowledgement
+
+ - [LaMa](https://github.com/advimman/lama): the tool we use to fill background holes in images.
+
+ # Terms of use
+
+ By using this service, users are required to agree to the following terms: the service is a research preview intended for non-commercial use only. It provides only limited safety measures and may generate offensive content. It must not be used for any illegal, harmful, violent, racist, or sexual purposes. The service may collect user dialogue data for future research. For an optimal experience, please use a desktop computer, as mobile devices may compromise the demo's quality.
+
+ # License
+
+ The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
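
Since the install above hinges on a CUDA-enabled PyTorch build, a quick hedged sanity check after `source setup.sh` (assuming the `llava_int` environment is active; `torch.version.cuda` reports the toolkit the wheel was built against):

```bash
# Should print the torch version, its CUDA toolkit version, and True
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```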
app.py ADDED
@@ -0,0 +1,709 @@
+ import argparse
+ import base64
+ import io
+ import os
+ import sys
+
+ import cv2
+ import gradio as gr
+ import numpy as np
+ import requests
+ from functools import partial
+ from PIL import Image, ImageOps
+
+ sys.path.append(os.path.join(os.environ['LLAVA_INTERACTIVE_HOME'], 'GLIGEN/demo'))
+ import GLIGEN.demo.app as GLIGEN
+ sys.path.append(os.path.join(os.environ['LLAVA_INTERACTIVE_HOME'], 'SEEM/demo_code'))
+ import SEEM.demo_code.app as SEEM  # must import GLIGEN before this, otherwise it hits a protobuf error
+ sys.path.append(os.path.join(os.environ['LLAVA_INTERACTIVE_HOME'], 'LLaVA'))
+ import LLaVA.llava.serve.gradio_web_server as LLAVA
+
+ class ImageMask(gr.components.Image):
+     """
+     Sets: source="upload", tool="sketch"
+     """
+
+     is_template = True
+
+     def __init__(self, **kwargs):
+         super().__init__(source="upload", tool="sketch", interactive=True, **kwargs)
+
+     def preprocess(self, x):
+         if isinstance(x, str):
+             x = {'image': x, 'mask': x}
+         elif isinstance(x, dict):
+             if (x['mask'] is None and x['image'] is None):
+                 pass
+             elif (x['image'] is None):
+                 x['image'] = str(x['mask'])
+             elif (x['mask'] is None):
+                 x['mask'] = str(x['image'])  # not sure why mask is None sometimes; this prevents preprocess() from crashing
+         elif x is not None:
+             assert False, 'Unexpected type {0} in ImageMask preprocess()'.format(type(x))
+
+         return super().preprocess(x)
+
+ css = """
+ #compose_btn {
+   --tw-border-opacity: 1;
+   border-color: rgb(255 216 180 / var(--tw-border-opacity));
+   --tw-gradient-from: rgb(255 216 180 / .7);
+   --tw-gradient-to: rgb(255 216 180 / 0);
+   --tw-gradient-stops: var(--tw-gradient-from), var(--tw-gradient-to);
+   --tw-gradient-to: rgb(255 176 102 / .8);
+   --tw-text-opacity: 1;
+   color: rgb(238 116 0 / var(--tw-text-opacity));
+ }
+ """
+
+ def get_bounding_box(img):
+     # Get the indices of all non-zero pixels
+     if not np.any(img):  # protect against an empty img
+         return None
+     non_zero_indices = np.nonzero(img)
+
+     # Get the minimum and maximum indices for each axis
+     min_x = np.min(non_zero_indices[1])
+     max_x = np.max(non_zero_indices[1])
+     min_y = np.min(non_zero_indices[0])
+     max_y = np.max(non_zero_indices[0])
+
+     # Return the bounding box as a tuple of (min_x, min_y, max_x, max_y)
+     return (min_x, min_y, max_x, max_y)
+
+ def composite_all_layers(base, objects):  # debugging use only
+     img = base.copy()
+     for obj in objects:
+         for i in range(obj['img'].shape[0]):
+             for j in range(obj['img'].shape[1]):
+                 if obj['img'][i, j, 3] != 0:
+                     img[i, j] = obj['img'][i, j]
+     return img
+
+ def changed_objects_handler(mask_dilate_slider, state, evt: gr.SelectData):
+     state['move_no'] += 1
+
+     pos_x, pos_y = evt.index  # an object moved out of the scene is signaled by (10000, 10000)
+     obj_id = 255 - evt.value
+     print(f"obj {obj_id} moved by {pos_x}, {pos_y}")
+
+     img = state['base_layer']
+     for obj in state['changed_objects']:
+         if obj['id'] == obj_id:
+             img = obj['img']
+             state['changed_objects'].remove(obj)
+             break
+
+     new_img = np.zeros_like(img)
+     bbox = None
+     for i in range(img.shape[0]):
+         for j in range(img.shape[1]):
+             if img[i, j, 3] == obj_id:
+                 new_i = i + pos_y
+                 new_j = j + pos_x
+                 if new_i >= 0 and new_i < img.shape[0] and new_j >= 0 and new_j < img.shape[1]:
+                     new_img[new_i, new_j] = img[i, j]
+                 img[i, j] = 0
+
+     bbox = get_bounding_box(new_img)  # returns None if the object moved out of the scene
+     print("bbox: ", bbox)
+     state['changed_objects'].append({'id': obj_id, 'img': new_img, 'text': state['segment_info'][obj_id], 'box': bbox})
+
+     # Enable for debugging only, to check that the composited image is correct:
+     # composed_img_updated = composite_all_layers(state['base_layer'], state['changed_objects'])
+     # filename = str(f"composited_image_{state['move_no']}") + ".png"
+     # cv2.imwrite(filename, composed_img_updated[:, :, 0:3])
+
+     return mask_dilate_slider, state['base_layer_masked'], state
+
+ def get_base_layer_mask(state):
+
+     changed_obj_id = []
+     for obj in state['changed_objects']:
+         changed_obj_id.append(obj['id'])
+
+     # union of the masks of all changed objects
+     img = state['original_segmented']
+     mask = np.zeros(img.shape[:2], dtype=np.uint8)
+     for i in range(img.shape[0]):
+         for j in range(img.shape[1]):
+             if img[i, j, 3] in changed_obj_id:
+                 mask[i, j] = 255
+     state['base_layer_mask'] = mask
+
+     mask_image = Image.fromarray(mask)
+     if (mask_image.mode != "L"):
+         mask_image = mask_image.convert("L")
+     mask_image = ImageOps.invert(mask_image)
+     #mask_image.save("mask_image.png")
+
+     img = state['original_segmented']
+     orig_image = Image.fromarray(img[:, :, :3])
+     orig_image.save("orig_image.png")
+     transparent = Image.new(orig_image.mode, orig_image.size, (0, 0, 0, 0))
+     masked_image = Image.composite(orig_image, transparent, mask_image)
+     #masked_image.save("get_masked_background_image.png")
+
+     return masked_image, state
+
+ def get_inpainted_background(state, mask_dilate_slider):
+
+     # Define the URL of the REST API endpoint (the LaMa server, see lama_server.py)
+     url = "http://localhost:9171/api/v2/image"
+
+     img = state['original_segmented']
+     if not isinstance(img, Image.Image):
+         img = Image.fromarray(img)
+     # Create a BytesIO object and save the image there
+     buffer = io.BytesIO()
+     img.save(buffer, format="PNG")
+     # Get the bytes value from the buffer
+     img_bytes = buffer.getvalue()
+
+     encoded_string = base64.b64encode(img_bytes).decode("utf-8")
+
+     if (mask_dilate_slider != 0):
+         mask = state['base_layer_mask_enlarged']
+     else:
+         mask = state['base_layer_mask']
+     if not isinstance(mask, Image.Image):
+         mask = Image.fromarray(mask)
+
+     # The mask has the background as 1; LaMa needs the object to be 1
+     if (mask.mode != "L"):
+         mask = mask.convert("L")
+     mask = ImageOps.invert(mask)
+
+     # Create a BytesIO object and save the mask there
+     buffer = io.BytesIO()
+     mask.save(buffer, format="PNG")
+     # Get the bytes value from the buffer
+     mask_bytes = buffer.getvalue()
+
+     encoded_string_mask = base64.b64encode(mask_bytes).decode("utf-8")
+
+     # Create a POST request to the endpoint
+     headers = {"Content-Type": "application/json"}
+     data = {"image": encoded_string, "mask": encoded_string_mask}
+     response = requests.post(url, headers=headers, json=data)
+
+     # Check the status code of the response
+     image = None
+     if response.status_code == 200:
+         # The request was successful
+         print("Image received successfully")
+         image_data = response.content
+         # Create an io.BytesIO object from the image data
+         dataBytesIO = io.BytesIO(image_data)
+         # Open the image using Image.open()
+         image = Image.open(dataBytesIO)
+         #image.save("lama_returned_image.png")
+     else:
+         # The request failed; image stays None
+         print("Error: HTTP status code {}".format(response.status_code))
+         print(response.text)
+
+     return image
+
+ def get_enlarged_masked_background(state, mask_dilate_slider):
+
+     mask = state['base_layer_mask']
+
+     kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (mask_dilate_slider, mask_dilate_slider))
+     mask_dilated = cv2.dilate(mask, kernel)
+
+     # mask the original
+     mask_image = Image.fromarray(mask_dilated)
+     if (mask_image.mode != "L"):
+         mask_image = mask_image.convert("L")
+     mask_image = ImageOps.invert(mask_image)
+     state['base_layer_mask_enlarged'] = mask_image
+     #mask_image.save("enlarged_mask_image.png")
+
+     img = state['original_segmented']
+     orig_image = Image.fromarray(img[:, :, :3])
+     transparent = Image.new(orig_image.mode, orig_image.size, (0, 0, 0, 0))
+     masked_image = Image.composite(orig_image, transparent, mask_image)
+     #masked_image.save("enlarged_masked_background_image.png")
+
+     return masked_image, state
+
+ def get_base_layer_inpainted(state, mask_dilate_slider):
+     masked_img, state = get_enlarged_masked_background(state, mask_dilate_slider)
+     inpainted_img = get_inpainted_background(state, mask_dilate_slider)
+     state['base_layer_inpainted'] = np.array(inpainted_img)
+     return masked_img, inpainted_img, state
+
+ def log_image_and_mask(img, mask):  # for debugging use only
+     counter = 0
+     for filename in os.listdir('.'):
+         if filename.startswith('img_') and filename.endswith('.png'):
+             try:
+                 num = int(filename[4:-4])
+                 if num > counter:
+                     counter = num
+             except ValueError:
+                 pass
+     counter += 1
+     cv2.imwrite(f"img_{counter}.png", img)
+     cv2.imwrite(f"img_{counter}_mask.png", mask.astype(np.uint8) * 255)
+
+ def get_segments(img, task, reftxt, mask_dilate_slider, state):
+     assert (isinstance(state, dict))
+     state['original_segmented'] = None
+     state['base_layer'] = None
+     state['base_layer_masked'] = None
+     state['base_layer_mask'] = None
+     state['base_layer_mask_enlarged'] = None
+     state['base_layer_inpainted'] = None
+     state['segment_info'] = None
+     state['seg_boxes'] = {}
+     state['changed_objects'] = []
+     state['move_no'] = 0
+
+     print("Calling SEEM_app.inference")
+
+     if isinstance(img['image'], np.ndarray):
+         pil_image = Image.fromarray(img['image'])
+         if isinstance(img['mask'], np.ndarray):
+             pil_mask = Image.fromarray(img['mask'])
+         img = {'image': pil_image, 'mask': pil_mask}
+     img_ret, seg_info = SEEM.inference(img, task, reftxt=reftxt)
+     # SEEM doesn't always respect the input image dimensions
+     tgt_size = (img['image'].width, img['image'].height)
+     img_ret = img_ret.resize(tgt_size, resample=Image.Resampling.NEAREST)
+     state['original_segmented'] = np.array(img_ret).copy()
+     state['base_layer'] = np.array(img_ret)
+     state['segment_info'] = seg_info
+     img_ret_array = np.array(img_ret)
+     img_ret_array[:, :, 3] = 255 - img_ret_array[:, :, 3]
+     # NOTE: if written out as a PNG, the pixel values get messed up -- the same reason the client-side colors look weird.
+     # cv2.imwrite(f"get_segments_img_ret.bmp", img_ret_array)
+
+     for obj_id, label in seg_info.items():
+         obj_img = (img_ret_array[:, :, 3] == 255 - obj_id)
+         # cv2.imwrite(f"img_{obj_id}.png", obj_img.astype(np.uint8) * 255)
+         # log_image_and_mask(np.array(img['image']), obj_img)
+         bbox = get_bounding_box(obj_img)
+         print(f"obj_id={obj_id}, label={label}, bbox={bbox}")
+         state['seg_boxes'][obj_id] = bbox
+
+     # Add a special event: the object stays at its original spot
+     data = {}
+     data["index"] = (0, 0)
+     data["value"] = 254  # ==> 1, the only object allowed for now
+     data["selected"] = True
+     evt = gr.SelectData(None, data)
+     mask_dilate_slider, _, state = changed_objects_handler(mask_dilate_slider, state, evt)
+
+     state['base_layer_masked'], state = get_base_layer_mask(state)
+     enlarged_masked_background = None  # stays None when the slider is 0
+     if (mask_dilate_slider != 0):
+         enlarged_masked_background, state = get_enlarged_masked_background(state, mask_dilate_slider)
+         state['base_layer_inpainted'] = np.array(get_inpainted_background(state, mask_dilate_slider))
+
+     return Image.fromarray(img_ret_array), enlarged_masked_background, state['base_layer_inpainted'], state
+
+ def get_generated(grounding_text, fix_seed, rand_seed, state):
+
+     if 'base_layer_inpainted' not in state:
+         raise gr.Error('The segmentation step must be completed before generating a new image')
+
+     inpainted_background_img = state['base_layer_inpainted']
+     assert inpainted_background_img is not None, 'base layer should be inpainted after segmentation'
+
+     state['boxes'] = []
+     for items in state['changed_objects']:
+         if items['box'] is not None:
+             state['boxes'].append(items['box'])
+
+     if (len(state['boxes']) == 0):
+         if (len(grounding_text) != 0):
+             grounding_text = []
+             print("No grounding box found. Grounding text will be ignored.")
+         return inpainted_background_img.copy(), state
+
+     print('Calling GLIGEN_app.generate')
+     print('grounding_text: ', grounding_text)
+     print(state['boxes'], len(state['boxes']))
+     assert len(state['boxes']) == 1, 'Only handle one segmented object at a time'
+     if (len(grounding_text) == 0):  # most likely the user forgot to drag the object and didn't provide grounding text
+         raise gr.Error('Please provide grounding text to match the identified object')
+     out_gen_1, _, _, _, state = GLIGEN.generate(task='Grounded Inpainting', language_instruction='',
+         grounding_texts=grounding_text, sketch_pad=inpainted_background_img,
+         alpha_sample=0.3, guidance_scale=7.5, batch_size=1,
+         fix_seed=fix_seed, rand_seed=rand_seed, use_actual_mask=False, append_grounding=True,
+         style_cond_image=None, inpainting_image=inpainted_background_img, inpainting_mask=None, state=state)
+
+     return out_gen_1['value'], state
+
+ def get_generated_full(task, language_instruction, grounding_instruction, sketch_pad,
+                        alpha_sample, guidance_scale, batch_size,
+                        fix_seed, rand_seed,
+                        use_actual_mask,
+                        append_grounding, style_cond_image,
+                        state):
+
+     out_gen_1, _, _, _, state = GLIGEN.generate(
+         task, language_instruction, grounding_instruction, sketch_pad,
+         alpha_sample, guidance_scale, batch_size,
+         fix_seed, rand_seed,
+         use_actual_mask,
+         append_grounding, style_cond_image,
+         state)
+     return out_gen_1['value'], state
+
+ def gligen_change_task(state):
+     if (state['working_image'] is not None):
+         task = "Grounded Inpainting"
+     else:
+         task = "Grounded Generation"
+     return task
+
+ def clear_sketch_pad_mask(sketch_pad_image):
+     sketch_pad = ImageMask.update(value=sketch_pad_image, visible=True)
+     return sketch_pad
+
+ def save_shared_state(img, state):
+     if (isinstance(img, dict) and 'image' in img):
+         state['working_image'] = img['image']
+     else:
+         state['working_image'] = img
+     return state
+
+ def load_shared_state(state, task=None):
+     if (task == "Grounded Generation"):
+         return None
+     else:
+         return state['working_image']
+
+ def update_shared_state(state, task):
+     if (task == "Grounded Generation"):
+         state['working_image'] = None
+     return state
+
+ def update_sketch_pad_trigger(sketch_pad_trigger, task):
+     if (task == "Grounded Generation"):
+         sketch_pad_trigger = sketch_pad_trigger + 1
+     return sketch_pad_trigger
+
+ def clear_grounding_info(state):
+     state['boxes'] = []
+     state['masks'] = []
+     return state, ''
+
+ def switch_to_generate():
+     task = "Grounded Generation"
+     return task, gr.Image.update(visible=True), gr.Textbox.update(visible=True), gr.Textbox.update(visible=True), gr.Button.update(visible=True), gr.Button.update(visible=True), gr.Accordion.update(visible=True)
+
+ def switch_to_inpaint():
+     task = "Grounded Inpainting"
+     return task, gr.Image.update(visible=True), gr.Textbox.update(visible=False), gr.Textbox.update(visible=True), gr.Button.update(visible=True), gr.Button.update(visible=True), gr.Accordion.update(visible=True)
+
+ def switch_to_compose():
+     task = "Compose"
+     return task, gr.Image.update(visible=False), gr.Textbox.update(visible=False), gr.Textbox.update(visible=False), gr.Button.update(visible=False), gr.Button.update(visible=False), gr.Accordion.update(visible=False)
+
+ def copy_to_llava_input(img):
+     print('WORKING IMAGE CHANGED!!!!')
+     if not isinstance(img, Image.Image):
+         img = Image.fromarray(img)
+     return img
+
+ title_markdown = ("""
+ # <p style="text-align: center;">LLaVA Interactive</p>
+ """)
+
+ def build_demo():
+     demo = gr.Blocks(title="LLaVA Interactive", css=css + GLIGEN.css)
+     with demo:
+         compose_state = gr.State({'boxes': [], 'move_no': 0, 'base_layer': None, 'segment_info': None, 'seg_boxes': {}, 'changed_objects': []})
+         llava_state = gr.State()
+         shared_state = gr.State({'working_image': None})
+         gligen_state = gr.State({'draw_box': True})
+
+         gr.Markdown('<h1 style="text-align: center;"></h1>')
+         gr.Markdown('<h1 style="text-align: center;">LLaVA Interactive</h1>')
+         gr.Markdown('<h1 style="text-align: center;"></h1>')
+
+         gr.Markdown('**Experience interactive multimodal chatting and image manipulation. Select a tab for your task and follow the instructions. Switch tasks anytime and ask questions in the chat window.**')
+
+         with gr.Row(visible=False):
+             working_image = gr.Image(label="Working Image", type="numpy", elem_id="working_image", visible=False, interactive=False)  # hidden image that stores the current working image
+             # for GLIGEN
+             sketch_pad_trigger = gr.Number(value=0, visible=False)
+             sketch_pad_resize_trigger = gr.Number(value=0, visible=False)
+             init_white_trigger = gr.Number(value=0, visible=False)
+             image_scale = gr.Number(value=0, elem_id="image_scale", visible=False)
+             task = gr.Radio(
+                 choices=["Grounded Generation", 'Grounded Inpainting', 'Compose'],
+                 type="value",
+                 value="Grounded Inpainting",
+                 label="Task",
+                 visible=False
+             )
+
+         with gr.Row(equal_height=False):
+             with gr.Column():
+
+                 with gr.Row():
+                     sketch_pad = ImageMask(label="Sketch Pad", type="numpy", shape=(512, 512), width=384, elem_id="img2img_image", brush_radius=20.0, visible=True)
+
+                 compose_tab = gr.Tab("Remove or Change Objects")
+                 with compose_tab:
+                     gr.Markdown("Segment an object by drawing a stroke or giving a referring text, then press the Segment button. Drag the highlighted object to move it; to remove it, drag it out of the frame. To replace it with a new object, give an instruction once the object is removed and press the Generate button until you like the image.")
+                     with gr.Row().style(equal_height=False):
+                         with gr.Column():
+                             with gr.Group():
+                                 with gr.Column():
+                                     with gr.Row():
+                                         segment_task = gr.Radio(["Stroke", "Text"], value="Stroke", label='Choose segmentation method')
+                                         segment_text = gr.Textbox(label="Enter referring text")
+                                     segment_btn = gr.Button("Segment", elem_id="segment-btn")
+
+                             with gr.Group():
+                                 segmented_img = gr.Image(label="Move or delete object", tool="compose", height=256)
+
+                             with gr.Group():
+                                 with gr.Column():
+                                     grounding_text_box = gr.Textbox(label="Enter grounding text for generating a new image")
+                                     with gr.Row():
+                                         compose_clear_btn = gr.Button("Clear", elem_id="compose_clear_btn")
+                                         compose_btn = gr.Button("Generate", elem_id="compose_btn")
+
+                             with gr.Accordion("Advanced Options", open=False):
+                                 with gr.Row():
+                                     masked_background_img = gr.Image(label="Background", type='pil', interactive=False, height=256)
+                                     inpainted_background_img = gr.Image(label="Inpainted Background", type='pil', interactive=False, height=256)
+                                 mask_dilate_slider = gr.Slider(minimum=0.0, maximum=100, value=50, step=2, interactive=True, label="Mask dilation", visible=True, scale=20)
+                                 with gr.Row(visible=False):
+                                     compose_fix_seed = gr.Checkbox(value=False, label="Fixed seed", visible=False)
+                                     compose_rand_seed = gr.Slider(minimum=0, maximum=1000, step=1, value=0, label="Seed", visible=False)
+
+                 gligen_inpaint = gr.Tab("Inpaint New Objects")
+                 with gligen_inpaint:
+                     gr.Markdown("Add a new object to the image by drawing its bounding box and giving an instruction. Press the Generate button repeatedly until you like the image. Press Clear to accept the image and start over with another object.")
+
+                 gligen = gr.Tab("Generate New Image")
+                 with gligen:
+                     gr.Markdown("Generate a new image by giving a language instruction below. Draw a bounding box and give an instruction for any specific objects that need to be grounded in certain places. Hit the Generate button repeatedly until you get the image you want.")
+
+                 with gr.Group(visible=False):
+                     language_instruction = gr.Textbox(label="Language instruction", elem_id='language_instruction', visible=False)
+                     grounding_instruction = gr.Textbox(label="Grounding instruction (Separated by semicolon)", elem_id='grounding_instruction', visible=False)
+                     with gr.Row():
+                         gligen_clear_btn = gr.Button(value='Clear', visible=False)
+                         gligen_gen_btn = gr.Button(value='Generate', elem_id="generate-btn", visible=False)
+
+                 with gr.Group():
+                     out_imagebox = gr.Image(type="pil", label="Parsed Sketch Pad", height=256, visible=False)
+
+                 gligen_adv_options = gr.Accordion("Advanced Options", open=False, visible=False)
+                 with gligen_adv_options:
+                     with gr.Column():
+                         alpha_sample = gr.Slider(minimum=0, maximum=1.0, step=0.1, value=0.3, label="Scheduled Sampling (τ)")
+                         guidance_scale = gr.Slider(minimum=0, maximum=50, step=0.5, value=7.5, label="Guidance Scale")
+
+                     with gr.Row(visible=False):
+                         batch_size = gr.Slider(minimum=1, maximum=4, step=1, value=1, label="Number of Samples", visible=False)
+                         append_grounding = gr.Checkbox(value=True, label="Append grounding instructions to the caption", visible=False)
+                         use_actual_mask = gr.Checkbox(value=False, label="Use actual mask for inpainting", visible=False)
+                         fix_seed = gr.Checkbox(value=False, label="Fixed seed", visible=False)
+                         rand_seed = gr.Slider(minimum=0, maximum=1000, step=1, value=0, label="Seed", visible=False)
+                         use_style_cond = gr.Checkbox(value=False, label="Enable Style Condition", visible=False)
+                         style_cond_image = gr.Image(type="pil", label="Style Condition", visible=False, interactive=False)
+
+                 controller = GLIGEN.Controller()
+                 sketch_pad.edit(
+                     GLIGEN.draw,
+                     inputs=[task, sketch_pad, grounding_instruction, sketch_pad_resize_trigger, gligen_state],
+                     outputs=[out_imagebox, sketch_pad_resize_trigger, image_scale, gligen_state],
+                     queue=False,
+                 )
+                 llava_image = gr.Image(label='sketch_pad_image', type='pil', visible=False, interactive=False)
+                 working_image.change(copy_to_llava_input, [working_image], [llava_image])
+                 sketch_pad.upload(
+                     save_shared_state,
+                     inputs=[sketch_pad, shared_state],
+                     outputs=shared_state).then(
+                     load_shared_state, [shared_state], working_image)
+                 grounding_instruction.change(
+                     GLIGEN.draw,
+                     inputs=[task, sketch_pad, grounding_instruction, sketch_pad_resize_trigger, gligen_state],
+                     outputs=[out_imagebox, sketch_pad_resize_trigger, image_scale, gligen_state],
+                     queue=False,
+                 )
+                 gligen_clear_btn.click(
+                     GLIGEN.clear,
+                     inputs=[task, sketch_pad_trigger, batch_size, gligen_state],
+                     outputs=[sketch_pad, sketch_pad_trigger, out_imagebox, image_scale, gligen_state],
+                     queue=False).then(
+                     clear_grounding_info, gligen_state, [gligen_state, grounding_instruction]).then(
+                     load_shared_state, [shared_state], sketch_pad).then(
+                     update_sketch_pad_trigger, [sketch_pad_trigger, task], sketch_pad_trigger)
+                 task.change(
+                     partial(GLIGEN.clear, switch_task=True),
+                     inputs=[task, sketch_pad_trigger, batch_size, gligen_state],
+                     outputs=[sketch_pad, sketch_pad_trigger, out_imagebox, image_scale, gligen_state],
+                     queue=False).then(
+                     load_shared_state, [shared_state, task], sketch_pad).then(
+                     update_sketch_pad_trigger, [sketch_pad_trigger, task], sketch_pad_trigger).then(
+                     clear_grounding_info, gligen_state, [gligen_state, grounding_instruction])
+                 sketch_pad_trigger.change(
+                     controller.init_white,
+                     inputs=[init_white_trigger],
+                     outputs=[sketch_pad, image_scale, init_white_trigger],
+                     queue=False)
+                 sketch_pad_resize_trigger.change(
+                     controller.resize_masked,
+                     inputs=[gligen_state],
+                     outputs=[sketch_pad, gligen_state],
+                     queue=False)
+
+                 gligen_gen_btn.click(
+                     get_generated_full,
+                     inputs=[
+                         task, language_instruction, grounding_instruction, sketch_pad,
+                         alpha_sample, guidance_scale, batch_size,
+                         fix_seed, rand_seed,
+                         use_actual_mask,
+                         append_grounding, style_cond_image,
+                         gligen_state],
+                     outputs=[sketch_pad, gligen_state],
+                     queue=True).then(
+                     save_shared_state, [sketch_pad, shared_state], shared_state).then(
+                     load_shared_state, [shared_state], working_image)
+
+                 sketch_pad_resize_trigger.change(
+                     None,
+                     None,
+                     sketch_pad_resize_trigger,
+                     _js=GLIGEN.rescale_js,
+                     queue=False)
+                 init_white_trigger.change(
+                     None,
+                     None,
+                     init_white_trigger,
+                     _js=GLIGEN.rescale_js,
+                     queue=False)
+                 use_style_cond.change(
+                     lambda cond: gr.Image.update(visible=cond),
+                     use_style_cond,
+                     style_cond_image,
+                     queue=False)
+                 task.change(
+                     controller.switch_task_hide_cond,
+                     inputs=task,
+                     outputs=[use_style_cond, style_cond_image, alpha_sample, use_actual_mask],
+                     queue=False)
+
+             with gr.Column():
+                 gr.Markdown("Chat with the latest image on the left at any time by entering your text below.")
+                 llava_chatbot = gr.Chatbot(elem_id="chatbot", label="LLaVA Chatbot", height=750)
+                 with gr.Column(scale=8):
+                     llava_textbox = gr.Textbox(show_label=False, placeholder="Enter text and press ENTER", container=False)
+                 with gr.Column(scale=1, min_width=60):
+                     llava_submit_btn = gr.Button(value="Submit", visible=False)
+
+                 with gr.Row(visible=False):
+                     upvote_btn = gr.Button(value="👍 Upvote", interactive=False, visible=False)
+                     downvote_btn = gr.Button(value="👎 Downvote", interactive=False, visible=False)
+                     flag_btn = gr.Button(value="⚠️ Flag", interactive=False, visible=False)
+                     regenerate_btn = gr.Button(value="🔄 Regenerate", interactive=False, visible=False)
+                     llava_clear_btn = gr.Button(value="🗑️ Clear history", interactive=False, visible=False)
+                 with gr.Accordion("Parameters", open=False, visible=False) as parameter_row:
+                     temperature = gr.Slider(minimum=0.0, maximum=1.0, value=0.2, step=0.1, interactive=True, label="Temperature", visible=True)
+                     top_p = gr.Slider(minimum=0.0, maximum=1.0, value=0.7, step=0.1, interactive=True, label="Top P", visible=True)
+                     max_output_tokens = gr.Slider(minimum=0, maximum=1024, value=512, step=64, interactive=True, label="Max output tokens", visible=True)
+
+         segment_btn.click(get_segments, inputs=[sketch_pad, segment_task, segment_text, mask_dilate_slider, compose_state], outputs=[segmented_img, masked_background_img, inpainted_background_img, compose_state], queue=True)
+         segmented_img.select(changed_objects_handler, [mask_dilate_slider, compose_state], [mask_dilate_slider, masked_background_img, compose_state])
+         mask_dilate_slider.release(get_base_layer_inpainted, inputs=[compose_state, mask_dilate_slider], outputs=[masked_background_img, inpainted_background_img, compose_state])
+         compose_btn.click(get_generated, [grounding_text_box, compose_fix_seed, compose_rand_seed, compose_state], [sketch_pad, compose_state], queue=True).then(
+             save_shared_state, [sketch_pad, shared_state], shared_state).then(
+             load_shared_state, [shared_state], working_image)
+         compose_clear_btn.click(load_shared_state, [shared_state], sketch_pad)
+
+         image_process_mode = gr.Radio(
+             ["Crop", "Resize", "Pad"],
+             value="Crop",
+             label="Preprocess for non-square image",
+             visible=False)
+         models = LLAVA.get_model_list(args)
+         model_selector = gr.Dropdown(
+             choices=models,
+             value=models[0] if len(models) > 0 else "",
+             interactive=True,
+             show_label=False,
+             container=False,
+             visible=False)
+
+         btn_list = [upvote_btn, downvote_btn, flag_btn, regenerate_btn, llava_clear_btn]
+         upvote_btn.click(LLAVA.upvote_last_response,
+             [llava_state, model_selector], [llava_textbox, upvote_btn, downvote_btn, flag_btn])
+         downvote_btn.click(LLAVA.downvote_last_response,
+             [llava_state, model_selector], [llava_textbox, upvote_btn, downvote_btn, flag_btn])
+         flag_btn.click(LLAVA.flag_last_response,
+             [llava_state, model_selector], [llava_textbox, upvote_btn, downvote_btn, flag_btn])
+         regenerate_btn.click(LLAVA.regenerate, [llava_state, image_process_mode],
+             [llava_state, llava_chatbot, llava_textbox, sketch_pad] + btn_list).then(
+             LLAVA.http_bot, [llava_state, model_selector, temperature, top_p, max_output_tokens],
+             [llava_state, llava_chatbot] + btn_list)
+         llava_clear_btn.click(LLAVA.clear_history, None, [llava_state, llava_chatbot, llava_textbox, llava_image] + btn_list)
+
+         llava_textbox.submit(LLAVA.add_text, [llava_state, llava_textbox, llava_image, image_process_mode], [llava_state, llava_chatbot, llava_textbox, llava_image] + btn_list
+             ).then(LLAVA.http_bot, [llava_state, model_selector, temperature, top_p, max_output_tokens],
+             [llava_state, llava_chatbot] + btn_list)
+         llava_submit_btn.click(LLAVA.add_text, [llava_state, llava_textbox, llava_image, image_process_mode], [llava_state, llava_chatbot, llava_textbox, llava_image] + btn_list
+             ).then(LLAVA.http_bot, [llava_state, model_selector, temperature, top_p, max_output_tokens],
+             [llava_state, llava_chatbot] + btn_list)
+
+         if args.model_list_mode == "once":
+             raise ValueError(f"Unsupported model list mode: {args.model_list_mode}")
+         elif args.model_list_mode == "reload":
+             demo.load(LLAVA.load_demo_refresh_model_list, inputs=None,
+                 outputs=[llava_state, model_selector]
+                 ).then(switch_to_compose, [], [task, out_imagebox, language_instruction, grounding_instruction, gligen_clear_btn, gligen_gen_btn, gligen_adv_options]  # the first tab shown doesn't need any of these
+                 ).then(GLIGEN.clear, inputs=[task, sketch_pad_trigger, batch_size, gligen_state],
+                 outputs=[sketch_pad, sketch_pad_trigger, out_imagebox, image_scale, gligen_state], queue=False)
+         else:
+             raise ValueError(f"Unknown model list mode: {args.model_list_mode}")
+
+         gligen.select(
+             switch_to_generate,
+             inputs=[],
+             outputs=[task, out_imagebox, language_instruction, grounding_instruction, gligen_clear_btn, gligen_gen_btn, gligen_adv_options])
+         gligen_inpaint.select(
+             switch_to_inpaint,
+             inputs=[],
+             outputs=[task, out_imagebox, language_instruction, grounding_instruction, gligen_clear_btn, gligen_gen_btn, gligen_adv_options],
+             queue=False)
+
+         compose_tab.select(
+             switch_to_compose, [], [task, out_imagebox, language_instruction, grounding_instruction, gligen_clear_btn, gligen_gen_btn, gligen_adv_options])
+
+     return demo
+
+ if __name__ == "__main__":
+
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--host", type=str, default="0.0.0.0")
+     parser.add_argument("--port", type=int)
+     parser.add_argument("--controller-url", type=str, default="http://localhost:10000")
+     parser.add_argument("--concurrency-count", type=int, default=8)
+     parser.add_argument("--model-list-mode", type=str, default="reload",
+                         choices=["once", "reload"])
+     parser.add_argument("--share", action="store_true")
+     parser.add_argument("--moderate", action="store_true")
+     parser.add_argument("--embed", action="store_true")
+     args = parser.parse_args()
+     LLAVA.set_args(args)
+
+     demo = build_demo()
+     demo.queue(concurrency_count=1, api_open=True)
+     demo.launch()
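
run_demo.sh is not shown in this diff, so the following is only a hedged sketch of a manual launch. The flag names and defaults come from the argparse block above; it assumes a LLaVA controller is already serving at the default `--controller-url` and that the LaMa server (lama_server.py, next file) is what answers the inpainting calls on port 9171:

```bash
# Hypothetical manual launch; normally run_demo.sh orchestrates this.
export LLAVA_INTERACTIVE_HOME=$PWD      # app.py reads this to extend sys.path
python lama_server.py &                 # LaMa inpainting REST server on :9171
python app.py --controller-url http://localhost:10000 --model-list-mode reload
```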
lama_predict.py ADDED
@@ -0,0 +1,103 @@
+ #!/usr/bin/env python3
+
+ # Example command:
+ # ./bin/predict.py \
+ #     model.path=<path to checkpoint, prepared by make_checkpoint.py> \
+ #     indir=<path to input data> \
+ #     outdir=<where to store predicts>
+
+ import logging
+ import os
+ import sys
+ import traceback
+
+ from saicinpainting.evaluation.utils import move_to_device
+ from saicinpainting.evaluation.refinement import refine_predict
+
+ os.environ['OMP_NUM_THREADS'] = '1'
+ os.environ['OPENBLAS_NUM_THREADS'] = '1'
+ os.environ['MKL_NUM_THREADS'] = '1'
+ os.environ['VECLIB_MAXIMUM_THREADS'] = '1'
+ os.environ['NUMEXPR_NUM_THREADS'] = '1'
+
+ import cv2
+ import hydra
+ import numpy as np
+ import torch
+ import tqdm
+ import yaml
+ from omegaconf import OmegaConf
+ from torch.utils.data._utils.collate import default_collate
+
+ from saicinpainting.training.data.datasets import make_default_val_dataset
+ from saicinpainting.training.trainers import load_checkpoint
+ from saicinpainting.utils import register_debug_signal_handlers
+
+ LOGGER = logging.getLogger(__name__)
+
+
+ # @hydra.main(config_path='../configs/prediction', config_name='web_server.yaml')
+ def main(predict_config):
+     # predict_config is an OmegaConf config built by the caller (see lama_server.py)
+     # instead of by the commented-out hydra decorator above.
+     try:
+         # register_debug_signal_handlers()  # kill -10 <pid> will dump a traceback into the log
+
+         device = torch.device(predict_config.device)
+
+         train_config_path = os.path.join(predict_config.model.path, 'config.yaml')
+         with open(train_config_path, 'r') as f:
+             train_config = OmegaConf.create(yaml.safe_load(f))
+
+         train_config.training_model.predict_only = True
+         train_config.visualizer.kind = 'noop'
+
+         out_ext = predict_config.get('out_ext', '.png')
+
+         checkpoint_path = os.path.join(predict_config.model.path,
+                                        'models',
+                                        predict_config.model.checkpoint)
+         model = load_checkpoint(train_config, checkpoint_path, strict=False, map_location='cpu')
+         model.freeze()
+         if not predict_config.get('refine', False):
+             model.to(device)
+
+         if not predict_config.indir.endswith('/'):
+             predict_config.indir += '/'
+
+         dataset = make_default_val_dataset(predict_config.indir, **predict_config.dataset)
+         for img_i in tqdm.trange(len(dataset)):
+             mask_fname = dataset.mask_filenames[img_i]
+             cur_out_fname = os.path.join(
+                 predict_config.outdir,
+                 os.path.splitext(mask_fname[len(predict_config.indir):])[0] + out_ext
+             )
+             os.makedirs(os.path.dirname(cur_out_fname), exist_ok=True)
+             batch = default_collate([dataset[img_i]])
+             if predict_config.get('refine', False):
+                 assert 'unpad_to_size' in batch, "Unpadded size is required for the refinement"
+                 # image unpadding is taken care of in the refiner, so that the output image
+                 # is the same size as the input image
+                 cur_res = refine_predict(batch, model, **predict_config.refiner)
+                 cur_res = cur_res[0].permute(1, 2, 0).detach().cpu().numpy()
+             else:
+                 with torch.no_grad():
+                     batch = move_to_device(batch, device)
+                     batch['mask'] = (batch['mask'] > 0) * 1
+                     batch = model(batch)
+                     cur_res = batch[predict_config.out_key][0].permute(1, 2, 0).detach().cpu().numpy()
+                     unpad_to_size = batch.get('unpad_to_size', None)
+                     if unpad_to_size is not None:
+                         orig_height, orig_width = unpad_to_size
+                         cur_res = cur_res[:orig_height, :orig_width]
+
+             cur_res = np.clip(cur_res * 255, 0, 255).astype('uint8')
+             cur_res = cv2.cvtColor(cur_res, cv2.COLOR_RGB2BGR)
+             cv2.imwrite(cur_out_fname, cur_res)
+
+     except KeyboardInterrupt:
+         LOGGER.warning('Interrupted by user')
+     except Exception as ex:
+         LOGGER.critical(f'Prediction failed due to {ex}:\n{traceback.format_exc()}')
+         sys.exit(1)
+
+
+ # if __name__ == '__main__':
+ #     main()
lama_server.py ADDED
@@ -0,0 +1,84 @@
+
+ from flask import Flask, send_file, request
+ import base64
+ from PIL import Image, ImageOps
+ import io
+
+ from lama_predict import main as lama_predict
+
+ import os
+ import yaml
+ from omegaconf import OmegaConf
+
+ cwd = os.getcwd()
+ print(cwd)
+
+ # Load the LaMa prediction config and point it at this server's working directories
+ config_path = os.path.join(cwd, "configs/prediction/default.yaml")
+ with open(config_path, 'r') as f:
+     config = OmegaConf.create(yaml.safe_load(f))
+
+ config.model.path = os.path.join(cwd, "big-lama")
+ config.indir = os.path.join(cwd, "web_server_input")
+ config.outdir = os.path.join(cwd, "web_server_output")
+ config.refine = False
+
+ app = Flask(__name__)
+
+ @app.route("/api/v2/image", methods=["GET", "POST"])
+ def echo_image():
+     # Get the image data from the request body
+     json_dict = request.get_json()
+     print(type(json_dict))
+     # The "image" key holds the base64-encoded image data
+     base64_image_data = json_dict["image"]
+     #print(base64_image_data[0:500])
+
+     image_bytes = base64.b64decode(base64_image_data)
+     image_stream = io.BytesIO(image_bytes)
+     image = Image.open(image_stream)
+     print(image.format_description)
+     os.makedirs("web_server_input", exist_ok=True)
+     image.save("web_server_input/server.png")
+
+     # The "mask" key holds the base64-encoded inpainting mask
+     base64_mask_data = json_dict["mask"]
+     image_bytes = base64.b64decode(base64_mask_data)
+     image_stream = io.BytesIO(image_bytes)
+     mask = Image.open(image_stream)
+     print(mask.format_description)
+     print(mask.format)
+     print(mask.size)
+     print(mask.mode)
+     if mask.mode != "L":
+         mask = mask.convert("L")
+     mask.save("web_server_input/server_mask.png")
+
+     # Apply the mask to the image (debugging aid only; LaMa reads the files saved above)
+     # Create a new transparent image with the same size and mode as the image
+     transparent = Image.new(image.mode, image.size, (0, 0, 0, 0))
+     # Composite the image and the transparent image using the mask
+     masked_image = Image.composite(image, transparent, mask)
+     masked_image.save("server_masked_image.png")
+
+     # Convert the masked image to bytes and create a new stream (currently unused)
+     masked_image_stream = io.BytesIO()
+     masked_image.save(masked_image_stream, format='PNG')
+     masked_image_stream.seek(0)
+
+     # Run LaMa over web_server_input; the result lands in web_server_output,
+     # named after the mask file
+     lama_predict(config)
+
+     with open("web_server_output/server_mask.png", "rb") as image_file:
+         image_bytes = image_file.read()
+     image_inpainted_stream = io.BytesIO(image_bytes)
+     image_inpainted_stream.seek(0)
+
+     return send_file(image_inpainted_stream, mimetype="image/png")
+
+ if __name__ == "__main__":
+     app.run(debug=True, port=9171)
+
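For reference, a minimal client sketch for this endpoint, assuming the server runs locally on port 9171 (the file names here are placeholders; in the mask, white marks the pixels to inpaint):

    import base64
    import requests

    # Base64-encode the image and its mask, as the endpoint expects
    with open("input.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    with open("mask.png", "rb") as f:
        mask_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = requests.post(
        "http://localhost:9171/api/v2/image",
        headers={"Content-Type": "application/json"},
        json={"image": image_b64, "mask": mask_b64},
    )
    response.raise_for_status()

    # The server replies with the inpainted PNG as the response body
    with open("inpainted.png", "wb") as f:
        f.write(response.content)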
llava_interactive.py ADDED
@@ -0,0 +1,705 @@
+ import argparse
+ import base64
+ import io
+ import os
+ import sys
+
+ import cv2
+ import gradio as gr
+ import numpy as np
+ import requests
+ from functools import partial
+ from PIL import Image, ImageOps
+
+ sys.path.append(os.path.join(os.environ['LLAVA_INTERACTIVE_HOME'], 'GLIGEN/demo'))
+ import GLIGEN.demo.app as GLIGEN
+ sys.path.append(os.path.join(os.environ['LLAVA_INTERACTIVE_HOME'], 'SEEM/demo_code'))
+ import SEEM.demo_code.app as SEEM  # must import the GLIGEN app before this; otherwise it hits a protobuf error
+ sys.path.append(os.path.join(os.environ['LLAVA_INTERACTIVE_HOME'], 'LLaVA'))
+ import LLaVA.llava.serve.gradio_web_server as LLAVA
+
+ class ImageMask(gr.components.Image):
+     """
+     Sets: source="upload", tool="sketch"
+     """
+
+     is_template = True
+
+     def __init__(self, **kwargs):
+         super().__init__(source="upload", tool="sketch", interactive=True, **kwargs)
+
+     def preprocess(self, x):
+         if isinstance(x, str):
+             x = {'image': x, 'mask': x}
+         elif isinstance(x, dict):
+             if x['mask'] is None and x['image'] is None:
+                 pass
+             elif x['image'] is None:
+                 x['image'] = str(x['mask'])
+             elif x['mask'] is None:
+                 x['mask'] = str(x['image'])  # not sure why image/mask is None sometimes; this prevents preprocess() from crashing
+         elif x is not None:
+             assert False, 'Unexpected type {0} in ImageMask preprocess()'.format(type(x))
+
+         return super().preprocess(x)
+
+ css = """
+ #compose_btn {
+ --tw-border-opacity: 1;
+ border-color: rgb(255 216 180 / var(--tw-border-opacity));
+ --tw-gradient-from: rgb(255 216 180 / .7);
+ --tw-gradient-to: rgb(255 216 180 / 0);
+ --tw-gradient-stops: var(--tw-gradient-from), var(--tw-gradient-to);
+ --tw-gradient-to: rgb(255 176 102 / .8);
+ --tw-text-opacity: 1;
+ color: rgb(238 116 0 / var(--tw-text-opacity));
+ }
+ """
+
+ def get_bounding_box(img):
+     # Get the indices of all non-zero pixels
+     if not np.any(img):  # protect against an empty img
+         return None
+     non_zero_indices = np.nonzero(img)
+
+     # Get the minimum and maximum indices for each axis
+     min_x = np.min(non_zero_indices[1])
+     max_x = np.max(non_zero_indices[1])
+     min_y = np.min(non_zero_indices[0])
+     max_y = np.max(non_zero_indices[0])
+
+     # Return the bounding box as a tuple of (min_x, min_y, max_x, max_y)
+     return (min_x, min_y, max_x, max_y)
+
+ def composite_all_layers(base, objects):  # debugging use only
+     img = base.copy()
+     for obj in objects:
+         for i in range(obj['img'].shape[0]):
+             for j in range(obj['img'].shape[1]):
+                 if obj['img'][i, j, 3] != 0:
+                     img[i, j] = obj['img'][i, j]
+     return img
+
+ def changed_objects_handler(mask_dilate_slider, state, evt: gr.SelectData):
+     state['move_no'] += 1
+
+     pos_x, pos_y = evt.index  # an object moved out of the scene is signaled by (10000, 10000)
+     obj_id = 255 - evt.value
+     print(f"obj {obj_id} moved by {pos_x}, {pos_y}")
+
+     img = state['base_layer']
+     for obj in state['changed_objects']:
+         if obj['id'] == obj_id:
+             img = obj['img']
+             state['changed_objects'].remove(obj)
+             break
+
+     new_img = np.zeros_like(img)
+     bbox = None
+     for i in range(img.shape[0]):
+         for j in range(img.shape[1]):
+             if img[i, j, 3] == obj_id:
+                 new_i = i + pos_y
+                 new_j = j + pos_x
+                 if new_i >= 0 and new_i < img.shape[0] and new_j >= 0 and new_j < img.shape[1]:
+                     new_img[new_i, new_j] = img[i, j]
+                 img[i, j] = 0
+
+     bbox = get_bounding_box(new_img)  # returns None if the object moved out of the scene
+     print("bbox: ", bbox)
+     state['changed_objects'].append({'id': obj_id, 'img': new_img, 'text': state['segment_info'][obj_id], 'box': bbox})
+
+     # Enable for debugging only: check that the composited image is correct.
+     #composed_img_updated = composite_all_layers(state['base_layer'], state['changed_objects'])
+     #filename = str(f"composited_image_{state['move_no']}") + ".png"
+     #cv2.imwrite(filename, composed_img_updated[:, :, 0:3])
+
+
+     return mask_dilate_slider, state['base_layer_masked'], state
+
+ def get_base_layer_mask(state):
+
+     changed_obj_id = []
+     for obj in state['changed_objects']:
+         changed_obj_id.append(obj['id'])
+
+     # union of the masks of all changed objects
+     img = state['orignal_segmented']
+     mask = np.zeros(img.shape[:2], dtype=np.uint8)
+     for i in range(img.shape[0]):
+         for j in range(img.shape[1]):
+             if img[i, j, 3] in changed_obj_id:
+                 mask[i, j] = 255
+     state['base_layer_mask'] = mask
+
+     mask_image = Image.fromarray(mask)
+     if mask_image.mode != "L":
+         mask_image = mask_image.convert("L")
+     mask_image = ImageOps.invert(mask_image)
+     #mask_image.save("mask_image.png")
+
+     img = state['orignal_segmented']
+     orig_image = Image.fromarray(img[:, :, :3])
+     orig_image.save("orig_image.png")
+     transparent = Image.new(orig_image.mode, orig_image.size, (0, 0, 0, 0))
+     masked_image = Image.composite(orig_image, transparent, mask_image)
+     #masked_image.save("get_masked_background_image.png")
+
+     return masked_image, state
+
+ def get_inpainted_background(state, mask_dilate_slider):
+
+     # URL of the LaMa inpainting server's REST endpoint
+     url = "http://localhost:9171/api/v2/image"
+
+     img = state['orignal_segmented']
+     if not isinstance(img, Image.Image):
+         img = Image.fromarray(img)
+     # Create a BytesIO object and save the image there
+     buffer = io.BytesIO()
+     img.save(buffer, format="PNG")
+     # Get the bytes value from the buffer
+     img_bytes = buffer.getvalue()
+
+     encoded_string = base64.b64encode(img_bytes).decode("utf-8")
+
+     if mask_dilate_slider != 0:
+         mask = state['base_layer_mask_enlarged']
+     else:
+         mask = state['base_layer_mask']
+     if not isinstance(mask, Image.Image):
+         mask = Image.fromarray(mask)
+
+     # the mask has the background as 1; LaMa needs the object to be 1
+     if mask.mode != "L":
+         mask = mask.convert("L")
+     mask = ImageOps.invert(mask)
+
+     # Create a BytesIO object and save the mask there
+     buffer = io.BytesIO()
+     mask.save(buffer, format="PNG")
+     # Get the bytes value from the buffer
+     mask_bytes = buffer.getvalue()
+
+     encoded_string_mask = base64.b64encode(mask_bytes).decode("utf-8")
+
+
+     # Create a POST request to the endpoint
+     headers = {"Content-Type": "application/json"}
+     data = {"image": encoded_string, "mask": encoded_string_mask}
+     response = requests.post(url, headers=headers, json=data)
+
+     # Check the status code of the response
+     if response.status_code == 200:
+         # The request was successful
+         print("Image received successfully")
+         image_data = response.content
+         # Create an io.BytesIO object from the image data
+         dataBytesIO = io.BytesIO(image_data)
+         # Open the image using Image.open()
+         image = Image.open(dataBytesIO)
+         #image.save("lama_returned_image.png")
+
+     else:
+         # The request failed; surface the error instead of returning an unbound variable
+         print("Error: HTTP status code {}".format(response.status_code))
+         print(response.text)
+         raise gr.Error("Inpainting server error: HTTP status code {}".format(response.status_code))
+
+     return image
+
+ def get_enlarged_masked_background(state, mask_dilate_slider):
+
+     mask = state['base_layer_mask']
+
+     kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (mask_dilate_slider, mask_dilate_slider))
+     mask_dilated = cv2.dilate(mask, kernel)
+
+     # mask the original
+     mask_image = Image.fromarray(mask_dilated)
+     if mask_image.mode != "L":
+         mask_image = mask_image.convert("L")
+     mask_image = ImageOps.invert(mask_image)
+     state['base_layer_mask_enlarged'] = mask_image
+     #mask_image.save("enlarged_mask_image.png")
+
+     img = state['orignal_segmented']
+     orig_image = Image.fromarray(img[:, :, :3])
+     transparent = Image.new(orig_image.mode, orig_image.size, (0, 0, 0, 0))
+     masked_image = Image.composite(orig_image, transparent, mask_image)
+     #masked_image.save("enlarged_masked_background_image.png")
+
+     return masked_image, state
+
+ def get_base_layer_inpainted(state, mask_dilate_slider):
+     masked_img, state = get_enlarged_masked_background(state, mask_dilate_slider)
+     inpainted_img = get_inpainted_background(state, mask_dilate_slider)
+     state['base_layer_inpainted'] = np.array(inpainted_img)
+     return masked_img, inpainted_img, state
+
+ def log_image_and_mask(img, mask):  # for debugging use only
+     counter = 0
+     for filename in os.listdir('.'):
+         if filename.startswith('img_') and filename.endswith('.png'):
+             try:
+                 num = int(filename[4:-4])
+                 if num > counter:
+                     counter = num
+             except ValueError:
+                 pass
+     counter += 1
+     cv2.imwrite(f"img_{counter}.png", img)
+     cv2.imwrite(f"img_{counter}_mask.png", mask.astype(np.uint8) * 255)
+
+ def get_segments(img, task, reftxt, mask_dilate_slider, state):
+     assert isinstance(state, dict)
+     state['orignal_segmented'] = None
+     state['base_layer'] = None
+     state['base_layer_masked'] = None
+     state['base_layer_mask'] = None
+     state['base_layer_mask_enlarged'] = None
+     state['base_layer_inpainted'] = None
+     state['segment_info'] = None
+     state['seg_boxes'] = {}
+     state['changed_objects'] = []
+     state['move_no'] = 0
+
+     print("Calling SEEM_app.inference")
+
+     # keep PIL inputs as-is; convert numpy arrays (avoids unbound locals when the input is already PIL)
+     pil_image = Image.fromarray(img['image']) if isinstance(img['image'], np.ndarray) else img['image']
+     pil_mask = Image.fromarray(img['mask']) if isinstance(img['mask'], np.ndarray) else img['mask']
+     img = {'image': pil_image, 'mask': pil_mask}
+     img_ret, seg_info = SEEM.inference(img, task, reftxt=reftxt)
+     # SEEM doesn't always respect the input img dimensions
+     tgt_size = (img['image'].width, img['image'].height)
+     img_ret = img_ret.resize(tgt_size, resample=Image.Resampling.NEAREST)
+     state['orignal_segmented'] = np.array(img_ret).copy()
+     state['base_layer'] = np.array(img_ret)
+     state['segment_info'] = seg_info
+     img_ret_array = np.array(img_ret)
+     img_ret_array[:, :, 3] = 255 - img_ret_array[:, :, 3]
+     # NOTE: if written out as a png, the pixel values get messed up. Same reason the client-side colors look weird.
+     #cv2.imwrite(f"get_segments_img_ret.bmp", img_ret_array)
+
+
+     for obj_id, label in seg_info.items():
+         obj_img = (img_ret_array[:, :, 3] == 255 - obj_id)
+         #cv2.imwrite(f"img_{obj_id}.png", obj_img.astype(np.uint8) * 255)
+         #log_image_and_mask(np.array(img['image']), obj_img)
+         bbox = get_bounding_box(obj_img)
+         print(f"obj_id={obj_id}, label={label}, bbox={bbox}")
+         state['seg_boxes'][obj_id] = bbox
+
+     # add a special event: the object stays at its original spot
+     data = {}
+     data["index"] = (0, 0)
+     data["value"] = 254  # ==> 1, the only object allowed for now
+     data["selected"] = True
+     evt = gr.SelectData(None, data)
+     mask_dilate_slider, _, state = changed_objects_handler(mask_dilate_slider, state, evt)
+
+     state['base_layer_masked'], state = get_base_layer_mask(state)
+     if mask_dilate_slider != 0:
+         enlarged_masked_background, state = get_enlarged_masked_background(state, mask_dilate_slider)
+     else:
+         enlarged_masked_background = state['base_layer_masked']
+     state['base_layer_inpainted'] = np.array(get_inpainted_background(state, mask_dilate_slider))
+
+     return Image.fromarray(img_ret_array), enlarged_masked_background, state['base_layer_inpainted'], state
+
+ def get_generated(grounding_text, fix_seed, rand_seed, state):
+
+     if 'base_layer_inpainted' not in state:
+         raise gr.Error('The segmentation step must be completed before generating a new image')
+
+     inpainted_background_img = state['base_layer_inpainted']
+     assert inpainted_background_img is not None, 'base layer should be inpainted after segment'
+
+     state['boxes'] = []
+     for items in state['changed_objects']:
+         if items['box'] is not None:
+             state['boxes'].append(items['box'])
+
+     if len(state['boxes']) == 0:
+         if len(grounding_text) != 0:
+             grounding_text = []
+             print("No grounding box found. Grounding text will be ignored.")
+         return inpainted_background_img.copy(), state
+
+     print('Calling GLIGEN_app.generate')
+     print('grounding_text: ', grounding_text)
+     print(state['boxes'], len(state['boxes']))
+     assert len(state['boxes']) == 1, 'Only handle one segmented object at a time'
+     if len(grounding_text) == 0:  # most likely the user forgot to drag the object and didn't provide grounding text
+         raise gr.Error('Please provide grounding text to match the identified object')
+     out_gen_1, _, _, _, state = GLIGEN.generate(task='Grounded Inpainting', language_instruction='',
+         grounding_texts=grounding_text, sketch_pad=inpainted_background_img,
+         alpha_sample=0.3, guidance_scale=7.5, batch_size=1,
+         fix_seed=fix_seed, rand_seed=rand_seed, use_actual_mask=False, append_grounding=True,
+         style_cond_image=None, inpainting_image=inpainted_background_img, inpainting_mask=None, state=state)
+
+     return out_gen_1['value'], state
+
+ def get_generated_full(task, language_instruction, grounding_instruction, sketch_pad,
+                        alpha_sample, guidance_scale, batch_size,
+                        fix_seed, rand_seed,
+                        use_actual_mask,
+                        append_grounding, style_cond_image,
+                        state):
+
+     out_gen_1, _, _, _, state = GLIGEN.generate(
+         task, language_instruction, grounding_instruction, sketch_pad,
+         alpha_sample, guidance_scale, batch_size,
+         fix_seed, rand_seed,
+         use_actual_mask,
+         append_grounding, style_cond_image,
+         state)
+     return out_gen_1['value'], state
+
+ def gligen_change_task(state):
+     if state['working_image'] is not None:
+         task = "Grounded Inpainting"
+     else:
+         task = "Grounded Generation"
+     return task
+
+ def clear_sketch_pad_mask(sketch_pad_image):
+     sketch_pad = ImageMask.update(value=sketch_pad_image, visible=True)
+     return sketch_pad
+
+ def save_shared_state(img, state):
+     if isinstance(img, dict) and 'image' in img:
+         state['working_image'] = img['image']
+     else:
+         state['working_image'] = img
+     return state
+
+ def load_shared_state(state, task=None):
+     if task == "Grounded Generation":
+         return None
+     else:
+         return state['working_image']
+
+ def update_shared_state(state, task):
+     if task == "Grounded Generation":
+         state['working_image'] = None
+     return state
+
+ def update_sketch_pad_trigger(sketch_pad_trigger, task):
+     if task == "Grounded Generation":
+         sketch_pad_trigger = sketch_pad_trigger + 1
+     return sketch_pad_trigger
+
+ def clear_grounding_info(state):
+     state['boxes'] = []
+     state['masks'] = []
+     return state, ''
+
+ def switch_to_generate():
+     task = "Grounded Generation"
+     return task, gr.Image.update(visible=True), gr.Textbox.update(visible=True), gr.Textbox.update(visible=True), gr.Button.update(visible=True), gr.Button.update(visible=True), gr.Accordion.update(visible=True)
+
+ def switch_to_inpaint():
+     task = "Grounded Inpainting"
+     return task, gr.Image.update(visible=True), gr.Textbox.update(visible=False), gr.Textbox.update(visible=True), gr.Button.update(visible=True), gr.Button.update(visible=True), gr.Accordion.update(visible=True)
+
+ def switch_to_compose():
+     task = "Compose"
+     return task, gr.Image.update(visible=False), gr.Textbox.update(visible=False), gr.Textbox.update(visible=False), gr.Button.update(visible=False), gr.Button.update(visible=False), gr.Accordion.update(visible=False)
+
+ def copy_to_llava_input(img):
+     print('WORKING IMAGE CHANGED!!!!')
+     if not isinstance(img, Image.Image):
+         img = Image.fromarray(img)
+     return img
+
+ def build_demo():
+     demo = gr.Blocks(title="🌋 LLaVA-Interactive", css=css + GLIGEN.css)
+     with demo:
+         compose_state = gr.State({'boxes': [], 'move_no': 0, 'base_layer': None, 'segment_info': None, 'seg_boxes': {}, 'changed_objects': []})
+         llava_state = gr.State()
+         shared_state = gr.State({'working_image': None})
+         gligen_state = gr.State({'draw_box': True})
+
+         gr.Markdown('<h1 style="text-align: center;"></h1>')
+         gr.Markdown('<h1 style="text-align: center;">LLaVA Interactive</h1>')
+         gr.Markdown('<h1 style="text-align: center;"></h1>')
+
+         gr.Markdown('**Experience interactive multimodal chatting and image manipulation. Select a tab for your task and follow the instructions. Switch tasks anytime and ask questions in the chat window.**')
+
+         with gr.Row(visible=False):
+             working_image = gr.Image(label="Working Image", type="numpy", elem_id="working_image", visible=False, interactive=False)  # hidden image holding the current working image
+             # for GLIGEN
+             sketch_pad_trigger = gr.Number(value=0, visible=False)
+             sketch_pad_resize_trigger = gr.Number(value=0, visible=False)
+             init_white_trigger = gr.Number(value=0, visible=False)
+             image_scale = gr.Number(value=0, elem_id="image_scale", visible=False)
+             task = gr.Radio(
+                 choices=["Grounded Generation", 'Grounded Inpainting', 'Compose'],
+                 type="value",
+                 value="Grounded Inpainting",
+                 label="Task",
+                 visible=False
+             )
+
+         with gr.Row(equal_height=False):
+             with gr.Column():
+
+                 with gr.Row():
+                     sketch_pad = ImageMask(label="Sketch Pad", type="numpy", shape=(512, 512), width=384, elem_id="img2img_image", brush_radius=20.0, visible=True)
+
+                 compose_tab = gr.Tab("Remove or Change Objects")
+                 with compose_tab:
+                     gr.Markdown("Segment an object by drawing a stroke or giving a referring text, then press the segment button. Drag the highlighted object to move it. To remove it, drag it out of the frame. To replace it with a new object, remove it first, then give an instruction and press the generate button until you like the image.")
+                     with gr.Row().style(equal_height=False):
+                         with gr.Column():
+                             with gr.Group():
+                                 with gr.Column():
+                                     with gr.Row():
+                                         segment_task = gr.Radio(["Stroke", "Text"], value="Stroke", label='Choose segmentation method')
+                                         segment_text = gr.Textbox(label="Enter referring text")
+                                     segment_btn = gr.Button("Segment", elem_id="segment-btn")
+
+                             with gr.Group():
+                                 segmented_img = gr.Image(label="Move or delete object", tool="compose", height=256)
+
+                             with gr.Group():
+                                 with gr.Column():
+                                     grounding_text_box = gr.Textbox(label="Enter grounding text for generating a new image")
+                                     with gr.Row():
+                                         compose_clear_btn = gr.Button("Clear", elem_id="compose_clear_btn")
+                                         compose_btn = gr.Button("Generate", elem_id="compose_btn")
+
+                             with gr.Accordion("Advanced Options", open=False):
+                                 with gr.Row():
+                                     masked_background_img = gr.Image(label="Background", type='pil', interactive=False, height=256)
+                                     inpainted_background_img = gr.Image(label="Inpainted Background", type='pil', interactive=False, height=256)
+                                 mask_dilate_slider = gr.Slider(minimum=0.0, maximum=100, value=50, step=2, interactive=True, label="Mask dilation", visible=True, scale=20)
+                             with gr.Row(visible=False):
+                                 compose_fix_seed = gr.Checkbox(value=False, label="Fixed seed", visible=False)
+                                 compose_rand_seed = gr.Slider(minimum=0, maximum=1000, step=1, value=0, label="Seed", visible=False)
+
+                 gligen_inpaint = gr.Tab("Inpaint New Objects")
+                 with gligen_inpaint:
+                     gr.Markdown("Add a new object to the image by drawing its bounding box and giving an instruction. Press the “generate” button repeatedly until you like the image. Press “clear” to accept the image and start over with another object.")
+
+                 gligen = gr.Tab("Generate New Image")
+                 with gligen:
+                     gr.Markdown("Generate a new image by giving a language instruction below. Draw a bounding box and give an instruction for any specific objects that need to be grounded in certain places. Hit the “generate” button repeatedly until you get the image you want.")
+
+                 with gr.Group(visible=False):
+                     language_instruction = gr.Textbox(label="Language instruction", elem_id='language_instruction', visible=False)
+                     grounding_instruction = gr.Textbox(label="Grounding instruction (Separated by semicolon)", elem_id='grounding_instruction', visible=False)
+                     with gr.Row():
+                         gligen_clear_btn = gr.Button(value='Clear', visible=False)
+                         gligen_gen_btn = gr.Button(value='Generate', elem_id="generate-btn", visible=False)
+
+                 with gr.Group():
+                     out_imagebox = gr.Image(type="pil", label="Parsed Sketch Pad", height=256, visible=False)
+
+                 gligen_adv_options = gr.Accordion("Advanced Options", open=False, visible=False)
+                 with gligen_adv_options:
+                     with gr.Column():
+                         alpha_sample = gr.Slider(minimum=0, maximum=1.0, step=0.1, value=0.3, label="Scheduled Sampling (τ)")
+                         guidance_scale = gr.Slider(minimum=0, maximum=50, step=0.5, value=7.5, label="Guidance Scale")
+
+                 with gr.Row(visible=False):
+                     batch_size = gr.Slider(minimum=1, maximum=4, step=1, value=1, label="Number of Samples", visible=False)
+                     append_grounding = gr.Checkbox(value=True, label="Append grounding instructions to the caption", visible=False)
+                     use_actual_mask = gr.Checkbox(value=False, label="Use actual mask for inpainting", visible=False)
+                     fix_seed = gr.Checkbox(value=False, label="Fixed seed", visible=False)
+                     rand_seed = gr.Slider(minimum=0, maximum=1000, step=1, value=0, label="Seed", visible=False)
+                     use_style_cond = gr.Checkbox(value=False, label="Enable Style Condition", visible=False)
+                     style_cond_image = gr.Image(type="pil", label="Style Condition", visible=False, interactive=False)
+
+                 controller = GLIGEN.Controller()
+                 sketch_pad.edit(
+                     GLIGEN.draw,
+                     inputs=[task, sketch_pad, grounding_instruction, sketch_pad_resize_trigger, gligen_state],
+                     outputs=[out_imagebox, sketch_pad_resize_trigger, image_scale, gligen_state],
+                     queue=False,
+                 )
+                 llava_image = gr.Image(label='sketch_pad_image', type='pil', visible=False, interactive=False)
+                 working_image.change(copy_to_llava_input, [working_image], [llava_image])
+                 sketch_pad.upload(
+                     save_shared_state,
+                     inputs=[sketch_pad, shared_state],
+                     outputs=shared_state).then(
+                     load_shared_state, [shared_state], working_image)
+                 grounding_instruction.change(
+                     GLIGEN.draw,
+                     inputs=[task, sketch_pad, grounding_instruction, sketch_pad_resize_trigger, gligen_state],
+                     outputs=[out_imagebox, sketch_pad_resize_trigger, image_scale, gligen_state],
+                     queue=False,
+                 )
+                 gligen_clear_btn.click(
+                     GLIGEN.clear,
+                     inputs=[task, sketch_pad_trigger, batch_size, gligen_state],
+                     outputs=[sketch_pad, sketch_pad_trigger, out_imagebox, image_scale, gligen_state],
+                     queue=False).then(
+                     clear_grounding_info, gligen_state, [gligen_state, grounding_instruction]).then(
+                     load_shared_state, [shared_state], sketch_pad).then(
+                     update_sketch_pad_trigger, [sketch_pad_trigger, task], sketch_pad_trigger)
+                 task.change(
+                     partial(GLIGEN.clear, switch_task=True),
+                     inputs=[task, sketch_pad_trigger, batch_size, gligen_state],
+                     outputs=[sketch_pad, sketch_pad_trigger, out_imagebox, image_scale, gligen_state],
+                     queue=False).then(
+                     load_shared_state, [shared_state, task], sketch_pad).then(
+                     update_sketch_pad_trigger, [sketch_pad_trigger, task], sketch_pad_trigger).then(
+                     clear_grounding_info, gligen_state, [gligen_state, grounding_instruction])
+                 sketch_pad_trigger.change(
+                     controller.init_white,
+                     inputs=[init_white_trigger],
+                     outputs=[sketch_pad, image_scale, init_white_trigger],
+                     queue=False)
+                 sketch_pad_resize_trigger.change(
+                     controller.resize_masked,
+                     inputs=[gligen_state],
+                     outputs=[sketch_pad, gligen_state],
+                     queue=False)
+
+                 gligen_gen_btn.click(
+                     get_generated_full,
+                     inputs=[
+                         task, language_instruction, grounding_instruction, sketch_pad,
+                         alpha_sample, guidance_scale, batch_size,
+                         fix_seed, rand_seed,
+                         use_actual_mask,
+                         append_grounding, style_cond_image,
+                         gligen_state],
+                     outputs=[sketch_pad, gligen_state],
+                     queue=True).then(
+                     save_shared_state, [sketch_pad, shared_state], shared_state).then(
+                     load_shared_state, [shared_state], working_image)
+
+                 sketch_pad_resize_trigger.change(
+                     None,
+                     None,
+                     sketch_pad_resize_trigger,
+                     _js=GLIGEN.rescale_js,
+                     queue=False)
+                 init_white_trigger.change(
+                     None,
+                     None,
+                     init_white_trigger,
+                     _js=GLIGEN.rescale_js,
+                     queue=False)
+                 use_style_cond.change(
+                     lambda cond: gr.Image.update(visible=cond),
+                     use_style_cond,
+                     style_cond_image,
+                     queue=False)
+                 task.change(
+                     controller.switch_task_hide_cond,
+                     inputs=task,
+                     outputs=[use_style_cond, style_cond_image, alpha_sample, use_actual_mask],
+                     queue=False)
+
+
+             with gr.Column():
+                 gr.Markdown("Chat with the latest image on the left at any time by entering your text below.")
+                 llava_chatbot = gr.Chatbot(elem_id="chatbot", label="LLaVA Chatbot", height=750)
+                 with gr.Column(scale=8):
+                     llava_textbox = gr.Textbox(show_label=False, placeholder="Enter text and press ENTER", container=False)
+                 with gr.Column(scale=1, min_width=60):
+                     llava_submit_btn = gr.Button(value="Submit", visible=False)
+
+                 with gr.Row(visible=False):
+                     upvote_btn = gr.Button(value="👍 Upvote", interactive=False, visible=False)
+                     downvote_btn = gr.Button(value="👎 Downvote", interactive=False, visible=False)
+                     flag_btn = gr.Button(value="⚠️ Flag", interactive=False, visible=False)
+                     regenerate_btn = gr.Button(value="🔄 Regenerate", interactive=False, visible=False)
+                     llava_clear_btn = gr.Button(value="🗑️ Clear history", interactive=False, visible=False)
+                 with gr.Accordion("Parameters", open=False, visible=False) as parameter_row:
+                     temperature = gr.Slider(minimum=0.0, maximum=1.0, value=0.2, step=0.1, interactive=True, label="Temperature", visible=True)
+                     top_p = gr.Slider(minimum=0.0, maximum=1.0, value=0.7, step=0.1, interactive=True, label="Top P", visible=True)
+                     max_output_tokens = gr.Slider(minimum=0, maximum=1024, value=512, step=64, interactive=True, label="Max output tokens", visible=True)
+
+         segment_btn.click(get_segments, inputs=[sketch_pad, segment_task, segment_text, mask_dilate_slider, compose_state], outputs=[segmented_img, masked_background_img, inpainted_background_img, compose_state], queue=True)
+         segmented_img.select(changed_objects_handler, [mask_dilate_slider, compose_state], [mask_dilate_slider, masked_background_img, compose_state])
+         mask_dilate_slider.release(get_base_layer_inpainted, inputs=[compose_state, mask_dilate_slider], outputs=[masked_background_img, inpainted_background_img, compose_state])
+         compose_btn.click(get_generated, [grounding_text_box, compose_fix_seed, compose_rand_seed, compose_state], [sketch_pad, compose_state], queue=True).then(
+             save_shared_state, [sketch_pad, shared_state], shared_state).then(
+             load_shared_state, [shared_state], working_image)
+         compose_clear_btn.click(load_shared_state, [shared_state], sketch_pad)
+
+         image_process_mode = gr.Radio(
+             ["Crop", "Resize", "Pad"],
+             value="Crop",
+             label="Preprocess for non-square image",
+             visible=False)
+         models = LLAVA.get_model_list(args)
+         model_selector = gr.Dropdown(
+             choices=models,
+             value=models[0] if len(models) > 0 else "",
+             interactive=True,
+             show_label=False,
+             container=False,
+             visible=False)
+
+         btn_list = [upvote_btn, downvote_btn, flag_btn, regenerate_btn, llava_clear_btn]
+         upvote_btn.click(LLAVA.upvote_last_response,
+             [llava_state, model_selector], [llava_textbox, upvote_btn, downvote_btn, flag_btn])
+         downvote_btn.click(LLAVA.downvote_last_response,
+             [llava_state, model_selector], [llava_textbox, upvote_btn, downvote_btn, flag_btn])
+         flag_btn.click(LLAVA.flag_last_response,
+             [llava_state, model_selector], [llava_textbox, upvote_btn, downvote_btn, flag_btn])
+         regenerate_btn.click(LLAVA.regenerate, [llava_state, image_process_mode],
+             [llava_state, llava_chatbot, llava_textbox, sketch_pad] + btn_list).then(
+             LLAVA.http_bot, [llava_state, model_selector, temperature, top_p, max_output_tokens],
+             [llava_state, llava_chatbot] + btn_list)
+         llava_clear_btn.click(LLAVA.clear_history, None, [llava_state, llava_chatbot, llava_textbox, llava_image] + btn_list)
+
+         llava_textbox.submit(LLAVA.add_text, [llava_state, llava_textbox, llava_image, image_process_mode], [llava_state, llava_chatbot, llava_textbox, llava_image] + btn_list
+             ).then(LLAVA.http_bot, [llava_state, model_selector, temperature, top_p, max_output_tokens],
+             [llava_state, llava_chatbot] + btn_list)
+         llava_submit_btn.click(LLAVA.add_text, [llava_state, llava_textbox, llava_image, image_process_mode], [llava_state, llava_chatbot, llava_textbox, llava_image] + btn_list
+             ).then(LLAVA.http_bot, [llava_state, model_selector, temperature, top_p, max_output_tokens],
+             [llava_state, llava_chatbot] + btn_list)
+
+         if args.model_list_mode == "once":
+             raise ValueError(f"Unsupported model list mode: {args.model_list_mode}")
+         elif args.model_list_mode == "reload":
+             print('disable for debugging')
+             demo.load(LLAVA.load_demo_refresh_model_list, inputs=None,
+                 outputs=[llava_state, model_selector]
+                 ).then(switch_to_compose, [], [task, out_imagebox, language_instruction, grounding_instruction, gligen_clear_btn, gligen_gen_btn, gligen_adv_options]  # the first tab shown doesn't need any
+                 ).then(GLIGEN.clear, inputs=[task, sketch_pad_trigger, batch_size, gligen_state],
+                 outputs=[sketch_pad, sketch_pad_trigger, out_imagebox, image_scale, gligen_state], queue=False)
+
+         else:
+             raise ValueError(f"Unknown model list mode: {args.model_list_mode}")
+
+         gligen.select(
+             switch_to_generate,
+             inputs=[],
+             outputs=[task, out_imagebox, language_instruction, grounding_instruction, gligen_clear_btn, gligen_gen_btn, gligen_adv_options])
+         gligen_inpaint.select(
+             switch_to_inpaint,
+             inputs=[],
+             outputs=[task, out_imagebox, language_instruction, grounding_instruction, gligen_clear_btn, gligen_gen_btn, gligen_adv_options],
+             queue=False)
+
+         compose_tab.select(
+             switch_to_compose, [], [task, out_imagebox, language_instruction, grounding_instruction, gligen_clear_btn, gligen_gen_btn, gligen_adv_options])
+
+     return demo
+
+ if __name__ == "__main__":
+
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--host", type=str, default="0.0.0.0")
+     parser.add_argument("--port", type=int)
+     parser.add_argument("--controller-url", type=str, default="http://localhost:10000")
+     parser.add_argument("--concurrency-count", type=int, default=8)
+     parser.add_argument("--model-list-mode", type=str, default="reload",
+                         choices=["once", "reload"])
+     parser.add_argument("--share", action="store_true")
+     parser.add_argument("--moderate", action="store_true")
+     parser.add_argument("--embed", action="store_true")
+     args = parser.parse_args()
+     LLAVA.set_args(args)
+
+     demo = build_demo()
+     demo.queue(concurrency_count=1, api_open=False)
+     demo.launch()
requirements.txt ADDED
@@ -0,0 +1,51 @@
+ albumentations==1.3.0
+ accelerate==0.20.3
+ altair==5.0.1
+ cityscapesscripts==2.2.2
+ diffusers==0.11.1
+ diffdist==0.1
+ ftfy==6.1.1
+ fvcore==0.1.5.post20221221
+ imageio==2.9.0
+ imageio-ffmpeg==0.4.2
+ invisible-watermark==0.1.5
+ json_tricks==3.17.1
+ kornia==0.6.9
+ mup==1.0.0
+ nltk==3.8.1
+ numpy==1.23.1
+ numba==0.57.1
+ openai==0.27.8
+ omegaconf==2.1.1
+ opencv-python==4.7.0.72
+ opencv-python-headless==4.7.0.72
+ pandas==2.0.3
+ pip==22.2.2
+ pillow==9.4.0
+ pyarrow==12.0.1
+ pycocotools==2.0.5
+ pydantic==1.10.9
+ pyyaml==6.0
+ protobuf==3.20.3
+ pytorch-lightning==1.4.2
+ regex==2023.6.3
+ scikit-image==0.20.0
+ scikit-learn==1.2.2
+ sentencepiece==0.1.99
+ shapely==2.0.1
+ scann==1.2.7
+ streamlit==1.12.1
+ timm==0.4.12
+ --find-links https://download.pytorch.org/whl/cu117/torch_stable.html
+ torch==2.0.1+cu117
+ --find-links https://download.pytorch.org/whl/cu117/torch_stable.html
+ torchvision==0.15.2+cu117
+ test-tube==0.7.5
+ transformers==4.28.0
+ vision-datasets==0.2.2
+ yacs==0.1.8
+ clip @ git+https://github.com/openai/CLIP.git@a9b1bf5920416aaeaec965c25dd9e8f98c864f16
+ openai-whisper @ git+https://github.com/openai/whisper.git@248b6cb124225dd263bb9bd32d060b6517e067f8
+ einops @ git+https://github.com/arogozhnikov/einops.git
+ detectron2 @ git+https://github.com/maureenzou/detectron2-xyz.git@42121d75e10d9f858f3a91b6a39f5722c02868f0
+ gradio @ git+https://github.com/wchen-github/gradio.git
run_demo.sh ADDED
@@ -0,0 +1,38 @@
+ #!/bin/bash
+
+ pkill -9 -f llava.serve.controller
+ pkill -9 -f llava.serve.model_worker
+ pkill -9 -f lama_server
+ pkill -9 -f llava_interactive
+
+ eval "$(conda shell.bash hook)"
+
+ (
+ conda deactivate; \
+ cd LLaVA; \
+ pwd; \
+ conda activate llava; \
+ python -m llava.serve.controller --host 0.0.0.0 --port 10000 & \
+ python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./llava-v1.5-13b &
+ )
+
+ sleep 30
+
+ (
+ conda deactivate; \
+ conda activate lama; \
+ cd lama; \
+ pwd; \
+ export TORCH_HOME=$(pwd) && export PYTHONPATH=$(pwd); \
+ python ../lama_server.py &
+ )
+
+ sleep 10
+
+ (
+ conda deactivate; \
+ conda activate llava_int; \
+ export LLAVA_INTERACTIVE_HOME=.; \
+ python llava_interactive.py
+ )
+
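The fixed sleep delays above only approximate server startup time; a more robust variant would poll the backend ports before launching the UI. A minimal sketch, using the port numbers from the commands above (10000 for the controller, 40000 for the model worker, 9171 for the LaMa server):

    import socket
    import time

    # Block until each backend port accepts TCP connections
    for port in (10000, 40000, 9171):
        while True:
            try:
                with socket.create_connection(("localhost", port), timeout=1):
                    break  # this port is up; move on to the next one
            except OSError:
                time.sleep(1)
    print("All backends are up; llava_interactive.py can start")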
setup.sh ADDED
@@ -0,0 +1,35 @@
+ echo "Cloning dependent repos..."
+ git clone --single-branch https://github.com/wchen-github/GLIGEN.git
+ git clone --single-branch https://github.com/wchen-github/Segment-Everything-Everywhere-All-At-Once.git SEEM
+ git clone --single-branch https://github.com/wchen-github/LLaVA
+ git clone --single-branch https://github.com/advimman/lama.git
+
+
+ echo "Creating environments and downloading pretrained models..."
+
+ cd LLaVA
+ conda create -n llava python=3.10 -y
+ conda activate llava
+ pip install --upgrade pip  # enable PEP 660 support
+ pip install -e .
+ # download pretrained model
+ git clone https://huggingface.co/liuhaotian/llava-v1.5-13b
+ conda deactivate
+ cd ..
+
+ # setting up lama
+ cd lama
+ conda env create --name lama -f conda_env.yml -y
+ conda activate lama
+ conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch -y
+ pip install torch==1.10.2+cu113 --find-links https://download.pytorch.org/whl/cu113/torch_stable.html
+ pip install torchvision==0.11.3+cu113 --find-links https://download.pytorch.org/whl/cu113/torch_stable.html
+ pip install flask
+ pip install pytorch-lightning
+ # download pretrained model
+ git clone https://huggingface.co/smartywu/big-lama download
+ unzip download/big-lama.zip
+
+ conda deactivate
+ cd ..
+ echo "Done setting up."