diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml new file mode 100644 index 0000000000000000000000000000000000000000..ed372f220d15d668d58395ca72ad7df681e79559 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -0,0 +1,83 @@ +name: Bug Report +description: You think something is broken in the UI +title: "[Bug]: " +labels: ["bug-report"] + +body: + - type: checkboxes + attributes: + label: Is there an existing issue for this? + description: Please search to see if an issue already exists for the bug you encountered, and that it hasn't been fixed in a recent build/commit. + options: + - label: I have searched the existing issues and checked the recent builds/commits + required: true + - type: markdown + attributes: + value: | + *Please fill this form with as much information as possible; don't forget to fill in "What OS..." and "What browsers", and **provide screenshots if possible*** + - type: textarea + id: what-did + attributes: + label: What happened? + description: Tell us what happened in a very clear and simple way + validations: + required: true + - type: textarea + id: steps + attributes: + label: Steps to reproduce the problem + description: Please provide us with precise step-by-step information on how to reproduce the bug + value: | + 1. Go to .... + 2. Press .... + 3. ... + validations: + required: true + - type: textarea + id: what-should + attributes: + label: What should have happened? + description: Tell us what you think the normal behavior should be + validations: + required: true + - type: input + id: commit + attributes: + label: Commit where the problem happens + description: Which commit are you running? (Do not write *Latest version/repo/commit*, as this means nothing and will have changed by the time we read your issue. Rather, copy the **Commit hash** shown in the cmd/terminal when you launch the UI; see the note after these issue templates for a quick way to print it) + validations: + required: true + - type: dropdown + id: platforms + attributes: + label: What platforms do you use to access the UI? + multiple: true + options: + - Windows + - Linux + - MacOS + - iOS + - Android + - Other/Cloud + - type: dropdown + id: browsers + attributes: + label: What browsers do you use to access the UI? + multiple: true + options: + - Mozilla Firefox + - Google Chrome + - Brave + - Apple Safari + - Microsoft Edge + - type: textarea + id: cmdargs + attributes: + label: Command Line Arguments + description: Are you using any launch parameters/command line arguments (modified webui-user.py)? If yes, please write them below + render: Shell + - type: textarea + id: misc + attributes: + label: Additional information, context and logs + description: Please provide us with any relevant additional info, context, or log output. diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml new file mode 100644 index 0000000000000000000000000000000000000000..f58c94a9be6847193a971ac67aa83e9a6d75c0ae --- /dev/null +++ b/.github/ISSUE_TEMPLATE/config.yml @@ -0,0 +1,5 @@ +blank_issues_enabled: false +contact_links: + - name: WebUI Community Support + url: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions + about: Please ask and answer questions here.
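A quick way for reporters to print the commit hash the bug template asks for, assuming the UI was installed via `git clone` as described in the README later in this diff:

```bash
# Run inside the stable-diffusion-webui checkout;
# prints the exact commit the local copy is on.
git rev-parse HEAD
```

The output is the full 40-character hash; the short form (`git rev-parse --short HEAD`) is also fine for reports.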
diff --git a/.github/ISSUE_TEMPLATE/feature_request.yml b/.github/ISSUE_TEMPLATE/feature_request.yml new file mode 100644 index 0000000000000000000000000000000000000000..8ca6e21f5f47bc764be0e72af71a81dd01df5e69 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/feature_request.yml @@ -0,0 +1,40 @@ +name: Feature request +description: Suggest an idea for this project +title: "[Feature Request]: " +labels: ["suggestion"] + +body: + - type: checkboxes + attributes: + label: Is there an existing issue for this? + description: Please search to see if an issue already exists for the feature you want, and that it hasn't been implemented in a recent build/commit. + options: + - label: I have searched the existing issues and checked the recent builds/commits + required: true + - type: markdown + attributes: + value: | + *Please fill this form with as much information as possible; provide screenshots and/or illustrations of the feature if possible* + - type: textarea + id: feature + attributes: + label: What would your feature do? + description: Tell us about your feature in a very clear and simple way, and what problem it would solve + validations: + required: true + - type: textarea + id: workflow + attributes: + label: Proposed workflow + description: Please provide us with step-by-step information on how you'd like the feature to be accessed and used + value: | + 1. Go to .... + 2. Press .... + 3. ... + validations: + required: true + - type: textarea + id: misc + attributes: + label: Additional information + description: Add any other context or screenshots about the feature request here. diff --git a/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md b/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md new file mode 100644 index 0000000000000000000000000000000000000000..86009613eed49cad5683798cc345054b4b8bcc9f --- /dev/null +++ b/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md @@ -0,0 +1,28 @@ +# Please read the [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing) before submitting a pull request! + +If you have a large change, pay special attention to this paragraph: + +> Before making changes, if you think that your feature will result in more than 100 lines changing, find me and talk to me about the feature you are proposing. It pains me to reject the hard work someone else did, but I won't add everything to the repo, and it's better if the rejection happens before you have to waste time working on the feature. + +Otherwise, after making sure you're following the rules described in the wiki page, remove this section and continue on. + +**Describe what this pull request is trying to achieve.** + +A clear and concise description of what you're trying to accomplish with this, so your intent doesn't have to be extracted from your code. + +**Additional notes and description of your changes** + +More technical discussion about your changes goes here, plus anything that a maintainer might have to specifically take a look at, or be wary of. + +**Environment this was tested in** + +List the environment you have developed/tested this on. As per the contributing page, changes should be able to work on Windows out of the box. + - OS: [e.g. Windows, Linux] + - Browser: [e.g. Chrome, Safari] + - Graphics card: [e.g. NVIDIA RTX 2080 8GB, AMD RX 6600 8GB] + +**Screenshots or videos of your changes** + +If applicable, include screenshots or a video showing off your changes.
If it edits the existing UI, it should ideally include a comparison showing what was there before your changes were made. + +This is **required** for anything that touches the user interface. \ No newline at end of file diff --git a/.github/workflows/on_pull_request.yaml b/.github/workflows/on_pull_request.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b097d1805cc0ea1ad1cd6b98feaf5031897ccca1 --- /dev/null +++ b/.github/workflows/on_pull_request.yaml @@ -0,0 +1,42 @@ +# See https://github.com/actions/starter-workflows/blob/1067f16ad8a1eac328834e4b0ae24f7d206f810d/ci/pylint.yml for the original reference file +name: Run Linting/Formatting on Pull Requests + +on: + - push + - pull_request + # See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#onpull_requestpull_request_targetbranchesbranches-ignore for syntax docs + # if you want to filter out branches, delete the `- pull_request` and uncomment these lines: + # pull_request: + # branches: + # - master + # branches-ignore: + # - development + +jobs: + lint: + runs-on: ubuntu-latest + steps: + - name: Checkout Code + uses: actions/checkout@v3 + - name: Set up Python 3.10 + uses: actions/setup-python@v3 + with: + python-version: 3.10.6 + - uses: actions/cache@v2 + with: + path: ~/.cache/pip + key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }} + restore-keys: | + ${{ runner.os }}-pip- + - name: Install PyLint + run: | + python -m pip install --upgrade pip + pip install pylint + # This lets PyLint check to see if it can resolve imports + - name: Install dependencies + run: | + export COMMANDLINE_ARGS="--skip-torch-cuda-test --exit" + python launch.py + - name: Analysing the code with pylint + run: | + pylint $(git ls-files '*.py') diff --git a/.github/workflows/run_tests.yaml b/.github/workflows/run_tests.yaml new file mode 100644 index 0000000000000000000000000000000000000000..49dc92bd97dd31d4e8ecd318fc1372b6e9fcdd45 --- /dev/null +++ b/.github/workflows/run_tests.yaml @@ -0,0 +1,31 @@ +name: Run basic feature tests on CPU with an empty SD model + +on: + - push + - pull_request + +jobs: + test: + runs-on: ubuntu-latest + steps: + - name: Checkout Code + uses: actions/checkout@v3 + - name: Set up Python 3.10 + uses: actions/setup-python@v4 + with: + python-version: 3.10.6 + - uses: actions/cache@v3 + with: + path: ~/.cache/pip + key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }} + restore-keys: ${{ runner.os }}-pip- + - name: Run tests + run: python launch.py --tests basic_features --no-half --disable-opt-split-attention --use-cpu all --skip-torch-cuda-test + - name: Upload main app stdout-stderr + uses: actions/upload-artifact@v3 + if: always() + with: + name: stdout-stderr + path: | + test/stdout.txt + test/stderr.txt diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..21fa26a75b0b18e2998675e74f6e67646b3b5435 --- /dev/null +++ b/.gitignore @@ -0,0 +1,34 @@ +__pycache__ +*.ckpt +*.safetensors +*.pth +/ESRGAN/* +/SwinIR/* +/repositories +/venv +/tmp +/model.ckpt +/models/**/* +/GFPGANv1.3.pth +/gfpgan/weights/*.pth +/ui-config.json +/outputs +/config.json +/log +/webui.settings.bat +/embeddings +/styles.csv +/params.txt +/styles.csv.bak +/webui-user.bat +/webui-user.sh +/interrogate +/user.css +/.idea +notification.mp3 +/SwinIR +/textual_inversion +.vscode +/extensions +/test/stdout.txt +/test/stderr.txt diff --git a/.pylintrc b/.pylintrc new file mode 100644 index
0000000000000000000000000000000000000000..53254e5dcfd871c8c0f0f4dec9dceeb1ba967eda --- /dev/null +++ b/.pylintrc @@ -0,0 +1,3 @@ +# See https://pylint.pycqa.org/en/latest/user_guide/messages/message_control.html +[MESSAGES CONTROL] +disable=C,R,W,E,I diff --git a/CODEOWNERS b/CODEOWNERS new file mode 100644 index 0000000000000000000000000000000000000000..7438c9bc69d2a53df3dffd053ca82a689f97adb2 --- /dev/null +++ b/CODEOWNERS @@ -0,0 +1,12 @@ +* @AUTOMATIC1111 + +# if you were managing a localization and were removed from this file, this is because +# the intended way to do localizations now is via extensions. See: +# https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions +# Make a repo with your localization and, since you are still listed as a collaborator, +# you can add it to the wiki page yourself. This change was made because some people complained +# that the git commit log was cluttered with things unrelated to almost everyone, and +# because I believe this is the best way overall for the project to handle localizations almost +# entirely without my oversight. + + diff --git a/README.md b/README.md index b4df6186ce2a3a2fa05a6765b3c180c091b60873..556000fb8284e189bc6034ef29e23b81ab71fff0 100644 --- a/README.md +++ b/README.md @@ -1,3 +1,151 @@ ---- -license: creativeml-openrail-m ---- +# Stable Diffusion web UI +A browser interface based on the Gradio library for Stable Diffusion. + +![](txt2img_Screenshot.png) + +Check the [custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) wiki page for extra scripts developed by users. + +## Features +[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features): +- Original txt2img and img2img modes +- One-click install and run script (but you still must install Python and git) +- Outpainting +- Inpainting +- Color Sketch +- Prompt Matrix +- Stable Diffusion Upscale +- Attention, specify parts of text that the model should pay more attention to + - a man in a ((tuxedo)) - will pay more attention to tuxedo + - a man in a (tuxedo:1.21) - alternative syntax (each pair of parentheses multiplies attention by 1.1, so ((tuxedo)) is equivalent to (tuxedo:1.21)) + - select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous user) +- Loopback, run img2img processing multiple times +- X/Y plot, a way to draw a 2-dimensional plot of images with different parameters +- Textual Inversion + - have as many embeddings as you want and use any names you like for them + - use multiple embeddings with different numbers of vectors per token + - works with half precision floating point numbers + - train embeddings on 8GB (also reports of 6GB working) +- Extras tab with: + - GFPGAN, neural network that fixes faces + - CodeFormer, face restoration tool as an alternative to GFPGAN + - RealESRGAN, neural network upscaler + - ESRGAN, neural network upscaler with a lot of third party models + - SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers + - LDSR, Latent diffusion super resolution upscaling +- Resizing aspect ratio options +- Sampling method selection + - Adjust sampler eta values (noise multiplier) + - More advanced noise setting options +- Interrupt processing at any time +- 4GB video card support (also reports of 2GB working) +- Correct seeds for batches +- Live prompt token length validation +- Generation parameters + - parameters you used to generate images are saved with that image + - in PNG chunks for PNG, in EXIF for JPEG + - 
can drag the image to the PNG info tab to restore generation parameters and automatically copy them into the UI + - can be disabled in settings + - drag and drop an image/text parameters into the prompt box +- Read Generation Parameters button, loads parameters from the prompt box into the UI +- Settings page +- Running arbitrary Python code from the UI (must run with --allow-code to enable) +- Mouseover hints for most UI elements +- Possible to change defaults/min/max/step values for UI elements via text config +- Random artist button +- Tiling support, a checkbox to create images that can be tiled like textures +- Progress bar and live image generation preview +- Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image +- Styles, a way to save parts of prompts and easily apply them via a dropdown later +- Variations, a way to generate the same image but with tiny differences +- Seed resizing, a way to generate the same image but at a slightly different resolution +- CLIP interrogator, a button that tries to guess the prompt from an image +- Prompt Editing, a way to change the prompt mid-generation, say, to start making a watermelon and switch to an anime girl midway +- Batch Processing, process a group of files using img2img +- Img2img Alternative, reverse Euler method of cross attention control +- Highres Fix, a convenience option to produce high resolution pictures in one click without the usual distortions +- Reloading checkpoints on the fly +- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one +- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from the community +- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once + - separate prompts using uppercase `AND` + - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2` +- No token limit for prompts (the original Stable Diffusion lets you use up to 75 tokens) +- DeepDanbooru integration, creates Danbooru-style tags for anime prompts +- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards (add --xformers to the command-line args) +- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI +- Generate forever option +- Training tab + - hypernetworks and embeddings options + - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime) +- Clip skip +- Use Hypernetworks +- Use VAEs +- Estimated completion time in the progress bar +- API +- Support for the dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML.
+- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using CLIP image embeds (an implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients)) +- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see the [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions + +## Installation and Running +Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs. + +Alternatively, use online services (like Google Colab): + +- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services) + +### Automatic Installation on Windows +1. Install [Python 3.10.6](https://www.python.org/downloads/windows/), checking "Add Python to PATH". +2. Install [git](https://git-scm.com/download/win). +3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`. +4. Place `model.ckpt` in the `models` directory (see [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) for where to get it). +5. _*(Optional)*_ Place `GFPGANv1.4.pth` in the base directory, alongside `webui.py` (see [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) for where to get it). +6. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user (a sample launch-options override is sketched in the note after the Documentation section below). + +### Automatic Installation on Linux +1. Install the dependencies: +```bash +# Debian-based: +sudo apt install wget git python3 python3-venv +# Red Hat-based: +sudo dnf install wget git python3 +# Arch-based: +sudo pacman -S wget git python3 +``` +2. To install in `/home/$(whoami)/stable-diffusion-webui/`, run: +```bash +bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh) +``` + +### Installation on Apple Silicon + +Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon). + +## Contributing +Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing) + +## Documentation +The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
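As a concrete illustration of step 6 of the Windows install above: launch options go into the `COMMANDLINE_ARGS` variable, which the launcher reads (the CI workflows earlier in this diff export it before running `launch.py`). A minimal sketch of a `webui-user.sh` override, assuming the stock launch scripts pick up `webui-user.sh`/`webui-user.bat` (both are listed in the .gitignore earlier in this diff); the flag choice is illustrative:

```bash
# webui-user.sh -- user launch overrides (gitignored, so safe to edit locally)
# --xformers:   optional speedup for select cards, per the feature list above
# --allow-code: enables the "run arbitrary Python code from the UI" feature
export COMMANDLINE_ARGS="--xformers --allow-code"
```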
+ +## Credits +- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers +- k-diffusion - https://github.com/crowsonkb/k-diffusion.git +- GFPGAN - https://github.com/TencentARC/GFPGAN.git +- CodeFormer - https://github.com/sczhou/CodeFormer +- ESRGAN - https://github.com/xinntao/ESRGAN +- SwinIR - https://github.com/JingyunLiang/SwinIR +- Swin2SR - https://github.com/mv-lab/swin2sr +- LDSR - https://github.com/Hafiidz/latent-diffusion +- MiDaS - https://github.com/isl-org/MiDaS +- Ideas for optimizations - https://github.com/basujindal/stable-diffusion +- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing. +- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion) +- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas). +- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd +- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot +- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator +- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch +- xformers - https://github.com/facebookresearch/xformers +- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru +- Security advice - RyotaK +- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user. +- (You) diff --git a/artists.csv b/artists.csv new file mode 100644 index 0000000000000000000000000000000000000000..1a61ed8895ec0f6c58b9d41e24273a9907fc9fb7 --- /dev/null +++ b/artists.csv @@ -0,0 +1,3041 @@ +artist,score,category +Peter Max,0.99715996,weird +Roy Lichtenstein,0.98272276,cartoon +Romero Britto,0.9498342,scribbles +Keith Haring,0.9431302,weird +Hiroshige,0.93995106,ukioe +Joan Miró,0.9169429,scribbles +Jean-Michel Basquiat,0.90080947,scribbles +Katsushika Hokusai,0.8887236,ukioe +Paul Klee,0.8868682,scribbles +Marc Chagall,0.8868168,scribbles +Karl Schmidt-Rottluff,0.88444495,scribbles +Howard Hodgkin,0.8808578,scribbles +Jean Metzinger,0.88056004,scribbles +Alma Thomas,0.87658304,weird +Rufino Tamayo,0.8749848,scribbles +Utagawa Hiroshige,0.8728796,ukioe +Chagall,0.8718535,scribbles +Harumi Hironaka,0.86914605,scribbles +Hans Hofmann,0.8686159,scribbles +Kawanabe Kyōsai,0.86612236,ukioe +Andy Warhol,0.8654825,scribbles +Barbara Takenaga,0.86223894,scribbles +Tatsuro Kiuchi,0.8597267,cartoon +Vincent Van Gogh,0.85538065,scribbles +Wassily Kandinsky,0.85490596,scribbles +Georges Seurat,0.8534801,scribbles +Karel Appel,0.8529153,scribbles +Sonia Delaunay,0.8506156,scribbles +Hokusai,0.85046995,ukioe +Eduardo Kobra,0.85036755,weird +Fra Angelico,0.84984255,fineart +Milton Avery,0.849746,scribbles +David Hockney,0.8496144,scribbles +Hiroshi Nagai,0.847129,cartoon +Aristarkh Lentulov,0.846537,scribbles +Lyonel Feininger,0.84573764,scribbles +Mary Blair,0.845709,scribbles +Ellsworth Kelly,0.8455428,scribbles +Jun Kaneko,0.8448367,scribbles +Roz Chast,0.8432013,weird +Ida Rentoul Outhwaite,0.84275174,scribbles +Robert Motherwell,0.8409468,scribbles +Garry Winogrand,0.83994275,black-white +Andrei Rublev,0.83950496,fineart +Alexander Calder,0.83832693,scribbles +Tomokazu Matsuyama,0.8376121,scribbles 
+August Macke,0.8362022,scribbles +Kazimir Malevich,0.8356527,scribbles +Richard Scarry,0.83554685,scribbles +Victor Vasarely,0.8335438,scribbles +Kitagawa Utamaro,0.83333457,ukioe +Matt Bors,0.83252287,scribbles +Emil Nolde,0.8323225,scribbles +Patrick Caulfield,0.8322225,scribbles +Charles Blackman,0.83200824,scribbles +Peter Doig,0.83111644,scribbles +Alexej von Jawlensky,0.8308932,scribbles +Rumiko Takahashi,0.8301817,anime +Eileen Agar,0.82945526,scribbles +Ernst Ludwig Kirchner,0.82756275,scribbles +Nicolas Delort,0.8261329,scribbles +Marsden Hartley,0.8250993,scribbles +Keith Negley,0.8212553,scribbles +Jamini Roy,0.8212199,scribbles +Quentin Blake,0.82115215,scribbles +Andy Kehoe,0.82063186,cartoon +George barbier,0.82046914,fineart +Frans Masereel,0.81997275,scribbles +Umberto Boccioni,0.81921184,scribbles +Conrad Roset,0.8190752,cartoon +Paul Ranson,0.81903255,scribbles +Yayoi Kusama,0.81886625,weird +Tomi Ungerer,0.81848705,scribbles +Saul Steinberg,0.81778854,scribbles +Jon Klassen,0.81773067,scribbles +W.W. Denslow,0.81708044,fineart +Helen Frankenthaler,0.81704986,scribbles +Jean Jullien,0.816437,scribbles +Brett Whiteley,0.81601924,scribbles +Giotto Di Bondone,0.81427747,fineart +Takashi Murakami,0.81338763,weird +Howard Finster,0.81333554,scribbles +Eduardo Paolozzi,0.81312317,scribbles +Charles Rennie Mackintosh,0.81297064,scribbles +Brandon Mably,0.8128239,weird +Rebecca Louise Law,0.81214285,weird +Victo Ngai,0.81195843,cartoon +Hanabusa Itchō II,0.81187993,ukioe +Edmund Dulac,0.81104875,scribbles +Ben Shahn,0.8104582,scribbles +Howard Arkley,0.8103746,scribbles +Wilfredo Lam,0.8096211,scribbles +Michael Deforge,0.8095954,scribbles +John Hoyland,0.8094592,fineart +Francesco Clemente,0.8090387,scribbles +Leonetto Cappiello,0.8087691,scribbles +Norman Ackroyd,0.80788493,scribbles +Bhupen Khakhar,0.8077607,scribbles +Jeremiah Ketner,0.8075384,cartoon +Chris Ofili,0.8073793,scribbles +Banksy,0.80695426,scribbles +Tom Whalen,0.805867,scribbles +Ernst Wilhelm Nay,0.805295,scribbles +Henri Rousseau,0.8049866,scribbles +Kunisada,0.80493814,ukioe +Naoko Takeuchi,0.80482674,anime +Kaethe Butcher,0.80406916,scribbles +Hasui Kawase,0.8040483,ukioe +Alvin Langdon Coburn,0.8035004,black-white +Stanley Donwood,0.8033054,scribbles +Agnes Martin,0.8028028,scribbles +Osamu Tezuka,0.8005524,cartoon +Frank Stella,0.80049455,scribbles +Dale Chihuly,0.79982775,digipa-high-impact +Evgeni Gordiets,0.79967916,scribbles +Janek Sedlar,0.7993992,fineart +Alasdair Gray,0.7992301,scribbles +Yasuo Kuniyoshi,0.79870003,ukioe +Edward Gorey,0.7984938,scribbles +Johannes Itten,0.798481,scribbles +Cuno Amiet,0.7979497,scribbles +M.C. 
Escher,0.7976657,scribbles +Albert Irvin,0.79688835,scribbles +Jack Gaughan,0.79443675,scribbles +Ravi Zupa,0.7939542,scribbles +Kay Nielsen,0.79385525,scribbles +Agnolo Gaddi,0.79369193,fineart +Alessandro Gottardo,0.79321593,scribbles +Paul Laffoley,0.79196846,scribbles +Giovanni Battista Piranesi,0.79111177,fineart +Adrian Tomine,0.79109013,scribbles +Adolph Gottlieb,0.79061794,scribbles +Milton Caniff,0.7905358,cartoon +Philip Guston,0.78994095,scribbles +Debbie Criswell,0.7895031,cartoon +Alice Pasquini,0.78949904,cartoon +Johannes Vermeer,0.78931487,fineart +Lisa Frank,0.7892591,cartoon +Patrick Heron,0.78889126,scribbles +Mikhail Nesterov,0.78814346,fineart +Cézanne,0.7879481,scribbles +Tristan Eaton,0.787513,scribbles +Jillian Tamaki,0.7868066,scribbles +Takato Yamamoto,0.78460765,ukioe +Martiros Saryan,0.7844924,scribbles +Emil Orlik,0.7842625,scribbles +Armand Guillaumin,0.7840431,scribbles +Jane Newland,0.7837676,scribbles +Paul Cézanne,0.78368753,scribbles +Tove Jansson,0.78356475,scribbles +Guido Crepax,0.7835321,cartoon +OSGEMEOS,0.7829088,weird +Albert Watson,0.48901254,digipa-med-impact +Emory Douglas,0.78179604,scribbles +Chris Van Allsburg,0.66413003,fineart +Ohara Koson,0.78132576,ukioe +Nicolas de Stael,0.7802779,scribbles +Aubrey Beardsley,0.77970016,scribbles +Hishikawa Moronobu,0.7794119,ukioe +Alfred Wallis,0.77926695,scribbles +Friedensreich Hundertwasser,0.7791805,scribbles +Eyvind Earle,0.7788089,scribbles +Giotto,0.7785216,fineart +Simone Martini,0.77843,fineart +Ivan Bilibin,0.77720606,fineart +Karl Blossfeldt,0.77652574,black-white +Duy Huynh,0.77634746,scribbles +Giovanni da Udina,0.7763063,fineart +Henri-Edmond Cross,0.7762994,fineart +Barry McGee,0.77618384,scribbles +William Kentridge,0.77615225,scribbles +Alexander Archipenko,0.7759824,scribbles +Jaume Plensa,0.7756799,weird +Bill Jacklin,0.77504414,fineart +Alberto Vargas,0.7747376,cartoon +Jean Dubuffet,0.7744374,scribbles +Eugène Grasset,0.7741958,fineart +Arthur Rackham,0.77418125,fineart +Yves Tanguy,0.77380997,scribbles +Elsa Beskow,0.7736908,fineart +Georgia O’Keeffe,0.77368987,scribbles +Georgia O'Keeffe,0.77368987,scribbles +Henri Cartier-Bresson,0.7735415,black-white +Andrea del Verrocchio,0.77307427,fineart +Mark Rothko,0.77294236,scribbles +Bruce Gilden,0.7256681,black-white +Gino Severini,0.77247965,scribbles +Delphin Enjolras,0.5594248,fineart +Alena Aenami,0.77210015,cartoon +Ed Freeman,0.42526615,digipa-low-impact +Apollonia Saintclair,0.7718383,anime +László Moholy-Nagy,0.771497,scribbles +Louis Glackens,0.7713224,fineart +Fang Lijun,0.77097225,fineart +Alfred Kubin,0.74409986,fineart +David Wojnarowicz,0.7705802,scribbles +Tara McPherson,0.77023256,scribbles +Gustav Doré,0.7367536,fineart +Patricia Polacco,0.7696109,scribbles +Norman Bluhm,0.7692634,fineart +Elizabeth Gadd,0.7691194,digipa-high-impact +Gabriele Münter,0.7690926,scribbles +David Inshaw,0.76905304,scribbles +Maurice Sendak,0.7690118,cartoon +Harry Clarke,0.7688428,cartoon +Howardena Pindell,0.7686921,n +Jamie Hewlett,0.7680373,scribbles +Steve Ditko,0.76725733,scribbles +Annie Soudain,0.7671485,scribbles +Albert Gleizes,0.76658314,scribbles +Henry Fuseli,0.69147265,fineart +Alain Laboile,0.67634284,c +Albrecht Altdorfer,0.7663378,fineart +Jack Butler Yeats,0.7661406,fineart +Yue Minjun,0.76583517,scribbles +Art Spiegelman,0.7656343,scribbles +Grete Stern,0.7656276,fineart +Mordecai Ardon,0.7648692,scribbles +Joel Sternfeld,0.76456416,digipa-high-impact +Milton Glaser,0.7641823,scribbles +Eishōsai Chōki,0.7639659,scribbles 
+Domenico Ghirlandaio,0.76372653,fineart +Alex Timmermans,0.64443207,digipa-high-impact +Andreas Vesalius,0.763446,fineart +Bruce McLean,0.76335883,scribbles +Jacob Lawrence,0.76330304,scribbles +Alex Katz,0.76317835,scribbles +Henri de Toulouse-Lautrec,0.76268333,scribbles +Franz Sedlacek,0.762062,scribbles +Paul Lehr,0.70854837,cartoon +Nicholas Roerich,0.76117516,scribbles +Henri Matisse,0.76110923,scribbles +Colin McCahon,0.76086944,scribbles +Max Dupain,0.6661642,black-white +Stephen Gammell,0.74001735,weird +Alberto Giacometti,0.7596302,scribbles +Goyō Hashiguchi,0.7595048,ukioe +Gustave Doré,0.7018832,fineart +Butcher Billy,0.7593378,cartoon +Pieter de Hooch,0.75916564,fineart +Gaetano Pesce,0.75906265,scribbles +Winsor McCay,0.7589382,scribbles +Claude Cahun,0.7588153,weird +Roger Ballen,0.64683115,black-white +Ellen Gallagher,0.758621,scribbles +Anton Corbijn,0.5550669,digipa-high-impact +Margaret Macdonald Mackintosh,0.75781375,fineart +Franz Kline,0.7576461,scribbles +Cimabue,0.75720495,fineart +André Kertész,0.7319392,black-white +Hans Hartung,0.75718236,scribbles +J. J. Grandville,0.7321584,fineart +David Octavius Hill,0.6333561,digipa-high-impact +teamLab,0.7566472,digipa-high-impact +Paul Gauguin,0.75635266,scribbles +Etel Adnan,0.75631833,scribbles +Barbara Kruger,0.7562784,scribbles +Franz Marc,0.75538874,scribbles +Saul Bass,0.75496316,scribbles +El Lissitzky,0.7549487,scribbles +Thomas Moran,0.6507399,fineart +Claude Monet,0.7541377,fineart +David Young Cameron,0.7541016,scribbles +W. Heath Robinson,0.75374347,cartoon +Yves Klein,0.7536262,fineart +Albert Pinkham Ryder,0.7338848,fineart +Elizabeth Shippen Green,0.7533686,fineart +Robert Stivers,0.5516287,fineart +Emily Kame Kngwarreye,0.7532016,weird +Charline von Heyl,0.753142,scribbles +Frida Kahlo,0.75303876,scribbles +Amy Sillman,0.752921,scribbles +Emperor Huizong of Song,0.7525214,ukioe +Edward Burne-Jones,0.75220466,fineart +Brett Weston,0.6891357,black-white +Charles E. 
Burchfield,0.75174403,scribbles +Hishida Shunsō,0.751617,fareast +Elaine de Kooning,0.7514996,scribbles +Gary Panter,0.7514598,scribbles +Frederick Hammersley,0.7514268,scribbles +Gustave Dore,0.6735896,fineart +Ephraim Moses Lilien,0.7510494,fineart +Hannah Hoch,0.7509496,scribbles +Shepard Fairey,0.7508583,scribbles +Richard Burlet,0.7506659,scribbles +Bill Brandt,0.6833408,black-white +Herbert List,0.68455493,black-white +Joseph Cornell,0.75023884,nudity +Nathan Wirth,0.6436741,black-white +John Kenn Mortensen,0.74758303,anime +Andre De Dienes,0.5683014,digipa-high-impact +Albert Robida,0.7485741,cartoon +Shintaro Kago,0.7484431,anime +Sidney Nolan,0.74809414,scribbles +Patrice Murciano,0.61973965,fineart +Brian Stelfreeze,0.7478351,scribbles +Francisco De Goya,0.6954584,fineart +William Morris,0.7478111,fineart +Honoré Daumier,0.74767774,scribbles +Hubert Robert,0.6863421,fineart +Marianne von Werefkin,0.7475825,fineart +Edvard Munch,0.74719715,scribbles +Victor Brauner,0.74719006,scribbles +George Inness,0.7470588,fineart +Naoki Urasawa,0.7469665,anime +Kilian Eng,0.7468486,scribbles +Bordalo II,0.7467364,digipa-high-impact +Katsuhiro Otomo,0.746364,anime +Maximilien Luce,0.74609685,fineart +Amy Earles,0.74603415,fineart +Jeanloup Sieff,0.7196009,black-white +William Zorach,0.74574494,scribbles +Pascale Campion,0.74516207,fineart +Dorothy Lathrop,0.74418795,fineart +Sofonisba Anguissola,0.74418664,fineart +Natalia Goncharova,0.74414873,scribbles +August Sander,0.6644566,black-white +Jasper Johns,0.74395454,scribbles +Arthur Dove,0.74383533,scribbles +Darwyn Cooke,0.7435789,scribbles +Leonardo Da Vinci,0.6825216,fineart +Fra Filippo Lippi,0.7433891,fineart +Pierre-Auguste Renoir,0.742464,fineart +Jeff Lemire,0.7422893,scribbles +Al Williamson,0.742113,cartoon +Childe Hassam,0.7418015,fineart +Francisco Goya,0.69522625,fineart +Alphonse Mucha,0.74171394,special +Cleon Peterson,0.74163914,scribbles +J.M.W. Turner,0.65582645,fineart +Walter Crane,0.74146044,fineart +Brassaï,0.6361966,digipa-high-impact +Virgil Finlay,0.74133486,fineart +Fernando Botero,0.7412504,nudity +Ben Nicholson,0.7411573,scribbles +Robert Rauschenberg,0.7410054,fineart +David Wiesner,0.7406237,scribbles +Bartolome Esteban Murillo,0.6933951,fineart +Jean Arp,0.7403873,scribbles +Andre Kertesz,0.7228358,black-white +Simeon Solomon,0.66441345,fineart +Hugh Ferriss,0.72443527,black-white +Agnes Lawrence Pelton,0.73960555,scribbles +Charles Camoin,0.7395686,scribbles +Paul Strand,0.7080332,black-white +Charles Gwathmey,0.7394747,scribbles +Bartolomé Esteban Murillo,0.7011274,fineart +Oskar Kokoschka,0.7392038,scribbles +Bruno Munari,0.73918355,weird +Willem de Kooning,0.73916197,scribbles +Hans Memling,0.7387886,fineart +Chris Mars,0.5861489,digipa-high-impact +Hiroshi Yoshida,0.73787534,ukioe +Hundertwasser,0.7377672,fineart +David Bowie,0.73773724,weird +Ettore Sottsass,0.7376095,digipa-high-impact +Antanas Sutkus,0.7369492,black-white +Leonora Carrington,0.73726475,scribbles +Hieronymus Bosch,0.7369955,scribbles +A. J. Casson,0.73666203,scribbles +Chaim Soutine,0.73662066,scribbles +Artur Bordalo,0.7364549,weird +Thomas Allom,0.68792284,fineart +Louis Comfort Tiffany,0.7363504,fineart +Philippe Druillet,0.7363382,cartoon +Jan Van Eyck,0.7360621,fineart +Sandro Botticelli,0.7359395,fineart +Hieronim Bosch,0.7359308,scribbles +Everett Shinn,0.7355817,fineart +Camille Corot,0.7355603,fineart +Nick Sharratt,0.73470485,scribbles +Fernand Léger,0.7079839,scribbles +Robert S. 
Duncanson,0.7346282,fineart +Hieronymous Bosch,0.73453265,scribbles +Charles Addams,0.7344034,scribbles +Studio Ghibli,0.73439026,anime +Archibald Motley,0.7343683,scribbles +Anton Fadeev,0.73433846,cartoon +Uemura Shoen,0.7342118,ukioe +Ando Fuchs,0.73406494,black-white +Jessie Willcox Smith,0.73398125,fineart +Alex Garant,0.7333658,scribbles +Lawren Harris,0.73331416,scribbles +Anne Truitt,0.73297834,scribbles +Richard Lindner,0.7328564,scribbles +Sailor Moon,0.73281246,anime +Bridget Bate Tichenor,0.73274165,scribbles +Ralph Steadman,0.7325864,scribbles +Annibale Carracci,0.73251307,fineart +Dürer,0.7324789,fineart +Abigail Larson,0.7319012,cartoon +Bill Traylor,0.73189163,scribbles +Louis Rhead,0.7318623,fineart +David Burliuk,0.731803,scribbles +Camille Pissarro,0.73172396,fineart +Catrin Welz-Stein,0.73117495,scribbles +William Etty,0.6497544,nudity +Pierre Bonnard,0.7310132,scribbles +Benoit B. Mandelbrot,0.5033001,digipa-med-impact +Théodore Géricault,0.692039,fineart +Andy Goldsworthy,0.7307565,digipa-high-impact +Alfred Sisley,0.7306032,fineart +Charles-Francois Daubigny,0.73057353,fineart +Karel Thole,0.7305395,cartoon +Andre Derain,0.73050404,scribbles +Larry Poons,0.73023695,fineart +Beauford Delaney,0.72999024,scribbles +Ruth Bernhard,0.72990334,black-white +David Alfaro Siqueiros,0.7297947,scribbles +Gaugin,0.729636,fineart +Carl Larsson,0.7296195,cartoon +Albrecht Dürer,0.72946966,fineart +Henri De Toulouse Lautrec,0.7294263,cartoon +Shotaro Ishinomori,0.7292093,anime +Hope Gangloff,0.729082,scribbles +Vivian Maier,0.72897506,digipa-high-impact +Alex Andreev,0.6442978,digipa-high-impact +Julie Blackmon,0.72862685,c +Arthur Melville,0.7286146,fineart +Henri Michaux,0.599607,fineart +William Steig,0.7283096,scribbles +Octavio Ocampo,0.72814554,scribbles +Cy Twombly,0.72814107,scribbles +Guy Denning,0.67375445,fineart +Maxfield Parrish,0.7280283,fineart +Randolph Caldecott,0.7279564,fineart +Duccio,0.72795,fineart +Ray Donley,0.5837457,fineart +Hiroshi Sugimoto,0.6497892,digipa-high-impact +Daniela Uhlig,0.4691466,special +Go Nagai,0.72770613,anime +Carlo Crivelli,0.72764605,fineart +Helmut Newton,0.44433144,digipa-low-impact +Josef Albers,0.7061394,scribbles +Henry Moret,0.7274567,fineart +André Masson,0.727404,scribbles +Henri Fantin Latour,0.72732764,fineart +Theo van Rysselberghe,0.7272843,fineart +John Wayne Gacy,0.72686327,scribbles +Carlos Schwabe,0.7267612,fineart +Herbert Bayer,0.7094297,scribbles +Domenichino,0.72667265,fineart +Liam Wong,0.7262276,special +George Caleb Bingham,0.7262154,digipa-high-impact +Gigadō Ashiyuki,0.7261864,fineart +Chaïm Soutine,0.72603923,scribbles +Ary Scheffer,0.64913243,fineart +Rockwell Kent,0.7257272,scribbles +Jean-Paul Riopelle,0.72570604,fineart +Ed Mell,0.6637067,cartoon +Ismail Inceoglu,0.72561014,special +Edgar Degas,0.72538006,fineart +Giorgione,0.7252798,fineart +Charles-François Daubigny,0.7252482,fineart +Arthur Lismer,0.7251765,scribbles +Aaron Siskind,0.4852289,digipa-med-impact +Arkhip Kuindzhi,0.7249981,fineart +Joseph Mallord William Turner,0.6834406,fineart +Dante Gabriel Rossetti,0.7244541,fineart +Ernst Haeckel,0.6660129,fineart +Rebecca Guay,0.72439146,cartoon +Anthony Gerace,0.636678,digipa-high-impact +Martin Kippenberger,0.72418386,scribbles +Diego Giacometti,0.72415763,scribbles +Dmitry Kustanovich,0.7241322,cartoon +Dora Carrington,0.7239633,scribbles +Shusei Nagaoko,0.7238965,anime +Odilon Redon,0.72381747,scribbles +Shohei Otomo,0.7132803,nudity +Barnett Newman,0.7236389,scribbles +Jean 
Fouquet,0.7235963,fineart +Gustav Klimt,0.72356784,nudity +Francisco Josè de Goya,0.6589663,fineart +Bonnard Pierre,0.72309464,nudity +Brooke Shaden,0.61281693,digipa-high-impact +Mao Hamaguchi,0.7228292,scribbles +Frederick Edwin Church,0.64416,fineart +Asher Brown Durand,0.72264796,fineart +George Baselitz,0.7223453,scribbles +Sam Bosma,0.7223237,fineart +Asaf Hanuka,0.72222745,scribbles +David Teniers the Younger,0.7221168,fineart +Nicola Samori,0.68747556,nudity +Claude Lorrain,0.7217102,fineart +Hermenegildo Anglada Camarasa,0.7214374,nudity +Pablo Picasso,0.72142905,scribbles +Howard Chaykin,0.7213998,cartoon +Ferdinand Hodler,0.7213758,nudity +Farel Dalrymple,0.7213298,fineart +Lyubov Popova,0.7213024,scribbles +Albin Egger-Lienz,0.72120845,fineart +Geertgen tot Sint Jans,0.72107565,fineart +Kate Greenaway,0.72069687,fineart +Louise Bourgeois,0.7206516,fineart +Miriam Schapiro,0.72026414,fineart +Pieter Claesz,0.7200939,fineart +George B. Bridgman,0.5592567,fineart +Piet Mondrian,0.71990657,scribbles +Michelangelo Merisi Da Caravaggio,0.7094674,fineart +Marie Spartali Stillman,0.71986604,fineart +Gertrude Abercrombie,0.7196962,scribbles +Louis Icart,0.7195913,fineart +David Driskell,0.719564,scribbles +Paula Modersohn-Becker,0.7193769,scribbles +George Hurrell,0.57496595,digipa-high-impact +Andrea Mantegna,0.7190254,fineart +Silvestro Lega,0.71891177,fineart +Junji Ito,0.7188978,anime +Jacob Hashimoto,0.7186867,digipa-high-impact +Benjamin West,0.6642946,fineart +David Teniers the Elder,0.7181293,fineart +Roberto Matta,0.71808386,fineart +Chiho Aoshima,0.71801454,anime +Amedeo Modigliani,0.71788836,scribbles +Raja Ravi Varma,0.71788085,fineart +Roberto Ferri,0.538221,nudity +Winslow Homer,0.7176876,fineart +Horace Vernet,0.65729,fineart +Lucas Cranach the Elder,0.71738195,fineart +Godfried Schalcken,0.625893,fineart +Affandi,0.7170285,nudity +Diane Arbus,0.655138,digipa-high-impact +Joseph Ducreux,0.65247905,digipa-high-impact +Berthe Morisot,0.7165984,fineart +Hilma af Klint,0.71643853,scribbles +Filippino Lippi,0.7163017,fineart +Leonid Afremov,0.7163005,fineart +Chris Ware,0.71628594,scribbles +Marius Borgeaud,0.7162446,scribbles +M.W. 
Kaluta,0.71612585,cartoon +Govert Flinck,0.68975246,fineart +Charles Demuth,0.71605396,scribbles +Coles Phillips,0.7158309,scribbles +Oskar Fischinger,0.6721027,digipa-high-impact +David Teniers III,0.71569765,fineart +Jean Delville,0.7156771,fineart +Antonio Saura,0.7155949,scribbles +Bridget Riley,0.7155669,fineart +Gordon Parks,0.5759978,digipa-high-impact +Anselm Kiefer,0.71514887,scribbles +Remedios Varo,0.7150927,weird +Franz Hegi,0.71495223,scribbles +Kati Horna,0.71486115,black-white +Arshile Gorky,0.71459055,scribbles +David LaChapelle,0.7144903,scribbles +Fritz von Dardel,0.71446383,scribbles +Edward Ruscha,0.71438885,fineart +Blanche Hoschedé Monet,0.7143073,fineart +Alexandre Calame,0.5735474,fineart +Sean Scully,0.714154,fineart +Alexandre Benois,0.7141515,fineart +Sally Mann,0.6534312,black-white +Thomas Eakins,0.7141104,fineart +Arnold Böcklin,0.71407956,fineart +Alfonse Mucha,0.7139052,special +Damien Hirst,0.7136273,scribbles +Lee Krasner,0.71362555,scribbles +Dorothea Lange,0.71361613,black-white +Juan Gris,0.7132987,scribbles +Bernardo Bellotto,0.70720065,fineart +John Martin,0.5376847,fineart +Harriet Backer,0.7131594,fineart +Arnold Newman,0.5736342,digipa-high-impact +Gjon Mili,0.46520913,digipa-low-impact +Asger Jorn,0.7129575,scribbles +Chesley Bonestell,0.6063316,fineart +Agostino Carracci,0.7128167,fineart +Peter Wileman,0.71271706,cartoon +Chen Hongshou,0.71268153,ukioe +Catherine Hyde,0.71266896,scribbles +Andrea Pozzo,0.626546,fineart +Kitty Lange Kielland,0.7125735,fineart +Cornelis Saftleven,0.6684047,fineart +Félix Vallotton,0.71237606,fineart +Albrecht Durer,0.7122327,fineart +Jackson Pollock,0.71222305,scribbles +John Bratby,0.7122171,scribbles +Beksinski,0.71218586,fineart +James Thomas Watts,0.5959548,fineart +Konstantin Korovin,0.71188873,fineart +Gustave Caillebotte,0.71181154,fineart +Dean Ellis,0.50233585,fineart +Friedrich von Amerling,0.6420181,fineart +Christopher Balaskas,0.67935324,special +Alexander Rodchenko,0.67415404,scribbles +Alfred Cheney Johnston,0.6647291,fineart +Mikalojus Konstantinas Ciurlionis,0.710677,scribbles +Jean-Antoine Watteau,0.71061164,fineart +Paul Delvaux,0.7105914,scribbles +Francesco del Cossa,0.7104901,nudity +Isaac Cordal,0.71046066,weird +Hikari Shimoda,0.7104546,weird +François Boucher,0.67153126,fineart +Akos Major,0.7103802,digipa-high-impact +Bernard Buffet,0.7103491,cartoon +Brandon Woelfel,0.6727086,digipa-high-impact +Edouard Manet,0.7101296,fineart +Auguste Herbin,0.6866145,scribbles +Eugene Delacroix,0.70995826,fineart +L. Birge Harrison,0.70989627,fineart +Howard Pyle,0.70979863,fineart +Diane Dillon,0.70968723,scribbles +Hans Erni,0.7096618,scribbles +Richard Diebenkorn,0.7096184,scribbles +Thomas Gainsborough,0.6759419,fineart +Maria Sibylla Merian,0.7093275,fineart +François Joseph Heim,0.6175854,fineart +E. H. 
Shepard,0.7091189,cartoon +Hsiao-Ron Cheng,0.7090618,scribbles +Canaletto,0.7090392,fineart +John Atkinson Grimshaw,0.7087531,fineart +Giovanni Battista Tiepolo,0.6754107,fineart +Cornelis van Poelenburgh,0.69821274,fineart +Raina Telgemeier,0.70846486,scribbles +Francesco Hayez,0.6960006,fineart +Gilbert Stuart,0.659772,fineart +Konstantin Yuon,0.7081486,fineart +Antonello da Messina,0.70806944,fineart +Austin Osman Spare,0.7079903,fineart +James Ensor,0.70781446,scribbles +Claude Bonin-Pissarro,0.70739406,fineart +Mikhail Vrubel,0.70738363,fineart +Angelica Kauffman,0.6748828,fineart +Viktor Vasnetsov,0.7072422,fineart +Alphonse Osbert,0.70724136,fineart +Tsutomu Nihei,0.7070495,anime +Harvey Quaytman,0.63613266,fineart +Jamie Hawkesworth,0.706914,digipa-high-impact +Francesco Guardi,0.70682615,fineart +Jean-Honoré Fragonard,0.6518248,fineart +Brice Marden,0.70673287,digipa-high-impact +Charles-Amédée-Philippe van Loo,0.6725916,fineart +Mati Klarwein,0.7066092,n +Gerard ter Borch,0.706589,fineart +Dan Hillier,0.48966256,digipa-med-impact +Federico Barocci,0.682664,fineart +Henri Le Sidaner,0.70637953,fineart +Olivier Bonhomme,0.7063748,scribbles +Edward Weston,0.7061382,black-white +Giovanni Paolo Cavagna,0.6840265,fineart +Germaine Krull,0.6621777,black-white +Hans Holbein the Younger,0.70590156,fineart +François Bocion,0.6272365,fineart +Georg Baselitz,0.7053314,scribbles +Caravaggio,0.7050303,fineart +Anne Rothenstein,0.70502245,scribbles +Wadim Kashin,0.43714935,digipa-low-impact +Heinrich Lefler,0.7048054,fineart +Jacob van Ruisdael,0.7047918,fineart +Bartholomeus van Bassen,0.6676872,fineart +Jeffrey Smith art,0.56750107,fineart +Anne Packard,0.7046703,weird +Jean-François Millet,0.7045456,fineart +Andrey Remnev,0.7041204,digipa-high-impact +Fujiwara Takanobu,0.70410216,ukioe +Elliott Erwitt,0.69950557,black-white +Fern Coppedge,0.7036215,fineart +Bartholomeus van der Helst,0.66411966,fineart +Rembrandt Van Rijn,0.6979987,fineart +Rene Magritte,0.703457,scribbles +Aelbert Cuyp,0.7033657,fineart +Gerda Wegener,0.70319015,scribbles +Graham Sutherland,0.7031714,scribbles +Gerrit Dou,0.7029986,fineart +August Friedrich Schenck,0.6801586,fineart +George Herriman,0.7028568,scribbles +Stanisław Szukalski,0.6903354,fineart +Slim Aarons,0.70222545,digipa-high-impact +Ernst Thoms,0.70221686,fineart +Louis Wain,0.702186,fineart +Artemisia Gentileschi,0.70198226,fineart +Eugène Delacroix,0.70155394,fineart +Peter Bagge,0.70127463,scribbles +Jeffrey Catherine Jones,0.7012148,cartoon +Eugène Carrière,0.65272695,fineart +Alexander Millar,0.7011144,scribbles +Nobuyoshi Araki,0.70108867,fareast +Tintoretto,0.6702795,fineart +André Derain,0.7009005,scribbles +Charles Maurice Detmold,0.70079994,fineart +Francisco de Zurbarán,0.7007234,fineart +Laurie Greasley,0.70072114,cartoon +Lynda Benglis,0.7006948,digipa-high-impact +Cecil Beaton,0.66362655,black-white +Gustaf Tenggren,0.7006041,cartoon +Abdur Rahman Chughtai,0.7004994,ukioe +Constantin Brancusi,0.7004367,scribbles +Mikhail Larionov,0.7004066,fineart +Jan van Kessel the Elder,0.70040506,fineart +Chantal Joffe,0.70036674,scribbles +Charles-André van Loo,0.6830367,fineart +Reginald Marsh,0.6301042,fineart +Elsa Bleda,0.70005083,digipa-high-impact +Peter Paul Rubens,0.65745676,fineart +Eugène Boudin,0.70001304,fineart +Charles Willson Peale,0.66907954,fineart +Brian Mashburn,0.63395154,digipa-high-impact +Barkley L. 
Hendricks,0.69986427,n +Yoshiyuki Tomino,0.6998095,anime +Guido Reni,0.6416875,fineart +Lynd Ward,0.69958556,fineart +John Constable,0.6907788,fineart +František Kupka,0.6993329,fineart +Pieter Bruegel The Elder,0.6992879,scribbles +Benjamin Gerritsz Cuyp,0.6992173,fineart +Nicolas Mignard,0.6988214,fineart +Augustus Edwin Mulready,0.6482165,fineart +Andrea del Sarto,0.698532,fineart +Edward Steichen,0.69837445,black-white +James Abbott McNeill Whistler,0.69836813,fineart +Alphonse Legros,0.6983243,fineart +Ivan Aivazovsky,0.64588225,fineart +Giovanni Francesco Barbieri,0.6981316,fineart +Grace Cossington Smith,0.69811064,fineart +Bert Stern,0.53411555,scribbles +Mary Cassatt,0.6980135,fineart +Jules Bastien-Lepage,0.69796044,fineart +Max Ernst,0.69777006,fineart +Kentaro Miura,0.697743,anime +Georges Rouault,0.69758564,scribbles +Josephine Wall,0.6973667,fineart +Anne-Louis Girodet,0.58104825,nudity +Bert Hardy,0.6972966,black-white +Adriaen van de Velde,0.69716156,fineart +Andreas Achenbach,0.61108655,fineart +Hayv Kahraman,0.69705284,fineart +Beatrix Potter,0.6969851,fineart +Elmer Bischoff,0.6968948,fineart +Cornelis de Heem,0.6968436,fineart +Inio Asano,0.6965007,anime +Alfred Henry Maurer,0.6964837,fineart +Gottfried Helnwein,0.6962953,digipa-high-impact +Paul Barson,0.54196984,digipa-high-impact +Roger de La Fresnaye,0.69620967,fineart +Abraham Mignon,0.60605425,fineart +Albert Bloch,0.69573116,nudity +Charles Dana Gibson,0.67155975,fineart +Alexandre-Évariste Fragonard,0.6507174,fineart +Ernst Fuchs,0.6953538,nudity +Alfredo Jaar,0.6952965,digipa-high-impact +Judy Chicago,0.6952246,weird +Frans van Mieris the Younger,0.6951849,fineart +Aertgen van Leyden,0.6951305,fineart +Emily Carr,0.69512105,fineart +Frances MacDonald,0.6950408,scribbles +Hannah Höch,0.69495845,scribbles +Gillis Rombouts,0.58770025,fineart +Käthe Kollwitz,0.6947756,fineart +Barbara Stauffacher Solomon,0.6920825,fineart +Georges Lacombe,0.6944455,fineart +Gwen John,0.6944161,fineart +Terada Katsuya,0.6944026,cartoon +James Gillray,0.6871335,fineart +Robert Crumb,0.69420326,fineart +Bruce Pennington,0.6545669,fineart +David Firth,0.69400465,scribbles +Arthur Boyd,0.69399726,fineart +Antonin Artaud,0.67321455,fineart +Giuseppe Arcimboldo,0.6937329,fineart +Jim Mahfood,0.6936606,cartoon +Ossip Zadkine,0.6494374,scribbles +Atelier Olschinsky,0.69349927,fineart +Carl Frederik von Breda,0.57274634,fineart +Ken Sugimori,0.6932626,anime +Chris Friel,0.5399168,fineart +Andrew Macara,0.69307995,fineart +Alexander Jansson,0.69298327,scribbles +Anne Brigman,0.6865817,black-white +George Ault,0.66756654,fineart +Arkhyp Kuindzhi,0.6928072,digipa-high-impact +Emiliano Ponzi,0.69278395,scribbles +William Holman Hunt,0.6927663,fineart +Tamara Lempicka,0.6386007,scribbles +Mark Ryden,0.69259655,fineart +Giovanni Paolo Pannini,0.6802902,fineart +Carl Barks,0.6923666,cartoon +Fritz Bultman,0.6318746,fineart +Salomon van Ruysdael,0.690313,fineart +Carrie Mae Weems,0.6645416,n +Agostino Arrivabene,0.61166185,fineart +Gustave Boulanger,0.655797,fineart +Henry Justice Ford,0.51214355,fareast +Bernardo Strozzi,0.63510317,fineart +André Lhote,0.68718815,scribbles +Paul Corfield,0.6915611,scribbles +Gifford Beal,0.6914777,fineart +Hirohiko Araki,0.6914078,anime +Emil Carlsen,0.691326,fineart +Frans van Mieris the Elder,0.6912799,fineart +Simon Stalenhag,0.6912775,special +Henry van de Velde,0.64838886,fineart +Eleanor Fortescue-Brickdale,0.6909729,fineart +Thomas W Schaller,0.69093937,special +NHK Animation,0.6907677,cartoon +Euan 
Uglow,0.69060403,scribbles +Hendrick Goltzius,0.69058937,fineart +William Blake,0.69038224,fineart +Vito Acconci,0.58409876,digipa-high-impact +Billy Childish,0.6902057,scribbles +Ben Quilty,0.6875855,fineart +Mark Briscoe,0.69010437,fineart +Adriaen van de Venne,0.6899867,fineart +Alasdair McLellan,0.6898454,digipa-high-impact +Ed Paschke,0.68974686,scribbles +Guy Rose,0.68960273,fineart +Barbara Hepworth,0.68958247,fineart +Edward Henry Potthast,0.6895703,fineart +Francis Bacon,0.6895397,scribbles +Pawel Kuczynski,0.6894536,fineart +Bjarke Ingels,0.68933153,digipa-high-impact +Henry Ossawa Tanner,0.68932164,fineart +Alessandro Allori,0.6892961,fineart +Abraham van Calraet,0.63841593,fineart +Egon Schiele,0.6891415,scribbles +Tim Doyle,0.5474768,digipa-high-impact +Grandma Moses,0.6890782,fineart +John Frederick Kensett,0.61981744,fineart +Giacomo Balla,0.68893707,fineart +Jamie Baldridge,0.6546651,digipa-high-impact +Max Beckmann,0.6884731,scribbles +Cornelis van Haarlem,0.6677613,fineart +Edward Hopper,0.6884258,special +Barkley Hendricks,0.6883637,n +Patrick Dougherty,0.688321,digipa-high-impact +Karol Bak,0.6367705,fineart +Pierre Puvis de Chavannes,0.6880703,fineart +Antoni Tàpies,0.685689,fineart +Alexander Nasmyth,0.57695735,fineart +Laurent Grasso,0.5793272,fineart +Camille Walala,0.6076875,digipa-high-impact +Fairfield Porter,0.68790644,fineart +Alex Colville,0.68787855,fineart +Herb Ritts,0.51471305,scribbles +Gerhard Munthe,0.687658,fineart +Susan Seddon Boulet,0.68762136,scribbles +Liu Ye,0.68760437,fineart +Robert Antoine Pinchon,0.68744636,fineart +Fujiwara Nobuzane,0.6873439,fineart +Frederick Carl Frieseke,0.6873361,fineart +Aert van der Neer,0.6159286,fineart +Allen Jones,0.6869935,scribbles +Anja Millen,0.6064488,digipa-high-impact +Esaias van de Velde,0.68673944,fineart +Gyoshū Hayami,0.68665624,anime +William Hogarth,0.6720842,fineart +Frederic Church,0.6865637,fineart +Cyril Rolando,0.68644965,cartoon +Frederic Edwin Church,0.6863009,fineart +Thomas Rowlandson,0.66726154,fineart +Joachim Brohm,0.68601763,digipa-high-impact +Cristofano Allori,0.6858083,fineart +Adrianus Eversen,0.58259964,fineart +Richard Dadd,0.68546164,fineart +Ambrosius Bosschaert II,0.6854217,fineart +Paolo Veronese,0.68422073,fineart +Abraham van den Tempel,0.66463804,fineart +Duncan Grant,0.6852565,scribbles +Hendrick Cornelisz. 
van Vliet,0.6851691,fineart +Geof Darrow,0.6851174,scribbles +Émile Bernard,0.6850957,fineart +Brian Bolland,0.68496394,scribbles +James Gilleard,0.6849431,cartoon +Anton Raphael Mengs,0.6689196,fineart +Augustus Jansson,0.6845705,digipa-high-impact +Hendrik Goltzius,0.6843367,fineart +Domenico Quaglio the Younger,0.65769434,fineart +Cicely Mary Barker,0.6841806,fineart +William Eggleston,0.6840795,digipa-high-impact +David Choe,0.6840449,scribbles +Adam Elsheimer,0.6716068,fineart +Heinrich Danioth,0.5390186,fineart +Franz Stuck,0.6836468,fineart +Bernie Wrightson,0.64101505,fineart +Dorina Costras,0.6835419,fineart +El Greco,0.68343943,fineart +Gatōken Shunshi,0.6833314,anime +Giovanni Bellini,0.67622876,fineart +Aron Wiesenfeld,0.68331146,nudity +Boris Kustodiev,0.68329334,fineart +Alec Soth,0.5597321,digipa-high-impact +Artus Scheiner,0.6313348,fineart +Kelly Vivanco,0.6830933,scribbles +Shaun Tan,0.6830649,fineart +Anthony van Dyck,0.6577681,fineart +Neil Welliver,0.68297863,nudity +Robert McCall,0.68294585,fineart +Sandra Chevrier,0.68284667,scribbles +Yinka Shonibare,0.68256056,n +Arthur Tress,0.6301861,digipa-high-impact +Richard McGuire,0.6820089,scribbles +Anni Albers,0.65708244,digipa-high-impact +Aleksey Savrasov,0.65207493,fineart +Wayne Barlowe,0.6537874,fineart +Giorgio de Chirico,0.6815907,fineart +Ernest Procter,0.6815795,fineart +Adriaen Brouwer,0.6815058,fineart +Ilya Glazunov,0.6813533,fineart +Alison Bechdel,0.68096143,scribbles +Carl Holsoe,0.68082225,fineart +Alfred Edward Chalon,0.6464571,fineart +Gerard David,0.68058,fineart +Basil Blackshaw,0.6805679,fineart +Gerrit Adriaenszoon Berckheyde,0.67340267,fineart +George Hendrik Breitner,0.6804209,fineart +Abraham Bloemaert,0.68036544,fineart +Ferdinand Van Kessel,0.67742276,fineart +Hugo Simberg,0.68031186,fineart +Gaston Bussière,0.665221,fineart +Shawn Coss,0.42407864,digipa-low-impact +Hanabusa Itchō,0.68023074,ukioe +Magnus Enckell,0.6801553,fineart +Gary Larson,0.6801336,scribbles +George Manson,0.68013126,digipa-high-impact +Hayao Miyazaki,0.6800754,anime +Carl Spitzweg,0.66581815,fineart +Ambrosius Holbein,0.6798341,fineart +Domenico Pozzi,0.6434162,fineart +Dorothea Tanning,0.6797955,fineart +Jeannette Guichard-Bunel,0.5251578,digipa-high-impact +Victor Moscoso,0.62962687,fineart +Francis Picabia,0.6795391,scribbles +Charles W. Bartlett,0.67947805,fineart +David A Hardy,0.5554935,fineart +C. R. W. 
Nevinson,0.67946506,fineart +Man Ray,0.6507145,scribbles +Albert Bierstadt,0.67935765,fineart +Charles Le Brun,0.6758479,fineart +Lovis Corinth,0.67913896,fineart +Herbert Abrams,0.5507507,digipa-high-impact +Giorgio Morandi,0.6789025,fineart +Agnolo Bronzino,0.6787985,fineart +Abraham Pether,0.66922426,fineart +John Bauer,0.6786695,fineart +Arthur Stanley Wilkinson,0.67860866,fineart +Arthur Wardle,0.5510789,fineart +George Romney,0.62868094,fineart +Laurie Lipton,0.5201844,fineart +Mickalene Thomas,0.45433685,digipa-low-impact +Alice Rahon,0.6777824,scribbles +Gustave Van de Woestijne,0.6777346,scribbles +Laurel Burch,0.67766285,fineart +Hendrik Gerritsz Pot,0.67750573,fineart +John William Waterhouse,0.677472,fineart +Conor Harrington,0.5967809,fineart +Gabriel Ba,0.6773366,cartoon +Franz Xaver Winterhalter,0.62229514,fineart +George Cruikshank,0.6473593,fineart +Hyacinthe Rigaud,0.67717785,fineart +Cornelis Claesz van Wieringen,0.6770269,fineart +Adriaen van Outrecht,0.67682564,fineart +Yaacov Agam,0.6767926,fineart +Franz von Lenbach,0.61948,fineart +Clyfford Still,0.67667866,fineart +Alexander Roslin,0.66719526,fineart +Barry Windsor Smith,0.6765375,cartoon +Takeshi Obata,0.67643225,anime +John Harris,0.47712502,fineart +Bruce Davidson,0.6763525,digipa-high-impact +Hendrik Willem Mesdag,0.6762745,fineart +Makoto Shinkai,0.67610705,anime +Andreas Gursky,0.67610145,digipa-high-impact +Mike Winkelmann (Beeple),0.6510196,digipa-high-impact +Gustave Moreau,0.67607844,fineart +Frank Weston Benson,0.6760142,fineart +Eduardo Kingman,0.6759026,fineart +Benjamin Williams Leader,0.5611925,fineart +Hervé Guibert,0.55973417,black-white +Cornelis Dusart,0.6753622,fineart +Amédée Guillemin,0.6752696,fineart +Alessio Albi,0.6752633,digipa-high-impact +Matthias Grünewald,0.6751779,fineart +Fujishima Takeji,0.6751577,anime +Georges Braque,0.67514753,scribbles +John Salminen,0.67498183,fineart +Atey Ghailan,0.674873,scribbles +Giovanni Antonio Galli,0.657484,fineart +Julie Mehretu,0.6748382,fineart +Jean Auguste Dominique Ingres,0.6746286,fineart +Francesco Albani,0.6621554,fineart +Anato Finnstark,0.6744919,digipa-high-impact +Giovanni Bernardino Mazzolini,0.64416045,fineart +Antoine Le Nain,0.6233709,fineart +Ford Madox Brown,0.6743224,fineart +Gerhard Richter,0.67426133,fineart +theCHAMBA,0.6742506,cartoon +Edward Julius Detmold,0.67421955,fineart +George Stubbs,0.6209227,fineart +George Tooker,0.6740602,scribbles +Faith Ringgold,0.6739976,scribbles +Giambattista Pittoni,0.5792371,fineart +George Bellows,0.6737008,fineart +Aldus Manutius,0.67366326,fineart +Ambrosius Bosschaert,0.67364097,digipa-high-impact +Michael Parkes,0.6133628,fineart +Hans Bellmer,0.6735973,nudity +Sir James Guthrie,0.67359626,fineart +Charles Spencelayh,0.67356884,fineart +Ivan Shishkin,0.6734136,fineart +Hans Holbein the Elder,0.6733856,fineart +Filip Hodas,0.60053295,digipa-high-impact +Herman Saftleven,0.6732188,digipa-high-impact +Dirck de Quade van Ravesteyn,0.67309594,fineart +Joe Fenton,0.6730916,scribbles +Arnold Bocklin,0.6730706,fineart +Baiōken Eishun,0.6730663,anime +Giovanni Giacometti,0.6730505,fineart +Giovanni Battista Gaulli,0.65036476,fineart +William Stout,0.672887,fineart +Gavin Hamilton,0.5982757,fineart +John Stezaker,0.6726847,black-white +Frederick McCubbin,0.67263377,fineart +Christoph Ludwig Agricola,0.62750757,fineart +Alice Neel,0.67255914,scribbles +Giovanni Battista Venanzi,0.61996603,fineart +Miho Hirano,0.6724092,anime +Tom Thomson,0.6723876,fineart +Alfred Munnings,0.6723851,fineart +David 
Wilkie,0.6722781,fineart +Adriaen van Ostade,0.67220736,fineart +Alfred Eisenstaedt,0.67213774,black-white +Leon Kossoff,0.67208946,fineart +Georges de La Tour,0.6421979,fineart +Chuck Close,0.6719756,digipa-high-impact +Herbert MacNair,0.6719506,scribbles +Edward Atkinson Hornel,0.6719265,fineart +Becky Cloonan,0.67192084,cartoon +Gian Lorenzo Bernini,0.58210254,fineart +Hein Gorny,0.4982776,digipa-med-impact +Joe Webb,0.6714884,fineart +Cornelis Pietersz Bega,0.64423996,fineart +Christian Krohg,0.6713641,fineart +Cornelia Parker,0.6712246,fineart +Anna Mary Robertson Moses,0.6709144,fineart +Quentin Tarantino,0.6708354,digipa-high-impact +Frederic Remington,0.67074275,fineart +Barent Fabritius,0.6707407,fineart +Oleg Oprisco,0.6707388,digipa-high-impact +Hendrick van Streeck,0.670666,fineart +Bakemono Zukushi,0.67051035,anime +Lucy Madox Brown,0.67032814,fineart +Paul Wonner,0.6700563,scribbles +Guido Borelli Da Caluso,0.66966087,digipa-high-impact +Emil Alzamora,0.5844039,nudity +Heinrich Brocksieper,0.64469147,fineart +Dan Smith,0.669563,digipa-high-impact +Lois van Baarle,0.6695091,scribbles +Arthur Garfield Dove,0.6694996,scribbles +Matthias Jung,0.66936135,digipa-high-impact +José Clemente Orozco,0.6693544,scribbles +Don Bluth,0.6693046,cartoon +Akseli Gallen-Kallela,0.66927314,fineart +Alex Howitt,0.52858865,digipa-high-impact +Giovanni Bernardino Asoleni,0.6635405,fineart +Frederick Goodall,0.6690712,fineart +Francesco Bartolozzi,0.63431,fineart +Edmund Leighton,0.6689639,fineart +Abraham Willaerts,0.5966594,fineart +François Louis Thomas Francia,0.6207474,fineart +Carel Fabritius,0.6688478,fineart +Flora Macdonald Reid,0.6687404,fineart +Bartholomeus Breenbergh,0.6163084,fineart +Bernardino Mei,0.6486895,fineart +Carel Weight,0.6684968,fineart +Aristide Maillol,0.66843045,scribbles +Chris Leib,0.60567486,fineart +Giovanni Battista Piazzetta,0.65012705,fineart +Daniel Maclise,0.6678073,fineart +Giovanni Bernardino Azzolini,0.65774256,fineart +Aaron Horkey,0.6676864,fineart +Otto Dix,0.667294,scribbles +Ferdinand Bol,0.6414797,fineart +Adriaen Coorte,0.6670663,fineart +William Gropper,0.6669881,scribbles +Gerard de Lairesse,0.6639489,fineart +Mab Graves,0.6668356,scribbles +Fernando Amorsolo,0.66683346,fineart +Pixar Concept Artists,0.6667752,cartoon +Alfred Augustus Glendening,0.64009607,fineart +Diego Velázquez,0.6666799,fineart +Jerry Pinkney,0.6665478,fineart +Antoine Wiertz,0.6143825,fineart +Alberto Burri,0.6618252,scribbles +Max Weber,0.6664029,fineart +Hans Baluschek,0.66636246,fineart +Annie Swynnerton,0.6663346,fineart +Albert Dubois-Pillet,0.57526016,fineart +Dora Maar,0.62862253,digipa-high-impact +Kay Sage,0.5614823,fineart +David A. 
Hardy,0.51376164,fineart +Alberto Biasi,0.42917693,digipa-low-impact +Fra Bartolomeo,0.6661105,fineart +Hendrick van Balen,0.65754294,fineart +Edwin Austin Abbey,0.66596496,fineart +George Frederic Watts,0.66595024,fineart +Alexei Kondratyevich Savrasov,0.6470352,fineart +Anna Ancher,0.66581213,fineart +Irma Stern,0.66580737,fineart +Frédéric Bazille,0.6657115,fineart +Awataguchi Takamitsu,0.6656272,anime +Edward Sorel,0.6655388,fineart +Edward Lear,0.6655078,fineart +Gabriel Metsu,0.6654555,fineart +Giovanni Battista Innocenzo Colombo,0.6653655,fineart +Scott Naismith,0.6650656,fineart +John Perceval,0.6650283,fineart +Girolamo Muziano,0.64234406,fineart +Cornelis de Man,0.66494393,fineart +Cornelis Bisschop,0.64119905,digipa-high-impact +Hans Leu the Elder,0.64770013,fineart +Michael Hutter,0.62479556,fineart +Cornelia MacIntyre Foley,0.6510235,fineart +Todd McFarlane,0.6647763,cartoon +John James Audubon,0.6279882,digipa-high-impact +William Henry Hunt,0.57340264,fineart +John Anster Fitzgerald,0.6644317,fineart +Tomer Hanuka,0.6643152,cartoon +Alex Prager,0.6641814,fineart +Heinrich Kley,0.6641148,fineart +Anne Redpath,0.66407835,scribbles +Marianne North,0.6640104,fineart +Daniel Merriam,0.6639365,fineart +Bill Carman,0.66390574,fineart +Méret Oppenheim,0.66387725,digipa-high-impact +Erich Heckel,0.66384083,fineart +Iryna Yermolova,0.663623,fineart +Antoine Ignace Melling,0.61502695,fineart +Akira Toriyama,0.6635002,anime +Gregory Crewdson,0.59810174,digipa-high-impact +Helene Schjerfbeck,0.66333634,fineart +Antonio Mancini,0.6631618,fineart +Zanele Muholi,0.58554715,n +Balthasar van der Ast,0.66294503,fineart +Toei Animations,0.6629127,anime +Arthur Quartley,0.6628106,fineart +Diego Rivera,0.6625808,fineart +Hendrik van Steenwijk II,0.6623777,fineart +James Tissot,0.6623415,fineart +Kehinde Wiley,0.66218376,n +Chiharu Shiota,0.6621249,digipa-high-impact +George Grosz,0.6620224,fineart +Peter De Seve,0.6616659,cartoon +Ryan Hewett,0.6615638,fineart +Hasegawa Tōhaku,0.66146004,anime +Apollinary Vasnetsov,0.6613177,fineart +Francis Cadell,0.66119456,fineart +Henri Harpignies,0.6611012,fineart +Henry Macbeth-Raeburn,0.6213787,fineart +Christoffel van den Berghe,0.6609149,fineart +Leiji Matsumoto,0.66089404,anime +Adriaen van der Werff,0.638286,fineart +Ramon Casas,0.6606529,fineart +Arthur Hacker,0.66062653,fineart +Edward Willis Redfield,0.66058433,fineart +Carl Gustav Carus,0.65355223,fineart +Francesca Woodman,0.60435605,digipa-high-impact +Hans Makart,0.5881955,fineart +Carne Griffiths,0.660091,weird +Will Barnet,0.65995145,scribbles +Fitz Henry Lane,0.659841,fineart +Masaaki Sasamoto,0.6597158,anime +Salvador Dali,0.6290813,scribbles +Walt Kelly,0.6596993,digipa-high-impact +Charlotte Nasmyth,0.56481636,fineart +Ferdinand Knab,0.6596528,fineart +Steve Lieber,0.6596117,scribbles +Zhang Kechun,0.6595939,fareast +Olivier Valsecchi,0.5324838,digipa-high-impact +Joel Meyerowitz,0.65937585,digipa-high-impact +Arthur Streeton,0.6592294,fineart +Henriett Seth F.,0.6592273,fineart +Genndy Tartakovsky,0.6591695,scribbles +Otto Marseus van Schrieck,0.65890455,fineart +Hanna-Barbera,0.6588123,cartoon +Mary Anning,0.6588001,fineart +Pamela Colman Smith,0.6587648,fineart +Anton Mauve,0.6586873,fineart +Hendrick Avercamp,0.65866685,fineart +Max Pechstein,0.65860206,scribbles +Franciszek Żmurko,0.56855476,fineart +Felice Casorati,0.6584761,fineart +Louis Janmot,0.65298057,fineart +Thomas Cole,0.5408042,fineart +Peter Mohrbacher,0.58273685,fineart +Arnold Franz Brasz,0.65834284,nudity +Christian 
Rohlfs,0.6582814,fineart +Basil Gogos,0.658105,fineart +Fitz Hugh Lane,0.657923,fineart +Liubov Sergeevna Popova,0.62325525,fineart +Elizabeth MacNicol,0.65773135,fineart +Zinaida Serebriakova,0.6577016,fineart +Ernest Lawson,0.6575238,fineart +Bruno Catalano,0.6574354,fineart +Albert Namatjira,0.6573372,fineart +Fritz von Uhde,0.6572697,fineart +Edwin Henry Landseer,0.62363374,fineart +Naoto Hattori,0.621745,fareast +Reylia Slaby,0.65709853,fineart +Arthur Burdett Frost,0.6147318,fineart +Frank Miller,0.65707314,digipa-high-impact +Algernon Talmage,0.65702903,fineart +Itō Jakuchū,0.6570199,digipa-high-impact +Billie Waters,0.65684533,digipa-high-impact +Ingrid Baars,0.58558,digipa-high-impact +Pieter Jansz Saenredam,0.6566058,fineart +Egbert van Heemskerck,0.6125889,fineart +John French Sloan,0.6362145,fineart +Craola,0.65639997,scribbles +Benjamin Marra,0.61809736,nudity +Anthony Thieme,0.65609205,fineart +Satoshi Kon,0.65606606,anime +Masamune Shirow,0.65592873,anime +Alfred Stevens,0.6557321,fineart +Hariton Pushwagner,0.6556745,anime +Carlo Carrà,0.6556279,fineart +Stuart Davis,0.6050534,digipa-high-impact +David Shrigley,0.6553904,digipa-high-impact +Albrecht Anker,0.65531695,fineart +Anton Semenov,0.6552501,digipa-high-impact +Fabio Hurtado,0.5955889,fineart +Donald Judd,0.6552257,fineart +Francisco de Burgos Mantilla,0.65516514,fineart +Barthel Bruyn the Younger,0.6551433,fineart +Abram Arkhipov,0.6550962,fineart +Paulus Potter,0.65498203,fineart +Edward Lamson Henry,0.6549521,fineart +Audrey Kawasaki,0.654843,fineart +George Catlin,0.6547183,fineart +Adélaïde Labille-Guiard,0.6066263,fineart +Sandy Skoglund,0.6546999,digipa-high-impact +Hans Baldung,0.654431,fineart +Ethan Van Sciver,0.65442884,cartoon +Frans Hals,0.6542338,fineart +Caspar David Friedrich,0.6542175,fineart +Charles Conder,0.65420866,fineart +Betty Churcher,0.65387225,fineart +Claes Corneliszoon Moeyaert,0.65386075,fineart +David Bomberg,0.6537477,fineart +Abraham Bosschaert,0.6535562,fineart +Giuseppe de Nittis,0.65354455,fineart +John La Farge,0.65342575,fineart +Frits Thaulow,0.65341854,fineart +John Duncan,0.6532379,fineart +Floris van Dyck,0.64900756,fineart +Anton Pieck,0.65310377,fineart +Roger Dean,0.6529647,nudity +Maximilian Pirner,0.65280807,fineart +Dorothy Johnstone,0.65267503,fineart +Govert Dircksz Camphuysen,0.65258145,fineart +Ryohei Hase,0.6168618,fineart +Hans von Aachen,0.62437224,fineart +Gustaf Munch-Petersen,0.6522485,fineart +Earnst Haeckel,0.6344333,fineart +Giovanni Battista Bracelli,0.62635326,fineart +Hendrick Goudt,0.6521433,fineart +Aneurin Jones,0.65191466,fineart +Bryan Hitch,0.6518333,cartoon +Coby Whitmore,0.6515695,fineart +Barthélemy d'Eyck,0.65156406,fineart +Quint Buchholz,0.65151155,fineart +Adriaen Hanneman,0.6514815,fineart +Tom Roberts,0.5855832,fineart +Fernand Khnopff,0.6512954,nudity +Charles Vess,0.6512271,cartoon +Carlo Galli Bibiena,0.6511681,nudity +Alexander Milne Calder,0.6081027,fineart +Josan Gonzalez,0.6193469,cartoon +Barthel Bruyn the Elder,0.6509954,fineart +Jon Whitcomb,0.6046063,fineart +Arcimboldo,0.6509897,fineart +Hendrik van Steenwijk I,0.65086293,fineart +Albert Joseph Pénot,0.65085316,fineart +Edward Wadsworth,0.6308917,scribbles +Andrew Wyeth,0.6507103,fineart +Correggio,0.650689,fineart +Frances Currey,0.65068,fineart +Henryk Siemiradzki,0.56721973,fineart +Worthington Whittredge,0.6504713,fineart +Federico Zandomeneghi,0.65033823,fineart +Isaac Levitan,0.6503356,fineart +Russ Mills,0.65012795,fineart +Edith Lawrence,0.65010095,fineart +Gil 
Elvgren,0.5614284,digipa-high-impact +Chris Foss,0.56495357,fineart +Francesco Zuccarelli,0.612805,fineart +Hendrick Bloemaert,0.64962655,fineart +Egon von Vietinghoff,0.57180583,fineart +Pixar,0.6495793,cartoon +Daniel Clowes,0.6495775,fineart +Friedrich Ritter von Friedländer-Malheim,0.6493772,fineart +Rebecca Sugar,0.6492679,scribbles +Chen Daofu,0.6492026,fineart +Dustin Nguyen,0.64909416,cartoon +Raymond Duchamp-Villon,0.6489605,nudity +Daniel Garber,0.6489332,fineart +Antonio Canova,0.58764786,fineart +Algernon Blackwood,0.59256804,fineart +Betye Saar,0.64877665,fineart +William S. Burroughs,0.5505619,fineart +Rodney Matthews,0.64844495,fineart +Michelangelo Buonarroti,0.6484401,fineart +Posuka Demizu,0.64843124,anime +Joao Ruas,0.6484134,fineart +Andy Fairhurst,0.6480388,special +"Andries Stock, Dutch Baroque painter",0.6479797,fineart +Antonio de la Gandara,0.6479292,fineart +Bruce Timm,0.6477877,scribbles +Harvey Kurtzman,0.64772683,cartoon +Eiichiro Oda,0.64772165,anime +Edwin Landseer,0.6166703,fineart +Carl Heinrich Bloch,0.64755356,fineart +Adriaen Isenbrant,0.6475428,fineart +Santiago Caruso,0.6473954,fineart +Alfred Guillou,0.6472603,fineart +Clara Peeters,0.64725095,fineart +Kim Jung Gi,0.6472225,cartoon +Milo Manara,0.6471776,cartoon +Phil Noto,0.6470769,anime +Kaws,0.6470336,cartoon +Desmond Morris,0.5951916,fineart +Gediminas Pranckevicius,0.6467787,fineart +Jack Kirby,0.6467424,cartoon +Claes Jansz. Visscher,0.6466888,fineart +Augustin Meinrad Bächtiger,0.6465789,fineart +John Lavery,0.64643383,fineart +Anne Bachelier,0.6464065,fineart +Giuseppe Bernardino Bison,0.64633006,fineart +E. T. A. Hoffmann,0.5887251,fineart +Ambrosius Benson,0.6457839,fineart +Cornelis Verbeeck,0.645782,fineart +H. R. Giger,0.6456823,weird +Adolph Menzel,0.6455246,fineart +Aliza Razell,0.5863178,digipa-high-impact +Gerard Seghers,0.6205679,fineart +David Aja,0.62812066,scribbles +Gustave Courbet,0.64476407,fineart +Alexandre Cabanel,0.63849115,fineart +Albert Marquet,0.64471006,fineart +Harold Harvey,0.64464307,fineart +William Wegman,0.6446265,scribbles +Harold Gilman,0.6445966,fineart +Jeremy Geddes,0.57839495,digipa-high-impact +Abraham van Beijeren,0.6356113,fineart +Eugène Isabey,0.6160607,fineart +Jorge Jacinto,0.58618563,fineart +Frederic Leighton,0.64383554,fineart +Dave McKean,0.6438012,cartoon +Hiromu Arakawa,0.64371413,anime +Aaron Douglas,0.6437089,fineart +Adolf Dietrich,0.590169,fineart +Frederik de Moucheron,0.6435952,fineart +Siya Oum,0.6435919,cartoon +Alberto Morrocco,0.64352196,fineart +Robert Vonnoh,0.6433115,fineart +Tom Bagshaw,0.5322264,fineart +Guerrilla Girls,0.64309967,digipa-high-impact +Johann Wolfgang von Goethe,0.6429888,fineart +Charles Le Roux,0.6426594,fineart +Auguste Toulmouche,0.64261353,fineart +Cindy Sherman,0.58666563,digipa-high-impact +Federico Zuccari,0.6425021,fineart +Mike Mignola,0.642346,cartoon +Cecily Brown,0.6421981,fineart +Brian K. 
Vaughan,0.64147836,cartoon +RETNA (Marquis Lewis),0.47963,n +Klaus Janson,0.64129144,cartoon +Alessandro Galli Bibiena,0.6412889,fineart +Jeremy Lipking,0.64123213,fineart +Stephen Shore,0.64108944,digipa-high-impact +Heinz Edelmann,0.51325977,digipa-med-impact +Joaquín Sorolla,0.6409732,fineart +Bella Kotak,0.6409608,digipa-high-impact +Cornelis Engebrechtsz,0.64091057,fineart +Bruce Munro,0.64084166,digipa-high-impact +Marjane Satrapi,0.64076495,fineart +Jeremy Mann,0.557744,digipa-high-impact +Heinrich Maria Davringhausen,0.6403986,fineart +Kengo Kuma,0.6402023,digipa-high-impact +Alfred Manessier,0.640153,fineart +Antonio Galli Bibiena,0.6399247,digipa-high-impact +Eduard von Grützner,0.6397164,fineart +Bunny Yeager,0.5455078,digipa-high-impact +Adolphe Willette,0.6396935,fineart +Wangechi Mutu,0.6394607,n +Peter Milligan,0.6391612,digipa-high-impact +Dalí,0.45400402,digipa-low-impact +Élisabeth Vigée Le Brun,0.6388982,fineart +Beth Conklin,0.6388204,digipa-high-impact +Charles Alphonse du Fresnoy,0.63881266,fineart +Thomas Benjamin Kennington,0.56668127,fineart +Jim Woodring,0.5625168,fineart +Francisco Oller,0.63846034,fineart +Csaba Markus,0.6384506,fineart +Botero,0.63843524,scribbles +Bill Henson,0.5394536,digipa-high-impact +Anna Bocek,0.6382304,scribbles +Hugo van der Goes,0.63822484,fineart +Robert William Hume,0.5433574,fineart +Chip Zdarsky,0.6381826,cartoon +Daniel Seghers,0.53494316,fineart +Richard Doyle,0.6377541,fineart +Hendrick Terbrugghen,0.63773805,fineart +Joe Madureira,0.6377177,special +Floris van Schooten,0.6376191,fineart +Jeff Simpson,0.3959046,fineart +Albert Joseph Moore,0.6374316,fineart +Arthur Merric Boyd,0.6373228,fineart +Amadeo de Souza Cardoso,0.5927926,fineart +Os Gemeos,0.6368859,digipa-high-impact +Giovanni Boldini,0.6368698,fineart +Albert Goodwin,0.6368695,fineart +Hans Eduard von Berlepsch-Valendas,0.61562145,fineart +Edmond Xavier Kapp,0.5758474,fineart +François Quesnel,0.6365935,fineart +Nathan Coley,0.6365817,digipa-high-impact +Jasmine Becket-Griffith,0.6365083,digipa-high-impact +Raphaelle Peale,0.6364422,fineart +Candido Portinari,0.63634276,fineart +Edward Dugmore,0.63179636,fineart +Anders Zorn,0.6361722,fineart +Ed Emshwiller,0.63615763,fineart +Francis Coates Jones,0.6361159,fineart +Ernst Haas,0.6361123,digipa-high-impact +Dirck van Baburen,0.6213001,fineart +René Lalique,0.63594735,fineart +Sydney Prior Hall,0.6359345,fineart +Brad Kunkle,0.5659712,fineart +Corneille,0.6356381,fineart +Henry Lamb,0.63560975,fineart +Dirck Hals,0.63559663,fineart +Alex Grey,0.62908936,nudity +Michael Heizer,0.63555753,fineart +Yiannis Moralis,0.61731136,fineart +Emily Murray Paterson,0.4392335,fineart +Georg Friedrich Kersting,0.6256248,fineart +Frances Hodgkins,0.6352128,fineart +Charles Cundall,0.6349486,fineart +Henry Wallis,0.63478243,fineart +Goro Fujita,0.6346491,cartoon +Jean-Léon Gérôme,0.5954844,fineart +August von Pettenkofen,0.60910493,fineart +Abbott Handerson Thayer,0.63428533,fineart +Martin John Heade,0.5926603,fineart +Ellen Jewett,0.63420236,digipa-high-impact +Hidari Jingorō,0.63388014,fareast +Taiyō Matsumoto,0.63372946,special +Emanuel Leutze,0.6007246,fineart +Adam Martinakis,0.48973057,digipa-med-impact +Will Eisner,0.63349223,cartoon +Alexander Stirling Calder,0.6331682,fineart +Saturno Butto,0.6331184,nudity +Cecilia Beaux,0.6330725,fineart +Amandine Van Ray,0.6174208,digipa-high-impact +Bob Eggleton,0.63277495,digipa-high-impact +Sherree Valentine Daines,0.63274443,fineart +Frederick Lord Leighton,0.6299176,fineart +Daniel 
Ridgway Knight,0.63251615,fineart +Gaetano Previati,0.61743724,fineart +John Berkey,0.63226986,fineart +Richard Misrach,0.63201725,digipa-high-impact +Aaron Jasinski,0.57948315,fineart +"Edward Otho Cresap Ord, II",0.6317712,fineart +Evelyn De Morgan,0.6317376,fineart +Noelle Stevenson,0.63159716,digipa-high-impact +Edward Robert Hughes,0.6315573,fineart +Allan Ramsay,0.63150716,fineart +Balthus,0.6314323,scribbles +Hendrick Cornelisz Vroom,0.63143134,digipa-high-impact +Ilya Repin,0.6313043,fineart +George Lambourn,0.6312267,fineart +Arthur Hughes,0.6310194,fineart +Antonio J. Manzanedo,0.53841716,fineart +John Singleton Copley,0.6264835,fineart +Dennis Miller Bunker,0.63078755,fineart +Ernie Barnes,0.6307126,cartoon +Alison Kinnaird,0.6306353,digipa-high-impact +Alex Toth,0.6305541,digipa-high-impact +Henry Raeburn,0.6155551,fineart +Alice Bailly,0.6305177,fineart +Brian Kesinger,0.63037646,scribbles +Antoine Blanchard,0.63036835,fineart +Ron Walotsky,0.63035095,fineart +Kent Monkman,0.63027304,fineart +Naomi Okubo,0.5782754,fareast +Hercules Seghers,0.62957174,fineart +August Querfurt,0.6295643,fineart +Samuel Melton Fisher,0.6283333,fineart +David Burdeny,0.62950236,digipa-high-impact +George Bain,0.58519644,fineart +Peter Holme III,0.62938106,fineart +Grayson Perry,0.62928164,digipa-high-impact +Chris Claremont,0.6292076,digipa-high-impact +Dod Procter,0.6291759,fineart +Huang Tingjian,0.6290358,fareast +Dorothea Warren O'Hara,0.6290113,fineart +Ivan Albright,0.6289551,fineart +Hubert von Herkomer,0.6288955,fineart +Barbara Nessim,0.60589516,digipa-high-impact +Henry Scott Tuke,0.6286309,fineart +Ditlev Blunck,0.6282925,fineart +Sven Nordqvist,0.62828535,fineart +Lee Madgwick,0.6281731,fineart +Hubert van Eyck,0.6281529,fineart +Edmond Bille,0.62339354,fineart +Ejnar Nielsen,0.6280824,fineart +Arturo Souto,0.6280583,fineart +Jean Giraud,0.6279888,fineart +Storm Thorgerson,0.6277394,digipa-high-impact +Ed Benedict,0.62764007,digipa-high-impact +Christoffer Wilhelm Eckersberg,0.6014842,fineart +Clarence Holbrook Carter,0.5514105,fineart +Dorothy Lockwood,0.6273235,fineart +John Singer Sargent,0.6272487,fineart +Brigid Derham,0.6270125,digipa-high-impact +Henricus Hondius II,0.6268505,fineart +Gertrude Harvey,0.5903887,fineart +Grant Wood,0.6266253,fineart +Fyodor Vasilyev,0.5234919,digipa-med-impact +Cagnaccio di San Pietro,0.6261671,fineart +Doris Boulton-Maude,0.62593174,fineart +Adolf Hirémy-Hirschl,0.5946784,fineart +Harold von Schmidt,0.6256755,fineart +Martine Johanna,0.6256161,digipa-high-impact +Gerald Kelly,0.5579602,digipa-high-impact +Ub Iwerks,0.625396,cartoon +Dirck van der Lisse,0.6253871,fineart +Edouard Riou,0.6250113,fineart +Ilya Yefimovich Repin,0.62491584,fineart +Martin Johnson Heade,0.59421235,fineart +Afarin Sajedi,0.62475824,scribbles +Alfred Thompson Bricher,0.6247515,fineart +Edwin G. 
Lucas,0.5553578,fineart +Georges Emile Lebacq,0.56175387,fineart +Francis Davis Millet,0.5988504,fineart +Bill Sienkiewicz,0.6125557,digipa-high-impact +Giocondo Albertolli,0.62441677,fineart +Victor Nizovtsev,0.6242258,fineart +Squeak Carnwath,0.62416434,digipa-high-impact +Bill Viola,0.62409425,digipa-high-impact +Annie Abernethie Pirie Quibell,0.6240767,fineart +Jason Edmiston,0.62405366,fineart +Al Capp,0.6239494,fineart +Kobayashi Kiyochika,0.6239368,anime +Albert Anker,0.62389827,fineart +Iain Faulkner,0.62376785,fineart +Todd Schorr,0.6237408,fineart +Charles Ginner,0.62370133,fineart +Emile Auguste Carolus-Duran,0.62353987,fineart +John Philip Falter,0.623418,cartoon +Chizuko Yoshida,0.6233001,fareast +Anna Dittmann,0.62327325,cartoon +Henry Snell Gamley,0.62319934,fineart +Edmund Charles Tarbell,0.6230626,fineart +Rob Gonsalves,0.62298363,fineart +Gladys Dawson,0.6228511,fineart +Tomma Abts,0.61153626,fineart +Kate Beaton,0.53993124,digipa-high-impact +Gustave Buchet,0.62243867,fineart +Gareth Pugh,0.6223551,digipa-high-impact +Caspar van Wittel,0.57871693,fineart +Anton Otto Fischer,0.6222941,fineart +Albert Guillaume,0.56529653,fineart +Felix Octavius Carr Darley,0.62223387,fineart +Bernard van Orley,0.62221646,fineart +Edward John Poynter,0.60147405,fineart +Walter Percy Day,0.62207425,fineart +Franciszek Starowieyski,0.5709621,fineart +Auguste Baud-Bovy,0.6219854,fineart +Chris LaBrooy,0.45497298,digipa-low-impact +Abraham de Vries,0.5859101,fineart +Antoni Gaudi,0.62162614,fineart +Joe Jusko,0.62156093,digipa-high-impact +Lynda Barry,0.62154603,digipa-high-impact +Michal Karcz,0.62154436,digipa-high-impact +Raymond Briggs,0.62150294,fineart +Herbert James Gunn,0.6210927,fineart +Dwight William Tryon,0.620984,fineart +Paul Henry,0.5752968,fineart +Helio Oiticica,0.6203739,digipa-high-impact +Sebastian Errazuriz,0.62036186,digipa-high-impact +Lucian Freud,0.6203146,nudity +Frank Auerbach,0.6201102,weird +Andre-Charles Boulle,0.6200789,fineart +Franz Fedier,0.5669752,fineart +Austin Briggs,0.57675314,fineart +Hugo Sánchez Bonilla,0.61978436,digipa-high-impact +Caroline Chariot-Dayez,0.6195682,digipa-high-impact +Bill Ward,0.61953044,digipa-high-impact +Charles Bird King,0.6194487,fineart +Adrian Ghenie,0.6193521,digipa-high-impact +Agnes Cecile,0.6192814,digipa-high-impact +Augustus John,0.6191995,fineart +Jeffrey T. 
Larson,0.61913544,fineart +Alexis Simon Belle,0.3190395,digipa-low-impact +Jean-Baptiste Monge,0.5758537,fineart +Adolf Bierbrauer,0.56129396,fineart +Ayako Rokkaku,0.61891204,fareast +Lisa Keene,0.54570895,digipa-high-impact +Edmond Aman-Jean,0.57168096,fineart +Marc Davis,0.61837333,cartoon +Cerith Wyn Evans,0.61829346,digipa-high-impact +George Wyllie,0.61829203,fineart +George Luks,0.6182724,fineart +William-Adolphe Bouguereau,0.618265,c +Grigoriy Myasoyedov,0.61801606,fineart +Hashimoto Gahō,0.61795104,fineart +Charles Ragland Bunnell,0.61772746,fineart +Ambrose McCarthy Patterson,0.61764514,fineart +Bill Brauer,0.5824066,fineart +Mikko Lagerstedt,0.591015,digipa-high-impact +Koson Ohara,0.53635323,fineart +Evaristo Baschenis,0.5857368,fineart +Martin Ansin,0.5294119,fineart +Cory Loftis,0.6168619,cartoon +Joseph Stella,0.6166778,fineart +André Pijet,0.5768274,fineart +Jeff Wall,0.6162895,digipa-high-impact +Eleanor Layfield Davis,0.6158844,fineart +Saul Tepper,0.61579347,fineart +Alex Hirsch,0.6157384,cartoon +Alexandre Falguière,0.55011404,fineart +Malcolm Liepke,0.6155646,fineart +Georg Friedrich Schmidt,0.60364646,fineart +Hendrik Kerstens,0.55099905,digipa-high-impact +Félix Bódog Widder,0.6153954,fineart +Marie Guillemine Benoist,0.61532974,fineart +Kelly Mckernan,0.60047054,digipa-high-impact +Ignacio Zuloaga,0.6151608,fineart +Hubert van Ravesteyn,0.61489964,fineart +Angus McKie,0.61487424,digipa-high-impact +Colin Campbell Cooper,0.6147882,fineart +Pieter Aertsen,0.61454165,fineart +Jan Brett,0.6144608,fineart +Kazuo Koike,0.61438507,fineart +Edith Grace Wheatley,0.61428297,fineart +Ogawa Kazumasa,0.61427975,fareast +Giovanni Battista Cipriani,0.6022825,fineart +André Bauchant,0.57124996,fineart +George Abe,0.6140447,digipa-high-impact +Georges Lemmen,0.6139967,scribbles +Frank Leonard Brooks,0.6139327,fineart +Gai Qi,0.613744,anime +Frank Gehry,0.6136776,digipa-high-impact +Anton Domenico Gabbiani,0.55471313,fineart +Cassandra Austen,0.6135781,fineart +Paul Gustav Fischer,0.613273,fineart +Emiliano Di Cavalcanti,0.6131207,fineart +Meryl McMaster,0.6129995,digipa-high-impact +Domenico di Pace Beccafumi,0.6129922,fineart +Ludwig Mies van der Rohe,0.6126692,fineart +Étienne-Louis Boullée,0.6126158,fineart +Dali,0.5928694,nudity +Shinji Aramaki,0.61246127,anime +Giovanni Fattori,0.59544694,fineart +Bapu,0.6122084,c +Raphael Lacoste,0.5539114,digipa-high-impact +Scarlett Hooft Graafland,0.6119631,digipa-high-impact +Rene Laloux,0.61190474,fineart +Julius Horsthuis,0.59037095,fineart +Gerald van Honthorst,0.6115939,fineart +Dino Valls,0.611533,fineart +Tony DiTerlizzi,0.6114657,cartoon +Michael Cheval,0.61138546,anime +Charles Schulz,0.6113759,digipa-high-impact +Alvar Aalto,0.61122143,digipa-high-impact +Gu Kaizhi,0.6110798,fareast +Eugene von Guerard,0.6109776,fineart +John Cassaday,0.610949,fineart +Elizabeth Forbes,0.61092335,fineart +Edmund Greacen,0.6109115,fineart +Eugène Burnand,0.6107876,fineart +Boris Grigoriev,0.6107853,scribbles +Norman Rockwell,0.6107638,fineart +Barthélemy Menn,0.61064315,fineart +George Biddle,0.61058354,fineart +Edgar Ainsworth,0.5525424,digipa-high-impact +Alfred Leyman,0.5887217,fineart +Tex Avery,0.6104007,cartoon +Beatrice Ethel Lithiby,0.61030364,fineart +Grace Pailthorpe,0.61026484,digipa-high-impact +Brian Oldham,0.396231,digipa-low-impact +Android Jones,0.61023116,fareast +François Girardon,0.5830649,fineart +Ib Eisner,0.61016303,digipa-high-impact +Armand Point,0.610156,fineart +Henri Alphonse Barnoin,0.59465057,fineart +Jean Marc 
Nattier,0.60987425,fineart +Francisco de Holanda,0.6091294,fineart +Marco Mazzoni,0.60970783,fineart +Esaias Boursse,0.6093308,fineart +Alexander Deyneka,0.55000365,fineart +John Totleben,0.60883725,fineart +Al Feldstein,0.6087723,fineart +Adam Hughes,0.60854626,anime +Ernest Zobole,0.6085073,fineart +Alex Gross,0.60837066,digipa-high-impact +George Jamesone,0.6079673,fineart +Frank Lloyd Wright,0.60793245,scribbles +Brooke DiDonato,0.47680336,digipa-med-impact +Hans Gude,0.60780364,fineart +Ethel Schwabacher,0.60748273,fineart +Gladys Kathleen Bell,0.60747695,fineart +Adolf Fényes,0.54192233,fineart +Carel Willink,0.58120143,fineart +George Henry,0.6070727,digipa-high-impact +Ronald Balfour,0.60697085,fineart +Elsie Dalton Hewland,0.6067718,digipa-high-impact +Alex Maleev,0.6067118,fineart +Anish Kapoor,0.6067015,digipa-high-impact +Aleksandr Ivanovich Laktionov,0.606544,fineart +Kim Keever,0.6037775,digipa-high-impact +Aleksi Briclot,0.46056762,fineart +Raymond Leech,0.6062721,fineart +Richard Eurich,0.6062664,fineart +Phil Jimenez,0.60625625,cartoon +Gao Cen,0.60618126,nudity +Mike Deodato,0.6061201,cartoon +Charles Haslewood Shannon,0.6060581,fineart +Alexandre Jacovleff,0.3991747,digipa-low-impact +André Beauneveu,0.584062,fineart +Hiroshi Honda,0.60507596,digipa-high-impact +Charles Joshua Chaplin,0.60498774,fineart +Domenico Zampieri,0.6049726,fineart +Gusukuma Seihō,0.60479784,fareast +Nikolina Petolas,0.46318632,digipa-low-impact +Casey Weldon,0.6047672,cartoon +Elmyr de Hory,0.6046374,fineart +Nan Goldin,0.6046119,digipa-high-impact +Charles McAuley,0.6045995,fineart +Archibald Skirving,0.6044234,fineart +Elizabeth York Brunton,0.6043737,fineart +Dugald Sutherland MacColl,0.6042907,fineart +Titian,0.60426414,fineart +Ignacy Witkiewicz,0.6042259,fineart +Allie Brosh,0.6042061,digipa-high-impact +H.P. Lovecraft,0.6039597,digipa-high-impact +Andrée Ruellan,0.60395086,fineart +Ralph McQuarrie,0.60380936,fineart +Mead Schaeffer,0.6036558,fineart +Henri-Julien Dumont,0.571257,fineart +Kieron Gillen,0.6035093,fineart +Maginel Wright Enright Barney,0.6034306,nudity +Vincent Di Fate,0.6034131,fineart +Briton Rivière,0.6032918,fineart +Hajime Sorayama,0.60325956,nudity +Béla Czóbel,0.6031023,fineart +Edmund Blampied,0.603072,fineart +E. Simms Campbell,0.6030443,fineart +Hisui Sugiura,0.603034,fareast +Alan Davis,0.6029676,fineart +Glen Keane,0.60287905,cartoon +Frank Holl,0.6027312,fineart +Abbott Fuller Graves,0.6025608,fineart +Albert Servaes,0.60250103,black-white +Hovsep Pushman,0.5937487,fineart +Brian M. Viveros,0.60233414,fineart +Charles Fremont Conner,0.6023278,fineart +Francesco Furini,0.6022654,digipa-high-impact +Camille-Pierre Pambu Bodo,0.60191673,fineart +Yasushi Nirasawa,0.6016714,nudity +Charles Uzzell-Edwards,0.6014683,fineart +Abram Efimovich Arkhipov,0.60128385,fineart +Hedda Sterne,0.6011857,digipa-high-impact +Ben Aronson,0.6011548,fineart +Frank Frazetta,0.551121,nudity +Elizabeth Durack,0.6010842,fineart +Ian Miller,0.42153555,fareast +Charlie Bowater,0.4410439,special +Michael Carson,0.60039437,fineart +Walter Langley,0.6002273,fineart +Cornelis Anthonisz,0.6001956,fineart +Dorothy Elizabeth Bradford,0.6001929,fineart +J.C. 
Leyendecker,0.5791972,fineart +Willem van Haecht,0.59990716,fineart +Anna and Elena Balbusso,0.59955937,digipa-low-impact +Harrison Fisher,0.59952044,fineart +Bill Medcalf,0.59950054,fineart +Edward Arthur Walton,0.59945667,fineart +Alois Arnegger,0.5991994,fineart +Ray Caesar,0.59902894,digipa-high-impact +Karen Wallis,0.5990094,fineart +Emmanuel Shiu,0.51082766,digipa-med-impact +Thomas Struth,0.5988324,digipa-high-impact +Barbara Longhi,0.5985706,fineart +Richard Deacon,0.59851056,fineart +Constantin Hansen,0.5984213,fineart +Harold Shapinsky,0.5984175,fineart +George Dionysus Ehret,0.5983857,fineart +Doug Wildey,0.5983639,digipa-high-impact +Fernand Toussaint,0.5982694,fineart +Horatio Nelson Poole,0.5982614,fineart +Caesar van Everdingen,0.5981566,fineart +Eva Gonzalès,0.5981396,fineart +Franz Vohwinkel,0.5448179,fineart +Margaret Mee,0.5979592,fineart +Francis Focer Brown,0.59779185,fineart +Henry Moore,0.59767926,nudity +Scott Listfield,0.58795893,fineart +Nikolai Ge,0.5973643,fineart +Jacek Yerka,0.58198756,fineart +Margaret Brundage,0.5969077,fineart +JC Leyendecker,0.5620243,fineart +Ben Templesmith,0.5498991,digipa-high-impact +Armin Hansen,0.59669334,anime +Jean-Louis Prevost,0.5966897,fineart +Daphne Allen,0.59666026,fineart +Franz Karl Basler-Kopp,0.59663445,fineart +"Henry Ives Cobb, Jr.",0.596385,fineart +Michael Sowa,0.546285,fineart +Anna Füssli,0.59600973,fineart +György Rózsahegyi,0.59580946,fineart +Luis Royo,0.59566617,fineart +Émile Gallé,0.5955559,fineart +Antonio Mora,0.5334297,digipa-high-impact +Edward P. Beard Jr.,0.59543866,fineart +Jessica Rossier,0.54958373,special +André Thomkins,0.5343785,digipa-high-impact +David Macbeth Sutherland,0.5949968,fineart +Charles Liu,0.5949787,digipa-high-impact +Edi Rama,0.5949226,digipa-high-impact +Jacques Le Moyne,0.5948843,fineart +Egbert van der Poel,0.59488285,fineart +Georg Jensen,0.594782,digipa-high-impact +Anne Sudworth,0.5947539,fineart +Jan Pietersz Saenredam,0.59472525,fineart +Henryk Stażewski,0.5945748,fineart +André François,0.58402044,fineart +Alexander Runciman,0.5944449,digipa-high-impact +Thomas Kinkade,0.594391,fineart +Robert Williams,0.5567989,digipa-high-impact +George Gardner Symons,0.57431924,fineart +D. Alexander Gregory,0.5334464,fineart +Gerald Brom,0.52473724,fineart +Robert Hagan,0.59406,fineart +Ernest Crichlow,0.5940588,fineart +Viviane Sassen,0.5939927,digipa-high-impact +Enrique Simonet,0.5937546,fineart +Esther Blaikie MacKinnon,0.593747,digipa-high-impact +Jeff Kinney,0.59372896,scribbles +Igor Morski,0.5936732,digipa-high-impact +John Currin,0.5936216,fineart +Bob Ringwood,0.5935273,digipa-high-impact +Jordan Grimmer,0.44948143,digipa-low-impact +François Barraud,0.5933471,fineart +Helen Binyon,0.59331006,digipa-high-impact +Brenda Chamberlain,0.5932333,fineart +Candido Bido,0.59310603,fineart +Abraham Storck,0.5929502,fineart +Raphael,0.59278333,fineart +Larry Sultan,0.59273386,digipa-high-impact +Agostino Tassi,0.59265685,fineart +Alexander V. 
Kuprin,0.5925917,fineart +Frans Koppelaar,0.5658725,fineart +Richard Corben,0.59251785,fineart +David Gilmour Blythe,0.5924247,digipa-high-impact +František Kaván,0.5924211,fineart +Rob Liefeld,0.5921167,fineart +Ernő Rubik,0.5920297,fineart +Byeon Sang-byeok,0.59200096,fareast +Johfra Bosschart,0.5919376,fineart +Emil Lindenfeld,0.5761086,fineart +Howard Mehring,0.5917471,fineart +Gwenda Morgan,0.5915571,digipa-high-impact +Henry Asencio,0.5915404,fineart +"George Barret, Sr.",0.5914306,fineart +Andrew Ferez,0.5911011,fineart +Ed Brubaker,0.5910869,digipa-high-impact +George Reid,0.59095883,digipa-high-impact +Derek Gores,0.51769906,digipa-med-impact +Charles Rollier,0.5539186,fineart +Terry Oakes,0.590443,fineart +Thomas Blackshear,0.5078616,fineart +Albert Benois,0.5902705,nudity +Krenz Cushart,0.59026587,special +Jeff Koons,0.5902637,digipa-high-impact +Akihiko Yoshida,0.5901294,special +Anja Percival,0.45039332,digipa-low-impact +Eduard von Steinle,0.59008586,fineart +Alex Russell Flint,0.5900352,digipa-high-impact +Edward Okuń,0.5897297,fineart +Emma Lampert Cooper,0.5894849,fineart +Stuart Haygarth,0.58132994,digipa-high-impact +George French Angas,0.5434376,fineart +Edmund F. Ward,0.5892848,fineart +Eleanor Vere Boyle,0.58925456,digipa-high-impact +Evelyn Cheston,0.58924586,fineart +Edwin Dickinson,0.58921975,digipa-high-impact +Christophe Vacher,0.47325426,fineart +Anne Dewailly,0.58905107,fineart +Gertrude Greene,0.5862596,digipa-high-impact +Boris Groh,0.5888809,digipa-high-impact +Douglas Smith,0.588804,digipa-high-impact +Ian Hamilton Finlay,0.5887713,fineart +Derek Jarman,0.5887292,digipa-high-impact +Archibald Thorburn,0.5882001,fineart +Gillis d'Hondecoeter,0.58813053,fineart +I Ketut Soki,0.58801544,digipa-high-impact +Alex Schomburg,0.46614102,digipa-low-impact +Bastien L. Deharme,0.583349,special +František Jakub Prokyš,0.58782333,fineart +Jesper Ejsing,0.58782053,fineart +Odd Nerdrum,0.53551745,digipa-high-impact +Tom Lovell,0.5877577,fineart +Ayami Kojima,0.5877416,fineart +Peter Sculthorpe,0.5875696,fineart +Bernard D’Andrea,0.5874042,fineart +Denis Eden,0.58739066,digipa-high-impact +Alfons Walde,0.58728385,fineart +Jovana Rikalo,0.47006977,digipa-low-impact +Franklin Booth,0.5870834,fineart +Mat Collishaw,0.5870676,digipa-high-impact +Joseph Lorusso,0.586858,fineart +Helen Stevenson,0.454647,digipa-low-impact +Delaunay,0.58657396,fineart +H.R. Millar,0.58655745,fineart +E. Charlton Fortune,0.586376,fineart +Alson Skinner Clark,0.58631575,fineart +Stan And Jan Berenstain,0.5862361,digipa-high-impact +Howard Lyon,0.5862271,fineart +John Blanche,0.586182,fineart +Bernardo Cavallino,0.5858575,fineart +Tomasz Alen Kopera,0.5216588,fineart +Peter Gric,0.58583695,fineart +Guo Pei,0.5857794,fareast +James Turrell,0.5853901,digipa-high-impact +Alexandr Averin,0.58533764,fineart +Bertalan Székely,0.5548113,digipa-high-impact +Brothers Hildebrandt,0.5850233,fineart +Ed Roth,0.5849769,digipa-high-impact +Enki Bilal,0.58492255,fineart +Alan Lee,0.5848701,fineart +Charles H. Woodbury,0.5848688,fineart +André Charles Biéler,0.5847876,fineart +Annie Rose Laing,0.5597829,fineart +Matt Fraction,0.58463776,cartoon +Charles Alston,0.58453286,fineart +Frank Xavier Leyendecker,0.545465,fineart +Alfred Richard Gurrey,0.584306,fineart +Dan Mumford,0.5843051,cartoon +Francisco Martín,0.5842005,fineart +Alvaro Siza,0.58406967,digipa-high-impact +Frank J. Girardin,0.5839858,fineart +Henry Carr,0.58397424,digipa-high-impact +Charles Furneaux,0.58394694,fineart +Daniel F. 
Gerhartz,0.58389103,fineart +Gilberto Soren Zaragoza,0.5448442,fineart +Bart Sears,0.5838427,cartoon +Allison Bechdel,0.58383805,digipa-high-impact +Frank O'Meara,0.5837992,fineart +Charles Codman,0.5836579,fineart +Francisco Zúñiga,0.58359766,fineart +Vladimir Kush,0.49075457,fineart +Arnold Mesches,0.5834257,fineart +Frank McKelvey,0.5831641,fineart +Allen Butler Talcott,0.5830911,fineart +Eric Zener,0.58300316,fineart +Noah Bradley,0.44176096,digipa-low-impact +Robert Childress,0.58289623,fineart +Frances C. Fairman,0.5827239,fineart +Kathryn Morris Trotter,0.465856,digipa-low-impact +Everett Raymond Kinstler,0.5824819,fineart +Edward Mitchell Bannister,0.5804899,fineart +"George Barret, Jr.",0.5823128,fineart +Greg Hildebrandt,0.4271311,fineart +Anka Zhuravleva,0.5822078,digipa-high-impact +Rolf Armstrong,0.58217514,fineart +Eric Wallis,0.58191466,fineart +Clemens Ascher,0.5480207,digipa-high-impact +Hugo Kārlis Grotuss,0.5818766,fineart +Albert Paris Gütersloh,0.5817827,fineart +Hilda May Gordon,0.5817449,fineart +Hendrik Martenszoon Sorgh,0.5817126,fineart +Pipilotti Rist,0.5816868,digipa-high-impact +Hiroyuki Tajima,0.5816242,fareast +Igor Zenin,0.58159757,digipa-high-impact +Genevieve Springston Lynch,0.4979099,digipa-med-impact +Dan Witz,0.44476372,fineart +David Roberts,0.5255326,fineart +Frieke Janssens,0.5706969,digipa-high-impact +Arnold Schoenberg,0.56520367,fineart +Inoue Naohisa,0.5809933,fareast +Elfriede Lohse-Wächtler,0.58097905,fineart +Alex Ross,0.42460668,digipa-low-impact +Robert Irwin,0.58078,c +Charles Angrand,0.58077514,fineart +Anne Nasmyth,0.54221964,fineart +Henri Bellechose,0.5773891,fineart +De Hirsh Margules,0.58059025,fineart +Hiromitsu Takahashi,0.5805599,fareast +Ilya Kuvshinov,0.5805521,special +Cassius Marcellus Coolidge,0.5805516,c +Dorothy Burroughes,0.5804835,fineart +Emanuel de Witte,0.58027405,fineart +George Herbert Baker,0.5799624,digipa-high-impact +Cheng Zhengkui,0.57990086,fareast +Bernard Fleetwood-Walker,0.57987773,digipa-high-impact +Philippe Parreno,0.57985014,digipa-high-impact +Thornton Oakley,0.57969713,fineart +Greg Rutkowski,0.5203395,special +Ike no Taiga,0.5795857,anime +Eduardo Lefebvre Scovell,0.5795808,fineart +Adolfo Müller-Ury,0.57944727,fineart +Patrick Woodroffe,0.5228063,fineart +Wim Crouwel,0.57933235,digipa-high-impact +Colijn de Coter,0.5792779,fineart +François Boquet,0.57924724,fineart +Gerbrand van den Eeckhout,0.57897866,fineart +Eugenio Granell,0.5392264,fineart +Kuang Hong,0.5782304,digipa-high-impact +Justin Gerard,0.46685404,fineart +Tokujin Yoshioka,0.5779153,digipa-high-impact +Alan Bean,0.57788515,fineart +Ernest Biéler,0.5778079,fineart +Martin Deschambault,0.44401115,digipa-low-impact +Anna Boch,0.577735,fineart +Jack Davis,0.5775291,fineart +Félix Labisse,0.5775142,fineart +Greg Simkins,0.5679761,fineart +David Lynch,0.57751054,digipa-low-impact +Eizō Katō,0.5774127,digipa-high-impact +Grethe Jürgens,0.5773412,digipa-high-impact +Heinrich Bichler,0.5770147,fineart +Barbara Nasmyth,0.5446056,fineart +Domenico Induno,0.5583946,fineart +Gustave Baumann,0.5607866,fineart +Mike Mayhew,0.5765857,cartoon +Delmer J. 
Yoakum,0.576538,fineart +Aykut Aydogdu,0.43111503,digipa-low-impact +George Barker,0.5763551,fineart +Ernő Grünbaum,0.57634187,fineart +Eliseu Visconti,0.5763241,fineart +Esao Andrews,0.5761547,fineart +JennyBird Alcantara,0.49165845,digipa-med-impact +Joan Tuset,0.5761051,fineart +Angela Barrett,0.55976534,digipa-high-impact +Syd Mead,0.5758396,fineart +Ignacio Bazan-Lazcano,0.5757512,fineart +Franciszek Kostrzewski,0.57570386,fineart +Eero Järnefelt,0.57540673,fineart +Loretta Lux,0.56217635,digipa-high-impact +Gaudi,0.57519895,fineart +Charles Gleyre,0.57490873,fineart +Antoine Verney-Carron,0.56386137,fineart +Albert Edelfelt,0.57466495,fineart +Fabian Perez,0.57444525,fineart +Kevin Sloan,0.5737548,fineart +Stanislav Poltavsky,0.57434607,fineart +Abraham Hondius,0.574326,fineart +Tadao Ando,0.57429105,fareast +Fyodor Slavyansky,0.49796474,digipa-med-impact +David Brewster,0.57385933,digipa-high-impact +Cliff Chiang,0.57375133,digipa-high-impact +Drew Struzan,0.5317983,digipa-high-impact +Henry O. Tanner,0.5736586,fineart +Alberto Sughi,0.5736495,fineart +Albert J. Welti,0.5736257,fineart +Charles Mahoney,0.5735923,digipa-high-impact +Exekias,0.5734506,fineart +Felipe Seade,0.57342744,digipa-high-impact +Henriette Wyeth,0.57330644,digipa-high-impact +Harold Sandys Williamson,0.5443646,fineart +Eddie Campbell,0.57329535,digipa-high-impact +Gao Fenghan,0.5732926,fareast +Cynthia Sheppard,0.51099646,fineart +Henriette Grindat,0.573179,fineart +Yasutomo Oka,0.5731342,fareast +Celia Frances Bedford,0.57313216,fineart +Les Edwards,0.42068473,fineart +Edwin Deakin,0.5031717,fineart +Eero Saarinen,0.5725142,digipa-high-impact +Franciszek Smuglewicz,0.5722554,fineart +Doris Blair,0.57221186,fineart +Seb Mckinnon,0.51721895,digipa-med-impact +Gregorio Lazzarini,0.57204294,fineart +Gerard Sekoto,0.5719927,fineart +Francis Ernest Jackson,0.5506009,fineart +Simon Birch,0.57171595,digipa-high-impact +Bayard Wu,0.57171166,fineart +François Clouet,0.57162094,fineart +Christopher Wren,0.5715372,fineart +Evgeny Lushpin,0.5714827,special +Art Green,0.5714495,digipa-high-impact +Amy Judd,0.57142305,digipa-high-impact +Art Brenner,0.42619684,digipa-low-impact +Travis Louie,0.43916368,digipa-low-impact +James Jean,0.5457318,digipa-high-impact +Ewald Rübsamen,0.57083976,fineart +Donato Giancola,0.57052535,fineart +Carl Arnold Gonzenbach,0.5703996,fineart +Bastien Lecouffe-Deharme,0.5201288,fineart +Howard Chandler Christy,0.5702813,nudity +Dean Cornwell,0.56977296,fineart +Don Maitz,0.4743015,fineart +James Montgomery Flagg,0.56974065,fineart +Andreas Levers,0.42125136,digipa-low-impact +Edgar Schofield Baum,0.56965977,fineart +Alan Parry,0.5694952,digipa-high-impact +An Zhengwen,0.56942475,fareast +Alayna Lemmer,0.48293802,fineart +Edward Marshall Boehm,0.5530143,fineart +Henri Biva,0.54013556,nudity +Fiona Rae,0.4646715,digipa-low-impact +Elizabeth Jane Lloyd,0.5688463,digipa-high-impact +Franklin Carmichael,0.5687844,digipa-high-impact +Dionisius,0.56875896,fineart +Edwin Georgi,0.56868523,fineart +Jenny Saville,0.5686633,fineart +Ernest Hébert,0.56859314,fineart +Stephan Martiniere,0.56856346,digipa-high-impact +Huang Binhong,0.56841767,fineart +August Lemmer,0.5683548,fineart +Camille Bouvagne,0.5678048,fineart +Olga Skomorokhova,0.39401102,digipa-low-impact +Sacha Goldberger,0.5675477,digipa-high-impact +Hilda Annetta Walker,0.5675261,digipa-high-impact +Harvey Pratt,0.51314723,digipa-med-impact +Jean Bourdichon,0.5670543,fineart +Noriyoshi Ohrai,0.56690073,fineart +Kadir Nelson,0.5669006,n +Ilya 
Ostroukhov,0.5668801,fineart +Eugène Brands,0.56681967,fineart +Achille Leonardi,0.56674325,fineart +Franz Cižek,0.56670356,fineart +George Paul Chalmers,0.5665988,digipa-high-impact +Serge Marshennikov,0.5665971,digipa-high-impact +Mike Worrall,0.56641084,fineart +Dirck van Delen,0.5661764,fineart +Peter Andrew Jones,0.5661655,fineart +Rafael Albuquerque,0.56541103,fineart +Daniel Buren,0.5654043,fineart +Giuseppe Grisoni,0.5432699,fineart +George Fiddes Watt,0.55861616,fineart +Stan Lee,0.5651268,digipa-high-impact +Dorning Rasbotham,0.56511617,fineart +Albert Lynch,0.56497896,fineart +Lorenz Hideyoshi,0.56494075,fineart +Fenghua Zhong,0.56492203,fareast +Caroline Lucy Scott,0.49190843,digipa-med-impact +Victoria Crowe,0.5647996,digipa-high-impact +Hasegawa Settan,0.5647092,fareast +Dennis H. Farber,0.56453323,digipa-high-impact +Dick Bickenbach,0.5644289,fineart +Art Frahm,0.56439924,fineart +Edith Edmonds,0.5643151,fineart +Alfred Heber Hutty,0.56419206,fineart +Henry Tonks,0.56410825,fineart +Peter Howson,0.5640759,fineart +Albert Dorne,0.56395364,fineart +Arthur Adams,0.5639404,fineart +Bernt Tunold,0.56383425,digipa-high-impact +Gianluca Foli,0.5637317,digipa-high-impact +Vittorio Matteo Corcos,0.5636767,fineart +Béla Iványi-Grünwald,0.56355745,nudity +Feng Zhu,0.5634973,fineart +Sam Kieth,0.47251505,digipa-low-impact +Charles Crodel,0.5633834,fineart +Elsie Henderson,0.56310076,digipa-high-impact +George Earl Ortman,0.56295705,fineart +Tari Márk Dávid,0.562937,fineart +Betty Merken,0.56281745,digipa-high-impact +Cecile Walton,0.46672013,digipa-low-impact +Bracha L. Ettinger,0.56237936,fineart +Ken Fairclough,0.56230986,digipa-high-impact +Phil Koch,0.56224954,digipa-high-impact +George Pirie,0.56213045,digipa-high-impact +Chad Knight,0.56194013,digipa-high-impact +Béla Kondor,0.5427164,digipa-high-impact +Barclay Shaw,0.53689134,digipa-high-impact +Tim Hildebrandt,0.47194147,fineart +Hermann Rüdisühli,0.56104004,digipa-high-impact +Ian McQue,0.5342066,digipa-high-impact +Yanjun Cheng,0.5607171,fineart +Heinrich Hofmann,0.56060636,fineart +Henry Raleigh,0.5605958,fineart +Ernest Buckmaster,0.5605704,fineart +Charles Ricketts,0.56055415,fineart +Juergen Teller,0.56051147,digipa-high-impact +Auguste Mambour,0.5604873,fineart +Sean Yoro,0.5601486,digipa-high-impact +Sheilah Beckett,0.55995446,digipa-high-impact +Eugene Tertychnyi,0.5598978,fineart +Dr. Seuss,0.5597466,c +Adolf Wölfli,0.5372333,digipa-high-impact +Enrique Tábara,0.559323,fineart +Dionisio Baixeras Verdaguer,0.5590695,fineart +Aleksander Gierymski,0.5590013,fineart +Augustus Dunbier,0.55872476,fineart +Adolf Born,0.55848217,fineart +Chris Turnham,0.5584234,digipa-high-impact +James C Christensen,0.55837405,fineart +Daphne Fedarb,0.5582459,digipa-high-impact +Andre Kohn,0.5581832,special +Ron Mueck,0.5581811,nudity +Glenn Fabry,0.55786383,fineart +Elizabeth Polunin,0.5578102,digipa-high-impact +Charles S. 
Kaelin,0.5577954,fineart +Arthur Radebaugh,0.5577016,fineart +Ai Yazawa,0.55768114,fareast +Charles Roka,0.55762553,fineart +Ai Weiwei,0.5576034,digipa-high-impact +Dorothy Bradford,0.55760014,digipa-high-impact +Alfred Leslie,0.557555,fineart +Heinrich Herzig,0.5574423,fineart +Eliot Hodgkin,0.55740607,digipa-high-impact +Albert Kotin,0.55737317,fineart +Carlo Carlone,0.55729353,fineart +Chen Rong,0.5571221,fineart +Ikuo Hirayama,0.5570225,digipa-high-impact +Edward Corbett,0.55701995,nudity +Eugeniusz Żak,0.556925,nudity +Ettore Tito,0.556875,fineart +Helene Knoop,0.5567731,fineart +Amanda Sage,0.37731662,fareast +Annick Bouvattier,0.54647046,fineart +Harvey Dunn,0.55663586,fineart +Hans Sandreuter,0.5562575,digipa-high-impact +Ruan Jia,0.5398549,special +Anton Räderscheidt,0.55618906,fineart +Tyler Shields,0.4081434,digipa-low-impact +Darek Zabrocki,0.49975997,digipa-med-impact +Frank Montague Moore,0.5556432,fineart +Greg Staples,0.5555332,fineart +Endre Bálint,0.5553731,fineart +Augustus Vincent Tack,0.5136602,fineart +Marc Simonetti,0.48602036,fineart +Carlo Randanini,0.55493265,digipa-high-impact +Diego Dayer,0.5549119,fineart +Kelly Freas,0.55476534,fineart +Thomas Saliot,0.5139967,digipa-med-impact +Gijsbert d'Hondecoeter,0.55455256,fineart +Walter Kim,0.554521,digipa-high-impact +Francesco Cozza,0.5155097,digipa-med-impact +Bill Watterson,0.5542879,digipa-high-impact +Mark Keathley,0.4824056,fineart +Béni Ferenczy,0.55405354,digipa-high-impact +Amadou Opa Bathily,0.5536976,n +Giuseppe Antonio Petrini,0.55340284,fineart +Enzo Cucchi,0.55331933,digipa-high-impact +Adolf Schrödter,0.55316544,fineart +George Benjamin Luks,0.548566,fineart +Glenys Cour,0.55304,digipa-high-impact +Andrew Robertson,0.5529603,digipa-high-impact +Claude Rogers,0.55272067,digipa-high-impact +Alexandre Antigna,0.5526737,fineart +Aimé Barraud,0.55265915,digipa-high-impact +György Vastagh,0.55258965,fineart +Bruce Nauman,0.55257386,digipa-high-impact +Benjamin Block,0.55251944,digipa-high-impact +Gonzalo Endara Crow,0.552346,digipa-high-impact +Dirck de Bray,0.55221736,fineart +Gerald Kelley,0.5521059,digipa-high-impact +Dave Gibbons,0.5520954,digipa-high-impact +Béla Nagy Abodi,0.5520624,digipa-high-impact +Faith 47,0.5517006,digipa-high-impact +Anna Razumovskaya,0.5229187,digipa-med-impact +Archibald Robertson,0.55129635,digipa-high-impact +Louise Dahl-Wolfe,0.55120385,digipa-high-impact +Simon Bisley,0.55119276,digipa-high-impact +Eric Fischl,0.55107886,fineart +Hu Zaobin,0.5510481,fareast +Béla Pállik,0.5507963,digipa-high-impact +Eugene J. 
Martin,0.55078864,fineart +Friedrich Gauermann,0.55063415,fineart +Fritz Baumann,0.5341434,fineart +Michal Lisowski,0.5505639,fineart +Paolo Roversi,0.5503342,digipa-high-impact +Andrew Atroshenko,0.55009747,fineart +Gyula Derkovits,0.5500315,fineart +Hugh Adam Crawford,0.55000615,digipa-high-impact +Béla Apáti Abkarovics,0.5499799,digipa-high-impact +Paul Chadeisson,0.389151,digipa-low-impact +Aurél Bernáth,0.54968774,fineart +Albert Henry Krehbiel,0.54952574,fineart +Piet Hein Eek,0.54918796,digipa-high-impact +Yoshitaka Amano,0.5491855,fareast +Antonio Rotta,0.54909515,fineart +Józef Mehoffer,0.50760424,fineart +Donald Sherwood,0.5490415,digipa-high-impact +Catrin G Grosse,0.5489286,digipa-high-impact +Arthur Webster Emerson,0.5478842,fineart +Incarcerated Jerkfaces,0.5488423,digipa-high-impact +Emanuel Büchel,0.5487217,fineart +Andrew Loomis,0.54854584,fineart +Charles Hopkinson,0.54853606,fineart +Gabor Szikszai,0.5485203,digipa-high-impact +Archibald Standish Hartrick,0.54850936,digipa-high-impact +Aleksander Orłowski,0.546705,nudity +Hans Hinterreiter,0.5483628,fineart +Fred Williams,0.54544824,fineart +Fred A. Precht,0.5481606,fineart +Camille Souter,0.5213742,fineart +Emil Fuchs,0.54807395,fineart +Francesco Bonsignori,0.5478936,fineart +H. R. (Hans Ruedi) Giger,0.547799,fineart +Harriet Zeitlin,0.5477388,digipa-high-impact +Christian Jane Fergusson,0.5396168,fineart +Edward Kemble,0.5476892,fineart +Bernard Aubertin,0.5475396,fineart +Augustyn Mirys,0.5474162,fineart +Alejandro Burdisio,0.47482288,special +Erin Hanson,0.4343264,digipa-low-impact +Amalia Lindegren,0.5471987,digipa-high-impact +Alberto Seveso,0.47735062,fineart +Bartholomeus Strobel,0.54703736,fineart +Jim Davis,0.54703003,digipa-high-impact +Antony Gormley,0.54696125,digipa-high-impact +Charles Marion Russell,0.54696095,fineart +George B. Sutherland,0.5467901,fineart +Almada Negreiros,0.54670584,fineart +Edward Armitage,0.54358315,fineart +Bruno Walpoth,0.546167,digipa-high-impact +Richard Hamilton,0.5461275,nudity +Charles Harold Davis,0.5460415,digipa-high-impact +Fernand Verhaegen,0.54601514,fineart +Bernard Meninsky,0.5302034,digipa-high-impact +Fede Galizia,0.5456873,digipa-high-impact +Alfred Kelsner,0.5455753,nudity +Fritz Puempin,0.5452847,fineart +Alfred Charles Parker,0.54521024,fineart +Ahmed Yacoubi,0.544767,digipa-high-impact +Arthur B. Carles,0.54447794,fineart +Alice Prin,0.54435575,digipa-high-impact +Carl Gustaf Pilo,0.5443212,digipa-high-impact +Ross Tran,0.5259248,special +Hideyuki Kikuchi,0.544193,fareast +Art Fitzpatrick,0.49847245,fineart +Cherryl Fountain,0.5440454,fineart +Skottie Young,0.5440119,cartoon +NC Wyeth,0.54382974,digipa-high-impact +Rudolf Freund,0.5437342,fineart +Mort Kunstler,0.5433619,digipa-high-impact +Ben Goossens,0.53002644,digipa-high-impact +Andreas Rocha,0.49621177,special +Gérard Ernest Schneider,0.5429964,fineart +Francesco Filippini,0.5429598,digipa-high-impact +Alejandro Jodorowsky,0.5429065,digipa-high-impact +Friedrich Traffelet,0.5428817,fineart +Honor C. Appleton,0.5428735,digipa-high-impact +Jason A. 
Engle,0.542821,fineart +Henry Otto Wix,0.54271996,fineart +Gregory Manchess,0.54270375,fineart +Ann Stookey,0.54269934,digipa-high-impact +Henryk Rodakowski,0.542589,fineart +Albert Welti,0.5425134,digipa-high-impact +Gerard Houckgeest,0.5424413,digipa-high-impact +Dorothy Hood,0.54226196,digipa-high-impact +Frank Schoonover,0.51056194,fineart +Erlund Hudson,0.5422107,digipa-high-impact +Alexander Litovchenko,0.54210097,fineart +Sakai Hōitsu,0.5420294,digipa-high-impact +Benito Quinquela Martín,0.54194224,fineart +David Watson Stevenson,0.54191554,fineart +Ann Thetis Blacker,0.5416629,digipa-high-impact +Frank DuMond,0.51004076,digipa-med-impact +David Dougal Williams,0.5410126,digipa-high-impact +Robert Mcginnis,0.54098356,fineart +Ernest Briggs,0.5408636,fineart +Ferenc Joachim,0.5408625,fineart +Carlos Saenz de Tejada,0.47332364,digipa-low-impact +David Burton-Richardson,0.49659324,digipa-med-impact +Ernest Heber Thompson,0.54039246,digipa-high-impact +Albert Bertelsen,0.54038215,nudity +Giorgio Giulio Clovio,0.5403708,fineart +Eugene Leroy,0.54019785,digipa-high-impact +Anna Findlay,0.54018176,digipa-high-impact +Roy Gjertson,0.54012,digipa-high-impact +Charmion von Wiegand,0.5400893,fineart +Arnold Bronckhorst,0.526247,fineart +Boris Vallejo,0.487253,fineart +Adélaïde Victoire Hall,0.539939,fineart +Earl Norem,0.5398575,fineart +Sanford Kossin,0.53977877,digipa-high-impact +Aert de Gelder,0.519166,digipa-med-impact +Carl Eugen Keel,0.539739,digipa-high-impact +Francis Bourgeois,0.5397272,digipa-high-impact +Bojan Jevtic,0.41141546,fineart +Edward Avedisian,0.5393925,fineart +Gao Xiang,0.5392419,fareast +Charles Hinman,0.53911865,digipa-high-impact +Frits Van den Berghe,0.53896487,fineart +Carlo Martini,0.5384833,digipa-high-impact +Elina Karimova,0.5384318,digipa-high-impact +Anto Carte,0.4708289,digipa-low-impact +Andrey Yefimovich Martynov,0.537721,fineart +Frances Jetter,0.5376904,fineart +Yuri Ivanovich Pimenov,0.5342793,fineart +Gaston Anglade,0.537608,digipa-high-impact +Albert Swinden,0.5375844,fineart +Bob Byerley,0.5375774,fineart +A.B. Frost,0.5375025,fineart +Jaya Suberg,0.5372893,digipa-high-impact +Josh Keyes,0.53654516,digipa-high-impact +Juliana Huxtable,0.5364195,n +Everett Warner,0.53641814,digipa-high-impact +Hugh Kretschmer,0.45171157,digipa-low-impact +Arnold Blanch,0.535774,fineart +Ryan McGinley,0.53572595,digipa-high-impact +Alfons Karpiński,0.53564656,fineart +George Aleef,0.5355317,digipa-high-impact +Hal Foster,0.5351446,fineart +Stuart Immonen,0.53501946,digipa-high-impact +Craig Thompson,0.5346844,digipa-high-impact +Bartolomeo Vivarini,0.53465015,fineart +Hermann Feierabend,0.5346168,digipa-high-impact +Antonio Donghi,0.4610982,digipa-low-impact +Adonna Khare,0.4858036,digipa-med-impact +James Stokoe,0.5015107,digipa-med-impact +Agustín Fernández,0.53403986,fineart +Germán Londoño,0.5338712,fineart +Emmanuelle Moureaux,0.5335641,digipa-high-impact +Conrad Marca-Relli,0.5148334,digipa-med-impact +Gyula Batthyány,0.5332407,fineart +Francesco Raibolini,0.53314835,fineart +Apelles,0.5166026,fineart +Marat Latypov,0.45811993,fineart +Andrei Markin,0.5328752,fineart +Einar Hakonarson,0.5328311,digipa-high-impact +Beatrice Huntington,0.5328165,digipa-high-impact +Coppo di Marcovaldo,0.5327443,fineart +Gregorio Prestopino,0.53250784,fineart +A.D.M. 
Cooper,0.53244877,digipa-high-impact +Horatio McCulloch,0.53244334,digipa-high-impact +Wes Anderson,0.5318741,digipa-high-impact +Moebius,0.53178746,digipa-high-impact +Gerard Soest,0.53160626,fineart +Charles Ellison,0.53152347,digipa-high-impact +Wojciech Ostrycharz,0.5314213,fineart +Doug Chiang,0.5313724,fineart +Anne Savage,0.5310638,digipa-high-impact +Cor Melchers,0.53099334,fineart +Gordon Browne,0.5308195,digipa-high-impact +Augustus Earle,0.49196815,fineart +Carlos Francisco Chang Marín,0.5304734,fineart +Larry Elmore,0.53032553,fineart +Adolf Hölzel,0.5303149,fineart +David Ligare,0.5301894,fineart +Jan Luyken,0.52985555,fineart +Earle Bergey,0.5298525,fineart +David Ramsay Hay,0.52974963,digipa-high-impact +Alfred East,0.5296565,digipa-high-impact +A. R. Middleton Todd,0.50988734,fineart +Giorgio De Vincenzi,0.5291678,fineart +Hugh William Williams,0.5291014,digipa-high-impact +Erwin Bowien,0.52895796,digipa-high-impact +Victor Adame Minguez,0.5288686,fineart +Yoji Shinkawa,0.5287015,anime +Clara Weaver Parrish,0.5284487,digipa-high-impact +Albert Eckhout,0.5284096,fineart +Dorothy Coke,0.5282345,digipa-high-impact +Jerzy Duda-Gracz,0.5279943,digipa-high-impact +Byron Galvez,0.39178842,fareast +Alson S. Clark,0.5278568,digipa-high-impact +Adolf Ulric Wertmüller,0.5278296,digipa-high-impact +Bruce Coville,0.5277226,digipa-high-impact +Gong Kai,0.5276811,digipa-high-impact +Andréi Arinouchkine,0.52763486,digipa-high-impact +Florence Engelbach,0.5273161,digipa-high-impact +Brian Froud,0.5270276,fineart +Charles Thomson,0.5270127,digipa-high-impact +Bessie Wheeler,0.5269164,digipa-high-impact +Anton Lehmden,0.5268611,fineart +Emilia Wilk,0.5264961,fineart +Carl Eytel,0.52646196,digipa-high-impact +Alfred Janes,0.5264481,digipa-high-impact +Julie Bell,0.49962538,fineart +Eugenio de Arriba,0.52613926,digipa-high-impact +Samuel and Joseph Newsom,0.52595663,digipa-high-impact +Hans Falk,0.52588874,digipa-high-impact +Guillermo del Toro,0.52565175,digipa-high-impact +Félix Arauz,0.52555984,digipa-high-impact +Gyula Basch,0.52524436,digipa-high-impact +Haroon Mirza,0.5252279,digipa-high-impact +Du Jin,0.5249934,digipa-med-impact +Harry Shoulberg,0.5249456,digipa-med-impact +Arie Smit,0.5249027,fineart +Ahmed Karahisari,0.4259451,digipa-low-impact +Brian and Wendy Froud,0.5246335,fineart +E. William Gollings,0.52461207,digipa-med-impact +Bo Bartlett,0.51341593,digipa-med-impact +Hans Burgkmair,0.52416867,digipa-med-impact +David Macaulay,0.5241233,digipa-med-impact +Benedetto Caliari,0.52370214,digipa-med-impact +Eliott Lilly,0.5235398,digipa-med-impact +Vincent Tanguay,0.48578292,digipa-med-impact +Ada Hill Walker,0.52207166,fineart +Christopher Wood,0.49360397,digipa-med-impact +Kris Kuksi,0.43938053,digipa-low-impact +Chen Yifei,0.5217867,fineart +Margaux Valonia,0.5217782,digipa-med-impact +Antoni Pitxot,0.40582713,digipa-low-impact +Jhonen Vasquez,0.5216471,digipa-med-impact +Emilio Grau Sala,0.52156484,fineart +Henry B. 
Christian,0.52153796,fineart +Jacques Nathan-Garamond,0.52144086,digipa-med-impact +Eddie Mendoza,0.4949638,digipa-med-impact +Grzegorz Rutkowski,0.48906532,special +Beeple,0.40085253,digipa-low-impact +Giorgio Cavallon,0.5209209,digipa-med-impact +Godfrey Blow,0.52062386,digipa-med-impact +Gabriel Dawe,0.5204431,fineart +Emile Lahner,0.5202367,digipa-med-impact +Steve Dillon,0.5201676,digipa-med-impact +Lee Quinones,0.4626683,digipa-low-impact +Hale Woodruff,0.52000225,digipa-med-impact +Tom Hammick,0.5032626,digipa-med-impact +Hamilton Sloan,0.5197798,digipa-med-impact +Caesar Andrade Faini,0.51971483,digipa-med-impact +Sam Spratt,0.48991,digipa-med-impact +Chris Cold,0.4753577,fineart +Alejandro Obregón,0.5190562,digipa-med-impact +Dan Flavin,0.51901346,digipa-med-impact +Arthur Sarnoff,0.5189428,fineart +Elenore Abbott,0.5187141,digipa-med-impact +Andrea Kowch,0.51822996,digipa-med-impact +Demetrios Farmakopoulos,0.5181248,digipa-med-impact +Alexis Grimou,0.41958088,digipa-low-impact +Lesley Vance,0.5177536,digipa-med-impact +Gyula Aggházy,0.517747,fineart +Georgina Hunt,0.46105456,digipa-low-impact +Christian W. Staudinger,0.4684662,digipa-low-impact +Abraham Begeyn,0.5172538,digipa-med-impact +Charles Mozley,0.5171356,digipa-med-impact +Elias Ravanetti,0.38719344,digipa-low-impact +Herman van Swanevelt,0.5168748,digipa-med-impact +David Paton,0.4842217,digipa-med-impact +Hans Werner Schmidt,0.51671976,digipa-med-impact +Bob Ross,0.51628315,fineart +Sou Fujimoto,0.5162528,fareast +Balcomb Greene,0.5162045,digipa-med-impact +Glen Angus,0.51609933,digipa-med-impact +Buckminster Fuller,0.51607454,digipa-med-impact +Andrei Ryabushkin,0.5158933,fineart +Almeida Júnior,0.515856,digipa-med-impact +Tim White,0.4182697,digipa-low-impact +Hans Beat Wieland,0.51553553,digipa-med-impact +Jakub Różalski,0.5154904,digipa-med-impact +John Whitcomb,0.51523805,digipa-med-impact +Dorothy King,0.5150925,digipa-med-impact +Richard S. Johnson,0.51500344,fineart +Aniello Falcone,0.51475304,digipa-med-impact +Henning Jakob Henrik Lund,0.5147134,c +Robert M Cunningham,0.5144858,digipa-med-impact +Nick Knight,0.51447505,digipa-med-impact +David Chipperfield,0.51424,digipa-med-impact +Bartolomeo Cesi,0.5136737,digipa-med-impact +Bettina Heinen-Ayech,0.51334465,digipa-med-impact +Annabel Kidston,0.51327646,digipa-med-impact +Charles Schridde,0.51308405,digipa-med-impact +Samuel Earp,0.51305825,digipa-med-impact +Eugene Montgomery,0.5128343,digipa-med-impact +Alfred Parsons,0.5127445,digipa-med-impact +Anton Möller,0.5127209,digipa-med-impact +Craig Davison,0.499598,special +Cricorps Grégoire,0.51267076,fineart +Celia Fiennes,0.51266706,digipa-med-impact +Raymond Swanland,0.41350424,fineart +Howard Knotts,0.5122062,digipa-med-impact +Helmut Federle,0.51201206,digipa-med-impact +Tyler Edlin,0.44028252,digipa-high-impact +Elwood H. 
Smith,0.5119027,digipa-med-impact +Ralph Horsley,0.51142794,fineart +Alexander Ivanov,0.4539051,digipa-low-impact +Cedric Peyravernay,0.4200587,digipa-low-impact +Annabel Eyres,0.51136214,digipa-med-impact +Zack Snyder,0.51129746,digipa-med-impact +Gentile Bellini,0.511102,digipa-med-impact +Giovanni Pelliccioli,0.4868688,digipa-med-impact +Fikret Muallâ Saygı,0.510694,digipa-med-impact +Bauhaus,0.43454266,digipa-low-impact +Charles Williams,0.510406,digipa-med-impact +Georg Arnold-Graboné,0.5103381,digipa-med-impact +Fedot Sychkov,0.47935224,digipa-med-impact +Alberto Magnelli,0.5103212,digipa-med-impact +Aloysius O'Kelly,0.5102891,digipa-med-impact +Alexander McQueen,0.5101986,digipa-med-impact +Cam Sykes,0.510071,digipa-med-impact +George Lucas,0.510038,digipa-med-impact +Eglon van der Neer,0.5099339,digipa-med-impact +Christian August Lorentzen,0.50989646,digipa-med-impact +Eleanor Best,0.50966686,digipa-med-impact +Terry Redlin,0.474244,fineart +Ken Kelly,0.4304738,fineart +David Eugene Henry,0.48173362,fineart +Shin Jeongho,0.5092497,fareast +Flora Borsi,0.5091922,digipa-med-impact +Berndnaut Smilde,0.50864,digipa-med-impact +Art of Brom,0.45828784,fineart +Ernő Tibor,0.50851977,digipa-med-impact +Ancell Stronach,0.5084514,digipa-med-impact +Helen Thomas Dranga,0.45412368,digipa-low-impact +Anita Malfatti,0.5080986,digipa-med-impact +Arnold Brügger,0.5080749,digipa-med-impact +Edward Ben Avram,0.50778764,digipa-med-impact +Antonio Ciseri,0.5073538,fineart +Alyssa Monks,0.50734174,digipa-med-impact +Chen Zhen,0.5071876,digipa-med-impact +Francis Helps,0.50707847,digipa-med-impact +Georg Karl Pfahler,0.50700235,digipa-med-impact +Henry Woods,0.506811,digipa-med-impact +Barbara Greg,0.50674164,digipa-med-impact +Guan Daosheng,0.506712,fareast +Guy Billout,0.5064906,digipa-med-impact +Basuki Abdullah,0.50613165,digipa-med-impact +Thomas Visscher,0.5059943,digipa-med-impact +Edward Simmons,0.50598735,digipa-med-impact +Arabella Rankin,0.50572735,digipa-med-impact +Lady Pink,0.5056634,digipa-high-impact +Christopher Williams,0.5052288,digipa-med-impact +Fuyuko Matsui,0.5051116,fareast +Edward Baird,0.5049874,digipa-med-impact +Georges Stein,0.5049069,digipa-med-impact +Alex Alemany,0.43974748,digipa-low-impact +Emanuel Schongut,0.5047326,digipa-med-impact +Hans Bol,0.5045265,digipa-med-impact +Kurzgesagt,0.5043725,digipa-med-impact +Harald Giersing,0.50410193,digipa-med-impact +Antonín Slavíček,0.5040368,fineart +Carl Rahl,0.5040115,digipa-med-impact +Etienne Delessert,0.5037818,fineart +Americo Makk,0.5034161,digipa-med-impact +Fernand Pelez,0.5027561,digipa-med-impact +Alexey Merinov,0.4469615,digipa-low-impact +Caspar Netscher,0.5019529,digipa-med-impact +Walt Disney,0.50178146,digipa-med-impact +Qian Xuan,0.50150526,fareast +Geoffrey Dyer,0.50120556,digipa-med-impact +Andre Norton,0.5007602,digipa-med-impact +Daphne McClure,0.5007391,digipa-med-impact +Dieric Bouts,0.5005882,fineart +Aguri Uchida,0.5005107,fareast +Hugo Scheiber,0.50004864,digipa-med-impact +Kenne Gregoire,0.46421963,digipa-low-impact +Wolfgang Tillmans,0.4999767,fineart +Carl-Henning Pedersen,0.4998986,digipa-med-impact +Alison Debenham,0.4998683,digipa-med-impact +Eppo Doeve,0.49975222,digipa-med-impact +Christen Købke,0.49961317,digipa-med-impact +Aron Demetz,0.49895018,digipa-med-impact +Alesso Baldovinetti,0.49849576,digipa-med-impact +Jimmy Lawlor,0.4475271,fineart +Carl Walter Liner,0.49826378,fineart +Gwenny Griffiths,0.45598924,digipa-low-impact +David Cooke Gibson,0.4976222,digipa-med-impact +Howard 
Butterworth,0.4974621,digipa-med-impact +Bob Thompson,0.49743804,fineart +Enguerrand Quarton,0.49711192,fineart +Abdel Hadi Al Gazzar,0.49631482,digipa-med-impact +Gu Zhengyi,0.49629828,digipa-med-impact +Aleksander Kotsis,0.4953621,digipa-med-impact +Alexander Sharpe Ross,0.49519226,digipa-med-impact +Carlos Enríquez Gómez,0.49494863,digipa-med-impact +Abed Abdi,0.4948855,digipa-med-impact +Elaine Duillo,0.49474388,digipa-med-impact +Anne Said,0.49473995,digipa-med-impact +Istvan Banyai,0.4947369,digipa-med-impact +Bouchta El Hayani,0.49455142,digipa-med-impact +Chinwe Chukwuogo-Roy,0.49445248,n +George Claessen,0.49412063,digipa-med-impact +Axel Törneman,0.49401706,digipa-med-impact +Avigdor Arikha,0.49384058,digipa-med-impact +Gloria Stoll Karn,0.4937976,digipa-med-impact +Alfredo Volpi,0.49367586,digipa-med-impact +Raffaello Sanizo,0.49365884,digipa-med-impact +Jeff Easley,0.49344411,digipa-med-impact +Aileen Eagleton,0.49318358,digipa-med-impact +Gaetano Sabatini,0.49307147,digipa-med-impact +Bertalan Pór,0.4930132,digipa-med-impact +Alfred Jensen,0.49291304,digipa-med-impact +Huang Guangjian,0.49286693,fareast +Emil Ferris,0.49282396,digipa-med-impact +Derek Chittock,0.492694,digipa-med-impact +Alonso Vázquez,0.49205148,digipa-med-impact +Kelly Sue Deconnick,0.4919476,digipa-med-impact +Clive Madgwick,0.4749857,fineart +Edward George Handel Lucas,0.49166748,digipa-med-impact +Dorothea Braby,0.49161923,digipa-med-impact +Sangyeob Park,0.49150884,fareast +Heinz Edelman,0.49140438,digipa-med-impact +Mark Seliger,0.4912073,digipa-med-impact +Camilo Egas,0.4586727,digipa-low-impact +Craig Mullins,0.49085408,fineart +Dong Kingman,0.49063343,digipa-med-impact +Douglas Robertson Bisset,0.49031347,digipa-med-impact +Blek Le Rat,0.49008566,digipa-med-impact +Anton Ažbe,0.48984748,fineart +Olafur Eliasson,0.48971075,digipa-med-impact +Elinor Proby Adams,0.48967826,digipa-med-impact +Cándido López,0.48915705,digipa-med-impact +D. Howard Hitchcock,0.48902267,digipa-med-impact +Cheng Jiasui,0.48889247,fareast +Jean Nouvel,0.4888183,digipa-med-impact +Bill Gekas,0.48848945,digipa-med-impact +Hermione Hammond,0.48845994,digipa-med-impact +Fernando Gerassi,0.48841453,digipa-med-impact +Frank Barrington Craig,0.4883762,digipa-med-impact +A. B. Jackson,0.4883623,digipa-med-impact +Bernie D’Andrea,0.48813275,digipa-med-impact +Clarice Beckett,0.487809,digipa-med-impact +Dosso Dossi,0.48775777,digipa-med-impact +Donald Roller Wilson,0.48767656,digipa-med-impact +Ernest William Christmas,0.4876317,digipa-med-impact +Aleksandr Gerasimov,0.48736423,digipa-med-impact +Edward Clark,0.48703307,digipa-med-impact +Georg Schrimpf,0.48697302,digipa-med-impact +John Wilhelm,0.48696536,digipa-med-impact +Aries Moross,0.4863676,digipa-med-impact +Bill Lewis,0.48635158,digipa-med-impact +Huang Ji,0.48611963,fareast +F. Scott Hess,0.43634564,fineart +Gao Qipei,0.4860631,fareast +Albert Tucker,0.4854299,digipa-med-impact +Barbara Balmer,0.48528513,fineart +Anne Ryan,0.48511976,digipa-med-impact +Helen Edwards,0.48484707,digipa-med-impact +Alexander Bogen,0.48421195,digipa-med-impact +David Annand,0.48418126,digipa-med-impact +Du Qiong,0.48414314,fareast +Fred Cress,0.4837878,digipa-med-impact +David B. 
Mattingly,0.48370445,digipa-med-impact +Hristofor Žefarović,0.4837008,digipa-med-impact +Wim Wenders,0.44484183,digipa-low-impact +Alexander Fedosav,0.48360944,digipa-med-impact +Anne Rigney,0.48357943,digipa-med-impact +Bertalan Karlovszky,0.48338628,digipa-med-impact +George Frederick Harris,0.4833259,fineart +Toshiharu Mizutani,0.48315164,fareast +David McClellan,0.39739317,digipa-low-impact +Eugeen Van Mieghem,0.48270774,digipa-med-impact +Alexei Harlamoff,0.48255378,digipa-med-impact +Jeff Legg,0.48249072,digipa-med-impact +Elizabeth Murray,0.48227608,digipa-med-impact +Hugo Heyrman,0.48213717,digipa-med-impact +Adrian Paul Allinson,0.48211843,digipa-med-impact +Altoon Sultan,0.4820177,digipa-med-impact +Alice Mason,0.48188528,fareast +Harriet Powers,0.48181778,digipa-med-impact +Aaron Bohrod,0.48175076,digipa-med-impact +Chris Saunders,0.41429797,digipa-low-impact +Clara Miller Burd,0.47797233,digipa-med-impact +David G. Sorensen,0.38101727,digipa-low-impact +Iwan Baan,0.4806739,digipa-med-impact +Anatoly Metlan,0.48020265,digipa-med-impact +Alfons von Czibulka,0.4801954,digipa-med-impact +Amedee Ozenfant,0.47950014,digipa-med-impact +Valerie Hegarty,0.47947168,digipa-med-impact +Hugo Anton Fisher,0.4793551,digipa-med-impact +Antonio Roybal,0.4792729,digipa-med-impact +Cui Zizhong,0.47902682,fareast +F Scott Hess,0.42582104,fineart +Julien Delval,0.47888556,digipa-med-impact +Marcin Jakubowski,0.4788583,digipa-med-impact +Anne Stokes,0.4786997,digipa-med-impact +David Palumbo,0.47632077,fineart +Hallsteinn Sigurðsson,0.47858906,digipa-med-impact +Mike Campau,0.47850558,digipa-med-impact +Giuseppe Avanzi,0.47846943,digipa-med-impact +Harry Morley,0.47836518,digipa-med-impact +Constance-Anne Parker,0.47832203,digipa-med-impact +Albert Keller,0.47825447,digipa-med-impact +Daniel Chodowiecki,0.47825167,digipa-med-impact +Alasdair Grant Taylor,0.47802624,digipa-med-impact +Maria Pascual Alberich,0.4779718,fineart +Rebeca Saray,0.41697127,digipa-low-impact +Ernő Bánk,0.47753686,digipa-med-impact +Shaddy Safadi,0.47724134,digipa-med-impact +André Castro,0.4771826,digipa-med-impact +Amiet Cuno,0.41975892,digipa-low-impact +Adi Granov,0.40670198,fineart +Allen Williams,0.47675848,digipa-med-impact +Anna Haifisch,0.47672725,digipa-med-impact +Clovis Trouille,0.47669724,digipa-med-impact +Jane Graverol,0.47655866,digipa-med-impact +Conroy Maddox,0.47645602,digipa-med-impact +Božidar Jakac,0.4763106,digipa-med-impact +George Morrison,0.47533786,digipa-med-impact +Douglas Bourgeois,0.47527707,digipa-med-impact +Cao Zhibai,0.47476804,fareast +Bradley Walker Tomlin,0.47462896,digipa-low-impact +Dave Dorman,0.46852386,fineart +Stevan Dohanos,0.47452107,fineart +John Howe,0.44144905,fineart +Fanny McIan,0.47406268,digipa-low-impact +Bholekar Srihari,0.47387534,digipa-low-impact +Giovanni Lanfranco,0.4737344,digipa-low-impact +Fred Marcellino,0.47346023,digipa-low-impact +Clyde Caldwell,0.47305286,fineart +Haukur Halldórsson,0.47275954,digipa-low-impact +Huang Gongwang,0.47269204,fareast +Brothers Grimm,0.47249007,digipa-low-impact +Ollie Hoff,0.47240657,digipa-low-impact +RHADS,0.4722166,digipa-low-impact +Constance Gordon-Cumming,0.47219282,digipa-low-impact +Anne Mccaffrey,0.4719924,digipa-low-impact +Henry Heerup,0.47190166,digipa-low-impact +Adrian Smith,0.4716923,digipa-high-impact +Harold Elliott,0.4714101,digipa-low-impact +Eric Peterson,0.47106332,digipa-low-impact +David Garner,0.47106326,digipa-low-impact +Edward Hicks,0.4708863,digipa-low-impact +Alfred Krupa,0.47052455,digipa-low-impact 
+Breyten Breytenbach,0.4699338,digipa-low-impact +Douglas Shuler,0.4695691,digipa-low-impact +Elaine Hamilton,0.46941522,digipa-low-impact +Kapwani Kiwanga,0.46917036,digipa-low-impact +Dan Scott,0.46897763,digipa-low-impact +Allan Brooks,0.46882123,digipa-low-impact +Ian Fairweather,0.46878594,digipa-low-impact +Arlington Nelson Lindenmuth,0.4683814,digipa-low-impact +Russell Ayto,0.4681503,digipa-low-impact +Allan Linder,0.46812692,digipa-low-impact +Bohumil Kubista,0.4679809,digipa-low-impact +Christopher Jin Baron,0.4677839,digipa-low-impact +Eero Snellman,0.46777654,digipa-low-impact +Christabel Dennison,0.4677633,digipa-low-impact +Amelia Peláez,0.46764764,digipa-low-impact +James Gurney,0.46740666,digipa-low-impact +Carles Delclaux Is,0.46734855,digipa-low-impact +George Papazov,0.42420334,digipa-low-impact +Mark Brooks,0.4672415,fineart +Anne Dunn,0.46722376,digipa-low-impact +Klaus Wittmann,0.4670704,fineart +Arvid Nyholm,0.46697336,digipa-low-impact +Georg Scholz,0.46674117,digipa-low-impact +David Spriggs,0.46671993,digipa-low-impact +Ernest Morgan,0.4665036,digipa-low-impact +Ella Guru,0.46619284,digipa-low-impact +Helen Berman,0.46614346,digipa-low-impact +Gen Paul,0.4658785,digipa-low-impact +Auseklis Ozols,0.46569023,digipa-low-impact +Amelia Robertson Hill,0.4654411,fineart +Jim Lee,0.46544096,digipa-low-impact +Anson Maddocks,0.46539295,digipa-low-impact +Chen Hong,0.46516004,fareast +Haddon Sundblom,0.46490777,digipa-low-impact +Eva Švankmajerová,0.46454152,digipa-low-impact +Antonio Cavallucci,0.4645282,digipa-low-impact +Herve Groussin,0.40050638,digipa-low-impact +Gwen Barnard,0.46400994,digipa-low-impact +Grace English,0.4638674,digipa-low-impact +Carl Critchlow,0.4636,digipa-low-impact +Ayshia Taşkın,0.463412,digipa-low-impact +Alison Watt,0.43141022,digipa-low-impact +Andre de Krayewski,0.4628024,digipa-low-impact +Hamish MacDonald,0.462645,digipa-low-impact +Ni Chuanjing,0.46254826,fareast +Frank Mason,0.46254665,digipa-low-impact +Steve Henderson,0.43113405,fineart +Eileen Aldridge,0.46210572,digipa-low-impact +Brad Rigney,0.28446302,digipa-low-impact +Ching Yeh,0.46177,fareast +Bertram Brooker,0.46176457,digipa-low-impact +Henry Bright,0.46150023,digipa-low-impact +Claire Dalby,0.46117848,digipa-low-impact +Brian Despain,0.41538632,digipa-low-impact +Anna Maria Barbara Abesch,0.4611045,digipa-low-impact +Bernardo Daddi,0.46088326,digipa-low-impact +Abraham Mintchine,0.46088243,digipa-high-impact +Alexander Carse,0.46078917,digipa-low-impact +Doc Hammer,0.46075988,digipa-low-impact +Yuumei,0.46072406,digipa-low-impact +Teophilus Tetteh,0.46064255,n +Bess Hamiti,0.46062252,digipa-low-impact +Ceferí Olivé,0.46058378,digipa-low-impact +Enrique Grau,0.46046937,digipa-low-impact +Eleanor Hughes,0.46007007,digipa-low-impact +Elizabeth Charleston,0.46001568,digipa-low-impact +Félix Ziem,0.45987016,digipa-low-impact +Eugeniusz Zak,0.45985222,digipa-low-impact +Dain Yoon,0.45977795,fareast +Gong Xian,0.4595083,digipa-low-impact +Flavia Blois,0.45950204,digipa-low-impact +Frederik Vermehren,0.45949826,digipa-low-impact +Gang Se-hwang,0.45937777,digipa-low-impact +Bjørn Wiinblad,0.45934483,digipa-low-impact +Alex Horley-Orlandelli,0.42623433,digipa-low-impact +Dr. 
Atl,0.459287,digipa-low-impact +Hu Jieqing,0.45889485,fareast +Amédée Ozenfant,0.4585215,digipa-low-impact +Warren Ellis,0.4584044,digipa-low-impact +Helen Dahm,0.45804346,digipa-low-impact +Anne Geddes,0.45785287,digipa-low-impact +Bikash Bhattacharjee,0.45775396,digipa-low-impact +Phil Foglio,0.457582,digipa-low-impact +Evelyn Abelson,0.4574563,digipa-low-impact +Alan Moore,0.4573369,digipa-low-impact +Josh Kao,0.45725146,fareast +Bertil Nilsson,0.45724383,digipa-low-impact +Hristofor Zhefarovich,0.457089,fineart +Edward Bailey,0.45659882,digipa-low-impact +Christopher Moeller,0.45648077,digipa-low-impact +Dóra Keresztes,0.4558745,fineart +Cory Arcangel,0.4558071,digipa-low-impact +Aleksander Kobzdej,0.45552525,digipa-low-impact +Tim Burton,0.45541722,digipa-high-impact +Chen Jiru,0.4553378,fareast +George Passantino,0.4552104,digipa-low-impact +Fuller Potter,0.4552072,digipa-low-impact +Warwick Globe,0.45516664,digipa-low-impact +Heinz Anger,0.45466962,digipa-low-impact +Elias Goldberg,0.45416242,digipa-low-impact +tokyogenso,0.45406622,fareast +Zeen Chin,0.45404464,digipa-low-impact +Albert Koetsier,0.45385844,fineart +Giuseppe Camuncoli,0.45377725,digipa-low-impact +Elsie Vera Cole,0.45377362,digipa-low-impact +Andreas Franke,0.4300047,digipa-low-impact +Constantine Andreou,0.4533816,digipa-low-impact +Elisabeth Collins,0.45337808,digipa-low-impact +Ted Nasmith,0.45302224,fineart +Antônio Parreiras,0.45269623,digipa-low-impact +Gwilym Prichard,0.45256525,digipa-low-impact +Fang Congyi,0.45240825,fareast +Huang Ding,0.45233482,fareast +Hans von Bartels,0.45200723,digipa-low-impact +Peter Elson,0.4121406,fineart +Fan Kuan,0.4513034,digipa-low-impact +Dean Roger,0.45112592,digipa-low-impact +Bernat Sanjuan,0.45074993,fareast +Fletcher Martin,0.45055175,digipa-low-impact +Gentile Tondino,0.45043385,digipa-low-impact +Ei-Q,0.45038772,digipa-low-impact +Chen Lin,0.45035738,fareast +Ted Wallace,0.4500007,digipa-low-impact +"Cornelisz Hendriksz Vroom, the Younger",0.4499252,digipa-low-impact +Alpo Jaakola,0.44981295,digipa-low-impact +Clark Voorhees,0.4495309,digipa-low-impact +Cleve Gray,0.449188,digipa-low-impact +Wolf Kahn,0.4489858,digipa-low-impact +Choi Buk,0.44892842,fareast +Frank Tinsley,0.4480373,digipa-low-impact +George Bell,0.44779524,digipa-low-impact +Fiona Stephenson,0.44761062,fineart +Carlos Trillo Name,0.4470371,digipa-low-impact +Jamie McKelvie,0.44696707,digipa-low-impact +Dennis Flanders,0.44673377,digipa-low-impact +Dulah Marie Evans,0.44662604,digipa-low-impact +Hans Schwarz,0.4463275,digipa-low-impact +Steve McCurry,0.44620228,digipa-low-impact +Bedwyr Williams,0.44616276,digipa-low-impact +Anton Graff,0.38569996,digipa-low-impact +Leticia Gillett,0.44578317,digipa-low-impact +Rafał Olbiński,0.44561762,digipa-low-impact +Artgerm,0.44555497,fineart +Adrienn Henczné Deák,0.445518,digipa-low-impact +Gu Hongzhong,0.4454906,fareast +Matt Groening,0.44518438,digipa-low-impact +Sue Bryce,0.4447164,digipa-low-impact +Armin Baumgarten,0.444061,digipa-low-impact +Araceli Gilbert,0.44399196,digipa-low-impact +Carey Morris,0.44388965,digipa-low-impact +Ignat Bednarik,0.4438085,digipa-low-impact +Frank Buchser,0.44373792,digipa-low-impact +Ben Zoeller,0.44368798,digipa-low-impact +Adam Szentpétery,0.4434548,fineart +Gene Davis,0.44343877,digipa-low-impact +Fei Danxu,0.4433627,fareast +Andrei Kolkoutine,0.44328922,digipa-low-impact +Bruce Onobrakpeya,0.42588046,n +Christoph Amberger,0.38912287,digipa-low-impact +"Fred Mitchell,",0.4432277,digipa-low-impact +Klaus 
Burgle,0.44295216,digipa-low-impact +Carl Hoppe,0.44270635,digipa-low-impact +Caroline Gotch,0.44263047,digipa-low-impact +Hans Mertens,0.44260004,digipa-low-impact +Mandy Disher,0.44219893,fineart +Sarah Lucas,0.4420507,digipa-low-impact +Sydney Edmunds,0.44198513,digipa-low-impact +Amos Ferguson,0.4418735,digipa-low-impact +Alton Tobey,0.4416385,digipa-low-impact +Clifford Ross,0.44139367,digipa-low-impact +Henric Trenk,0.4412782,digipa-low-impact +Claire Hummel,0.44119984,digipa-low-impact +Norman Foster,0.4411899,digipa-low-impact +Carmen Saldana,0.44076762,digipa-low-impact +Michael Whelan,0.4372847,digipa-low-impact +Carlos Berlanga,0.440354,digipa-low-impact +Gilles Beloeil,0.43997732,digipa-low-impact +Ashley Wood,0.4398396,digipa-low-impact +David Allan,0.43969798,digipa-low-impact +Mark Lovett,0.43922082,digipa-low-impact +Jed Henry,0.43882954,digipa-low-impact +Adam Bruce Thomson,0.43847767,digipa-low-impact +Horst Antes,0.4384303,digipa-low-impact +Fritz Glarner,0.43787453,digipa-low-impact +Harold McCauley,0.43760818,digipa-low-impact +Estuardo Maldonado,0.437594,digipa-low-impact +Dai Jin,0.4375449,fareast +Fabien Charuau,0.43688047,digipa-low-impact +Chica Macnab,0.4365166,digipa-low-impact +Jim Burns,0.3975072,digipa-low-impact +Santiago Calatrava,0.43651623,digipa-low-impact +Robert Maguire,0.40926617,digipa-low-impact +Cliff Childs,0.43611953,digipa-low-impact +Charles Martin,0.43582463,fareast +Elbridge Ayer Burbank,0.43572164,digipa-low-impact +Anita Kunz,0.4356005,digipa-low-impact +Colin Geller,0.43559563,digipa-low-impact +Allen Tupper True,0.43556124,digipa-low-impact +Jef Wu,0.43555313,digipa-low-impact +Jon McCoy,0.4147122,digipa-low-impact +Cedric Seaut,0.43521535,digipa-low-impact +Emily Shanks,0.43519047,digipa-low-impact +Andrew Whem,0.43512022,digipa-low-impact +Ibrahim Kodra,0.43471518,digipa-low-impact +Harrington Mann,0.4345901,digipa-low-impact +Jerry Siegel,0.43458986,digipa-low-impact +Howard Kanovitz,0.4345178,digipa-low-impact +Cicely Hey,0.43449926,digipa-low-impact +Ben Thompson,0.43436068,digipa-low-impact +Joe Bowler,0.43413073,digipa-low-impact +Lori Earley,0.43389612,digipa-low-impact +Arent Arentsz,0.43373522,digipa-low-impact +David Bailly,0.43371305,digipa-low-impact +Hans Arnold,0.4335214,digipa-low-impact +Constance Copeman,0.4334836,digipa-low-impact +Brent Heighton,0.4333118,fineart +Eric Taylor,0.43312082,digipa-low-impact +Aleksander Gine,0.4326849,digipa-low-impact +Alexander Johnston,0.4326589,digipa-low-impact +David Park,0.43235332,digipa-low-impact +Balázs Diószegi,0.432244,digipa-low-impact +Ed Binkley,0.43222216,digipa-low-impact +Eric Dinyer,0.4321258,digipa-low-impact +Susan Luo,0.43198025,fareast +Cedric Seaut (Keos Masons),0.4317356,digipa-low-impact +Lorena Alvarez Gómez,0.431683,digipa-low-impact +Fred Ludekens,0.431662,digipa-low-impact +David Begbie,0.4316218,digipa-low-impact +Ai Xuan,0.43150818,fareast +Felix-Kelly,0.43132153,digipa-low-impact +Antonín Chittussi,0.431248,digipa-low-impact +Ammi Phillips,0.43095884,digipa-low-impact +Elke Vogelsang,0.43092483,digipa-low-impact +Fathi Hassan,0.43090487,digipa-low-impact +Angela Sung,0.391746,fareast +Clément Serveau,0.43050706,digipa-low-impact +Dong Yuan,0.4303865,fareast +Hew Lorimer,0.43035403,digipa-low-impact +David Finch,0.29487437,digipa-low-impact +Bill Durgin,0.4300932,digipa-low-impact +Alexander Robertson,0.4300743,digipa-low-impact diff --git a/configs/alt-diffusion-inference.yaml b/configs/alt-diffusion-inference.yaml new file mode 100644 index 
0000000000000000000000000000000000000000..cfbee72d71bfd7deed2075e423ca51bd1da0521c --- /dev/null +++ b/configs/alt-diffusion-inference.yaml @@ -0,0 +1,72 @@ +model: + base_learning_rate: 1.0e-04 + target: ldm.models.diffusion.ddpm.LatentDiffusion + params: + linear_start: 0.00085 + linear_end: 0.0120 + num_timesteps_cond: 1 + log_every_t: 200 + timesteps: 1000 + first_stage_key: "jpg" + cond_stage_key: "txt" + image_size: 64 + channels: 4 + cond_stage_trainable: false # Note: different from the one we trained before + conditioning_key: crossattn + monitor: val/loss_simple_ema + scale_factor: 0.18215 + use_ema: False + + scheduler_config: # 10000 warmup steps + target: ldm.lr_scheduler.LambdaLinearScheduler + params: + warm_up_steps: [ 10000 ] + cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases + f_start: [ 1.e-6 ] + f_max: [ 1. ] + f_min: [ 1. ] + + unet_config: + target: ldm.modules.diffusionmodules.openaimodel.UNetModel + params: + image_size: 32 # unused + in_channels: 4 + out_channels: 4 + model_channels: 320 + attention_resolutions: [ 4, 2, 1 ] + num_res_blocks: 2 + channel_mult: [ 1, 2, 4, 4 ] + num_heads: 8 + use_spatial_transformer: True + transformer_depth: 1 + context_dim: 768 + use_checkpoint: True + legacy: False + + first_stage_config: + target: ldm.models.autoencoder.AutoencoderKL + params: + embed_dim: 4 + monitor: val/rec_loss + ddconfig: + double_z: true + z_channels: 4 + resolution: 256 + in_channels: 3 + out_ch: 3 + ch: 128 + ch_mult: + - 1 + - 2 + - 4 + - 4 + num_res_blocks: 2 + attn_resolutions: [] + dropout: 0.0 + lossconfig: + target: torch.nn.Identity + + cond_stage_config: + target: modules.xlmr.BertSeriesModelWithTransformation + params: + name: "XLMR-Large" \ No newline at end of file diff --git a/configs/v1-inference.yaml b/configs/v1-inference.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d4effe569e897369918625f9d8be5603a0e6a0d6 --- /dev/null +++ b/configs/v1-inference.yaml @@ -0,0 +1,70 @@ +model: + base_learning_rate: 1.0e-04 + target: ldm.models.diffusion.ddpm.LatentDiffusion + params: + linear_start: 0.00085 + linear_end: 0.0120 + num_timesteps_cond: 1 + log_every_t: 200 + timesteps: 1000 + first_stage_key: "jpg" + cond_stage_key: "txt" + image_size: 64 + channels: 4 + cond_stage_trainable: false # Note: different from the one we trained before + conditioning_key: crossattn + monitor: val/loss_simple_ema + scale_factor: 0.18215 + use_ema: False + + scheduler_config: # 10000 warmup steps + target: ldm.lr_scheduler.LambdaLinearScheduler + params: + warm_up_steps: [ 10000 ] + cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases + f_start: [ 1.e-6 ] + f_max: [ 1. ] + f_min: [ 1. 
] + + unet_config: + target: ldm.modules.diffusionmodules.openaimodel.UNetModel + params: + image_size: 32 # unused + in_channels: 4 + out_channels: 4 + model_channels: 320 + attention_resolutions: [ 4, 2, 1 ] + num_res_blocks: 2 + channel_mult: [ 1, 2, 4, 4 ] + num_heads: 8 + use_spatial_transformer: True + transformer_depth: 1 + context_dim: 768 + use_checkpoint: True + legacy: False + + first_stage_config: + target: ldm.models.autoencoder.AutoencoderKL + params: + embed_dim: 4 + monitor: val/rec_loss + ddconfig: + double_z: true + z_channels: 4 + resolution: 256 + in_channels: 3 + out_ch: 3 + ch: 128 + ch_mult: + - 1 + - 2 + - 4 + - 4 + num_res_blocks: 2 + attn_resolutions: [] + dropout: 0.0 + lossconfig: + target: torch.nn.Identity + + cond_stage_config: + target: ldm.modules.encoders.modules.FrozenCLIPEmbedder diff --git a/environment-wsl2.yaml b/environment-wsl2.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f88727507835d96bfbbfae3ece2996e8506e3760 --- /dev/null +++ b/environment-wsl2.yaml @@ -0,0 +1,11 @@ +name: automatic +channels: + - pytorch + - defaults +dependencies: + - python=3.10 + - pip=22.2.2 + - cudatoolkit=11.3 + - pytorch=1.12.1 + - torchvision=0.13.1 + - numpy=1.23.1 \ No newline at end of file diff --git a/extensions-builtin/LDSR/ldsr_model_arch.py b/extensions-builtin/LDSR/ldsr_model_arch.py new file mode 100644 index 0000000000000000000000000000000000000000..0ad49f4e9f815baffa2eb08c625cd2fb8aadf9a6 --- /dev/null +++ b/extensions-builtin/LDSR/ldsr_model_arch.py @@ -0,0 +1,256 @@ +import os +import gc +import time +import warnings + +import numpy as np +import torch +import torchvision +from PIL import Image +from einops import rearrange, repeat +from omegaconf import OmegaConf +import safetensors.torch + +from ldm.models.diffusion.ddim import DDIMSampler +from ldm.util import instantiate_from_config, ismap +from modules import shared, sd_hijack + +warnings.filterwarnings("ignore", category=UserWarning) + +cached_ldsr_model: torch.nn.Module = None + + +# Create LDSR Class +class LDSR: + def load_model_from_config(self, half_attention): + global cached_ldsr_model + + if shared.opts.ldsr_cached and cached_ldsr_model is not None: + print("Loading model from cache") + model: torch.nn.Module = cached_ldsr_model + else: + print(f"Loading model from {self.modelPath}") + _, extension = os.path.splitext(self.modelPath) + if extension.lower() == ".safetensors": + pl_sd = safetensors.torch.load_file(self.modelPath, device="cpu") + else: + pl_sd = torch.load(self.modelPath, map_location="cpu") + sd = pl_sd["state_dict"] if "state_dict" in pl_sd else pl_sd + config = OmegaConf.load(self.yamlPath) + config.model.target = "ldm.models.diffusion.ddpm.LatentDiffusionV1" + model: torch.nn.Module = instantiate_from_config(config.model) + model.load_state_dict(sd, strict=False) + model = model.to(shared.device) + if half_attention: + model = model.half() + if shared.cmd_opts.opt_channelslast: + model = model.to(memory_format=torch.channels_last) + + sd_hijack.model_hijack.hijack(model) # apply optimization + model.eval() + + if shared.opts.ldsr_cached: + cached_ldsr_model = model + + return {"model": model} + + def __init__(self, model_path, yaml_path): + self.modelPath = model_path + self.yamlPath = yaml_path + + @staticmethod + def run(model, selected_path, custom_steps, eta): + example = get_cond(selected_path) + + n_runs = 1 + guider = None + ckwargs = None + ddim_use_x0_pred = False + temperature = 1. 
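+ # fixed sampling knobs: a single run, no score corrector (guider/ckwargs are None), noise temperature 1, and the final image taken from the DDIM sample rather than its intermediate x0 prediction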
+ custom_shape = None + + height, width = example["image"].shape[1:3] + split_input = height >= 128 and width >= 128 + + if split_input: + ks = 128 + stride = 64 + vqf = 4 # first-stage downsampling factor + model.split_input_params = {"ks": (ks, ks), "stride": (stride, stride), + "vqf": vqf, + "patch_distributed_vq": True, + "tie_braker": False, + "clip_max_weight": 0.5, + "clip_min_weight": 0.01, + "clip_max_tie_weight": 0.5, + "clip_min_tie_weight": 0.01} + else: + if hasattr(model, "split_input_params"): + delattr(model, "split_input_params") + + x_t = None + logs = None + for n in range(n_runs): + if custom_shape is not None: + x_t = torch.randn(1, custom_shape[1], custom_shape[2], custom_shape[3]).to(model.device) + x_t = repeat(x_t, '1 c h w -> b c h w', b=custom_shape[0]) + + logs = make_convolutional_sample(example, model, + custom_steps=custom_steps, + eta=eta, quantize_x0=False, + custom_shape=custom_shape, + temperature=temperature, noise_dropout=0., + corrector=guider, corrector_kwargs=ckwargs, x_T=x_t, + ddim_use_x0_pred=ddim_use_x0_pred + ) + return logs + + def super_resolution(self, image, steps=100, target_scale=2, half_attention=False): + model = self.load_model_from_config(half_attention) + + # Run settings + diffusion_steps = int(steps) + eta = 1.0 + + gc.collect() + if torch.cuda.is_available(): + torch.cuda.empty_cache() + + im_og = image + width_og, height_og = im_og.size + # LDSR is a fixed 4x upscaler; if the maximum upscale factor ever becomes configurable, the hard-coded 4 below should become that variable + down_sample_rate = target_scale / 4 + wd = width_og * down_sample_rate + hd = height_og * down_sample_rate + width_downsampled_pre = int(np.ceil(wd)) + height_downsampled_pre = int(np.ceil(hd)) + + if down_sample_rate != 1: + print( + f'Downsampling from [{width_og}, {height_og}] to [{width_downsampled_pre}, {height_downsampled_pre}]') + im_og = im_og.resize((width_downsampled_pre, height_downsampled_pre), Image.LANCZOS) + else: + print(f"Down sample rate is 1 from {target_scale} / 4 (not downsampling)") + + # pad width and height to multiples of 64 (minimum 128); padding repeats the edge values of the image to avoid artifacts + pad_w, pad_h = np.max(((2, 2), np.ceil(np.array(im_og.size) / 64).astype(int)), axis=0) * 64 - im_og.size + im_padded = Image.fromarray(np.pad(np.array(im_og), ((0, pad_h), (0, pad_w), (0, 0)), mode='edge')) + + logs = self.run(model["model"], im_padded, diffusion_steps, eta) + + sample = logs["sample"] + sample = sample.detach().cpu() + sample = torch.clamp(sample, -1., 1.) + sample = (sample + 1.) / 2. * 255 + sample = sample.numpy().astype(np.uint8) + sample = np.transpose(sample, (0, 2, 3, 1)) + a = Image.fromarray(sample[0]) + + # remove padding + a = a.crop((0, 0) + tuple(np.array(im_og.size) * 4)) + + del model + gc.collect() + if torch.cuda.is_available(): + torch.cuda.empty_cache() + + return a
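A minimal, self-contained sketch of the padding arithmetic in super_resolution above (the 300x200 input size and the variable names are illustrative): each side is rounded up to the next multiple of 64, with a floor of 128, before the edge-value padding is applied.

    import numpy as np

    size = np.array([300, 200])  # (width, height) of the image to be upscaled
    # round each side up to a multiple of 64, but never below 2 * 64 = 128
    padded = np.max(((2, 2), np.ceil(size / 64).astype(int)), axis=0) * 64
    pad_w, pad_h = padded - size
    print(padded, pad_w, pad_h)  # [320 256] 20 56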
+ + +def get_cond(selected_path): + example = dict() + up_f = 4 + c = selected_path.convert('RGB') + c = torch.unsqueeze(torchvision.transforms.ToTensor()(c), 0) + c_up = torchvision.transforms.functional.resize(c, size=[up_f * c.shape[2], up_f * c.shape[3]], + antialias=True) + c_up = rearrange(c_up, '1 c h w -> 1 h w c') + c = rearrange(c, '1 c h w -> 1 h w c') + c = 2. * c - 1. + + c = c.to(shared.device) + # "LR_image" conditions the diffusion; "image" is the 4x-resized reference whose shape sets the output resolution + example["LR_image"] = c + example["image"] = c_up + + return example + + +@torch.no_grad() +def convsample_ddim(model, cond, steps, shape, eta=1.0, callback=None, normals_sequence=None, + mask=None, x0=None, quantize_x0=False, temperature=1., score_corrector=None, + corrector_kwargs=None, x_t=None + ): + ddim = DDIMSampler(model) + bs = shape[0] + shape = shape[1:] + print(f"Sampling with eta = {eta}; steps: {steps}") + samples, intermediates = ddim.sample(steps, batch_size=bs, shape=shape, conditioning=cond, callback=callback, + normals_sequence=normals_sequence, quantize_x0=quantize_x0, eta=eta, + mask=mask, x0=x0, temperature=temperature, verbose=False, + score_corrector=score_corrector, + corrector_kwargs=corrector_kwargs, x_t=x_t) + + return samples, intermediates + + +@torch.no_grad() +def make_convolutional_sample(batch, model, custom_steps=None, eta=1.0, quantize_x0=False, custom_shape=None, temperature=1., noise_dropout=0., corrector=None, + corrector_kwargs=None, x_T=None, ddim_use_x0_pred=False): + log = dict() + + z, c, x, xrec, xc = model.get_input(batch, model.first_stage_key, + return_first_stage_outputs=True, + force_c_encode=not (hasattr(model, 'split_input_params') + and model.cond_stage_key == 'coordinates_bbox'), + return_original_cond=True) + + if custom_shape is not None: + z = torch.randn(custom_shape) + print(f"Generating {custom_shape[0]} samples of shape {custom_shape[1:]}") + + z0 = None + + log["input"] = x + log["reconstruction"] = xrec + + if ismap(xc): + log["original_conditioning"] = model.to_rgb(xc) + if hasattr(model, 'cond_stage_key'): + log[model.cond_stage_key] = model.to_rgb(xc) + + else: + log["original_conditioning"] = xc if xc is not None else torch.zeros_like(x) + if model.cond_stage_model: + log[model.cond_stage_key] = xc if xc is not None else torch.zeros_like(x) + if model.cond_stage_key == 'class_label': + log[model.cond_stage_key] = xc[model.cond_stage_key] + + with model.ema_scope("Plotting"): + t0 = time.time() + + sample, intermediates = convsample_ddim(model, c, steps=custom_steps, shape=z.shape, + eta=eta, + quantize_x0=quantize_x0, mask=None, x0=z0, + temperature=temperature, score_corrector=corrector, corrector_kwargs=corrector_kwargs, + x_t=x_T) + t1 = time.time() + + if ddim_use_x0_pred: + sample = intermediates['pred_x0'][-1] + + x_sample = model.decode_first_stage(sample) + + try: + x_sample_noquant = model.decode_first_stage(sample, force_not_quantize=True) + log["sample_noquant"] = x_sample_noquant + log["sample_diff"] = torch.abs(x_sample_noquant - x_sample) + except Exception: + # only VQ first stages support force_not_quantize; skip the extra log otherwise + pass + + log["sample"] = x_sample + log["time"] = t1 - t0 + + return log diff --git a/extensions-builtin/LDSR/preload.py b/extensions-builtin/LDSR/preload.py new file mode 100644 index 0000000000000000000000000000000000000000..d746007c7cc83b037b6ae82b766475b56c3fd778 --- /dev/null +++ b/extensions-builtin/LDSR/preload.py @@ -0,0 +1,6 @@ +import os +from modules import paths + + +def preload(parser): + parser.add_argument("--ldsr-models-path", type=str, help="Path to directory with LDSR model file(s).", default=os.path.join(paths.models_path, 'LDSR')) diff --git a/extensions-builtin/LDSR/scripts/ldsr_model.py b/extensions-builtin/LDSR/scripts/ldsr_model.py new file mode 100644 index 0000000000000000000000000000000000000000..b8cff29b9f4ca56e3a9f4b1ac8e150abb1a0ff30 --- /dev/null +++ b/extensions-builtin/LDSR/scripts/ldsr_model.py @@ -0,0 +1,69 @@ +import os +import sys +import traceback + +from basicsr.utils.download_util import
load_file_from_url + +from modules.upscaler import Upscaler, UpscalerData +from ldsr_model_arch import LDSR +from modules import shared, script_callbacks +import sd_hijack_autoencoder, sd_hijack_ddpm_v1 + + +class UpscalerLDSR(Upscaler): + def __init__(self, user_path): + self.name = "LDSR" + self.user_path = user_path + self.model_url = "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1" + self.yaml_url = "https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1" + super().__init__() + scaler_data = UpscalerData("LDSR", None, self) + self.scalers = [scaler_data] + + def load_model(self, path: str): + # Remove incorrect project.yaml file if too big + yaml_path = os.path.join(self.model_path, "project.yaml") + old_model_path = os.path.join(self.model_path, "model.pth") + new_model_path = os.path.join(self.model_path, "model.ckpt") + safetensors_model_path = os.path.join(self.model_path, "model.safetensors") + if os.path.exists(yaml_path): + statinfo = os.stat(yaml_path) + if statinfo.st_size >= 10485760: + print("Removing invalid LDSR YAML file.") + os.remove(yaml_path) + if os.path.exists(old_model_path): + print("Renaming model from model.pth to model.ckpt") + os.rename(old_model_path, new_model_path) + if os.path.exists(safetensors_model_path): + model = safetensors_model_path + else: + model = load_file_from_url(url=self.model_url, model_dir=self.model_path, + file_name="model.ckpt", progress=True) + yaml = load_file_from_url(url=self.yaml_url, model_dir=self.model_path, + file_name="project.yaml", progress=True) + + try: + return LDSR(model, yaml) + + except Exception: + print("Error importing LDSR:", file=sys.stderr) + print(traceback.format_exc(), file=sys.stderr) + return None + + def do_upscale(self, img, path): + ldsr = self.load_model(path) + if ldsr is None: + print("NO LDSR!") + return img + ddim_steps = shared.opts.ldsr_steps + return ldsr.super_resolution(img, ddim_steps, self.scale) + + +def on_ui_settings(): + import gradio as gr + + shared.opts.add_option("ldsr_steps", shared.OptionInfo(100, "LDSR processing steps. 
Lower = faster", gr.Slider, {"minimum": 1, "maximum": 200, "step": 1}, section=('upscaling', "Upscaling"))) + shared.opts.add_option("ldsr_cached", shared.OptionInfo(False, "Cache LDSR model in memory", gr.Checkbox, {"interactive": True}, section=('upscaling', "Upscaling"))) + + +script_callbacks.on_ui_settings(on_ui_settings) diff --git a/extensions-builtin/LDSR/sd_hijack_autoencoder.py b/extensions-builtin/LDSR/sd_hijack_autoencoder.py new file mode 100644 index 0000000000000000000000000000000000000000..8e03c7f898988c237c714ed949610f5035b30b50 --- /dev/null +++ b/extensions-builtin/LDSR/sd_hijack_autoencoder.py @@ -0,0 +1,286 @@ +# The content of this file comes from the ldm/models/autoencoder.py file of the compvis/stable-diffusion repo +# The VQModel & VQModelInterface were subsequently removed from ldm/models/autoencoder.py when we moved to the stability-ai/stablediffusion repo +# As the LDSR upscaler relies on VQModel & VQModelInterface, this hijack puts them back into ldm.models.autoencoder + +import numpy as np +import torch +import pytorch_lightning as pl +import torch.nn.functional as F +from contextlib import contextmanager +from packaging import version +from torch.optim.lr_scheduler import LambdaLR +from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer +from ldm.modules.diffusionmodules.model import Encoder, Decoder +from ldm.modules.ema import LitEma +from ldm.util import instantiate_from_config + +import ldm.models.autoencoder + +class VQModel(pl.LightningModule): + def __init__(self, + ddconfig, + lossconfig, + n_embed, + embed_dim, + ckpt_path=None, + ignore_keys=[], + image_key="image", + colorize_nlabels=None, + monitor=None, + batch_resize_range=None, + scheduler_config=None, + lr_g_factor=1.0, + remap=None, + sane_index_shape=False, # tell vector quantizer to return indices as bhw + use_ema=False + ): + super().__init__() + self.embed_dim = embed_dim + self.n_embed = n_embed + self.image_key = image_key + self.encoder = Encoder(**ddconfig) + self.decoder = Decoder(**ddconfig) + self.loss = instantiate_from_config(lossconfig) + self.quantize = VectorQuantizer(n_embed, embed_dim, beta=0.25, + remap=remap, + sane_index_shape=sane_index_shape) + self.quant_conv = torch.nn.Conv2d(ddconfig["z_channels"], embed_dim, 1) + self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) + if colorize_nlabels is not None: + assert type(colorize_nlabels)==int + self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) + if monitor is not None: + self.monitor = monitor + self.batch_resize_range = batch_resize_range + if self.batch_resize_range is not None: + print(f"{self.__class__.__name__}: Using per-batch resizing in range {batch_resize_range}.") + + self.use_ema = use_ema + if self.use_ema: + self.model_ema = LitEma(self) + print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") + + if ckpt_path is not None: + self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) + self.scheduler_config = scheduler_config + self.lr_g_factor = lr_g_factor + + @contextmanager + def ema_scope(self, context=None): + if self.use_ema: + self.model_ema.store(self.parameters()) + self.model_ema.copy_to(self) + if context is not None: + print(f"{context}: Switched to EMA weights") + try: + yield None + finally: + if self.use_ema: + self.model_ema.restore(self.parameters()) + if context is not None: + print(f"{context}: Restored training weights") + + def init_from_ckpt(self, path, ignore_keys=list()): + sd = torch.load(path, map_location="cpu")["state_dict"] + keys = list(sd.keys()) + for k in keys: + for ik in ignore_keys: + if k.startswith(ik): +
print("Deleting key {} from state_dict.".format(k)) + del sd[k] + missing, unexpected = self.load_state_dict(sd, strict=False) + print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") + if len(missing) > 0: + print(f"Missing Keys: {missing}") + print(f"Unexpected Keys: {unexpected}") + + def on_train_batch_end(self, *args, **kwargs): + if self.use_ema: + self.model_ema(self) + + def encode(self, x): + h = self.encoder(x) + h = self.quant_conv(h) + quant, emb_loss, info = self.quantize(h) + return quant, emb_loss, info + + def encode_to_prequant(self, x): + h = self.encoder(x) + h = self.quant_conv(h) + return h + + def decode(self, quant): + quant = self.post_quant_conv(quant) + dec = self.decoder(quant) + return dec + + def decode_code(self, code_b): + quant_b = self.quantize.embed_code(code_b) + dec = self.decode(quant_b) + return dec + + def forward(self, input, return_pred_indices=False): + quant, diff, (_,_,ind) = self.encode(input) + dec = self.decode(quant) + if return_pred_indices: + return dec, diff, ind + return dec, diff + + def get_input(self, batch, k): + x = batch[k] + if len(x.shape) == 3: + x = x[..., None] + x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() + if self.batch_resize_range is not None: + lower_size = self.batch_resize_range[0] + upper_size = self.batch_resize_range[1] + if self.global_step <= 4: + # do the first few batches with max size to avoid later oom + new_resize = upper_size + else: + new_resize = np.random.choice(np.arange(lower_size, upper_size+16, 16)) + if new_resize != x.shape[2]: + x = F.interpolate(x, size=new_resize, mode="bicubic") + x = x.detach() + return x + + def training_step(self, batch, batch_idx, optimizer_idx): + # https://github.com/pytorch/pytorch/issues/37142 + # try not to fool the heuristics + x = self.get_input(batch, self.image_key) + xrec, qloss, ind = self(x, return_pred_indices=True) + + if optimizer_idx == 0: + # autoencode + aeloss, log_dict_ae = self.loss(qloss, x, xrec, optimizer_idx, self.global_step, + last_layer=self.get_last_layer(), split="train", + predicted_indices=ind) + + self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True) + return aeloss + + if optimizer_idx == 1: + # discriminator + discloss, log_dict_disc = self.loss(qloss, x, xrec, optimizer_idx, self.global_step, + last_layer=self.get_last_layer(), split="train") + self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=True) + return discloss + + def validation_step(self, batch, batch_idx): + log_dict = self._validation_step(batch, batch_idx) + with self.ema_scope(): + log_dict_ema = self._validation_step(batch, batch_idx, suffix="_ema") + return log_dict + + def _validation_step(self, batch, batch_idx, suffix=""): + x = self.get_input(batch, self.image_key) + xrec, qloss, ind = self(x, return_pred_indices=True) + aeloss, log_dict_ae = self.loss(qloss, x, xrec, 0, + self.global_step, + last_layer=self.get_last_layer(), + split="val"+suffix, + predicted_indices=ind + ) + + discloss, log_dict_disc = self.loss(qloss, x, xrec, 1, + self.global_step, + last_layer=self.get_last_layer(), + split="val"+suffix, + predicted_indices=ind + ) + rec_loss = log_dict_ae[f"val{suffix}/rec_loss"] + self.log(f"val{suffix}/rec_loss", rec_loss, + prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True) + self.log(f"val{suffix}/aeloss", aeloss, + prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True) + if 
version.parse(pl.__version__) >= version.parse('1.4.0'): + del log_dict_ae[f"val{suffix}/rec_loss"] + self.log_dict(log_dict_ae) + self.log_dict(log_dict_disc) + return self.log_dict + + def configure_optimizers(self): + lr_d = self.learning_rate + lr_g = self.lr_g_factor*self.learning_rate + print("lr_d", lr_d) + print("lr_g", lr_g) + opt_ae = torch.optim.Adam(list(self.encoder.parameters())+ + list(self.decoder.parameters())+ + list(self.quantize.parameters())+ + list(self.quant_conv.parameters())+ + list(self.post_quant_conv.parameters()), + lr=lr_g, betas=(0.5, 0.9)) + opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(), + lr=lr_d, betas=(0.5, 0.9)) + + if self.scheduler_config is not None: + scheduler = instantiate_from_config(self.scheduler_config) + + print("Setting up LambdaLR scheduler...") + scheduler = [ + { + 'scheduler': LambdaLR(opt_ae, lr_lambda=scheduler.schedule), + 'interval': 'step', + 'frequency': 1 + }, + { + 'scheduler': LambdaLR(opt_disc, lr_lambda=scheduler.schedule), + 'interval': 'step', + 'frequency': 1 + }, + ] + return [opt_ae, opt_disc], scheduler + return [opt_ae, opt_disc], [] + + def get_last_layer(self): + return self.decoder.conv_out.weight + + def log_images(self, batch, only_inputs=False, plot_ema=False, **kwargs): + log = dict() + x = self.get_input(batch, self.image_key) + x = x.to(self.device) + if only_inputs: + log["inputs"] = x + return log + xrec, _ = self(x) + if x.shape[1] > 3: + # colorize with random projection + assert xrec.shape[1] > 3 + x = self.to_rgb(x) + xrec = self.to_rgb(xrec) + log["inputs"] = x + log["reconstructions"] = xrec + if plot_ema: + with self.ema_scope(): + xrec_ema, _ = self(x) + if x.shape[1] > 3: xrec_ema = self.to_rgb(xrec_ema) + log["reconstructions_ema"] = xrec_ema + return log + + def to_rgb(self, x): + assert self.image_key == "segmentation" + if not hasattr(self, "colorize"): + self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) + x = F.conv2d(x, weight=self.colorize) + x = 2.*(x-x.min())/(x.max()-x.min()) - 1. 
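+ # x now holds the segmentation map projected to 3 channels by the fixed random 1x1 "colorize" convolution, min-max rescaled into [-1, 1]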
+ return x + + +class VQModelInterface(VQModel): + def __init__(self, embed_dim, *args, **kwargs): + super().__init__(embed_dim=embed_dim, *args, **kwargs) + self.embed_dim = embed_dim + + def encode(self, x): + # unlike VQModel.encode, return the pre-quantization latents + h = self.encoder(x) + h = self.quant_conv(h) + return h + + def decode(self, h, force_not_quantize=False): + # also go through quantization layer + if not force_not_quantize: + quant, emb_loss, info = self.quantize(h) + else: + quant = h + quant = self.post_quant_conv(quant) + dec = self.decoder(quant) + return dec + +# re-register the classes so ldm.models.autoencoder.VQModel / VQModelInterface resolve again +setattr(ldm.models.autoencoder, "VQModel", VQModel) +setattr(ldm.models.autoencoder, "VQModelInterface", VQModelInterface) diff --git a/extensions-builtin/LDSR/sd_hijack_ddpm_v1.py b/extensions-builtin/LDSR/sd_hijack_ddpm_v1.py new file mode 100644 index 0000000000000000000000000000000000000000..5c0488e5f6fcea41e7f9fa25070e38fbfe656478 --- /dev/null +++ b/extensions-builtin/LDSR/sd_hijack_ddpm_v1.py @@ -0,0 +1,1449 @@ +# This script is copied from the compvis/stable-diffusion repo (aka the SD V1 repo) +# Original filename: ldm/models/diffusion/ddpm.py +# The purpose is to reinstate the old DDPM logic, which works with VQ, whereas the V2 one doesn't +# Some models such as LDSR require VQ to work correctly +# The classes are suffixed with "V1" and added back to the "ldm.models.diffusion.ddpm" module + +import torch +import torch.nn as nn +import numpy as np +import pytorch_lightning as pl +from torch.optim.lr_scheduler import LambdaLR +from einops import rearrange, repeat +from contextlib import contextmanager +from functools import partial +from tqdm import tqdm +from torchvision.utils import make_grid +from pytorch_lightning.utilities.distributed import rank_zero_only + +from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config +from ldm.modules.ema import LitEma +from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution +from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL +from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like +from ldm.models.diffusion.ddim import DDIMSampler + +import ldm.models.diffusion.ddpm + +__conditioning_keys__ = {'concat': 'c_concat', + 'crossattn': 'c_crossattn', + 'adm': 'y'} + + +def disabled_train(self, mode=True): + """Overwrite model.train with this function to make sure train/eval mode + does not change anymore.""" + return self + + +def uniform_on_device(r1, r2, shape, device): + return (r1 - r2) * torch.rand(*shape, device=device) + r2 + + +class DDPMV1(pl.LightningModule): + # classic DDPM with Gaussian diffusion, in image space + def __init__(self, + unet_config, + timesteps=1000, + beta_schedule="linear", + loss_type="l2", + ckpt_path=None, + ignore_keys=[], + load_only_unet=False, + monitor="val/loss", + use_ema=True, + first_stage_key="image", + image_size=256, + channels=3, + log_every_t=100, + clip_denoised=True, + linear_start=1e-4, + linear_end=2e-2, + cosine_s=8e-3, + given_betas=None, + original_elbo_weight=0., + v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta + l_simple_weight=1., + conditioning_key=None, + parameterization="eps", # all assuming fixed variance schedules + scheduler_config=None, + use_positional_encodings=False, + learn_logvar=False, + logvar_init=0., + ): + super().__init__() + assert parameterization in ["eps", "x0"], 'currently only supporting "eps" and "x0"' + self.parameterization =
parameterization + print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode") + self.cond_stage_model = None + self.clip_denoised = clip_denoised + self.log_every_t = log_every_t + self.first_stage_key = first_stage_key + self.image_size = image_size # try conv? + self.channels = channels + self.use_positional_encodings = use_positional_encodings + self.model = DiffusionWrapperV1(unet_config, conditioning_key) + count_params(self.model, verbose=True) + self.use_ema = use_ema + if self.use_ema: + self.model_ema = LitEma(self.model) + print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") + + self.use_scheduler = scheduler_config is not None + if self.use_scheduler: + self.scheduler_config = scheduler_config + + self.v_posterior = v_posterior + self.original_elbo_weight = original_elbo_weight + self.l_simple_weight = l_simple_weight + + if monitor is not None: + self.monitor = monitor + if ckpt_path is not None: + self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet) + + self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps, + linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s) + + self.loss_type = loss_type + + self.learn_logvar = learn_logvar + self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,)) + if self.learn_logvar: + self.logvar = nn.Parameter(self.logvar, requires_grad=True) + + + def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000, + linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): + if exists(given_betas): + betas = given_betas + else: + betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, + cosine_s=cosine_s) + alphas = 1. - betas + alphas_cumprod = np.cumprod(alphas, axis=0) + alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) + + timesteps, = betas.shape + self.num_timesteps = int(timesteps) + self.linear_start = linear_start + self.linear_end = linear_end + assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' + + to_torch = partial(torch.tensor, dtype=torch.float32) + + self.register_buffer('betas', to_torch(betas)) + self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) + self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) + + # calculations for diffusion q(x_t | x_{t-1}) and others + self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) + self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) + self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) + self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) + self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) + + # calculations for posterior q(x_{t-1} | x_t, x_0) + posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / ( + 1. - alphas_cumprod) + self.v_posterior * betas + # above: equal to 1. / (1. / (1. 
- alpha_cumprod_tm1) + alpha_t / beta_t) + self.register_buffer('posterior_variance', to_torch(posterior_variance)) + # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain + self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) + self.register_buffer('posterior_mean_coef1', to_torch( + betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) + self.register_buffer('posterior_mean_coef2', to_torch( + (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) + + if self.parameterization == "eps": + lvlb_weights = self.betas ** 2 / ( + 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)) + elif self.parameterization == "x0": + lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod)) + else: + raise NotImplementedError("mu not supported") + # TODO how to choose this term + lvlb_weights[0] = lvlb_weights[1] + self.register_buffer('lvlb_weights', lvlb_weights, persistent=False) + assert not torch.isnan(self.lvlb_weights).all() + + @contextmanager + def ema_scope(self, context=None): + if self.use_ema: + self.model_ema.store(self.model.parameters()) + self.model_ema.copy_to(self.model) + if context is not None: + print(f"{context}: Switched to EMA weights") + try: + yield None + finally: + if self.use_ema: + self.model_ema.restore(self.model.parameters()) + if context is not None: + print(f"{context}: Restored training weights") + + def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): + sd = torch.load(path, map_location="cpu") + if "state_dict" in list(sd.keys()): + sd = sd["state_dict"] + keys = list(sd.keys()) + for k in keys: + for ik in ignore_keys: + if k.startswith(ik): + print("Deleting key {} from state_dict.".format(k)) + del sd[k] + missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( + sd, strict=False) + print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") + if len(missing) > 0: + print(f"Missing Keys: {missing}") + if len(unexpected) > 0: + print(f"Unexpected Keys: {unexpected}") + + def q_mean_variance(self, x_start, t): + """ + Get the distribution q(x_t | x_0). + :param x_start: the [N x C x ...] tensor of noiseless inputs. + :param t: the number of diffusion steps (minus 1). Here, 0 means one step. + :return: A tuple (mean, variance, log_variance), all of x_start's shape. 
+ """ + mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start) + variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) + log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) + return mean, variance, log_variance + + def predict_start_from_noise(self, x_t, t, noise): + return ( + extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - + extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise + ) + + def q_posterior(self, x_start, x_t, t): + posterior_mean = ( + extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + + extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t + ) + posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) + posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) + return posterior_mean, posterior_variance, posterior_log_variance_clipped + + def p_mean_variance(self, x, t, clip_denoised: bool): + model_out = self.model(x, t) + if self.parameterization == "eps": + x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) + elif self.parameterization == "x0": + x_recon = model_out + if clip_denoised: + x_recon.clamp_(-1., 1.) + + model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) + return model_mean, posterior_variance, posterior_log_variance + + @torch.no_grad() + def p_sample(self, x, t, clip_denoised=True, repeat_noise=False): + b, *_, device = *x.shape, x.device + model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised) + noise = noise_like(x.shape, device, repeat_noise) + # no noise when t == 0 + nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) + return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise + + @torch.no_grad() + def p_sample_loop(self, shape, return_intermediates=False): + device = self.betas.device + b = shape[0] + img = torch.randn(shape, device=device) + intermediates = [img] + for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps): + img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long), + clip_denoised=self.clip_denoised) + if i % self.log_every_t == 0 or i == self.num_timesteps - 1: + intermediates.append(img) + if return_intermediates: + return img, intermediates + return img + + @torch.no_grad() + def sample(self, batch_size=16, return_intermediates=False): + image_size = self.image_size + channels = self.channels + return self.p_sample_loop((batch_size, channels, image_size, image_size), + return_intermediates=return_intermediates) + + def q_sample(self, x_start, t, noise=None): + noise = default(noise, lambda: torch.randn_like(x_start)) + return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + + extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) + + def get_loss(self, pred, target, mean=True): + if self.loss_type == 'l1': + loss = (target - pred).abs() + if mean: + loss = loss.mean() + elif self.loss_type == 'l2': + if mean: + loss = torch.nn.functional.mse_loss(target, pred) + else: + loss = torch.nn.functional.mse_loss(target, pred, reduction='none') + else: + raise NotImplementedError("unknown loss type '{loss_type}'") + + return loss + + def p_losses(self, x_start, t, noise=None): + noise = default(noise, lambda: torch.randn_like(x_start)) + 
x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) + model_out = self.model(x_noisy, t) + + loss_dict = {} + if self.parameterization == "eps": + target = noise + elif self.parameterization == "x0": + target = x_start + else: + raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported") + + loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3]) + + log_prefix = 'train' if self.training else 'val' + + loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()}) + loss_simple = loss.mean() * self.l_simple_weight + + loss_vlb = (self.lvlb_weights[t] * loss).mean() + loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb}) + + loss = loss_simple + self.original_elbo_weight * loss_vlb + + loss_dict.update({f'{log_prefix}/loss': loss}) + + return loss, loss_dict + + def forward(self, x, *args, **kwargs): + # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size + # assert h == img_size and w == img_size, f'height and width of image must be {img_size}' + t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() + return self.p_losses(x, t, *args, **kwargs) + + def get_input(self, batch, k): + x = batch[k] + if len(x.shape) == 3: + x = x[..., None] + x = rearrange(x, 'b h w c -> b c h w') + x = x.to(memory_format=torch.contiguous_format).float() + return x + + def shared_step(self, batch): + x = self.get_input(batch, self.first_stage_key) + loss, loss_dict = self(x) + return loss, loss_dict + + def training_step(self, batch, batch_idx): + loss, loss_dict = self.shared_step(batch) + + self.log_dict(loss_dict, prog_bar=True, + logger=True, on_step=True, on_epoch=True) + + self.log("global_step", self.global_step, + prog_bar=True, logger=True, on_step=True, on_epoch=False) + + if self.use_scheduler: + lr = self.optimizers().param_groups[0]['lr'] + self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False) + + return loss + + @torch.no_grad() + def validation_step(self, batch, batch_idx): + _, loss_dict_no_ema = self.shared_step(batch) + with self.ema_scope(): + _, loss_dict_ema = self.shared_step(batch) + loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema} + self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) + self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) + + def on_train_batch_end(self, *args, **kwargs): + if self.use_ema: + self.model_ema(self.model) + + def _get_rows_from_list(self, samples): + n_imgs_per_row = len(samples) + denoise_grid = rearrange(samples, 'n b c h w -> b n c h w') + denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') + denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) + return denoise_grid + + @torch.no_grad() + def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs): + log = dict() + x = self.get_input(batch, self.first_stage_key) + N = min(x.shape[0], N) + n_row = min(x.shape[0], n_row) + x = x.to(self.device)[:N] + log["inputs"] = x + + # get diffusion row + diffusion_row = list() + x_start = x[:n_row] + + for t in range(self.num_timesteps): + if t % self.log_every_t == 0 or t == self.num_timesteps - 1: + t = repeat(torch.tensor([t]), '1 -> b', b=n_row) + t = t.to(self.device).long() + noise = torch.randn_like(x_start) + x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) + diffusion_row.append(x_noisy) + + log["diffusion_row"] = self._get_rows_from_list(diffusion_row) + + if sample: + # get denoise 
row + with self.ema_scope("Plotting"): + samples, denoise_row = self.sample(batch_size=N, return_intermediates=True) + + log["samples"] = samples + log["denoise_row"] = self._get_rows_from_list(denoise_row) + + if return_keys: + if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: + return log + else: + return {key: log[key] for key in return_keys} + return log + + def configure_optimizers(self): + lr = self.learning_rate + params = list(self.model.parameters()) + if self.learn_logvar: + params = params + [self.logvar] + opt = torch.optim.AdamW(params, lr=lr) + return opt + + +class LatentDiffusionV1(DDPMV1): + """main class""" + def __init__(self, + first_stage_config, + cond_stage_config, + num_timesteps_cond=None, + cond_stage_key="image", + cond_stage_trainable=False, + concat_mode=True, + cond_stage_forward=None, + conditioning_key=None, + scale_factor=1.0, + scale_by_std=False, + *args, **kwargs): + self.num_timesteps_cond = default(num_timesteps_cond, 1) + self.scale_by_std = scale_by_std + assert self.num_timesteps_cond <= kwargs['timesteps'] + # for backwards compatibility after implementation of DiffusionWrapper + if conditioning_key is None: + conditioning_key = 'concat' if concat_mode else 'crossattn' + if cond_stage_config == '__is_unconditional__': + conditioning_key = None + ckpt_path = kwargs.pop("ckpt_path", None) + ignore_keys = kwargs.pop("ignore_keys", []) + super().__init__(conditioning_key=conditioning_key, *args, **kwargs) + self.concat_mode = concat_mode + self.cond_stage_trainable = cond_stage_trainable + self.cond_stage_key = cond_stage_key + try: + self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 + except: + self.num_downs = 0 + if not scale_by_std: + self.scale_factor = scale_factor + else: + self.register_buffer('scale_factor', torch.tensor(scale_factor)) + self.instantiate_first_stage(first_stage_config) + self.instantiate_cond_stage(cond_stage_config) + self.cond_stage_forward = cond_stage_forward + self.clip_denoised = False + self.bbox_tokenizer = None + + self.restarted_from_ckpt = False + if ckpt_path is not None: + self.init_from_ckpt(ckpt_path, ignore_keys) + self.restarted_from_ckpt = True + + def make_cond_schedule(self, ): + self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) + ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() + self.cond_ids[:self.num_timesteps_cond] = ids + + @rank_zero_only + @torch.no_grad() + def on_train_batch_start(self, batch, batch_idx, dataloader_idx): + # only for very first batch + if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: + assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' + # set rescale weight to 1./std of encodings + print("### USING STD-RESCALING ###") + x = super().get_input(batch, self.first_stage_key) + x = x.to(self.device) + encoder_posterior = self.encode_first_stage(x) + z = self.get_first_stage_encoding(encoder_posterior).detach() + del self.scale_factor + self.register_buffer('scale_factor', 1. 
/ z.flatten().std()) + print(f"setting self.scale_factor to {self.scale_factor}") + print("### USING STD-RESCALING ###") + + def register_schedule(self, + given_betas=None, beta_schedule="linear", timesteps=1000, + linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): + super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) + + self.shorten_cond_schedule = self.num_timesteps_cond > 1 + if self.shorten_cond_schedule: + self.make_cond_schedule() + + def instantiate_first_stage(self, config): + model = instantiate_from_config(config) + self.first_stage_model = model.eval() + self.first_stage_model.train = disabled_train + for param in self.first_stage_model.parameters(): + param.requires_grad = False + + def instantiate_cond_stage(self, config): + if not self.cond_stage_trainable: + if config == "__is_first_stage__": + print("Using first stage also as cond stage.") + self.cond_stage_model = self.first_stage_model + elif config == "__is_unconditional__": + print(f"Training {self.__class__.__name__} as an unconditional model.") + self.cond_stage_model = None + # self.be_unconditional = True + else: + model = instantiate_from_config(config) + self.cond_stage_model = model.eval() + self.cond_stage_model.train = disabled_train + for param in self.cond_stage_model.parameters(): + param.requires_grad = False + else: + assert config != '__is_first_stage__' + assert config != '__is_unconditional__' + model = instantiate_from_config(config) + self.cond_stage_model = model + + def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): + denoise_row = [] + for zd in tqdm(samples, desc=desc): + denoise_row.append(self.decode_first_stage(zd.to(self.device), + force_not_quantize=force_no_decoder_quantization)) + n_imgs_per_row = len(denoise_row) + denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W + denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') + denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') + denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) + return denoise_grid + + def get_first_stage_encoding(self, encoder_posterior): + if isinstance(encoder_posterior, DiagonalGaussianDistribution): + z = encoder_posterior.sample() + elif isinstance(encoder_posterior, torch.Tensor): + z = encoder_posterior + else: + raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") + return self.scale_factor * z + + def get_learned_conditioning(self, c): + if self.cond_stage_forward is None: + if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): + c = self.cond_stage_model.encode(c) + if isinstance(c, DiagonalGaussianDistribution): + c = c.mode() + else: + c = self.cond_stage_model(c) + else: + assert hasattr(self.cond_stage_model, self.cond_stage_forward) + c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) + return c + + def meshgrid(self, h, w): + y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1) + x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1) + + arr = torch.cat([y, x], dim=-1) + return arr + + def delta_border(self, h, w): + """ + :param h: height + :param w: width + :return: normalized distance to image border, + with min distance = 0 at border and max dist = 0.5 at image center + """ + lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2) + arr = self.meshgrid(h, w) / lower_right_corner + dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0] + dist_right_down = 
torch.min(1 - arr, dim=-1, keepdims=True)[0] + edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0] + return edge_dist + + def get_weighting(self, h, w, Ly, Lx, device): + weighting = self.delta_border(h, w) + weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"], + self.split_input_params["clip_max_weight"], ) + weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device) + + if self.split_input_params["tie_braker"]: + L_weighting = self.delta_border(Ly, Lx) + L_weighting = torch.clip(L_weighting, + self.split_input_params["clip_min_tie_weight"], + self.split_input_params["clip_max_tie_weight"]) + + L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device) + weighting = weighting * L_weighting + return weighting + + def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code + """ + :param x: img of size (bs, c, h, w) + :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1]) + """ + bs, nc, h, w = x.shape + + # number of crops in image + Ly = (h - kernel_size[0]) // stride[0] + 1 + Lx = (w - kernel_size[1]) // stride[1] + 1 + + if uf == 1 and df == 1: + fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) + unfold = torch.nn.Unfold(**fold_params) + + fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params) + + weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype) + normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap + weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx)) + + elif uf > 1 and df == 1: + fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) + unfold = torch.nn.Unfold(**fold_params) + + fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf), + dilation=1, padding=0, + stride=(stride[0] * uf, stride[1] * uf)) + fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2) + + weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype) + normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap + weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx)) + + elif df > 1 and uf == 1: + fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) + unfold = torch.nn.Unfold(**fold_params) + + fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df), + dilation=1, padding=0, + stride=(stride[0] // df, stride[1] // df)) + fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2) + + weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype) + normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap + weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx)) + + else: + raise NotImplementedError + + return fold, unfold, normalization, weighting + + @torch.no_grad() + def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False, + cond_key=None, return_original_cond=False, bs=None): + x = super().get_input(batch, k) + if bs is not None: + x = x[:bs] + x = x.to(self.device) + encoder_posterior = self.encode_first_stage(x) + z = self.get_first_stage_encoding(encoder_posterior).detach() + + if self.model.conditioning_key is not None: + if cond_key is None: + cond_key = 
self.cond_stage_key + if cond_key != self.first_stage_key: + if cond_key in ['caption', 'coordinates_bbox']: + xc = batch[cond_key] + elif cond_key == 'class_label': + xc = batch + else: + xc = super().get_input(batch, cond_key).to(self.device) + else: + xc = x + if not self.cond_stage_trainable or force_c_encode: + if isinstance(xc, dict) or isinstance(xc, list): + # import pudb; pudb.set_trace() + c = self.get_learned_conditioning(xc) + else: + c = self.get_learned_conditioning(xc.to(self.device)) + else: + c = xc + if bs is not None: + c = c[:bs] + + if self.use_positional_encodings: + pos_x, pos_y = self.compute_latent_shifts(batch) + ckey = __conditioning_keys__[self.model.conditioning_key] + c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y} + + else: + c = None + xc = None + if self.use_positional_encodings: + pos_x, pos_y = self.compute_latent_shifts(batch) + c = {'pos_x': pos_x, 'pos_y': pos_y} + out = [z, c] + if return_first_stage_outputs: + xrec = self.decode_first_stage(z) + out.extend([x, xrec]) + if return_original_cond: + out.append(xc) + return out + + @torch.no_grad() + def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): + if predict_cids: + if z.dim() == 4: + z = torch.argmax(z.exp(), dim=1).long() + z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) + z = rearrange(z, 'b h w c -> b c h w').contiguous() + + z = 1. / self.scale_factor * z + + if hasattr(self, "split_input_params"): + if self.split_input_params["patch_distributed_vq"]: + ks = self.split_input_params["ks"] # eg. (128, 128) + stride = self.split_input_params["stride"] # eg. (64, 64) + uf = self.split_input_params["vqf"] + bs, nc, h, w = z.shape + if ks[0] > h or ks[1] > w: + ks = (min(ks[0], h), min(ks[1], w)) + print("reducing Kernel") + + if stride[0] > h or stride[1] > w: + stride = (min(stride[0], h), min(stride[1], w)) + print("reducing stride") + + fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) + + z = unfold(z) # (bn, nc * prod(**ks), L) + # 1. Reshape to img shape + z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) + + # 2. apply model loop over last dim + if isinstance(self.first_stage_model, VQModelInterface): + output_list = [self.first_stage_model.decode(z[:, :, :, :, i], + force_not_quantize=predict_cids or force_not_quantize) + for i in range(z.shape[-1])] + else: + + output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) + for i in range(z.shape[-1])] + + o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) + o = o * weighting + # Reverse 1. 
reshape to img shape + o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) + # stitch crops together + decoded = fold(o) + decoded = decoded / normalization # norm is shape (1, 1, h, w) + return decoded + else: + if isinstance(self.first_stage_model, VQModelInterface): + return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) + else: + return self.first_stage_model.decode(z) + + else: + if isinstance(self.first_stage_model, VQModelInterface): + return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) + else: + return self.first_stage_model.decode(z) + + # same as above but without decorator + def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): + if predict_cids: + if z.dim() == 4: + z = torch.argmax(z.exp(), dim=1).long() + z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) + z = rearrange(z, 'b h w c -> b c h w').contiguous() + + z = 1. / self.scale_factor * z + + if hasattr(self, "split_input_params"): + if self.split_input_params["patch_distributed_vq"]: + ks = self.split_input_params["ks"] # eg. (128, 128) + stride = self.split_input_params["stride"] # eg. (64, 64) + uf = self.split_input_params["vqf"] + bs, nc, h, w = z.shape + if ks[0] > h or ks[1] > w: + ks = (min(ks[0], h), min(ks[1], w)) + print("reducing Kernel") + + if stride[0] > h or stride[1] > w: + stride = (min(stride[0], h), min(stride[1], w)) + print("reducing stride") + + fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) + + z = unfold(z) # (bn, nc * prod(**ks), L) + # 1. Reshape to img shape + z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) + + # 2. apply model loop over last dim + if isinstance(self.first_stage_model, VQModelInterface): + output_list = [self.first_stage_model.decode(z[:, :, :, :, i], + force_not_quantize=predict_cids or force_not_quantize) + for i in range(z.shape[-1])] + else: + + output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) + for i in range(z.shape[-1])] + + o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) + o = o * weighting + # Reverse 1. reshape to img shape + o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) + # stitch crops together + decoded = fold(o) + decoded = decoded / normalization # norm is shape (1, 1, h, w) + return decoded + else: + if isinstance(self.first_stage_model, VQModelInterface): + return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) + else: + return self.first_stage_model.decode(z) + + else: + if isinstance(self.first_stage_model, VQModelInterface): + return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) + else: + return self.first_stage_model.decode(z) + + @torch.no_grad() + def encode_first_stage(self, x): + if hasattr(self, "split_input_params"): + if self.split_input_params["patch_distributed_vq"]: + ks = self.split_input_params["ks"] # eg. (128, 128) + stride = self.split_input_params["stride"] # eg. 
(64, 64) + df = self.split_input_params["vqf"] + self.split_input_params['original_image_size'] = x.shape[-2:] + bs, nc, h, w = x.shape + if ks[0] > h or ks[1] > w: + ks = (min(ks[0], h), min(ks[1], w)) + print("reducing Kernel") + + if stride[0] > h or stride[1] > w: + stride = (min(stride[0], h), min(stride[1], w)) + print("reducing stride") + + fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df) + z = unfold(x) # (bn, nc * prod(**ks), L) + # Reshape to img shape + z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) + + output_list = [self.first_stage_model.encode(z[:, :, :, :, i]) + for i in range(z.shape[-1])] + + o = torch.stack(output_list, axis=-1) + o = o * weighting + + # Reverse reshape to img shape + o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) + # stitch crops together + decoded = fold(o) + decoded = decoded / normalization + return decoded + + else: + return self.first_stage_model.encode(x) + else: + return self.first_stage_model.encode(x) + + def shared_step(self, batch, **kwargs): + x, c = self.get_input(batch, self.first_stage_key) + loss = self(x, c) + return loss + + def forward(self, x, c, *args, **kwargs): + t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() + if self.model.conditioning_key is not None: + assert c is not None + if self.cond_stage_trainable: + c = self.get_learned_conditioning(c) + if self.shorten_cond_schedule: # TODO: drop this option + tc = self.cond_ids[t].to(self.device) + c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) + return self.p_losses(x, c, t, *args, **kwargs) + + def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset + def rescale_bbox(bbox): + x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2]) + y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3]) + w = min(bbox[2] / crop_coordinates[2], 1 - x0) + h = min(bbox[3] / crop_coordinates[3], 1 - y0) + return x0, y0, w, h + + return [rescale_bbox(b) for b in bboxes] + + def apply_model(self, x_noisy, t, cond, return_ids=False): + + if isinstance(cond, dict): + # hybrid case, cond is expected to be a dict + pass + else: + if not isinstance(cond, list): + cond = [cond] + key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' + cond = {key: cond} + + if hasattr(self, "split_input_params"): + assert len(cond) == 1 # todo can only deal with one conditioning atm + assert not return_ids + ks = self.split_input_params["ks"] # eg. (128, 128) + stride = self.split_input_params["stride"] # eg. 
(64, 64) + + h, w = x_noisy.shape[-2:] + + fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride) + + z = unfold(x_noisy) # (bn, nc * prod(**ks), L) + # Reshape to img shape + z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) + z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])] + + if self.cond_stage_key in ["image", "LR_image", "segmentation", + 'bbox_img'] and self.model.conditioning_key: # todo check for completeness + c_key = next(iter(cond.keys())) # get key + c = next(iter(cond.values())) # get value + assert (len(c) == 1) # todo extend to list with more than one elem + c = c[0] # get element + + c = unfold(c) + c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L ) + + cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])] + + elif self.cond_stage_key == 'coordinates_bbox': + assert 'original_image_size' in self.split_input_params, 'BoundingBoxRescaling is missing original_image_size' + + # assuming padding of unfold is always 0 and its dilation is always 1 + n_patches_per_row = int((w - ks[0]) / stride[0] + 1) + full_img_h, full_img_w = self.split_input_params['original_image_size'] + # as we are operating on latents, we need the factor from the original image size to the + # spatial latent size to properly rescale the crops for regenerating the bbox annotations + num_downs = self.first_stage_model.encoder.num_resolutions - 1 + rescale_latent = 2 ** (num_downs) + + # get top left positions of patches as conforming for the bbox tokenizer, therefore we + # need to rescale the tl patch coordinates to be in between (0,1) + tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w, + rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h) + for patch_nr in range(z.shape[-1])] + + # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w) + patch_limits = [(x_tl, y_tl, + rescale_latent * ks[0] / full_img_w, + rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates] + # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates] + + # tokenize crop coordinates for the bounding boxes of the respective patches + patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None].to(self.device) + for bbox in patch_limits] # list of length l with tensors of shape (1, 2) + print(patch_limits_tknzd[0].shape) + # cut tknzd crop position from conditioning + assert isinstance(cond, dict), 'cond must be dict to be fed into model' + cut_cond = cond['c_crossattn'][0][..., :-2].to(self.device) + print(cut_cond.shape) + + adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd]) + adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n') + print(adapted_cond.shape) + adapted_cond = self.get_learned_conditioning(adapted_cond) + print(adapted_cond.shape) + adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1]) + print(adapted_cond.shape) + + cond_list = [{'c_crossattn': [e]} for e in adapted_cond] + + else: + cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient + + # apply model by loop over crops + output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])] + assert not isinstance(output_list[0], + tuple) # todo can't deal with multiple model outputs check this never happens + + o = torch.stack(output_list, 
axis=-1) + o = o * weighting + # Reverse reshape to img shape + o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) + # stitch crops together + x_recon = fold(o) / normalization + + else: + x_recon = self.model(x_noisy, t, **cond) + + if isinstance(x_recon, tuple) and not return_ids: + return x_recon[0] + else: + return x_recon + + def _predict_eps_from_xstart(self, x_t, t, pred_xstart): + return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ + extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) + + def _prior_bpd(self, x_start): + """ + Get the prior KL term for the variational lower-bound, measured in + bits-per-dim. + This term can't be optimized, as it only depends on the encoder. + :param x_start: the [N x C x ...] tensor of inputs. + :return: a batch of [N] KL values (in bits), one per batch element. + """ + batch_size = x_start.shape[0] + t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) + qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) + kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) + return mean_flat(kl_prior) / np.log(2.0) + + def p_losses(self, x_start, cond, t, noise=None): + noise = default(noise, lambda: torch.randn_like(x_start)) + x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) + model_output = self.apply_model(x_noisy, t, cond) + + loss_dict = {} + prefix = 'train' if self.training else 'val' + + if self.parameterization == "x0": + target = x_start + elif self.parameterization == "eps": + target = noise + else: + raise NotImplementedError() + + loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3]) + loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) + + logvar_t = self.logvar[t].to(self.device) + loss = loss_simple / torch.exp(logvar_t) + logvar_t + # loss = loss_simple / torch.exp(self.logvar) + self.logvar + if self.learn_logvar: + loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) + loss_dict.update({'logvar': self.logvar.data.mean()}) + + loss = self.l_simple_weight * loss.mean() + + loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3)) + loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() + loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) + loss += (self.original_elbo_weight * loss_vlb) + loss_dict.update({f'{prefix}/loss': loss}) + + return loss, loss_dict + + def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, + return_x0=False, score_corrector=None, corrector_kwargs=None): + t_in = t + model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) + + if score_corrector is not None: + assert self.parameterization == "eps" + model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) + + if return_codebook_ids: + model_out, logits = model_out + + if self.parameterization == "eps": + x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) + elif self.parameterization == "x0": + x_recon = model_out + else: + raise NotImplementedError() + + if clip_denoised: + x_recon.clamp_(-1., 1.) 
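+ # note: x_recon is the model's estimate of x_0; when clip_denoised is set it is clamped to [-1, 1] before the posterior q(x_{t-1} | x_t, x_0) is formed below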
+ if quantize_denoised: + x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) + model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) + if return_codebook_ids: + return model_mean, posterior_variance, posterior_log_variance, logits + elif return_x0: + return model_mean, posterior_variance, posterior_log_variance, x_recon + else: + return model_mean, posterior_variance, posterior_log_variance + + @torch.no_grad() + def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, + return_codebook_ids=False, quantize_denoised=False, return_x0=False, + temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None): + b, *_, device = *x.shape, x.device + outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, + return_codebook_ids=return_codebook_ids, + quantize_denoised=quantize_denoised, + return_x0=return_x0, + score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) + if return_codebook_ids: + raise DeprecationWarning("Support dropped.") + model_mean, _, model_log_variance, logits = outputs + elif return_x0: + model_mean, _, model_log_variance, x0 = outputs + else: + model_mean, _, model_log_variance = outputs + + noise = noise_like(x.shape, device, repeat_noise) * temperature + if noise_dropout > 0.: + noise = torch.nn.functional.dropout(noise, p=noise_dropout) + # no noise when t == 0 + nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) + + if return_codebook_ids: + return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) + if return_x0: + return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 + else: + return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise + + @torch.no_grad() + def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False, + img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., + score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, + log_every_t=None): + if not log_every_t: + log_every_t = self.log_every_t + timesteps = self.num_timesteps + if batch_size is not None: + b = batch_size if batch_size is not None else shape[0] + shape = [batch_size] + list(shape) + else: + b = batch_size = shape[0] + if x_T is None: + img = torch.randn(shape, device=self.device) + else: + img = x_T + intermediates = [] + if cond is not None: + if isinstance(cond, dict): + cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else + list(map(lambda x: x[:batch_size], cond[key])) for key in cond} + else: + cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] + + if start_T is not None: + timesteps = min(timesteps, start_T) + iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', + total=timesteps) if verbose else reversed( + range(0, timesteps)) + if type(temperature) == float: + temperature = [temperature] * timesteps + + for i in iterator: + ts = torch.full((b,), i, device=self.device, dtype=torch.long) + if self.shorten_cond_schedule: + assert self.model.conditioning_key != 'hybrid' + tc = self.cond_ids[ts].to(cond.device) + cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) + + img, x0_partial = self.p_sample(img, cond, ts, + clip_denoised=self.clip_denoised, + quantize_denoised=quantize_denoised, return_x0=True, + temperature=temperature[i], noise_dropout=noise_dropout, + 
score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) + if mask is not None: + assert x0 is not None + img_orig = self.q_sample(x0, ts) + img = img_orig * mask + (1. - mask) * img + + if i % log_every_t == 0 or i == timesteps - 1: + intermediates.append(x0_partial) + if callback: callback(i) + if img_callback: img_callback(img, i) + return img, intermediates + + @torch.no_grad() + def p_sample_loop(self, cond, shape, return_intermediates=False, + x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, + mask=None, x0=None, img_callback=None, start_T=None, + log_every_t=None): + + if not log_every_t: + log_every_t = self.log_every_t + device = self.betas.device + b = shape[0] + if x_T is None: + img = torch.randn(shape, device=device) + else: + img = x_T + + intermediates = [img] + if timesteps is None: + timesteps = self.num_timesteps + + if start_T is not None: + timesteps = min(timesteps, start_T) + iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( + range(0, timesteps)) + + if mask is not None: + assert x0 is not None + assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match + + for i in iterator: + ts = torch.full((b,), i, device=device, dtype=torch.long) + if self.shorten_cond_schedule: + assert self.model.conditioning_key != 'hybrid' + tc = self.cond_ids[ts].to(cond.device) + cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) + + img = self.p_sample(img, cond, ts, + clip_denoised=self.clip_denoised, + quantize_denoised=quantize_denoised) + if mask is not None: + img_orig = self.q_sample(x0, ts) + img = img_orig * mask + (1. - mask) * img + + if i % log_every_t == 0 or i == timesteps - 1: + intermediates.append(img) + if callback: callback(i) + if img_callback: img_callback(img, i) + + if return_intermediates: + return img, intermediates + return img + + @torch.no_grad() + def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, + verbose=True, timesteps=None, quantize_denoised=False, + mask=None, x0=None, shape=None,**kwargs): + if shape is None: + shape = (batch_size, self.channels, self.image_size, self.image_size) + if cond is not None: + if isinstance(cond, dict): + cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else + list(map(lambda x: x[:batch_size], cond[key])) for key in cond} + else: + cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] + return self.p_sample_loop(cond, + shape, + return_intermediates=return_intermediates, x_T=x_T, + verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, + mask=mask, x0=x0) + + @torch.no_grad() + def sample_log(self,cond,batch_size,ddim, ddim_steps,**kwargs): + + if ddim: + ddim_sampler = DDIMSampler(self) + shape = (self.channels, self.image_size, self.image_size) + samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size, + shape,cond,verbose=False,**kwargs) + + else: + samples, intermediates = self.sample(cond=cond, batch_size=batch_size, + return_intermediates=True,**kwargs) + + return samples, intermediates + + + @torch.no_grad() + def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, + quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, + plot_diffusion_rows=True, **kwargs): + + use_ddim = ddim_steps is not None + + log = dict() + z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, + 
return_first_stage_outputs=True, + force_c_encode=True, + return_original_cond=True, + bs=N) + N = min(x.shape[0], N) + n_row = min(x.shape[0], n_row) + log["inputs"] = x + log["reconstruction"] = xrec + if self.model.conditioning_key is not None: + if hasattr(self.cond_stage_model, "decode"): + xc = self.cond_stage_model.decode(c) + log["conditioning"] = xc + elif self.cond_stage_key in ["caption"]: + xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["caption"]) + log["conditioning"] = xc + elif self.cond_stage_key == 'class_label': + xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"]) + log['conditioning'] = xc + elif isimage(xc): + log["conditioning"] = xc + if ismap(xc): + log["original_conditioning"] = self.to_rgb(xc) + + if plot_diffusion_rows: + # get diffusion row + diffusion_row = list() + z_start = z[:n_row] + for t in range(self.num_timesteps): + if t % self.log_every_t == 0 or t == self.num_timesteps - 1: + t = repeat(torch.tensor([t]), '1 -> b', b=n_row) + t = t.to(self.device).long() + noise = torch.randn_like(z_start) + z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) + diffusion_row.append(self.decode_first_stage(z_noisy)) + + diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W + diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') + diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') + diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) + log["diffusion_row"] = diffusion_grid + + if sample: + # get denoise row + with self.ema_scope("Plotting"): + samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, + ddim_steps=ddim_steps,eta=ddim_eta) + # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) + x_samples = self.decode_first_stage(samples) + log["samples"] = x_samples + if plot_denoise_rows: + denoise_grid = self._get_denoise_row_from_list(z_denoise_row) + log["denoise_row"] = denoise_grid + + if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance( + self.first_stage_model, IdentityFirstStage): + # also display when quantizing x0 while sampling + with self.ema_scope("Plotting Quantized Denoised"): + samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, + ddim_steps=ddim_steps,eta=ddim_eta, + quantize_denoised=True) + # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True, + # quantize_denoised=True) + x_samples = self.decode_first_stage(samples.to(self.device)) + log["samples_x0_quantized"] = x_samples + + if inpaint: + # make a simple center square + b, h, w = z.shape[0], z.shape[2], z.shape[3] + mask = torch.ones(N, h, w).to(self.device) + # zeros will be filled in + mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0. + mask = mask[:, None, ...] 
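+ # mask convention: 1 keeps the original latent, 0 marks the centered square to be synthesized; p_sample_loop blends via img_orig * mask + (1 - mask) * img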
+ with self.ema_scope("Plotting Inpaint"): + + samples, _ = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, eta=ddim_eta, + ddim_steps=ddim_steps, x0=z[:N], mask=mask) + x_samples = self.decode_first_stage(samples.to(self.device)) + log["samples_inpainting"] = x_samples + log["mask"] = mask + + # outpaint + with self.ema_scope("Plotting Outpaint"): + samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,eta=ddim_eta, + ddim_steps=ddim_steps, x0=z[:N], mask=mask) + x_samples = self.decode_first_stage(samples.to(self.device)) + log["samples_outpainting"] = x_samples + + if plot_progressive_rows: + with self.ema_scope("Plotting Progressives"): + img, progressives = self.progressive_denoising(c, + shape=(self.channels, self.image_size, self.image_size), + batch_size=N) + prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") + log["progressive_row"] = prog_row + + if return_keys: + if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: + return log + else: + return {key: log[key] for key in return_keys} + return log + + def configure_optimizers(self): + lr = self.learning_rate + params = list(self.model.parameters()) + if self.cond_stage_trainable: + print(f"{self.__class__.__name__}: Also optimizing conditioner params!") + params = params + list(self.cond_stage_model.parameters()) + if self.learn_logvar: + print('Diffusion model optimizing logvar') + params.append(self.logvar) + opt = torch.optim.AdamW(params, lr=lr) + if self.use_scheduler: + assert 'target' in self.scheduler_config + scheduler = instantiate_from_config(self.scheduler_config) + + print("Setting up LambdaLR scheduler...") + scheduler = [ + { + 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule), + 'interval': 'step', + 'frequency': 1 + }] + return [opt], scheduler + return opt + + @torch.no_grad() + def to_rgb(self, x): + x = x.float() + if not hasattr(self, "colorize"): + self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x) + x = nn.functional.conv2d(x, weight=self.colorize) + x = 2. * (x - x.min()) / (x.max() - x.min()) - 1. 
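+ # the random 1x1 conv above projects an arbitrary channel count down to 3 for visualization; it is created once and cached on self, so the same projection is reused on later calls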
+ return x + + +class DiffusionWrapperV1(pl.LightningModule): + def __init__(self, diff_model_config, conditioning_key): + super().__init__() + self.diffusion_model = instantiate_from_config(diff_model_config) + self.conditioning_key = conditioning_key + assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm'] + + def forward(self, x, t, c_concat: list = None, c_crossattn: list = None): + if self.conditioning_key is None: + out = self.diffusion_model(x, t) + elif self.conditioning_key == 'concat': + xc = torch.cat([x] + c_concat, dim=1) + out = self.diffusion_model(xc, t) + elif self.conditioning_key == 'crossattn': + cc = torch.cat(c_crossattn, 1) + out = self.diffusion_model(x, t, context=cc) + elif self.conditioning_key == 'hybrid': + xc = torch.cat([x] + c_concat, dim=1) + cc = torch.cat(c_crossattn, 1) + out = self.diffusion_model(xc, t, context=cc) + elif self.conditioning_key == 'adm': + cc = c_crossattn[0] + out = self.diffusion_model(x, t, y=cc) + else: + raise NotImplementedError() + + return out + + +class Layout2ImgDiffusionV1(LatentDiffusionV1): + # TODO: move all layout-specific hacks to this class + def __init__(self, cond_stage_key, *args, **kwargs): + assert cond_stage_key == 'coordinates_bbox', 'Layout2ImgDiffusion only for cond_stage_key="coordinates_bbox"' + super().__init__(cond_stage_key=cond_stage_key, *args, **kwargs) + + def log_images(self, batch, N=8, *args, **kwargs): + logs = super().log_images(batch=batch, N=N, *args, **kwargs) + + key = 'train' if self.training else 'validation' + dset = self.trainer.datamodule.datasets[key] + mapper = dset.conditional_builders[self.cond_stage_key] + + bbox_imgs = [] + map_fn = lambda catno: dset.get_textual_label(dset.get_category_id(catno)) + for tknzd_bbox in batch[self.cond_stage_key][:N]: + bboximg = mapper.plot(tknzd_bbox.detach().cpu(), map_fn, (256, 256)) + bbox_imgs.append(bboximg) + + cond_img = torch.stack(bbox_imgs, dim=0) + logs['bbox_image'] = cond_img + return logs + +setattr(ldm.models.diffusion.ddpm, "DDPMV1", DDPMV1) +setattr(ldm.models.diffusion.ddpm, "LatentDiffusionV1", LatentDiffusionV1) +setattr(ldm.models.diffusion.ddpm, "DiffusionWrapperV1", DiffusionWrapperV1) +setattr(ldm.models.diffusion.ddpm, "Layout2ImgDiffusionV1", Layout2ImgDiffusionV1) diff --git a/extensions-builtin/ScuNET/preload.py b/extensions-builtin/ScuNET/preload.py new file mode 100644 index 0000000000000000000000000000000000000000..f12c5b90ed2984ef16d8d8dd30d1ebef34cbf7c3 --- /dev/null +++ b/extensions-builtin/ScuNET/preload.py @@ -0,0 +1,6 @@ +import os +from modules import paths + + +def preload(parser): + parser.add_argument("--scunet-models-path", type=str, help="Path to directory with ScuNET model file(s).", default=os.path.join(paths.models_path, 'ScuNET')) diff --git a/extensions-builtin/ScuNET/scripts/scunet_model.py b/extensions-builtin/ScuNET/scripts/scunet_model.py new file mode 100644 index 0000000000000000000000000000000000000000..e0fbf3a33747f447d396dd0d564e92c904cfabac --- /dev/null +++ b/extensions-builtin/ScuNET/scripts/scunet_model.py @@ -0,0 +1,87 @@ +import os.path +import sys +import traceback + +import PIL.Image +import numpy as np +import torch +from basicsr.utils.download_util import load_file_from_url + +import modules.upscaler +from modules import devices, modelloader +from scunet_model_arch import SCUNet as net + + +class UpscalerScuNET(modules.upscaler.Upscaler): + def __init__(self, dirname): + self.name = "ScuNET" + self.model_name = "ScuNET GAN" + self.model_name2 = "ScuNET PSNR" + 
self.model_url = "https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_gan.pth" + self.model_url2 = "https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_psnr.pth" + self.user_path = dirname + super().__init__() + model_paths = self.find_models(ext_filter=[".pth"]) + scalers = [] + add_model2 = True + for file in model_paths: + if "http" in file: + name = self.model_name + else: + name = modelloader.friendly_name(file) + if name == self.model_name2 or file == self.model_url2: + add_model2 = False + try: + scaler_data = modules.upscaler.UpscalerData(name, file, self, 4) + scalers.append(scaler_data) + except Exception: + print(f"Error loading ScuNET model: {file}", file=sys.stderr) + print(traceback.format_exc(), file=sys.stderr) + if add_model2: + scaler_data2 = modules.upscaler.UpscalerData(self.model_name2, self.model_url2, self) + scalers.append(scaler_data2) + self.scalers = scalers + + def do_upscale(self, img: PIL.Image, selected_file): + torch.cuda.empty_cache() + + model = self.load_model(selected_file) + if model is None: + return img + + device = devices.get_device_for('scunet') + img = np.array(img) + img = img[:, :, ::-1] + img = np.moveaxis(img, 2, 0) / 255 + img = torch.from_numpy(img).float() + img = img.unsqueeze(0).to(device) + + with torch.no_grad(): + output = model(img) + output = output.squeeze().float().cpu().clamp_(0, 1).numpy() + output = 255. * np.moveaxis(output, 0, 2) + output = output.astype(np.uint8) + output = output[:, :, ::-1] + torch.cuda.empty_cache() + return PIL.Image.fromarray(output, 'RGB') + + def load_model(self, path: str): + device = devices.get_device_for('scunet') + if "http" in path: + filename = load_file_from_url(url=self.model_url, model_dir=self.model_path, file_name="%s.pth" % self.name, + progress=True) + else: + filename = path + if filename is None or not os.path.exists(os.path.join(self.model_path, filename)): + print(f"ScuNET: Unable to load model from {filename}", file=sys.stderr) + return None + + model = net(in_nc=3, config=[4, 4, 4, 4, 4, 4, 4], dim=64) + model.load_state_dict(torch.load(filename), strict=True) + model.eval() + for k, v in model.named_parameters(): + v.requires_grad = False + model = model.to(device) + + return model + diff --git a/extensions-builtin/ScuNET/scunet_model_arch.py b/extensions-builtin/ScuNET/scunet_model_arch.py new file mode 100644 index 0000000000000000000000000000000000000000..43ca8d36fe57a12dcad58e8b06ee2e0774494b0e --- /dev/null +++ b/extensions-builtin/ScuNET/scunet_model_arch.py @@ -0,0 +1,265 @@ +# -*- coding: utf-8 -*- +import numpy as np +import torch +import torch.nn as nn +from einops import rearrange +from einops.layers.torch import Rearrange +from timm.models.layers import trunc_normal_, DropPath + + +class WMSA(nn.Module): + """ Self-attention module in Swin Transformer + """ + + def __init__(self, input_dim, output_dim, head_dim, window_size, type): + super(WMSA, self).__init__() + self.input_dim = input_dim + self.output_dim = output_dim + self.head_dim = head_dim + self.scale = self.head_dim ** -0.5 + self.n_heads = input_dim // head_dim + self.window_size = window_size + self.type = type + self.embedding_layer = nn.Linear(self.input_dim, 3 * self.input_dim, bias=True) + + self.relative_position_params = nn.Parameter( + torch.zeros((2 * window_size - 1) * (2 * window_size - 1), self.n_heads)) + + self.linear = nn.Linear(self.input_dim, self.output_dim) + + trunc_normal_(self.relative_position_params, std=.02) + self.relative_position_params = 
torch.nn.Parameter( + self.relative_position_params.view(2 * window_size - 1, 2 * window_size - 1, self.n_heads).transpose(1, + 2).transpose( + 0, 1)) + + def generate_mask(self, h, w, p, shift): + """ generating the mask of SW-MSA + Args: + shift: shift parameters in CyclicShift. + Returns: + attn_mask: should be (1 1 w p p), + """ + # supporting square. + attn_mask = torch.zeros(h, w, p, p, p, p, dtype=torch.bool, device=self.relative_position_params.device) + if self.type == 'W': + return attn_mask + + s = p - shift + attn_mask[-1, :, :s, :, s:, :] = True + attn_mask[-1, :, s:, :, :s, :] = True + attn_mask[:, -1, :, :s, :, s:] = True + attn_mask[:, -1, :, s:, :, :s] = True + attn_mask = rearrange(attn_mask, 'w1 w2 p1 p2 p3 p4 -> 1 1 (w1 w2) (p1 p2) (p3 p4)') + return attn_mask + + def forward(self, x): + """ Forward pass of Window Multi-head Self-attention module. + Args: + x: input tensor with shape of [b h w c]; + attn_mask: attention mask, fill -inf where the value is True; + Returns: + output: tensor shape [b h w c] + """ + if self.type != 'W': x = torch.roll(x, shifts=(-(self.window_size // 2), -(self.window_size // 2)), dims=(1, 2)) + x = rearrange(x, 'b (w1 p1) (w2 p2) c -> b w1 w2 p1 p2 c', p1=self.window_size, p2=self.window_size) + h_windows = x.size(1) + w_windows = x.size(2) + # square validation + # assert h_windows == w_windows + + x = rearrange(x, 'b w1 w2 p1 p2 c -> b (w1 w2) (p1 p2) c', p1=self.window_size, p2=self.window_size) + qkv = self.embedding_layer(x) + q, k, v = rearrange(qkv, 'b nw np (threeh c) -> threeh b nw np c', c=self.head_dim).chunk(3, dim=0) + sim = torch.einsum('hbwpc,hbwqc->hbwpq', q, k) * self.scale + # Adding learnable relative embedding + sim = sim + rearrange(self.relative_embedding(), 'h p q -> h 1 1 p q') + # Using Attn Mask to distinguish different subwindows. + if self.type != 'W': + attn_mask = self.generate_mask(h_windows, w_windows, self.window_size, shift=self.window_size // 2) + sim = sim.masked_fill_(attn_mask, float("-inf")) + + probs = nn.functional.softmax(sim, dim=-1) + output = torch.einsum('hbwij,hbwjc->hbwic', probs, v) + output = rearrange(output, 'h b w p c -> b w p (h c)') + output = self.linear(output) + output = rearrange(output, 'b (w1 w2) (p1 p2) c -> b (w1 p1) (w2 p2) c', w1=h_windows, p1=self.window_size) + + if self.type != 'W': output = torch.roll(output, shifts=(self.window_size // 2, self.window_size // 2), + dims=(1, 2)) + return output + + def relative_embedding(self): + cord = torch.tensor(np.array([[i, j] for i in range(self.window_size) for j in range(self.window_size)])) + relation = cord[:, None, :] - cord[None, :, :] + self.window_size - 1 + # negative is allowed + return self.relative_position_params[:, relation[:, :, 0].long(), relation[:, :, 1].long()] + + +class Block(nn.Module): + def __init__(self, input_dim, output_dim, head_dim, window_size, drop_path, type='W', input_resolution=None): + """ SwinTransformer Block + """ + super(Block, self).__init__() + self.input_dim = input_dim + self.output_dim = output_dim + assert type in ['W', 'SW'] + self.type = type + if input_resolution <= window_size: + self.type = 'W' + + self.ln1 = nn.LayerNorm(input_dim) + self.msa = WMSA(input_dim, input_dim, head_dim, window_size, self.type) + self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() + self.ln2 = nn.LayerNorm(input_dim) + self.mlp = nn.Sequential( + nn.Linear(input_dim, 4 * input_dim), + nn.GELU(), + nn.Linear(4 * input_dim, output_dim), + ) + + def forward(self, x): + x = x + self.drop_path(self.msa(self.ln1(x))) + x = x + self.drop_path(self.mlp(self.ln2(x))) + return x + + +class ConvTransBlock(nn.Module): + def __init__(self, conv_dim, trans_dim, head_dim, window_size, drop_path, type='W', input_resolution=None): + """ SwinTransformer and Conv Block + """ + super(ConvTransBlock, self).__init__() + self.conv_dim = conv_dim + self.trans_dim = trans_dim + self.head_dim = head_dim + self.window_size = window_size + self.drop_path = drop_path + self.type = type + self.input_resolution = input_resolution + + assert self.type in ['W', 'SW'] + if self.input_resolution <= self.window_size: + self.type = 'W' + + self.trans_block = Block(self.trans_dim, self.trans_dim, self.head_dim, self.window_size, self.drop_path, + self.type, self.input_resolution) + self.conv1_1 = nn.Conv2d(self.conv_dim + self.trans_dim, self.conv_dim + self.trans_dim, 1, 1, 0, bias=True) + self.conv1_2 = nn.Conv2d(self.conv_dim + self.trans_dim, self.conv_dim + self.trans_dim, 1, 1, 0, bias=True) + + self.conv_block = nn.Sequential( + nn.Conv2d(self.conv_dim, self.conv_dim, 3, 1, 1, bias=False), + nn.ReLU(True), + nn.Conv2d(self.conv_dim, self.conv_dim, 3, 1, 1, bias=False) + ) + + def forward(self, x): + conv_x, trans_x = torch.split(self.conv1_1(x), (self.conv_dim, self.trans_dim), dim=1) + conv_x = self.conv_block(conv_x) + conv_x + trans_x = Rearrange('b c h w -> b h w c')(trans_x) + trans_x = self.trans_block(trans_x) + trans_x = Rearrange('b h w c -> b c h w')(trans_x) + res = self.conv1_2(torch.cat((conv_x, trans_x), dim=1)) + x = x + res + + return x + + +class SCUNet(nn.Module): + # def __init__(self, in_nc=3, config=[2, 2, 2, 2, 2, 2, 2], dim=64, drop_path_rate=0.0, input_resolution=256): + def __init__(self, in_nc=3, config=None, dim=64, drop_path_rate=0.0, input_resolution=256): + super(SCUNet, self).__init__() + if config is None: + config = [2, 2, 2, 2, 2, 2, 2] + self.config = config + self.dim = dim + self.head_dim = 32 + self.window_size = 8 + + # drop path rate for each layer + dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(config))] + + self.m_head = [nn.Conv2d(in_nc, dim, 3, 1, 1, bias=False)] + + begin = 0 + self.m_down1 = [ConvTransBlock(dim // 2, dim // 2, self.head_dim, self.window_size, dpr[i + begin], + 'W' if not i % 2 else 'SW', input_resolution) + for i in range(config[0])] + \ + [nn.Conv2d(dim, 2 * dim, 2, 2, 0, bias=False)] + + begin += config[0] + self.m_down2 = [ConvTransBlock(dim, dim, self.head_dim, self.window_size, dpr[i + begin], + 'W' if not i % 2 else 'SW', input_resolution // 2) + for i in range(config[1])] + \ + [nn.Conv2d(2 * dim, 4 * dim, 2, 2, 0, bias=False)] + + begin += config[1] + self.m_down3 = [ConvTransBlock(2 * dim, 2 * dim, self.head_dim, self.window_size, dpr[i + begin], + 'W' if not i % 2 else 'SW', input_resolution // 4) + for i in range(config[2])] + \ + [nn.Conv2d(4 * dim, 8 * dim, 2, 2, 0, bias=False)] + + begin += config[2] + self.m_body = [ConvTransBlock(4 * dim, 4 * dim, self.head_dim, self.window_size, dpr[i + begin], + 'W' if not i % 2 else 'SW', input_resolution // 8) + for i in range(config[3])] + + begin += config[3] + self.m_up3 = [nn.ConvTranspose2d(8 * dim, 4 * dim, 2, 2, 0, bias=False), ] + \ + [ConvTransBlock(2 * dim, 2 * dim, self.head_dim, self.window_size, dpr[i + begin], + 'W' if not i % 2 
else 'SW', input_resolution // 4) + for i in range(config[4])] + + begin += config[4] + self.m_up2 = [nn.ConvTranspose2d(4 * dim, 2 * dim, 2, 2, 0, bias=False), ] + \ + [ConvTransBlock(dim, dim, self.head_dim, self.window_size, dpr[i + begin], + 'W' if not i % 2 else 'SW', input_resolution // 2) + for i in range(config[5])] + + begin += config[5] + self.m_up1 = [nn.ConvTranspose2d(2 * dim, dim, 2, 2, 0, bias=False), ] + \ + [ConvTransBlock(dim // 2, dim // 2, self.head_dim, self.window_size, dpr[i + begin], + 'W' if not i % 2 else 'SW', input_resolution) + for i in range(config[6])] + + self.m_tail = [nn.Conv2d(dim, in_nc, 3, 1, 1, bias=False)] + + self.m_head = nn.Sequential(*self.m_head) + self.m_down1 = nn.Sequential(*self.m_down1) + self.m_down2 = nn.Sequential(*self.m_down2) + self.m_down3 = nn.Sequential(*self.m_down3) + self.m_body = nn.Sequential(*self.m_body) + self.m_up3 = nn.Sequential(*self.m_up3) + self.m_up2 = nn.Sequential(*self.m_up2) + self.m_up1 = nn.Sequential(*self.m_up1) + self.m_tail = nn.Sequential(*self.m_tail) + # self.apply(self._init_weights) + + def forward(self, x0): + + h, w = x0.size()[-2:] + paddingBottom = int(np.ceil(h / 64) * 64 - h) + paddingRight = int(np.ceil(w / 64) * 64 - w) + x0 = nn.ReplicationPad2d((0, paddingRight, 0, paddingBottom))(x0) + + x1 = self.m_head(x0) + x2 = self.m_down1(x1) + x3 = self.m_down2(x2) + x4 = self.m_down3(x3) + x = self.m_body(x4) + x = self.m_up3(x + x4) + x = self.m_up2(x + x3) + x = self.m_up1(x + x2) + x = self.m_tail(x + x1) + + x = x[..., :h, :w] + + return x + + def _init_weights(self, m): + if isinstance(m, nn.Linear): + trunc_normal_(m.weight, std=.02) + if m.bias is not None: + nn.init.constant_(m.bias, 0) + elif isinstance(m, nn.LayerNorm): + nn.init.constant_(m.bias, 0) + nn.init.constant_(m.weight, 1.0) \ No newline at end of file diff --git a/extensions-builtin/SwinIR/preload.py b/extensions-builtin/SwinIR/preload.py new file mode 100644 index 0000000000000000000000000000000000000000..567e44bcaa1d40ca5dabed7744c94c5b2c68e87f --- /dev/null +++ b/extensions-builtin/SwinIR/preload.py @@ -0,0 +1,6 @@ +import os +from modules import paths + + +def preload(parser): + parser.add_argument("--swinir-models-path", type=str, help="Path to directory with SwinIR model file(s).", default=os.path.join(paths.models_path, 'SwinIR')) diff --git a/extensions-builtin/SwinIR/scripts/swinir_model.py b/extensions-builtin/SwinIR/scripts/swinir_model.py new file mode 100644 index 0000000000000000000000000000000000000000..9a74b253851bb7e5235a5cd81730f9be8b6c9cb0 --- /dev/null +++ b/extensions-builtin/SwinIR/scripts/swinir_model.py @@ -0,0 +1,172 @@ +import contextlib +import os + +import numpy as np +import torch +from PIL import Image +from basicsr.utils.download_util import load_file_from_url +from tqdm import tqdm + +from modules import modelloader, devices, script_callbacks, shared +from modules.shared import cmd_opts, opts +from swinir_model_arch import SwinIR as net +from swinir_model_arch_v2 import Swin2SR as net2 +from modules.upscaler import Upscaler, UpscalerData + + +device_swinir = devices.get_device_for('swinir') + + +class UpscalerSwinIR(Upscaler): + def __init__(self, dirname): + self.name = "SwinIR" + self.model_url = "https://github.com/JingyunLiang/SwinIR/releases/download/v0.0" \ + "/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR" \ + "-L_x4_GAN.pth " + self.model_name = "SwinIR 4x" + self.user_path = dirname + super().__init__() + scalers = [] + model_files = self.find_models(ext_filter=[".pt", ".pth"]) + for model in 
model_files:
+            if "http" in model:
+                name = self.model_name
+            else:
+                name = modelloader.friendly_name(model)
+            model_data = UpscalerData(name, model, self)
+            scalers.append(model_data)
+        self.scalers = scalers
+
+    def do_upscale(self, img, model_file):
+        model = self.load_model(model_file)
+        if model is None:
+            return img
+        model = model.to(device_swinir, dtype=devices.dtype)
+        img = upscale(img, model)
+        try:
+            torch.cuda.empty_cache()
+        except Exception:
+            pass
+        return img
+
+    def load_model(self, path, scale=4):
+        if "http" in path:
+            dl_name = "%s%s" % (self.model_name.replace(" ", "_"), ".pth")
+            filename = load_file_from_url(url=path, model_dir=self.model_path, file_name=dl_name, progress=True)
+        else:
+            filename = path
+        if filename is None or not os.path.exists(filename):
+            return None
+        if filename.endswith(".v2.pth"):
+            model = net2(
+                upscale=scale,
+                in_chans=3,
+                img_size=64,
+                window_size=8,
+                img_range=1.0,
+                depths=[6, 6, 6, 6, 6, 6],
+                embed_dim=180,
+                num_heads=[6, 6, 6, 6, 6, 6],
+                mlp_ratio=2,
+                upsampler="nearest+conv",
+                resi_connection="1conv",
+            )
+            params = None
+        else:
+            model = net(
+                upscale=scale,
+                in_chans=3,
+                img_size=64,
+                window_size=8,
+                img_range=1.0,
+                depths=[6, 6, 6, 6, 6, 6, 6, 6, 6],
+                embed_dim=240,
+                num_heads=[8, 8, 8, 8, 8, 8, 8, 8, 8],
+                mlp_ratio=2,
+                upsampler="nearest+conv",
+                resi_connection="3conv",
+            )
+            params = "params_ema"
+
+        pretrained_model = torch.load(filename)
+        if params is not None:
+            model.load_state_dict(pretrained_model[params], strict=True)
+        else:
+            model.load_state_dict(pretrained_model, strict=True)
+        return model
+
+
+def upscale(
+        img,
+        model,
+        tile=None,
+        tile_overlap=None,
+        window_size=8,
+        scale=4,
+):
+    tile = tile or opts.SWIN_tile
+    tile_overlap = tile_overlap or opts.SWIN_tile_overlap
+
+    img = np.array(img)
+    img = img[:, :, ::-1]
+    img = np.moveaxis(img, 2, 0) / 255
+    img = torch.from_numpy(img).float()
+    img = img.unsqueeze(0).to(device_swinir, dtype=devices.dtype)
+    with torch.no_grad(), devices.autocast():
+        _, _, h_old, w_old = img.size()
+        h_pad = (h_old // window_size + 1) * window_size - h_old
+        w_pad = (w_old // window_size + 1) * window_size - w_old
+        img = torch.cat([img, torch.flip(img, [2])], 2)[:, :, : h_old + h_pad, :]
+        img = torch.cat([img, torch.flip(img, [3])], 3)[:, :, :, : w_old + w_pad]
+        output = inference(img, model, tile, tile_overlap, window_size, scale)
+        output = output[..., : h_old * scale, : w_old * scale]
+        output = output.data.squeeze().float().cpu().clamp_(0, 1).numpy()
+        if output.ndim == 3:
+            output = np.transpose(
+                output[[2, 1, 0], :, :], (1, 2, 0)
+            )  # CHW-BGR to HWC-RGB
+        output = (output * 255.0).round().astype(np.uint8)  # float32 to uint8
+        return Image.fromarray(output, "RGB")
+
+
+def inference(img, model, tile, tile_overlap, window_size, scale):
+    # test the image tile by tile
+    b, c, h, w = img.size()
+    tile = min(tile, h, w)
+    assert tile % window_size == 0, "tile size should be a multiple of window_size"
+    sf = scale
+
+    stride = tile - tile_overlap
+    h_idx_list = list(range(0, h - tile, stride)) + [h - tile]
+    w_idx_list = list(range(0, w - tile, stride)) + [w - tile]
+    E = torch.zeros(b, c, h * sf, w * sf, dtype=devices.dtype, device=device_swinir).type_as(img)
+    W = torch.zeros_like(E, dtype=devices.dtype, device=device_swinir)
+
+    with tqdm(total=len(h_idx_list) * len(w_idx_list), desc="SwinIR tiles") as pbar:
+        for h_idx in h_idx_list:
+            for w_idx in w_idx_list:
+                in_patch = img[..., h_idx: h_idx + tile, w_idx: w_idx + tile]
+                out_patch = model(in_patch)
out_patch_mask = torch.ones_like(out_patch) + + E[ + ..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf + ].add_(out_patch) + W[ + ..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf + ].add_(out_patch_mask) + pbar.update(1) + output = E.div_(W) + + return output + + +def on_ui_settings(): + import gradio as gr + + shared.opts.add_option("SWIN_tile", shared.OptionInfo(192, "Tile size for all SwinIR.", gr.Slider, {"minimum": 16, "maximum": 512, "step": 16}, section=('upscaling', "Upscaling"))) + shared.opts.add_option("SWIN_tile_overlap", shared.OptionInfo(8, "Tile overlap, in pixels for SwinIR. Low values = visible seam.", gr.Slider, {"minimum": 0, "maximum": 48, "step": 1}, section=('upscaling', "Upscaling"))) + + +script_callbacks.on_ui_settings(on_ui_settings) diff --git a/extensions-builtin/SwinIR/swinir_model_arch.py b/extensions-builtin/SwinIR/swinir_model_arch.py new file mode 100644 index 0000000000000000000000000000000000000000..863f42db6f50e5eac70931b8c0e6443f831a6018 --- /dev/null +++ b/extensions-builtin/SwinIR/swinir_model_arch.py @@ -0,0 +1,867 @@ +# ----------------------------------------------------------------------------------- +# SwinIR: Image Restoration Using Swin Transformer, https://arxiv.org/abs/2108.10257 +# Originally Written by Ze Liu, Modified by Jingyun Liang. +# ----------------------------------------------------------------------------------- + +import math +import torch +import torch.nn as nn +import torch.nn.functional as F +import torch.utils.checkpoint as checkpoint +from timm.models.layers import DropPath, to_2tuple, trunc_normal_ + + +class Mlp(nn.Module): + def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): + super().__init__() + out_features = out_features or in_features + hidden_features = hidden_features or in_features + self.fc1 = nn.Linear(in_features, hidden_features) + self.act = act_layer() + self.fc2 = nn.Linear(hidden_features, out_features) + self.drop = nn.Dropout(drop) + + def forward(self, x): + x = self.fc1(x) + x = self.act(x) + x = self.drop(x) + x = self.fc2(x) + x = self.drop(x) + return x + + +def window_partition(x, window_size): + """ + Args: + x: (B, H, W, C) + window_size (int): window size + + Returns: + windows: (num_windows*B, window_size, window_size, C) + """ + B, H, W, C = x.shape + x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) + windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) + return windows + + +def window_reverse(windows, window_size, H, W): + """ + Args: + windows: (num_windows*B, window_size, window_size, C) + window_size (int): Window size + H (int): Height of image + W (int): Width of image + + Returns: + x: (B, H, W, C) + """ + B = int(windows.shape[0] / (H * W / window_size / window_size)) + x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) + x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) + return x + + +class WindowAttention(nn.Module): + r""" Window based multi-head self attention (W-MSA) module with relative position bias. + It supports both of shifted and non-shifted window. + + Args: + dim (int): Number of input channels. + window_size (tuple[int]): The height and width of the window. + num_heads (int): Number of attention heads. + qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. 
Default: True + qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set + attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 + proj_drop (float, optional): Dropout ratio of output. Default: 0.0 + """ + + def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): + + super().__init__() + self.dim = dim + self.window_size = window_size # Wh, Ww + self.num_heads = num_heads + head_dim = dim // num_heads + self.scale = qk_scale or head_dim ** -0.5 + + # define a parameter table of relative position bias + self.relative_position_bias_table = nn.Parameter( + torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH + + # get pair-wise relative position index for each token inside the window + coords_h = torch.arange(self.window_size[0]) + coords_w = torch.arange(self.window_size[1]) + coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww + coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww + relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww + relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 + relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 + relative_coords[:, :, 1] += self.window_size[1] - 1 + relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 + relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww + self.register_buffer("relative_position_index", relative_position_index) + + self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) + self.attn_drop = nn.Dropout(attn_drop) + self.proj = nn.Linear(dim, dim) + + self.proj_drop = nn.Dropout(proj_drop) + + trunc_normal_(self.relative_position_bias_table, std=.02) + self.softmax = nn.Softmax(dim=-1) + + def forward(self, x, mask=None): + """ + Args: + x: input features with shape of (num_windows*B, N, C) + mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None + """ + B_, N, C = x.shape + qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) + q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) + + q = q * self.scale + attn = (q @ k.transpose(-2, -1)) + + relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( + self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH + relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww + attn = attn + relative_position_bias.unsqueeze(0) + + if mask is not None: + nW = mask.shape[0] + attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) + attn = attn.view(-1, self.num_heads, N, N) + attn = self.softmax(attn) + else: + attn = self.softmax(attn) + + attn = self.attn_drop(attn) + + x = (attn @ v).transpose(1, 2).reshape(B_, N, C) + x = self.proj(x) + x = self.proj_drop(x) + return x + + def extra_repr(self) -> str: + return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}' + + def flops(self, N): + # calculate flops for 1 window with token length of N + flops = 0 + # qkv = self.qkv(x) + flops += N * self.dim * 3 * self.dim + # attn = (q @ k.transpose(-2, -1)) + flops += self.num_heads * N * (self.dim // self.num_heads) * N + # x = (attn @ v) + flops += self.num_heads * N * N * (self.dim // self.num_heads) + # x = self.proj(x) + flops += N * self.dim * 
self.dim + return flops + + +class SwinTransformerBlock(nn.Module): + r""" Swin Transformer Block. + + Args: + dim (int): Number of input channels. + input_resolution (tuple[int]): Input resolution. + num_heads (int): Number of attention heads. + window_size (int): Window size. + shift_size (int): Shift size for SW-MSA. + mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. + qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True + qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. + drop (float, optional): Dropout rate. Default: 0.0 + attn_drop (float, optional): Attention dropout rate. Default: 0.0 + drop_path (float, optional): Stochastic depth rate. Default: 0.0 + act_layer (nn.Module, optional): Activation layer. Default: nn.GELU + norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm + """ + + def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0, + mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., + act_layer=nn.GELU, norm_layer=nn.LayerNorm): + super().__init__() + self.dim = dim + self.input_resolution = input_resolution + self.num_heads = num_heads + self.window_size = window_size + self.shift_size = shift_size + self.mlp_ratio = mlp_ratio + if min(self.input_resolution) <= self.window_size: + # if window size is larger than input resolution, we don't partition windows + self.shift_size = 0 + self.window_size = min(self.input_resolution) + assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" + + self.norm1 = norm_layer(dim) + self.attn = WindowAttention( + dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, + qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) + + self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() + self.norm2 = norm_layer(dim) + mlp_hidden_dim = int(dim * mlp_ratio) + self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) + + if self.shift_size > 0: + attn_mask = self.calculate_mask(self.input_resolution) + else: + attn_mask = None + + self.register_buffer("attn_mask", attn_mask) + + def calculate_mask(self, x_size): + # calculate attention mask for SW-MSA + H, W = x_size + img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 + h_slices = (slice(0, -self.window_size), + slice(-self.window_size, -self.shift_size), + slice(-self.shift_size, None)) + w_slices = (slice(0, -self.window_size), + slice(-self.window_size, -self.shift_size), + slice(-self.shift_size, None)) + cnt = 0 + for h in h_slices: + for w in w_slices: + img_mask[:, h, w, :] = cnt + cnt += 1 + + mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 + mask_windows = mask_windows.view(-1, self.window_size * self.window_size) + attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) + attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) + + return attn_mask + + def forward(self, x, x_size): + H, W = x_size + B, L, C = x.shape + # assert L == H * W, "input feature has wrong size" + + shortcut = x + x = self.norm1(x) + x = x.view(B, H, W, C) + + # cyclic shift + if self.shift_size > 0: + shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) + else: + shifted_x = x + + # partition windows + x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C + x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C + + # W-MSA/SW-MSA (to be compatible for testing on images whose shapes are the multiple of window size + if self.input_resolution == x_size: + attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C + else: + attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device)) + + # merge windows + attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) + shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C + + # reverse cyclic shift + if self.shift_size > 0: + x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) + else: + x = shifted_x + x = x.view(B, H * W, C) + + # FFN + x = shortcut + self.drop_path(x) + x = x + self.drop_path(self.mlp(self.norm2(x))) + + return x + + def extra_repr(self) -> str: + return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \ + f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}" + + def flops(self): + flops = 0 + H, W = self.input_resolution + # norm1 + flops += self.dim * H * W + # W-MSA/SW-MSA + nW = H * W / self.window_size / self.window_size + flops += nW * self.attn.flops(self.window_size * self.window_size) + # mlp + flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio + # norm2 + flops += self.dim * H * W + return flops + + +class PatchMerging(nn.Module): + r""" Patch Merging Layer. + + Args: + input_resolution (tuple[int]): Resolution of input feature. + dim (int): Number of input channels. + norm_layer (nn.Module, optional): Normalization layer. 
Default: nn.LayerNorm + """ + + def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm): + super().__init__() + self.input_resolution = input_resolution + self.dim = dim + self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) + self.norm = norm_layer(4 * dim) + + def forward(self, x): + """ + x: B, H*W, C + """ + H, W = self.input_resolution + B, L, C = x.shape + assert L == H * W, "input feature has wrong size" + assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even." + + x = x.view(B, H, W, C) + + x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C + x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C + x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C + x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C + x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C + x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C + + x = self.norm(x) + x = self.reduction(x) + + return x + + def extra_repr(self) -> str: + return f"input_resolution={self.input_resolution}, dim={self.dim}" + + def flops(self): + H, W = self.input_resolution + flops = H * W * self.dim + flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim + return flops + + +class BasicLayer(nn.Module): + """ A basic Swin Transformer layer for one stage. + + Args: + dim (int): Number of input channels. + input_resolution (tuple[int]): Input resolution. + depth (int): Number of blocks. + num_heads (int): Number of attention heads. + window_size (int): Local window size. + mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. + qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True + qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. + drop (float, optional): Dropout rate. Default: 0.0 + attn_drop (float, optional): Attention dropout rate. Default: 0.0 + drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 + norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm + downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None + use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. 
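+
+    A quick shape sketch (editor's addition, illustrative values only):
+        >>> layer = BasicLayer(dim=96, input_resolution=(8, 8), depth=2,
+        ...                    num_heads=3, window_size=8)
+        >>> x = torch.randn(1, 8 * 8, 96)   # (B, H*W, C) tokens
+        >>> layer(x, (8, 8)).shape          # resolution unchanged (no downsample)
+        torch.Size([1, 64, 96])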
+ """ + + def __init__(self, dim, input_resolution, depth, num_heads, window_size, + mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., + drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False): + + super().__init__() + self.dim = dim + self.input_resolution = input_resolution + self.depth = depth + self.use_checkpoint = use_checkpoint + + # build blocks + self.blocks = nn.ModuleList([ + SwinTransformerBlock(dim=dim, input_resolution=input_resolution, + num_heads=num_heads, window_size=window_size, + shift_size=0 if (i % 2 == 0) else window_size // 2, + mlp_ratio=mlp_ratio, + qkv_bias=qkv_bias, qk_scale=qk_scale, + drop=drop, attn_drop=attn_drop, + drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, + norm_layer=norm_layer) + for i in range(depth)]) + + # patch merging layer + if downsample is not None: + self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer) + else: + self.downsample = None + + def forward(self, x, x_size): + for blk in self.blocks: + if self.use_checkpoint: + x = checkpoint.checkpoint(blk, x, x_size) + else: + x = blk(x, x_size) + if self.downsample is not None: + x = self.downsample(x) + return x + + def extra_repr(self) -> str: + return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}" + + def flops(self): + flops = 0 + for blk in self.blocks: + flops += blk.flops() + if self.downsample is not None: + flops += self.downsample.flops() + return flops + + +class RSTB(nn.Module): + """Residual Swin Transformer Block (RSTB). + + Args: + dim (int): Number of input channels. + input_resolution (tuple[int]): Input resolution. + depth (int): Number of blocks. + num_heads (int): Number of attention heads. + window_size (int): Local window size. + mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. + qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True + qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. + drop (float, optional): Dropout rate. Default: 0.0 + attn_drop (float, optional): Attention dropout rate. Default: 0.0 + drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 + norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm + downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None + use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. + img_size: Input image size. + patch_size: Patch size. + resi_connection: The convolutional block before residual connection. 
+ """ + + def __init__(self, dim, input_resolution, depth, num_heads, window_size, + mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., + drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False, + img_size=224, patch_size=4, resi_connection='1conv'): + super(RSTB, self).__init__() + + self.dim = dim + self.input_resolution = input_resolution + + self.residual_group = BasicLayer(dim=dim, + input_resolution=input_resolution, + depth=depth, + num_heads=num_heads, + window_size=window_size, + mlp_ratio=mlp_ratio, + qkv_bias=qkv_bias, qk_scale=qk_scale, + drop=drop, attn_drop=attn_drop, + drop_path=drop_path, + norm_layer=norm_layer, + downsample=downsample, + use_checkpoint=use_checkpoint) + + if resi_connection == '1conv': + self.conv = nn.Conv2d(dim, dim, 3, 1, 1) + elif resi_connection == '3conv': + # to save parameters and memory + self.conv = nn.Sequential(nn.Conv2d(dim, dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True), + nn.Conv2d(dim // 4, dim // 4, 1, 1, 0), + nn.LeakyReLU(negative_slope=0.2, inplace=True), + nn.Conv2d(dim // 4, dim, 3, 1, 1)) + + self.patch_embed = PatchEmbed( + img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim, + norm_layer=None) + + self.patch_unembed = PatchUnEmbed( + img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim, + norm_layer=None) + + def forward(self, x, x_size): + return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size), x_size))) + x + + def flops(self): + flops = 0 + flops += self.residual_group.flops() + H, W = self.input_resolution + flops += H * W * self.dim * self.dim * 9 + flops += self.patch_embed.flops() + flops += self.patch_unembed.flops() + + return flops + + +class PatchEmbed(nn.Module): + r""" Image to Patch Embedding + + Args: + img_size (int): Image size. Default: 224. + patch_size (int): Patch token size. Default: 4. + in_chans (int): Number of input image channels. Default: 3. + embed_dim (int): Number of linear projection output channels. Default: 96. + norm_layer (nn.Module, optional): Normalization layer. Default: None + """ + + def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): + super().__init__() + img_size = to_2tuple(img_size) + patch_size = to_2tuple(patch_size) + patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] + self.img_size = img_size + self.patch_size = patch_size + self.patches_resolution = patches_resolution + self.num_patches = patches_resolution[0] * patches_resolution[1] + + self.in_chans = in_chans + self.embed_dim = embed_dim + + if norm_layer is not None: + self.norm = norm_layer(embed_dim) + else: + self.norm = None + + def forward(self, x): + x = x.flatten(2).transpose(1, 2) # B Ph*Pw C + if self.norm is not None: + x = self.norm(x) + return x + + def flops(self): + flops = 0 + H, W = self.img_size + if self.norm is not None: + flops += H * W * self.embed_dim + return flops + + +class PatchUnEmbed(nn.Module): + r""" Image to Patch Unembedding + + Args: + img_size (int): Image size. Default: 224. + patch_size (int): Patch token size. Default: 4. + in_chans (int): Number of input image channels. Default: 3. + embed_dim (int): Number of linear projection output channels. Default: 96. + norm_layer (nn.Module, optional): Normalization layer. 
Default: None + """ + + def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): + super().__init__() + img_size = to_2tuple(img_size) + patch_size = to_2tuple(patch_size) + patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] + self.img_size = img_size + self.patch_size = patch_size + self.patches_resolution = patches_resolution + self.num_patches = patches_resolution[0] * patches_resolution[1] + + self.in_chans = in_chans + self.embed_dim = embed_dim + + def forward(self, x, x_size): + B, HW, C = x.shape + x = x.transpose(1, 2).view(B, self.embed_dim, x_size[0], x_size[1]) # B Ph*Pw C + return x + + def flops(self): + flops = 0 + return flops + + +class Upsample(nn.Sequential): + """Upsample module. + + Args: + scale (int): Scale factor. Supported scales: 2^n and 3. + num_feat (int): Channel number of intermediate features. + """ + + def __init__(self, scale, num_feat): + m = [] + if (scale & (scale - 1)) == 0: # scale = 2^n + for _ in range(int(math.log(scale, 2))): + m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1)) + m.append(nn.PixelShuffle(2)) + elif scale == 3: + m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1)) + m.append(nn.PixelShuffle(3)) + else: + raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.') + super(Upsample, self).__init__(*m) + + +class UpsampleOneStep(nn.Sequential): + """UpsampleOneStep module (the difference with Upsample is that it always only has 1conv + 1pixelshuffle) + Used in lightweight SR to save parameters. + + Args: + scale (int): Scale factor. Supported scales: 2^n and 3. + num_feat (int): Channel number of intermediate features. + + """ + + def __init__(self, scale, num_feat, num_out_ch, input_resolution=None): + self.num_feat = num_feat + self.input_resolution = input_resolution + m = [] + m.append(nn.Conv2d(num_feat, (scale ** 2) * num_out_ch, 3, 1, 1)) + m.append(nn.PixelShuffle(scale)) + super(UpsampleOneStep, self).__init__(*m) + + def flops(self): + H, W = self.input_resolution + flops = H * W * self.num_feat * 3 * 9 + return flops + + +class SwinIR(nn.Module): + r""" SwinIR + A PyTorch impl of : `SwinIR: Image Restoration Using Swin Transformer`, based on Swin Transformer. + + Args: + img_size (int | tuple(int)): Input image size. Default 64 + patch_size (int | tuple(int)): Patch size. Default: 1 + in_chans (int): Number of input image channels. Default: 3 + embed_dim (int): Patch embedding dimension. Default: 96 + depths (tuple(int)): Depth of each Swin Transformer layer. + num_heads (tuple(int)): Number of attention heads in different layers. + window_size (int): Window size. Default: 7 + mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4 + qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True + qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None + drop_rate (float): Dropout rate. Default: 0 + attn_drop_rate (float): Attention dropout rate. Default: 0 + drop_path_rate (float): Stochastic depth rate. Default: 0.1 + norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. + ape (bool): If True, add absolute position embedding to the patch embedding. Default: False + patch_norm (bool): If True, add normalization after patch embedding. Default: True + use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False + upscale: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compress artifact reduction + img_range: Image range. 1. 
or 255. + upsampler: The reconstruction reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None + resi_connection: The convolutional block before residual connection. '1conv'/'3conv' + """ + + def __init__(self, img_size=64, patch_size=1, in_chans=3, + embed_dim=96, depths=[6, 6, 6, 6], num_heads=[6, 6, 6, 6], + window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None, + drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1, + norm_layer=nn.LayerNorm, ape=False, patch_norm=True, + use_checkpoint=False, upscale=2, img_range=1., upsampler='', resi_connection='1conv', + **kwargs): + super(SwinIR, self).__init__() + num_in_ch = in_chans + num_out_ch = in_chans + num_feat = 64 + self.img_range = img_range + if in_chans == 3: + rgb_mean = (0.4488, 0.4371, 0.4040) + self.mean = torch.Tensor(rgb_mean).view(1, 3, 1, 1) + else: + self.mean = torch.zeros(1, 1, 1, 1) + self.upscale = upscale + self.upsampler = upsampler + self.window_size = window_size + + ##################################################################################################### + ################################### 1, shallow feature extraction ################################### + self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1) + + ##################################################################################################### + ################################### 2, deep feature extraction ###################################### + self.num_layers = len(depths) + self.embed_dim = embed_dim + self.ape = ape + self.patch_norm = patch_norm + self.num_features = embed_dim + self.mlp_ratio = mlp_ratio + + # split image into non-overlapping patches + self.patch_embed = PatchEmbed( + img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim, + norm_layer=norm_layer if self.patch_norm else None) + num_patches = self.patch_embed.num_patches + patches_resolution = self.patch_embed.patches_resolution + self.patches_resolution = patches_resolution + + # merge non-overlapping patches into image + self.patch_unembed = PatchUnEmbed( + img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim, + norm_layer=norm_layer if self.patch_norm else None) + + # absolute position embedding + if self.ape: + self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim)) + trunc_normal_(self.absolute_pos_embed, std=.02) + + self.pos_drop = nn.Dropout(p=drop_rate) + + # stochastic depth + dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule + + # build Residual Swin Transformer blocks (RSTB) + self.layers = nn.ModuleList() + for i_layer in range(self.num_layers): + layer = RSTB(dim=embed_dim, + input_resolution=(patches_resolution[0], + patches_resolution[1]), + depth=depths[i_layer], + num_heads=num_heads[i_layer], + window_size=window_size, + mlp_ratio=self.mlp_ratio, + qkv_bias=qkv_bias, qk_scale=qk_scale, + drop=drop_rate, attn_drop=attn_drop_rate, + drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results + norm_layer=norm_layer, + downsample=None, + use_checkpoint=use_checkpoint, + img_size=img_size, + patch_size=patch_size, + resi_connection=resi_connection + + ) + self.layers.append(layer) + self.norm = norm_layer(self.num_features) + + # build the last conv layer in deep feature extraction + if resi_connection == '1conv': + self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1) + elif resi_connection == '3conv': + # to save parameters and memory + 
self.conv_after_body = nn.Sequential(nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1), + nn.LeakyReLU(negative_slope=0.2, inplace=True), + nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0), + nn.LeakyReLU(negative_slope=0.2, inplace=True), + nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1)) + + ##################################################################################################### + ################################ 3, high quality image reconstruction ################################ + if self.upsampler == 'pixelshuffle': + # for classical SR + self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1), + nn.LeakyReLU(inplace=True)) + self.upsample = Upsample(upscale, num_feat) + self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) + elif self.upsampler == 'pixelshuffledirect': + # for lightweight SR (to save parameters) + self.upsample = UpsampleOneStep(upscale, embed_dim, num_out_ch, + (patches_resolution[0], patches_resolution[1])) + elif self.upsampler == 'nearest+conv': + # for real-world SR (less artifacts) + self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1), + nn.LeakyReLU(inplace=True)) + self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) + if self.upscale == 4: + self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) + self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1) + self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) + self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) + else: + # for image denoising and JPEG compression artifact reduction + self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1) + + self.apply(self._init_weights) + + def _init_weights(self, m): + if isinstance(m, nn.Linear): + trunc_normal_(m.weight, std=.02) + if isinstance(m, nn.Linear) and m.bias is not None: + nn.init.constant_(m.bias, 0) + elif isinstance(m, nn.LayerNorm): + nn.init.constant_(m.bias, 0) + nn.init.constant_(m.weight, 1.0) + + @torch.jit.ignore + def no_weight_decay(self): + return {'absolute_pos_embed'} + + @torch.jit.ignore + def no_weight_decay_keywords(self): + return {'relative_position_bias_table'} + + def check_image_size(self, x): + _, _, h, w = x.size() + mod_pad_h = (self.window_size - h % self.window_size) % self.window_size + mod_pad_w = (self.window_size - w % self.window_size) % self.window_size + x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect') + return x + + def forward_features(self, x): + x_size = (x.shape[2], x.shape[3]) + x = self.patch_embed(x) + if self.ape: + x = x + self.absolute_pos_embed + x = self.pos_drop(x) + + for layer in self.layers: + x = layer(x, x_size) + + x = self.norm(x) # B L C + x = self.patch_unembed(x, x_size) + + return x + + def forward(self, x): + H, W = x.shape[2:] + x = self.check_image_size(x) + + self.mean = self.mean.type_as(x) + x = (x - self.mean) * self.img_range + + if self.upsampler == 'pixelshuffle': + # for classical SR + x = self.conv_first(x) + x = self.conv_after_body(self.forward_features(x)) + x + x = self.conv_before_upsample(x) + x = self.conv_last(self.upsample(x)) + elif self.upsampler == 'pixelshuffledirect': + # for lightweight SR + x = self.conv_first(x) + x = self.conv_after_body(self.forward_features(x)) + x + x = self.upsample(x) + elif self.upsampler == 'nearest+conv': + # for real-world SR + x = self.conv_first(x) + x = self.conv_after_body(self.forward_features(x)) + x + x = self.conv_before_upsample(x) + x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest'))) + if 
self.upscale == 4:
+                x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
+            x = self.conv_last(self.lrelu(self.conv_hr(x)))
+        else:
+            # for image denoising and JPEG compression artifact reduction
+            x_first = self.conv_first(x)
+            res = self.conv_after_body(self.forward_features(x_first)) + x_first
+            x = x + self.conv_last(res)
+
+        x = x / self.img_range + self.mean
+
+        return x[:, :, :H*self.upscale, :W*self.upscale]
+
+    def flops(self):
+        flops = 0
+        H, W = self.patches_resolution
+        flops += H * W * 3 * self.embed_dim * 9
+        flops += self.patch_embed.flops()
+        for i, layer in enumerate(self.layers):
+            flops += layer.flops()
+        flops += H * W * 3 * self.embed_dim * self.embed_dim
+        flops += self.upsample.flops()
+        return flops
+
+
+if __name__ == '__main__':
+    upscale = 4
+    window_size = 8
+    height = (1024 // upscale // window_size + 1) * window_size
+    width = (720 // upscale // window_size + 1) * window_size
+    model = SwinIR(upscale=2, img_size=(height, width),
+                   window_size=window_size, img_range=1., depths=[6, 6, 6, 6],
+                   embed_dim=60, num_heads=[6, 6, 6, 6], mlp_ratio=2, upsampler='pixelshuffledirect')
+    print(model)
+    print(height, width, model.flops() / 1e9)
+
+    x = torch.randn((1, 3, height, width))
+    x = model(x)
+    print(x.shape)
diff --git a/extensions-builtin/SwinIR/swinir_model_arch_v2.py b/extensions-builtin/SwinIR/swinir_model_arch_v2.py
new file mode 100644
index 0000000000000000000000000000000000000000..0e28ae6eefa2f4bc6260b14760907c54ce633876
--- /dev/null
+++ b/extensions-builtin/SwinIR/swinir_model_arch_v2.py
@@ -0,0 +1,1017 @@
+# -----------------------------------------------------------------------------------
+# Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration, https://arxiv.org/abs/
+# Written by Conde and Choi et al.
+# ----------------------------------------------------------------------------------- + +import math +import numpy as np +import torch +import torch.nn as nn +import torch.nn.functional as F +import torch.utils.checkpoint as checkpoint +from timm.models.layers import DropPath, to_2tuple, trunc_normal_ + + +class Mlp(nn.Module): + def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): + super().__init__() + out_features = out_features or in_features + hidden_features = hidden_features or in_features + self.fc1 = nn.Linear(in_features, hidden_features) + self.act = act_layer() + self.fc2 = nn.Linear(hidden_features, out_features) + self.drop = nn.Dropout(drop) + + def forward(self, x): + x = self.fc1(x) + x = self.act(x) + x = self.drop(x) + x = self.fc2(x) + x = self.drop(x) + return x + + +def window_partition(x, window_size): + """ + Args: + x: (B, H, W, C) + window_size (int): window size + Returns: + windows: (num_windows*B, window_size, window_size, C) + """ + B, H, W, C = x.shape + x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) + windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) + return windows + + +def window_reverse(windows, window_size, H, W): + """ + Args: + windows: (num_windows*B, window_size, window_size, C) + window_size (int): Window size + H (int): Height of image + W (int): Width of image + Returns: + x: (B, H, W, C) + """ + B = int(windows.shape[0] / (H * W / window_size / window_size)) + x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) + x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) + return x + +class WindowAttention(nn.Module): + r""" Window based multi-head self attention (W-MSA) module with relative position bias. + It supports both of shifted and non-shifted window. + Args: + dim (int): Number of input channels. + window_size (tuple[int]): The height and width of the window. + num_heads (int): Number of attention heads. + qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True + attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 + proj_drop (float, optional): Dropout ratio of output. Default: 0.0 + pretrained_window_size (tuple[int]): The height and width of the window in pre-training. 
+ """ + + def __init__(self, dim, window_size, num_heads, qkv_bias=True, attn_drop=0., proj_drop=0., + pretrained_window_size=[0, 0]): + + super().__init__() + self.dim = dim + self.window_size = window_size # Wh, Ww + self.pretrained_window_size = pretrained_window_size + self.num_heads = num_heads + + self.logit_scale = nn.Parameter(torch.log(10 * torch.ones((num_heads, 1, 1))), requires_grad=True) + + # mlp to generate continuous relative position bias + self.cpb_mlp = nn.Sequential(nn.Linear(2, 512, bias=True), + nn.ReLU(inplace=True), + nn.Linear(512, num_heads, bias=False)) + + # get relative_coords_table + relative_coords_h = torch.arange(-(self.window_size[0] - 1), self.window_size[0], dtype=torch.float32) + relative_coords_w = torch.arange(-(self.window_size[1] - 1), self.window_size[1], dtype=torch.float32) + relative_coords_table = torch.stack( + torch.meshgrid([relative_coords_h, + relative_coords_w])).permute(1, 2, 0).contiguous().unsqueeze(0) # 1, 2*Wh-1, 2*Ww-1, 2 + if pretrained_window_size[0] > 0: + relative_coords_table[:, :, :, 0] /= (pretrained_window_size[0] - 1) + relative_coords_table[:, :, :, 1] /= (pretrained_window_size[1] - 1) + else: + relative_coords_table[:, :, :, 0] /= (self.window_size[0] - 1) + relative_coords_table[:, :, :, 1] /= (self.window_size[1] - 1) + relative_coords_table *= 8 # normalize to -8, 8 + relative_coords_table = torch.sign(relative_coords_table) * torch.log2( + torch.abs(relative_coords_table) + 1.0) / np.log2(8) + + self.register_buffer("relative_coords_table", relative_coords_table) + + # get pair-wise relative position index for each token inside the window + coords_h = torch.arange(self.window_size[0]) + coords_w = torch.arange(self.window_size[1]) + coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww + coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww + relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww + relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 + relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 + relative_coords[:, :, 1] += self.window_size[1] - 1 + relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 + relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww + self.register_buffer("relative_position_index", relative_position_index) + + self.qkv = nn.Linear(dim, dim * 3, bias=False) + if qkv_bias: + self.q_bias = nn.Parameter(torch.zeros(dim)) + self.v_bias = nn.Parameter(torch.zeros(dim)) + else: + self.q_bias = None + self.v_bias = None + self.attn_drop = nn.Dropout(attn_drop) + self.proj = nn.Linear(dim, dim) + self.proj_drop = nn.Dropout(proj_drop) + self.softmax = nn.Softmax(dim=-1) + + def forward(self, x, mask=None): + """ + Args: + x: input features with shape of (num_windows*B, N, C) + mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None + """ + B_, N, C = x.shape + qkv_bias = None + if self.q_bias is not None: + qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias)) + qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias) + qkv = qkv.reshape(B_, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) + q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) + + # cosine attention + attn = (F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)) + logit_scale = torch.clamp(self.logit_scale, max=torch.log(torch.tensor(1. 
/ 0.01)).to(self.logit_scale.device)).exp() + attn = attn * logit_scale + + relative_position_bias_table = self.cpb_mlp(self.relative_coords_table).view(-1, self.num_heads) + relative_position_bias = relative_position_bias_table[self.relative_position_index.view(-1)].view( + self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH + relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww + relative_position_bias = 16 * torch.sigmoid(relative_position_bias) + attn = attn + relative_position_bias.unsqueeze(0) + + if mask is not None: + nW = mask.shape[0] + attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) + attn = attn.view(-1, self.num_heads, N, N) + attn = self.softmax(attn) + else: + attn = self.softmax(attn) + + attn = self.attn_drop(attn) + + x = (attn @ v).transpose(1, 2).reshape(B_, N, C) + x = self.proj(x) + x = self.proj_drop(x) + return x + + def extra_repr(self) -> str: + return f'dim={self.dim}, window_size={self.window_size}, ' \ + f'pretrained_window_size={self.pretrained_window_size}, num_heads={self.num_heads}' + + def flops(self, N): + # calculate flops for 1 window with token length of N + flops = 0 + # qkv = self.qkv(x) + flops += N * self.dim * 3 * self.dim + # attn = (q @ k.transpose(-2, -1)) + flops += self.num_heads * N * (self.dim // self.num_heads) * N + # x = (attn @ v) + flops += self.num_heads * N * N * (self.dim // self.num_heads) + # x = self.proj(x) + flops += N * self.dim * self.dim + return flops + +class SwinTransformerBlock(nn.Module): + r""" Swin Transformer Block. + Args: + dim (int): Number of input channels. + input_resolution (tuple[int]): Input resulotion. + num_heads (int): Number of attention heads. + window_size (int): Window size. + shift_size (int): Shift size for SW-MSA. + mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. + qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True + drop (float, optional): Dropout rate. Default: 0.0 + attn_drop (float, optional): Attention dropout rate. Default: 0.0 + drop_path (float, optional): Stochastic depth rate. Default: 0.0 + act_layer (nn.Module, optional): Activation layer. Default: nn.GELU + norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm + pretrained_window_size (int): Window size in pre-training. + """ + + def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0, + mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., drop_path=0., + act_layer=nn.GELU, norm_layer=nn.LayerNorm, pretrained_window_size=0): + super().__init__() + self.dim = dim + self.input_resolution = input_resolution + self.num_heads = num_heads + self.window_size = window_size + self.shift_size = shift_size + self.mlp_ratio = mlp_ratio + if min(self.input_resolution) <= self.window_size: + # if window size is larger than input resolution, we don't partition windows + self.shift_size = 0 + self.window_size = min(self.input_resolution) + assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" + + self.norm1 = norm_layer(dim) + self.attn = WindowAttention( + dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, + qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=drop, + pretrained_window_size=to_2tuple(pretrained_window_size)) + + self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() + self.norm2 = norm_layer(dim) + mlp_hidden_dim = int(dim * mlp_ratio) + self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) + + if self.shift_size > 0: + attn_mask = self.calculate_mask(self.input_resolution) + else: + attn_mask = None + + self.register_buffer("attn_mask", attn_mask) + + def calculate_mask(self, x_size): + # calculate attention mask for SW-MSA + H, W = x_size + img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 + h_slices = (slice(0, -self.window_size), + slice(-self.window_size, -self.shift_size), + slice(-self.shift_size, None)) + w_slices = (slice(0, -self.window_size), + slice(-self.window_size, -self.shift_size), + slice(-self.shift_size, None)) + cnt = 0 + for h in h_slices: + for w in w_slices: + img_mask[:, h, w, :] = cnt + cnt += 1 + + mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 + mask_windows = mask_windows.view(-1, self.window_size * self.window_size) + attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) + attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) + + return attn_mask + + def forward(self, x, x_size): + H, W = x_size + B, L, C = x.shape + #assert L == H * W, "input feature has wrong size" + + shortcut = x + x = x.view(B, H, W, C) + + # cyclic shift + if self.shift_size > 0: + shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) + else: + shifted_x = x + + # partition windows + x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C + x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C + + # W-MSA/SW-MSA (to be compatible for testing on images whose shapes are the multiple of window size + if self.input_resolution == x_size: + attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C + else: + attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device)) + + # merge windows + attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) + shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C + + # reverse cyclic shift + if self.shift_size > 0: + x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) + else: + x = shifted_x + x = x.view(B, H * W, C) + x = shortcut + self.drop_path(self.norm1(x)) + + # FFN + x = x + self.drop_path(self.norm2(self.mlp(x))) + + return x + + def extra_repr(self) -> str: + return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \ + f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}" + + def flops(self): + flops = 0 + H, W = self.input_resolution + # norm1 + flops += self.dim * H * W + # W-MSA/SW-MSA + nW = H * W / self.window_size / self.window_size + flops += nW * self.attn.flops(self.window_size * self.window_size) + # mlp + flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio + # norm2 + flops += self.dim * H * W + return flops + +class PatchMerging(nn.Module): + r""" Patch Merging Layer. + Args: + input_resolution (tuple[int]): Resolution of input feature. + dim (int): Number of input channels. + norm_layer (nn.Module, optional): Normalization layer. 
Default: nn.LayerNorm + """ + + def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm): + super().__init__() + self.input_resolution = input_resolution + self.dim = dim + self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) + self.norm = norm_layer(2 * dim) + + def forward(self, x): + """ + x: B, H*W, C + """ + H, W = self.input_resolution + B, L, C = x.shape + assert L == H * W, "input feature has wrong size" + assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even." + + x = x.view(B, H, W, C) + + x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C + x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C + x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C + x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C + x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C + x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C + + x = self.reduction(x) + x = self.norm(x) + + return x + + def extra_repr(self) -> str: + return f"input_resolution={self.input_resolution}, dim={self.dim}" + + def flops(self): + H, W = self.input_resolution + flops = (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim + flops += H * W * self.dim // 2 + return flops + +class BasicLayer(nn.Module): + """ A basic Swin Transformer layer for one stage. + Args: + dim (int): Number of input channels. + input_resolution (tuple[int]): Input resolution. + depth (int): Number of blocks. + num_heads (int): Number of attention heads. + window_size (int): Local window size. + mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. + qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True + drop (float, optional): Dropout rate. Default: 0.0 + attn_drop (float, optional): Attention dropout rate. Default: 0.0 + drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 + norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm + downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None + use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. + pretrained_window_size (int): Local window size in pre-training. 
+ """ + + def __init__(self, dim, input_resolution, depth, num_heads, window_size, + mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., + drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False, + pretrained_window_size=0): + + super().__init__() + self.dim = dim + self.input_resolution = input_resolution + self.depth = depth + self.use_checkpoint = use_checkpoint + + # build blocks + self.blocks = nn.ModuleList([ + SwinTransformerBlock(dim=dim, input_resolution=input_resolution, + num_heads=num_heads, window_size=window_size, + shift_size=0 if (i % 2 == 0) else window_size // 2, + mlp_ratio=mlp_ratio, + qkv_bias=qkv_bias, + drop=drop, attn_drop=attn_drop, + drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, + norm_layer=norm_layer, + pretrained_window_size=pretrained_window_size) + for i in range(depth)]) + + # patch merging layer + if downsample is not None: + self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer) + else: + self.downsample = None + + def forward(self, x, x_size): + for blk in self.blocks: + if self.use_checkpoint: + x = checkpoint.checkpoint(blk, x, x_size) + else: + x = blk(x, x_size) + if self.downsample is not None: + x = self.downsample(x) + return x + + def extra_repr(self) -> str: + return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}" + + def flops(self): + flops = 0 + for blk in self.blocks: + flops += blk.flops() + if self.downsample is not None: + flops += self.downsample.flops() + return flops + + def _init_respostnorm(self): + for blk in self.blocks: + nn.init.constant_(blk.norm1.bias, 0) + nn.init.constant_(blk.norm1.weight, 0) + nn.init.constant_(blk.norm2.bias, 0) + nn.init.constant_(blk.norm2.weight, 0) + +class PatchEmbed(nn.Module): + r""" Image to Patch Embedding + Args: + img_size (int): Image size. Default: 224. + patch_size (int): Patch token size. Default: 4. + in_chans (int): Number of input image channels. Default: 3. + embed_dim (int): Number of linear projection output channels. Default: 96. + norm_layer (nn.Module, optional): Normalization layer. Default: None + """ + + def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): + super().__init__() + img_size = to_2tuple(img_size) + patch_size = to_2tuple(patch_size) + patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] + self.img_size = img_size + self.patch_size = patch_size + self.patches_resolution = patches_resolution + self.num_patches = patches_resolution[0] * patches_resolution[1] + + self.in_chans = in_chans + self.embed_dim = embed_dim + + self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) + if norm_layer is not None: + self.norm = norm_layer(embed_dim) + else: + self.norm = None + + def forward(self, x): + B, C, H, W = x.shape + # FIXME look at relaxing size constraints + # assert H == self.img_size[0] and W == self.img_size[1], + # f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})." + x = self.proj(x).flatten(2).transpose(1, 2) # B Ph*Pw C + if self.norm is not None: + x = self.norm(x) + return x + + def flops(self): + Ho, Wo = self.patches_resolution + flops = Ho * Wo * self.embed_dim * self.in_chans * (self.patch_size[0] * self.patch_size[1]) + if self.norm is not None: + flops += Ho * Wo * self.embed_dim + return flops + +class RSTB(nn.Module): + """Residual Swin Transformer Block (RSTB). + + Args: + dim (int): Number of input channels. 
+ input_resolution (tuple[int]): Input resolution. + depth (int): Number of blocks. + num_heads (int): Number of attention heads. + window_size (int): Local window size. + mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. + qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True + drop (float, optional): Dropout rate. Default: 0.0 + attn_drop (float, optional): Attention dropout rate. Default: 0.0 + drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 + norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm + downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None + use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. + img_size: Input image size. + patch_size: Patch size. + resi_connection: The convolutional block before residual connection. + """ + + def __init__(self, dim, input_resolution, depth, num_heads, window_size, + mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., + drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False, + img_size=224, patch_size=4, resi_connection='1conv'): + super(RSTB, self).__init__() + + self.dim = dim + self.input_resolution = input_resolution + + self.residual_group = BasicLayer(dim=dim, + input_resolution=input_resolution, + depth=depth, + num_heads=num_heads, + window_size=window_size, + mlp_ratio=mlp_ratio, + qkv_bias=qkv_bias, + drop=drop, attn_drop=attn_drop, + drop_path=drop_path, + norm_layer=norm_layer, + downsample=downsample, + use_checkpoint=use_checkpoint) + + if resi_connection == '1conv': + self.conv = nn.Conv2d(dim, dim, 3, 1, 1) + elif resi_connection == '3conv': + # to save parameters and memory + self.conv = nn.Sequential(nn.Conv2d(dim, dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True), + nn.Conv2d(dim // 4, dim // 4, 1, 1, 0), + nn.LeakyReLU(negative_slope=0.2, inplace=True), + nn.Conv2d(dim // 4, dim, 3, 1, 1)) + + self.patch_embed = PatchEmbed( + img_size=img_size, patch_size=patch_size, in_chans=dim, embed_dim=dim, + norm_layer=None) + + self.patch_unembed = PatchUnEmbed( + img_size=img_size, patch_size=patch_size, in_chans=dim, embed_dim=dim, + norm_layer=None) + + def forward(self, x, x_size): + return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size), x_size))) + x + + def flops(self): + flops = 0 + flops += self.residual_group.flops() + H, W = self.input_resolution + flops += H * W * self.dim * self.dim * 9 + flops += self.patch_embed.flops() + flops += self.patch_unembed.flops() + + return flops + +class PatchUnEmbed(nn.Module): + r""" Image to Patch Unembedding + + Args: + img_size (int): Image size. Default: 224. + patch_size (int): Patch token size. Default: 4. + in_chans (int): Number of input image channels. Default: 3. + embed_dim (int): Number of linear projection output channels. Default: 96. + norm_layer (nn.Module, optional): Normalization layer. 
Default: None
+    """
+
+    def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
+        super().__init__()
+        img_size = to_2tuple(img_size)
+        patch_size = to_2tuple(patch_size)
+        patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
+        self.img_size = img_size
+        self.patch_size = patch_size
+        self.patches_resolution = patches_resolution
+        self.num_patches = patches_resolution[0] * patches_resolution[1]
+
+        self.in_chans = in_chans
+        self.embed_dim = embed_dim
+
+    def forward(self, x, x_size):
+        B, HW, C = x.shape
+        x = x.transpose(1, 2).view(B, self.embed_dim, x_size[0], x_size[1])  # B, C, Ph, Pw
+        return x
+
+    def flops(self):
+        flops = 0
+        return flops
+
+
+class Upsample(nn.Sequential):
+    """Upsample module.
+
+    Args:
+        scale (int): Scale factor. Supported scales: 2^n and 3.
+        num_feat (int): Channel number of intermediate features.
+    """
+
+    def __init__(self, scale, num_feat):
+        m = []
+        if (scale & (scale - 1)) == 0:  # scale = 2^n
+            for _ in range(int(math.log(scale, 2))):
+                m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
+                m.append(nn.PixelShuffle(2))
+        elif scale == 3:
+            m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
+            m.append(nn.PixelShuffle(3))
+        else:
+            raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.')
+        super(Upsample, self).__init__(*m)
+
+class Upsample_hf(nn.Sequential):
+    """Upsample module.
+
+    Args:
+        scale (int): Scale factor. Supported scales: 2^n and 3.
+        num_feat (int): Channel number of intermediate features.
+    """
+
+    def __init__(self, scale, num_feat):
+        m = []
+        if (scale & (scale - 1)) == 0:  # scale = 2^n
+            for _ in range(int(math.log(scale, 2))):
+                m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
+                m.append(nn.PixelShuffle(2))
+        elif scale == 3:
+            m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
+            m.append(nn.PixelShuffle(3))
+        else:
+            raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.')
+        super(Upsample_hf, self).__init__(*m)
+
+
+class UpsampleOneStep(nn.Sequential):
+    """UpsampleOneStep module (the difference from Upsample is that it always has only 1conv + 1pixelshuffle)
+       Used in lightweight SR to save parameters.
+
+    Args:
+        scale (int): Scale factor. Supported scales: 2^n and 3.
+        num_feat (int): Channel number of intermediate features.
+
+    """
+
+    def __init__(self, scale, num_feat, num_out_ch, input_resolution=None):
+        self.num_feat = num_feat
+        self.input_resolution = input_resolution
+        m = []
+        m.append(nn.Conv2d(num_feat, (scale ** 2) * num_out_ch, 3, 1, 1))
+        m.append(nn.PixelShuffle(scale))
+        super(UpsampleOneStep, self).__init__(*m)
+
+    def flops(self):
+        H, W = self.input_resolution
+        flops = H * W * self.num_feat * 3 * 9
+        return flops
+
+
+class Swin2SR(nn.Module):
+    r""" Swin2SR
+        A PyTorch impl of: `Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration`.
+
+    Args:
+        img_size (int | tuple(int)): Input image size. Default 64
+        patch_size (int | tuple(int)): Patch size. Default: 1
+        in_chans (int): Number of input image channels. Default: 3
+        embed_dim (int): Patch embedding dimension. Default: 96
+        depths (tuple(int)): Depth of each Swin Transformer layer.
+        num_heads (tuple(int)): Number of attention heads in different layers.
+        window_size (int): Window size. Default: 7
+        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
+        qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
+        drop_rate (float): Dropout rate. Default: 0
+        attn_drop_rate (float): Attention dropout rate. Default: 0
+        drop_path_rate (float): Stochastic depth rate. Default: 0.1
+        norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
+        ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
+        patch_norm (bool): If True, add normalization after patch embedding. Default: True
+        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
+        upscale: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compression artifact reduction
+        img_range: Image range. 1. or 255.
+        upsampler: The reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None
+        resi_connection: The convolutional block before residual connection. '1conv'/'3conv'
+    """
+
+    def __init__(self, img_size=64, patch_size=1, in_chans=3,
+                 embed_dim=96, depths=[6, 6, 6, 6], num_heads=[6, 6, 6, 6],
+                 window_size=7, mlp_ratio=4., qkv_bias=True,
+                 drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
+                 norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
+                 use_checkpoint=False, upscale=2, img_range=1., upsampler='', resi_connection='1conv',
+                 **kwargs):
+        super(Swin2SR, self).__init__()
+        num_in_ch = in_chans
+        num_out_ch = in_chans
+        num_feat = 64
+        self.img_range = img_range
+        if in_chans == 3:
+            rgb_mean = (0.4488, 0.4371, 0.4040)
+            self.mean = torch.Tensor(rgb_mean).view(1, 3, 1, 1)
+        else:
+            self.mean = torch.zeros(1, 1, 1, 1)
+        self.upscale = upscale
+        self.upsampler = upsampler
+        self.window_size = window_size
+
+        #####################################################################################################
+        ################################### 1, shallow feature extraction ###################################
+        self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1)
+
+        #####################################################################################################
+        ################################### 2, deep feature extraction ######################################
+        self.num_layers = len(depths)
+        self.embed_dim = embed_dim
+        self.ape = ape
+        self.patch_norm = patch_norm
+        self.num_features = embed_dim
+        self.mlp_ratio = mlp_ratio
+
+        # split image into non-overlapping patches
+        self.patch_embed = PatchEmbed(
+            img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
+            norm_layer=norm_layer if self.patch_norm else None)
+        num_patches = self.patch_embed.num_patches
+        patches_resolution = self.patch_embed.patches_resolution
+        self.patches_resolution = patches_resolution
+
+        # merge non-overlapping patches into image
+        self.patch_unembed = PatchUnEmbed(
+            img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
+            norm_layer=norm_layer if self.patch_norm else None)
+
+        # absolute position embedding
+        if self.ape:
+            self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
+            trunc_normal_(self.absolute_pos_embed, std=.02)
+
+        self.pos_drop = nn.Dropout(p=drop_rate)
+
+        # stochastic depth
+        dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))]  # stochastic depth decay rule
+
+        # build Residual Swin Transformer blocks (RSTB)
+        self.layers = nn.ModuleList()
+        for i_layer in range(self.num_layers):
+            layer = RSTB(dim=embed_dim,
+                         input_resolution=(patches_resolution[0],
+                                           patches_resolution[1]),
+                         depth=depths[i_layer],
+                         num_heads=num_heads[i_layer],
+                         window_size=window_size,
+                         mlp_ratio=self.mlp_ratio,
+                         qkv_bias=qkv_bias,
+                         drop=drop_rate, attn_drop=attn_drop_rate,
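+                         # each RSTB gets its own contiguous slice of the stochastic
+                         # depth schedule: dpr ramps linearly from 0 to drop_path_rate
+                         # over all blocks, so deeper layers drop paths more often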
+ drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results + norm_layer=norm_layer, + downsample=None, + use_checkpoint=use_checkpoint, + img_size=img_size, + patch_size=patch_size, + resi_connection=resi_connection + + ) + self.layers.append(layer) + + if self.upsampler == 'pixelshuffle_hf': + self.layers_hf = nn.ModuleList() + for i_layer in range(self.num_layers): + layer = RSTB(dim=embed_dim, + input_resolution=(patches_resolution[0], + patches_resolution[1]), + depth=depths[i_layer], + num_heads=num_heads[i_layer], + window_size=window_size, + mlp_ratio=self.mlp_ratio, + qkv_bias=qkv_bias, + drop=drop_rate, attn_drop=attn_drop_rate, + drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results + norm_layer=norm_layer, + downsample=None, + use_checkpoint=use_checkpoint, + img_size=img_size, + patch_size=patch_size, + resi_connection=resi_connection + + ) + self.layers_hf.append(layer) + + self.norm = norm_layer(self.num_features) + + # build the last conv layer in deep feature extraction + if resi_connection == '1conv': + self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1) + elif resi_connection == '3conv': + # to save parameters and memory + self.conv_after_body = nn.Sequential(nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1), + nn.LeakyReLU(negative_slope=0.2, inplace=True), + nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0), + nn.LeakyReLU(negative_slope=0.2, inplace=True), + nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1)) + + ##################################################################################################### + ################################ 3, high quality image reconstruction ################################ + if self.upsampler == 'pixelshuffle': + # for classical SR + self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1), + nn.LeakyReLU(inplace=True)) + self.upsample = Upsample(upscale, num_feat) + self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) + elif self.upsampler == 'pixelshuffle_aux': + self.conv_bicubic = nn.Conv2d(num_in_ch, num_feat, 3, 1, 1) + self.conv_before_upsample = nn.Sequential( + nn.Conv2d(embed_dim, num_feat, 3, 1, 1), + nn.LeakyReLU(inplace=True)) + self.conv_aux = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) + self.conv_after_aux = nn.Sequential( + nn.Conv2d(3, num_feat, 3, 1, 1), + nn.LeakyReLU(inplace=True)) + self.upsample = Upsample(upscale, num_feat) + self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) + + elif self.upsampler == 'pixelshuffle_hf': + self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1), + nn.LeakyReLU(inplace=True)) + self.upsample = Upsample(upscale, num_feat) + self.upsample_hf = Upsample_hf(upscale, num_feat) + self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) + self.conv_first_hf = nn.Sequential(nn.Conv2d(num_feat, embed_dim, 3, 1, 1), + nn.LeakyReLU(inplace=True)) + self.conv_after_body_hf = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1) + self.conv_before_upsample_hf = nn.Sequential( + nn.Conv2d(embed_dim, num_feat, 3, 1, 1), + nn.LeakyReLU(inplace=True)) + self.conv_last_hf = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) + + elif self.upsampler == 'pixelshuffledirect': + # for lightweight SR (to save parameters) + self.upsample = UpsampleOneStep(upscale, embed_dim, num_out_ch, + (patches_resolution[0], patches_resolution[1])) + elif self.upsampler == 'nearest+conv': + # for real-world SR (less artifacts) + assert self.upscale == 4, 'only support x4 now.' 
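+            # a minimal note on the design: 'nearest+conv' reaches x4 by doing 2x
+            # nearest-neighbour interpolation twice, cleaning up each step with a
+            # 3x3 conv in forward() below - hence the x4-only assert above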
+ self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1), + nn.LeakyReLU(inplace=True)) + self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) + self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) + self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1) + self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) + self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) + else: + # for image denoising and JPEG compression artifact reduction + self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1) + + self.apply(self._init_weights) + + def _init_weights(self, m): + if isinstance(m, nn.Linear): + trunc_normal_(m.weight, std=.02) + if isinstance(m, nn.Linear) and m.bias is not None: + nn.init.constant_(m.bias, 0) + elif isinstance(m, nn.LayerNorm): + nn.init.constant_(m.bias, 0) + nn.init.constant_(m.weight, 1.0) + + @torch.jit.ignore + def no_weight_decay(self): + return {'absolute_pos_embed'} + + @torch.jit.ignore + def no_weight_decay_keywords(self): + return {'relative_position_bias_table'} + + def check_image_size(self, x): + _, _, h, w = x.size() + mod_pad_h = (self.window_size - h % self.window_size) % self.window_size + mod_pad_w = (self.window_size - w % self.window_size) % self.window_size + x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect') + return x + + def forward_features(self, x): + x_size = (x.shape[2], x.shape[3]) + x = self.patch_embed(x) + if self.ape: + x = x + self.absolute_pos_embed + x = self.pos_drop(x) + + for layer in self.layers: + x = layer(x, x_size) + + x = self.norm(x) # B L C + x = self.patch_unembed(x, x_size) + + return x + + def forward_features_hf(self, x): + x_size = (x.shape[2], x.shape[3]) + x = self.patch_embed(x) + if self.ape: + x = x + self.absolute_pos_embed + x = self.pos_drop(x) + + for layer in self.layers_hf: + x = layer(x, x_size) + + x = self.norm(x) # B L C + x = self.patch_unembed(x, x_size) + + return x + + def forward(self, x): + H, W = x.shape[2:] + x = self.check_image_size(x) + + self.mean = self.mean.type_as(x) + x = (x - self.mean) * self.img_range + + if self.upsampler == 'pixelshuffle': + # for classical SR + x = self.conv_first(x) + x = self.conv_after_body(self.forward_features(x)) + x + x = self.conv_before_upsample(x) + x = self.conv_last(self.upsample(x)) + elif self.upsampler == 'pixelshuffle_aux': + bicubic = F.interpolate(x, size=(H * self.upscale, W * self.upscale), mode='bicubic', align_corners=False) + bicubic = self.conv_bicubic(bicubic) + x = self.conv_first(x) + x = self.conv_after_body(self.forward_features(x)) + x + x = self.conv_before_upsample(x) + aux = self.conv_aux(x) # b, 3, LR_H, LR_W + x = self.conv_after_aux(aux) + x = self.upsample(x)[:, :, :H * self.upscale, :W * self.upscale] + bicubic[:, :, :H * self.upscale, :W * self.upscale] + x = self.conv_last(x) + aux = aux / self.img_range + self.mean + elif self.upsampler == 'pixelshuffle_hf': + # for classical SR with HF + x = self.conv_first(x) + x = self.conv_after_body(self.forward_features(x)) + x + x_before = self.conv_before_upsample(x) + x_out = self.conv_last(self.upsample(x_before)) + + x_hf = self.conv_first_hf(x_before) + x_hf = self.conv_after_body_hf(self.forward_features_hf(x_hf)) + x_hf + x_hf = self.conv_before_upsample_hf(x_hf) + x_hf = self.conv_last_hf(self.upsample_hf(x_hf)) + x = x_out + x_hf + x_hf = x_hf / self.img_range + self.mean + + elif self.upsampler == 'pixelshuffledirect': + # for lightweight SR + x = self.conv_first(x) + x = self.conv_after_body(self.forward_features(x)) + x + x = 
self.upsample(x) + elif self.upsampler == 'nearest+conv': + # for real-world SR + x = self.conv_first(x) + x = self.conv_after_body(self.forward_features(x)) + x + x = self.conv_before_upsample(x) + x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest'))) + x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest'))) + x = self.conv_last(self.lrelu(self.conv_hr(x))) + else: + # for image denoising and JPEG compression artifact reduction + x_first = self.conv_first(x) + res = self.conv_after_body(self.forward_features(x_first)) + x_first + x = x + self.conv_last(res) + + x = x / self.img_range + self.mean + if self.upsampler == "pixelshuffle_aux": + return x[:, :, :H*self.upscale, :W*self.upscale], aux + + elif self.upsampler == "pixelshuffle_hf": + x_out = x_out / self.img_range + self.mean + return x_out[:, :, :H*self.upscale, :W*self.upscale], x[:, :, :H*self.upscale, :W*self.upscale], x_hf[:, :, :H*self.upscale, :W*self.upscale] + + else: + return x[:, :, :H*self.upscale, :W*self.upscale] + + def flops(self): + flops = 0 + H, W = self.patches_resolution + flops += H * W * 3 * self.embed_dim * 9 + flops += self.patch_embed.flops() + for i, layer in enumerate(self.layers): + flops += layer.flops() + flops += H * W * 3 * self.embed_dim * self.embed_dim + flops += self.upsample.flops() + return flops + + +if __name__ == '__main__': + upscale = 4 + window_size = 8 + height = (1024 // upscale // window_size + 1) * window_size + width = (720 // upscale // window_size + 1) * window_size + model = Swin2SR(upscale=2, img_size=(height, width), + window_size=window_size, img_range=1., depths=[6, 6, 6, 6], + embed_dim=60, num_heads=[6, 6, 6, 6], mlp_ratio=2, upsampler='pixelshuffledirect') + print(model) + print(height, width, model.flops() / 1e9) + + x = torch.randn((1, 3, height, width)) + x = model(x) + print(x.shape) \ No newline at end of file diff --git a/extensions-builtin/prompt-bracket-checker/javascript/prompt-bracket-checker.js b/extensions-builtin/prompt-bracket-checker/javascript/prompt-bracket-checker.js new file mode 100644 index 0000000000000000000000000000000000000000..eccfb0f9d2ca10cc2ad20ddc6f39b3e7b7bbbcb8 --- /dev/null +++ b/extensions-builtin/prompt-bracket-checker/javascript/prompt-bracket-checker.js @@ -0,0 +1,107 @@ +// Stable Diffusion WebUI - Bracket checker +// Version 1.0 +// By Hingashi no Florin/Bwin4L +// Counts open and closed brackets (round, square, curly) in the prompt and negative prompt text boxes in the txt2img and img2img tabs. +// If there's a mismatch, the keyword counter turns red and if you hover on it, a tooltip tells you what's wrong. + +function checkBrackets(evt) { + textArea = evt.target; + tabName = evt.target.parentElement.parentElement.id.split("_")[0]; + counterElt = document.querySelector('gradio-app').shadowRoot.querySelector('#' + tabName + '_token_counter'); + + promptName = evt.target.parentElement.parentElement.id.includes('neg') ? 
' negative' : ''; + + errorStringParen = '(' + tabName + promptName + ' prompt) - Different number of opening and closing parentheses detected.\n'; + errorStringSquare = '[' + tabName + promptName + ' prompt] - Different number of opening and closing square brackets detected.\n'; + errorStringCurly = '{' + tabName + promptName + ' prompt} - Different number of opening and closing curly brackets detected.\n'; + + openBracketRegExp = /\(/g; + closeBracketRegExp = /\)/g; + + openSquareBracketRegExp = /\[/g; + closeSquareBracketRegExp = /\]/g; + + openCurlyBracketRegExp = /\{/g; + closeCurlyBracketRegExp = /\}/g; + + totalOpenBracketMatches = 0; + totalCloseBracketMatches = 0; + totalOpenSquareBracketMatches = 0; + totalCloseSquareBracketMatches = 0; + totalOpenCurlyBracketMatches = 0; + totalCloseCurlyBracketMatches = 0; + + openBracketMatches = textArea.value.match(openBracketRegExp); + if(openBracketMatches) { + totalOpenBracketMatches = openBracketMatches.length; + } + + closeBracketMatches = textArea.value.match(closeBracketRegExp); + if(closeBracketMatches) { + totalCloseBracketMatches = closeBracketMatches.length; + } + + openSquareBracketMatches = textArea.value.match(openSquareBracketRegExp); + if(openSquareBracketMatches) { + totalOpenSquareBracketMatches = openSquareBracketMatches.length; + } + + closeSquareBracketMatches = textArea.value.match(closeSquareBracketRegExp); + if(closeSquareBracketMatches) { + totalCloseSquareBracketMatches = closeSquareBracketMatches.length; + } + + openCurlyBracketMatches = textArea.value.match(openCurlyBracketRegExp); + if(openCurlyBracketMatches) { + totalOpenCurlyBracketMatches = openCurlyBracketMatches.length; + } + + closeCurlyBracketMatches = textArea.value.match(closeCurlyBracketRegExp); + if(closeCurlyBracketMatches) { + totalCloseCurlyBracketMatches = closeCurlyBracketMatches.length; + } + + if(totalOpenBracketMatches != totalCloseBracketMatches) { + if(!counterElt.title.includes(errorStringParen)) { + counterElt.title += errorStringParen; + } + } else { + counterElt.title = counterElt.title.replace(errorStringParen, ''); + } + + if(totalOpenSquareBracketMatches != totalCloseSquareBracketMatches) { + if(!counterElt.title.includes(errorStringSquare)) { + counterElt.title += errorStringSquare; + } + } else { + counterElt.title = counterElt.title.replace(errorStringSquare, ''); + } + + if(totalOpenCurlyBracketMatches != totalCloseCurlyBracketMatches) { + if(!counterElt.title.includes(errorStringCurly)) { + counterElt.title += errorStringCurly; + } + } else { + counterElt.title = counterElt.title.replace(errorStringCurly, ''); + } + + if(counterElt.title != '') { + counterElt.style = 'color: #FF5555;'; + } else { + counterElt.style = ''; + } +} + +var shadowRootLoaded = setInterval(function() { + var shadowTextArea = document.querySelector('gradio-app').shadowRoot.querySelectorAll('#txt2img_prompt > label > textarea'); + if(shadowTextArea.length < 1) { + return false; + } + + clearInterval(shadowRootLoaded); + + document.querySelector('gradio-app').shadowRoot.querySelector('#txt2img_prompt').onkeyup = checkBrackets; + document.querySelector('gradio-app').shadowRoot.querySelector('#txt2img_neg_prompt').onkeyup = checkBrackets; + document.querySelector('gradio-app').shadowRoot.querySelector('#img2img_prompt').onkeyup = checkBrackets; + document.querySelector('gradio-app').shadowRoot.querySelector('#img2img_neg_prompt').onkeyup = checkBrackets; +}, 1000); diff --git a/javascript/aspectRatioOverlay.js b/javascript/aspectRatioOverlay.js new file mode 
100644
index 0000000000000000000000000000000000000000..66f26a22ac97bed4cba286e138e074ca89730bf6
--- /dev/null
+++ b/javascript/aspectRatioOverlay.js
@@ -0,0 +1,108 @@
+
+let currentWidth = null;
+let currentHeight = null;
+let arFrameTimeout = setTimeout(function(){},0);
+
+function dimensionChange(e, is_width, is_height){
+
+    if(is_width){
+        currentWidth = e.target.value*1.0
+    }
+    if(is_height){
+        currentHeight = e.target.value*1.0
+    }
+
+    var inImg2img = Boolean(gradioApp().querySelector("button.rounded-t-lg.border-gray-200"))
+
+    if(!inImg2img){
+        return;
+    }
+
+    var targetElement = null;
+
+    var tabIndex = get_tab_index('mode_img2img')
+    if(tabIndex == 0){
+        targetElement = gradioApp().querySelector('div[data-testid=image] img');
+    } else if(tabIndex == 1){
+        targetElement = gradioApp().querySelector('#img2maskimg div[data-testid=image] img');
+    }
+
+    if(targetElement){
+
+        var arPreviewRect = gradioApp().querySelector('#imageARPreview');
+        if(!arPreviewRect){
+            arPreviewRect = document.createElement('div')
+            arPreviewRect.id = "imageARPreview";
+            gradioApp().getRootNode().appendChild(arPreviewRect)
+        }
+
+
+
+        var viewportOffset = targetElement.getBoundingClientRect();
+
+        var viewportscale = Math.min( targetElement.clientWidth/targetElement.naturalWidth, targetElement.clientHeight/targetElement.naturalHeight )
+
+        var scaledx = targetElement.naturalWidth*viewportscale
+        var scaledy = targetElement.naturalHeight*viewportscale
+
+        var clientRectTop = (viewportOffset.top+window.scrollY)
+        var clientRectLeft = (viewportOffset.left+window.scrollX)
+        var clientRectCentreY = clientRectTop + (targetElement.clientHeight/2)
+        var clientRectCentreX = clientRectLeft + (targetElement.clientWidth/2)
+
+        var viewRectTop = clientRectCentreY-(scaledy/2)
+        var viewRectLeft = clientRectCentreX-(scaledx/2)
+        var arRectWidth = scaledx
+        var arRectHeight = scaledy
+
+        var arscale = Math.min( arRectWidth/currentWidth, arRectHeight/currentHeight )
+        var arscaledx = currentWidth*arscale
+        var arscaledy = currentHeight*arscale
+
+        var arRectTop = clientRectCentreY-(arscaledy/2)
+        var arRectLeft = clientRectCentreX-(arscaledx/2)
+        arRectWidth = arscaledx
+        arRectHeight = arscaledy
+
+        arPreviewRect.style.top = arRectTop+'px';
+        arPreviewRect.style.left = arRectLeft+'px';
+        arPreviewRect.style.width = arRectWidth+'px';
+        arPreviewRect.style.height = arRectHeight+'px';
+
+        clearTimeout(arFrameTimeout);
+        arFrameTimeout = setTimeout(function(){
+            arPreviewRect.style.display = 'none';
+        },2000);
+
+        arPreviewRect.style.display = 'block';
+
+    }
+
+}
+
+
+onUiUpdate(function(){
+    var arPreviewRect = gradioApp().querySelector('#imageARPreview');
+    if(arPreviewRect){
+        arPreviewRect.style.display = 'none';
+    }
+    var inImg2img = Boolean(gradioApp().querySelector("button.rounded-t-lg.border-gray-200"))
+    if(inImg2img){
+        let inputs = gradioApp().querySelectorAll('input');
+        inputs.forEach(function(e){
+            var is_width = e.parentElement.id == "img2img_width"
+            var is_height = e.parentElement.id == "img2img_height"
+
+            if((is_width || is_height) && !e.classList.contains('scrollwatch')){
+                e.addEventListener('input', function(e){dimensionChange(e, is_width, is_height)} )
+                e.classList.add('scrollwatch')
+            }
+            if(is_width){
+                currentWidth = e.value*1.0
+            }
+            if(is_height){
+                currentHeight = e.value*1.0
+            }
+        })
+    }
+});
diff --git a/javascript/contextMenus.js b/javascript/contextMenus.js
new file mode 100644
index 0000000000000000000000000000000000000000..11bcce1bcbdc0ed5c1004fbd9d971d255645826b
--- /dev/null
+++ b/javascript/contextMenus.js
@@ -0,0 +1,177 @@
+
+contextMenuInit = 
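+// contextMenuInit keeps the menu registry (menuSpecs) in a closure and returns
+// [appendContextMenuOption, removeContextMenuOption, addContextMenuEventListener],
+// which are unpacked into globals right after the definition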
function(){ + let eventListenerApplied=false; + let menuSpecs = new Map(); + + const uid = function(){ + return Date.now().toString(36) + Math.random().toString(36).substr(2); + } + + function showContextMenu(event,element,menuEntries){ + let posx = event.clientX + document.body.scrollLeft + document.documentElement.scrollLeft; + let posy = event.clientY + document.body.scrollTop + document.documentElement.scrollTop; + + let oldMenu = gradioApp().querySelector('#context-menu') + if(oldMenu){ + oldMenu.remove() + } + + let tabButton = uiCurrentTab + let baseStyle = window.getComputedStyle(tabButton) + + const contextMenu = document.createElement('nav') + contextMenu.id = "context-menu" + contextMenu.style.background = baseStyle.background + contextMenu.style.color = baseStyle.color + contextMenu.style.fontFamily = baseStyle.fontFamily + contextMenu.style.top = posy+'px' + contextMenu.style.left = posx+'px' + + + + const contextMenuList = document.createElement('ul') + contextMenuList.className = 'context-menu-items'; + contextMenu.append(contextMenuList); + + menuEntries.forEach(function(entry){ + let contextMenuEntry = document.createElement('a') + contextMenuEntry.innerHTML = entry['name'] + contextMenuEntry.addEventListener("click", function(e) { + entry['func'](); + }) + contextMenuList.append(contextMenuEntry); + + }) + + gradioApp().getRootNode().appendChild(contextMenu) + + let menuWidth = contextMenu.offsetWidth + 4; + let menuHeight = contextMenu.offsetHeight + 4; + + let windowWidth = window.innerWidth; + let windowHeight = window.innerHeight; + + if ( (windowWidth - posx) < menuWidth ) { + contextMenu.style.left = windowWidth - menuWidth + "px"; + } + + if ( (windowHeight - posy) < menuHeight ) { + contextMenu.style.top = windowHeight - menuHeight + "px"; + } + + } + + function appendContextMenuOption(targetElementSelector,entryName,entryFunction){ + + currentItems = menuSpecs.get(targetElementSelector) + + if(!currentItems){ + currentItems = [] + menuSpecs.set(targetElementSelector,currentItems); + } + let newItem = {'id':targetElementSelector+'_'+uid(), + 'name':entryName, + 'func':entryFunction, + 'isNew':true} + + currentItems.push(newItem) + return newItem['id'] + } + + function removeContextMenuOption(uid){ + menuSpecs.forEach(function(v,k) { + let index = -1 + v.forEach(function(e,ei){if(e['id']==uid){index=ei}}) + if(index>=0){ + v.splice(index, 1); + } + }) + } + + function addContextMenuEventListener(){ + if(eventListenerApplied){ + return; + } + gradioApp().addEventListener("click", function(e) { + let source = e.composedPath()[0] + if(source.id && source.id.indexOf('check_progress')>-1){ + return + } + + let oldMenu = gradioApp().querySelector('#context-menu') + if(oldMenu){ + oldMenu.remove() + } + }); + gradioApp().addEventListener("contextmenu", function(e) { + let oldMenu = gradioApp().querySelector('#context-menu') + if(oldMenu){ + oldMenu.remove() + } + menuSpecs.forEach(function(v,k) { + if(e.composedPath()[0].matches(k)){ + showContextMenu(e,e.composedPath()[0],v) + e.preventDefault() + return + } + }) + }); + eventListenerApplied=true + + } + + return [appendContextMenuOption, removeContextMenuOption, addContextMenuEventListener] +} + +initResponse = contextMenuInit(); +appendContextMenuOption = initResponse[0]; +removeContextMenuOption = initResponse[1]; +addContextMenuEventListener = initResponse[2]; + +(function(){ + //Start example Context Menu Items + let generateOnRepeat = function(genbuttonid,interruptbuttonid){ + let genbutton = 
gradioApp().querySelector(genbuttonid); + let interruptbutton = gradioApp().querySelector(interruptbuttonid); + if(!interruptbutton.offsetParent){ + genbutton.click(); + } + clearInterval(window.generateOnRepeatInterval) + window.generateOnRepeatInterval = setInterval(function(){ + if(!interruptbutton.offsetParent){ + genbutton.click(); + } + }, + 500) + } + + appendContextMenuOption('#txt2img_generate','Generate forever',function(){ + generateOnRepeat('#txt2img_generate','#txt2img_interrupt'); + }) + appendContextMenuOption('#img2img_generate','Generate forever',function(){ + generateOnRepeat('#img2img_generate','#img2img_interrupt'); + }) + + let cancelGenerateForever = function(){ + clearInterval(window.generateOnRepeatInterval) + } + + appendContextMenuOption('#txt2img_interrupt','Cancel generate forever',cancelGenerateForever) + appendContextMenuOption('#txt2img_generate', 'Cancel generate forever',cancelGenerateForever) + appendContextMenuOption('#img2img_interrupt','Cancel generate forever',cancelGenerateForever) + appendContextMenuOption('#img2img_generate', 'Cancel generate forever',cancelGenerateForever) + + appendContextMenuOption('#roll','Roll three', + function(){ + let rollbutton = get_uiCurrentTabContent().querySelector('#roll'); + setTimeout(function(){rollbutton.click()},100) + setTimeout(function(){rollbutton.click()},200) + setTimeout(function(){rollbutton.click()},300) + } + ) +})(); +//End example Context Menu Items + +onUiUpdate(function(){ + addContextMenuEventListener() +}); diff --git a/javascript/dragdrop.js b/javascript/dragdrop.js new file mode 100644 index 0000000000000000000000000000000000000000..3ed1cb3c65627cb83d6db206352547fb39a22a2e --- /dev/null +++ b/javascript/dragdrop.js @@ -0,0 +1,89 @@ +// allows drag-dropping files into gradio image elements, and also pasting images from clipboard + +function isValidImageList( files ) { + return files && files?.length === 1 && ['image/png', 'image/gif', 'image/jpeg'].includes(files[0].type); +} + +function dropReplaceImage( imgWrap, files ) { + if ( ! 
isValidImageList( files ) ) { + return; + } + + imgWrap.querySelector('.modify-upload button + button, .touch-none + div button + button')?.click(); + const callback = () => { + const fileInput = imgWrap.querySelector('input[type="file"]'); + if ( fileInput ) { + fileInput.files = files; + fileInput.dispatchEvent(new Event('change')); + } + }; + + if ( imgWrap.closest('#pnginfo_image') ) { + // special treatment for PNG Info tab, wait for fetch request to finish + const oldFetch = window.fetch; + window.fetch = async (input, options) => { + const response = await oldFetch(input, options); + if ( 'api/predict/' === input ) { + const content = await response.text(); + window.fetch = oldFetch; + window.requestAnimationFrame( () => callback() ); + return new Response(content, { + status: response.status, + statusText: response.statusText, + headers: response.headers + }) + } + return response; + }; + } else { + window.requestAnimationFrame( () => callback() ); + } +} + +window.document.addEventListener('dragover', e => { + const target = e.composedPath()[0]; + const imgWrap = target.closest('[data-testid="image"]'); + if ( !imgWrap && target.placeholder && target.placeholder.indexOf("Prompt") == -1) { + return; + } + e.stopPropagation(); + e.preventDefault(); + e.dataTransfer.dropEffect = 'copy'; +}); + +window.document.addEventListener('drop', e => { + const target = e.composedPath()[0]; + if (target.placeholder.indexOf("Prompt") == -1) { + return; + } + const imgWrap = target.closest('[data-testid="image"]'); + if ( !imgWrap ) { + return; + } + e.stopPropagation(); + e.preventDefault(); + const files = e.dataTransfer.files; + dropReplaceImage( imgWrap, files ); +}); + +window.addEventListener('paste', e => { + const files = e.clipboardData.files; + if ( ! isValidImageList( files ) ) { + return; + } + + const visibleImageFields = [...gradioApp().querySelectorAll('[data-testid="image"]')] + .filter(el => uiElementIsVisible(el)); + if ( ! visibleImageFields.length ) { + return; + } + + const firstFreeImageField = visibleImageFields + .filter(el => el.querySelector('input[type=file]'))?.[0]; + + dropReplaceImage( + firstFreeImageField ? + firstFreeImageField : + visibleImageFields[visibleImageFields.length - 1] + , files ); +}); diff --git a/javascript/edit-attention.js b/javascript/edit-attention.js new file mode 100644 index 0000000000000000000000000000000000000000..b947cbecdbbefa73f9cb0631ac5b1a4e0b6313e3 --- /dev/null +++ b/javascript/edit-attention.js @@ -0,0 +1,75 @@ +addEventListener('keydown', (event) => { + let target = event.originalTarget || event.composedPath()[0]; + if (!target.matches("#toprow textarea.gr-text-input[placeholder]")) return; + if (! 
(event.metaKey || event.ctrlKey)) return; + + + let plus = "ArrowUp" + let minus = "ArrowDown" + if (event.key != plus && event.key != minus) return; + + let selectionStart = target.selectionStart; + let selectionEnd = target.selectionEnd; + // If the user hasn't selected anything, let's select their current parenthesis block + if (selectionStart === selectionEnd) { + // Find opening parenthesis around current cursor + const before = target.value.substring(0, selectionStart); + let beforeParen = before.lastIndexOf("("); + if (beforeParen == -1) return; + let beforeParenClose = before.lastIndexOf(")"); + while (beforeParenClose !== -1 && beforeParenClose > beforeParen) { + beforeParen = before.lastIndexOf("(", beforeParen - 1); + beforeParenClose = before.lastIndexOf(")", beforeParenClose - 1); + } + + // Find closing parenthesis around current cursor + const after = target.value.substring(selectionStart); + let afterParen = after.indexOf(")"); + if (afterParen == -1) return; + let afterParenOpen = after.indexOf("("); + while (afterParenOpen !== -1 && afterParen > afterParenOpen) { + afterParen = after.indexOf(")", afterParen + 1); + afterParenOpen = after.indexOf("(", afterParenOpen + 1); + } + if (beforeParen === -1 || afterParen === -1) return; + + // Set the selection to the text between the parenthesis + const parenContent = target.value.substring(beforeParen + 1, selectionStart + afterParen); + const lastColon = parenContent.lastIndexOf(":"); + selectionStart = beforeParen + 1; + selectionEnd = selectionStart + lastColon; + target.setSelectionRange(selectionStart, selectionEnd); + } + + event.preventDefault(); + + if (selectionStart == 0 || target.value[selectionStart - 1] != "(") { + target.value = target.value.slice(0, selectionStart) + + "(" + target.value.slice(selectionStart, selectionEnd) + ":1.0)" + + target.value.slice(selectionEnd); + + target.focus(); + target.selectionStart = selectionStart + 1; + target.selectionEnd = selectionEnd + 1; + + } else { + end = target.value.slice(selectionEnd + 1).indexOf(")") + 1; + weight = parseFloat(target.value.slice(selectionEnd + 1, selectionEnd + 1 + end)); + if (isNaN(weight)) return; + if (event.key == minus) weight -= 0.1; + if (event.key == plus) weight += 0.1; + + weight = parseFloat(weight.toPrecision(12)); + + target.value = target.value.slice(0, selectionEnd + 1) + + weight + + target.value.slice(selectionEnd + 1 + end - 1); + + target.focus(); + target.selectionStart = selectionStart; + target.selectionEnd = selectionEnd; + } + // Since we've modified a Gradio Textbox component manually, we need to simulate an `input` DOM event to ensure its + // internal Svelte data binding remains in sync. + target.dispatchEvent(new Event("input", { bubbles: true })); +}); diff --git a/javascript/extensions.js b/javascript/extensions.js new file mode 100644 index 0000000000000000000000000000000000000000..59179ca6d502ede9acb2bf0884a83fd1bad2c06b --- /dev/null +++ b/javascript/extensions.js @@ -0,0 +1,35 @@ + +function extensions_apply(_, _){ + disable = [] + update = [] + gradioApp().querySelectorAll('#extensions input[type="checkbox"]').forEach(function(x){ + if(x.name.startsWith("enable_") && ! 
x.checked)
+            disable.push(x.name.substr(7))
+
+        if(x.name.startsWith("update_") && x.checked)
+            update.push(x.name.substr(7))
+    })
+
+    restart_reload()
+
+    return [JSON.stringify(disable), JSON.stringify(update)]
+}
+
+function extensions_check(){
+    gradioApp().querySelectorAll('#extensions .extension_status').forEach(function(x){
+        x.innerHTML = "Loading..."
+    })
+
+    return []
+}
+
+function install_extension_from_index(button, url){
+    button.disabled = "disabled"
+    button.value = "Installing..."
+
+    textarea = gradioApp().querySelector('#extension_to_install textarea')
+    textarea.value = url
+    textarea.dispatchEvent(new Event("input", { bubbles: true }))
+
+    gradioApp().querySelector('#install_extension_button').click()
+}
diff --git a/javascript/generationParams.js b/javascript/generationParams.js
new file mode 100644
index 0000000000000000000000000000000000000000..95f050939b72a8d09d62de8d725caf1e7d15d3c0
--- /dev/null
+++ b/javascript/generationParams.js
@@ -0,0 +1,33 @@
+// attaches listeners to the txt2img and img2img galleries to update displayed generation param text when the image changes
+
+let txt2img_gallery, img2img_gallery, modal = undefined;
+onUiUpdate(function(){
+    if (!txt2img_gallery) {
+        txt2img_gallery = attachGalleryListeners("txt2img")
+    }
+    if (!img2img_gallery) {
+        img2img_gallery = attachGalleryListeners("img2img")
+    }
+    if (!modal) {
+        modal = gradioApp().getElementById('lightboxModal')
+        modalObserver.observe(modal, { attributes : true, attributeFilter : ['style'] });
+    }
+});
+
+let modalObserver = new MutationObserver(function(mutations) {
+    mutations.forEach(function(mutationRecord) {
+        let selectedTab = gradioApp().querySelector('#tabs div button.bg-white')?.innerText
+        if (mutationRecord.target.style.display === 'none' && (selectedTab === 'txt2img' || selectedTab === 'img2img'))
+            gradioApp().getElementById(selectedTab+"_generation_info_button").click()
+    });
+});
+
+function attachGalleryListeners(tab_name) {
+    gallery = gradioApp().querySelector('#'+tab_name+'_gallery')
+    gallery?.addEventListener('click', () => gradioApp().getElementById(tab_name+"_generation_info_button").click());
+    gallery?.addEventListener('keydown', (e) => {
+        if (e.keyCode == 37 || e.keyCode == 39) // left or right arrow
+            gradioApp().getElementById(tab_name+"_generation_info_button").click()
+    });
+    return gallery;
+}
diff --git a/javascript/hints.js b/javascript/hints.js
new file mode 100644
index 0000000000000000000000000000000000000000..63e17e05ff68ab52c790ca85cdf9c25164d44ffa
--- /dev/null
+++ b/javascript/hints.js
@@ -0,0 +1,136 @@
+// mouseover tooltips for various UI elements
+
+titles = {
+    "Sampling steps": "How many times to improve the generated image iteratively; higher values take longer; very low values can produce bad results",
+    "Sampling method": "Which algorithm to use to produce the image",
+    "GFPGAN": "Restore low quality faces using the GFPGAN neural network",
+    "Euler a": "Euler Ancestral - very creative; can produce a completely different picture depending on step count, and setting steps higher than 30-40 does not help",
+    "DDIM": "Denoising Diffusion Implicit Models - best at inpainting",
+    "DPM adaptive": "Ignores step count - uses a number of steps determined by the CFG and resolution",
+
+    "Batch count": "How many batches of images to create",
+    "Batch size": "How many images to create in a single batch",
+    "CFG Scale": "Classifier Free Guidance Scale - how strongly the image should conform to the prompt - lower values produce more creative results",
+    "Seed": "A value that determines the output of the random number generator - if you create an image with the same parameters and seed as another image, you'll get the same result",
+    "\u{1f3b2}\ufe0f": "Set seed to -1, which will cause a new random number to be used every time",
+    "\u267b\ufe0f": "Reuse seed from last generation, mostly useful if it was randomized",
+    "\u{1f3a8}": "Add a random artist to the prompt.",
+    "\u2199\ufe0f": "Read generation parameters from the prompt, or from the last generation if the prompt is empty, into the user interface.",
+    "\u{1f4c2}": "Open images output directory",
+    "\u{1f4be}": "Save style",
+    "\u{1f5d1}": "Clear prompt",
+    "\u{1f4cb}": "Apply selected styles to current prompt",
+
+    "Inpaint a part of image": "Draw a mask over an image, and the script will regenerate the masked area with content according to prompt",
+    "SD upscale": "Upscale image normally, split result into tiles, improve each tile using img2img, merge whole image back",
+
+    "Just resize": "Resize image to target resolution. Unless height and width match, you will get incorrect aspect ratio.",
+    "Crop and resize": "Resize the image so that entirety of target resolution is filled with the image. Crop parts that stick out.",
+    "Resize and fill": "Resize the image so that entirety of image is inside target resolution. Fill empty space with image's colors.",
+
+    "Mask blur": "How much to blur the mask before processing, in pixels.",
+    "Masked content": "What to put inside the masked area before processing it with Stable Diffusion.",
+    "fill": "fill it with colors of the image",
+    "original": "keep whatever was there originally",
+    "latent noise": "fill it with latent space noise",
+    "latent nothing": "fill it with latent space zeroes",
+    "Inpaint at full resolution": "Upscale masked region to target resolution, do inpainting, downscale back and paste into original image",
+
+    "Denoising strength": "Determines how little respect the algorithm should have for the image's content. At 0, nothing will change, and at 1 you'll get an unrelated image. With values below 1.0, processing will take fewer steps than the Sampling Steps slider specifies.",
+    "Denoising strength change factor": "In loopback mode, on each loop the denoising strength is multiplied by this value. <1 means decreasing variety, so your sequence will converge on a fixed picture. >1 means increasing variety, so your sequence will become more and more chaotic.",
+
+    "Skip": "Stop processing the current image and continue processing.",
+    "Interrupt": "Stop processing images and return any results accumulated so far.",
+    "Save": "Write image to a directory (default - log/images) and generation parameters into a csv file.",
+
+    "X values": "Separate values for X axis using commas.",
+    "Y values": "Separate values for Y axis using commas.",
+
+    "None": "Do not do anything special",
+    "Prompt matrix": "Separate prompts into parts using vertical pipe character (|) and the script will create a picture for every combination of them (except for the first part, which will be present in all combinations)",
+    "X/Y plot": "Create a grid where images will have different parameters. Use inputs below to specify which parameters will be shared by columns and rows",
+    "Custom code": "Run Python code. Advanced users only. Must run program with --allow-code for this to work",
+
+    "Prompt S/R": "Separate a list of words with commas, and the first word will be used as a keyword: the script will search for this word in the prompt, and replace it with others",
+    "Prompt order": "Separate a list of words with commas, and the script will make a variation of the prompt with those words in every possible order",
+
+    "Tiling": "Produce an image that can be tiled.",
+    "Tile overlap": "For SD upscale, how much overlap in pixels should there be between tiles. Tiles overlap so that when they are merged back into one picture, there is no clearly visible seam.",
+
+    "Variation seed": "Seed of a different picture to be mixed into the generation.",
+    "Variation strength": "How strong of a variation to produce. At 0, there will be no effect. At 1, you will get the complete picture with variation seed (except for ancestral samplers, where you will just get something).",
+    "Resize seed from height": "Make an attempt to produce a picture similar to what would have been produced with the same seed at the specified resolution",
+    "Resize seed from width": "Make an attempt to produce a picture similar to what would have been produced with the same seed at the specified resolution",
+
+    "Interrogate": "Reconstruct prompt from existing image and put it into the prompt field.",
+
+    "Images filename pattern": "Use the following tags to define how filenames for images are chosen: [steps], [cfg], [prompt], [prompt_no_styles], [prompt_spaces], [width], [height], [styles], [sampler], [seed], [model_hash], [model_name], [prompt_words], [date], [datetime], [datetime], [datetime