Kizi-Art committed on
Commit
07f0a48
1 Parent(s): 083fed9

Upload folder using huggingface_hub

Files changed (28)
  1. extensions-builtin/sdw-wd14-tagger/.gitignore +6 -0
  2. extensions-builtin/sdw-wd14-tagger/CHANGELOG.md +105 -0
  3. extensions-builtin/sdw-wd14-tagger/CONTRIBUTING.md +71 -0
  4. extensions-builtin/sdw-wd14-tagger/README.ko.md +61 -0
  5. extensions-builtin/sdw-wd14-tagger/README.md +63 -0
  6. extensions-builtin/sdw-wd14-tagger/docs/model-comparison.md +49 -0
  7. extensions-builtin/sdw-wd14-tagger/docs/screenshot.png +3 -0
  8. extensions-builtin/sdw-wd14-tagger/docs/what-is-wd14-tagger.md +30 -0
  9. extensions-builtin/sdw-wd14-tagger/install.py +13 -0
  10. extensions-builtin/sdw-wd14-tagger/javascript/tagger.js +180 -0
  11. extensions-builtin/sdw-wd14-tagger/json_schema/db_json_v1_schema.json +52 -0
  12. extensions-builtin/sdw-wd14-tagger/preload.py +26 -0
  13. extensions-builtin/sdw-wd14-tagger/pyproject.toml +38 -0
  14. extensions-builtin/sdw-wd14-tagger/requirements.txt +18 -0
  15. extensions-builtin/sdw-wd14-tagger/scripts/tagger.py +21 -0
  16. extensions-builtin/sdw-wd14-tagger/style.css +43 -0
  17. extensions-builtin/sdw-wd14-tagger/tag_based_image_dedup.sh +88 -0
  18. extensions-builtin/sdw-wd14-tagger/tagger/api.py +119 -0
  19. extensions-builtin/sdw-wd14-tagger/tagger/api_models.py +37 -0
  20. extensions-builtin/sdw-wd14-tagger/tagger/dbimutils.py +81 -0
  21. extensions-builtin/sdw-wd14-tagger/tagger/format.py +46 -0
  22. extensions-builtin/sdw-wd14-tagger/tagger/generator/tf_data_reader.py +133 -0
  23. extensions-builtin/sdw-wd14-tagger/tagger/interrogator.py +660 -0
  24. extensions-builtin/sdw-wd14-tagger/tagger/preset.py +108 -0
  25. extensions-builtin/sdw-wd14-tagger/tagger/settings.py +157 -0
  26. extensions-builtin/sdw-wd14-tagger/tagger/ui.py +482 -0
  27. extensions-builtin/sdw-wd14-tagger/tagger/uiset.py +634 -0
  28. extensions-builtin/sdw-wd14-tagger/tagger/utils.py +131 -0
extensions-builtin/sdw-wd14-tagger/.gitignore ADDED
@@ -0,0 +1,6 @@
1
+ __pycache__/
2
+ .vscode/
3
+ .venv/
4
+ .env
5
+
6
+ presets/
extensions-builtin/sdw-wd14-tagger/CHANGELOG.md ADDED
@@ -0,0 +1,105 @@
1
+ # v1.1.2 (2023-08-26)
2
+
3
+ Explain recursive path usage better in ui
4
+ Fix sending tags via buttons to txt2img and img2img
5
+ type additions, inadvertently pushed, later retouched.
6
+ allow setting gpu device via flag
7
+ Fix inverted cumulative checkbox
8
+ wrap_gradio_gpu_call fallback
9
+ Fix for preload shared access
10
+ preload update
11
+ A few ui changes
12
+ Fix not clearing the tags after writing them to files
13
+ Fix: Tags were still added, beyond count threshold
14
+ fix search/replace bug
15
+ (here int based weights were reverted)
16
+ circumvent when unable to load tensorflow
17
+ fix for too many exclude_tags
18
+ add db.json validation schema, add schema validation
19
+ return fix for fastapi
20
+ pick up huggingface cache dir from env, with default, configurable also via settings.
21
+ leave tensorflow requirements to the user.
22
+ Fix for Reappearance of gradio bug: duplicate image edit
23
+ (index based weights, but later reverted)
24
+ Instead of cache_dir use local_dir, leav
25
+
26
+
27
+ # v1.1.1 eada050 (2023-07-20)
28
+
29
+ Internal cleanup, no separate interrogation for inverse
30
+ Fix issues with search and sending selection to keep/exclude
31
+ Fix issue #14, picking up last edit box changes
32
+ Fix 2 issues reported by guansss
33
+ fix huggingface reload issues. Thanks to Atoli and coder168 for reporting
34
+ experimental tensorflow unloading, but after some discussion, maybe conversion to onnx can solve this. See #17, thanks again Sean Wang.
35
+ add gallery tab, rudimentary.
36
+ fix some hf download issues
37
+ fixes for fastapi
38
+ added ML-Danbooru support, thanks to [CCRcmcpe](github.com/CCRcmcpe)
39
+
40
+
41
+ # v1.1.0 87706b7 (2023-07-16)
42
+
43
+ fix: failed to install onnxruntime package on MacOS thanks to heady713
44
+ fastapi: remote unload model, picked up from [here](https://github.com/toriato/stable-diffusion-webui-wd14-tagger/pull/109)
45
+ attribute error fix from aria1th also reported by yjunej
46
+ re-allowed weighted tags files, now configured in settings -> tagger.
47
+ wzgrx pointed out that some modules were not installed by default, so I've added a requirements.txt file that auto-installs the required dependencies. However, the initial requirements.txt had issues. To create it, I ran:
48
+ ```
49
+ pipreqs --force `pwd`
50
+ sed -i s/==.*$//g requirements.txt
51
+ ```
52
+ but it ended up adding external modules that were shadowing webui modules. If you have installed those, you may find you are not even able to start the webui until you remove them. Change to the directory of my extension and
53
+ ```
54
+ pip uninstall webui
55
+ pip uninstall modules
56
+ pip uninstall launch
57
+ ```
58
+ In particular installing a module named modules was a serious problem. Python should flag that name as illegal.
59
+
60
+ Some interrogators were not working unless you had installed them manually. Now they are only listed if you have them.
61
+
62
+ Thanks to wzgrx for testing and reporting these last two issues.
63
+ changed internal file structure, thanks to idiotcomerce #4
64
+ more regex usage in search and exclusion tags
65
+ fixed a bug where some exclusion tags were not reflected in the tags file
66
+ changed internal error handling; it is still a bit quirky, which I intend to fix.
67
+ If you find it keeps complaining about an input field without reason, just try editing that one again (e.g. add a space there and remove it).
68
+
69
+
70
+ # v1.0.0 a1b59d6 (2023-07-10)
71
+
72
+ You may have to remove presets/default.json and save a new one with your desired defaults. Otherwise checkboxes may not have the right default values.
73
+
74
+ General changes:
75
+
76
+ Weights, when enabled, are not printed in the tags list. They are already displayed as bars in the list below, so printing them adds no information and only obfuscates the list, IMO.
77
+ There is a settings entry for the tagger; several options have been moved there.
78
+ The list of tag weights stops at the number specified on the settings tab (the slider).
79
+ There are both included and excluded tags tabs.
80
+ tags in the tags list on top are clickable.
81
+ Tags below are also clickable. Clicking the dotted line differs from clicking the actual word: a click on the word adds it to the search or kept tags (depending on which was last active); a click on the dotted line adds it to the input box next to it.
82
+ interrogations can be combined (checkbox), also for a single image.
83
+ Made the listed labels clickable again; a click adds the label to the selected listbox. This also works when you are on the discarded tags tab.
84
+ Added search and replace input lists.
85
+ Changed behavior: when clicking on the dotted line, the tag is inserted in the exclude/replace input list; otherwise it is inserted in the additional/search input list.
86
+ Added a Minimum fraction for tags slider. It filters tags based on the fraction of images and interrogations per image that have the tag at the selected weight threshold. I find this kind of filtering makes more sense than limiting the tags list to a fixed number, though that is fine to prevent cluttering up the view.
87
+
88
+ Added a string search selected tags input field (top right) and two buttons:
89
+ Move visible tags to keep tags
90
+ Move visible tags to exclude tags
91
+
92
+ For batch processing:
93
+ After each update a db.json is written to the images folder. It contains the weights for the queries; a rerun of the same images with an interrogator just rereads this db.json. This also works after a stable diffusion reload or a reboot, as long as the db.json is there.
94
+
95
+ There is a huge-batch implementation, but I was unable to test it (I don't have the right tensorflow version). EXPERIMENTAL. It is only enabled if you have the right tf version, and it is likely buggy due to my lack of testing. Feel free to send me a patch if you can improve it; also see here.
96
+ Pre- or appending weights to weighted tag files, i.e. with weights enabled, will instead have the weights averaged.
97
+
98
+ After batch processing, the combined tag count average is listed for all processed files, as well as the corrected average when combining the weighted tags. This is not limited by the tag_count_threshold, as it relates to the weights of all tag files. Conversely, the already existing threshold slider does affect this list length.
99
+ The search field can be a single regex or as many comma-separated entries as there are replacements. Currently either a single regex or an equal number of plain strings in search and replace is allowed, but this is going to change in the near future, to allow regexes and back-referencing per replacement, as in re.sub().
100
+ added a 'verbose setting'.
101
+ a comma was previously missing when appending tags
102
+ several of the interrogators have been fixed.
103
+
104
+
105
+
extensions-builtin/sdw-wd14-tagger/CONTRIBUTING.md ADDED
@@ -0,0 +1,71 @@
1
+ Thanks for taking the time to contribute to this project.
2
+
3
+ The following is a set of guidelines for contributing to this project. These are guidelines, not rules; use your best judgment. This document is also subject to change.
4
+
5
+ Table of Contents
6
+ =================
7
+ 1. Contribution Workflow
8
+ * Styleguides
9
+ * Git Commit Messages
10
+ * Styleguides, general notes
11
+ * JavaScript Styleguide
12
+ * Python Styleguide
13
+ * Documentation Styleguide
14
+ 2. License
15
+ 3. Questions
16
+
17
+ # Contribution Workflow
18
+ * Fork the repo and create your branch from master.
19
+ * If you've added code that should be tested, add tests.
20
+ * If you've changed APIs, update the documentation.
21
+ * Ensure the test suite passes.
22
+ * Make sure your code lints.
23
+ * Issue that pull request!
24
+
25
+ # Styleguides
26
+ ## Git Commit Messages
27
+ * Use the present tense ("Add feature" not "Added feature")
28
+ * Use the imperative mood ("Move cursor to..." not "Moves cursor to...")
29
+ * Limit the first line to 72 characters or less
30
+ * Reference issues and pull requests liberally after the first line
31
+ * When only changing documentation, include [ci skip] in the commit title
32
+ * Consider starting the commit message with an applicable emoji.
33
+ * A sign-off is not required, but encouraged using the -s flag. Example: git commit -s -m "Adding a new feature"
34
+
35
+ Example commit message:
36
+ ```
37
+ :rocket: Adds `launch()` method
38
+
39
+ The launch method accepts a single argument for the speed of the launch.
40
+ This method is necessary to get to the moon and fixes #76.
41
+ This commit closes issue #34
42
+
43
+ Signed-off-by: Jane Doe <Jane.doe@hotmail.com>
44
+ ```
45
+
46
+ ## Styleguides, general notes
47
+ The current code does not follow the below proposed styleguides everywhere. Please try to follow the styleguides as much as possible, but if you see something that is not following the styleguides, please do not change it. Commits should be atomic and only change one thing, and changing the style obfuscates the changes. The same goes for whitespace changes.
48
+
49
+ * If you change current code, please do use the styleguides, even if the code around it does not follow it.
50
+ * If you do not adhere to the styleguides, that is ok as well, but please make sure your code is readable and easy to understand.
51
+
52
+
53
+ ## JavaScript Styleguide
54
+ All JavaScript must adhere to [JavaScript Standard Style](https://standardjs.com/). [![JavaScript Style Guide](https://cdn.rawgit.com/standard/standard/master/badge.svg)](JS%20Style%20Guide)
55
+
56
+ ## Python Styleguide
57
+ Try to adhere to [PEP 8](https://www.python.org/dev/peps/pep-0008/). It is not required, but it is recommended.
58
+
59
+ ## Documentation Styleguide
60
+ Use [JSDoc](http://usejsdoc.org/) syntax to document code.
61
+ Use [GitHub-flavored Markdown](https://guides.github.com/features/mastering-markdown/) syntax to format documentation.
62
+
63
+ Thank you for your interest in contributing to this project!
64
+
65
+ # License
66
+ Largely public domain; I think tagger/dbimutils.py was [MIT](https://choosealicense.com/licenses/mit/)
67
+
68
+ # Questions
69
+ If you have any questions about the repo, open an issue or contact me directly at [email](mailto:pi.co.0o.byte@gmail.com).
70
+
71
+
extensions-builtin/sdw-wd14-tagger/README.ko.md ADDED
@@ -0,0 +1,61 @@
1
+ [Automatic1111 웹UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)를 위한 태깅(라벨링) 확장 기능
2
+ ---
3
+ DeepDanbooru 와 같은 모델을 통해 단일 또는 여러 이미지로부터 부루에서 사용하는 태그를 알아냅니다.
4
+
5
+ [You don't know how to read Korean? Read it in English here!](README.md)
6
+
7
+ ## 들어가기 앞서
8
+ 모델과 대부분의 코드는 제가 만들지 않았고 [DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) 와 MrSmilingWolf 의 태거에서 가져왔습니다.
9
+
10
+ ## 설치하기
11
+ 1. *확장기능* -> *URL로부터 확장기능 설치* -> 이 레포지토리 주소 입력 -> *설치*
12
+ - 또는 이 레포지토리를 `extensions/` 디렉터리 내에 클론합니다.
13
+ ```sh
14
+ $ git clone https://github.com/picobyte/stable-diffusion-webui-wd14-tagger.git extensions/tagger
15
+ ```
16
+
17
+ 1. 모델 추가하기
18
+ - #### *MrSmilingWolf's model (a.k.a. Waifu Diffusion 1.4 tagger)*
19
+ 처음 실행할 때 [HuggingFace 레포지토리](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger)로부터 자동으로 받아옵니다.
20
+
21
+ 모델과 관련된 또는 추가 학습에 대한 질문은 원작자인 MrSmilingWolf#5991 으로 물어봐주세요.
22
+
23
+ - #### *DeepDanbooru*
24
+ 1. 다양한 모델 파일은 아래 주소에서 찾을 수 있습니다.
25
+ - [DeepDanbooru model](https://github.com/KichangKim/DeepDanbooru/releases)
26
+ - [e621 model by 🐾Zack🐾#1984](https://discord.gg/BDFpq9Yb7K)
27
+ *(NSFW 주의!)*
28
+
29
+ 1. 모델과 설정 파일이 포함된 프로젝트 폴더를 `models/deepdanbooru` 경로로 옮깁니다.
30
+
31
+ 1. 파일 구조는 다음과 같습니다:
32
+ ```
33
+ models/
34
+ └╴deepdanbooru/
35
+ ├╴deepdanbooru-v3-20211112-sgd-e28/
36
+ │ ├╴project.json
37
+ │ └╴...
38
+
39
+ ├╴deepdanbooru-v4-20200814-sgd-e30/
40
+ │ ├╴project.json
41
+ │ └╴...
42
+
43
+ ├╴e621-v3-20221117-sgd-e32/
44
+ │ ├╴project.json
45
+ │ └╴...
46
+
47
+ ...
48
+ ```
49
+
50
+ 1. 웹UI 를 시작하거나 재시작합니다.
51
+ - 또는 *Interrogator* 드롭다운 상자 우측에 있는 새로고침 버튼을 누릅니다.
52
+
53
+
54
+ ## 스크린샷
55
+ ![Screenshot](docs/screenshot.png)
56
+
57
+ Artwork made by [hecattaart](https://vk.com/hecattaart?w=wall-89063929_3767)
58
+
59
+ ## 저작권
60
+
61
+ 빌려온 코드(예: `dbimutils.py`)를 제외하고 모두 Public domain
extensions-builtin/sdw-wd14-tagger/README.md ADDED
@@ -0,0 +1,63 @@
1
+ Tagger for [Automatic1111's WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
2
+ ---
3
+ Interrogate booru style tags for single or multiple image files using various models, such as DeepDanbooru.
4
+
5
+ [한국어를 사용하시나요? 여기에 한국어 설명서가 있습니다!](README.ko.md)
6
+
7
+ ## Disclaimer
8
+ I didn't make any of the models, and most of the code was heavily borrowed from [DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) and MrSmilingWolf's tagger.
9
+
10
+ ## Installation
11
+ 1. *Extensions* -> *Install from URL* -> Enter URL of this repository -> Press *Install* button
12
+ - or clone this repository under `extensions/`
13
+ ```sh
14
+ $ git clone https://github.com/picobyte/stable-diffusion-webui-wd14-tagger.git extensions/tagger
15
+ ```
16
+
17
+ 1. *(optional)* Add interrogate model
18
+ - #### [*Waifu Diffusion 1.4 Tagger by MrSmilingWolf*](docs/what-is-wd14-tagger.md)
19
+ Downloads automatically from the [HuggingFace repository](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger) the first time you run it.
20
+
21
+ - #### *DeepDanbooru*
22
+ 1. Various model files can be found below.
23
+ - [DeepDanbooru models](https://github.com/KichangKim/DeepDanbooru/releases)
24
+ - [e621 model by 🐾Zack🐾#1984](https://discord.gg/BDFpq9Yb7K)
25
+ *(link contains NSFW contents!)*
26
+
27
+ 1. Move the project folder containing the model and config to `models/deepdanbooru`
28
+
29
+ 1. The file structure should look like:
30
+ ```
31
+ models/
32
+ └╴deepdanbooru/
33
+ ├╴deepdanbooru-v3-20211112-sgd-e28/
34
+ │ ├╴project.json
35
+ │ └╴...
36
+
37
+ ├╴deepdanbooru-v4-20200814-sgd-e30/
38
+ │ ├╴project.json
39
+ │ └╴...
40
+
41
+ ├╴e621-v3-20221117-sgd-e32/
42
+ │ ├╴project.json
43
+ │ └╴...
44
+
45
+ ...
46
+ ```
47
+
48
+ 1. Start or restart the WebUI.
49
+ - or you can press the refresh button next to the *Interrogator* dropdown box.
50
+ - "You must close stable diffusion completely after installation and re-run it!"
51
+
52
+
53
+ ## Model comparison
54
+ [Model comparison](docs/model-comparison.md)
55
+
56
+ ## Screenshot
57
+ ![Screenshot](docs/screenshot.png)
58
+
59
+ Artwork made by [hecattaart](https://vk.com/hecattaart?w=wall-89063929_3767)
60
+
61
+ ## Copyright
62
+
63
+ Public domain, except borrowed parts (e.g. `dbimutils.py`)
extensions-builtin/sdw-wd14-tagger/docs/model-comparison.md ADDED
@@ -0,0 +1,49 @@
1
+ Model comparison
2
+ ---
3
+
4
+ * Used image: [hecattaart's artwork](https://vk.com/hecattaart?w=wall-89063929_3767)
5
+ * Threshold: `0.5`
6
+
7
+ ### DeepDanbooru
8
+
9
+ #### [`deepdanbooru-v3-20211112-sgd-e28`](https://github.com/KichangKim/DeepDanbooru/releases/tag/v3-20211112-sgd-e28)
10
+ ```
11
+ 1girl, animal ears, cat ears, cat tail, clothes writing, full body, rating:safe, shiba inu, shirt, shoes, simple background, sneakers, socks, solo, standing, t-shirt, tail, white background, white shirt
12
+ ```
13
+
14
+ #### [`deepdanbooru-v4-20200814-sgd-e30`](https://github.com/KichangKim/DeepDanbooru/releases/tag/v4-20200814-sgd-e30)
15
+ ```
16
+ 1girl, animal, animal ears, bottomless, clothes writing, full body, rating:safe, shirt, shoes, short sleeves, sneakers, solo, standing, t-shirt, tail, white background, white shirt
17
+ ```
18
+
19
+ #### `e621-v3-20221117-sgd-e32`
20
+ ```
21
+ anthro, bottomwear, clothing, footwear, fur, hi res, mammal, shirt, shoes, shorts, simple background, sneakers, socks, solo, standing, text on clothing, text on topwear, topwear, white background
22
+ ```
23
+
24
+ ### Waifu Diffusion Tagger
25
+
26
+ #### [`wd14-vit`](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger)
27
+ ```
28
+ 1boy, animal ears, dog, furry, leg hair, male focus, shirt, shoes, simple background, socks, solo, tail, white background
29
+ ```
30
+
31
+ #### [`wd14-convnext`](https://huggingface.co/SmilingWolf/wd-v1-4-convnext-tagger)
32
+ ```
33
+ full body, furry, shirt, shoes, simple background, socks, solo, tail, white background
34
+ ```
35
+
36
+ #### [`wd14-vit-v2`](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger-v2)
37
+ ```
38
+ 1boy, animal ears, cat, furry, male focus, shirt, shoes, simple background, socks, solo, tail, white background
39
+ ```
40
+
41
+ #### [`wd14-convnext-v2`](https://huggingface.co/SmilingWolf/wd-v1-4-convnext-tagger-v2)
42
+ ```
43
+ animal focus, clothes writing, earrings, full body, meme, shirt, shoes, simple background, socks, solo, sweat, tail, white background, white shirt
44
+ ```
45
+
46
+ #### [`wd14-swinv2-v2`](https://huggingface.co/SmilingWolf/wd-v1-4-swinv2-tagger-v2)
47
+ ```
48
+ 1boy, arm hair, black footwear, cat, dirty, full body, furry, leg hair, male focus, shirt, shoes, simple background, socks, solo, standing, tail, white background, white shirt
49
+ ```
extensions-builtin/sdw-wd14-tagger/docs/screenshot.png ADDED

Git LFS Details

  • SHA256: 4527449f4d38ca6b55482f510696eb3e2d267344d74c93b6106eb5158beed7a4
  • Pointer size: 131 Bytes
  • Size of remote file: 160 kB
extensions-builtin/sdw-wd14-tagger/docs/what-is-wd14-tagger.md ADDED
@@ -0,0 +1,30 @@
1
+ What is the Waifu Diffusion 1.4 Tagger?
2
+ ---
3
+
4
+ Image to text model created and maintained by [MrSmilingWolf](https://huggingface.co/SmilingWolf), which was used to train Waifu Diffusion.
5
+
6
+ Please ask the original author `MrSmilingWolf#5991` for questions related to model or additional training.
7
+
8
+ ## SwinV2 vs Convnext vs ViT
9
+ > It's got characters now, the HF space has been updated too. Model of choice for classification is SwinV2 now. ConvNext was used to extract features because SwinV2 is a bit of a pain cuz it is twice as slow and more memory intensive
10
+
11
+ — [this message](https://discord.com/channels/930499730843250783/930499731451428926/1066830289382408285) from the [東方Project AI discord server](https://discord.com/invite/touhouai)
12
+
13
+ > To make it clear: the ViT model is the one used to tag images for WD 1.4. That's why the repo was originally called like that. This one has been trained on the same data and tags, but has got no other relation to WD 1.4, aside from stemming from the same coordination effort. They were trained in parallel, and the best one at the time was selected for WD 1.4
14
+ >
15
+ > This particular model was trained later and might actually be slightly better than the ViT one. Difference is in the noise range tho
16
+
17
+ — [this thread](https://discord.com/channels/930499730843250783/1052283314997837955) from the [東方Project AI discord server](https://discord.com/invite/touhouai)
18
+
19
+ ## Performance
20
+ > I stack them together and get a 1.1GB model with higher validation metrics than the three separated, so they each do their own thing and averaging the predictions sorta helps covering for each models failures. I suppose.
21
+ > As for my impression for each model:
22
+ > - SwinV2: a memory and GPU hog. Best metrics of the bunch, my model is compatible with timm weights (so it can be used on PyTorch if somebody ports it) but slooow. Good for a few predictions, would reconsider for massive tagging jobs if you're pressed for time
23
+ > - ConvNext: nice perfs, good metrics. A sweet spot. The 1024 final embedding size provides ample space for training the Dense layer on other datasets, like E621.
24
+ > - ViT: fastest of the bunch, at least on TPU, probably on GPU too? Slightly less then stellar metrics when compared with the other two. Onnxruntime and Tensorflow keep adding optimizations for Transformer models so that's good too.
25
+
26
+ — [this message](https://discord.com/channels/930499730843250783/930499731451428926/1066833768112996384) from the [東方Project AI discord server](https://discord.com/invite/touhouai)
27
+
28
+ ## Links
29
+ - [MrSmilingWolf's HuggingFace profile](https://huggingface.co/SmilingWolf)
30
+ - [MrSmilingWolf's GitHub profile](https://github.com/SmilingWolf)
extensions-builtin/sdw-wd14-tagger/install.py ADDED
@@ -0,0 +1,13 @@
1
+ """Install requirements for WD14-tagger."""
2
+ import os
3
+ import sys
4
+
5
+ from launch import run # pylint: disable=import-error
6
+
7
+ NAME = "WD14-tagger"
8
+ req_file = os.path.join(os.path.dirname(os.path.realpath(__file__)),
9
+ "requirements.txt")
10
+ print(f"loading {NAME} reqs from {req_file}")
11
+ run(f'"{sys.executable}" -m pip install -q -r "{req_file}"',
12
+ f"Checking {NAME} requirements.",
13
+ f"Couldn't install {NAME} requirements.")
extensions-builtin/sdw-wd14-tagger/javascript/tagger.js ADDED
@@ -0,0 +1,180 @@
1
+ /**
2
+ * wait until element is loaded and returns
3
+ * @param {string} selector
4
+ * @param {number} timeout
5
+ * @param {Element} $rootElement
6
+ * @returns {Promise<HTMLElement>}
7
+ */
8
+ function waitQuerySelector(selector, timeout = 5000, $rootElement = gradioApp()) {
9
+ return new Promise((resolve, reject) => {
10
+ const element = $rootElement.querySelector(selector)
11
+ if (element) { // element already present: resolve immediately
12
+ return resolve(element)
13
+ }
14
+
15
+ let timeoutId
16
+
17
+ const observer = new MutationObserver(() => {
18
+ const element = $rootElement.querySelector(selector)
19
+ if (!element) {
20
+ return
21
+ }
22
+
23
+ if (timeoutId) {
24
+ clearInterval(timeoutId)
25
+ }
26
+
27
+ observer.disconnect()
28
+ resolve(element)
29
+ })
30
+
31
+ timeoutId = setTimeout(() => {
32
+ observer.disconnect()
33
+ reject(new Error(`timeout, cannot find element by '${selector}'`))
34
+ }, timeout)
35
+
36
+ observer.observe($rootElement, {
37
+ childList: true,
38
+ subtree: true
39
+ })
40
+ })
41
+ }
42
+
43
+ function tag_clicked(tag, is_inverse) {
44
+ // escaped characters
45
+ const escapedTag = tag.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
46
+
47
+ // add the tag to the selected textarea
48
+ let $selectedTextarea;
49
+ if (is_inverse) {
50
+ $selectedTextarea = document.getElementById('keep-tags');
51
+ } else {
52
+ $selectedTextarea = document.getElementById('exclude-tags');
53
+ }
54
+ let value = $selectedTextarea.querySelector('textarea').value;
55
+ // ignore if tag is already exist in textbox
56
+ const pattern = new RegExp(`(^|,)\\s{0,}${escapedTag}\\s{0,}($|,)`);
57
+ if (pattern.test(value)) {
58
+ return;
59
+ }
60
+ const emptyRegex = new RegExp(`^\\s*$`);
61
+ if (!emptyRegex.test(value)) {
62
+ value += ', ';
63
+ }
64
+ // besides setting the value an event needs to be triggered or the value isn't actually stored.
65
+ const input_event = new Event('input');
66
+ $selectedTextarea.querySelector('textarea').value = value + escapedTag;
67
+ $selectedTextarea.dispatchEvent(input_event);
68
+ const input_event2 = new Event('blur');
69
+ $selectedTextarea.dispatchEvent(input_event2);
70
+ }
71
+
72
+ document.addEventListener('DOMContentLoaded', () => {
73
+ Promise.all([
74
+ // option texts
75
+ waitQuerySelector('#keep-tags'),
76
+ waitQuerySelector('#exclude-tags'),
77
+ waitQuerySelector('#search-tags'),
78
+ waitQuerySelector('#replace-tags'),
79
+
80
+ // tag-confident labels
81
+ waitQuerySelector('#rating-confidences'),
82
+ waitQuerySelector('#tag-confidences'),
83
+ waitQuerySelector('#discard-tag-confidences')
84
+ ]).then(elements => {
85
+
86
+ const $keepTags = elements[0];
87
+ const $excludeTags = elements[1];
88
+ const $searchTags = elements[2];
89
+ const $replaceTags = elements[3];
90
+ const $ratingConfidents = elements[4];
91
+ const $tagConfidents = elements[5];
92
+ const $discardTagConfidents = elements[6];
93
+
94
+ let $selectedTextarea = $keepTags;
95
+
96
+ /**
97
+ * @this {HTMLElement}
98
+ * @param {MouseEvent} e
99
+ * @listens document#click
100
+ */
101
+ function onClickTextarea(e) {
102
+ $selectedTextarea = this;
103
+ }
104
+
105
+ $keepTags.addEventListener('click', onClickTextarea);
106
+ $excludeTags.addEventListener('click', onClickTextarea);
107
+ $searchTags.addEventListener('click', onClickTextarea);
108
+ $replaceTags.addEventListener('click', onClickTextarea);
109
+
110
+ /**
111
+ * @this {HTMLElement}
112
+ * @param {MouseEvent} e
113
+ * @listens document#click
114
+ */
115
+ function onClickLabels(e) {
116
+ // find clicked label item's wrapper element
117
+ let tag = e.target.innerText;
118
+
119
+ // when clicking unlucky, you get all tags and percentages. Prevent inserting those here.
120
+ const multiTag = new RegExp(`\\n.*\\n`);
121
+ if (tag.match(multiTag)) {
122
+ return;
123
+ }
124
+
125
+ // when clicking on the dotted line or the percentage, you get the percentage as well. Don't include it in the tags.
126
+ // use this fact to choose whether to insert in positive or negative. May require some getting used to, but saves
127
+ // having to select the input field.
128
+ const pctPattern = new RegExp(`\\n?([0-9.]+)%$`);
129
+ let percentage = tag.match(pctPattern);
130
+ if (percentage) {
131
+ tag = tag.replace(pctPattern, '');
132
+ if (tag == '') {
133
+ //percentage = percentage[1];
134
+ // could trigger a set Threshold value event
135
+ return;
136
+ }
137
+ // when clicking on the dotted line, insert in either the exclude or replace list
138
+ // when not clicking on the dotted line, insert in the additional or search list
139
+ if ($selectedTextarea == $keepTags) {
140
+ $selectedTextarea = $excludeTags;
141
+ } else if ($selectedTextarea == $searchTags) {
142
+ $selectedTextarea = $replaceTags;
143
+ }
144
+ } else if ($selectedTextarea == $excludeTags) {
145
+ $selectedTextarea = $keepTags;
146
+ } else if ($selectedTextarea == $replaceTags) {
147
+ $selectedTextarea = $searchTags;
148
+ }
149
+
150
+ let value = $selectedTextarea.querySelector('textarea').value;
151
+ // except replace_tag because multiple can be replaced with the same
152
+ if ($selectedTextarea != $replaceTags) {
153
+ // ignore if tag is already exist in textbox
154
+ const escapedTag = tag.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
155
+ const pattern = new RegExp(`(^|,)\\s{0,}${escapedTag}\\s{0,}($|,)`);
156
+ if (pattern.test(value)) {
157
+ return;
158
+ }
159
+ }
160
+
161
+ // besides setting the value an event needs to be triggered or the value isn't actually stored.
162
+ const spaceOrAlreadyWithComma = new RegExp(`(^|.*,)\\s*$`);
163
+ if (!spaceOrAlreadyWithComma.test(value)) {
164
+ value += ', ';
165
+ }
166
+ const input_event = new Event('input');
167
+ $selectedTextarea.querySelector('textarea').value = value + tag;
168
+ $selectedTextarea.querySelector('textarea').dispatchEvent(input_event);
169
+ const input_event2 = new Event('blur');
170
+ $selectedTextarea.querySelector('textarea').dispatchEvent(input_event2);
171
+
172
+ }
173
+
174
+ $tagConfidents.addEventListener('click', onClickLabels)
175
+ $discardTagConfidents.addEventListener('click', onClickLabels)
176
+
177
+ }).catch(err => {
178
+ console.error(err)
179
+ })
180
+ })
extensions-builtin/sdw-wd14-tagger/json_schema/db_json_v1_schema.json ADDED
@@ -0,0 +1,52 @@
1
+ {
2
+ "type": "object",
3
+ "properties": {
4
+ "rating": { "$ref": "#/$defs/weighted_label" },
5
+ "tag": { "$ref": "#/$defs/weighted_label" },
6
+ "query": {
7
+ "type": "object",
8
+ "patternProperties": {
9
+ "^[0-9a-f]{64}.*$": {
10
+ "type": "array",
11
+ "prefixItems": [
12
+ {"type": "string" },
13
+ {"type": "number", "minimum": 0}
14
+ ],
15
+ "minContains": 2,
16
+ "maxContains": 2
17
+ }
18
+ }
19
+ },
20
+ "meta": {
21
+ "type": "object",
22
+ "properties": {
23
+ "index_shift": {
24
+ "type": "integer",
25
+ "minimum": 0,
26
+ "maximum": 16
27
+ }
28
+ }
29
+ },
30
+ "add": { "type": "string" },
31
+ "exclude": { "type": "string" },
32
+ "keep": { "type": "string" },
33
+ "repl": { "type": "string" },
34
+ "search": { "type": "string" }
35
+ },
36
+ "required": ["rating", "tag", "query"],
37
+ "additionalProperties": false,
38
+ "$defs": {
39
+ "weighted_label": {
40
+ "type": "object",
41
+ "patternProperties": {
42
+ "^[^,]+$": {
43
+ "type": "array",
44
+ "items": {
45
+ "type": "number",
46
+ "minimum": 0
47
+ }
48
+ }
49
+ }
50
+ }
51
+ }
52
+ }
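
For reference, a candidate db.json can be checked against this schema with the jsonschema package (already listed in requirements.txt). A minimal sketch; the values and field layout below are illustrative guesses, not output from a real run:

```python
# Hypothetical sketch: validate a hand-written db.json against the v1 schema above.
# Assumed layout: "query" maps "<sha256><filename>" -> [filename, index];
# "rating"/"tag" map a label -> list of weights (one per query index).
import json
from jsonschema import validate

candidate = {
    "rating": {"safe": [0.92], "questionable": [0.05]},
    "tag": {"1girl": [0.98], "solo": [0.95]},
    "query": {
        # key pattern: 64 hex chars (sha256) followed by anything, e.g. the file name
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855img.png":
            ["img.png", 0],
    },
    "meta": {"index_shift": 0},
}

# path is relative to the extension root
with open("json_schema/db_json_v1_schema.json", encoding="utf-8") as f:
    schema = json.load(f)

validate(instance=candidate, schema=schema)  # raises ValidationError on mismatch
print("candidate db.json matches the v1 schema")
```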
extensions-builtin/sdw-wd14-tagger/preload.py ADDED
@@ -0,0 +1,26 @@
1
+ """ Preload module for DeepDanbooru or onnxtagger. """
2
+ from argparse import ArgumentParser
3
+
4
+
5
+ def preload(parser: ArgumentParser):
6
+ """ Preload module for DeepDanbooru or onnxtagger. """
7
+ # default deepdanbooru use different paths:
8
+ # models/deepbooru and models/torch_deepdanbooru
9
+ # https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/c81d440d876dfd2ab3560410f37442ef56fc6632
10
+
11
+ parser.add_argument(
12
+ '--deepdanbooru-projects-path',
13
+ type=str,
14
+ help='Path to directory with DeepDanbooru project(s).'
15
+ )
16
+ parser.add_argument(
17
+ '--onnxtagger-path',
18
+ type=str,
19
+ help='Path to directory with Onnx project(s).'
20
+ )
21
+ # TODO allow using devices in parallel, specified as a comma-separated list
22
+ parser.add_argument(
23
+ '--additional-device-ids',
24
+ type=str,
25
+ help='Device ID to use. cpu:0, gpu:0 or gpu:1, etc.',
26
+ )
extensions-builtin/sdw-wd14-tagger/pyproject.toml ADDED
@@ -0,0 +1,38 @@
1
+ [tool.ruff]
2
+
3
+ target-version = "py39"
4
+
5
+ extend-select = [
6
+ "B",
7
+ "C",
8
+ "I",
9
+ "W",
10
+ ]
11
+
12
+ exclude = [
13
+ "addons",
14
+ ]
15
+
16
+ ignore = [
17
+ "E501", # Line too long
18
+ "E731", # Do not assign a `lambda` expression, use a `def`
19
+
20
+ "I001", # Import block is un-sorted or un-formatted
21
+ "C901", # Function is too complex
22
+ "C408", # Rewrite as a literal
23
+ "W605", # invalid escape sequence, messes with some docstrings
24
+ ]
25
+
26
+ #[tool.ruff.per-file-ignores]
27
+ #"webui.py" = ["E402"] # Module level import not at top of file
28
+
29
+ #[tool.ruff.flake8-bugbear]
30
+ # Allow default arguments like, e.g., `data: List[str] = fastapi.Query(None)`.
31
+ #extend-immutable-calls = ["fastapi.Depends", "fastapi.security.HTTPBasic"]
32
+
33
+ [tool.pytest.ini_options]
34
+ base_url = "http://127.0.0.1:7860"
35
+
36
+ [tool.pylint.'MESSAGES CONTROL']
37
+ extension-pkg-whitelist = ["pydantic"]
38
+ disable= ["C", "R", "W", "E", "I"]
extensions-builtin/sdw-wd14-tagger/requirements.txt ADDED
@@ -0,0 +1,18 @@
1
+ deepdanbooru
2
+ onnxruntime; python_version != '3.9' and sys_platform == 'darwin' and platform_machine != 'arm64'
3
+ onnxruntime-coreml; python_version == '3.9' and sys_platform == 'darwin' and platform_machine != 'arm64'
4
+ onnxruntime-silicon; sys_platform == 'darwin' and platform_machine == 'arm64'
5
+ onnxruntime-gpu; sys_platform != 'darwin'
6
+ jsonschema
7
+ fastapi
8
+ gradio
9
+ huggingface_hub
10
+ numpy
11
+ opencv_contrib_python
12
+ opencv_python
13
+ opencv_python_headless
14
+ packaging
15
+ pandas
16
+ Pillow
17
+ tensorflow
18
+ tqdm
extensions-builtin/sdw-wd14-tagger/scripts/tagger.py ADDED
@@ -0,0 +1,21 @@
1
+ """Tagger module entry point."""
2
+ from PIL import Image, ImageFile
3
+
4
+ from modules import script_callbacks # pylint: disable=import-error
5
+ from tagger.api import on_app_started # pylint: disable=import-error
6
+ from tagger.ui import on_ui_tabs # pylint: disable=import-error
7
+ from tagger.settings import on_ui_settings # pylint: disable=import-error
8
+
9
+
10
+ # if you do not initialize the Image object
11
+ # Image.registered_extensions() returns only PNG
12
+ Image.init()
13
+
14
+ # PIL spits errors when loading a truncated image by default
15
+ # https://pillow.readthedocs.io/en/stable/reference/ImageFile.html#PIL.ImageFile.LOAD_TRUNCATED_IMAGES
16
+ ImageFile.LOAD_TRUNCATED_IMAGES = True
17
+
18
+
19
+ script_callbacks.on_app_started(on_app_started)
20
+ script_callbacks.on_ui_tabs(on_ui_tabs)
21
+ script_callbacks.on_ui_settings(on_ui_settings)
extensions-builtin/sdw-wd14-tagger/style.css ADDED
@@ -0,0 +1,43 @@
1
+ #rating-confidences .output-label>div:not(:first-child) {
2
+ cursor: pointer;
3
+ }
4
+
5
+ #tag-confidences .output-label>div:not(:first-child) {
6
+ cursor: pointer;
7
+ }
8
+
9
+ #rating-confidences .output-label>div:not(:first-child):hover {
10
+ color: #f5f5f5;
11
+ }
12
+
13
+ #tag-confidences .output-label>div:not(:first-child):hover {
14
+ color: #f5f5f5;
15
+ }
16
+
17
+ #rating-confidences .output-label>div:not(:first-child):active {
18
+ color: #e6e6e6;
19
+ }
20
+
21
+ #tag-confidences .output-label>div:not(:first-child):active {
22
+ color: #e6e6e6;
23
+ }
24
+
25
+ #discard-tag-confidences .output-label>div:not(:first-child) {
26
+ cursor: pointer;
27
+ }
28
+
29
+ #discard-tag-confidences .output-label>div:not(:first-child):hover {
30
+ color: #f5f5f5;
31
+ }
32
+
33
+ #discard-tag-confidences .output-label>div:not(:first-child):active {
34
+ color: #e6e6e6;
35
+ }
36
+ #tags a {
37
+ font-weight: inherit;
38
+ color: #888;
39
+ }
40
+ #tags a:hover {
41
+ color: #f5f5f5;
42
+ }
43
+
extensions-builtin/sdw-wd14-tagger/tag_based_image_dedup.sh ADDED
@@ -0,0 +1,88 @@
1
+ #!/bin/bash
2
+
3
+ # this script is for deduping images based on tags after they have been interrogated using this extension
4
+ #
5
+ # the file removal instructions are written to remove_instructions.sh
6
+ # you have to manually run remove_instructions.sh to remove the files
7
+ # this script requires exiftool and feh
8
+ # TODO: implement this in the extension
9
+ #
10
+ # Usage:
11
+ # repo_dir=/path/to/repo
12
+ # cd /path/to/images
13
+ #
14
+
15
+ # use tabs as field separator
16
+ while read -r -d '\t' first_file second_file etc; do
17
+ # images may be jpg jpeg or png
18
+ first_image=$(basename "$first_file" ".txt")
19
+ if [[ -f "$first_image.jpg" ]]; then
20
+ first_image="$first_image.jpg"
21
+ elif [[ -f "$first_image.jpeg" ]]; then
22
+ first_image="$first_image.jpeg"
23
+ elif [[ -f "$first_image.png" ]]; then
24
+ first_image="$first_image.png"
25
+ else
26
+ echo "No image file found for $first_file" 1>&2
27
+ continue
28
+ fi
29
+ second_image=$(basename "$second_file" ".txt")
30
+ if [[ -f "$second_image.jpg" ]]; then
31
+ second_image="$second_image.jpg"
32
+ elif [[ -f "$second_image.jpeg" ]]; then
33
+ second_image="$second_image.jpeg"
34
+ elif [[ -f "$second_image.png" ]]; then
35
+ second_image="$second_image.png"
36
+ else
37
+ echo "No image file found for $second_file" 1>&2
38
+ continue
39
+ fi
40
+ feh -g 950x800+5+30 -Z --scale-down -d -S filename --title "$first_image" "$first_image"&
41
+ pid1=$!
42
+ feh -g 950x800+963+30 -Z --scale-down -d -S filename --title "$second_image" "$second_image"&
43
+ pid2=$!
44
+ read -p "Are $first_image and $second_image the same? " -n 1 -r REPLY </dev/tty 1>&2
45
+ if [[ ! $REPLY =~ ^[Yy]$ ]]; then
46
+ echo "Not the same" 1>&2
47
+ continue
48
+ fi
49
+ # keep file with largest dimensions
50
+ first_width=$(exiftool "$first_image" | grep -E '^Image Width' | cut -d ':' -f 2)
51
+ first_height=$(exiftool "$first_image" | grep -E '^Image Height' | cut -d ':' -f 2)
52
+ second_width=$(exiftool "$second_image" | grep -E '^Image Width' | cut -d ':' -f 2)
53
+ second_height=$(exiftool "$second_image" | grep -E '^Image Height' | cut -d ':' -f 2)
54
+ echo -e "$first_image: ${first_width}x${first_height}\t-\t$second_image: ${second_width}x${second_height}" 1>&2
55
+ first_product=$((first_width * first_height))
56
+ second_product=$((second_width * second_height))
57
+
58
+ if [ $first_product -eq $second_product ]; then
59
+ read -p "Same size for 1) $first_image and 2) $second_image. Which one do you want to keep? (1/2) [skip]" -n 1 -r REPLY </dev/tty 1>&2
60
+ if [[ $REPLY =~ ^[1]$ ]]; then
61
+ echo "Keeping $first_file" 1>&2
62
+ echo rm "$second_file" "$second_image"
63
+ elif [[ $REPLY =~ ^[2]$ ]]; then
64
+ echo "Keeping $second_file" 1>&2
65
+ echo rm "$first_file" "$first_image"
66
+ else
67
+ echo "Skipping" 1>&2
68
+ fi
69
+ elif [ $((first_width * first_height)) -gt $((second_width * second_height)) ]; then
70
+ echo "Keeping $first_file" 1>&2
71
+ echo rm "$second_file" "$second_image"
72
+ else
73
+ echo "Keeping $second_file" 1>&2
74
+ echo rm "$first_file" "$first_image"
75
+ fi
76
+ kill $pid1 $pid2
77
+ done < <(
78
+ ls -1 *.txt | while read f; do
79
+ sed 's/, /\n/g' "$f" | sort | tr '\n' ',' | sed "s~,$~\t$f\n~"
80
+ done | sort | awk -F'\t' '{
81
+ a[$1] = a[$1] == "" ? $2 : a[$1]"\t"$2;
82
+ } END {
83
+ for (i in a) {
84
+ if (index(a[i], "\t") != 0) {
85
+ print a[i];
86
+ }
87
+ }
88
+ }') > remove_instructions.sh
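
The sed/sort/awk pipeline feeding the loop above groups caption .txt files whose sorted tag sets are identical. A rough Python sketch of just that grouping step, assuming the same flat directory of comma-separated caption files (not part of the extension):

```python
# Group caption .txt files by their (sorted) tag set; groups with more than
# one member are likely duplicate images. Mirrors the sed/sort/awk step above.
from collections import defaultdict
from pathlib import Path

groups = defaultdict(list)
for txt in Path(".").glob("*.txt"):
    tags = sorted(t.strip() for t in txt.read_text(encoding="utf-8").split(",") if t.strip())
    groups[tuple(tags)].append(txt.name)

for files in groups.values():
    if len(files) > 1:
        print("\t".join(files))  # candidates for manual review, tab separated
```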
extensions-builtin/sdw-wd14-tagger/tagger/api.py ADDED
@@ -0,0 +1,119 @@
1
+ """API module for FastAPI"""
2
+ from typing import Callable
3
+ from threading import Lock
4
+ from secrets import compare_digest
5
+
6
+ from modules import shared # pylint: disable=import-error
7
+ from modules.api.api import decode_base64_to_image # pylint: disable=E0401
8
+ from modules.call_queue import queue_lock # pylint: disable=import-error
9
+ from fastapi import FastAPI, Depends, HTTPException
10
+ from fastapi.security import HTTPBasic, HTTPBasicCredentials
11
+
12
+ from tagger import utils # pylint: disable=import-error
13
+ from tagger import api_models as models # pylint: disable=import-error
14
+ from tagger.uiset import QData # pylint: disable=import-error
15
+
16
+
17
+ class Api:
18
+ """Api class for FastAPI"""
19
+ def __init__(
20
+ self, app: FastAPI, qlock: Lock, prefix: str = None
21
+ ) -> None:
22
+ if shared.cmd_opts.api_auth:
23
+ self.credentials = {}
24
+ for auth in shared.cmd_opts.api_auth.split(","):
25
+ user, password = auth.split(":")
26
+ self.credentials[user] = password
27
+
28
+ self.app = app
29
+ self.queue_lock = qlock
30
+ self.prefix = prefix
31
+
32
+ self.add_api_route(
33
+ 'interrogate',
34
+ self.endpoint_interrogate,
35
+ methods=['POST'],
36
+ response_model=models.TaggerInterrogateResponse
37
+ )
38
+
39
+ self.add_api_route(
40
+ 'interrogators',
41
+ self.endpoint_interrogators,
42
+ methods=['GET'],
43
+ response_model=models.InterrogatorsResponse
44
+ )
45
+
46
+ self.add_api_route(
47
+ "unload-interrogators",
48
+ self.endpoint_unload_interrogators,
49
+ methods=["POST"],
50
+ response_model=str,
51
+ )
52
+
53
+ def auth(self, creds: HTTPBasicCredentials = None):
54
+ if creds is None:
55
+ creds = Depends(HTTPBasic())
56
+ if creds.username in self.credentials:
57
+ if compare_digest(creds.password,
58
+ self.credentials[creds.username]):
59
+ return True
60
+
61
+ raise HTTPException(
62
+ status_code=401,
63
+ detail="Incorrect username or password",
64
+ headers={
65
+ "WWW-Authenticate": "Basic"
66
+ })
67
+
68
+ def add_api_route(self, path: str, endpoint: Callable, **kwargs):
69
+ if self.prefix:
70
+ path = f'{self.prefix}/{path}'
71
+
72
+ if shared.cmd_opts.api_auth:
73
+ return self.app.add_api_route(path, endpoint, dependencies=[
74
+ Depends(self.auth)], **kwargs)
75
+ return self.app.add_api_route(path, endpoint, **kwargs)
76
+
77
+ def endpoint_interrogate(self, req: models.TaggerInterrogateRequest):
78
+ if req.image is None:
79
+ raise HTTPException(404, 'Image not found')
80
+
81
+ if req.model not in utils.interrogators.keys():
82
+ raise HTTPException(404, 'Model not found')
83
+
84
+ image = decode_base64_to_image(req.image)
85
+ interrogator = utils.interrogators[req.model]
86
+
87
+ with self.queue_lock:
88
+ QData.tags.clear()
89
+ QData.ratings.clear()
90
+ QData.in_db.clear()
91
+ QData.for_tags_file.clear()
92
+ data = ('', '', '') + interrogator.interrogate(image)
93
+ QData.apply_filters(data)
94
+ output = QData.finalize(1)
95
+
96
+ return models.TaggerInterrogateResponse(
97
+ caption={
98
+ **output[0],
99
+ **output[1],
100
+ **output[2],
101
+ })
102
+
103
+ def endpoint_interrogators(self):
104
+ return models.InterrogatorsResponse(
105
+ models=list(utils.interrogators.keys())
106
+ )
107
+
108
+ def endpoint_unload_interrogators(self):
109
+ unloaded_models = 0
110
+
111
+ for i in utils.interrogators.values():
112
+ if i.unload():
113
+ unloaded_models = unloaded_models + 1
114
+
115
+ return f"Successfully unload {unloaded_models} model(s)"
116
+
117
+
118
+ def on_app_started(_, app: FastAPI):
119
+ Api(app, queue_lock, '/tagger/v1')
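
The routes registered above end up under the prefix passed in on_app_started, i.e. /tagger/v1. A minimal client sketch using requests (not a dependency of this extension), assuming a local WebUI launched with --api; field names follow tagger/api_models.py and the stock InterrogateRequest:

```python
# Minimal client sketch for the endpoints registered above.
import base64
import requests

BASE = "http://127.0.0.1:7860/tagger/v1"

# list the available interrogator models
models = requests.get(f"{BASE}/interrogators", timeout=60).json()["models"]
print(models)

# interrogate a single image with the first model
with open("example.png", "rb") as f:  # hypothetical image path
    image_b64 = base64.b64encode(f.read()).decode()

payload = {"image": image_b64, "model": models[0], "threshold": 0.35}
resp = requests.post(f"{BASE}/interrogate", json=payload, timeout=600)
print(resp.json()["caption"])  # {tag_or_rating: confidence, ...}
```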
extensions-builtin/sdw-wd14-tagger/tagger/api_models.py ADDED
@@ -0,0 +1,37 @@
1
+ """Purpose: Pydantic models for the API."""
2
+ from typing import List, Dict
3
+
4
+ from modules.api import models as sd_models # pylint: disable=E0401
5
+ from pydantic import BaseModel, Field
6
+
7
+
8
+ class TaggerInterrogateRequest(sd_models.InterrogateRequest):
9
+ """Interrogate request model"""
10
+ model: str = Field(
11
+ title='Model',
12
+ description='The interrogate model used.'
13
+ )
14
+
15
+ threshold: float = Field(
16
+ default=0.35,
17
+ title='Threshold',
18
+ description='',
19
+ ge=0,
20
+ le=1
21
+ )
22
+
23
+
24
+ class TaggerInterrogateResponse(BaseModel):
25
+ """Interrogate response model"""
26
+ caption: Dict[str, float] = Field(
27
+ title='Caption',
28
+ description='The generated caption for the image.'
29
+ )
30
+
31
+
32
+ class InterrogatorsResponse(BaseModel):
33
+ """Interrogators response model"""
34
+ models: List[str] = Field(
35
+ title='Models',
36
+ description=''
37
+ )
extensions-builtin/sdw-wd14-tagger/tagger/dbimutils.py ADDED
@@ -0,0 +1,81 @@
1
+ """DanBooru IMage Utility functions"""
2
+
3
+ import cv2
4
+ import numpy as np
5
+ from PIL import Image
6
+
7
+
8
+ def fill_transparent(image: Image.Image, color='WHITE'):
9
+ image = image.convert('RGBA')
10
+ new_image = Image.new('RGBA', image.size, color)
11
+ new_image.paste(image, mask=image)
12
+ image = new_image.convert('RGB')
13
+ return image
14
+
15
+
16
+ def resize(pic: Image.Image, size: int, keep_ratio=True) -> Image.Image:
17
+ if not keep_ratio:
18
+ target_size = (size, size)
19
+ else:
20
+ min_edge = min(pic.size)
21
+ target_size = (
22
+ int(pic.size[0] / min_edge * size),
23
+ int(pic.size[1] / min_edge * size),
24
+ )
25
+
26
+ target_size = (target_size[0] & ~3, target_size[1] & ~3)
27
+
28
+ return pic.resize(target_size, resample=Image.Resampling.LANCZOS)
29
+
30
+
31
+ def smart_imread(img, flag=cv2.IMREAD_UNCHANGED):
32
+ """ Read an image, convert to 24-bit if necessary """
33
+ if img.endswith(".gif"):
34
+ img = Image.open(img)
35
+ img = img.convert("RGB")
36
+ img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
37
+ else:
38
+ img = cv2.imread(img, flag)
39
+ return img
40
+
41
+
42
+ def smart_24bit(img):
43
+ """ Convert an image to 24-bit if necessary """
44
+ if img.dtype is np.dtype(np.uint16):
45
+ img = (img / 257).astype(np.uint8)
46
+
47
+ if len(img.shape) == 2:
48
+ img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
49
+ elif img.shape[2] == 4:
50
+ trans_mask = img[:, :, 3] == 0
51
+ img[trans_mask] = [255, 255, 255, 255]
52
+ img = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR)
53
+ return img
54
+
55
+
56
+ def make_square(img, target_size):
57
+ """ Make an image square """
58
+ old_size = img.shape[:2]
59
+ desired_size = max(old_size)
60
+ desired_size = max(desired_size, target_size)
61
+
62
+ delta_w = desired_size - old_size[1]
63
+ delta_h = desired_size - old_size[0]
64
+ top, bottom = delta_h // 2, delta_h - (delta_h // 2)
65
+ left, right = delta_w // 2, delta_w - (delta_w // 2)
66
+
67
+ color = [255, 255, 255]
68
+ new_im = cv2.copyMakeBorder(
69
+ img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color
70
+ )
71
+ return new_im
72
+
73
+
74
+ def smart_resize(img, size):
75
+ """ Resize an image """
76
+ # Assumes the image has already gone through make_square
77
+ if img.shape[0] > size:
78
+ img = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
79
+ elif img.shape[0] < size:
80
+ img = cv2.resize(img, (size, size), interpolation=cv2.INTER_CUBIC)
81
+ return img
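
For context, these helpers are meant to be chained to turn an arbitrary input file into the square BGR array a tagger model expects. A minimal sketch, assuming the extension's import path and an example 448-pixel model input size:

```python
# Sketch of chaining the dbimutils helpers for one image (448 is an example size).
from tagger import dbimutils

img = dbimutils.smart_imread("example.jpg")  # hypothetical path; BGR/BGRA/gray array
img = dbimutils.smart_24bit(img)             # 16-bit, grayscale or alpha -> 8-bit BGR
img = dbimutils.make_square(img, 448)        # pad with white to a square canvas
img = dbimutils.smart_resize(img, 448)       # scale to exactly 448x448
print(img.shape)                             # (448, 448, 3)
```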
extensions-builtin/sdw-wd14-tagger/tagger/format.py ADDED
@@ -0,0 +1,46 @@
1
+ """Format module, for formatting output filenames"""
2
+ import re
3
+ import hashlib
4
+
5
+ from typing import Dict, Callable, NamedTuple
6
+ from pathlib import Path
7
+
8
+
9
+ class Info(NamedTuple):
10
+ path: Path
11
+ output_ext: str
12
+
13
+
14
+ def hashfun(i: Info, algo='sha1') -> str:
15
+ try:
16
+ hasher = hashlib.new(algo)
17
+ except ImportError as err:
18
+ raise ValueError(f"'{algo}' is invalid hash algorithm") from err
19
+
20
+ with open(i.path, 'rb') as file:
21
+ hasher.update(file.read())
22
+
23
+ return hasher.hexdigest()
24
+
25
+
26
+ pattern = re.compile(r'\[([\w:]+)\]')
27
+
28
+ # all function must returns string or raise TypeError or ValueError
29
+ # other errors will cause the extension error
30
+ available_formats: Dict[str, Callable] = {
31
+ 'name': lambda i: i.path.stem,
32
+ 'extension': lambda i: i.path.suffix[1:],
33
+ 'hash': hashfun,
34
+
35
+ 'output_extension': lambda i: i.output_ext
36
+ }
37
+
38
+
39
+ def parse(match: re.Match, info: Info) -> str:
40
+ matches = match[1].split(':')
41
+ name, args = matches[0], matches[1:]
42
+
43
+ if name not in available_formats:
44
+ return match[0]
45
+
46
+ return available_formats[name](info, *args)
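
A short sketch of how the pattern/parse pair above expands an output-filename template via re.sub. The template, the image path and the import path are assumptions, and the file must exist for the [hash] placeholder to be computed:

```python
# Expand placeholders such as [name], [hash:sha256] and [output_extension]
# in an output-filename template using the module above.
from functools import partial
from pathlib import Path

from tagger import format as tagger_format

info = tagger_format.Info(path=Path("images/cat_01.png"), output_ext="txt")
template = "[name].[hash:sha256].[output_extension]"

# parse() receives each regex match plus the Info tuple; unknown names are left as-is
result = tagger_format.pattern.sub(partial(tagger_format.parse, info=info), template)
print(result)  # e.g. cat_01.<sha256 of the file>.txt
```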
extensions-builtin/sdw-wd14-tagger/tagger/generator/tf_data_reader.py ADDED
@@ -0,0 +1,133 @@
1
+ """ Credits to SmilingWolf """
2
+
3
+ import tensorflow as tf
4
+ try:
5
+ import tensorflow_io as tfio # pylint: disable=import-error
6
+ except ImportError:
7
+ tfio = None
8
+
9
+ def is_webp(contents):
10
+ """Checks if the image is a webp image"""
11
+ riff_header = tf.strings.substr(contents, 0, 4)
12
+ webp_header = tf.strings.substr(contents, 8, 4)
13
+
14
+ is_riff = riff_header == b"RIFF"
15
+ is_fourcc_webp = webp_header == b"WEBP"
16
+ return is_riff and is_fourcc_webp
17
+
18
+
19
+ class DataGenerator:
20
+ """ Data generator for the dataset """
21
+ def __init__(self, file_list, target_height, target_width, batch_size):
22
+ self.file_list = file_list
23
+ self.target_width = target_width
24
+ self.target_height = target_height
25
+ self.batch_size = batch_size
26
+
27
+ def read_image(self, filename):
28
+ image_bytes = tf.io.read_file(filename)
29
+ return filename, image_bytes
30
+
31
+ def parse_single_image(self, filename, image_bytes):
32
+ """ Parses a single image """
33
+ if is_webp(image_bytes):
34
+ image = tfio.image.decode_webp(image_bytes)
35
+ else:
36
+ image = tf.io.decode_image(
37
+ image_bytes, channels=0, dtype=tf.uint8,
38
+ expand_animations=False
39
+ )
40
+
41
+ # Black and white image
42
+ if tf.shape(image)[2] == 1:
43
+ image = tf.repeat(image, 3, axis=-1)
44
+
45
+ # Black and white image with alpha
46
+ elif tf.shape(image)[2] == 2:
47
+ image, mask = tf.unstack(image, num=2, axis=-1)
48
+ mask = tf.expand_dims(mask, axis=-1)
49
+ image = tf.expand_dims(image, axis=-1)
50
+ image = tf.repeat(image, 3, axis=-1)
51
+ image = tf.concat([image, mask], -1)
52
+
53
+ # Alpha to white
54
+ if tf.shape(image)[2] == 4:
55
+ alpha_mask = image[:, :, 3]
56
+ alpha_mask = tf.cast(alpha_mask, tf.float32) / 255
57
+ alpha_mask = tf.repeat(tf.expand_dims(alpha_mask, -1), 4, axis=-1)
58
+
59
+ matte = tf.ones_like(image, dtype=tf.uint8) * [255, 255, 255, 255]
60
+
61
+ weighted_matte = tf.cast(matte, dtype=alpha_mask.dtype) * (1 - alpha_mask) # noqa: E501
62
+ weighted_image = tf.cast(image, dtype=alpha_mask.dtype) * alpha_mask # noqa: E501
63
+ image = weighted_image + weighted_matte
64
+
65
+ # Remove alpha channel
66
+ image = tf.cast(image, dtype=tf.uint8)[:, :, :-1]
67
+
68
+ # Pillow/Tensorflow RGB -> OpenCV BGR
69
+ image = image[:, :, ::-1]
70
+ return filename, image
71
+
72
+ def resize_single_image(self, filename, image):
73
+ """ Resizes a single image """
74
+ height, width, _ = tf.unstack(tf.shape(image))
75
+
76
+ if height <= self.target_height and width <= self.target_width:
77
+ return filename, image
78
+
79
+ image = tf.image.resize(
80
+ image,
81
+ (self.target_height, self.target_width),
82
+ method=tf.image.ResizeMethod.AREA,
83
+ preserve_aspect_ratio=True,
84
+ )
85
+ image = tf.cast(tf.math.round(image), dtype=tf.uint8)
86
+ return filename, image
87
+
88
+ def pad_single_image(self, filename, image):
89
+ """ Pads a single image """
90
+ height, width, _ = tf.unstack(tf.shape(image))
91
+
92
+ float_h = tf.cast(height, dtype=tf.float32)
93
+ float_w = tf.cast(width, dtype=tf.float32)
94
+ float_target_h = tf.cast(self.target_height, dtype=tf.float32)
95
+ float_target_w = tf.cast(self.target_width, dtype=tf.float32)
96
+
97
+ padding_top = tf.cast((float_target_h - float_h) / 2, dtype=tf.int32)
98
+ padding_right = tf.cast((float_target_w - float_w) / 2, dtype=tf.int32)
99
+ padding_bottom = self.target_height - padding_top - height
100
+ padding_left = self.target_width - padding_right - width
101
+
102
+ padding = [[padding_top, padding_bottom],
103
+ [padding_right, padding_left], [0, 0]]
104
+ image = tf.pad(image, padding, mode="CONSTANT", constant_values=255)
105
+ return filename, image
106
+
107
+ def gen_ds(self):
108
+ """ Generates the dataset """
109
+ if tfio is None:
110
+ print("Tensorflow IO is not installed, try\n"
111
+ "`pip install tensorflow_io' or use another interrogator")
112
+ return []
113
+ images_list = tf.data.Dataset.from_tensor_slices(self.file_list)
114
+
115
+ images_data = images_list.map(
116
+ self.read_image, num_parallel_calls=tf.data.AUTOTUNE
117
+ )
118
+ images_data = images_data.map(
119
+ self.parse_single_image, num_parallel_calls=tf.data.AUTOTUNE
120
+ )
121
+ images_data = images_data.map(
122
+ self.resize_single_image, num_parallel_calls=tf.data.AUTOTUNE
123
+ )
124
+ images_data = images_data.map(
125
+ self.pad_single_image, num_parallel_calls=tf.data.AUTOTUNE
126
+ )
127
+
128
+ images_list = images_data.batch(
129
+ self.batch_size, drop_remainder=False,
130
+ num_parallel_calls=tf.data.AUTOTUNE
131
+ )
132
+ images_list = images_list.prefetch(tf.data.AUTOTUNE)
133
+ return images_list
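
A minimal sketch of driving DataGenerator to get batches of (filename, padded image) pairs; tensorflow (and tensorflow_io for WebP support) must be installed, and the folder, target size and batch size are examples only:

```python
# Build a batched tf.data pipeline of (filename, 448x448 BGR uint8 image) pairs.
from glob import glob

from tagger.generator.tf_data_reader import DataGenerator

files = glob("train_images/*.png")  # hypothetical image folder
gen = DataGenerator(file_list=files, target_height=448, target_width=448,
                    batch_size=32)

dataset = gen.gen_ds()  # tf.data.Dataset, or [] if tensorflow_io is missing
for filenames, images in dataset.take(1):
    print(filenames.shape, images.shape)  # (32,) and (32, 448, 448, 3)
```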
extensions-builtin/sdw-wd14-tagger/tagger/interrogator.py ADDED
@@ -0,0 +1,660 @@
1
+ """ Interrogator class and subclasses for tagger """
2
+ import os
3
+ from pathlib import Path
4
+ import io
5
+ import json
6
+ import inspect
7
+ from re import match as re_match
8
+ from platform import system, uname
9
+ from typing import Tuple, List, Dict, Callable
10
+ from pandas import read_csv
11
+ from PIL import Image, UnidentifiedImageError
12
+ from numpy import asarray, float32, expand_dims, exp
13
+ from tqdm import tqdm
14
+ from huggingface_hub import hf_hub_download
15
+
16
+ from modules.paths import extensions_dir
17
+ from modules import shared
18
+ from tagger import settings # pylint: disable=import-error
19
+ from tagger.uiset import QData, IOData # pylint: disable=import-error
20
+ from . import dbimutils # pylint: disable=import-error # noqa
21
+
22
+ Its = settings.InterrogatorSettings
23
+
24
+ # select a device to process
25
+ use_cpu = ('all' in shared.cmd_opts.use_cpu) or (
26
+ 'interrogate' in shared.cmd_opts.use_cpu)
27
+
28
+ # https://onnxruntime.ai/docs/execution-providers/
29
+ # https://github.com/toriato/stable-diffusion-webui-wd14-tagger/commit/e4ec460122cf674bbf984df30cdb10b4370c1224#r92654958
30
+ onnxrt_providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
31
+
32
+ if shared.cmd_opts.additional_device_ids is not None:
33
+ m = re_match(r'([cg])pu:\d+$', shared.cmd_opts.additional_device_ids)
34
+ if m is None:
35
+ raise ValueError('--device-id is not cpu:<nr> or gpu:<nr>')
36
+ if m.group(1) == 'c':
37
+ onnxrt_providers.pop(0)
38
+ TF_DEVICE_NAME = f'/{shared.cmd_opts.additional_device_ids}'
39
+ elif use_cpu:
40
+ TF_DEVICE_NAME = '/cpu:0'
41
+ onnxrt_providers.pop(0)
42
+ else:
43
+ TF_DEVICE_NAME = '/gpu:0'
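+ # e.g. '--device-id gpu:1' maps to TF device '/gpu:1' and keeps CUDAExecutionProvider;
+ # 'cpu:0' maps to '/cpu:0' and drops the CUDA provider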
44
+
45
+ print(f'== WD14 tagger {TF_DEVICE_NAME}, {uname()} ==')
46
+
47
+
48
+ class Interrogator:
49
+ """ Interrogator class for tagger """
50
+ # the raw input and output.
51
+ input = {
52
+ "cumulative": False,
53
+ "large_query": False,
54
+ "unload_after": False,
55
+ "add": '',
56
+ "keep": '',
57
+ "exclude": '',
58
+ "search": '',
59
+ "replace": '',
60
+ "output_dir": '',
61
+ }
62
+ output = None
63
+ odd_increment = 0
64
+
65
+ @classmethod
66
+ def flip(cls, key):
67
+ def toggle():
68
+ cls.input[key] = not cls.input[key]
69
+ return toggle
70
+
71
+ @staticmethod
72
+ def get_errors() -> str:
73
+ errors = ''
74
+ if len(IOData.err) > 0:
75
+ # write errors in html pointer list, every error in a <li> tag
76
+ errors = IOData.error_msg()
77
+ if len(QData.err) > 0:
78
+ errors += 'Fix to write correct output:<br><ul><li>' + \
79
+ '</li><li>'.join(QData.err) + '</li></ul>'
80
+ return errors
81
+
82
+ @classmethod
83
+ def set(cls, key: str) -> Callable[[str], Tuple[str, str]]:
84
+ def setter(val) -> Tuple[str, str]:
85
+ if key == 'input_glob':
86
+ IOData.update_input_glob(val)
87
+ return (val, cls.get_errors())
88
+ if val != cls.input[key]:
89
+ tgt_cls = IOData if key == 'output_dir' else QData
90
+ getattr(tgt_cls, "update_" + key)(val)
91
+ cls.input[key] = val
92
+ return (cls.input[key], cls.get_errors())
93
+
94
+ return setter
95
+
96
+ @staticmethod
97
+ def load_image(path: str) -> Image:
98
+ try:
99
+ return Image.open(path)
100
+ except FileNotFoundError:
101
+ print(f'{path} not found')
102
+ except UnidentifiedImageError:
103
+ # just in case, user has mysterious file...
104
+ print(f'{path} is not a supported image type')
105
+ except ValueError:
106
+ print(f'{path} is not readable or StringIO')
107
+ return None
108
+
109
+ def __init__(self, name: str) -> None:
110
+ self.name = name
111
+ self.model = None
112
+ self.tags = None
113
+ # run_mode 0 is dry run, 1 means run (alternating), 2 means disabled
114
+ self.run_mode = 0 if hasattr(self, "large_batch_interrogate") else 2
115
+
116
+ def load(self):
117
+ raise NotImplementedError()
118
+
119
+ def large_batch_interrogate(self, images: List, dry_run=False) -> str:
120
+ raise NotImplementedError()
121
+
122
+ def unload(self) -> bool:
123
+ unloaded = False
124
+
125
+ if self.model is not None:
126
+ del self.model
127
+ self.model = None
128
+ unloaded = True
129
+ print(f'Unloaded {self.name}')
130
+
131
+ if hasattr(self, 'tags'):
132
+ del self.tags
133
+ self.tags = None
134
+
135
+ return unloaded
136
+
137
+ def interrogate_image(self, image: Image) -> None:
138
+ sha = IOData.get_bytes_hash(image.tobytes())
139
+ QData.clear(1 - Interrogator.input["cumulative"])
140
+
141
+ fi_key = sha + self.name
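+ # query cache key: image hash plus interrogator name, so re-running the same
+ # model on the same image reuses the stored result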
142
+ count = 0
143
+
144
+ if fi_key in QData.query:
145
+ # this file was already queried for this interrogator.
146
+ QData.single_data(fi_key)
147
+ else:
148
+ # single process
149
+ count += 1
150
+ data = ('', '', fi_key) + self.interrogate(image)
151
+ # When drag-dropping an image, the path [0] is not known
152
+ if Interrogator.input["unload_after"]:
153
+ self.unload()
154
+
155
+ QData.apply_filters(data)
156
+
157
+ for got in QData.in_db.values():
158
+ QData.apply_filters(got)
159
+
160
+ Interrogator.output = QData.finalize(count)
161
+
162
+ def batch_interrogate_image(self, index: int) -> None:
163
+ # if outputpath is '', no tags file will be written
164
+ if len(IOData.paths[index]) == 5:
165
+ path, out_path, output_dir, image_hash, image = IOData.paths[index]
166
+ elif len(IOData.paths[index]) == 4:
167
+ path, out_path, output_dir, image_hash = IOData.paths[index]
168
+ image = Interrogator.load_image(path)
169
+ # should work, we queried before to get the image_hash
170
+ else:
171
+ path, out_path, output_dir = IOData.paths[index]
172
+ image = Interrogator.load_image(path)
173
+ if image is None:
174
+ return
175
+
176
+ image_hash = IOData.get_bytes_hash(image.tobytes())
177
+ IOData.paths[index].append(image_hash)
178
+ if getattr(shared.opts, 'tagger_store_images', False):
179
+ IOData.paths[index].append(image)
180
+
181
+ if output_dir:
182
+ output_dir.mkdir(0o755, True, True)
183
+ # next iteration we don't need to create the directory
184
+ IOData.paths[index][2] = ''
185
+ QData.image_dups[image_hash].add(path)
186
+
187
+ abspath = str(path.absolute())
188
+ fi_key = image_hash + self.name
189
+
190
+ if fi_key in QData.query:
191
+ # this file was already queried for this interrogator.
192
+ i = QData.get_index(fi_key, abspath)
193
+ # this file was already queried and stored
194
+ QData.in_db[i] = (abspath, out_path, '', {}, {})
195
+ else:
196
+ data = (abspath, out_path, fi_key) + self.interrogate(image)
197
+ # also the tags can indicate that the image is a duplicate
198
+ no_floats = sorted(filter(lambda x: not isinstance(x[0], float),
199
+ data[3].items()), key=lambda x: x[0])
200
+ sorted_tags = ','.join(f'({k},{v:.1f})' for (k, v) in no_floats)
201
+ QData.image_dups[sorted_tags].add(abspath)
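+ # images that end up with an identical sorted (tag, weight) signature are
+ # grouped as probable duplicates for the gallery tab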
202
+ QData.apply_filters(data)
203
+ QData.had_new = True
204
+
205
+ def batch_interrogate(self) -> None:
206
+ """ Interrogate all images in the input list """
207
+ QData.clear(1 - Interrogator.input["cumulative"])
208
+
209
+ if Interrogator.input["large_query"] is True and self.run_mode < 2:
210
+ # TODO: write specified tags files instead of simple .txt
211
+ image_list = [str(x[0].resolve()) for x in IOData.paths]
212
+ self.large_batch_interrogate(image_list, self.run_mode == 0)
213
+
214
+ # alternating dry run and run modes
215
+ self.run_mode = (self.run_mode + 1) % 2
216
+ count = len(image_list)
217
+ Interrogator.output = QData.finalize(count)
218
+ else:
219
+ verb = getattr(shared.opts, 'tagger_verbose', True)
220
+ count = len(QData.query)
221
+
222
+ for i in tqdm(range(len(IOData.paths)), disable=verb, desc='Tags'):
223
+ self.batch_interrogate_image(i)
224
+
225
+ if Interrogator.input["unload_after"]:
226
+ self.unload()
227
+
228
+ count = len(QData.query) - count
229
+ Interrogator.output = QData.finalize_batch(count)
230
+
231
+ def interrogate(
232
+ self,
233
+ image: Image
234
+ ) -> Tuple[
235
+ Dict[str, float], # rating confidences
236
+ Dict[str, float] # tag confidences
237
+ ]:
238
+ raise NotImplementedError()
239
+
240
+
241
+ class DeepDanbooruInterrogator(Interrogator):
242
+ """ Interrogator for DeepDanbooru models """
243
+ def __init__(self, name: str, project_path: os.PathLike) -> None:
244
+ super().__init__(name)
245
+ self.project_path = project_path
246
+ self.model = None
247
+ self.tags = None
248
+
249
+ def load(self) -> None:
250
+ print(f'Loading {self.name} from {str(self.project_path)}')
251
+
252
+ # the deepdanbooru package is not included in web-sd anymore
253
+ # https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/c81d440d876dfd2ab3560410f37442ef56fc663
254
+ from launch import is_installed, run_pip
255
+ if not is_installed('deepdanbooru'):
256
+ package = os.environ.get(
257
+ 'DEEPDANBOORU_PACKAGE',
258
+ 'git+https://github.com/KichangKim/DeepDanbooru.'
259
+ 'git@d91a2963bf87c6a770d74894667e9ffa9f6de7ff'
260
+ )
261
+
262
+ run_pip(
263
+ f'install {package} tensorflow tensorflow-io', 'deepdanbooru')
264
+
265
+ import tensorflow as tf
266
+
267
+ # tensorflow maps nearly all vram by default, so we limit this
268
+ # https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth
269
+ # TODO: only run on the first run
270
+ for device in tf.config.experimental.list_physical_devices('GPU'):
271
+ try:
272
+ tf.config.experimental.set_memory_growth(device, True)
273
+ except RuntimeError as err:
274
+ print(err)
275
+
276
+ with tf.device(TF_DEVICE_NAME):
277
+ import deepdanbooru.project as ddp
278
+
279
+ self.model = ddp.load_model_from_project(
280
+ project_path=self.project_path,
281
+ compile_model=False
282
+ )
283
+
284
+ print(f'Loaded {self.name} model from {str(self.project_path)}')
285
+
286
+ self.tags = ddp.load_tags_from_project(
287
+ project_path=self.project_path
288
+ )
289
+
290
+ def unload(self) -> bool:
291
+ return False
292
+
293
+ def interrogate(
294
+ self,
295
+ image: Image
296
+ ) -> Tuple[
297
+ Dict[str, float], # rating confidences
298
+ Dict[str, float] # tag confidences
299
+ ]:
300
+ # init model
301
+ if self.model is None:
302
+ self.load()
303
+
304
+ import deepdanbooru.data as ddd
305
+
306
+ # convert an image to fit the model
307
+ image_bufs = io.BytesIO()
308
+ image.save(image_bufs, format='PNG')
309
+ image = ddd.load_image_for_evaluate(
310
+ image_bufs,
311
+ self.model.input_shape[2],
312
+ self.model.input_shape[1]
313
+ )
314
+
315
+ image = image.reshape((1, *image.shape[0:3]))
316
+
317
+ # evaluate model
318
+ result = self.model.predict(image)
319
+
320
+ confidences = result[0].tolist()
321
+ ratings = {}
322
+ tags = {}
323
+
324
+ for i, tag in enumerate(self.tags):
325
+ if tag[:7] != "rating:":
326
+ tags[tag] = confidences[i]
327
+ else:
328
+ ratings[tag[7:]] = confidences[i]
329
+
330
+ return ratings, tags
331
+
332
+ def large_batch_interrogate(self, images: List, dry_run=False) -> str:
333
+ raise NotImplementedError()
334
+
335
+
336
+ # FIXME this is silly, in what scenario would the env change from MacOS to
337
+ # another OS? TODO: remove if the author does not respond.
338
+ def get_onnxrt():
339
+ try:
340
+ import onnxruntime
341
+ return onnxruntime
342
+ except ImportError:
343
+ # only one of these packages should be installed at one time in an env
344
+ # https://onnxruntime.ai/docs/get-started/with-python.html#install-onnx-runtime
345
+ # TODO: remove old package when the environment changes?
346
+ from launch import is_installed, run_pip
347
+ if not is_installed('onnxruntime'):
348
+ if system() == "Darwin":
349
+ package_name = "onnxruntime-silicon"
350
+ else:
351
+ package_name = "onnxruntime-gpu"
352
+ package = os.environ.get(
353
+ 'ONNXRUNTIME_PACKAGE',
354
+ package_name
355
+ )
356
+
357
+ run_pip(f'install {package}', 'onnxruntime')
358
+
359
+ import onnxruntime
360
+ return onnxruntime
361
+
362
+
363
+ class WaifuDiffusionInterrogator(Interrogator):
364
+ """ Interrogator for Waifu Diffusion models """
365
+ def __init__(
366
+ self,
367
+ name: str,
368
+ model_path='model.onnx',
369
+ tags_path='selected_tags.csv',
370
+ repo_id=None,
371
+ is_hf=True,
372
+ ) -> None:
373
+ super().__init__(name)
374
+ self.repo_id = repo_id
375
+ self.model_path = model_path
376
+ self.tags_path = tags_path
377
+ self.tags = None
378
+ self.model = None
379
380
+ self.local_model = None
381
+ self.local_tags = None
382
+ self.is_hf = is_hf
383
+
384
+ def download(self) -> None:
385
+ mdir = Path(shared.models_path, 'interrogators')
386
+ if self.is_hf:
387
+ cache = getattr(shared.opts, 'tagger_hf_cache_dir', Its.hf_cache)
388
+ print(f"Loading {self.name} model file from {self.repo_id}, "
389
+ f"{self.model_path}")
390
+
391
+ model_path = hf_hub_download(
392
+ repo_id=self.repo_id,
393
+ filename=self.model_path,
394
+ cache_dir=cache)
395
+ tags_path = hf_hub_download(
396
+ repo_id=self.repo_id,
397
+ filename=self.tags_path,
398
+ cache_dir=cache)
399
+ else:
400
+ model_path = self.local_model
401
+ tags_path = self.local_tags
402
+
403
+ download_model = {
404
+ 'name': self.name,
405
+ 'model_path': model_path,
406
+ 'tags_path': tags_path,
407
+ }
408
+ mpath = Path(mdir, 'model.json')
409
+
410
+ data = [download_model]
411
+
412
+ if not os.path.exists(mdir):
413
+ os.mkdir(mdir)
414
+
415
+ elif os.path.exists(mpath):
416
+ with io.open(file=mpath, mode='r', encoding='utf-8') as filename:
417
+ try:
418
+ data = json.load(filename)
419
+ # No need to append if it's already contained
420
+ if download_model not in data:
421
+ data.append(download_model)
422
+ except json.JSONDecodeError as err:
423
+ print(f'Adding download_model {mpath} raised {repr(err)}')
424
+ data = [download_model]
425
+
426
+ with io.open(mpath, 'w', encoding='utf-8') as filename:
427
+ json.dump(data, filename)
428
+ return model_path, tags_path
429
+
430
+ def load(self) -> None:
431
+ model_path, tags_path = self.download()
432
+ ort = get_onnxrt()
433
+ self.model = ort.InferenceSession(model_path,
434
+ providers=onnxrt_providers)
435
+
436
+ print(f'Loaded {self.name} model from {self.repo_id}')
437
+ self.tags = read_csv(tags_path)
438
+
439
+ def interrogate(
440
+ self,
441
+ image: Image
442
+ ) -> Tuple[
443
+ Dict[str, float], # rating confidences
444
+ Dict[str, float] # tag confidences
445
+ ]:
446
+ # init model
447
+ if self.model is None:
448
+ self.load()
449
+
450
+ # code for converting the image and running the model is taken from the
451
+ # link below. thanks, SmilingWolf!
452
+ # https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags/blob/main/app.py
453
+
454
+ # convert an image to fit the model
455
+ _, height, _, _ = self.model.get_inputs()[0].shape
456
+
457
+ # alpha to white
458
+ image = dbimutils.fill_transparent(image)
459
+
460
+ image = asarray(image)
461
+ # PIL RGB to OpenCV BGR
462
+ image = image[:, :, ::-1]
463
+
464
465
+
466
+ image = dbimutils.make_square(image, height)
467
+ image = dbimutils.smart_resize(image, height)
468
+ image = image.astype(float32)
469
+ image = expand_dims(image, 0)
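+ # batch of one, float32 in BGR order, resized to the model's expected square input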
470
+
471
+ # evaluate model
472
+ input_name = self.model.get_inputs()[0].name
473
+ label_name = self.model.get_outputs()[0].name
474
+ confidences = self.model.run([label_name], {input_name: image})[0]
475
+
476
+ tags = self.tags[:][['name']]
477
+ tags['confidences'] = confidences[0]
478
+
479
+ # first 4 items are for rating (general, sensitive, questionable,
480
+ # explicit)
481
+ ratings = dict(tags[:4].values)
482
+
483
+ # rest are regular tags
484
+ tags = dict(tags[4:].values)
485
+
486
+ return ratings, tags
487
+
488
+ def dry_run(self, images) -> Tuple[str, Callable[[str], None]]:
489
+
490
+ def process_images(filepaths, _):
491
+ lines = []
492
+ for image_path in filepaths:
493
+ image_path = image_path.numpy().decode("utf-8")
494
+ lines.append(f"{image_path}\n")
495
+ with io.open("dry_run_read.txt", "a", encoding="utf-8") as filen:
496
+ filen.writelines(lines)
497
+
498
+ scheduled = [f"{image_path}\n" for image_path in images]
499
+
500
+ # Truncate the file from previous runs
501
+ print("updating dry_run_read.txt")
502
+ io.open("dry_run_read.txt", "w", encoding="utf-8").close()
503
+ with io.open("dry_run_scheduled.txt", "w", encoding="utf-8") as filen:
504
+ filen.writelines(scheduled)
505
+ return process_images
506
+
507
+ def run(self, images, pred_model) -> Tuple[str, Callable[[str], None]]:
508
+ threshold = QData.threshold
509
+ self.tags["sanitized_name"] = self.tags["name"].map(
510
+ lambda i: i if i in Its.kaomojis else i.replace("_", " ")
511
+ )
512
+
513
+ def process_images(filepaths, images):
514
+ preds = pred_model(images).numpy()
515
+
516
+ for ipath, pred in zip(filepaths, preds):
517
+ ipath = ipath.numpy().decode("utf-8")
518
+
519
+ self.tags["preds"] = pred
520
+ generic = self.tags[self.tags["category"] == 0]
521
+ chosen = generic[generic["preds"] > threshold]
522
+ chosen = chosen.sort_values(by="preds", ascending=False)
523
+ tags_names = chosen["sanitized_name"]
524
+
525
+ key = ipath.split("/")[-1].split(".")[0] + "_" + self.name
526
+ QData.add_tags = tags_names
527
+ QData.apply_filters((ipath, '', {}, {}), key, False)
528
+
529
+ tags_string = ", ".join(tags_names)
530
+ txtfile = Path(ipath).with_suffix(".txt")
531
+ with io.open(txtfile, "w", encoding="utf-8") as filename:
532
+ filename.write(tags_string)
533
+ return images, process_images
534
+
535
+ def large_batch_interrogate(self, images, dry_run=True) -> None:
536
+ """ Interrogate a large batch of images. """
537
+
538
+ # init model
539
+ if not hasattr(self, 'model') or self.model is None:
540
+ self.load()
541
+
542
+ os.environ["TF_XLA_FLAGS"] = '--tf_xla_auto_jit=2 '\
543
+ '--tf_xla_cpu_global_jit'
544
+ # Reduce logging
545
+ # os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"
546
+
547
+ import tensorflow as tf
548
+
549
+ from tagger.generator.tf_data_reader import DataGenerator
550
+
551
+ # tensorflow maps nearly all vram by default, so we limit this
552
+ # https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth
553
+ # TODO: only run on the first run
554
+ gpus = tf.config.experimental.list_physical_devices("GPU")
555
+ if gpus:
556
+ for device in gpus:
557
+ try:
558
+ tf.config.experimental.set_memory_growth(device, True)
559
+ except RuntimeError as err:
560
+ print(err)
561
+
562
+ if dry_run: # dry run
563
+ height, width = 224, 224
564
+ process_images = self.dry_run(images)
565
+ else:
566
+ _, height, width, _ = self.model.inputs[0].shape
567
+
568
+ @tf.function
569
+ def pred_model(model):
570
+ return self.model(model, training=False)
571
+
572
+ process_images = self.run(images, pred_model)
573
+
574
+ generator = DataGenerator(
575
+ file_list=images, target_height=height, target_width=width,
576
+ batch_size=getattr(shared.opts, 'tagger_batch_size', 1024)
577
+ ).gen_ds()
578
+
579
+ orig_add_tags = QData.add_tags
580
+ for filepaths, image_list in tqdm(generator):
581
+ process_images(filepaths, image_list)
582
+ QData.add_tags = orig_add_tags
583
+ del os.environ["TF_XLA_FLAGS"]
584
+
585
+
586
+ class MLDanbooruInterrogator(Interrogator):
587
+ """ Interrogator for the MLDanbooru model. """
588
+ def __init__(
589
+ self,
590
+ name: str,
591
+ repo_id: str,
592
+ model_path: str,
593
+ tags_path='classes.json',
594
+ ) -> None:
595
+ super().__init__(name)
596
+ self.model_path = model_path
597
+ self.tags_path = tags_path
598
+ self.repo_id = repo_id
599
+ self.tags = None
600
+ self.model = None
601
+
602
+ def download(self) -> Tuple[str, str]:
603
+ print(f"Loading {self.name} model file from {self.repo_id}")
604
+ cache = getattr(shared.opts, 'tagger_hf_cache_dir', Its.hf_cache)
605
+
606
+ model_path = hf_hub_download(
607
+ repo_id=self.repo_id,
608
+ filename=self.model_path,
609
+ cache_dir=cache
610
+ )
611
+ tags_path = hf_hub_download(
612
+ repo_id=self.repo_id,
613
+ filename=self.tags_path,
614
+ cache_dir=cache
615
+ )
616
+ return model_path, tags_path
617
+
618
+ def load(self) -> None:
619
+ model_path, tags_path = self.download()
620
+
621
+ ort = get_onnxrt()
622
+ self.model = ort.InferenceSession(model_path,
623
+ providers=onnxrt_providers)
624
+ print(f'Loaded {self.name} model from {model_path}')
625
+
626
+ with open(tags_path, 'r', encoding='utf-8') as filen:
627
+ self.tags = json.load(filen)
628
+
629
+ def interrogate(
630
+ self,
631
+ image: Image
632
+ ) -> Tuple[
633
+ Dict[str, float], # rating confidences
634
+ Dict[str, float] # tag confidences
635
+ ]:
636
+ # init model
637
+ if self.model is None:
638
+ self.load()
639
+
640
+ image = dbimutils.fill_transparent(image)
641
+ image = dbimutils.resize(image, 448) # TODO CUSTOMIZE
642
+
643
+ x = asarray(image, dtype=float32) / 255
644
+ # HWC -> 1CHW
645
+ x = x.transpose((2, 0, 1))
646
+ x = expand_dims(x, 0)
647
+
648
+ input_ = self.model.get_inputs()[0]
649
+ output = self.model.get_outputs()[0]
650
+ # evaluate model
651
+ y, = self.model.run([output.name], {input_.name: x})
652
+
653
+ # Sigmoid
654
+ y = 1 / (1 + exp(-y))
655
+
656
+ tags = {tag: float(conf) for tag, conf in zip(self.tags, y.flatten())}
657
+ return {}, tags
658
+
659
+ def large_batch_interrogate(self, images: List, dry_run=False) -> str:
660
+ raise NotImplementedError()
extensions-builtin/sdw-wd14-tagger/tagger/preset.py ADDED
@@ -0,0 +1,108 @@
1
+ """Module for Tagger, to save and load presets."""
2
+ import os
3
+ import json
4
+
5
+ from typing import Tuple, List, Dict
6
+ from pathlib import Path
7
+ from gradio.context import Context
8
+ from modules.images import sanitize_filename_part # pylint: disable=E0401
9
+
10
+ PresetDict = Dict[str, Dict[str, any]]
11
+
12
+
13
+ class Preset:
14
+ """Preset class for Tagger, to save and load presets."""
15
+ base_dir: Path
16
+ default_filename: str
17
+ default_values: PresetDict
18
+ components: List[object]
19
+
20
+ def __init__(
21
+ self,
22
+ base_dir: os.PathLike,
23
+ default_filename='default.json'
24
+ ) -> None:
25
+ self.base_dir = Path(base_dir)
26
+ self.default_filename = default_filename
27
+ self.default_values = self.load(default_filename)[1]
28
+ self.components = []
29
+
30
+ def component(self, component_class: object, **kwargs) -> object:
31
+ # find all the top components from the Gradio context and create a path
32
+ parent = Context.block
33
+ paths = [kwargs['label']]
34
+
35
+ while parent is not None:
36
+ if hasattr(parent, 'label'):
37
+ paths.insert(0, parent.label)
38
+
39
+ parent = parent.parent
40
+
41
+ path = '/'.join(paths)
42
+
43
+ component = component_class(**{
44
+ **kwargs,
45
+ **self.default_values.get(path, {})
46
+ })
47
+
48
+ component.path = path
49
+
50
+ self.components.append(component)
51
+ return component
52
+
53
+ def load(self, filename: str) -> Tuple[str, PresetDict]:
54
+ if not filename.endswith('.json'):
55
+ filename += '.json'
56
+
57
+ path = self.base_dir.joinpath(sanitize_filename_part(filename))
58
+ configs = {}
59
+
60
+ if path.is_file():
61
+ configs = json.loads(path.read_text(encoding='utf-8'))
62
+
63
+ return path, configs
64
+
65
+ def save(self, filename: str, *values) -> Tuple:
66
+ path, configs = self.load(filename)
67
+
68
+ for index, component in enumerate(self.components):
69
+ config = configs.get(component.path, {})
70
+ config['value'] = values[index]
71
+
72
+ for attr in ['visible', 'min', 'max', 'step']:
73
+ if hasattr(component, attr):
74
+ config[attr] = config.get(attr, getattr(component, attr))
75
+
76
+ configs[component.path] = config
77
+
78
+ self.base_dir.mkdir(0o777, True, True)
79
+ path.write_text(json.dumps(configs, indent=4), encoding='utf-8')
80
+
81
+ return 'successfully saved the preset'
82
+
83
+ def apply(self, filename: str) -> Tuple:
84
+ values = self.load(filename)[1]
85
+ outputs = []
86
+
87
+ for component in self.components:
88
+ config = values.get(component.path, {})
89
+
90
+ if 'value' in config and hasattr(component, 'choices'):
91
+ if config['value'] not in component.choices:
92
+ config['value'] = None
93
+
94
+ outputs.append(component.update(**config))
95
+
96
+ return (*outputs, 'successfully loaded the preset')
97
+
98
+ def list(self) -> List[str]:
99
+ presets = [
100
+ p.name
101
+ for p in self.base_dir.glob('*.json')
102
+ if p.is_file()
103
+ ]
104
+
105
+ if len(presets) < 1:
106
+ presets.append(self.default_filename)
107
+
108
+ return presets
extensions-builtin/sdw-wd14-tagger/tagger/settings.py ADDED
@@ -0,0 +1,157 @@
1
+ """Settings tab entries for the tagger module"""
2
+ import os
3
+ from typing import List
4
+ from modules import shared # pylint: disable=import-error
5
+ from gradio import inputs as gr
6
+
7
+ # kaomoji from WD 1.4 tagger csv. thanks, Meow-San#5400!
8
+ DEFAULT_KAMOJIS = '0_0, (o)_(o), +_+, +_-, ._., <o>_<o>, <|>_<|>, =_=, >_<, 3_3, 6_9, >_o, @_@, ^_^, o_o, u_u, x_x, |_|, ||_||' # pylint: disable=line-too-long # noqa: E501
9
+
10
+ DEFAULT_OFF = '[name].[output_extension]'
11
+
12
+ HF_CACHE = os.environ.get('HF_HOME', os.environ.get('HUGGINGFACE_HUB_CACHE',
13
+ str(os.path.join(shared.models_path, 'interrogators'))))
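+ # cache dir resolution order: HF_HOME, then HUGGINGFACE_HUB_CACHE, then <models_path>/interrogators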
14
+
15
+ def slider_wrapper(value, elem_id, **kwargs):
16
+ # required or else gradio will throw errors
17
+ return gr.Slider(**kwargs)
18
+
19
+
20
+ def on_ui_settings():
21
+ """Called when the UI settings tab is opened"""
22
+ Its = InterrogatorSettings
23
+ section = 'tagger', 'Tagger'
24
+ shared.opts.add_option(
25
+ key='tagger_out_filename_fmt',
26
+ info=shared.OptionInfo(
27
+ DEFAULT_OFF,
28
+ label='Tag file output format. Leave blank to use same filename or'
29
+ ' e.g. "[name].[hash:sha1].[output_extension]". Also allowed are '
30
+ '[extension] or any other [hash:<algorithm>] supported by hashlib',
31
+ section=section,
32
+ ),
33
+ )
34
+ shared.opts.onchange(
35
+ key='tagger_out_filename_fmt',
36
+ func=Its.set_output_filename_format
37
+ )
38
+ shared.opts.add_option(
39
+ key='tagger_count_threshold',
40
+ info=shared.OptionInfo(
41
+ 100.0,
42
+ label="Maximum number of tags to be shown in the UI",
43
+ section=section,
44
+ component=slider_wrapper,
45
+ component_args={"minimum": 1.0, "maximum": 500.0, "step": 1.0},
46
+ ),
47
+ )
48
+ shared.opts.add_option(
49
+ key='tagger_batch_recursive',
50
+ info=shared.OptionInfo(
51
+ True,
52
+ label='Glob recursively with input directory pattern',
53
+ section=section,
54
+ ),
55
+ )
56
+ shared.opts.add_option(
57
+ key='tagger_auto_serde_json',
58
+ info=shared.OptionInfo(
59
+ True,
60
+ label='Auto load and save JSON database',
61
+ section=section,
62
+ ),
63
+ )
64
+ shared.opts.add_option(
65
+ key='tagger_store_images',
66
+ info=shared.OptionInfo(
67
+ False,
68
+ label='Store images in database',
69
+ section=section,
70
+ ),
71
+ )
72
+ shared.opts.add_option(
73
+ key='tagger_weighted_tags_files',
74
+ info=shared.OptionInfo(
75
+ False,
76
+ label='Write weights to tags files',
77
+ section=section,
78
+ ),
79
+ )
80
+ shared.opts.add_option(
81
+ key='tagger_verbose',
82
+ info=shared.OptionInfo(
83
+ False,
84
+ label='Console log tag counts per file, no progress bar',
85
+ section=section,
86
+ ),
87
+ )
88
+ shared.opts.add_option(
89
+ key='tagger_repl_us',
90
+ info=shared.OptionInfo(
91
+ True,
92
+ label='Use spaces instead of underscore',
93
+ section=section,
94
+ ),
95
+ )
96
+ shared.opts.add_option(
97
+ key='tagger_repl_us_excl',
98
+ info=shared.OptionInfo(
99
+ DEFAULT_KAMOJIS,
100
+ label='Excludes (split by comma)',
101
+ section=section,
102
+ ),
103
+ )
104
+ shared.opts.onchange(
105
+ key='tagger_repl_us_excl',
106
+ func=Its.set_us_excl
107
+ )
108
+ shared.opts.add_option(
109
+ key='tagger_escape',
110
+ info=shared.OptionInfo(
111
+ False,
112
+ label='Escape brackets',
113
+ section=section,
114
+ ),
115
+ )
116
+ shared.opts.add_option(
117
+ key='tagger_batch_size',
118
+ info=shared.OptionInfo(
119
+ 1024,
120
+ label='batch size for large queries',
121
+ section=section,
122
+ ),
123
+ )
124
+ # see huggingface_hub guides/manage-cache
125
+ shared.opts.add_option(
126
+ key='tagger_hf_cache_dir',
127
+ info=shared.OptionInfo(
128
+ HF_CACHE,
129
+ label='HuggingFace cache directory, '
130
+ 'see huggingface_hub guides/manage-cache',
131
+ section=section,
132
+ ),
133
+ )
134
+
135
+
136
+ def split_str(string: str, separator=',') -> List[str]:
137
+ return [x.strip() for x in string.split(separator) if x]
138
+
139
+
140
+ class InterrogatorSettings:
141
+ kamojis = set(split_str(DEFAULT_KAMOJIS))
142
+ output_filename_format = DEFAULT_OFF
143
+ hf_cache = HF_CACHE
144
+
145
+ @classmethod
146
+ def set_us_excl(cls):
147
+ ruxs = getattr(shared.opts, 'tagger_repl_us_excl', DEFAULT_KAMOJIS)
148
+ cls.kamojis = set(split_str(ruxs))
149
+
150
+ @classmethod
151
+ def set_output_filename_format(cls):
152
+ fnfmt = getattr(shared.opts, 'tagger_out_filename_fmt', DEFAULT_OFF)
153
+ if fnfmt[-12:] == '.[extension]':
154
+ print("refused to write an image extension")
155
+ fnfmt = fnfmt[:-12] + '.[output_extension]'
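+ # e.g. '[name].[extension]' is rewritten to '[name].[output_extension]' so a
+ # tags file never takes the image's own extension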
156
+
157
+ cls.output_filename_format = fnfmt.strip()
extensions-builtin/sdw-wd14-tagger/tagger/ui.py ADDED
@@ -0,0 +1,482 @@
1
+ """ This module contains the ui for the tagger tab. """
2
+ from typing import Dict, Tuple, List, Optional
3
+ import gradio as gr
4
+ import re
5
+ from PIL import Image
6
+ from packaging import version
7
+
8
+ try:
9
+ from tensorflow import __version__ as tf_version
10
+ except ImportError:
11
+ tf_version = '0.0.0'
13
+
14
+ from html import escape as html_esc
15
+
16
+ from modules import ui # pylint: disable=import-error
17
+ from modules import generation_parameters_copypaste as parameters_copypaste # pylint: disable=import-error # noqa
18
+
19
+ try:
20
+ from modules.call_queue import wrap_gradio_gpu_call
21
+ except ImportError:
22
+ from webui import wrap_gradio_gpu_call # pylint: disable=import-error
23
+ from tagger import utils # pylint: disable=import-error
24
+ from tagger.interrogator import Interrogator as It # pylint: disable=E0401
25
+ from tagger.uiset import IOData, QData # pylint: disable=import-error
26
+
27
+ TAG_INPUTS = ["add", "keep", "exclude", "search", "replace"]
28
+ COMMON_OUTPUT = Tuple[
29
+ Optional[str], # tags as string
30
+ Optional[str], # html tags as string
31
+ Optional[str], # discarded tags as string
32
+ Optional[Dict[str, float]], # rating confidences
33
+ Optional[Dict[str, float]], # tag confidences
34
+ Optional[Dict[str, float]], # excluded tag confidences
35
+ str, # error message
36
+ ]
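+ # every interrogation and search handler returns this same 7-tuple, so one
+ # set of Gradio output components can be reused for all of them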
37
+
38
+
39
+ def unload_interrogators() -> Tuple[str]:
40
+ unloaded_models = 0
41
+ remaining_models = ''
42
+
43
+ for i in utils.interrogators.values():
44
+ if i.unload():
45
+ unloaded_models = unloaded_models + 1
46
+ elif i.model is not None:
47
+ if remaining_models == '':
48
+ remaining_models = f', remaining models:<ul><li>{i.name}</li>'
49
+ else:
50
+ remaining_models = remaining_models + f'<li>{i.name}</li>'
51
+ if remaining_models != '':
52
+ remaining_models = remaining_models + "Some tensorflow models could "\
53
+ "not be unloaded, a known issue."
54
+ QData.clear(1)
55
+
56
+ return (f'{unloaded_models} model(s) unloaded{remaining_models}',)
57
+
58
+
59
+ def on_interrogate(
60
+ input_glob: str, output_dir: str, name: str, filt: str, *args
61
+ ) -> COMMON_OUTPUT:
62
+ # input glob should always be rechecked for new files
63
+ IOData.update_input_glob(input_glob)
64
+ if output_dir != It.input["output_dir"]:
65
+ IOData.update_output_dir(output_dir)
66
+ It.input["output_dir"] = output_dir
67
+
68
+ if len(IOData.err) > 0:
69
+ return (None,) * 6 + (IOData.error_msg(),)
70
+
71
+ for i, val in enumerate(args):
72
+ part = TAG_INPUTS[i]
73
+ if val != It.input[part]:
74
+ getattr(QData, "update_" + part)(val)
75
+ It.input[part] = val
76
+
77
+ interrogator: It = next((i for i in utils.interrogators.values() if
78
+ i.name == name), None)
79
+ if interrogator is None:
80
+ return (None,) * 6 + (f"'{name}': invalid interrogator",)
81
+
82
+ interrogator.batch_interrogate()
83
+ return search_filter(filt)
84
+
85
+
86
+ def on_gallery() -> List:
87
+ return QData.get_image_dups()
88
+
89
+
90
+ def on_interrogate_image(*args) -> COMMON_OUTPUT:
91
+ # hack because image interrogation occurs twice
92
+ It.odd_increment = It.odd_increment + 1
93
+ if (It.odd_increment & 1) == 1:
94
+ return (None,) * 6 + ('',)
95
+ return on_interrogate_image_submit(*args)
96
+
97
+
98
+ def on_interrogate_image_submit(
99
+ image: Image, name: str, filt: str, *args
100
+ ) -> COMMON_OUTPUT:
101
+ for i, val in enumerate(args):
102
+ part = TAG_INPUTS[i]
103
+ if val != It.input[part]:
104
+ getattr(QData, "update_" + part)(val)
105
+ It.input[part] = val
106
+
107
+ if image is None:
108
+ return (None,) * 6 + ('No image selected',)
109
+ interrogator: It = next((i for i in utils.interrogators.values() if
110
+ i.name == name), None)
111
+ if interrogator is None:
112
+ return (None,) * 6 + (f"'{name}': invalid interrogator",)
113
+
114
+ interrogator.interrogate_image(image)
115
+ return search_filter(filt)
116
+
117
+
118
+ def move_selection_to_input(
119
+ filt: str, field: str
120
+ ) -> Tuple[Optional[str], Optional[str], str]:
121
+ """ moves the selected to the input field """
122
+ if It.output is None:
123
+ return (None, None, '')
124
+ tags = It.output[1]
125
+ got = It.input[field]
126
+ existing = set(got.split(', '))
127
+ if filt:
128
+ re_part = re.compile('(' + re.sub(', ?', '|', filt) + ')')
129
+ tags = {k: v for k, v in tags.items() if re_part.search(k) and
130
+ k not in existing}
131
+ print("Tags remaining: ", tags)
132
+
133
+ if len(tags) == 0:
134
+ return ('', None, '')
135
+
136
+ if got != '':
137
+ got = got + ', '
138
+
139
+ (data, info) = It.set(field)(got + ', '.join(tags.keys()))
140
+ return ('', data, info)
141
+
142
+
143
+ def move_selection_to_keep(
144
+ tag_search_filter: str
145
+ ) -> Tuple[Optional[str], Optional[str], str]:
146
+ return move_selection_to_input(tag_search_filter, "keep")
147
+
148
+
149
+ def move_selection_to_exclude(
150
+ tag_search_filter: str
151
+ ) -> Tuple[Optional[str], Optional[str], str]:
152
+ return move_selection_to_input(tag_search_filter, "exclude")
153
+
154
+
155
+ def search_filter(filt: str) -> COMMON_OUTPUT:
156
+ """ filters the tags and lost tags for the search field """
157
+ ratings, tags, lost, info = It.output
158
+ if ratings is None:
159
+ return (None,) * 6 + (info,)
160
+ if filt:
161
+ re_part = re.compile('(' + re.sub(', ?', '|', filt) + ')')
162
+ tags = {k: v for k, v in tags.items() if re_part.search(k)}
163
+ lost = {k: v for k, v in lost.items() if re_part.search(k)}
164
+
165
+ h_tags = ', '.join(f'<a href="javascript:tag_clicked(\'{html_esc(k)}\','
166
+ f'true)">{k}</a>' for k in tags.keys())
167
+ h_lost = ', '.join(f'<a href="javascript:tag_clicked(\'{html_esc(k)}\','
168
+ f'false)">{k}</a>' for k in lost.keys())
169
+
170
+ return (', '.join(tags.keys()), h_tags, h_lost, ratings, tags, lost, info)
171
+
172
+
173
+ def on_ui_tabs():
174
+ """ configures the ui on the tagger tab """
175
+ # If checkboxes misbehave you have to adapt the default.json preset
176
+ tag_input = {}
177
+
178
+ with gr.Blocks(analytics_enabled=False) as tagger_interface:
179
+ with gr.Row():
180
+ with gr.Column(variant='panel'):
181
+
182
+ # input components
183
+ with gr.Tabs():
184
+ with gr.TabItem(label='Single process'):
185
+ image = gr.Image(
186
+ label='Source',
187
+ source='upload',
188
+ interactive=True,
189
+ type="pil"
190
+ )
191
+ image_submit = gr.Button(
192
+ value='Interrogate image',
193
+ variant='primary'
194
+ )
195
+
196
+ with gr.TabItem(label='Batch from directory'):
197
+ input_glob = utils.preset.component(
198
+ gr.Textbox,
199
+ value='',
200
+ label='Input directory - To recurse use ** or */* '
201
+ 'in your glob; also check the settings tab.',
202
+ placeholder='/path/to/images or to/images/**/*'
203
+ )
204
+ output_dir = utils.preset.component(
205
+ gr.Textbox,
206
+ value=It.input["output_dir"],
207
+ label='Output directory',
208
+ placeholder='Leave blank to save images '
209
+ 'to the same path.'
210
+ )
211
+
212
+ batch_submit = gr.Button(
213
+ value='Interrogate',
214
+ variant='primary'
215
+ )
216
+ with gr.Row(variant='compact'):
217
+ with gr.Column(variant='panel'):
218
+ large_query = utils.preset.component(
219
+ gr.Checkbox,
220
+ label='huge batch query (TF 2.10, '
221
+ 'experimental)',
222
+ value=False,
223
+ interactive=version.parse(tf_version) ==
224
+ version.parse('2.10')
225
+ )
226
+ with gr.Column(variant='panel'):
227
+ save_tags = utils.preset.component(
228
+ gr.Checkbox,
229
+ label='Save to tags files',
230
+ value=True
231
+ )
232
+
233
+ info = gr.HTML(
234
+ label='Info',
235
+ interactive=False,
236
+ elem_classes=['info']
237
+ )
238
+
239
+ # interrogator selector
240
+ with gr.Column():
241
+ # preset selector
242
+ with gr.Row(variant='compact'):
243
+ available_presets = utils.preset.list()
244
+ selected_preset = gr.Dropdown(
245
+ label='Preset',
246
+ choices=available_presets,
247
+ value=available_presets[0]
248
+ )
249
+
250
+ save_preset_button = gr.Button(
251
+ value=ui.save_style_symbol
252
+ )
253
+
254
+ ui.create_refresh_button(
255
+ selected_preset,
256
+ lambda: None,
257
+ lambda: {'choices': utils.preset.list()},
258
+ 'refresh_preset'
259
+ )
260
+
261
+ with gr.Row(variant='compact'):
262
+ def refresh():
263
+ utils.refresh_interrogators()
264
+ return sorted(x.name for x in utils.interrogators
265
+ .values())
266
+ interrogator_names = refresh()
267
+ interrogator = utils.preset.component(
268
+ gr.Dropdown,
269
+ label='Interrogator',
270
+ choices=interrogator_names,
271
+ value=(
272
+ None
273
+ if len(interrogator_names) < 1 else
274
+ interrogator_names[-1]
275
+ )
276
+ )
277
+
278
+ ui.create_refresh_button(
279
+ interrogator,
280
+ lambda: None,
281
+ lambda: {'choices': refresh()},
282
+ 'refresh_interrogator'
283
+ )
284
+
285
+ unload_all_models = gr.Button(
286
+ value='Unload all interrogate models'
287
+ )
288
+ with gr.Row(variant='compact'):
289
+ tag_input["add"] = utils.preset.component(
290
+ gr.Textbox,
291
+ label='Additional tags (comma split)',
292
+ elem_id='additional-tags'
293
+ )
294
+ with gr.Row(variant='compact'):
295
+ threshold = utils.preset.component(
296
+ gr.Slider,
297
+ label='Weight threshold',
298
+ minimum=0,
299
+ maximum=1,
300
+ value=QData.threshold
301
+ )
302
+ tag_frac_threshold = utils.preset.component(
303
+ gr.Slider,
304
+ label='Min tag fraction in batch and '
305
+ 'interrogations',
306
+ minimum=0,
307
+ maximum=1,
308
+ value=QData.tag_frac_threshold,
309
+ )
310
+ with gr.Row(variant='compact'):
311
+ cumulative = utils.preset.component(
312
+ gr.Checkbox,
313
+ label='Combine interrogations',
314
+ value=False
315
+ )
316
+ unload_after = utils.preset.component(
317
+ gr.Checkbox,
318
+ label='Unload model after running',
319
+ value=False
320
+ )
321
+ with gr.Row(variant='compact'):
322
+ tag_input["search"] = utils.preset.component(
323
+ gr.Textbox,
324
+ label='Search tag, .. ->',
325
+ elem_id='search-tags'
326
+ )
327
+ tag_input["replace"] = utils.preset.component(
328
+ gr.Textbox,
329
+ label='-> Replace tag, ..',
330
+ elem_id='replace-tags'
331
+ )
332
+ with gr.Row(variant='compact'):
333
+ tag_input["keep"] = utils.preset.component(
334
+ gr.Textbox,
335
+ label='Keep tag, ..',
336
+ elem_id='keep-tags'
337
+ )
338
+ tag_input["exclude"] = utils.preset.component(
339
+ gr.Textbox,
340
+ label='Exclude tag, ..',
341
+ elem_id='exclude-tags'
342
+ )
343
+
344
+ # output components
345
+ with gr.Column(variant='panel'):
346
+ with gr.Row(variant='compact'):
347
+ with gr.Column(variant='compact'):
348
+ mv_selection_to_keep = gr.Button(
349
+ value='Move visible tags to keep tags',
350
+ variant='secondary'
351
+ )
352
+ mv_selection_to_exclude = gr.Button(
353
+ value='Move visible tags to exclude tags',
354
+ variant='secondary'
355
+ )
356
+ with gr.Column(variant='compact'):
357
+ tag_search_selection = utils.preset.component(
358
+ gr.Textbox,
359
+ label='Multi string search: part1, part2.. '
360
+ '(Enter key to update)',
361
+ )
362
+ with gr.Tabs():
363
+ with gr.TabItem(label='Ratings and included tags'):
364
+ # clickable tags to populate excluded tags
365
+ tags = gr.State(value="")
366
+ html_tags = gr.HTML(
367
+ label='Tags',
368
+ elem_id='tags',
369
+ )
370
+
371
+ with gr.Row():
372
+ parameters_copypaste.bind_buttons(
373
+ parameters_copypaste.create_buttons(
374
+ ["txt2img", "img2img"],
375
+ ),
376
+ None,
377
+ tags
378
+ )
379
+ rating_confidences = gr.Label(
380
+ label='Rating confidences',
381
+ elem_id='rating-confidences',
382
+ )
383
+ tag_confidences = gr.Label(
384
+ label='Tag confidences',
385
+ elem_id='tag-confidences',
386
+ )
387
+ with gr.TabItem(label='Excluded tags'):
388
+ # clickable tags to populate keep tags
389
+ discarded_tags = gr.HTML(
390
+ label='Tags',
391
+ elem_id='tags',
392
+ )
393
+ excluded_tag_confidences = gr.Label(
394
+ label='Excluded Tag confidences',
395
+ elem_id='discard-tag-confidences',
396
+ )
397
+ tab_gallery = gr.TabItem(label='Gallery')
398
+ with tab_gallery:
399
+ gallery = gr.Gallery(
400
+ label='Gallery',
401
+ elem_id='gallery',
402
+ columns=[2],
403
+ rows=[8],
404
+ object_fit="contain",
405
+ height="auto"
406
+ )
407
+
408
+ # register events
409
+ # Checkboxes
410
+ cumulative.input(fn=It.flip('cumulative'), inputs=[], outputs=[])
411
+ large_query.input(fn=It.flip('large_query'), inputs=[], outputs=[])
412
+ unload_after.input(fn=It.flip('unload_after'), inputs=[], outputs=[])
413
+
414
+ save_tags.input(fn=IOData.flip_save_tags(), inputs=[], outputs=[])
415
+
416
+ # Preset and unload buttons
417
+ selected_preset.change(fn=utils.preset.apply, inputs=[selected_preset],
418
+ outputs=[*utils.preset.components, info])
419
+
420
+ save_preset_button.click(fn=utils.preset.save, inputs=[selected_preset,
421
+ *utils.preset.components], outputs=[info])
422
+
423
+ unload_all_models.click(fn=unload_interrogators, outputs=[info])
424
+
425
+ # Sliders
426
+ threshold.input(fn=QData.set("threshold"), inputs=[threshold],
427
+ outputs=[])
428
+ threshold.release(fn=QData.set("threshold"), inputs=[threshold],
429
+ outputs=[])
430
+
431
+ tag_frac_threshold.input(fn=QData.set("tag_frac_threshold"),
432
+ inputs=[tag_frac_threshold], outputs=[])
433
+ tag_frac_threshold.release(fn=QData.set("tag_frac_threshold"),
434
+ inputs=[tag_frac_threshold], outputs=[])
435
+
436
+ # Input textboxes (blur == lose focus)
437
+ for tag in TAG_INPUTS:
438
+ tag_input[tag].blur(fn=wrap_gradio_gpu_call(It.set(tag)),
439
+ inputs=[tag_input[tag]],
440
+ outputs=[tag_input[tag], info])
441
+
442
+ input_glob.blur(fn=wrap_gradio_gpu_call(It.set("input_glob")),
443
+ inputs=[input_glob], outputs=[input_glob, info])
444
+ output_dir.blur(fn=wrap_gradio_gpu_call(It.set("output_dir")),
445
+ inputs=[output_dir], outputs=[output_dir, info])
446
+
447
+ tab_gallery.select(fn=on_gallery, inputs=[], outputs=[gallery])
448
+
449
+ common_output = [tags, html_tags, discarded_tags, rating_confidences,
450
+ tag_confidences, excluded_tag_confidences, info]
451
+
452
+ # search input textbox
453
+ for fun in [tag_search_selection.change, tag_search_selection.submit]:
454
+ fun(fn=wrap_gradio_gpu_call(search_filter),
455
+ inputs=[tag_search_selection], outputs=common_output)
456
+
457
+ # buttons to move tags (right)
458
+ mv_selection_to_keep.click(
459
+ fn=wrap_gradio_gpu_call(move_selection_to_keep),
460
+ inputs=[tag_search_selection],
461
+ outputs=[tag_search_selection, tag_input["keep"], info])
462
+
463
+ mv_selection_to_exclude.click(
464
+ fn=wrap_gradio_gpu_call(move_selection_to_exclude),
465
+ inputs=[tag_search_selection],
466
+ outputs=[tag_search_selection, tag_input["exclude"], info])
467
+
468
+ common_input = [interrogator, tag_search_selection] + \
469
+ [tag_input[tag] for tag in TAG_INPUTS]
470
+
471
+ # interrogation events
472
+ image_submit.click(fn=wrap_gradio_gpu_call(on_interrogate_image_submit),
473
+ inputs=[image] + common_input, outputs=common_output)
474
+
475
+ image.change(fn=wrap_gradio_gpu_call(on_interrogate_image),
476
+ inputs=[image] + common_input, outputs=common_output)
477
+
478
+ batch_submit.click(fn=wrap_gradio_gpu_call(on_interrogate),
479
+ inputs=[input_glob, output_dir] + common_input,
480
+ outputs=common_output)
481
+
482
+ return [(tagger_interface, "Tagger", "tagger")]
extensions-builtin/sdw-wd14-tagger/tagger/uiset.py ADDED
@@ -0,0 +1,634 @@
1
+ """ for handling ui settings """
2
+
3
+ from typing import List, Dict, Tuple, Callable, Set, Optional
4
+ import os
5
+ from pathlib import Path
6
+ from glob import glob
7
+ from math import ceil
8
+ from hashlib import sha256
9
+ from re import compile as re_comp, sub as re_sub, match as re_match, IGNORECASE
10
+ from json import dumps, loads
11
+ from jsonschema import validate, ValidationError
12
+ from functools import partial
13
+ from collections import defaultdict
14
+ from PIL import Image
15
+
16
+ from modules import shared # pylint: disable=import-error
17
+ from modules.deepbooru import re_special # pylint: disable=import-error
18
+ from tagger import format as tags_format # pylint: disable=import-error
19
+ from tagger import settings # pylint: disable=import-error
20
+
21
+ Its = settings.InterrogatorSettings
22
+
23
+ # PIL.Image.registered_extensions() returns only PNG if you call early
24
+ supported_extensions = {
25
+ e
26
+ for e, f in Image.registered_extensions().items()
27
+ if f in Image.OPEN
28
+ }
29
+
30
+ # interrogator return type
31
+ ItRetTP = Tuple[
32
+ Dict[str, float], # rating confidences
33
+ Dict[str, float], # tag confidences
34
+ Dict[str, float], # excluded tag confidences
35
+ str, # error message
36
+ ]
37
+
38
+
39
+ class IOData:
40
+ """ data class for input and output paths """
41
+ last_path_mtimes = None
42
+ base_dir = None
43
+ output_root = None
44
+ paths = []
45
+ save_tags = True
46
+ err = set()
47
+
48
+ @classmethod
49
+ def error_msg(cls) -> str:
50
+ return "Errors:<ul>" + ''.join(f'<li>{x}</li>' for x in cls.err) + \
51
+ "</ul>"
52
+
53
+ @classmethod
54
+ def flip_save_tags(cls) -> callable:
55
+ def toggle():
56
+ cls.save_tags = not cls.save_tags
57
+ return toggle
58
+
59
+ @classmethod
60
+ def toggle_save_tags(cls) -> None:
61
+ cls.save_tags = not cls.save_tags
62
+
63
+ @classmethod
64
+ def update_output_dir(cls, output_dir: str) -> None:
65
+ """ update output directory, and set input and output paths """
66
+ pout = Path(output_dir)
67
+ if pout != cls.output_root:
68
+ paths = [x[0] for x in cls.paths]
69
+ cls.paths = []
70
+ cls.output_root = pout
71
+ cls.set_batch_io(paths)
72
+
73
+ @staticmethod
74
+ def get_bytes_hash(data) -> str:
75
+ """ get sha256 checksum of file """
76
+ # Note: the checksum from an image is not the same as from file
77
+ return sha256(data).hexdigest()
78
+
79
+ @classmethod
80
+ def get_hashes(cls) -> Set[str]:
81
+ """ get hashes of all files """
82
+ ret = set()
83
+ for entries in cls.paths:
84
+ if len(entries) == 4:
85
+ ret.add(entries[3])
86
+ else:
87
+ # if there is no checksum, calculate it
88
+ image = Image.open(entries[0])
89
+ checksum = cls.get_bytes_hash(image.tobytes())
90
+ entries.append(checksum)
91
+ ret.add(checksum)
92
+ return ret
93
+
94
+ @classmethod
95
+ def update_input_glob(cls, input_glob: str) -> None:
96
+ """ update input glob pattern, and set input and output paths """
97
+ input_glob = input_glob.strip()
98
+
99
+ paths = []
100
+
101
+ # if there is no glob pattern, insert it automatically
102
+ if not input_glob.endswith('*'):
103
+ if not input_glob.endswith(os.sep):
104
+ input_glob += os.sep
105
+ input_glob += '*'
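+ # e.g. '/path/to/images' becomes '/path/to/images/*' before globbing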
106
+
107
+ # get root directory of input glob pattern
108
+ base_dir = input_glob.replace('?', '*')
109
+ base_dir = base_dir.split(os.sep + '*').pop(0)
110
+ msg = 'Invalid input directory'
111
+ if not os.path.isdir(base_dir):
112
+ cls.err.add(msg)
113
+ return
114
+ cls.err.discard(msg)
115
+
116
+ recursive = getattr(shared.opts, 'tagger_batch_recursive', True)
117
+ path_mtimes = []
118
+ for filename in glob(input_glob, recursive=recursive):
119
+ if not os.path.isdir(filename):
120
+ ext = os.path.splitext(filename)[1].lower()
121
+ if ext in supported_extensions:
122
+ path_mtimes.append(os.path.getmtime(filename))
123
+ paths.append(filename)
124
+ elif ext != '.txt' and 'db.json' not in filename:
125
+ print(f'{filename}: not an image extension: "{ext}"')
126
+
127
+ # interrogating in a directory with no pics, still flush the cache
128
+ if len(path_mtimes) > 0 and cls.last_path_mtimes == path_mtimes:
129
+ print('No changed images')
130
+ return
131
+
132
+ QData.clear(2)
133
+ cls.last_path_mtimes = path_mtimes
134
+
135
+ if not cls.output_root:
136
+ cls.output_root = Path(base_dir)
137
+ elif cls.base_dir and cls.output_root == Path(cls.base_dir):
138
+ cls.output_root = Path(base_dir)
139
+
140
+ # XXX what is this basedir magic trying to achieve?
141
+ cls.base_dir_last = Path(base_dir).parts[-1]
142
+ cls.base_dir = base_dir
143
+
144
+ QData.read_json(cls.output_root)
145
+
146
+ print(f'found {len(paths)} image(s)')
147
+ cls.set_batch_io(paths)
148
+
149
+ @classmethod
150
+ def set_batch_io(cls, paths: List[str]) -> None:
151
+ """ set input and output paths for batch mode """
152
+ checked_dirs = set()
153
+ cls.paths = []
154
+ for path in paths:
155
+ path = Path(path)
156
+ if not cls.save_tags:
157
+ cls.paths.append([path, '', ''])
158
+ continue
159
+
160
+ # guess the output path
161
+ base_dir_last_idx = path.parts.index(cls.base_dir_last)
162
+ # format output filename
163
+
164
+ info = tags_format.Info(path, 'txt')
165
+ fmt = partial(lambda info, m: tags_format.parse(m, info), info)
166
+
167
+ msg = 'Invalid output format'
168
+ cls.err.discard(msg)
169
+ try:
170
+ formatted_output_filename = tags_format.pattern.sub(
171
+ fmt,
172
+ Its.output_filename_format
173
+ )
174
+ except (TypeError, ValueError):
175
+ cls.err.add(msg)
176
+
177
+ output_dir = cls.output_root.joinpath(
178
+ *path.parts[base_dir_last_idx + 1:]).parent
179
+
180
+ tags_out = output_dir.joinpath(formatted_output_filename)
181
+
182
+ if output_dir in checked_dirs:
183
+ cls.paths.append([path, tags_out, ''])
184
+ else:
185
+ checked_dirs.add(output_dir)
186
+ if os.path.exists(output_dir):
187
+ msg = 'output_dir: not a directory.'
188
+ if os.path.isdir(output_dir):
189
+ cls.paths.append([path, tags_out, ''])
190
+ cls.err.discard(msg)
191
+ else:
192
+ cls.err.add(msg)
193
+ else:
194
+ cls.paths.append([path, tags_out, output_dir])
195
+
196
+
197
+ class QData:
198
+ """ Query data: contains parameters for the query """
199
+ add_tags = []
200
+ keep_tags = set()
201
+ exclude_tags = []
202
+ search_tags = {}
203
+ replace_tags = []
204
+ threshold = 0.35
205
+ tag_frac_threshold = 0.05
206
+
207
+ # read from db.json, update with what should be written to db.json:
208
+ json_db = None
209
+ weighed = (defaultdict(list), defaultdict(list))
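+ # weighed[0] holds per-rating weight lists and weighed[1] per-tag weight lists,
+ # mirroring the "rating" and "tag" keys in db.json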
210
+ query = {}
211
+
212
+ # representing the (cumulative) current interrogations
213
+ ratings = defaultdict(float)
214
+ tags = defaultdict(list)
215
+ discarded_tags = defaultdict(list)
216
+ in_db = {}
217
+ for_tags_file = defaultdict(lambda: defaultdict(float))
218
+
219
+ had_new = False
220
+ err = set()
221
+ image_dups = defaultdict(set)
222
+
223
+ @classmethod
224
+ def set(cls, key: str) -> Callable[[str], Tuple[str]]:
225
+ def setter(val) -> Tuple[str]:
226
+ setattr(cls, key, val)
227
+ return setter
228
+
229
234
+
235
+ @classmethod
236
+ def clear(cls, mode: int) -> None:
237
+ """ clear tags and ratings """
238
+ cls.tags.clear()
239
+ cls.discarded_tags.clear()
240
+ cls.ratings.clear()
241
+ cls.for_tags_file.clear()
242
+ if mode > 0:
243
+ cls.in_db.clear()
244
+ cls.image_dups.clear()
245
+ if mode > 1:
246
+ cls.json_db = None
247
+ cls.weighed = (defaultdict(list), defaultdict(list))
248
+ cls.query = {}
249
+ if mode > 2:
250
+ cls.add_tags = []
251
+ cls.keep_tags = set()
252
+ cls.exclude_tags = []
253
+ cls.search_tags = {}
254
+ cls.replace_tags = []
255
+
256
+ @classmethod
257
+ def test_add(cls, tag: str, current: str, incompatible: list) -> None:
258
+ """ check if there are incompatible collections """
259
+ msg = f'Empty tag in {current} tags'
260
+ if tag == '':
261
+ cls.err.add(msg)
262
+ return
263
+ cls.err.discard(msg)
264
+ for bad in incompatible:
265
+ if current < bad:
266
+ msg = f'"{tag}" is both in {bad} and {current} tags'
267
+ else:
268
+ msg = f'"{tag}" is both in {current} and {bad} tags'
269
+ attr = getattr(cls, bad + '_tags')
270
+ if bad == 'search':
271
+ for rex in attr.values():
272
+ if rex.match(tag):
273
+ cls.err.add(msg)
274
+ return
275
+ elif bad == 'exclude':
276
+ if any(rex.match(tag) for rex in attr):
277
+ cls.err.add(msg)
278
+ return
279
+ else:
280
+ if tag in attr:
281
+ cls.err.add(msg)
282
+ return
283
+
284
+ attr = getattr(cls, current + '_tags')
285
+ if current in ['add', 'replace']:
286
+ attr.append(tag)
287
+ elif current == 'keep':
288
+ attr.add(tag)
289
+ else:
290
+ rex = cls.compile_rex(tag)
291
+ if rex:
292
+ if current == 'exclude':
293
+ attr.append(rex)
294
+ elif current == 'search':
295
+ attr[len(attr)] = rex
296
+ else:
297
+ cls.err.add(f'empty regex in {current} tags')
298
+
299
+ @classmethod
300
+ def update_keep(cls, keep: str) -> None:
301
+ cls.keep_tags = set()
302
+ if keep == '':
303
+ return
304
+ un_re = re_comp(r' keep(?: and \w+)? tags')
305
+ cls.err = {err for err in cls.err if not un_re.search(err)}
306
+ for tag in map(str.strip, keep.split(',')):
307
+ cls.test_add(tag, 'keep', ['exclude', 'search'])
308
+
309
+ @classmethod
310
+ def update_add(cls, add: str) -> None:
311
+ cls.add_tags = []
312
+ if add == '':
313
+ return
314
+ un_re = re_comp(r' add(?: and \w+)? tags')
315
+ cls.err = {err for err in cls.err if not un_re.search(err)}
316
+ for tag in map(str.strip, add.split(',')):
317
+ cls.test_add(tag, 'add', ['exclude', 'search'])
318
+
319
+ # silently raise count threshold to avoid issue in apply_filters
320
+ count_threshold = getattr(shared.opts, 'tagger_count_threshold', 100)
321
+ if len(cls.add_tags) > count_threshold:
322
+ shared.opts.tagger_count_threshold = len(cls.add_tags)
323
+
324
+ @staticmethod
325
+ def compile_rex(rex: str) -> Optional:
326
+ if rex in {'', '^', '$', '^$'}:
327
+ return None
328
+ if rex[0] == '^':
329
+ rex = rex[1:]
330
+ if rex[-1] == '$':
331
+ rex = rex[:-1]
332
+ return re_comp('^'+rex+'$', flags=IGNORECASE)
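+ # e.g. 'cat.*' compiles to '^cat.*$' and is matched case-insensitively against whole tag names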
333
+
334
+ @classmethod
335
+ def update_exclude(cls, exclude: str) -> None:
336
+ cls.exclude_tags = []
337
+ if exclude == '':
338
+ return
339
+ un_re = re_comp(r' exclude(?: and \w+)? tags')
340
+ cls.err = {err for err in cls.err if not un_re.search(err)}
341
+ for excl in map(str.strip, exclude.split(',')):
342
+ incompatible = ['add', 'keep', 'search', 'replace']
343
+ cls.test_add(excl, 'exclude', incompatible)
344
+
345
+ @classmethod
346
+ def update_search(cls, search_str: str) -> None:
347
+ cls.search_tags = {}
348
+ if search_str == '':
349
+ return
350
+ un_re = re_comp(r' search(?: and \w+)? tags')
351
+ cls.err = {err for err in cls.err if not un_re.search(err)}
352
+ for rex in map(str.strip, search_str.split(',')):
353
+ incompatible = ['add', 'keep', 'exclude', 'replace']
354
+ cls.test_add(rex, 'search', incompatible)
355
+
356
+ msg = 'Unequal number of search and replace tags'
357
+ if len(cls.search_tags) != len(cls.replace_tags):
358
+ cls.err.add(msg)
359
+ else:
360
+ cls.err.discard(msg)
361
+
362
+ @classmethod
363
+ def update_replace(cls, replace: str) -> None:
364
+ cls.replace_tags = []
365
+ if replace == '':
366
+ return
367
+ un_re = re_comp(r' replace(?: and \w+)? tags')
368
+ cls.err = {err for err in cls.err if not un_re.search(err)}
369
+ for repl in map(str.strip, replace.split(',')):
370
+ cls.test_add(repl, 'replace', ['exclude', 'search'])
371
+ msg = 'Unequal number of search and replace tags'
372
+ if len(cls.search_tags) != len(cls.replace_tags):
373
+ cls.err.add(msg)
374
+ else:
375
+ cls.err.discard(msg)
376
+
377
+     @classmethod
+     def get_i_wt(cls, stored: float) -> Tuple[int, float]:
+         """
+         In db.json and QData.weighed, each list entry encodes both values:
+         the whole part is the incrementing index of the
+         filestamp-interrogation, the fractional part is the weight.
+         """
+         i = ceil(stored) - 1
+         return i, stored - i
+
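A quick worked illustration of the packed value that get_i_wt decodes (editorial example, not part of the diff; the numbers are made up):

    from math import ceil

    stored = 3.25             # one entry from QData.weighed or db.json
    i = ceil(stored) - 1      # 3    -> index of the filestamp-interrogation
    weight = stored - i       # 0.25 -> the tag's weight in that interrogation
    assert (i, weight) == (3, 0.25)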
+     @classmethod
+     def read_json(cls, outdir) -> None:
+         """ read db.json if it exists, validate it, and update cls """
+         cls.json_db = None
+         if getattr(shared.opts, 'tagger_auto_serde_json', True):
+             cls.json_db = outdir.joinpath('db.json')
+             if cls.json_db.is_file():
+                 print(f'Reading {cls.json_db}')
+                 cls.had_new = False
+                 msg = f'Error reading {cls.json_db}'
+                 cls.err.discard(msg)
+                 # validate json using either json_schema/db_json_v1_schema.json
+                 # or json_schema/db_json_v2_schema.json
+
+                 schema = Path(__file__).parent.parent.joinpath(
+                     'json_schema', 'db_json_v1_schema.json'
+                 )
+                 try:
+                     data = loads(cls.json_db.read_text())
+                     validate(data, loads(schema.read_text()))
+
+                     # convert v2 back to v1
+                     if "meta" in data:
+                         cls.had_new = True  # <- force write for v2 -> v1
+                 except (ValidationError, IndexError) as err:
+                     print(f'{msg}: {repr(err)}')
+                     cls.err.add(msg)
+                     data = {"query": {}, "tag": [], "rating": []}
+
+                 cls.query = data["query"]
+                 cls.weighed = (
+                     defaultdict(list, data["rating"]),
+                     defaultdict(list, data["tag"])
+                 )
+                 print(f'Read {cls.json_db}: {len(cls.query)} interrogations, '
+                       f'{len(cls.tags)} tags.')
+
+     @classmethod
+     def write_json(cls) -> None:
+         """ write db.json """
+         if cls.json_db is not None:
+             data = {
+                 "rating": cls.weighed[0],
+                 "tag": cls.weighed[1],
+                 "query": cls.query,
+             }
+             cls.json_db.write_text(dumps(data, indent=2))
+             print(f'Wrote {cls.json_db}: {len(cls.query)} interrogations, '
+                   f'{len(cls.tags)} tags.')
+
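Read together, read_json and write_json imply a db.json of roughly this shape (hand-written illustration, not an actual file from the repo; the keys are the ones accessed above, the values are invented):

    example_db = {
        # filestamp-interrogator key -> [image path, interrogation index]
        "query": {"<filestamp-interrogator key>": ["path/to/image.png", 0]},
        # name -> list of packed index+weight values (decoded by get_i_wt)
        "rating": {"general": [0.87], "sensitive": [1.95]},
        "tag": {"1girl": [0.98, 1.72]},
    }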
+     @classmethod
+     def get_index(cls, fi_key: str, path='') -> int:
+         """ get index for filestamp-interrogator """
+         if path and path != cls.query[fi_key][0]:
+             if cls.query[fi_key][0] != '':
+                 print(f'Dup or rename: Identical checksums for {path}\n'
+                       f'and: {cls.query[fi_key][0]} (path updated)')
+             cls.had_new = True
+             cls.query[fi_key] = (path, cls.query[fi_key][1])
+
+         return cls.query[fi_key][1]
+
+     @classmethod
+     def single_data(cls, fi_key: str) -> None:
+         """ get tags and ratings for filestamp-interrogator """
+         index = cls.query.get(fi_key)[1]
+         data = ({}, {})
+         for j in range(2):
+             for ent, lst in cls.weighed[j].items():
+                 for i, val in map(cls.get_i_wt, lst):
+                     if i == index:
+                         data[j][ent] = val
+
+         QData.in_db[index] = ('', '', '') + data
+
+     @classmethod
+     def is_excluded(cls, ent: str) -> bool:
+         """ check if tag is excluded """
+         return any(re_match(x, ent) for x in cls.exclude_tags)
+
+     @classmethod
+     def correct_tag(cls, tag: str) -> str:
+         """ correct tag for display """
+         replace_underscore = getattr(shared.opts, 'tagger_repl_us', True)
+         if replace_underscore and tag not in Its.kamojis:
+             tag = tag.replace('_', ' ')
+
+         if getattr(shared.opts, 'tagger_escape', False):
+             tag = re_special.sub(r'\\\1', tag)  # tag_escape_pattern
+
+         if len(cls.search_tags) == len(cls.replace_tags):
+             for i, regex in cls.search_tags.items():
+                 if re_match(regex, tag):
+                     tag = re_sub(regex, cls.replace_tags[i], tag)
+                     break
+
+         return tag
+
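As a small illustration of the search/replace path in correct_tag: compile_rex anchors the expression and ignores case, and each search_tags entry is paired positionally with a replace_tags entry (editorial sketch using the standard re module, values made up):

    import re

    regex = re.compile('^kitten$', flags=re.IGNORECASE)  # what compile_rex('kitten') builds
    re.sub(regex, 'cat', 'Kitten')       # -> 'cat' (case-insensitive full match)
    re.sub(regex, 'cat', 'kitten girl')  # unchanged: the anchors require a full match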
+     @classmethod
+     def apply_filters(cls, data) -> None:
+         """ apply filters to query data, store in db.json if required """
+         # data = (name, tags-file path, fi_key, ratings, tags)
+         # fi_key == '' means the entry was replayed from db.json; a non-empty
+         # fi_key marks a new interrogation that still has to be stored
+
+         tags = sorted(data[4].items(), key=lambda x: x[1], reverse=True)
+
+         fi_key = data[2]
+         index = len(cls.query)
+
+         ratings = sorted(data[3].items(), key=lambda x: x[1], reverse=True)
+         # loop over ratings
+         for rating, val in ratings:
+             if fi_key != '':
+                 cls.weighed[0][rating].append(val + index)
+             cls.ratings[rating] += val
+
+         count_threshold = getattr(shared.opts, 'tagger_count_threshold', 100)
+         max_ct = count_threshold - len(cls.add_tags)
+         count = 0
+         # loop over tags with db update
+         for tag, val in tags:
+             if isinstance(tag, float):
+                 print(f'bad return from interrogator, float: {tag} {val}')
+                 # FIXME: why does this happen? what does it mean?
+                 continue
+
+             if fi_key != '' and val >= 0.005:
+                 cls.weighed[1][tag].append(val + index)
+
+             if count < max_ct:
+                 tag = cls.correct_tag(tag)
+                 if tag not in cls.keep_tags:
+                     if cls.is_excluded(tag) or val < cls.threshold:
+                         if tag not in cls.add_tags and \
+                                 len(cls.discarded_tags) < max_ct:
+                             cls.discarded_tags[tag].append(val)
+                         continue
+                 if data[1] != '':
+                     current = cls.for_tags_file[data[1]].get(tag, 0.0)
+                     cls.for_tags_file[data[1]][tag] = min(val + current, 1.0)
+                 count += 1
+                 if tag not in cls.add_tags:
+                     # those are already added
+                     cls.tags[tag].append(val)
+             elif fi_key == '':
+                 break
+
+         if getattr(shared.opts, 'tagger_verbose', True):
+             print(f'{data[0]}: {count}/{len(tags)} tags kept')
+
+         if fi_key != '':
+             cls.query[fi_key] = (data[0], index)
+
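Stripped of the count-threshold and db bookkeeping, the per-tag decision in apply_filters boils down to the following (a simplified, hypothetical helper written only for illustration, not a function of the extension):

    def keep_tag(tag, val, threshold, keep_tags, exclude_patterns):
        # explicitly kept tags always survive
        if tag in keep_tags:
            return True
        # excluded tags are dropped to the discarded list
        if any(p.match(tag) for p in exclude_patterns):
            return False
        # otherwise the interrogator weight decides
        return val >= threshold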
+     @classmethod
+     def finalize_batch(cls, count: int) -> ItRetTP:
+         """ finalize the batch query """
+         if cls.json_db and cls.had_new:
+             cls.write_json()
+             cls.had_new = False
+
+         # collect the weights per file/interrogation previously stored in the db
+         for index in range(2):
+             for ent, lst in cls.weighed[index].items():
+                 for i, val in map(cls.get_i_wt, lst):
+                     if i not in cls.in_db:
+                         continue
+                     cls.in_db[i][3+index][ent] = val
+
+         # process the entries retrieved from the db and add them to the stats
+         for got in cls.in_db.values():
+             no_floats = sorted(filter(lambda x: not isinstance(x[0], float),
+                                       got[3].items()), key=lambda x: x[0])
+             sorted_tags = ','.join(f'({k},{v:.1f})' for (k, v) in no_floats)
+             QData.image_dups[sorted_tags].add(got[0])
+             cls.apply_filters(got)
+
+         # average
+         return cls.finalize(count)
+
+     @staticmethod
+     def sort_tags(tags: Dict[str, float]) -> List[Tuple[str, float]]:
+         """ sort tags by value, return list of tuples """
+         return sorted(tags.items(), key=lambda x: x[1], reverse=True)
+
+     @classmethod
+     def get_image_dups(cls) -> List[str]:
+         # sort so that signatures without a comma come first
+         ordered = sorted(cls.image_dups.items(), key=lambda x: ',' in x[0])
+         return [str(x) for s in ordered if len(s[1]) > 1 for x in s[1]]
+
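The duplicate report produced above groups image paths by an identical signature string. In isolation the mechanism looks like this (illustrative only; image_dups behaves as a mapping from signature to a set of paths, as the .add() calls above suggest):

    from collections import defaultdict

    image_dups = defaultdict(set)
    image_dups['(general,0.9)'].add('a.png')
    image_dups['(general,0.9)'].add('b.png')    # same signature -> reported
    image_dups['(sensitive,0.8)'].add('c.png')  # unique signature -> not reported

    dups = [p for sig, paths in image_dups.items() if len(paths) > 1
            for p in paths]
    # dups contains 'a.png' and 'b.png', but not 'c.png'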
+     @classmethod
+     def finalize(cls, count: int) -> ItRetTP:
+         """ finalize the query, return the results """
+
+         count += len(cls.in_db)
+         if count == 0:
+             return None, None, None, 'no results for query'
+
+         ratings, tags, discarded_tags = {}, {}, {}
+
+         for n in cls.for_tags_file.keys():
+             for k in cls.add_tags:
+                 cls.for_tags_file[n][k] = 1.0 * count
+
+         for k in cls.add_tags:
+             tags[k] = 1.0
+
+         for k, lst in cls.tags.items():
+             # len(lst) / count: the fraction of interrogations in which this
+             # tag scored above the threshold
+             fraction_of_queries = len(lst) / count
+
+             if fraction_of_queries >= cls.tag_frac_threshold:
+                 # store the average over all interrogations: sum(lst) / count
+                 tags[k] = sum(lst) / count
+                 # trigger an event to place the tag in the active tags list
+                 # replace if k interferes with html code
+             else:
+                 discarded_tags[k] = sum(lst) / count
+                 for n in cls.for_tags_file.keys():
+                     if k in cls.for_tags_file[n]:
+                         if k not in cls.add_tags and k not in cls.keep_tags:
+                             del cls.for_tags_file[n][k]
+
+         for k, lst in cls.discarded_tags.items():
+             fraction_of_queries = len(lst) / count
+             discarded_tags[k] = sum(lst) / count
+
+         for ent, val in cls.ratings.items():
+             ratings[ent] = val / count
+
+         weighted_tags_files = getattr(shared.opts,
+                                       'tagger_weighted_tags_files', False)
+         for file, remaining_tags in cls.for_tags_file.items():
+             sorted_tags = cls.sort_tags(remaining_tags)
+             if weighted_tags_files:
+                 sorted_tags = [f'({k}:{v})' for k, v in sorted_tags]
+             else:
+                 sorted_tags = [k for k, v in sorted_tags]
+             file.write_text(', '.join(sorted_tags), encoding='utf-8')
+
+         warn = ""
+         if len(QData.err) > 0:
+             warn = "Warnings (fix and try again - it should be cheap):<ul>" + \
+                    ''.join([f'<li>{x}</li>' for x in QData.err]) + "</ul>"
+
+         if count > 1 and len(cls.get_image_dups()) > 0:
+             warn += "There were duplicates, see gallery tab"
+         return ratings, tags, discarded_tags, warn
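A worked example of the averaging rule in finalize (editorial illustration; the numbers are invented and tag_frac_threshold is the QData attribute referenced above, defined elsewhere in uiset.py):

    count = 4                      # interrogations in this batch
    lst = [0.9, 0.8, 0.7]          # weights of one tag where it passed the filter
    fraction_of_queries = len(lst) / count   # 0.75 -> kept if >= tag_frac_threshold
    average = sum(lst) / count               # 0.6  -> value reported for the tag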
extensions-builtin/sdw-wd14-tagger/tagger/utils.py ADDED
@@ -0,0 +1,131 @@
+ """Utility functions for the tagger module"""
+ import os
+
+ from typing import List, Dict
+ from pathlib import Path
+
+ from modules import shared, scripts  # pylint: disable=import-error
+ from modules.shared import models_path  # pylint: disable=import-error
+
+ default_ddp_path = Path(models_path, 'deepdanbooru')
+ default_onnx_path = Path(models_path, 'TaggerOnnx')
+ from tagger.preset import Preset  # pylint: disable=import-error
+ from tagger.interrogator import Interrogator, DeepDanbooruInterrogator, \
+     MLDanbooruInterrogator  # pylint: disable=E0401 # noqa: E501
+ from tagger.interrogator import WaifuDiffusionInterrogator  # pylint: disable=E0401 # noqa: E501
+
+ preset = Preset(Path(scripts.basedir(), 'presets'))
+
+ interrogators: Dict[str, Interrogator] = {
+     'wd14-vit.v1': WaifuDiffusionInterrogator(
+         'WD14 ViT v1',
+         repo_id='SmilingWolf/wd-v1-4-vit-tagger'
+     ),
+     'wd14-vit.v2': WaifuDiffusionInterrogator(
+         'WD14 ViT v2',
+         repo_id='SmilingWolf/wd-v1-4-vit-tagger-v2',
+     ),
+     'wd14-convnext.v1': WaifuDiffusionInterrogator(
+         'WD14 ConvNeXT v1',
+         repo_id='SmilingWolf/wd-v1-4-convnext-tagger'
+     ),
+     'wd14-convnext.v2': WaifuDiffusionInterrogator(
+         'WD14 ConvNeXT v2',
+         repo_id='SmilingWolf/wd-v1-4-convnext-tagger-v2',
+     ),
+     'wd14-convnextv2.v1': WaifuDiffusionInterrogator(
+         'WD14 ConvNeXTV2 v1',
+         # the name is misleading, but it's v1
+         repo_id='SmilingWolf/wd-v1-4-convnextv2-tagger-v2',
+     ),
+     'wd14-swinv2-v1': WaifuDiffusionInterrogator(
+         'WD14 SwinV2 v1',
+         # again misleading name
+         repo_id='SmilingWolf/wd-v1-4-swinv2-tagger-v2',
+     ),
+     'wd-v1-4-moat-tagger.v2': WaifuDiffusionInterrogator(
+         'WD14 moat tagger v2',
+         repo_id='SmilingWolf/wd-v1-4-moat-tagger-v2'
+     ),
+     'mld-caformer.dec-5-97527': MLDanbooruInterrogator(
+         'ML-Danbooru Caformer dec-5-97527',
+         repo_id='deepghs/ml-danbooru-onnx',
+         model_path='ml_caformer_m36_dec-5-97527.onnx'
+     ),
+     'mld-tresnetd.6-30000': MLDanbooruInterrogator(
+         'ML-Danbooru TResNet-D 6-30000',
+         repo_id='deepghs/ml-danbooru-onnx',
+         model_path='TResnet-D-FLq_ema_6-30000.onnx'
+     ),
+ }
+
+
+ def refresh_interrogators() -> List[str]:
+     """Refreshes the interrogators list"""
+     # load deepdanbooru project
+     ddp_path = shared.cmd_opts.deepdanbooru_projects_path
+     if ddp_path is None:
+         ddp_path = default_ddp_path
+     onnx_path = shared.cmd_opts.onnxtagger_path
+     if onnx_path is None:
+         onnx_path = default_onnx_path
+     os.makedirs(ddp_path, exist_ok=True)
+     os.makedirs(onnx_path, exist_ok=True)
+
+     for path in os.scandir(ddp_path):
+         print(f"Scanning {path} as deepdanbooru project")
+         if not path.is_dir():
+             print(f"Warning: {path} is not a directory, skipped")
+             continue
+
+         if not Path(path, 'project.json').is_file():
+             print(f"Warning: {path} has no project.json, skipped")
+             continue
+
+         interrogators[path.name] = DeepDanbooruInterrogator(path.name, path)
+     # scan for onnx models as well
+     for path in os.scandir(onnx_path):
+         print(f"Scanning {path} as onnx model")
+         if not path.is_dir():
+             print(f"Warning: {path} is not a directory, skipped")
+             continue
+
+         onnx_files = [x for x in os.scandir(path) if x.name.endswith('.onnx')]
+         if len(onnx_files) != 1:
+             print(f"Warning: {path} requires exactly one .onnx model, skipped")
+             continue
+         local_path = Path(path, onnx_files[0].name)
+
+         csv = [x for x in os.scandir(path) if x.name.endswith('.csv')]
+         if len(csv) == 0:
+             print(f"Warning: {path} has no selected tags .csv file, skipped")
+             continue
+
+         def tag_select_csvs_up_front(k):
+             # prefer .csv files whose name mentions "tag" or "select"
+             return sum(-1 if t in k.name.lower() else 1 for t in ["tag", "select"])
+
+         csv.sort(key=tag_select_csvs_up_front)
+         tags_path = Path(path, csv[0])
+
+         if path.name not in interrogators:
+             if path.name == 'wd-v1-4-convnextv2-tagger-v2':
+                 interrogators[path.name] = WaifuDiffusionInterrogator(
+                     path.name,
+                     repo_id='SmilingWolf/SW-CV-ModelZoo',
+                     is_hf=False
+                 )
+             elif path.name == 'Z3D-E621-Convnext':
+                 interrogators[path.name] = WaifuDiffusionInterrogator(
+                     'Z3D-E621-Convnext', is_hf=False)
+             else:
+                 raise NotImplementedError(f"Add {path.name} resolution similar"
+                                           " to above here")
+
+         interrogators[path.name].local_model = str(local_path)
+         interrogators[path.name].local_tags = str(tags_path)
+
+     return sorted(interrogators.keys())
+
+
+ def split_str(string: str, separator=',') -> List[str]:
+     return [x.strip() for x in string.split(separator) if x]
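A hedged usage sketch of this module (illustration only; it assumes the code runs inside the webui so that modules.shared and scripts.basedir() resolve, and that the listed model repositories are reachable):

    from tagger.utils import refresh_interrogators, interrogators, split_str

    names = refresh_interrogators()        # also registers local DeepDanbooru / ONNX models
    chosen = interrogators['wd14-vit.v2']  # an Interrogator instance (see tagger/interrogator.py)
    excluded = split_str('bad_tag, another_tag')   # -> ['bad_tag', 'another_tag']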