Outimus committed on
Commit ef7f3ab
1 Parent(s): 313ca23

Upload 24 files
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+fonts/CALIBRI.TTF filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,5 @@
+__pycache__
+Config.json
+nsp_pantry.json
+output
+
README.md ADDED
@@ -0,0 +1,82 @@
## Thank you to all the valuable contributors. Kindly submit any pull requests to the development branch instead of the main branch. Your efforts are greatly appreciated.

# ComfyUI-extra-nodes - quality of life
Extra nodes to be used in ComfyUI, including a new ChatGPT node for generating natural language responses.

## ComfyUI
ComfyUI is an advanced node-based UI that utilizes Stable Diffusion, allowing you to create customized workflows such as image post-processing or conversions.
## How to install
1. Download the zip file.
2. Extract it to `..\ComfyUI\custom_nodes`.
3. Restart ComfyUI if it was running (reloading the web page is not enough).

You will find the new nodes under the `O/...` group.
## How to update
- quality of life auto-updates each time you run ComfyUI.
- When you run ComfyUI, the suite generates a config file that looks like this:

    {
        "autoUpdate": true,
        "branch": "main",
        "openAI_API_Key": "sk-#################################"
    }

- If you want to stop auto-updating, edit `config.json` and set `"autoUpdate": false`.
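If you prefer to flip the flag from a script rather than by hand, a minimal sketch (assuming the config file has the shape shown above; `set_auto_update` is a hypothetical helper, not part of the suite):

```python
import json
from pathlib import Path

def set_auto_update(config_path, enabled):
    """Load the suite's config file, flip the autoUpdate flag, and save it back."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["autoUpdate"] = enabled
    path.write_text(json.dumps(config, indent=4))
    return config
```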
## Current nodes
## openAI suite

### ChatGPT simple
This node harnesses the power of ChatGPT, an advanced language model that can generate detailed image descriptions from a small input.
- You need an OpenAI API key, which you can find at https://beta.openai.com/docs/developer-apis/overview
- Once you have your API key, add it to the `config.json` file.
- The key is kept in a separate file so that it doesn't get embedded in the generated images.
## advanced openAI
- load_openAI: loads the openAI module
### ChatGPT
- Chat_Message: creates a message to be sent to ChatGPT
- combine_chat_messages: combines 2 messages together
- Chat completion: sends the messages to ChatGPT and receives the answer
### DALL·E 2
- create image
- variation_image
## String Suite
This set of nodes adds support for string manipulation and includes a tool to generate an image from text.

- Concat String: combines two strings together.
- Trim String: removes any extra spaces at the start or end of a string.
- Replace String: replaces part of the text with another.
- Debug String: writes the string to the console.
- Debug String route: writes the string to the console but also outputs the same string, so you can add it in the middle of a route.
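In plain Python, the behavior of these string nodes looks roughly like this (a sketch of what each node does, not the actual node implementations):

```python
def concat_string(text1, text2, separator=""):
    # Concat String: joins two strings, optionally with a separator in between
    return text1 + separator + text2

def trim_string(text):
    # Trim String: removes extra spaces at the start or end
    return text.strip()

def replace_string(text, old, new):
    # Replace String: replaces every occurrence of `old` with `new`
    return text.replace(old, new)

def debug_string_route(text):
    # Debug String route: prints the string and passes it through unchanged
    print(text)
    return text
```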
### String2image
This node generates an image based on text, which can be used with ControlNet to add text to an image. The tool supports various fonts; you can add the font you want to the fonts folder. If you load the example image in ComfyUI, the workflow that generated it will be loaded.

### save text
- saveTextToFile: saves the input text to a file (the file is generated inside the /output folder).
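The save step amounts to creating the output folder if needed and writing the text into it. A sketch of that behavior (`save_text_to_file` is a hypothetical stand-in, not the node's own code):

```python
import os

def save_text_to_file(text, filename, output_dir="output"):
    """Write `text` into a file under the output folder, creating it if missing."""
    os.makedirs(output_dir, exist_ok=True)
    path = os.path.join(output_dir, filename)
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
    return path
```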
### NSP
"Node soup": a collection of different values categorized under different terminologies that you can use to generate new prompts easily.
- RandomNSP: returns a random value from the selected terminology.
- ConcatRandomNSP: appends a random value from the selected terminology to the input text (can be used mid-route).
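Conceptually, the pantry is a mapping from terminology to a list of values, and the nodes pick from it at random. A sketch with an in-memory pantry dict (the real nodes read `nsp_pantry.json`; the function names here are illustrative):

```python
import random

def random_nsp(pantry, terminology, seed=None):
    # RandomNSP: return a random value from the selected terminology
    rng = random.Random(seed)
    return rng.choice(pantry[terminology])

def concat_random_nsp(pantry, terminology, text, separator=" ", seed=None):
    # ConcatRandomNSP: append a random value from the terminology to the input text
    return text + separator + random_nsp(pantry, terminology, seed)
```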

## latentTools
### selectLatentFromBatch
This node lets you select one latent image from an image batch. For example, if you generate 4 images, it lets you pick 1 of them for further processing, or you can use it to process them sequentially.
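The selection itself is just indexing into the batch dimension while keeping a batch of size 1. A sketch using plain nested lists in place of latent tensors:

```python
def select_from_batch(batch, index):
    # selectLatentFromBatch sketch: keep a single sample from the batch,
    # preserving the batch dimension (result is a batch of size 1)
    if not 0 <= index < len(batch):
        raise IndexError(f"index {index} outside batch of size {len(batch)}")
    return [batch[index]]
```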
### LatentUpscaleFactor & LatentUpscaleFactorSimple
A variant of the original LatentUpscale tool: instead of specifying width and height, you give multiplier values. For example, if the original image dimensions are (512,512) and the mul values are (2,2), the result will be (1024,1024). You can also downscale by using fractions, e.g., (512,512) mul (.5,.5) → (256,256).
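The factor arithmetic is simply a per-axis multiplication of the original dimensions:

```python
def scale_by_factor(width, height, mul_w, mul_h):
    # Multiply each dimension by its factor; fractions downscale
    return int(width * mul_w), int(height * mul_h)
```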

## ImageTools
### ImageScaleFactor & ImageScaleFactorSimple
A variant of the original ImageScale tool: instead of specifying width and height, you give multiplier values. For example, if the original image dimensions are (512,512) and the mul values are (2,2), the result will be (1024,1024). You can also downscale by using fractions, e.g., (512,512) mul (.5,.5) → (256,256).

## Thanks for reading my message, and I hope that my tools will help you.

## Contact
### Discord: Omar92#3374
### GitHub: omar92 (https://github.com/omar92)
Workflows/All nodes Workdflow .json ADDED
@@ -0,0 +1,3338 @@
{
"last_node_id": 86,
"last_link_id": 80,
"nodes": [
{
"id": 2,
"type": "Equation1param _O",
"pos": [
70.26363461036739,
125.46773995871277
],
"size": {
"0": 400,
"1": 200
},
"flags": {},
"order": 0,
"mode": 0,
"outputs": [
{
"name": "FLOAT",
"type": "FLOAT",
"links": [
7
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "Equation1param _O"
},
"widgets_values": [
6.5,
"x+1/3"
]
},
{
"id": 5,
"type": "Equation2params _O",
"pos": [
81.2636346103674,
394.4677399587126
],
"size": {
"0": 400,
"1": 200
},
"flags": {},
"order": 35,
"mode": 0,
"inputs": [
{
"name": "x",
"type": "FLOAT",
"link": 7,
"widget": {
"name": "x",
"config": [
"FLOAT",
{
"default": 0,
"min": 0,
"max": 18446744073709552000
}
]
}
}
],
"outputs": [
{
"name": "FLOAT",
"type": "FLOAT",
"links": [
4
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "Equation2params _O"
},
"widgets_values": [
5,
2.5,
"x+y"
]
},
{
"id": 6,
"type": "floatToInt _O",
"pos": [
551.2636346103676,
134.4677399587128
],
"size": {
"0": 315,
"1": 58
},
"flags": {},
"order": 44,
"mode": 0,
"inputs": [
{
"name": "float",
"type": "FLOAT",
"link": 4,
"widget": {
"name": "float",
"config": [
"FLOAT",
{
"default": 0,
"min": 0,
"max": 18446744073709552000
}
]
}
}
],
"outputs": [
{
"name": "INT",
"type": "INT",
"links": [
5
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "floatToInt _O"
},
"widgets_values": [
0
]
},
{
"id": 7,
"type": "intToFloat _O",
"pos": [
571.2636346103676,
322.4677399587126
],
"size": {
"0": 315,
"1": 58
},
"flags": {},
"order": 51,
"mode": 0,
"inputs": [
{
"name": "int",
"type": "INT",
"link": 5,
"widget": {
"name": "int",
"config": [
"INT",
{
"default": 0,
"min": 0,
"max": 18446744073709552000
}
]
}
}
],
"outputs": [
{
"name": "FLOAT",
"type": "FLOAT",
"links": [
6
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "intToFloat _O"
},
"widgets_values": [
0
]
},
{
"id": 3,
"type": "floatToText _O",
"pos": [
558.2636346103676,
478.4677399587126
],
"size": {
"0": 315,
"1": 58
},
"flags": {},
"order": 59,
"mode": 0,
"inputs": [
{
"name": "float",
"type": "FLOAT",
"link": 6,
"widget": {
"name": "float",
"config": [
"FLOAT",
{
"default": 0,
"min": 0,
"max": 18446744073709552000
}
]
}
}
],
"outputs": [
{
"name": "STRING",
"type": "STRING",
"links": [
2
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "floatToText _O"
},
"widgets_values": [
0
]
},
{
"id": 14,
"type": "Chat_Message _O",
"pos": [
490.4199728939069,
1279.0299540944563
],
"size": {
"0": 389.8748779296875,
"1": 128.49383544921875
},
"flags": {},
"order": 1,
"mode": 0,
"outputs": [
{
"name": "OPENAI_CHAT_MESSAGES",
"type": "OPENAI_CHAT_MESSAGES",
"links": [
9
],
"slot_index": 0
}
],
"title": "Chat_Message _O (the init message)",
"properties": {
"Node name for S&R": "Chat_Message _O"
},
"widgets_values": [
"user",
"act as prompt generator ,i will give you text and you describe an image that match that text in details, answer with one response only"
]
},
{
"id": 16,
"type": "combine_chat_messages _O",
"pos": [
969.4199728939068,
1280.0299540944563
],
"size": {
"0": 367.79998779296875,
"1": 46
},
"flags": {},
"order": 37,
"mode": 0,
"inputs": [
{
"name": "message1",
"type": "OPENAI_CHAT_MESSAGES",
"link": 9
},
{
"name": "message2",
"type": "OPENAI_CHAT_MESSAGES",
"link": 10
}
],
"outputs": [
{
"name": "OPENAI_CHAT_MESSAGES",
"type": "OPENAI_CHAT_MESSAGES",
"links": [
12
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "combine_chat_messages _O"
}
},
{
"id": 19,
"type": "Reroute",
"pos": [
1315.4199728939052,
1349.0299540944563
],
"size": [
75,
26
],
"flags": {},
"order": 46,
"mode": 0,
"inputs": [
{
"name": "",
"type": "*",
"link": 12,
"pos": [
37.5,
0
]
}
],
"outputs": [
{
"name": "",
"type": "OPENAI_CHAT_MESSAGES",
"links": [
14
],
"slot_index": 0
}
],
"properties": {
"showOutputText": false,
"horizontal": true
}
},
{
"id": 11,
"type": "Note _O",
"pos": [
25.593636956406257,
1237.5227515553931
],
"size": {
"0": 400,
"1": 200
},
"flags": {},
"order": 2,
"mode": 0,
"properties": {
"Node name for S&R": "Note _O"
},
"widgets_values": [
"updates\n - no longer need to use String nodes, so this new one is using \n normal text parameter, so it is now compatible with other comfyUI \n nodes that receive text\n\n - support selecting the model in chat Completion node\n so if you have access to gpt-4 you can use it\n\n//note: using ChatGPT with revAnimated or mistoonAnime checkpoints produce stunning accurate results"
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 22,
"type": "Note _O",
"pos": [
-443.40636304359373,
731.5227515553921
],
"size": {
"0": 400,
"1": 200
},
"flags": {},
"order": 3,
"mode": 0,
"properties": {
"Node name for S&R": "Note _O"
},
"widgets_values": [
"Open AI package"
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 13,
"type": "load_openAI _O",
"pos": [
462.59363695640627,
1093.5227515553931
],
"size": {
"0": 315,
"1": 58
},
"flags": {},
"order": 4,
"mode": 0,
"outputs": [
{
"name": "OPENAI",
"type": "OPENAI",
"links": [
18
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "load_openAI _O"
}
},
{
"id": 20,
"type": "Reroute",
"pos": [
926.4199728939068,
1402.0299540944563
],
"size": [
75,
26
],
"flags": {},
"order": 54,
"mode": 0,
"inputs": [
{
"name": "",
"type": "*",
"link": 14,
"pos": [
37.5,
0
]
}
],
"outputs": [
{
"name": "",
"type": "OPENAI_CHAT_MESSAGES",
"links": [
15
],
"slot_index": 0
}
],
"properties": {
"showOutputText": false,
"horizontal": true
}
},
{
"id": 15,
"type": "Chat_Message _O",
"pos": [
494.4199728939069,
1448.0299540944563
],
"size": {
"0": 389.8748779296875,
"1": 128.49383544921875
},
"flags": {},
"order": 5,
"mode": 0,
"outputs": [
{
"name": "OPENAI_CHAT_MESSAGES",
"type": "OPENAI_CHAT_MESSAGES",
"links": [
10
],
"slot_index": 0
}
],
"title": "Chat_Message _O (your prompt)",
"properties": {
"Node name for S&R": "Chat_Message _O"
},
"widgets_values": [
"user",
"dancng girl"
]
},
{
"id": 27,
"type": "Reroute",
"pos": [
436.56722430992215,
1721.548828508518
],
"size": [
90.4,
26
],
"flags": {},
"order": 52,
"mode": 0,
"inputs": [
{
"name": "",
"type": "*",
"link": 24,
"pos": [
45.2,
0
]
}
],
"outputs": [
{
"name": "OPENAI",
"type": "OPENAI",
"links": [
22
],
"slot_index": 0
}
],
"properties": {
"showOutputText": true,
"horizontal": true
}
},
{
"id": 29,
"type": "Note _O",
"pos": [
29.593636956406243,
1675.5227515553931
],
"size": {
"0": 400,
"1": 200
},
"flags": {},
"order": 6,
"mode": 0,
"properties": {
"Node name for S&R": "Note _O"
},
"widgets_values": [
"updates\n - create image input now is text instead of string, so it can be\n Compatible with any other text nodes \n - also add fake seeds to force the node to generate new input each \n cycle if needed"
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 28,
"type": "Reroute",
"pos": [
865.3935602474226,
1635.0560310475812
],
"size": [
90.4,
26
],
"flags": {},
"order": 45,
"mode": 0,
"inputs": [
{
"name": "",
"type": "*",
"link": 23,
"pos": [
45.2,
0
]
}
],
"outputs": [
{
"name": "OPENAI",
"type": "OPENAI",
"links": [
24,
28
],
"slot_index": 0
}
],
"properties": {
"showOutputText": true,
"horizontal": true
}
},
{
"id": 30,
"type": "PreviewImage",
"pos": [
1054.5672243099211,
1784.548828508518
],
"size": {
"0": 237.004638671875,
"1": 215.01806640625
},
"flags": {},
"order": 69,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 25
}
],
"properties": {
"Node name for S&R": "PreviewImage"
}
},
{
"id": 33,
"type": "PreviewImage",
"pos": [
1763.5672243099211,
1780.548828508518
],
"size": {
"0": 237.004638671875,
"1": 215.01806640625
},
"flags": {},
"order": 75,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 30
}
],
"properties": {
"Node name for S&R": "PreviewImage"
}
},
{
"id": 32,
"type": "Reroute",
"pos": [
1271.5672243099211,
1701.548828508518
],
"size": [
90.4,
26
],
"flags": {},
"order": 53,
"mode": 0,
"inputs": [
{
"name": "",
"type": "*",
"link": 28,
"pos": [
45.2,
0
]
}
],
"outputs": [
{
"name": "OPENAI",
"type": "OPENAI",
"links": [
29
],
"slot_index": 0
}
],
"properties": {
"showOutputText": true,
"horizontal": true
}
},
{
"id": 26,
"type": "create image _O",
"pos": [
519.567224309923,
1784.548828508518
],
"size": {
"0": 400,
"1": 200
},
"flags": {},
"order": 60,
"mode": 0,
"inputs": [
{
"name": "openai",
"type": "OPENAI",
"link": 22
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
25,
26
],
"slot_index": 0
},
{
"name": "MASK",
"type": "MASK",
"links": null
}
],
"properties": {
"Node name for S&R": "create image _O"
},
"widgets_values": [
"dancng girl",
1,
"256x256",
0,
false
],
"color": "#232",
"bgcolor": "#353"
},
{
"id": 17,
"type": "Chat completion _O",
"pos": [
976.4199728939068,
1473.0299540944563
],
"size": {
"0": 393,
"1": 126
},
"flags": {},
"order": 61,
"mode": 0,
"inputs": [
{
"name": "openai",
"type": "OPENAI",
"link": 19
},
{
"name": "messages",
"type": "OPENAI_CHAT_MESSAGES",
"link": 15
}
],
"outputs": [
{
"name": "STRING",
"type": "STRING",
"links": [
16
],
"slot_index": 0
},
{
"name": "OPENAI_CHAT_COMPLETION",
"type": "OPENAI_CHAT_COMPLETION",
"links": null
}
],
"properties": {
"Node name for S&R": "Chat completion _O"
},
"widgets_values": [
"gpt-3.5-turbo",
0,
false
],
"color": "#232",
"bgcolor": "#353"
},
{
"id": 35,
"type": "Note _O",
"pos": [
-433.40636304359373,
2131.522751555393
],
"size": {
"0": 400,
"1": 200
},
"flags": {},
"order": 7,
"mode": 0,
"properties": {
"Node name for S&R": "Note _O"
},
"widgets_values": [
"updates\n - remove the no longer necessary string node \n - add new NSP node\n - enhanced the text2image node\n"
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 36,
"type": "Note _O",
"pos": [
26.593636956406257,
2131.522751555393
],
"size": {
"0": 400,
"1": 200
},
"flags": {},
"order": 8,
"mode": 0,
"properties": {
"Node name for S&R": "Note _O"
},
"widgets_values": [
"updates\n - this node will select a random value from the NSP file included \n with the Package based on the terminology you select"
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 39,
"type": "Concat Text _O",
"pos": [
978.2740901701846,
2513.6782103622136
],
"size": {
"0": 255.0090789794922,
"1": 78
},
"flags": {},
"order": 49,
"mode": 0,
"inputs": [
{
"name": "text1",
"type": "STRING",
"link": 36,
"widget": {
"name": "text1",
"config": [
"STRING",
{
"multiline": true
}
]
}
},
{
"name": "text2",
"type": "STRING",
"link": 37,
"widget": {
"name": "text2",
"config": [
"STRING",
{
"multiline": true
}
]
}
}
],
"outputs": [
{
"name": "STRING",
"type": "STRING",
"links": [
38
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "Concat Text _O"
},
"widgets_values": [
"",
" at ",
""
]
},
{
"id": 43,
"type": "Note _O",
"pos": [
976.2740901701849,
2254.6782103622136
],
"size": {
"0": 261.47479248046875,
"1": 202.4876708984375
},
"flags": {},
"order": 9,
"mode": 0,
"properties": {
"Node name for S&R": "Note _O"
},
"widgets_values": [
" - combine two text inputs to one text also \n it will add the separator in the middle"
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 46,
"type": "Note _O",
"pos": [
1286.2740901701827,
2253.6782103622136
],
"size": {
"0": 261.47479248046875,
"1": 202.4876708984375
},
"flags": {},
"order": 10,
"mode": 0,
"properties": {
"Node name for S&R": "Note _O"
},
"widgets_values": [
" - replaces all occurrences of (old) with \n the new) value "
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 44,
"type": "Replace Text _O",
"pos": [
1268.2740901701827,
2513.6782103622136
],
"size": {
"0": 289.0200500488281,
"1": 83.88400268554688
},
"flags": {},
"order": 58,
"mode": 0,
"inputs": [
{
"name": "text",
"type": "STRING",
"link": 38,
"widget": {
"name": "text",
"config": [
"STRING",
{
"multiline": true
}
]
}
}
],
"outputs": [
{
"name": "STRING",
"type": "STRING",
"links": [
39,
41
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "Replace Text _O"
},
"widgets_values": [
"",
"Wizard",
"Witch"
]
},
{
"id": 52,
"type": "Note _O",
"pos": [
46.593636956406215,
3032.522751555393
],
"size": {
"0": 261.47479248046875,
"1": 202.4876708984375
},
"flags": {},
"order": 12,
"mode": 0,
"properties": {
"Node name for S&R": "Note _O"
},
"widgets_values": [
" - use text instead of String\n - allow transparent text and BG\n - allow you to set image size\n - expand: this option will resize the \n result image to fit the text if \n the image don't fit \n - x,y is to move the text around the image \n it points to text center "
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 51,
"type": "PreviewImage",
"pos": [
834.5936369564066,
3057.522751555393
],
"size": {
"0": 1037.8057861328125,
"1": 210.6422882080078
},
"flags": {},
"order": 78,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 45
}
],
"properties": {
"Node name for S&R": "PreviewImage"
}
},
{
"id": 42,
"type": "Note _O",
"pos": [
659.2618648283882,
2254.662382237214
],
"size": {
"0": 261.47479248046875,
"1": 202.4876708984375
},
"flags": {},
"order": 13,
"mode": 0,
"properties": {
"Node name for S&R": "Note _O"
},
"widgets_values": [
" - trim removes any extra spaces after or \n before the text if found"
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 50,
"type": "Reroute",
"pos": [
172.59363695640633,
2926.522751555393
],
"size": [
75,
26
],
"flags": {},
"order": 74,
"mode": 0,
"inputs": [
{
"name": "",
"type": "*",
"link": 46,
"pos": [
37.5,
0
]
}
],
"outputs": [
{
"name": "",
"type": "STRING",
"links": [
44
]
}
],
"properties": {
"showOutputText": false,
"horizontal": true
}
},
{
"id": 53,
"type": "Note _O",
"pos": [
78.44912461814945,
4266.738520461633
],
"size": {
"0": 400,
"1": 200
},
"flags": {},
"order": 14,
"mode": 0,
"properties": {
"Node name for S&R": "Note _O"
},
"widgets_values": [
""
]
},
{
"id": 56,
"type": "int _O",
"pos": [
1552.4491246181487,
4286.738520461633
],
"size": {
"0": 315,
"1": 58
},
"flags": {},
"order": 15,
"mode": 0,
"outputs": [
{
"name": "INT",
"type": "INT",
"links": null
}
],
"properties": {
"Node name for S&R": "int _O"
},
"widgets_values": [
0
]
},
{
"id": 55,
"type": "seed _O",
"pos": [
1091.4491246181487,
4277.738520461633
],
"size": {
"0": 315,
"1": 82
},
"flags": {},
"order": 16,
"mode": 0,
"outputs": [
{
"name": "INT",
"type": "INT",
"links": null
}
],
"properties": {
"Node name for S&R": "seed _O"
},
"widgets_values": [
555830498635480,
true
]
},
{
"id": 54,
"type": "Text _O",
"pos": [
576.4491246181497,
4268.738520461633
],
"size": {
"0": 400,
"1": 200
},
"flags": {},
"order": 17,
"mode": 0,
"outputs": [
{
"name": "STRING",
1201
+ "type": "STRING",
1202
+ "links": null
1203
+ }
1204
+ ],
1205
+ "properties": {
1206
+ "Node name for S&R": "Text _O"
1207
+ },
1208
+ "widgets_values": [
1209
+ ""
1210
+ ]
1211
+ },
1212
+ {
1213
+ "id": 58,
1214
+ "type": "Note _O",
1215
+ "pos": [
1216
+ 158.4491246181495,
1217
+ 4100.738520461642
1218
+ ],
1219
+ "size": {
1220
+ "0": 229.99794006347656,
1221
+ "1": 103.36981201171875
1222
+ },
1223
+ "flags": {},
1224
+ "order": 18,
1225
+ "mode": 0,
1226
+ "properties": {
1227
+ "Node name for S&R": "Note _O"
1228
+ },
1229
+ "widgets_values": [
1230
+ "an empty node that can be used to write notes X) \n"
1231
+ ],
1232
+ "color": "#432",
1233
+ "bgcolor": "#653"
1234
+ },
1235
+ {
1236
+ "id": 59,
1237
+ "type": "Note _O",
1238
+ "pos": [
1239
+ 657.4491246181497,
1240
+ 4099.738520461642
1241
+ ],
1242
+ "size": {
1243
+ "0": 229.99794006347656,
1244
+ "1": 103.36981201171875
1245
+ },
1246
+ "flags": {},
1247
+ "order": 19,
1248
+ "mode": 0,
1249
+ "properties": {
1250
+ "Node name for S&R": "Note _O"
1251
+ },
1252
+ "widgets_values": [
1253
+ "text input node"
1254
+ ],
1255
+ "color": "#432",
1256
+ "bgcolor": "#653"
1257
+ },
1258
+ {
1259
+ "id": 60,
1260
+ "type": "Note _O",
1261
+ "pos": [
1262
+ 1133.4491246181487,
1263
+ 4100.738520461642
1264
+ ],
1265
+ "size": {
1266
+ "0": 229.99794006347656,
1267
+ "1": 103.36981201171875
1268
+ },
1269
+ "flags": {},
1270
+ "order": 20,
1271
+ "mode": 0,
1272
+ "properties": {
1273
+ "Node name for S&R": "Note _O"
1274
+ },
1275
+ "widgets_values": [
1276
+ "seed input node"
1277
+ ],
1278
+ "color": "#432",
1279
+ "bgcolor": "#653"
1280
+ },
1281
+ {
1282
+ "id": 61,
1283
+ "type": "Note _O",
1284
+ "pos": [
1285
+ 1596.4491246181487,
1286
+ 4101.738520461642
1287
+ ],
1288
+ "size": {
1289
+ "0": 229.99794006347656,
1290
+ "1": 103.36981201171875
1291
+ },
1292
+ "flags": {},
1293
+ "order": 21,
1294
+ "mode": 0,
1295
+ "properties": {
1296
+ "Node name for S&R": "Note _O"
1297
+ },
1298
+ "widgets_values": [
1299
+ "number input nodes"
1300
+ ],
1301
+ "color": "#432",
1302
+ "bgcolor": "#653"
1303
+ },
1304
+ {
1305
+ "id": 57,
1306
+ "type": "float _O",
1307
+ "pos": [
1308
+ 1556.4491246181487,
1309
+ 4417.738520461633
1310
+ ],
1311
+ "size": {
1312
+ "0": 315,
1313
+ "1": 58
1314
+ },
1315
+ "flags": {},
1316
+ "order": 22,
1317
+ "mode": 0,
1318
+ "outputs": [
1319
+ {
1320
+ "name": "FLOAT",
1321
+ "type": "FLOAT",
1322
+ "links": null
1323
+ }
1324
+ ],
1325
+ "properties": {
1326
+ "Node name for S&R": "float _O"
1327
+ },
1328
+ "widgets_values": [
1329
+ 0
1330
+ ]
1331
+ },
1332
+ {
1333
+ "id": 31,
1334
+ "type": "variation_image _O",
1335
+ "pos": [
1336
+ 1384.5672243099211,
1337
+ 1773.548828508518
1338
+ ],
1339
+ "size": {
1340
+ "0": 315,
1341
+ "1": 150
1342
+ },
1343
+ "flags": {},
1344
+ "order": 70,
1345
+ "mode": 0,
1346
+ "inputs": [
1347
+ {
1348
+ "name": "openai",
1349
+ "type": "OPENAI",
1350
+ "link": 29
1351
+ },
1352
+ {
1353
+ "name": "image",
1354
+ "type": "IMAGE",
1355
+ "link": 26
1356
+ }
1357
+ ],
1358
+ "outputs": [
1359
+ {
1360
+ "name": "IMAGE",
1361
+ "type": "IMAGE",
1362
+ "links": [
1363
+ 30
1364
+ ],
1365
+ "slot_index": 0
1366
+ },
1367
+ {
1368
+ "name": "MASK",
1369
+ "type": "MASK",
1370
+ "links": null
1371
+ }
1372
+ ],
1373
+ "properties": {
1374
+ "Node name for S&R": "variation_image _O"
1375
+ },
1376
+ "widgets_values": [
1377
+ 1,
1378
+ "256x256",
1379
+ 0,
1380
+ false
1381
+ ],
1382
+ "color": "#232",
1383
+ "bgcolor": "#353"
1384
+ },
1385
+ {
1386
+ "id": 9,
1387
+ "type": "Note _O",
1388
+ "pos": [
1389
+ -438.375,
1390
+ 36.28409090909091
1391
+ ],
1392
+ "size": {
1393
+ "0": 400,
1394
+ "1": 200
1395
+ },
1396
+ "flags": {},
1397
+ "order": 23,
1398
+ "mode": 0,
1399
+ "properties": {
1400
+ "Node name for S&R": "Note _O"
1401
+ },
1402
+ "widgets_values": [
1403
+ "In this example, you can write your equation to be applied on the input "
1404
+ ],
1405
+ "color": "#432",
1406
+ "bgcolor": "#653"
1407
+ },
1408
+ {
1409
+ "id": 10,
1410
+ "type": "Note _O",
1411
+ "pos": [
1412
+ 27,
1413
+ 732
1414
+ ],
1415
+ "size": {
1416
+ "0": 400,
1417
+ "1": 200
1418
+ },
1419
+ "flags": {},
1420
+ "order": 24,
1421
+ "mode": 0,
1422
+ "properties": {
1423
+ "Node name for S&R": "Note _O"
1424
+ },
1425
+ "widgets_values": [
1426
+ "ChatGPT updates\n - support selecting the model \n so if you have access to gpt-4 you can use it\n\n - add a seed input (it is not a real seed) but it is used to make \n the node generate new input \n\n\n//note: using ChatGPT with revAnimated or mistoonAnime checkpoints produce stunning accurate results"
1427
+ ],
1428
+ "color": "#432",
1429
+ "bgcolor": "#653"
1430
+ },
1431
+ {
1432
+ "id": 23,
1433
+ "type": "Note _O",
1434
+ "pos": [
1435
+ -428,
1436
+ -452
1437
+ ],
1438
+ "size": {
1439
+ "0": 400,
1440
+ "1": 200
1441
+ },
1442
+ "flags": {},
1443
+ "order": 25,
1444
+ "mode": 0,
1445
+ "properties": {
1446
+ "Node name for S&R": "Note _O"
1447
+ },
1448
+ "widgets_values": [
1449
+ "Thanks for using my tools \n\n- kindly notice that the green colored nodes are the new updates in \n this version"
1450
+ ],
1451
+ "color": "#432",
1452
+ "bgcolor": "#653"
1453
+ },
1454
+ {
1455
+ "id": 67,
1456
+ "type": "Note _O",
1457
+ "pos": [
1458
+ 1389,
1459
+ 90
1460
+ ],
1461
+ "size": {
1462
+ "0": 400,
1463
+ "1": 200
1464
+ },
1465
+ "flags": {},
1466
+ "order": 26,
1467
+ "mode": 0,
1468
+ "properties": {
1469
+ "Node name for S&R": "Note _O"
1470
+ },
1471
+ "widgets_values": [
1472
+ "Upscale image using factors "
1473
+ ],
1474
+ "color": "#432",
1475
+ "bgcolor": "#653"
1476
+ },
1477
+ {
1478
+ "id": 64,
1479
+ "type": "Note _O",
1480
+ "pos": [
1481
+ -325,
1482
+ 3465
1483
+ ],
1484
+ "size": {
1485
+ "0": 305.0923767089844,
1486
+ "1": 101.80223083496094
1487
+ },
1488
+ "flags": {},
1489
+ "order": 27,
1490
+ "mode": 0,
1491
+ "properties": {
1492
+ "Node name for S&R": "Note _O"
1493
+ },
1494
+ "widgets_values": [
1495
+ "latent tools \n - new node added SelectLatentFromBatch_O\n\nit is useful if you want to select an image to continue working on after generating multiple images "
1496
+ ],
1497
+ "color": "#432",
1498
+ "bgcolor": "#653"
1499
+ },
1500
+ {
1501
+ "id": 65,
1502
+ "type": "Note _O",
1503
+ "pos": [
1504
+ -288,
1505
+ 4046
1506
+ ],
1507
+ "size": {
1508
+ "0": 229.99794006347656,
1509
+ "1": 103.36981201171875
1510
+ },
1511
+ "flags": {},
1512
+ "order": 28,
1513
+ "mode": 0,
1514
+ "properties": {
1515
+ "Node name for S&R": "Note _O"
1516
+ },
1517
+ "widgets_values": [
1518
+ "utility nodes\n\n- the input nodes good if you want to \n reroute after them as currently the \n primitive node dost work with \n reroute nodes "
1519
+ ],
1520
+ "color": "#432",
1521
+ "bgcolor": "#653"
1522
+ },
1523
+ {
1524
+ "id": 69,
1525
+ "type": "CheckpointLoaderSimple",
1526
+ "pos": [
1527
+ 38,
1528
+ 3519
1529
+ ],
1530
+ "size": {
1531
+ "0": 210,
1532
+ "1": 98
1533
+ },
1534
+ "flags": {},
1535
+ "order": 29,
1536
+ "mode": 0,
1537
+ "outputs": [
1538
+ {
1539
+ "name": "MODEL",
1540
+ "type": "MODEL",
1541
+ "links": [
1542
+ 47
1543
+ ],
1544
+ "slot_index": 0
1545
+ },
1546
+ {
1547
+ "name": "CLIP",
1548
+ "type": "CLIP",
1549
+ "links": [
1550
+ 53,
1551
+ 54
1552
+ ],
1553
+ "slot_index": 1
1554
+ },
1555
+ {
1556
+ "name": "VAE",
1557
+ "type": "VAE",
1558
+ "links": [
1559
+ 59
1560
+ ],
1561
+ "slot_index": 2
1562
+ }
1563
+ ],
1564
+ "properties": {
1565
+ "Node name for S&R": "CheckpointLoaderSimple"
1566
+ },
1567
+ "widgets_values": [
1568
+ "sd-v1-4.ckpt"
1569
+ ]
1570
+ },
1571
+ {
1572
+ "id": 70,
1573
+ "type": "CLIPTextEncode",
1574
+ "pos": [
1575
+ 43,
1576
+ 3657
1577
+ ],
1578
+ "size": {
1579
+ "0": 210,
1580
+ "1": 76.00001525878906
1581
+ },
1582
+ "flags": {},
1583
+ "order": 38,
1584
+ "mode": 0,
1585
+ "inputs": [
1586
+ {
1587
+ "name": "clip",
1588
+ "type": "CLIP",
1589
+ "link": 53
1590
+ }
1591
+ ],
1592
+ "outputs": [
1593
+ {
1594
+ "name": "CONDITIONING",
1595
+ "type": "CONDITIONING",
1596
+ "links": [
1597
+ 48
1598
+ ]
1599
+ }
1600
+ ],
1601
+ "properties": {
1602
+ "Node name for S&R": "CLIPTextEncode"
1603
+ },
1604
+ "widgets_values": [
1605
+ "cute girl "
1606
+ ]
1607
+ },
1608
+ {
1609
+ "id": 71,
1610
+ "type": "CLIPTextEncode",
1611
+ "pos": [
1612
+ 48,
1613
+ 3770
1614
+ ],
1615
+ "size": {
1616
+ "0": 210,
1617
+ "1": 76.00001525878906
1618
+ },
1619
+ "flags": {},
1620
+ "order": 39,
1621
+ "mode": 0,
1622
+ "inputs": [
1623
+ {
1624
+ "name": "clip",
1625
+ "type": "CLIP",
1626
+ "link": 54
1627
+ }
1628
+ ],
1629
+ "outputs": [
1630
+ {
1631
+ "name": "CONDITIONING",
1632
+ "type": "CONDITIONING",
1633
+ "links": [
1634
+ 49
1635
+ ],
1636
+ "slot_index": 0
1637
+ }
1638
+ ],
1639
+ "properties": {
1640
+ "Node name for S&R": "CLIPTextEncode"
1641
+ },
1642
+ "widgets_values": [
1643
+ "bad hands "
1644
+ ]
1645
+ },
1646
+ {
1647
+ "id": 72,
1648
+ "type": "EmptyLatentImage",
1649
+ "pos": [
1650
+ 49,
1651
+ 3883
1652
+ ],
1653
+ "size": {
1654
+ "0": 210,
1655
+ "1": 106
1656
+ },
1657
+ "flags": {},
1658
+ "order": 30,
1659
+ "mode": 0,
1660
+ "outputs": [
1661
+ {
1662
+ "name": "LATENT",
1663
+ "type": "LATENT",
1664
+ "links": [
1665
+ 50
1666
+ ]
1667
+ }
1668
+ ],
1669
+ "properties": {
1670
+ "Node name for S&R": "EmptyLatentImage"
1671
+ },
1672
+ "widgets_values": [
1673
+ 512,
1674
+ 512,
1675
+ 4
1676
+ ],
1677
+ "color": "#323",
1678
+ "bgcolor": "#535"
1679
+ },
1680
+ {
1681
+ "id": 73,
1682
+ "type": "VAEDecode",
1683
+ "pos": [
1684
+ 739.204545454546,
1685
+ 3541.295454545454
1686
+ ],
1687
+ "size": {
1688
+ "0": 140,
1689
+ "1": 46
1690
+ },
1691
+ "flags": {},
1692
+ "order": 56,
1693
+ "mode": 0,
1694
+ "inputs": [
1695
+ {
1696
+ "name": "samples",
1697
+ "type": "LATENT",
1698
+ "link": 51
1699
+ },
1700
+ {
1701
+ "name": "vae",
1702
+ "type": "VAE",
1703
+ "link": 69
1704
+ }
1705
+ ],
1706
+ "outputs": [
1707
+ {
1708
+ "name": "IMAGE",
1709
+ "type": "IMAGE",
1710
+ "links": [
1711
+ 55
1712
+ ],
1713
+ "slot_index": 0
1714
+ }
1715
+ ],
1716
+ "properties": {
1717
+ "Node name for S&R": "VAEDecode"
1718
+ }
1719
+ },
1720
+ {
1721
+ "id": 77,
1722
+ "type": "Reroute",
1723
+ "pos": [
1724
+ 254,
1725
+ 3464
1726
+ ],
1727
+ "size": [
1728
+ 75,
1729
+ 26
1730
+ ],
1731
+ "flags": {},
1732
+ "order": 40,
1733
+ "mode": 0,
1734
+ "inputs": [
1735
+ {
1736
+ "name": "",
1737
+ "type": "*",
1738
+ "link": 59
1739
+ }
1740
+ ],
1741
+ "outputs": [
1742
+ {
1743
+ "name": "VAE",
1744
+ "type": "VAE",
1745
+ "links": [
1746
+ 62
1747
+ ],
1748
+ "slot_index": 0
1749
+ }
1750
+ ],
1751
+ "properties": {
1752
+ "showOutputText": true,
1753
+ "horizontal": false
1754
+ }
1755
+ },
1756
+ {
1757
+ "id": 78,
1758
+ "type": "Reroute",
1759
+ "pos": [
1760
+ 657.204545454546,
1761
+ 3467.295454545454
1762
+ ],
1763
+ "size": [
1764
+ 75,
1765
+ 26
1766
+ ],
1767
+ "flags": {},
1768
+ "order": 48,
1769
+ "mode": 0,
1770
+ "inputs": [
1771
+ {
1772
+ "name": "",
1773
+ "type": "*",
1774
+ "link": 62
1775
+ }
1776
+ ],
1777
+ "outputs": [
1778
+ {
1779
+ "name": "VAE",
1780
+ "type": "VAE",
1781
+ "links": [
1782
+ 69,
1783
+ 78
1784
+ ],
1785
+ "slot_index": 0
1786
+ }
1787
+ ],
1788
+ "properties": {
1789
+ "showOutputText": true,
1790
+ "horizontal": false
1791
+ }
1792
+ },
1793
+ {
1794
+ "id": 74,
1795
+ "type": "PreviewImage",
1796
+ "pos": [
1797
+ 895,
1798
+ 3458
1799
+ ],
1800
+ "size": {
1801
+ "0": 210,
1802
+ "1": 250
1803
+ },
1804
+ "flags": {},
1805
+ "order": 63,
1806
+ "mode": 0,
1807
+ "inputs": [
1808
+ {
1809
+ "name": "images",
1810
+ "type": "IMAGE",
1811
+ "link": 55
1812
+ }
1813
+ ],
1814
+ "properties": {
1815
+ "Node name for S&R": "PreviewImage"
1816
+ }
1817
+ },
1818
+ {
1819
+ "id": 76,
1820
+ "type": "PreviewImage",
1821
+ "pos": [
1822
+ 1382,
1823
+ 3560.75
1824
+ ],
1825
+ "size": {
1826
+ "0": 210,
1827
+ "1": 250
1828
+ },
1829
+ "flags": {},
1830
+ "order": 72,
1831
+ "mode": 0,
1832
+ "inputs": [
1833
+ {
1834
+ "name": "images",
1835
+ "type": "IMAGE",
1836
+ "link": 58
1837
+ }
1838
+ ],
1839
+ "properties": {
1840
+ "Node name for S&R": "PreviewImage"
1841
+ }
1842
+ },
1843
+ {
1844
+ "id": 80,
1845
+ "type": "VAEDecode",
1846
+ "pos": [
1847
+ 1640,
1848
+ 3850
1849
+ ],
1850
+ "size": {
1851
+ "0": 140,
1852
+ "1": 46
1853
+ },
1854
+ "flags": {},
1855
+ "order": 73,
1856
+ "mode": 0,
1857
+ "inputs": [
1858
+ {
1859
+ "name": "samples",
1860
+ "type": "LATENT",
1861
+ "link": 66
1862
+ },
1863
+ {
1864
+ "name": "vae",
1865
+ "type": "VAE",
1866
+ "link": 77
1867
+ }
1868
+ ],
1869
+ "outputs": [
1870
+ {
1871
+ "name": "IMAGE",
1872
+ "type": "IMAGE",
1873
+ "links": [
1874
+ 68
1875
+ ],
1876
+ "slot_index": 0
1877
+ }
1878
+ ],
1879
+ "properties": {
1880
+ "Node name for S&R": "VAEDecode"
1881
+ }
1882
+ },
1883
+ {
1884
+ "id": 81,
1885
+ "type": "PreviewImage",
1886
+ "pos": [
1887
+ 1797,
1888
+ 3726
1889
+ ],
1890
+ "size": {
1891
+ "0": 210,
1892
+ "1": 250
1893
+ },
1894
+ "flags": {},
1895
+ "order": 76,
1896
+ "mode": 0,
1897
+ "inputs": [
1898
+ {
1899
+ "name": "images",
1900
+ "type": "IMAGE",
1901
+ "link": 68
1902
+ }
1903
+ ],
1904
+ "properties": {
1905
+ "Node name for S&R": "PreviewImage"
1906
+ }
1907
+ },
1908
+ {
1909
+ "id": 62,
1910
+ "type": "LatentUpscaleFactor _O",
1911
+ "pos": [
1912
+ 838,
1913
+ 3848
1914
+ ],
1915
+ "size": {
1916
+ "0": 315,
1917
+ "1": 130
1918
+ },
1919
+ "flags": {},
1920
+ "order": 62,
1921
+ "mode": 0,
1922
+ "inputs": [
1923
+ {
1924
+ "name": "samples",
1925
+ "type": "LATENT",
1926
+ "link": 65
1927
+ }
1928
+ ],
1929
+ "outputs": [
1930
+ {
1931
+ "name": "LATENT",
1932
+ "type": "LATENT",
1933
+ "links": [
1934
+ 66
1935
+ ],
1936
+ "slot_index": 0
1937
+ }
1938
+ ],
1939
+ "properties": {
1940
+ "Node name for S&R": "LatentUpscaleFactor _O"
1941
+ },
1942
+ "widgets_values": [
1943
+ "bilinear",
1944
+ 1.25,
1945
+ 1.25,
1946
+ "disabled"
1947
+ ],
1948
+ "color": "#232",
1949
+ "bgcolor": "#353"
1950
+ },
1951
+ {
1952
+ "id": 68,
1953
+ "type": "KSampler",
1954
+ "pos": [
1955
+ 284,
1956
+ 3540
1957
+ ],
1958
+ "size": {
1959
+ "0": 210,
1960
+ "1": 430.0031433105469
1961
+ },
1962
+ "flags": {},
1963
+ "order": 47,
1964
+ "mode": 0,
1965
+ "inputs": [
1966
+ {
1967
+ "name": "model",
1968
+ "type": "MODEL",
1969
+ "link": 47
1970
+ },
1971
+ {
1972
+ "name": "positive",
1973
+ "type": "CONDITIONING",
1974
+ "link": 48,
1975
+ "slot_index": 1
1976
+ },
1977
+ {
1978
+ "name": "negative",
1979
+ "type": "CONDITIONING",
1980
+ "link": 49
1981
+ },
1982
+ {
1983
+ "name": "latent_image",
1984
+ "type": "LATENT",
1985
+ "link": 50,
1986
+ "slot_index": 3
1987
+ }
1988
+ ],
1989
+ "outputs": [
1990
+ {
1991
+ "name": "LATENT",
1992
+ "type": "LATENT",
1993
+ "links": [
1994
+ 51,
1995
+ 56
1996
+ ],
1997
+ "slot_index": 0
1998
+ }
1999
+ ],
2000
+ "properties": {
2001
+ "Node name for S&R": "KSampler"
2002
+ },
2003
+ "widgets_values": [
2004
+ 1020066313120726,
2005
+ false,
2006
+ 20,
2007
+ 8,
2008
+ "euler",
2009
+ "karras",
2010
+ 1
2011
+ ]
2012
+ },
2013
+ {
2014
+ "id": 66,
2015
+ "type": "selectLatentFromBatch _O",
2016
+ "pos": [
2017
+ 571,
2018
+ 3721
2019
+ ],
2020
+ "size": {
2021
+ "0": 210,
2022
+ "1": 58
2023
+ },
2024
+ "flags": {},
2025
+ "order": 55,
2026
+ "mode": 0,
2027
+ "inputs": [
2028
+ {
2029
+ "name": "samples",
2030
+ "type": "LATENT",
2031
+ "link": 56
2032
+ }
2033
+ ],
2034
+ "outputs": [
2035
+ {
2036
+ "name": "LATENT",
2037
+ "type": "LATENT",
2038
+ "links": [
2039
+ 57,
2040
+ 65
2041
+ ],
2042
+ "slot_index": 0
2043
+ }
2044
+ ],
2045
+ "properties": {
2046
+ "Node name for S&R": "selectLatentFromBatch _O"
2047
+ },
2048
+ "widgets_values": [
2049
+ 2
2050
+ ],
2051
+ "color": "#232",
2052
+ "bgcolor": "#353"
2053
+ },
2054
+ {
2055
+ "id": 75,
2056
+ "type": "VAEDecode",
2057
+ "pos": [
2058
+ 1203,
2059
+ 3721
2060
+ ],
2061
+ "size": {
2062
+ "0": 140,
2063
+ "1": 46
2064
+ },
2065
+ "flags": {},
2066
+ "order": 64,
2067
+ "mode": 0,
2068
+ "inputs": [
2069
+ {
2070
+ "name": "samples",
2071
+ "type": "LATENT",
2072
+ "link": 57
2073
+ },
2074
+ {
2075
+ "name": "vae",
2076
+ "type": "VAE",
2077
+ "link": 75
2078
+ }
2079
+ ],
2080
+ "outputs": [
2081
+ {
2082
+ "name": "IMAGE",
2083
+ "type": "IMAGE",
2084
+ "links": [
2085
+ 58
2086
+ ],
2087
+ "slot_index": 0
2088
+ }
2089
+ ],
2090
+ "properties": {
2091
+ "Node name for S&R": "VAEDecode"
2092
+ }
2093
+ },
2094
+ {
2095
+ "id": 83,
2096
+ "type": "Reroute",
2097
+ "pos": [
2098
+ 1129,
2099
+ 3471
2100
+ ],
2101
+ "size": [
2102
+ 75,
2103
+ 26
2104
+ ],
2105
+ "flags": {},
2106
+ "order": 57,
2107
+ "mode": 0,
2108
+ "inputs": [
2109
+ {
2110
+ "name": "",
2111
+ "type": "*",
2112
+ "link": 78
2113
+ }
2114
+ ],
2115
+ "outputs": [
2116
+ {
2117
+ "name": "VAE",
2118
+ "type": "VAE",
2119
+ "links": [
2120
+ 75,
2121
+ 76
2122
+ ],
2123
+ "slot_index": 0
2124
+ }
2125
+ ],
2126
+ "properties": {
2127
+ "showOutputText": true,
2128
+ "horizontal": false
2129
+ }
2130
+ },
2131
+ {
2132
+ "id": 84,
2133
+ "type": "Reroute",
2134
+ "pos": [
2135
+ 1553,
2136
+ 3475
2137
+ ],
2138
+ "size": [
2139
+ 75,
2140
+ 26
2141
+ ],
2142
+ "flags": {},
2143
+ "order": 65,
2144
+ "mode": 0,
2145
+ "inputs": [
2146
+ {
2147
+ "name": "",
2148
+ "type": "*",
2149
+ "link": 76
2150
+ }
2151
+ ],
2152
+ "outputs": [
2153
+ {
2154
+ "name": "VAE",
2155
+ "type": "VAE",
2156
+ "links": [
2157
+ 77
2158
+ ],
2159
+ "slot_index": 0
2160
+ }
2161
+ ],
2162
+ "properties": {
2163
+ "showOutputText": true,
2164
+ "horizontal": false
2165
+ }
2166
+ },
2167
+ {
2168
+ "id": 48,
2169
+ "type": "Text2Image _O",
2170
+ "pos": [
2171
+ 355.59363695640627,
2172
+ 2945.522751555393
2173
+ ],
2174
+ "size": {
2175
+ "0": 400,
2176
+ "1": 436.00006103515625
2177
+ },
2178
+ "flags": {},
2179
+ "order": 77,
2180
+ "mode": 0,
2181
+ "inputs": [
2182
+ {
2183
+ "name": "text",
2184
+ "type": "STRING",
2185
+ "link": 44,
2186
+ "widget": {
2187
+ "name": "text",
2188
+ "config": [
2189
+ "STRING",
2190
+ {
2191
+ "multiline": true
2192
+ }
2193
+ ]
2194
+ }
2195
+ }
2196
+ ],
2197
+ "outputs": [
2198
+ {
2199
+ "name": "IMAGE",
2200
+ "type": "IMAGE",
2201
+ "links": [
2202
+ 45
2203
+ ],
2204
+ "slot_index": 0
2205
+ }
2206
+ ],
2207
+ "properties": {
2208
+ "Node name for S&R": "Text2Image _O"
2209
+ },
2210
+ "widgets_values": [
2211
+ "",
2212
+ "CALIBRI.TTF",
2213
+ 36,
2214
+ 0,
2215
+ 0,
2216
+ 0,
2217
+ 255,
2218
+ 255,
2219
+ 255,
2220
+ 255,
2221
+ 255,
2222
+ 512,
2223
+ 256,
2224
+ "true",
2225
+ 256,
2226
+ 128
2227
+ ],
2228
+ "color": "#232",
2229
+ "bgcolor": "#353"
2230
+ },
2231
+ {
2232
+ "id": 34,
2233
+ "type": "RandomNSP _O",
2234
+ "pos": [
2235
+ 61.593636956406215,
2236
+ 2429.522751555393
2237
+ ],
2238
+ "size": {
2239
+ "0": 315,
2240
+ "1": 106
2241
+ },
2242
+ "flags": {},
2243
+ "order": 31,
2244
+ "mode": 0,
2245
+ "outputs": [
2246
+ {
2247
+ "name": "STRING",
2248
+ "type": "STRING",
2249
+ "links": [
2250
+ 35
2251
+ ],
2252
+ "slot_index": 0
2253
+ }
2254
+ ],
2255
+ "title": "RandomNSP _O (Creature)",
2256
+ "properties": {
2257
+ "Node name for S&R": "RandomNSP _O"
2258
+ },
2259
+ "widgets_values": [
2260
+ "fantasy-creature",
2261
+ 864738385711296,
2262
+ false
2263
+ ],
2264
+ "color": "#232",
2265
+ "bgcolor": "#353"
2266
+ },
2267
+ {
2268
+ "id": 37,
2269
+ "type": "RandomNSP _O",
2270
+ "pos": [
2271
+ 61.602386956406235,
2272
+ 2639.859001555393
2273
+ ],
2274
+ "size": {
2275
+ "0": 315,
2276
+ "1": 106
2277
+ },
2278
+ "flags": {},
2279
+ "order": 32,
2280
+ "mode": 0,
2281
+ "outputs": [
2282
+ {
2283
+ "name": "STRING",
2284
+ "type": "STRING",
2285
+ "links": [
2286
+ 37
2287
+ ],
2288
+ "slot_index": 0
2289
+ }
2290
+ ],
2291
+ "title": "RandomNSP _O (Location)",
2292
+ "properties": {
2293
+ "Node name for S&R": "RandomNSP _O"
2294
+ },
2295
+ "widgets_values": [
2296
+ "pop-location",
2297
+ 837829450938436,
2298
+ false
2299
+ ],
2300
+ "color": "#232",
2301
+ "bgcolor": "#353"
2302
+ },
2303
+ {
2304
+ "id": 8,
2305
+ "type": "ChatGPT Simple _O",
2306
+ "pos": [
2307
+ 498.8685881282818,
2308
+ 788.8676148366416
2309
+ ],
2310
+ "size": {
2311
+ "0": 400,
2312
+ "1": 200
2313
+ },
2314
+ "flags": {},
2315
+ "order": 33,
2316
+ "mode": 0,
2317
+ "outputs": [
2318
+ {
2319
+ "name": "STRING",
2320
+ "type": "STRING",
2321
+ "links": [
2322
+ 8
2323
+ ],
2324
+ "slot_index": 0
2325
+ }
2326
+ ],
2327
+ "properties": {
2328
+ "Node name for S&R": "ChatGPT Simple _O"
2329
+ },
2330
+ "widgets_values": [
2331
+ "dancng girl",
2332
+ "gpt-3.5-turbo",
2333
+ 1122472901949655,
2334
+ 693269650780473
2335
+ ],
2336
+ "color": "#232",
2337
+ "bgcolor": "#353"
2338
+ },
2339
+ {
2340
+ "id": 4,
2341
+ "type": "Debug Text _O",
2342
+ "pos": [
2343
+ 912,
2344
+ 476
2345
+ ],
2346
+ "size": [
2347
+ 210,
2348
+ 58
2349
+ ],
2350
+ "flags": {
2351
+ "collapsed": false
2352
+ },
2353
+ "order": 68,
2354
+ "mode": 0,
2355
+ "inputs": [
2356
+ {
2357
+ "name": "text",
2358
+ "type": "STRING",
2359
+ "link": 2,
2360
+ "widget": {
2361
+ "name": "text",
2362
+ "config": [
2363
+ "STRING",
2364
+ {
2365
+ "multiline": true
2366
+ }
2367
+ ]
2368
+ }
2369
+ }
2370
+ ],
2371
+ "properties": {
2372
+ "Node name for S&R": "Debug Text _O"
2373
+ },
2374
+ "widgets_values": [
2375
+ "",
2376
+ "Numbers"
2377
+ ]
2378
+ },
2379
+ {
2380
+ "id": 85,
2381
+ "type": "LoadImage",
2382
+ "pos": [
2383
+ 1184,
2384
+ 341
2385
+ ],
2386
+ "size": [
2387
+ 218.67405007102252,
2388
+ 206.55003703724253
2389
+ ],
2390
+ "flags": {},
2391
+ "order": 34,
2392
+ "mode": 0,
2393
+ "outputs": [
2394
+ {
2395
+ "name": "IMAGE",
2396
+ "type": "IMAGE",
2397
+ "links": [
2398
+ 79
2399
+ ],
2400
+ "slot_index": 0
2401
+ },
2402
+ {
2403
+ "name": "MASK",
2404
+ "type": "MASK",
2405
+ "links": null
2406
+ }
2407
+ ],
2408
+ "properties": {
2409
+ "Node name for S&R": "LoadImage"
2410
+ },
2411
+ "widgets_values": [
2412
+ "example.png",
2413
+ "image"
2414
+ ]
2415
+ },
2416
+ {
2417
+ "id": 24,
2418
+ "type": "Reroute",
2419
+ "pos": [
2420
+ 853,
2421
+ 1135
2422
+ ],
2423
+ "size": [
2424
+ 90.4,
2425
+ 26
2426
+ ],
2427
+ "flags": {},
2428
+ "order": 36,
2429
+ "mode": 0,
2430
+ "inputs": [
2431
+ {
2432
+ "name": "",
2433
+ "type": "*",
2434
+ "link": 18,
2435
+ "pos": [
2436
+ 45.2,
2437
+ 0
2438
+ ]
2439
+ }
2440
+ ],
2441
+ "outputs": [
2442
+ {
2443
+ "name": "OPENAI",
2444
+ "type": "OPENAI",
2445
+ "links": [
2446
+ 19,
2447
+ 23
2448
+ ],
2449
+ "slot_index": 0
2450
+ }
2451
+ ],
2452
+ "properties": {
2453
+ "showOutputText": true,
2454
+ "horizontal": true
2455
+ }
2456
+ },
2457
+ {
2458
+ "id": 86,
2459
+ "type": "PreviewImage",
2460
+ "pos": [
2461
+ 1800,
2462
+ 331
2463
+ ],
2464
+ "size": {
2465
+ "0": 210,
2466
+ "1": 250
2467
+ },
2468
+ "flags": {},
2469
+ "order": 50,
2470
+ "mode": 0,
2471
+ "inputs": [
2472
+ {
2473
+ "name": "images",
2474
+ "type": "IMAGE",
2475
+ "link": 80
2476
+ }
2477
+ ],
2478
+ "properties": {
2479
+ "Node name for S&R": "PreviewImage"
2480
+ }
2481
+ },
2482
+ {
2483
+ "id": 12,
2484
+ "type": "Debug Text _O",
2485
+ "pos": [
2486
+ 1002,
2487
+ 796
2488
+ ],
2489
+ "size": [
2490
+ 280.1154894326173,
2491
+ 58
2492
+ ],
2493
+ "flags": {
2494
+ "collapsed": false
2495
+ },
2496
+ "order": 42,
2497
+ "mode": 0,
2498
+ "inputs": [
2499
+ {
2500
+ "name": "text",
2501
+ "type": "STRING",
2502
+ "link": 8,
2503
+ "widget": {
2504
+ "name": "text",
2505
+ "config": [
2506
+ "STRING",
2507
+ {
2508
+ "multiline": true
2509
+ }
2510
+ ]
2511
+ }
2512
+ }
2513
+ ],
2514
+ "properties": {
2515
+ "Node name for S&R": "Debug Text _O"
2516
+ },
2517
+ "widgets_values": [
2518
+ "",
2519
+ "ChatGPT simple"
2520
+ ]
2521
+ },
2522
+ {
2523
+ "id": 21,
2524
+ "type": "Debug Text _O",
2525
+ "pos": [
2526
+ 1448,
2527
+ 1469
2528
+ ],
2529
+ "size": {
2530
+ "0": 210,
2531
+ "1": 58
2532
+ },
2533
+ "flags": {
2534
+ "collapsed": false
2535
+ },
2536
+ "order": 71,
2537
+ "mode": 0,
2538
+ "inputs": [
2539
+ {
2540
+ "name": "text",
2541
+ "type": "STRING",
2542
+ "link": 16,
2543
+ "widget": {
2544
+ "name": "text",
2545
+ "config": [
2546
+ "STRING",
2547
+ {
2548
+ "multiline": true
2549
+ }
2550
+ ]
2551
+ }
2552
+ }
2553
+ ],
2554
+ "properties": {
2555
+ "Node name for S&R": "Debug Text _O"
2556
+ },
2557
+ "widgets_values": [
2558
+ "",
2559
+ "ChatGPT"
2560
+ ]
2561
+ },
2562
+ {
2563
+ "id": 41,
2564
+ "type": "Trim Text _O",
2565
+ "pos": [
2566
+ 666,
2567
+ 2518
2568
+ ],
2569
+ "size": {
2570
+ "0": 239.58309936523438,
2571
+ "1": 34
2572
+ },
2573
+ "flags": {},
2574
+ "order": 41,
2575
+ "mode": 0,
2576
+ "inputs": [
2577
+ {
2578
+ "name": "text",
2579
+ "type": "STRING",
2580
+ "link": 35,
2581
+ "widget": {
2582
+ "name": "text",
2583
+ "config": [
2584
+ "STRING",
2585
+ {
2586
+ "multiline": true
2587
+ }
2588
+ ]
2589
+ }
2590
+ }
2591
+ ],
2592
+ "outputs": [
2593
+ {
2594
+ "name": "STRING",
2595
+ "type": "STRING",
2596
+ "links": [
2597
+ 36
2598
+ ],
2599
+ "slot_index": 0
2600
+ }
2601
+ ],
2602
+ "properties": {
2603
+ "Node name for S&R": "Trim Text _O"
2604
+ },
2605
+ "widgets_values": [
2606
+ ""
2607
+ ]
2608
+ },
2609
+ {
2610
+ "id": 45,
2611
+ "type": "Debug Text _O",
2612
+ "pos": [
2613
+ 1618,
2614
+ 2519
2615
+ ],
2616
+ "size": [
2617
+ 210,
2618
+ 58
2619
+ ],
2620
+ "flags": {
2621
+ "collapsed": false
2622
+ },
2623
+ "order": 66,
2624
+ "mode": 0,
2625
+ "inputs": [
2626
+ {
2627
+ "name": "text",
2628
+ "type": "STRING",
2629
+ "link": 39,
2630
+ "widget": {
2631
+ "name": "text",
2632
+ "config": [
2633
+ "STRING",
2634
+ {
2635
+ "multiline": true
2636
+ }
2637
+ ]
2638
+ }
2639
+ }
2640
+ ],
2641
+ "properties": {
2642
+ "Node name for S&R": "Debug Text _O"
2643
+ },
2644
+ "widgets_values": [
2645
+ "",
2646
+ "NSP"
2647
+ ],
2648
+ "color": "#232",
2649
+ "bgcolor": "#353"
2650
+ },
2651
+ {
2652
+ "id": 49,
2653
+ "type": "Reroute",
2654
+ "pos": [
2655
+ 1619,
2656
+ 2797
2657
+ ],
2658
+ "size": [
2659
+ 75,
2660
+ 26
2661
+ ],
2662
+ "flags": {},
2663
+ "order": 67,
2664
+ "mode": 0,
2665
+ "inputs": [
2666
+ {
2667
+ "name": "",
2668
+ "type": "*",
2669
+ "link": 41,
2670
+ "pos": [
2671
+ 37.5,
2672
+ 0
2673
+ ]
2674
+ }
2675
+ ],
2676
+ "outputs": [
2677
+ {
2678
+ "name": "",
2679
+ "type": "STRING",
2680
+ "links": [
2681
+ 46
2682
+ ],
2683
+ "slot_index": 0
2684
+ }
2685
+ ],
2686
+ "properties": {
2687
+ "showOutputText": false,
2688
+ "horizontal": true
2689
+ }
2690
+ },
2691
+ {
2692
+ "id": 47,
2693
+ "type": "Note _O",
2694
+ "pos": [
2695
+ 1614,
2696
+ 2255
2697
+ ],
2698
+ "size": {
2699
+ "0": 210,
2700
+ "1": 175.2283172607422
2701
+ },
2702
+ "flags": {},
2703
+ "order": 11,
2704
+ "mode": 0,
2705
+ "properties": {
2706
+ "Node name for S&R": "Note _O"
2707
+ },
2708
+ "widgets_values": [
2709
+ " - debug will write text to the \n console screen\n - prefix will be written before \n your log "
2710
+ ],
2711
+ "color": "#432",
2712
+ "bgcolor": "#653"
2713
+ },
2714
+ {
2715
+ "id": 63,
2716
+ "type": "ImageScaleFactor _O",
2717
+ "pos": [
2718
+ 1433,
2719
+ 351
2720
+ ],
2721
+ "size": {
2722
+ "0": 315,
2723
+ "1": 154
2724
+ },
2725
+ "flags": {},
2726
+ "order": 43,
2727
+ "mode": 0,
2728
+ "inputs": [
2729
+ {
2730
+ "name": "image",
2731
+ "type": "IMAGE",
2732
+ "link": 79
2733
+ }
2734
+ ],
2735
+ "outputs": [
2736
+ {
2737
+ "name": "IMAGE",
2738
+ "type": "IMAGE",
2739
+ "links": [
2740
+ 80
2741
+ ],
2742
+ "slot_index": 0
2743
+ }
2744
+ ],
2745
+ "properties": {
2746
+ "Node name for S&R": "ImageScaleFactor _O"
2747
+ },
2748
+ "widgets_values": [
2749
+ "nearest-exact",
2750
+ 1.25,
2751
+ 1.25,
2752
+ "enabled",
2753
+ "disabled"
2754
+ ],
2755
+ "color": "#232",
2756
+ "bgcolor": "#353"
2757
+ }
2758
+ ],
2759
+ "links": [
2760
+ [
2761
+ 2,
2762
+ 3,
2763
+ 0,
2764
+ 4,
2765
+ 0,
2766
+ "STRING"
2767
+ ],
2768
+ [
2769
+ 4,
2770
+ 5,
2771
+ 0,
2772
+ 6,
2773
+ 0,
2774
+ "FLOAT"
2775
+ ],
2776
+ [
2777
+ 5,
2778
+ 6,
2779
+ 0,
2780
+ 7,
2781
+ 0,
2782
+ "INT"
2783
+ ],
2784
+ [
2785
+ 6,
2786
+ 7,
2787
+ 0,
2788
+ 3,
2789
+ 0,
2790
+ "FLOAT"
2791
+ ],
2792
+ [
2793
+ 7,
2794
+ 2,
2795
+ 0,
2796
+ 5,
2797
+ 0,
2798
+ "FLOAT"
2799
+ ],
2800
+ [
2801
+ 8,
2802
+ 8,
2803
+ 0,
2804
+ 12,
2805
+ 0,
2806
+ "STRING"
2807
+ ],
2808
+ [
2809
+ 9,
2810
+ 14,
2811
+ 0,
2812
+ 16,
2813
+ 0,
2814
+ "OPENAI_CHAT_MESSAGES"
2815
+ ],
2816
+ [
2817
+ 10,
2818
+ 15,
2819
+ 0,
2820
+ 16,
2821
+ 1,
2822
+ "OPENAI_CHAT_MESSAGES"
2823
+ ],
2824
+ [
2825
+ 12,
2826
+ 16,
2827
+ 0,
2828
+ 19,
2829
+ 0,
2830
+ "*"
2831
+ ],
2832
+ [
2833
+ 14,
2834
+ 19,
2835
+ 0,
2836
+ 20,
2837
+ 0,
2838
+ "*"
2839
+ ],
2840
+ [
2841
+ 15,
2842
+ 20,
2843
+ 0,
2844
+ 17,
2845
+ 1,
2846
+ "OPENAI_CHAT_MESSAGES"
2847
+ ],
2848
+ [
2849
+ 16,
2850
+ 17,
2851
+ 0,
2852
+ 21,
2853
+ 0,
2854
+ "STRING"
2855
+ ],
2856
+ [
2857
+ 18,
2858
+ 13,
2859
+ 0,
2860
+ 24,
2861
+ 0,
2862
+ "*"
2863
+ ],
2864
+ [
2865
+ 19,
2866
+ 24,
2867
+ 0,
2868
+ 17,
2869
+ 0,
2870
+ "OPENAI"
2871
+ ],
2872
+ [
2873
+ 22,
2874
+ 27,
2875
+ 0,
2876
+ 26,
2877
+ 0,
2878
+ "OPENAI"
2879
+ ],
2880
+ [
2881
+ 23,
2882
+ 24,
2883
+ 0,
2884
+ 28,
2885
+ 0,
2886
+ "*"
2887
+ ],
2888
+ [
2889
+ 24,
2890
+ 28,
2891
+ 0,
2892
+ 27,
2893
+ 0,
2894
+ "*"
2895
+ ],
2896
+ [
2897
+ 25,
2898
+ 26,
2899
+ 0,
2900
+ 30,
2901
+ 0,
2902
+ "IMAGE"
2903
+ ],
2904
+ [
2905
+ 26,
2906
+ 26,
2907
+ 0,
2908
+ 31,
2909
+ 1,
2910
+ "IMAGE"
2911
+ ],
2912
+ [
2913
+ 28,
2914
+ 28,
2915
+ 0,
2916
+ 32,
2917
+ 0,
2918
+ "*"
2919
+ ],
2920
+ [
2921
+ 29,
2922
+ 32,
2923
+ 0,
2924
+ 31,
2925
+ 0,
2926
+ "OPENAI"
2927
+ ],
2928
+ [
2929
+ 30,
2930
+ 31,
2931
+ 0,
2932
+ 33,
2933
+ 0,
2934
+ "IMAGE"
2935
+ ],
2936
+ [
2937
+ 35,
2938
+ 34,
2939
+ 0,
2940
+ 41,
2941
+ 0,
2942
+ "STRING"
2943
+ ],
2944
+ [
2945
+ 36,
2946
+ 41,
2947
+ 0,
2948
+ 39,
2949
+ 0,
2950
+ "STRING"
2951
+ ],
2952
+ [
2953
+ 37,
2954
+ 37,
2955
+ 0,
2956
+ 39,
2957
+ 1,
2958
+ "STRING"
2959
+ ],
2960
+ [
2961
+ 38,
2962
+ 39,
2963
+ 0,
2964
+ 44,
2965
+ 0,
2966
+ "STRING"
2967
+ ],
2968
+ [
2969
+ 39,
2970
+ 44,
2971
+ 0,
2972
+ 45,
2973
+ 0,
2974
+ "STRING"
2975
+ ],
2976
+ [
2977
+ 41,
2978
+ 44,
2979
+ 0,
2980
+ 49,
2981
+ 0,
2982
+ "*"
2983
+ ],
2984
+ [
2985
+ 44,
2986
+ 50,
2987
+ 0,
2988
+ 48,
2989
+ 0,
2990
+ "STRING"
2991
+ ],
2992
+ [
2993
+ 45,
2994
+ 48,
2995
+ 0,
2996
+ 51,
2997
+ 0,
2998
+ "IMAGE"
2999
+ ],
3000
+ [
3001
+ 46,
3002
+ 49,
3003
+ 0,
3004
+ 50,
3005
+ 0,
3006
+ "*"
3007
+ ],
3008
+ [
3009
+ 47,
3010
+ 69,
3011
+ 0,
3012
+ 68,
3013
+ 0,
3014
+ "MODEL"
3015
+ ],
3016
+ [
3017
+ 48,
3018
+ 70,
3019
+ 0,
3020
+ 68,
3021
+ 1,
3022
+ "CONDITIONING"
3023
+ ],
3024
+ [
3025
+ 49,
3026
+ 71,
3027
+ 0,
3028
+ 68,
3029
+ 2,
3030
+ "CONDITIONING"
3031
+ ],
3032
+ [
3033
+ 50,
3034
+ 72,
3035
+ 0,
3036
+ 68,
3037
+ 3,
3038
+ "LATENT"
3039
+ ],
3040
+ [
3041
+ 51,
3042
+ 68,
3043
+ 0,
3044
+ 73,
3045
+ 0,
3046
+ "LATENT"
3047
+ ],
3048
+ [
3049
+ 53,
3050
+ 69,
3051
+ 1,
3052
+ 70,
3053
+ 0,
3054
+ "CLIP"
3055
+ ],
3056
+ [
3057
+ 54,
3058
+ 69,
3059
+ 1,
3060
+ 71,
3061
+ 0,
3062
+ "CLIP"
3063
+ ],
3064
+ [
3065
+ 55,
3066
+ 73,
3067
+ 0,
3068
+ 74,
3069
+ 0,
3070
+ "IMAGE"
3071
+ ],
3072
+ [
3073
+ 56,
3074
+ 68,
3075
+ 0,
3076
+ 66,
3077
+ 0,
3078
+ "LATENT"
3079
+ ],
3080
+ [
3081
+ 57,
3082
+ 66,
3083
+ 0,
3084
+ 75,
3085
+ 0,
3086
+ "LATENT"
3087
+ ],
3088
+ [
3089
+ 58,
3090
+ 75,
3091
+ 0,
3092
+ 76,
3093
+ 0,
3094
+ "IMAGE"
3095
+ ],
3096
+ [
3097
+ 59,
3098
+ 69,
3099
+ 2,
3100
+ 77,
3101
+ 0,
3102
+ "*"
3103
+ ],
3104
+ [
3105
+ 62,
3106
+ 77,
3107
+ 0,
3108
+ 78,
3109
+ 0,
3110
+ "*"
3111
+ ],
3112
+ [
3113
+ 65,
3114
+ 66,
3115
+ 0,
3116
+ 62,
3117
+ 0,
3118
+ "LATENT"
3119
+ ],
3120
+ [
3121
+ 66,
3122
+ 62,
3123
+ 0,
3124
+ 80,
3125
+ 0,
3126
+ "LATENT"
3127
+ ],
3128
+ [
3129
+ 68,
3130
+ 80,
3131
+ 0,
3132
+ 81,
3133
+ 0,
3134
+ "IMAGE"
3135
+ ],
3136
+ [
3137
+ 69,
3138
+ 78,
3139
+ 0,
3140
+ 73,
3141
+ 1,
3142
+ "VAE"
3143
+ ],
3144
+ [
3145
+ 75,
3146
+ 83,
3147
+ 0,
3148
+ 75,
3149
+ 1,
3150
+ "VAE"
3151
+ ],
3152
+ [
3153
+ 76,
3154
+ 83,
3155
+ 0,
3156
+ 84,
3157
+ 0,
3158
+ "*"
3159
+ ],
3160
+ [
3161
+ 77,
3162
+ 84,
3163
+ 0,
3164
+ 80,
3165
+ 1,
3166
+ "VAE"
3167
+ ],
3168
+ [
3169
+ 78,
3170
+ 78,
3171
+ 0,
3172
+ 83,
3173
+ 0,
3174
+ "*"
3175
+ ],
3176
+ [
3177
+ 79,
3178
+ 85,
3179
+ 0,
3180
+ 63,
3181
+ 0,
3182
+ "IMAGE"
3183
+ ],
3184
+ [
3185
+ 80,
3186
+ 63,
3187
+ 0,
3188
+ 86,
3189
+ 0,
3190
+ "IMAGE"
3191
+ ]
3192
+ ],
3193
+ "groups": [
3194
+ {
3195
+ "title": "Numbers",
3196
+ "bounding": [
3197
+ 0,
3198
+ 0,
3199
+ 1129,
3200
+ 609
3201
+ ],
3202
+ "color": "#8A8"
3203
+ },
3204
+ {
3205
+ "title": "OpenAI",
3206
+ "bounding": [
3207
+ 3,
3208
+ 638,
3209
+ 2030,
3210
+ 1387
3211
+ ],
3212
+ "color": "#3f789e"
3213
+ },
3214
+ {
3215
+ "title": "ChatGPT simple",
3216
+ "bounding": [
3217
+ 462,
3218
+ 690,
3219
+ 1560,
3220
+ 335
3221
+ ],
3222
+ "color": "#88A"
3223
+ },
3224
+ {
3225
+ "title": "ChatGPT",
3226
+ "bounding": [
3227
+ 474,
3228
+ 1194,
3229
+ 1552,
3230
+ 416
3231
+ ],
3232
+ "color": "#88A"
3233
+ },
3234
+ {
3235
+ "title": "Dalle2",
3236
+ "bounding": [
3237
+ 471,
3238
+ 1636,
3239
+ 1550,
3240
+ 372
3241
+ ],
3242
+ "color": "#88A"
3243
+ },
3244
+ {
3245
+ "title": "Text tools",
3246
+ "bounding": [
3247
+ 5,
3248
+ 2042,
3249
+ 2029,
3250
+ 1359
3251
+ ],
3252
+ "color": "#3f789e"
3253
+ },
3254
+ {
3255
+ "title": "Soup Prompts",
3256
+ "bounding": [
3257
+ 32,
3258
+ 2345,
3259
+ 403,
3260
+ 422
3261
+ ],
3262
+ "color": "#8A8"
3263
+ },
3264
+ {
3265
+ "title": "text operations",
3266
+ "bounding": [
3267
+ 458,
3268
+ 2102,
3269
+ 1543,
3270
+ 665
3271
+ ],
3272
+ "color": "#88A"
3273
+ },
3274
+ {
3275
+ "title": "Utility",
3276
+ "bounding": [
3277
+ -4,
3278
+ 4007,
3279
+ 2027,
3280
+ 606
3281
+ ],
3282
+ "color": "#3f789e"
3283
+ },
3284
+ {
3285
+ "title": "Latent tools",
3286
+ "bounding": [
3287
+ 7,
3288
+ 3416,
3289
+ 2024,
3290
+ 579
3291
+ ],
3292
+ "color": "#3f789e"
3293
+ },
3294
+ {
3295
+ "title": "Image tools",
3296
+ "bounding": [
3297
+ 1157,
3298
+ -1,
3299
+ 872,
3300
+ 609
3301
+ ],
3302
+ "color": "#3f789e"
3303
+ },
3304
+ {
3305
+ "title": "generate 4 images",
3306
+ "bounding": [
3307
+ 6,
3308
+ 3451,
3309
+ 510,
3310
+ 543
3311
+ ],
3312
+ "color": "#88A"
3313
+ },
3314
+ {
3315
+ "title": "Group",
3316
+ "bounding": [
3317
+ 73,
3318
+ 3485,
3319
+ 140,
3320
+ 80
3321
+ ],
3322
+ "color": "#3f789e"
3323
+ },
3324
+ {
3325
+ "title": "Group",
3326
+ "bounding": [
3327
+ 53,
3328
+ 3487,
3329
+ 140,
3330
+ 80
3331
+ ],
3332
+ "color": "#3f789e"
3333
+ }
3334
+ ],
3335
+ "config": {},
3336
+ "extra": {},
3337
+ "version": 0.4
3338
+ }
Workflows/ChatGPT.png ADDED
Workflows/ChatGPT_Advanced.png ADDED
Workflows/string_o.png ADDED
__init__.py ADDED
@@ -0,0 +1,54 @@
1
+
2
+ #----------------------------------------------------------
3
+ #update logic
4
+ import os
5
+ import json
6
+
7
+
8
+ #check if config file exists
9
+ if not os.path.isfile(os.path.join(os.path.dirname(os.path.realpath(__file__)),"config.json")):
10
+ #create config file
11
+ config = {
12
+ "autoUpdate": True,
13
+ "branch": "main",
14
+ "openAI_API_Key": "sk-#########################################"
15
+ }
16
+ with open(os.path.join(os.path.dirname(os.path.realpath(__file__)),"config.json"), "w") as f:
17
+ json.dump(config, f, indent=4)
18
+
19
+ #load config file
20
+ with open(os.path.join(os.path.dirname(os.path.realpath(__file__)),"config.json"), "r") as f:
21
+ config = json.load(f)
22
+
23
+ #check if autoUpdate is set
24
+ if "autoUpdate" not in config:
25
+ config["autoUpdate"] = False
26
+
27
+ #check if branch is set
28
+ if "branch" not in config:
29
+ config["branch"] = "main"
30
+
31
+
32
+ if config["autoUpdate"] == True:
33
+ try:
34
+ from .update import update as update
35
+ currentPath = os.path.dirname(os.path.realpath(__file__))
36
+ #run update/update.py to update the node class mappings
37
+ update.update(currentPath,branch_name=config["branch"])
38
+ except ImportError:
39
+ print("Failed to auto-update `Quality of Life Suit`")
40
+
41
+ #----------------------------------------------------------
42
+
43
+ from .src.QualityOfLifeSuit_Omar92 import NODE_CLASS_MAPPINGS as NODE_CLASS_MAPPINGS_SUIT
44
+ try:
45
+ from .src.QualityOfLife_deprecatedNodes import NODE_CLASS_MAPPINGS as NODE_CLASS_MAPPINGS_DEPRECATED
46
+ except ImportError:
47
+ NODE_CLASS_MAPPINGS_DEPRECATED = {}
48
+
49
+
50
+ __all__ = ['NODE_CLASS_MAPPINGS_SUIT', 'NODE_CLASS_MAPPINGS_DEPRECATED', 'NODE_CLASS_MAPPINGS']
51
+ NODE_CLASS_MAPPINGS = {
52
+ **NODE_CLASS_MAPPINGS_SUIT,
53
+ **NODE_CLASS_MAPPINGS_DEPRECATED
54
+ }
__pycache__/__init__.cpython-310.pyc ADDED
Binary file (1.2 kB). View file
 
config.json ADDED
@@ -0,0 +1,5 @@
1
+ {
2
+ "autoUpdate": true,
3
+ "branch": "main",
4
+ "openAI_API_Key": "sk-#########################################"
5
+ }
fonts/Alkatra.ttf ADDED
Binary file (858 kB). View file
 
fonts/CALIBRI.TTF ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:df2c69a18a462e5cbc97d04a033f3bd7cd0abfe818381641f8c2dee7b7c43dbd
3
+ size 1681220
fonts/COMIC.TTF ADDED
Binary file (246 kB). View file
 
fonts/COMICI.TTF ADDED
Binary file (227 kB). View file
 
fonts/COMICZ.TTF ADDED
Binary file (224 kB). View file
 
fonts/__init__.py ADDED
@@ -0,0 +1 @@
1
+ ## empty
how directory should look like.png ADDED
nsp_pantry.json ADDED
The diff for this file is too large to render. See raw diff
 
src/QualityOfLifeSuit_Omar92.py ADDED
@@ -0,0 +1,1638 @@
1
+ # Developed by Omar - https://github.com/omar92
2
+ # https://civitai.com/user/omar92
3
+ # discord: Omar92#3374
4
+
5
+ import io
6
+ import json
7
+ import os
8
+ import random
9
+ import time
10
+ from urllib.request import urlopen
11
+ import numpy as np
12
+ import requests
13
+ import torch
14
+ from PIL import Image, ImageFont, ImageDraw
15
+ import importlib
16
+ import comfy.samplers
17
+ import comfy.sd
18
+ import comfy.utils
19
+ import torch.nn as nn
20
+
21
+ MAX_RESOLUTION = 8192
22
+
23
+ # region INSTALLATION CLEANUP (thanks WAS i got this from you)
24
+ # Delete legacy nodes
25
+ legacy_nodes = ['ChatGPT_Omar92.py',
26
+ 'LatentUpscaleMultiply_Omar92.py', 'StringSuit_Omar92.py']
27
+ legacy_nodes_found = []
28
+ f_disp = False
29
+ for f in legacy_nodes:
30
+ node_path_dir = os.getcwd()+'./custom_nodes/'
31
+ file = f'{node_path_dir}{f}'
32
+ if os.path.exists(file):
33
+ import zipfile
34
+ if not f_disp:
35
+ print(
36
+ '\033[33mQualityOflife Node Suite:\033[0m Found legacy nodes. Archiving legacy nodes...')
37
+ f_disp = True
38
+ legacy_nodes_found.append(file)
39
+ if legacy_nodes_found:
40
+ from os.path import basename
41
+ archive = zipfile.ZipFile(
42
+ f'{node_path_dir}QualityOflife_Backup_{round(time.time())}.zip', "w")
43
+ for f in legacy_nodes_found:
44
+ archive.write(f, basename(f))
45
+ try:
46
+ os.remove(f)
47
+ except OSError:
48
+ pass
49
+ archive.close()
50
+ if f_disp:
51
+ print('\033[33mQualityOflife Node Suite:\033[0m Legacy cleanup complete.')
52
+ # endregion
53
+
54
+ # region global
55
+ PACKAGE_NAME = '\033[33mQualityOfLifeSuit_Omar92:\033[0m'
56
+ NODE_FILE = os.path.abspath(__file__)
57
+ SUIT_DIR = (os.path.dirname(os.path.dirname(NODE_FILE))
58
+ if os.path.dirname(os.path.dirname(NODE_FILE)) == 'QualityOfLifeSuit_Omar92'
59
+ or os.path.dirname(os.path.dirname(NODE_FILE)) == 'QualityOfLifeSuit_Omar92-dev'
60
+ else os.path.dirname(NODE_FILE))
61
+ SUIT_DIR = os.path.normpath(os.path.join(SUIT_DIR, '..'))
62
+ print(f'\033[33mQualityOfLifeSuit_Omar92_DIR:\033[0m {SUIT_DIR}')
63
+
64
+
65
+ def enforce_mul_of_64(d):
66
+ leftover = d % 8 # despite the function name, this enforces a multiple of 8 (the latent grid size)
67
+ if leftover != 0: # dimension is not a multiple of 8
68
+ if (leftover < 4): # small remainder: round down
69
+ d -= leftover # drop the extra pixels
70
+ else: # large remainder: round up
71
+ d += 8 - leftover # pad up to the next multiple of 8
72
+
73
+ return d
74
+
75
+ # endregion
76
+
77
+
78
+ # region openAITools
79
+
80
+
81
+ def install_openai():
82
+ # Helper function to install the OpenAI module if not already installed
83
+ try:
84
+ importlib.import_module('openai')
85
+ except ImportError:
86
+ import pip
87
+ pip.main(['install', 'openai'])
88
+
89
+
90
+ def get_api_key():
91
+ # Helper function to get the API key from the file
92
+ try:
93
+ # open config file
94
+ configPath = os.path.join(SUIT_DIR, "config.json")
95
+ with open(configPath, 'r') as f: # Open the file and read the API key
96
+ config = json.load(f)
97
+ api_key = config["openAI_API_Key"]
98
+ except:
99
+ print("Error: OpenAI API key file not found; OpenAI features won't work for you")
100
+ return ""
101
+ return api_key # Return the API key
102
+
103
+
104
+ openAI_models = None
105
+ #region chatGPTDefaultInitMessages
106
+ chatGPTDefaultInitMessage_tags = """
107
+ First, some basic Stable Diffusion prompting rules for you to better understand the syntax. The parentheses are there for grouping prompt words together, so that we can set a uniform weight to multiple words at the same time. Notice the ":1.2" in (masterpiece, best quality, absurdres:1.2); it means that we set the weight of both "masterpiece" and "best quality" to 1.2. The parentheses can also be used to directly increase the weight of a single word without adding ":WEIGHT". For example, we can type ((masterpiece)); this will increase the weight of "masterpiece" to 1.21. This basic rule is imperative: any parentheses in a set of prompts have purpose, and so they must not be removed in any case. Conversely, when brackets are used in prompts, it means to decrease the weight of a word. For example, by typing "[bird]", we decrease the weight of the word "bird" by 1.1.
108
+ Now, I've developed a prompt template to generate character portraits in Stable Diffusion. Here's how it works. Every time the user sends you "CHAR prompts", you should give prompts that follow the format below:
109
+ CHAR: [pre-defined prompts], [location], [time], [weather], [gender], [skin color], [photo type], [pose], [camera position], [facial expression], [body feature], [skin feature], [eye color], [outfit], [hair style], [hair color], [accessories], [random prompt],
110
+
111
+ [pre-defined prompts] are always the same, which are "RAW, (masterpiece, best quality, photorealistic, absurdres, 8k:1.2), best lighting, complex pupils, complex textile, detailed background". Don't change anything in [pre-defined prompts], meaning that you SHOULD NOT REMOVE OR MODIFY the parentheses since their purpose is for grouping prompt words together so that we can set uniform weight to them;
112
+ [location] is the location where character is in, can be either outdoor location or indoor, but need to be specific;
113
+ [time] refers to the time of day, can be "day", "noon", "night", "evening", "dawn" or "dusk";
114
+ [weather] is the weather, for example "windy", "rainy" or "cloudy";
115
+ [gender] is either "1boy" or "1girl";
116
+ [skin color] is the skin color of the character, could be "dark skin", "yellow skin" or "pale skin";
117
+ [photo type] can be "upper body", "full body", "close up", "mid-range", "Headshot", "3/4 shot" or "environmental portrait";
118
+ [pose] is the character's pose, for example, "standing", "sitting", "kneeling" or "squatting" ...;
119
+ [camera position] can be "from top", "from below", "from side", "from front" or "from behind";
120
+ [facial expression] is the expression of the character, you should give user a random expression;
121
+ [body feature] describe how the character's body looks like, for example, it could be "wide hip", "large breasts" or "sexy", try to be creative;
122
+ [skin feature] is the feature of character's skin. Could be "scar on skin", "dirty skin", "tanned mark", "birthmarks" or other skin features you can think of;
123
+ [eye color] is the pupil color of the character, it can be of any color as long as the color looks natural on human eyes, so avoid colors like pure red or pure black;
124
+ [outfit] is what character wears, it should include at least the top wear, bottom wear and footwear, for example, "crop top, shorts, sneakers", the style of outfit can be any, but the [character gender] should be considered;
125
+ [hair style] is the hairstyle of the character, [character gender] should be taken into account when setting the hairstyle;
126
+ [hair color] can be of any color, for example, "orange hair", "multi-colored hair";
127
+ [accessories] is the accessory the character might wear, can be "chocker", "earrings", "bracelet" or other types of accessory;
128
+ [random prompt] will test your creativity, put anything here, just remember that you can only use nouns in [random prompt], the number of [random prompt] can be between 1 to 4. For example, you could give "campfire", but you can also give "shooting star, large moon, fallen leaves". Again, be creative with this one.
129
+
130
+ also use gelbooru tags as much as you can
131
+ if you use gelbooru write "gTags" before it
132
+ Do not use markdown syntax in prompts, do not use capital letter and keep all prompt words in the same line. Respond with "prompt:" to start prompting with us.
133
+
134
+ """;
135
+
136
+ chatGPTDefaultInitMessage_description = """
137
+ act as a prompt generator; I will give you text and you describe an image that matches that text in detail. Use gelbooru tags in your description and also describe the high quality of the image. Answer with one response only.
138
+ """;
139
+ def get_init_message(isTags=False):
140
+ if(isTags):
141
+ return chatGPTDefaultInitMessage_tags
142
+ else:
143
+ return chatGPTDefaultInitMessage_description
144
+
145
+ #endregion chatGPTDefaultInitMessages
146
+ def get_openAI_models():
147
+ global openAI_models
148
+ if (openAI_models != None):
149
+ return openAI_models
150
+
151
+ install_openai()
152
+ import openai
153
+ # Set the API key for the OpenAI module
154
+ openai.api_key = get_api_key()
155
+
156
+ try:
157
+ models = openai.Model.list() # Get the list of models
158
+ except:
159
+ print("Error: OpenAI API key is invalid; OpenAI features won't work for you")
160
+ return []
161
+
162
+ openAI_models = [] # Create a list for the chat models
163
+ for model in models["data"]: # Loop through the models
164
+ openAI_models.append(model["id"]) # Add the model to the list
165
+
166
+ return openAI_models # Return the list of chat models
167
+
168
+
169
+ openAI_gpt_models = None
170
+
171
+
172
+ def get_gpt_models():
173
+ global openAI_gpt_models
174
+ if (openAI_gpt_models != None):
175
+ return openAI_gpt_models
176
+ models = get_openAI_models()
177
+ openAI_gpt_models = [] # Create a list for the chat models
178
+ for model in models: # Loop through the models
179
+ if ("gpt" in model.lower()):
180
+ openAI_gpt_models.append(model)
181
+
182
+ return openAI_gpt_models # Return the list of chat models
183
+
184
+
185
+ class O_ChatGPT_O:
186
+ """
187
+ this node is based on the OpenAI ChatGPT API to generate prompts
188
+ """
189
+ # Define the input types for the node
190
+ @classmethod
191
+ def INPUT_TYPES(cls):
192
+ return {
193
+ "required": {
194
+ # Multiline string input for the prompt
195
+ "prompt": ("STRING", {"multiline": True}),
196
+ "model": (get_gpt_models(), {"default": "gpt-3.5-turbo"}),
197
+ "behaviour": (["tags","description"], {"default": "description"}),
198
+ },
199
+ "optional": {
200
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
201
+ }
202
+ }
203
+
204
+ RETURN_TYPES = ("STRING",) # Define the return type of the node
205
+ FUNCTION = "fun" # Define the function name for the node
206
+ CATEGORY = "O/OpenAI" # Define the category for the node
207
+
208
+ def fun(self, model, prompt,behaviour, seed):
209
+ install_openai() # Install the OpenAI module if not already installed
210
+ import openai # Import the OpenAI module
211
+
212
+ # Get the API key from the file
213
+ api_key = get_api_key()
214
+
215
+ openai.api_key = api_key # Set the API key for the OpenAI module
216
+ initMessage = "";
217
+ if(behaviour == "description"):
218
+ initMessage = get_init_message(False);
219
+ else:
220
+ initMessage = get_init_message(True);
221
+ # Create a chat completion using the OpenAI module
222
+ try:
223
+ completion = openai.ChatCompletion.create(
224
+ model=model,
225
+ messages=[
226
+ {"role": "user", "content":initMessage},
227
+ {"role": "user", "content": prompt}
228
+ ]
229
+ )
230
+ except: # sometimes it fails first time to connect to server
231
+ completion = openai.ChatCompletion.create(
232
+ model=model,
233
+ messages=[
234
+ {"role": "user", "content": initMessage},
235
+ {"role": "user", "content": prompt}
236
+ ]
237
+ )
238
+ # Get the answer from the chat completion
239
+ answer = completion["choices"][0]["message"]["content"]
240
+ return (answer,) # Return the answer as a string
241
+
242
+
243
+ class O_ChatGPT_medium_O:
244
+ """
245
+ this node is based on the OpenAI ChatGPT API to generate prompts
246
+ """
247
+ # Define the input types for the node
248
+ @classmethod
249
+ def INPUT_TYPES(cls):
250
+ return {
251
+ "required": {
252
+ # Multiline string input for the prompt
253
+ "prompt": ("STRING", {"multiline": True}),
254
+ "initMsg": ("STRING", {"multiline": True, "default": get_init_message()}),
255
+ "model": (get_gpt_models(), {"default": "gpt-3.5-turbo"}),
256
+ },
257
+ "optional": {
258
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
259
+ }
260
+ }
261
+
262
+ RETURN_TYPES = ("STRING",) # Define the return type of the node
263
+ FUNCTION = "fun" # Define the function name for the node
264
+ CATEGORY = "O/OpenAI" # Define the category for the node
265
+
266
+ def fun(self, model, prompt, initMsg, seed):
267
+ install_openai() # Install the OpenAI module if not already installed
268
+ import openai # Import the OpenAI module
269
+
270
+ # Get the API key from the file
271
+ api_key = get_api_key()
272
+
273
+ openai.api_key = api_key # Set the API key for the OpenAI module
274
+
275
+ # Create a chat completion using the OpenAI module
276
+ try:
277
+ completion = openai.ChatCompletion.create(
278
+ model=model,
279
+ messages=[
280
+ {"role": "user", "content": initMsg},
281
+ {"role": "user", "content": prompt}
282
+ ]
283
+ )
284
+ except: # sometimes it fails first time to connect to server
285
+ completion = openai.ChatCompletion.create(
286
+ model=model,
287
+ messages=[
288
+ {"role": "user", "content": initMsg},
289
+ {"role": "user", "content": prompt}
290
+ ]
291
+ )
292
+ # Get the answer from the chat completion
293
+ answer = completion["choices"][0]["message"]["content"]
294
+ return (answer,) # Return the answer as a string
295
+
296
+
297
+ # region advanced
298
+
299
+
300
+ class load_openAI_O:
301
+ """
302
+ this node will load the OpenAI module
303
+ """
304
+ # Define the input types for the node
305
+ @classmethod
306
+ def INPUT_TYPES(cls):
307
+ return {
308
+ "required": {
309
+ }
310
+ }
311
+ RETURN_TYPES = ("OPENAI",) # Define the return type of the node
312
+ FUNCTION = "fun" # Define the function name for the node
313
+ CATEGORY = "O/OpenAI/Advanced" # Define the category for the node
314
+
315
+ def fun(self):
316
+ install_openai() # Install the OpenAI module if not already installed
317
+ import openai # Import the OpenAI module
318
+
319
+ # Get the API key from the file
320
+ api_key = get_api_key()
321
+ openai.api_key = api_key # Set the API key for the OpenAI module
322
+
323
+ return (
324
+ {
325
+ "openai": openai, # Return openAI model
326
+ },
327
+ )
328
+ # region ChatGPT
329
+
330
+
331
+ class openAi_chat_message_O:
332
+ """
333
+ create chat message for openAI chatGPT
334
+ """
335
+ # Define the input types for the node
336
+ @classmethod
337
+ def INPUT_TYPES(cls):
338
+ return {
339
+ "required": {
340
+ "role": (["user", "assistant", "system"], {"default": "user"}),
341
+ "content": ("STRING", {"multiline": True, "default":get_init_message()}),
342
+ }
343
+ }
344
+ # Define the return type of the node
345
+ RETURN_TYPES = ("OPENAI_CHAT_MESSAGES",)
346
+ FUNCTION = "fun" # Define the function name for the node
347
+ # Define the category for the node
348
+ CATEGORY = "O/OpenAI/Advanced/ChatGPT"
349
+
350
+ def fun(self, role, content):
351
+ return (
352
+ {
353
+ "messages": [{"role": role, "content": content, }]
354
+ },
355
+ )
356
+
357
+
358
+ class openAi_chat_messages_Combine_O:
359
+ """
360
+ combine chat messages into one tuple
361
+ """
362
+ # Define the input types for the node
363
+ @classmethod
364
+ def INPUT_TYPES(cls):
365
+ return {
366
+ "required": {
367
+ "message1": ("OPENAI_CHAT_MESSAGES", ),
368
+ "message2": ("OPENAI_CHAT_MESSAGES", ),
369
+ }
370
+ }
371
+ # Define the return type of the node
372
+ RETURN_TYPES = ("OPENAI_CHAT_MESSAGES",)
373
+ FUNCTION = "fun" # Define the function name for the node
374
+ # Define the category for the node
375
+ CATEGORY = "O/OpenAI/Advanced/ChatGPT"
376
+
377
+ def fun(self, message1, message2):
378
+ messages = message1["messages"] + \
379
+ message2["messages"] # combine messages
380
+
381
+ return (
382
+ {
383
+ "messages": messages
384
+ },
385
+ )
386
+
387
+
388
+ class openAi_chat_completion_O:
389
+ """
390
+ create chat completion for openAI chatGPT
391
+ """
392
+ # Define the input types for the node
393
+ @classmethod
394
+ def INPUT_TYPES(cls):
395
+ return {
396
+ "required": {
397
+ "openai": ("OPENAI", ),
398
+ # "model": ("STRING", {"multiline": False, "default": "gpt-3.5-turbo"}),
399
+ "model": (get_gpt_models(), {"default": "gpt-3.5-turbo"}),
400
+ "messages": ("OPENAI_CHAT_MESSAGES", ),
401
+ },
402
+ "optional": {
403
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
404
+ }
405
+ }
406
+ # Define the return type of the node
407
+ RETURN_TYPES = ("STRING", "OPENAI_CHAT_COMPLETION",)
408
+ FUNCTION = "fun" # Define the function name for the node
409
+ OUTPUT_NODE = True
410
+ # Define the category for the node
411
+ CATEGORY = "O/OpenAI/Advanced/ChatGPT"
412
+
413
+ def fun(self, openai, model, messages, seed):
414
+ # Create a chat completion using the OpenAI module
415
+ openai = openai["openai"]
416
+ try:
417
+ completion = openai.ChatCompletion.create(
418
+ model=model,
419
+ messages=messages["messages"]
420
+ )
421
+ except: # sometimes it fails first time to connect to server
422
+ completion = openai.ChatCompletion.create(
423
+ model=model,
424
+ messages=messages["messages"]
425
+ )
426
+ # Get the answer from the chat completion
427
+ content = completion["choices"][0]["message"]["content"]
428
+ return (
429
+ content, # Return the answer as a string
430
+ completion, # Return the chat completion
431
+ )
432
+
433
+
434
+ class DebugOpenAIChatMEssages_O:
435
+ """
436
+ Debug OpenAI Chat Messages
437
+ """
438
+ # Define the input types for the node
439
+ @classmethod
440
+ def INPUT_TYPES(cls):
441
+ return {
442
+ "required": {
443
+ "messages": ("OPENAI_CHAT_MESSAGES", ),
444
+ }
445
+ }
446
+ # Define the return type of the node
447
+ RETURN_TYPES = ()
448
+ FUNCTION = "fun" # Define the function name for the node
449
+ OUTPUT_NODE = True
450
+ # Define the category for the node
451
+ CATEGORY = "O/debug/OpenAI/Advanced/ChatGPT"
452
+
453
+ def fun(self, messages):
454
+ print(f'{PACKAGE_NAME}:OpenAIChatMEssages', messages["messages"])
455
+ return ()
456
+
457
+
458
+ class DebugOpenAIChatCompletion_O:
459
+ """
460
+ Debug OpenAI Chat Completion
461
+ """
462
+ # Define the input types for the node
463
+ @classmethod
464
+ def INPUT_TYPES(cls):
465
+ return {
466
+ "required": {
467
+ "completion": ("OPENAI_CHAT_COMPLETION", ),
468
+ }
469
+ }
470
+ # Define the return type of the node
471
+ RETURN_TYPES = ()
472
+ FUNCTION = "fun" # Define the function name for the node
473
+ OUTPUT_NODE = True
474
+ # Define the category for the node
475
+ CATEGORY = "O/debug/OpenAI/Advanced/ChatGPT"
476
+
477
+ def fun(self, completion):
478
+ print(f'{PACKAGE_NAME}:OpenAIChatCompletion:', completion)
479
+ return ()
480
+ # endregion ChatGPT
481
+ # region Image
482
+
483
+
484
+ class openAi_Image_create_O:
485
+ """
486
+ create image using openai
487
+ """
488
+ # Define the input types for the node
489
+ @classmethod
490
+ def INPUT_TYPES(cls):
491
+ return {
492
+ "required": {
493
+ "openai": ("OPENAI", ),
494
+ "prompt": ("STRING", {"multiline": True}),
495
+ "number": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}),
496
+ "size": (["256x256", "512x512", "1024x1024"], {"default": "256x256"}),
497
+ },
498
+ "optional": {
499
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
500
+ }
501
+ }
502
+ # Define the return type of the node
503
+ RETURN_TYPES = ("IMAGE", "MASK")
504
+ FUNCTION = "fun" # Define the function name for the node
505
+ OUTPUT_NODE = True
506
+ # Define the category for the node
507
+ CATEGORY = "O/OpenAI/Advanced/Image"
508
+
509
+ def fun(self, openai, prompt, number, size, seed):
510
+ # Create a chat completion using the OpenAI module
511
+ openai = openai["openai"]
512
+ prompt = prompt
513
+ number = 1
514
+
515
+ imageURL = ""
516
+ try:
517
+ imagesURLS = openai.Image.create(
518
+ prompt=prompt,
519
+ n=number,
520
+ size=size
521
+ )
522
+ imageURL = imagesURLS["data"][0]["url"]
523
+ except Exception as e:
524
+ print(f'{PACKAGE_NAME}:openAi_Image_create_O:', e)
525
+ imageURL = "https://i.imgur.com/removed.png"
526
+
527
+ image = requests.get(imageURL).content
528
+ i = Image.open(io.BytesIO(image))
529
+ image = i.convert("RGBA")
530
+ image = np.array(image).astype(np.float32) / 255.0
531
+ # image_np = np.transpose(image_np, (2, 0, 1))
532
+ image = torch.from_numpy(image)[None,]
533
+ if 'A' in i.getbands():
534
+ mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0
535
+ mask = 1. - torch.from_numpy(mask)
536
+ else:
537
+ mask = torch.zeros((64, 64), dtype=torch.float32, device="cpu")
538
+ return (image, mask)
539
+
540
+
541
+ class openAi_Image_Edit_O:
+     """
+     Edit an image using the OpenAI image-edit endpoint
+     """
+     # Define the input types for the node
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "openai": ("OPENAI", ),
+                 "image": ("IMAGE",),
+                 "prompt": ("STRING", {"multiline": True}),
+                 "number": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}),
+                 "size": (["256x256", "512x512", "1024x1024"], {"default": "256x256"}),
+             },
+             "optional": {
+                 "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
+             }
+         }
+     # Define the return type of the node
+     RETURN_TYPES = ("IMAGE", "MASK")
+     FUNCTION = "fun"  # Define the function name for the node
+     OUTPUT_NODE = True
+     # Define the category for the node
+     CATEGORY = "O/OpenAI/Advanced/Image"
+
+     def fun(self, openai, image, prompt, number, size, seed):
+         # Request an image edit from the OpenAI Image API
+         openai = openai["openai"]
+         number = 1  # only the first returned image is used, so always request one
+
+         # Convert the PyTorch tensor to a PIL image
+         image = image[0]
+         i = 255. * image.cpu().numpy()
+         img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
+         # Save the image to a BytesIO object as a PNG file
+         with io.BytesIO() as output:
+             img.save(output, format='PNG')
+             binary_image = output.getvalue()
+
+         # Create a mask that is opaque except for a transparent circle in the
+         # middle (the transparent area marks the region the API may edit)
+         mask_img = Image.new('RGBA', (image.shape[1], image.shape[0]), (0, 0, 0, 255))
+         draw = ImageDraw.Draw(mask_img)
+         center = (image.shape[1] // 2, image.shape[0] // 2)
+         radius = min(center[0], center[1])
+         draw.ellipse((center[0] - radius, center[1] - radius,
+                       center[0] + radius, center[1] + radius), fill=(0, 0, 0, 0))
+         del draw
+         # Save the mask to a BytesIO object as a PNG file
+         with io.BytesIO() as output:
+             mask_img.save(output, format='PNG')
+             binary_mask = output.getvalue()
+
+         imageURL = ""
+         try:
+             imagesURLS = openai.Image.create_edit(
+                 image=binary_image,
+                 mask=binary_mask,
+                 prompt=prompt,
+                 n=number,
+                 size=size
+             )
+             imageURL = imagesURLS["data"][0]["url"]
+         except Exception as e:
+             print(f'{PACKAGE_NAME}:openAi_Image_Edit_O:', e)
+             imageURL = "https://i.imgur.com/removed.png"
+
+         image = requests.get(imageURL).content
+         i = Image.open(io.BytesIO(image))
+         image = i.convert("RGBA")
+         image = np.array(image).astype(np.float32) / 255.0
+         image = torch.from_numpy(image)[None,]
+         if 'A' in i.getbands():
+             mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0
+             mask = 1. - torch.from_numpy(mask)
+         else:
+             mask = torch.zeros((image.shape[1], image.shape[2]),
+                                dtype=torch.float32, device="cpu")
+         return (image, mask)
+
+
+ class openAi_Image_variation_O:
+     """
+     Create a variation of an image using the OpenAI image-variation endpoint
+     """
+     # Define the input types for the node
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "openai": ("OPENAI", ),
+                 "image": ("IMAGE",),
+                 "number": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}),
+                 "size": (["256x256", "512x512", "1024x1024"], {"default": "256x256"}),
+             },
+             "optional": {
+                 "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
+             }
+         }
+     # Define the return type of the node
+     RETURN_TYPES = ("IMAGE", "MASK")
+     FUNCTION = "fun"  # Define the function name for the node
+     OUTPUT_NODE = True
+     # Define the category for the node
+     CATEGORY = "O/OpenAI/Advanced/Image"
+
+     def fun(self, openai, image, number, size, seed):
+         # Request an image variation from the OpenAI Image API
+         openai = openai["openai"]
+         number = 1  # only the first returned image is used, so always request one
+
+         # Convert the PyTorch tensor to a PIL image
+         image = image[0]
+         i = 255. * image.cpu().numpy()
+         img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
+         # Save the image to a BytesIO object as a PNG file
+         with io.BytesIO() as output:
+             img.save(output, format='PNG')
+             binary_image = output.getvalue()
+
+         imageURL = ""
+         try:
+             imagesURLS = openai.Image.create_variation(
+                 image=binary_image,
+                 n=number,
+                 size=size
+             )
+             imageURL = imagesURLS["data"][0]["url"]
+         except Exception as e:
+             print(f'{PACKAGE_NAME}:openAi_Image_variation_O:', e)
+             imageURL = "https://i.imgur.com/removed.png"
+
+         image = requests.get(imageURL).content
+         i = Image.open(io.BytesIO(image))
+         image = i.convert("RGBA")
+         image = np.array(image).astype(np.float32) / 255.0
+         image = torch.from_numpy(image)[None,]
+         if 'A' in i.getbands():
+             mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0
+             mask = 1. - torch.from_numpy(mask)
+         else:
+             mask = torch.zeros((image.shape[1], image.shape[2]),
+                                dtype=torch.float32, device="cpu")
+         return (image, mask)
+ # endregion Image
+ # endregion advanced
+ # endregion openAI
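
All three image nodes above turn the downloaded RGBA image's alpha channel into a ComfyUI-style mask the same way: scale to [0, 1], then invert, so opaque pixels become 0.0 ("keep") and transparent pixels become 1.0. Stripped of the numpy/torch machinery, the arithmetic reduces to:

```python
def alpha_to_mask(alpha_values):
    # alpha 255 (opaque) -> mask 0.0; alpha 0 (transparent) -> mask 1.0,
    # mirroring the `1. - alpha/255` step in the nodes above
    return [1.0 - a / 255.0 for a in alpha_values]

print(alpha_to_mask([255, 0]))  # [0.0, 1.0]
```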
+
+
+ # region latentTools
+
+
+ class LatentUpscaleFactor_O:
+     """
+     Upscale the latent code by multiplying the width and height by a factor
+     """
+     upscale_methods = ["nearest-exact", "bilinear", "area"]
+     crop_methods = ["disabled", "center"]
+
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "samples": ("LATENT",),
+                 "upscale_method": (cls.upscale_methods,),
+                 "WidthFactor": ("FLOAT", {"default": 1.25, "min": 0.0, "max": 10.0, "step": 0.28125}),
+                 "HeightFactor": ("FLOAT", {"default": 1.25, "min": 0.0, "max": 10.0, "step": 0.28125}),
+                 "crop": (cls.crop_methods,),
+             }
+         }
+
+     RETURN_TYPES = ("LATENT",)
+     FUNCTION = "upscale"
+     CATEGORY = "O/latent"
+
+     def upscale(self, samples, upscale_method, WidthFactor, HeightFactor, crop):
+         s = samples.copy()
+         x = samples["samples"].shape[3]
+         y = samples["samples"].shape[2]
+
+         new_x = min(int(x * WidthFactor), MAX_RESOLUTION)
+         new_y = min(int(y * HeightFactor), MAX_RESOLUTION)
+
+         print(f'{PACKAGE_NAME}:upscale from ({x*8},{y*8}) to ({new_x*8},{new_y*8})')
+
+         s["samples"] = comfy.utils.common_upscale(
+             samples["samples"], enforce_mul_of_64(new_x), enforce_mul_of_64(new_y),
+             upscale_method, crop
+         )
+         return (s,)
+
+
+ class LatentUpscaleFactorSimple_O:
+     """
+     Upscale the latent code by multiplying the width and height by a factor
+     """
+     upscale_methods = ["nearest-exact", "bilinear", "area"]
+     crop_methods = ["disabled", "center"]
+
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "samples": ("LATENT",),
+                 "upscale_method": (cls.upscale_methods,),
+                 "factor": ("FLOAT", {"default": 1.25, "min": 0.0, "max": 10.0, "step": 0.28125}),
+                 "crop": (cls.crop_methods,),
+             }
+         }
+
+     RETURN_TYPES = ("LATENT",)
+     FUNCTION = "upscale"
+     CATEGORY = "O/latent"
+
+     def upscale(self, samples, upscale_method, factor, crop):
+         s = samples.copy()
+         x = samples["samples"].shape[3]
+         y = samples["samples"].shape[2]
+
+         new_x = min(int(x * factor), MAX_RESOLUTION)
+         new_y = min(int(y * factor), MAX_RESOLUTION)
+
+         print(f'{PACKAGE_NAME}:upscale from ({x*8},{y*8}) to ({new_x*8},{new_y*8})')
+
+         s["samples"] = comfy.utils.common_upscale(
+             samples["samples"], enforce_mul_of_64(new_x), enforce_mul_of_64(new_y),
+             upscale_method, crop
+         )
+         return (s,)
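
Both factor nodes share the same size arithmetic: scale the latent width and height, clamp to MAX_RESOLUTION, then snap via the suite's `enforce_mul_of_64` helper before upscaling (each latent unit decodes to 8 pixels, hence the `*8` in the log line). A sketch of the scale-and-clamp step, with an assumed MAX_RESOLUTION value and the 64-snapping left to the helper:

```python
MAX_RESOLUTION = 8192  # assumed value; the suite imports the real constant

def scaled_latent_dims(x, y, width_factor, height_factor):
    # x, y are latent-space sizes; each latent unit decodes to 8 pixels
    new_x = min(int(x * width_factor), MAX_RESOLUTION)
    new_y = min(int(y * height_factor), MAX_RESOLUTION)
    return (new_x, new_y), (new_x * 8, new_y * 8)

# A 512x512 image has a 64x64 latent; a 1.25 factor targets 640x640 pixels.
print(scaled_latent_dims(64, 64, 1.25, 1.25))  # ((80, 80), (640, 640))
```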
+
+
+ class SelectLatentImage_O:
+     """
+     Select a single image from a batch of generated latent images.
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "samples": ("LATENT",),
+                 "index": ("INT", {"default": 0, "min": 0}),
+             }
+         }
+
+     RETURN_TYPES = ("LATENT",)
+     FUNCTION = "fun"
+     CATEGORY = "O/latent"
+
+     def fun(self, samples, index):
+         # Get the batch size and number of channels
+         batch_size, num_channels, height, width = samples["samples"].shape
+
+         # Clamp the index to the last image in the batch
+         if index >= batch_size:
+             index = batch_size - 1
+
+         # Select the specified image, keeping the batch dimension
+         selected_image = samples["samples"][index].unsqueeze(0)
+
+         return ({"samples": selected_image},)
+
+
+ class VAEDecodeParallel_O:
+     def __init__(self, device="cpu"):
+         self.device = device
+         self.device_count = torch.cuda.device_count() if device != "cpu" else 1
+         self.module = VAEDecodeOriginal(device)
+         self.net = nn.DataParallel(self.module)
+
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {"samples": ("LATENT", ), "vae": ("VAE", )}}
+
+     RETURN_TYPES = ("IMAGE",)
+     FUNCTION = "decode_parallel"
+     CATEGORY = "latent"
+
+     def decode_parallel(self, vae, samples):
+         batch_size = samples["samples"].shape[0]
+         # NOTE: the output buffer is hard-coded to 256x256; this node stays
+         # disabled in the mappings below until that limitation is lifted.
+         images = torch.zeros((batch_size, 3, 256, 256)).to(self.device)
+
+         for i in range(0, batch_size, self.device_count):
+             batch_samples = samples["samples"][i:i + self.device_count].to(self.device)
+             batch_images = self.net(vae, {"samples": batch_samples})[0].to(self.device)
+             images[i:i + self.device_count] = batch_images
+
+         return (images,)
+
+
+ class VAEDecodeOriginal:
+     def __init__(self, device="cpu"):
+         self.device = device
+
+     @classmethod
+     def INPUT_TYPES(s):
+         return {"required": {"samples": ("LATENT", ), "vae": ("VAE", )}}
+
+     RETURN_TYPES = ("IMAGE",)
+     FUNCTION = "decode"
+     CATEGORY = "latent"
+
+     def decode(self, vae, samples):
+         return (vae.decode(samples["samples"]), )
+
+ # endregion latentTools
+
+ # region TextTools
+
+
+ class seed2String_O:
+     """
+     Converts a seed to a string // can be used to force the system to read a
+     string again if it got combined with it
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {"seed": ("SEED",)}}
+
+     RETURN_TYPES = ("STRING",)
+     FUNCTION = "fun"
+     CATEGORY = "O/utils"
+
+     def fun(self, seed):
+         return (str(seed),)
+
+
+ class saveTextToFile_O:
+     def __init__(self):
+         pass
+
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "text": ("STRING", {"default": '', "multiline": False, "defaultBehavior": "input"}),
+                 "filename": ("STRING", {"default": "log.txt", "multiline": False}),
+             },
+             "optional": {
+                 "append": (["true", "false"], {"default": "true"})
+             }
+         }
+
+     OUTPUT_NODE = True
+     RETURN_TYPES = ()
+     FUNCTION = "fun"
+     CATEGORY = "O/text"
+
+     def fun(self, text, filename, append="true"):
+         # prepend the date and time
+         current_time = time.strftime("%d/%m/%Y %H:%M:%S")  # dd/mm/YY H:M:S
+         textToSave = f'{current_time}: \n'
+         # then the text on its own line
+         textToSave += f' {text} \n\n'
+
+         self.saveTextToFile(textToSave, filename, append == "true")
+
+         return ()
+
+     def saveTextToFile(self, text, filename, append):
+         saveDir = os.path.join(SUIT_DIR, "output")
+         saveFile = os.path.join(saveDir, filename)
+
+         # Create the directory if it does not exist
+         if not os.path.exists(saveDir):
+             os.makedirs(saveDir)
+
+         # Write to the file ("a" appends, "w" overwrites)
+         mode = "a" if append else "w"
+         try:
+             with open(saveFile, mode, encoding="utf-8") as f:
+                 f.write(text)
+         except OSError as e:
+             print(f'{PACKAGE_NAME}:error writing to file {saveFile}: {e}')
+
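
The save node writes a timestamped entry and either appends or overwrites. A stdlib-only sketch of the same write path (the directory layout here is illustrative, not the suite's actual SUIT_DIR):

```python
import os
import tempfile
import time

def save_text(text, path, append=True):
    # Timestamped entry in the same "dd/mm/YYYY H:M:S:\n <text>\n\n" shape
    # the node uses; "a" appends, "w" overwrites.
    entry = f"{time.strftime('%d/%m/%Y %H:%M:%S')}: \n {text} \n\n"
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "a" if append else "w", encoding="utf-8") as f:
        f.write(entry)

log = os.path.join(tempfile.mkdtemp(), "output", "log.txt")
save_text("first prompt", log)
save_text("second prompt", log)
contents = open(log, encoding="utf-8").read()
print(contents.count("prompt"))  # 2
```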
+
+ fonts = None
+
+
+ def loadFonts():
+     global fonts
+     if fonts is not None:
+         return fonts
+     fonts_filepath = os.path.join(SUIT_DIR, "fonts")
+     try:
+         fonts = []
+         for file in os.listdir(fonts_filepath):
+             if file.lower().endswith((".ttf", ".otf", ".ttc")):
+                 fonts.append(file)
+     except OSError:
+         fonts = []
+
+     if len(fonts) == 0:
+         print(f'{PACKAGE_NAME}:no fonts found in {fonts_filepath}')
+         fonts = ["Arial.ttf"]
+     return fonts
+
+
+ class Text2Image_O:
+     """
+     Converts a string to an image
+     """
+
+     def __init__(self):
+         self.font_filepath = os.path.join(SUIT_DIR, "fonts")
+
+     @classmethod
+     def INPUT_TYPES(s):
+         return {
+             "required": {
+                 "text": ("STRING", {"multiline": True}),
+                 "font": (loadFonts(), {"default": loadFonts()[0], }),
+                 "size": ("INT", {"default": 36, "min": 0, "max": 255, "step": 1}),
+                 "font_R": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
+                 "font_G": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
+                 "font_B": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
+                 "font_A": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
+                 "background_R": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
+                 "background_G": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
+                 "background_B": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
+                 "background_A": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
+                 "width": ("INT", {"default": 128, "min": 0, "step": 1}),
+                 "height": ("INT", {"default": 128, "min": 0, "step": 1}),
+                 "expand": (["true", "false"], {"default": "true"}),
+                 "x": ("INT", {"default": 0, "min": -100, "step": 1}),
+                 "y": ("INT", {"default": 0, "min": -100, "step": 1}),
+             }
+         }
+
+     RETURN_TYPES = ("IMAGE",)
+     FUNCTION = "create_image_new"
+     OUTPUT_NODE = False
+     CATEGORY = "O/text"
+
+     def create_image_new(self, text, font, size, font_R, font_G, font_B, font_A, background_R, background_G, background_B, background_A, width, height, expand, x, y):
+         font_color = (font_R, font_G, font_B, font_A)
+         background_color = (background_R, background_G, background_B, background_A)
+
+         font_path = os.path.join(self.font_filepath, font)
+         font = ImageFont.truetype(font_path, size)
+
+         # Measure the text on a throwaway 1x1 canvas
+         image = Image.new('RGBA', (1, 1), color=background_color)
+         draw = ImageDraw.Draw(image)
+         text_width, text_height = draw.textsize(text, font=font)
+
+         # Grow the canvas to fit the text if requested
+         if expand == "true":
+             if width < text_width:
+                 width = text_width
+             if height < text_height:
+                 height = text_height
+
+         width = enforce_mul_of_64(width)
+         height = enforce_mul_of_64(height)
+
+         # Create the final canvas and drawing context
+         image = Image.new('RGBA', (width, height), color=background_color)
+         draw = ImageDraw.Draw(image)
+
+         # Center the text on (x, y), clamped so it stays inside the canvas
+         text_x = min(max(x - text_width / 2, 0), width - text_width)
+         text_y = min(max(y - text_height / 2, 0), height - text_height)
+
+         # Draw the text on the image
+         draw.text((text_x, text_y), text, fill=font_color, font=font)
+
+         # Convert the PIL image to a tensor
+         image_np = np.array(image).astype(np.float32) / 255.0
+         image_tensor = torch.from_numpy(image_np).unsqueeze(0)
+         return (image_tensor,)
+
+ # region text/NSP
+
+
+ nspterminology = None  # Cache the NSP terminology
+
+
+ def loadNSP():
+     global nspterminology
+     if nspterminology is not None:
+         return nspterminology
+     # Fetch the NSP Pantry
+     local_pantry = os.path.join(SUIT_DIR, "nsp_pantry.json")
+
+     if not os.path.exists(local_pantry):
+         print(f'{PACKAGE_NAME}:downloading NSP')
+         response = urlopen(
+             'https://raw.githubusercontent.com/WASasquatch/noodle-soup-prompts/main/nsp_pantry.json')
+         tmp_pantry = json.loads(response.read())
+         # Dump the JSON locally
+         pantry_serialized = json.dumps(tmp_pantry, indent=4)
+         with open(local_pantry, "w") as f:
+             f.write(pantry_serialized)
+         del response, tmp_pantry
+
+     # Load the local pantry
+     with open(local_pantry, 'r') as f:
+         nspterminology = json.load(f)
+
+     print(f'{PACKAGE_NAME}:NSP ready')
+     return nspterminology
+
+
+ class RandomNSP_O:
+     @classmethod
+     def loadCategories(s):
+         return list(loadNSP())
+
+     @classmethod
+     def INPUT_TYPES(s):
+         return {"required": {
+             "terminology": (s.loadCategories(),),
+             "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
+         }}
+
+     RETURN_TYPES = ("STRING",)
+     FUNCTION = "fun"
+     CATEGORY = "O/text/NSP"
+
+     def fun(self, terminology, seed):
+         nspterminology = loadNSP()
+         # Seed the RNG so the same seed always picks the same term
+         random.seed(seed)
+         result = random.choice(nspterminology[terminology])
+         return (result,)
+
+
+ class ConcatRandomNSP_O:
+     @classmethod
+     def loadCategories(s):
+         return list(loadNSP())
+
+     @classmethod
+     def INPUT_TYPES(s):
+         return {"required": {
+             "text": ("STRING", {"multiline": False, "defaultBehavior": "input"}),
+             "terminology": (s.loadCategories(),),
+             "separator": ("STRING", {"multiline": False, "default": ","}),
+             "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
+         }}
+
+     RETURN_TYPES = ("STRING",)
+     FUNCTION = "fun"
+     CATEGORY = "O/text/NSP"
+
+     def fun(self, text, terminology, separator, seed):
+         nspterminology = loadNSP()
+         # Seed the RNG so the same seed always picks the same term
+         random.seed(seed)
+         result = random.choice(nspterminology[terminology])
+         return (text + separator + result + separator,)
+ # endregion text/NSP
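
Both NSP nodes seed Python's global random before `random.choice`, so a fixed seed always yields the same term, which keeps queue re-runs reproducible. The pattern in isolation (the term list here is a made-up stand-in for a pantry category):

```python
import random

terms = ["cinematic", "watercolor", "isometric", "noir"]  # stand-in category

def pick(seed):
    # Reseeding before each choice makes the pick a pure function of the seed
    random.seed(seed)
    return random.choice(terms)

print(pick(42) == pick(42))  # True: same seed, same term
```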
+
+
+ # region debug text
+
+
+ class DebugText_O:
+     """
+     Writes its input text to the console
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "text": ("STRING", {"multiline": False, "defaultBehavior": "input"}),
+             "prefix": ("STRING", {"default": "debug", "multiline": False}),
+         }}
+
+     RETURN_TYPES = ()
+     FUNCTION = "debug_string"
+     OUTPUT_NODE = True
+     CATEGORY = "O/debug/text"
+
+     @staticmethod
+     def debug_string(text, prefix):
+         print(f'{PACKAGE_NAME}:{prefix}:{text}')
+         return ()
+
+
+ class DebugTextRoute_O:
+     """
+     Writes its input text to the console and passes it through
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "text": ("STRING", {"multiline": False, "defaultBehavior": "input"}),
+             "prefix": ("STRING", {"default": "debug", "multiline": False}),
+         }}
+
+     RETURN_TYPES = ("STRING",)
+     FUNCTION = "debug_string"
+     CATEGORY = "O/debug/text"
+
+     @staticmethod
+     def debug_string(text, prefix):
+         print(f'{PACKAGE_NAME}:{prefix}:{text}')
+         return (text,)
+
+
+ # endregion
+
+ # region text/operations
+
+
+ class concat_text_O:
+     """
+     Concatenates two strings with a separator
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "text1": ("STRING", {"multiline": True, "defaultBehavior": "input"}),
+             "text2": ("STRING", {"multiline": True, "defaultBehavior": "input"}),
+             "separator": ("STRING", {"multiline": False, "default": ","}),
+         }}
+
+     RETURN_TYPES = ("STRING",)
+     FUNCTION = "fun"
+     CATEGORY = "O/text/operations"
+
+     @staticmethod
+     def fun(text1, text2, separator):
+         return (text1 + separator + text2,)
+
+
+ class trim_text_O:
+     """
+     Trims whitespace from both ends of a string
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "text": ("STRING", {"multiline": False, "defaultBehavior": "input"}),
+         }}
+
+     RETURN_TYPES = ("STRING",)
+     FUNCTION = "fun"
+     CATEGORY = "O/text/operations"
+
+     def fun(self, text):
+         return (text.strip(),)
+
+
+ class replace_text_O:
+     """
+     Replaces every occurrence of one substring with another
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "text": ("STRING", {"multiline": True, "defaultBehavior": "input"}),
+             "old": ("STRING", {"multiline": False}),
+             "new": ("STRING", {"multiline": False})
+         }}
+
+     RETURN_TYPES = ("STRING",)
+     FUNCTION = "fun"
+     CATEGORY = "O/text/operations"
+
+     @staticmethod
+     def fun(text, old, new):
+         return (text.replace(old, new),)
+ # endregion
+ # endregion TextTools
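
The three operation nodes map one-to-one onto Python string operations; chained together in a workflow they behave like:

```python
def concat(text1, text2, separator=","):
    return text1 + separator + text2

def trim(text):
    return text.strip()

def replace(text, old, new):
    return text.replace(old, new)

# concat -> trim -> replace, as three chained nodes would do
prompt = replace(trim(concat("  a cat", "oil painting ")), "cat", "dog")
print(prompt)  # a dog,oil painting
```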
+
+
+ # region Image
+
+
+ def upscaleImage(image, upscale_method, WidthFactor, HeightFactor, crop, MulOf64):
+     samples = image.movedim(-1, 1)
+     height = min(HeightFactor * samples.shape[2], MAX_RESOLUTION)
+     width = min(WidthFactor * samples.shape[3], MAX_RESOLUTION)
+
+     if MulOf64 == "enabled":
+         width = enforce_mul_of_64(width)
+         height = enforce_mul_of_64(height)
+
+     width = int(width)
+     height = int(height)
+     print(
+         f'{PACKAGE_NAME}:upscale from ({samples.shape[3]},{samples.shape[2]}) to ({width},{height})')
+     s = comfy.utils.common_upscale(
+         samples, width, height, upscale_method, crop)
+     s = s.movedim(1, -1)
+     return (s,)
+
+
+ class ImageScaleFactorSimple_O:
+     upscale_methods = ["nearest-exact", "bilinear", "area"]
+     crop_methods = ["disabled", "center"]
+     toggle = ["enabled", "disabled"]
+
+     @classmethod
+     def INPUT_TYPES(s):
+         return {"required": {"image": ("IMAGE",),
+                              "upscale_method": (s.upscale_methods,),
+                              "Factor": ("FLOAT", {"default": 1.25, "min": 0.0, "max": 10.0, "step": 0.28125}),
+                              "MulOf64": (s.toggle, {"default": "enabled"}),
+                              "crop": (s.crop_methods,)
+                              }}
+
+     RETURN_TYPES = ("IMAGE",)
+     FUNCTION = "upscale"
+     CATEGORY = "O/image"
+
+     def upscale(self, image, upscale_method, Factor, crop, MulOf64):
+         return upscaleImage(image, upscale_method, Factor, Factor, crop, MulOf64)
+
+
+ class ImageScaleFactor_O:
+     upscale_methods = ["nearest-exact", "bilinear", "area"]
+     crop_methods = ["disabled", "center"]
+     toggle = ["enabled", "disabled"]
+
+     @classmethod
+     def INPUT_TYPES(s):
+         return {"required": {"image": ("IMAGE",),
+                              "upscale_method": (s.upscale_methods,),
+                              "WidthFactor": ("FLOAT", {"default": 1.25, "min": 0.0, "max": 10.0, "step": 0.28125}),
+                              "HeightFactor": ("FLOAT", {"default": 1.25, "min": 0.0, "max": 10.0, "step": 0.28125}),
+                              "MulOf64": (s.toggle, {"default": "enabled"}),
+                              "crop": (s.crop_methods,)
+                              }}
+
+     RETURN_TYPES = ("IMAGE",)
+     FUNCTION = "upscale"
+     CATEGORY = "O/image"
+
+     def upscale(self, image, upscale_method, WidthFactor, HeightFactor, crop, MulOf64):
+         return upscaleImage(image, upscale_method, WidthFactor, HeightFactor, crop, MulOf64)
+
+ # endregion
+
+ # region numbers
+
+
+ def solveEquation(equation):
+     # Evaluate the expression with Python's eval. Note: eval runs arbitrary
+     # Python code, so equations should only come from trusted workflow authors.
+     try:
+         answer = eval(equation)
+     except Exception as e:
+         print(f'{PACKAGE_NAME}: equation is not valid: {equation} error: {e}')
+         answer = "NAN"
+
+     return answer
+
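
solveEquation hands the string directly to eval, which can execute arbitrary Python. If you reuse the idea elsewhere, evaluating with an empty builtins namespace is a cheap way to narrow what an expression can reach. This is a sketch, not part of the suite, and not a full sandbox:

```python
def safe_solve(expression):
    # Evaluate a plain arithmetic expression with builtins stripped out.
    # Blocks names like open/__import__, though determined payloads can
    # still escape; treat it as defense in depth, not a sandbox.
    try:
        return eval(expression, {"__builtins__": {}}, {})
    except Exception:
        return "NAN"

print(safe_solve("(3)*2 + 1"))   # 7
print(safe_solve("open('x')"))   # NAN (open is not available)
```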
+
+ class applyEquation1param_O:
+     """
+     Applies a one-variable equation to x
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "x": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 0xffffffffffffffff, "defaultBehavior": "input"}),
+             "equation": ("STRING", {"multiline": True, "default": "x*1"}),
+         }
+         }
+
+     RETURN_TYPES = ("FLOAT", "INT",)
+     FUNCTION = "fun"
+     CATEGORY = "O/numbers"
+
+     def fun(self, x, equation):
+         equation = equation.replace("x", "(" + str(x) + ")")
+         answer = solveEquation(equation)
+         as_int = int(answer) if answer != "NAN" else 0
+         return (answer, as_int, )
+
+
+ class applyEquation2params_O:
+     """
+     Applies up to two two-variable equations to x and y
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "x": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 0xffffffffffffffff, "defaultBehavior": "input"}),
+             "y": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 0xffffffffffffffff, "defaultBehavior": "input"}),
+             "equation": ("STRING", {"multiline": True, "default": "x+y"}),
+         },
+             "optional": {
+             "equation_2": ("STRING", {"multiline": True, "default": "x+y"}),
+         }
+         }
+
+     RETURN_TYPES = ("FLOAT", "INT", "FLOAT", "INT")
+     FUNCTION = "fun"
+     CATEGORY = "O/numbers"
+
+     def fun(self, x, y, equation, equation_2=""):
+         answer = 0.0
+         answer_2 = 0.0
+
+         if equation != "":
+             equation = equation.replace("x", "(" + str(x) + ")")
+             equation = equation.replace("y", "(" + str(y) + ")")
+             answer = solveEquation(equation)
+
+         if equation_2 != "":
+             equation_2 = equation_2.replace("x", "(" + str(x) + ")")
+             equation_2 = equation_2.replace("y", "(" + str(y) + ")")
+             answer_2 = solveEquation(equation_2)
+
+         as_int = int(answer) if answer != "NAN" else 0
+         as_int_2 = int(answer_2) if answer_2 != "NAN" else 0
+         return (answer, as_int, answer_2, as_int_2,)
+
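
Both equation nodes substitute the numeric inputs textually, wrapping each value in parentheses so negative numbers and fractions keep correct operator precedence before eval. The substitution step on its own (note the caveat in the comment):

```python
def substitute(equation, x, y):
    # Parenthesized textual substitution, as the equation nodes do it.
    # Caveat: every literal "x"/"y" character is replaced, so the
    # expression must not contain those letters inside function names.
    return equation.replace("x", "(" + str(x) + ")").replace("y", "(" + str(y) + ")")

expr = substitute("x*y + x", 3, -2)
print(expr)        # (3)*(-2) + (3)
print(eval(expr))  # -3
```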
+
+ class floatToInt_O:
+     """
+     Converts a float to an int
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "float": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 0xffffffffffffffff, "defaultBehavior": "input"}),
+         }
+         }
+
+     RETURN_TYPES = ("INT",)
+     FUNCTION = "fun"
+     CATEGORY = "O/numbers"
+
+     def fun(self, float):
+         return (int(float),)
+
+
+ class intToFloat_O:
+     """
+     Converts an int to a float
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "int": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "defaultBehavior": "input"}),
+         }
+         }
+
+     RETURN_TYPES = ("FLOAT",)
+     FUNCTION = "fun"
+     CATEGORY = "O/numbers"
+
+     def fun(self, int):
+         return (float(int),)
+
+
+ class floatToText_O:
+     """
+     Converts a float to text
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "float": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 0xffffffffffffffff, "defaultBehavior": "input"}),
+         }
+         }
+
+     RETURN_TYPES = ("STRING",)
+     FUNCTION = "fun"
+     CATEGORY = "O/numbers"
+
+     def fun(self, float):
+         return (str(float),)
+
+
+ class GetImageWidthAndHeight_O:
+     """
+     Returns the width and height of an image in pixels
+     """
+     @classmethod
+     def INPUT_TYPES(s):
+         return {"required": {"image": ("IMAGE",),
+                              }
+                 }
+
+     RETURN_TYPES = ("INT", "INT")
+     FUNCTION = "fun"
+     CATEGORY = "O/numbers"
+
+     def fun(self, image):
+         samples = image.movedim(-1, 1)
+         height = samples.shape[2]
+         width = samples.shape[3]
+         return (int(width), int(height),)
+
+
+ class GetLatentWidthAndHeight_O:
+     """
+     Returns the width and height of a latent, in latent units
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "samples": ("LATENT",),
+             }
+         }
+
+     RETURN_TYPES = ("INT", "INT",)
+     FUNCTION = "fun"
+     CATEGORY = "O/numbers"
+
+     def fun(self, samples):
+         w = samples["samples"].shape[3]
+         h = samples["samples"].shape[2]
+         return (int(w), int(h),)
+
+ # endregion
+
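
ComfyUI latents are shaped (batch, channels, height, width) in latent units, each unit decoding to 8 pixels; the two getter nodes simply read those axes. Over a plain shape tuple:

```python
def latent_dims(shape):
    # shape mirrors samples["samples"].shape: (batch, channels, H, W)
    batch, channels, h, w = shape
    return {"latent": (w, h), "pixels": (w * 8, h * 8)}

print(latent_dims((1, 4, 64, 96)))  # {'latent': (96, 64), 'pixels': (768, 512)}
```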
+
+ # region Utils
+
+
+ class Text_O:
+     """
+     Provides a text value to other nodes
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "text": ("STRING", {"multiline": True}),
+             }
+         }
+
+     RETURN_TYPES = ("STRING",)
+     FUNCTION = "fun"
+     CATEGORY = "O/utils"
+
+     def fun(self, text):
+         return (text + " ",)
+
+
+ class seed_O:
+     """
+     Provides a seed value to other nodes
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {"seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), }}
+
+     RETURN_TYPES = ("INT",)
+     FUNCTION = "fun"
+     CATEGORY = "O/utils"
+
+     def fun(self, seed):
+         return (seed,)
+
+
+ class int_O:
+     """
+     Provides an int value to other nodes
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {"int": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), }}
+
+     RETURN_TYPES = ("INT",)
+     FUNCTION = "fun"
+     CATEGORY = "O/utils"
+
+     def fun(self, int):
+         return (int,)
+
+
+ class float_O:
+     """
+     Provides a float value to other nodes
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {"float": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 0xffffffffffffffff}), }}
+
+     RETURN_TYPES = ("FLOAT",)
+     FUNCTION = "fun"
+     CATEGORY = "O/utils"
+
+     def fun(self, float):
+         return (float,)
+
+
+ class Note_O:
+     @classmethod
+     def INPUT_TYPES(s):
+         return {"required": {"text": ("STRING", {"multiline": True})}}
+
+     RETURN_TYPES = ()
+     FUNCTION = "fun"
+     OUTPUT_NODE = True
+     CATEGORY = "O/utils"
+
+     def fun(self, text):
+         return ()
+ # endregion
+
1587
+
1588
+ # Define the node class mappings
1589
+ NODE_CLASS_MAPPINGS = {
1590
+ # openAITools------------------------------------------
1591
+ "ChatGPT Simple _O": O_ChatGPT_O,
1592
+ "ChatGPT compact _O": O_ChatGPT_medium_O,
1593
+ # openAiTools > Advanced
1594
+ "load_openAI _O": load_openAI_O,
1595
+ # openAiTools > Advanced > ChatGPT
1596
+ "Chat_Message _O": openAi_chat_message_O,
1597
+ "combine_chat_messages _O": openAi_chat_messages_Combine_O,
1598
+ "Chat completion _O": openAi_chat_completion_O,
1599
+ # openAiTools > Advanced > image
1600
+ "create image _O": openAi_Image_create_O,
1601
+ # "Edit_image _O": openAi_Image_Edit, # coming soon
1602
+ "variation_image _O": openAi_Image_variation_O,
1603
+ # latentTools------------------------------------------
1604
+ "LatentUpscaleFactor _O": LatentUpscaleFactor_O,
1605
+ "LatentUpscaleFactorSimple _O": LatentUpscaleFactorSimple_O,
1606
+ "selectLatentFromBatch _O": SelectLatentImage_O,
1607
+ # "VAEDecodeParallel _O": VAEDecodeParallel_O, # coming soon
1608
+ # StringTools------------------------------------------
1609
+ "RandomNSP _O": RandomNSP_O,
1610
+ "ConcatRandomNSP_O": ConcatRandomNSP_O,
1611
+ "Concat Text _O": concat_text_O,
1612
+ "Trim Text _O": trim_text_O,
1613
+ "Replace Text _O": replace_text_O,
1614
+ "saveTextToFile _O": saveTextToFile_O,
1615
+ "Text2Image _O": Text2Image_O,
1616
+ # ImageTools------------------------------------------
1617
+ "ImageScaleFactor _O": ImageScaleFactor_O,
1618
+ "ImageScaleFactorSimple _O": ImageScaleFactorSimple_O,
1619
+ # NumberTools------------------------------------------
1620
+ "Equation1param _O": applyEquation1param_O,
1621
+ "Equation2params _O": applyEquation2params_O,
1622
+ "floatToInt _O": floatToInt_O,
1623
+ "intToFloat _O": intToFloat_O,
1624
+ "floatToText _O": floatToText_O,
1625
+ "GetImage_(Width&Height) _O": GetImageWidthAndHeight_O,
1626
+ "GetLatent_(Width&Height) _O": GetLatentWidthAndHeight_O,
1627
+ # debug------------------------------------------
1628
+ "debug messages_O": DebugOpenAIChatMEssages_O,
1629
+ "debug Completeion _O": DebugOpenAIChatCompletion_O,
1630
+ "Debug Text _O": DebugText_O,
1631
+ "Debug Text route _O": DebugTextRoute_O,
1632
+ # Utils------------------------------------------
1633
+ "Note _O": Note_O,
1634
+ "Text _O": Text_O,
1635
+ "seed _O": seed_O,
1636
+ "int _O": int_O,
1637
+ "float _O": float_O,
1638
+ }
src/QualityOfLife_deprecatedNodes.py ADDED
@@ -0,0 +1,543 @@
+ # Developed by Omar - https://github.com/omar92
+ # https://civitai.com/user/omar92
+ # discord: Omar92#3374
+
+ ###
+ #
+ # All nodes in this file are deprecated and will be removed in the future; they are kept here for backward compatibility
+ #
+ ###
+
+ import io
+ import os
+ import time
+ import numpy as np
+ import requests
+ import torch
+ from PIL import Image, ImageFont, ImageDraw
+ import importlib
+ import comfy.samplers
+ import comfy.sd
+ import comfy.utils
+
+ # -------------------------------------------------------------------------------------------
+ # region deprecated
+ # region openAITools
+
+
+ class O_ChatGPT_deprecated:
+     """
+     This node uses the openAI chat API to generate prompts
+     """
+     # Define the input types for the node
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 # Multiline string input for the prompt
+                 "prompt": ("STRING", {"multiline": True}),
+                 # File input for the API key
+                 "api_key_file": ("STRING", {"file": True, "default": "api_key.txt"})
+             }
+         }
+
+     RETURN_TYPES = ("STR",)  # Define the return type of the node
+     FUNCTION = "fun"  # Define the function name for the node
+     CATEGORY = "O/deprecated/OpenAI"  # Define the category for the node
+
+     def fun(self, api_key_file, prompt):
+         self.install_openai()  # Install the OpenAI module if not already installed
+         import openai  # Import the OpenAI module
+
+         # Get the API key from the file
+         api_key = self.get_api_key(api_key_file)
+
+         openai.api_key = api_key  # Set the API key for the OpenAI module
+
+         # Create a chat completion using the OpenAI module
+         completion = openai.ChatCompletion.create(
+             model="gpt-3.5-turbo",
+             messages=[
+                 {"role": "user", "content": "act as prompt generator ,i will give you text and you describe an image that match that text in details, answer with one response only"},
+                 {"role": "user", "content": prompt}
+             ]
+         )
+         # Get the answer from the chat completion
+         answer = completion["choices"][0]["message"]["content"]
+
+         return (
+             {
+                 "string": answer,  # Return the answer as a string
+             },
+         )
+
+     # Helper function to get the API key from the file
+     def get_api_key(self, api_key_file):
+         custom_nodes_dir = './custom_nodes/'  # Define the directory for the file
+         with open(custom_nodes_dir + api_key_file, 'r') as f:  # Open the file and read the API key
+             api_key = f.read().strip()
+         return api_key  # Return the API key
+
+     # Helper function to install the OpenAI module if not already installed
+     def install_openai(self):
+         try:
+             importlib.import_module('openai')
+         except ImportError:
+             import pip
+             pip.main(['install', 'openai'])
+ # region advanced
+
+
+ class load_openAI_deprecated:
+     """
+     This node will load the openAI module
+     """
+     # Define the input types for the node
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 # File input for the API key
+                 "api_key_file": ("STRING", {"file": True, "default": "api_key.txt"})
+             }
+         }
+     RETURN_TYPES = ("OPENAI",)  # Define the return type of the node
+     FUNCTION = "fun"  # Define the function name for the node
+     CATEGORY = "O/OpenAI/Advanced"  # Define the category for the node
+
+     def fun(self, api_key_file):
+         self.install_openai()  # Install the OpenAI module if not already installed
+         import openai  # Import the OpenAI module
+
+         # Get the API key from the file
+         api_key = self.get_api_key(api_key_file)
+         openai.api_key = api_key  # Set the API key for the OpenAI module
+
+         return (
+             {
+                 "openai": openai,  # Return the openAI module
+             },
+         )
+
+     # Helper function to install the OpenAI module if not already installed
+     def install_openai(self):
+         try:
+             importlib.import_module('openai')
+         except ImportError:
+             import pip
+             pip.main(['install', 'openai'])
+
+     # Helper function to get the API key from the file
+     def get_api_key(self, api_key_file):
+         custom_nodes_dir = './custom_nodes/'  # Define the directory for the file
+         with open(custom_nodes_dir + api_key_file, 'r') as f:  # Open the file and read the API key
+             api_key = f.read().strip()
+         return api_key  # Return the API key
+ # region ChatGPT
+
+
+ class openAi_chat_message_STR_deprecated:
+     """
+     Create a chat message for openAI chatGPT
+     """
+     # Define the input types for the node
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "role": (["user", "assistant", "system"], {"default": "user"}),
+                 "content": ("STR",),
+             }
+         }
+     # Define the return type of the node
+     RETURN_TYPES = ("OPENAI_CHAT_MESSAGES",)
+     FUNCTION = "fun"  # Define the function name for the node
+     # Define the category for the node
+     CATEGORY = "O/deprecated/OpenAI/Advanced/ChatGPT"
+
+     def fun(self, role, content):
+         return (
+             {
+                 "messages": [{"role": role, "content": content["string"], }]
+             },
+         )
+ # endregion ChatGPT
+ # region Image
+
+
+ class openAi_chat_messages_Combine_deprecated:
+     """
+     Combine chat messages into one tuple
+     """
+     # Define the input types for the node
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "message1": ("OPENAI_CHAT_MESSAGES", ),
+                 "message2": ("OPENAI_CHAT_MESSAGES", ),
+             }
+         }
+     # Define the return type of the node
+     RETURN_TYPES = ("OPENAI_CHAT_MESSAGES",)
+     FUNCTION = "fun"  # Define the function name for the node
+     # Define the category for the node
+     CATEGORY = "O/deprecated/OpenAI/Advanced/ChatGPT"
+
+     def fun(self, message1, message2):
+         messages = message1["messages"] + message2["messages"]  # combine messages
+
+         return (
+             {
+                 "messages": messages
+             },
+         )
+
+
+ class openAi_Image_create_deprecated:
+     """
+     Create an image using openai
+     """
+     # Define the input types for the node
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "openai": ("OPENAI", ),
+                 "prompt": ("STR",),
+                 "number": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}),
+                 "size": (["256x256", "512x512", "1024x1024"], {"default": "256x256"}),
+             }
+         }
+     # Define the return type of the node
+     RETURN_TYPES = ("IMAGE", "MASK")
+     FUNCTION = "fun"  # Define the function name for the node
+     OUTPUT_NODE = True
+     # Define the category for the node
+     CATEGORY = "O/deprecated/OpenAI/Advanced/Image"
+
+     def fun(self, openai, prompt, number, size):
+         # Create an image using the OpenAI module
+         openai = openai["openai"]
+         prompt = prompt["string"]
+         number = 1
+         imagesURLS = openai.Image.create(
+             prompt=prompt,
+             n=number,
+             size=size
+         )
+         imageURL = imagesURLS["data"][0]["url"]
+         print("imageURL:", imageURL)
+         image = requests.get(imageURL).content
+         i = Image.open(io.BytesIO(image))
+         image = i.convert("RGBA")
+         image = np.array(image).astype(np.float32) / 255.0
+         # image_np = np.transpose(image_np, (2, 0, 1))
+         image = torch.from_numpy(image)[None,]
+         if 'A' in i.getbands():
+             mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0
+             mask = 1. - torch.from_numpy(mask)
+         else:
+             mask = torch.zeros((64, 64), dtype=torch.float32, device="cpu")
+         print("image_tensor: done")
+         return (image, mask)
+
+
+ class openAi_chat_completion_deprecated:
+     """
+     Create a chat completion for openAI chatGPT
+     """
+     # Define the input types for the node
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "openai": ("OPENAI", ),
+                 "model": ("STRING", {"multiline": False, "default": "gpt-3.5-turbo"}),
+                 "messages": ("OPENAI_CHAT_MESSAGES", ),
+             }
+         }
+     # Define the return type of the node
+     RETURN_TYPES = ("STR", "OPENAI_CHAT_COMPLETION",)
+     FUNCTION = "fun"  # Define the function name for the node
+     OUTPUT_NODE = True
+     # Define the category for the node
+     CATEGORY = "O/deprecated/OpenAI/Advanced/ChatGPT"
+
+     def fun(self, openai, model, messages):
+         # Create a chat completion using the OpenAI module
+         openai = openai["openai"]
+         completion = openai.ChatCompletion.create(
+             model=model,
+             messages=messages["messages"]
+         )
+         # Get the answer from the chat completion
+         content = completion["choices"][0]["message"]["content"]
+         return (
+             {
+                 "string": content,  # Return the answer as a string
+             },
+             {
+                 "completion": completion,  # Return the chat completion
+             }
+         )
+
+
+ # endregion Image
+ # endregion advanced
+ # endregion openAI
+ # region StringTools
+
+
+ class O_String_deprecated:
+     """
+     A simple string node that can be used to hold user input as a string
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {"string": ("STRING", {"multiline": True})}}
+
+     RETURN_TYPES = ("STR",)
+     FUNCTION = "ostr"
+     CATEGORY = "O/deprecated/string"
+
+     @staticmethod
+     def ostr(string):
+         return ({"string": string},)
+
+
+ class DebugString_deprecated:
+     """
+     This node will write a string to the console
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {"string": ("STR",)}}
+
+     RETURN_TYPES = ()
+     FUNCTION = "debug_string"
+     OUTPUT_NODE = True
+     CATEGORY = "O/deprecated/string"
+
+     @staticmethod
+     def debug_string(string):
+         print("debugString:", string["string"])
+         return ()
+
+
+ class string2Image_deprecated:
+     """
+     This node will convert a string to an image
+     """
+
+     def __init__(self):
+         self.font_filepath = os.path.join(
+             os.path.dirname(os.path.realpath(__file__)), "fonts")
+
+     @classmethod
+     def INPUT_TYPES(s):
+         return {
+             "required": {
+                 "string": ("STR",),
+                 "font": ("STRING", {"default": "CALIBRI.TTF", "multiline": False}),
+                 "size": ("INT", {"default": 36, "min": 0, "max": 255, "step": 1}),
+                 "font_R": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
+                 "font_G": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
+                 "font_B": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
+                 "background_R": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
+                 "background_G": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
+                 "background_B": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
+             }
+         }
+
+     RETURN_TYPES = ("IMAGE",)
+     FUNCTION = "create_image"
+     OUTPUT_NODE = False
+     CATEGORY = "O/deprecated/string"
+
+     def create_image(self, string, font, size, font_R, font_G, font_B, background_R, background_G, background_B):
+         font_color = (font_R, font_G, font_B)
+         font = ImageFont.truetype(os.path.join(self.font_filepath, font), size)
+         mask_image = font.getmask(string["string"], "L")
+         image = Image.new("RGBA", mask_image.size,
+                           (background_R, background_G, background_B))
+         # need to use the inner `img.im.paste` due to `getmask` returning a core
+         image.im.paste(font_color, (0, 0) + mask_image.size, mask_image)
+
+         # Convert the PIL Image to a tensor
+         image_np = np.array(image).astype(np.float32) / 255.0
+         image_tensor = torch.from_numpy(image_np).unsqueeze(0)
+         return (image_tensor,)
+
+
+ class CLIPStringEncode_deprecated:
+     """
+     This node will encode a string with CLIP
+     """
+     @classmethod
+     def INPUT_TYPES(s):
+         return {"required": {
+             "string": ("STR",),
+             "clip": ("CLIP", )
+         }}
+     RETURN_TYPES = ("CONDITIONING",)
+     FUNCTION = "encode"
+
+     CATEGORY = "O/deprecated/string"
+
+     def encode(self, string, clip):
+         return ([[clip.encode(string["string"]), {}]], )
+ # region String/operations
+
+
+ class concat_String_deprecated:
+     """
+     This node will concatenate two strings together
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "string1": ("STR",),
+             "string2": ("STR",)
+         }}
+
+     RETURN_TYPES = ("STR",)
+     FUNCTION = "fun"
+     CATEGORY = "O/deprecated/string/operations"
+
+     @staticmethod
+     def fun(string1, string2):
+         return ({"string": string1["string"] + string2["string"]},)
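The deprecated string nodes all share one convention: values travel between nodes wrapped in a dict under the key `"string"` (the custom `STR` socket type), so every consumer unwraps `value["string"]` and re-wraps its result. A standalone sketch of that pattern (the function name is hypothetical, not part of the commit):

```python
def concat_wrapped(a, b):
    # Unwrap both inputs, concatenate, and re-wrap in the deprecated
    # "STR" shape: a 1-tuple holding a {"string": ...} dict.
    return ({"string": a["string"] + b["string"]},)

out = concat_wrapped({"string": "quality of "}, {"string": "life"})
```

This wrapping is what the non-deprecated `Text _O` / `Concat Text _O` nodes later dropped in favor of plain `STRING` values.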
+
+
+ class trim_String_deprecated:
+     """
+     This node will trim a string from the left and right
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "string": ("STR",),
+         }}
+
+     RETURN_TYPES = ("STR",)
+     FUNCTION = "fun"
+     CATEGORY = "O/deprecated/string/operations"
+
+     def fun(self, string):
+         return (
+             {
+                 "string": (string["string"].strip()),
+             },
+         )
+
+
+ class replace_String_deprecated:
+     """
+     This node will replace a substring with another string
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "string": ("STR",),
+             "old": ("STRING", {"multiline": False}),
+             "new": ("STRING", {"multiline": False})
+         }}
+
+     RETURN_TYPES = ("STR",)
+     FUNCTION = "fun"
+     CATEGORY = "O/deprecated/string/operations"
+
+     @staticmethod
+     def fun(string, old, new):
+         return ({"string": string["string"].replace(old, new)},)
+
+ # replace a string with another string
+
+
+ class replace_String_advanced_deprecated:
+     """
+     This node will replace a substring with another string
+     """
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {"required": {
+             "string": ("STR",),
+             "old": ("STR",),
+             "new": ("STR",),
+         }}
+
+     RETURN_TYPES = ("STR",)
+     FUNCTION = "fun"
+     CATEGORY = "O/deprecated/string/operations"
+
+     @staticmethod
+     def fun(string, old, new):
+         return ({"string": string["string"].replace(old["string"], new["string"])},)
+ # endregion
+ # endregion
+
+
+ class LatentUpscaleMultiply_deprecated:
+     """
+     Upscale the latent code by multiplying the width and height by a factor
+     """
+     upscale_methods = ["nearest-exact", "bilinear", "area"]
+     crop_methods = ["disabled", "center"]
+
+     @classmethod
+     def INPUT_TYPES(cls):
+         return {
+             "required": {
+                 "samples": ("LATENT",),
+                 "upscale_method": (cls.upscale_methods,),
+                 "WidthMul": ("FLOAT", {"default": 1.25, "min": 0.0, "max": 10.0, "step": 0.1}),
+                 "HeightMul": ("FLOAT", {"default": 1.25, "min": 0.0, "max": 10.0, "step": 0.1}),
+                 "crop": (cls.crop_methods,),
+             }
+         }
+
+     RETURN_TYPES = ("LATENT",)
+     FUNCTION = "upscale"
+     CATEGORY = "O/deprecated/latent"
+
+     def upscale(self, samples, upscale_method, WidthMul, HeightMul, crop):
+         s = samples.copy()
+         x = samples["samples"].shape[3]
+         y = samples["samples"].shape[2]
+
+         new_x = int(x * WidthMul)
+         new_y = int(y * HeightMul)
+         print(f"upscale from ({x*8},{y*8}) to ({new_x*8},{new_y*8})")
+
+         def enforce_mul_of_64(d):
+             leftover = d % 8
+             if leftover != 0:
+                 d += 8 - leftover
+             return d
+
+         s["samples"] = comfy.utils.common_upscale(
+             samples["samples"], enforce_mul_of_64(new_x), enforce_mul_of_64(new_y), upscale_method, crop
+         )
+         return (s,)
+
+ # endregion deprecated
+
+
+ # Define the node class mappings
+ NODE_CLASS_MAPPINGS = {
+     # deprecated
+     "ChatGPT _O": O_ChatGPT_deprecated,
+     "Chat_Message_fromString _O": openAi_chat_message_STR_deprecated,
+     "compine_chat_messages _O": openAi_chat_messages_Combine_deprecated,
+     "Chat_Completion _O": openAi_chat_completion_deprecated,
+     "create_image _O": openAi_Image_create_deprecated,
+     "String _O": O_String_deprecated,
+     "Debug String _O": DebugString_deprecated,
+     "concat Strings _O": concat_String_deprecated,
+     "trim String _O": trim_String_deprecated,
+     "replace String _O": replace_String_deprecated,
+     "replace String advanced _O": replace_String_advanced_deprecated,
+     "string2Image _O": string2Image_deprecated,
+     "CLIPStringEncode _O": CLIPStringEncode_deprecated,
+     "LatentUpscaleMultiply": LatentUpscaleMultiply_deprecated,
+ }
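The `enforce_mul_of_64` helper inside `LatentUpscaleMultiply_deprecated` rounds each scaled latent dimension up to the next multiple of 8; since one latent unit decodes to 8 image pixels, the resulting image dimensions come out as multiples of 64. Its behavior, extracted so it can be checked standalone:

```python
def enforce_mul_of_64(d):
    # Round a latent dimension up to the next multiple of 8
    # (8 latent units = 64 image pixels, hence the function's name).
    leftover = d % 8
    if leftover != 0:
        d += 8 - leftover
    return d
```

For example, a 65-unit latent width is padded to 72 units, i.e. a 576-pixel image side, while a width that is already a multiple of 8 passes through unchanged.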
src/__pycache__/QualityOfLifeSuit_Omar92.cpython-310.pyc ADDED
Binary file (40.4 kB)

src/__pycache__/QualityOfLife_deprecatedNodes.cpython-310.pyc ADDED
Binary file (13.4 kB)

update/__pycache__/update.cpython-310.pyc ADDED
Binary file (2.27 kB)
 
update/update.py ADDED
@@ -0,0 +1,77 @@
+ import importlib
+ from datetime import datetime
+
+
+ def pull(pygit2, repo, remote_name='origin', branch='master'):
+     for remote in repo.remotes:
+         # print("fetching latest changes: ", remote.name)
+         if remote.name == remote_name:
+             remote.fetch()
+             remote_master_id = repo.lookup_reference('refs/remotes/origin/%s' % (branch)).target
+             merge_result, _ = repo.merge_analysis(remote_master_id)
+             # Up to date, do nothing
+             if merge_result & pygit2.GIT_MERGE_ANALYSIS_UP_TO_DATE:
+                 return
+             # We can just fast-forward
+             elif merge_result & pygit2.GIT_MERGE_ANALYSIS_FASTFORWARD:
+                 repo.checkout_tree(repo.get(remote_master_id))
+                 try:
+                     master_ref = repo.lookup_reference('refs/heads/%s' % (branch))
+                     master_ref.set_target(remote_master_id)
+                 except KeyError:
+                     repo.create_branch(branch, repo.get(remote_master_id))
+                 repo.head.set_target(remote_master_id)
+             elif merge_result & pygit2.GIT_MERGE_ANALYSIS_NORMAL:
+                 repo.merge(remote_master_id)
+
+                 if repo.index.conflicts is not None:
+                     for conflict in repo.index.conflicts:
+                         print('Conflicts found in:', conflict[0].path)
+                     raise AssertionError('Conflicts, ahhhhh!!')
+
+                 user = repo.default_signature
+                 tree = repo.index.write_tree()
+                 commit = repo.create_commit('HEAD',
+                                             user,
+                                             user,
+                                             'Merge!',
+                                             tree,
+                                             [repo.head.target, remote_master_id])
+                 # We need to do this or git CLI will think we are still merging.
+                 repo.state_cleanup()
+             else:
+                 raise AssertionError('Unknown merge analysis result')
+
+
+ def install_pygit2():
+     # Helper function to install the pygit2 module if not already installed
+     try:
+         importlib.import_module('pygit2')
+     except ImportError:
+         import pip
+         pip.main(['install', 'pygit2'])
+
+
+ def update(repoPath="", branch_name="main"):
+     print("Updating: Quality of Life Suit...")
+
+     install_pygit2()
+     import pygit2
+
+     repo = pygit2.Repository(repoPath)
+     ident = pygit2.Signature('omar92', 'omar@92')
+     try:
+         # print("stashing current changes")
+         repo.stash(ident)
+     except KeyError:
+         # print("nothing to stash")
+         pass
+     backup_branch_name = 'backup_branch_{}'.format(datetime.today().strftime('%Y-%m-%d_%H_%M_%S'))
+     # print("creating backup branch: {}".format(backup_branch_name))
+     repo.branches.local.create(backup_branch_name, repo.head.peel())
+
+     # print(f"checking out {branch_name} branch")
+     branch = repo.lookup_branch(str(branch_name))
+     ref = repo.lookup_reference(branch.name)
+     repo.checkout(ref)
+
+     # print("pulling latest changes")
+     pull(pygit2, repo, branch=branch_name)
+
+     print("done: Quality of Life Suit, updated successfully...")
update/update_QualityOfLifeSuit.bat ADDED
@@ -0,0 +1,2 @@
+ python.exe .\update.py ..
+ pause