How to display Extra\Prompt in different views?
Moderators: XnTriq, helmut, xnview, Dreamer
- Posts: 11
- Joined: Sat Jul 29, 2023 6:01 pm
How to display Extra\Prompt in different views?
Hi All,
I'm new to XnView... I work a lot with generative AI, mostly Stable Diffusion via ComfyUI, which embeds a workflow and other information into the PNG file. I can see this information if I open Show Properties and scroll down to the Extras section, under the Prompt category. Ideally I could set up both the XnView MP browser and the viewer to easily display the information stored in the Extras > Prompt category. When I try to do this in the settings, I can see the Extras picklist, but the Prompt category doesn't come up as one of the choices.
Thanks,
Eric
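(An aside for anyone who wants to get at this data outside a viewer: ComfyUI stores the prompt and workflow as PNG tEXt chunks, which a few lines of standard-library Python can read. A minimal sketch, not an XnView feature:)

```python
# Sketch (stdlib only): pull a named tEXt chunk out of a PNG.
# ComfyUI writes its data under the keys "prompt" and "workflow".
import struct

def read_png_text(path_or_bytes, key):
    """Return the value of the tEXt chunk named `key`, or None."""
    if isinstance(path_or_bytes, bytes):
        data = path_or_bytes
    else:
        with open(path_or_bytes, "rb") as f:
            data = f.read()
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            k, _, v = chunk.partition(b"\x00")   # key, NUL separator, value
            if k.decode("latin-1") == key:
                return v.decode("latin-1")
        pos += 8 + length + 4                    # header + payload + CRC
    return None
```

The returned string for the "prompt" chunk is the JSON blob shown further down in this thread, so it can be handed straight to `json.loads`.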
- Author of XnView
- Posts: 45193
- Joined: Mon Oct 13, 2003 7:31 am
- Location: France
- Posts: 11
- Joined: Sat Jul 29, 2023 6:01 pm
Re: How to display Extra\Prompt in different views?
Hi,
I tried to post a PNG sample, but got the message that the file is too large. I think these are about 3-5 MB.
- Posts: 11
- Joined: Sat Jul 29, 2023 6:01 pm
Re: How to display Extra\Prompt in different views?
Here's a screenshot, saved as JPG, showing XnView with one of the generative images in view and the Properties window opened with Alt+Enter.
The information that is important to me is at the bottom in the extras section.
- Author of XnView
- Posts: 45193
- Joined: Mon Oct 13, 2003 7:31 am
- Location: France
Re: How to display Extra\Prompt in different views?
OK, please send it to me by mail (contact at xnview dot com)
Pierre.
- XnThusiast
- Posts: 2109
- Joined: Sat May 09, 2015 9:37 am
Re: How to display Extra\Prompt in different views?
EricRollei wrote: ↑ Sun Aug 20, 2023 3:11 am
Hi,
I tried to post a PNG sample, but got the message that the file is too large. I think these are about 3-5 MB.
Since the issue is about the metadata, just resize the PNG to something like 10x10 px.
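(One caveat with the resize approach: many editors drop the tEXt chunks when saving, so it's worth checking that the small file still carries the data. If it doesn't, the original chunks can be copied back in at the byte level. A rough stdlib sketch; the helper names are made up:)

```python
# Sketch (stdlib only): re-insert the original PNG's tEXt chunks into a
# resized copy, just before its IEND chunk, so the metadata survives.
import struct, zlib

def _chunks(data):
    """Yield (type, payload) for each chunk, skipping the 8-byte signature."""
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        yield ctype, data[pos + 8:pos + 8 + length]
        pos += 8 + length + 4   # header + payload + CRC

def _pack(ctype, payload):
    """Serialize one chunk with its length header and CRC."""
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

def copy_text_chunks(src_png, dst_png):
    """Return dst_png bytes with src_png's tEXt chunks added before IEND."""
    texts = b"".join(_pack(t, p) for t, p in _chunks(src_png) if t == b"tEXt")
    iend = dst_png.rindex(b"IEND") - 4   # back up over the length field
    return dst_png[:iend] + texts + dst_png[iend:]
```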
- Posts: 11
- Joined: Sat Jul 29, 2023 6:01 pm
Re: How to display Extra\Prompt in different views?
Hi Pierre,
Sorry, I got distracted and forgot to come back here and check in. I'm trying to attach a small PNG again, and I will e-mail you a PNG that has the embedded information in it.
Regards,
Eric
- Posts: 11
- Joined: Sat Jul 29, 2023 6:01 pm
Re: How to display Extra\Prompt in different views?
In the PNG image that I just uploaded, you can open Properties > Extras and see two sections, Prompt and Workflow, that contain the useful information.
Prompt is maybe the most useful; it's one big line of JSON (pasted below).
Of that huge list, the most important parts are probably the ckpt models and the text input:
{"inputs": {"ckpt_name": "SDXL\\AllAround\\juggernautXL_version5.safetensors"}}
{"inputs": {"text": "beautiful geometric patterns of very fine color lines are projected onto an abstracted dark female form, full body in a dynamic dance pose, revealing the secrets of life in fine detail. In style of ••• SPAM ••• and style of Kehinde Wiley and Gustav Klimt\n\n"}}
But it would be amazing to be able to search a set of images for the ckpt model or any of the other input terms, such as a LoRA name. And if there were a way to display these values in their own box, it would be an incredible resource for Stable Diffusion users, since there is currently no easy way to manage catalogs of images with this information.
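(Nothing built in does this today as far as the thread shows, but the kind of search being asked for is easy to script. A sketch with made-up helper names; `read_prompt` stands for any function that returns the raw prompt text of a PNG, or None when there is none:)

```python
# Sketch: scan a folder of ComfyUI PNGs and report which files mention a
# given model or LoRA name anywhere in their embedded prompt text.
import os

def find_images_using(folder, needle, read_prompt):
    """Return sorted PNG filenames whose prompt text contains `needle`."""
    hits = []
    for name in sorted(os.listdir(folder)):
        if not name.lower().endswith(".png"):
            continue
        raw = read_prompt(os.path.join(folder, name))
        if raw is not None and needle.lower() in raw.lower():
            hits.append(name)
    return hits
```

A plain substring match is crude but works for checkpoint and LoRA filenames, since those appear verbatim in the prompt JSON.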
The full prompt contains:
{"1": {"inputs": {"samples": ["2", 0], "vae": ["9", 0]}, "class_type": "VAEDecode"}, "2": {"inputs": {"seed": ["20", 0], "cfg_scale": 17.6, "sampler": "dpmpp_2m", "scheduler": "karras", "start_at_step": 0, "base_steps": 83, "refiner_steps": 21, "detail_level": 1.0, "detail_from": "penultimate_step", "noise_source": "CPU", "auto_rescale_tonemap": "enable", "rescale_tonemap_to": 11.0, "model_model": ["136", 0], "model_refiner": ["3", 0], "CONDITIONING_model_pos": ["133", 0], "CONDITIONING_model_neg": ["135", 0], "CONDITIONING_refiner_pos": ["132", 0], "CONDITIONING_refiner_neg": ["134", 0], "latent_image": ["96", 0]}, "class_type": "KSamplerSDXLAdvanced"}, "3": {"inputs": {"ckpt_name": {"content": "SDXL\\OEM\\sd_xl_refiner_1.0_0.9vae.safetensors", "image": null}, "example": "[none]"}, "class_type": "CheckpointLoader|pysssss"}, "4": {"inputs": {"image": "fingerprint-plus (2).jpg", "choose file to upload": "image"}, "class_type": "LoadImage", "is_changed": ["46b83e12b1b6c0b29054f4d0182a2a202df93e8b82afabdb9c9748ca97ce346a"]}, "5": {"inputs": {"pixels": ["7", 0], "vae": ["60", 0]}, "class_type": "VAEEncode"}, "6": {"inputs": {"width": ["33", 0], "height": ["33", 1], "method": "lanczos", "images": ["61", 0]}, "class_type": "ImageTransformResizeAbsolute"}, "7": {"inputs": {"intensity": 1.0, "scale": 100.0, "temperature": 0.0, "vignette": 0.0, "image": ["6", 0]}, "class_type": "FilmGrain"}, "9": {"inputs": {"vae_name": "diffusion_pytorch_model_XL09_RefinerVAE.safetensors"}, "class_type": "VAELoader"}, "10": {"inputs": {"lora_name": {"content": "SDXL\\GraphicD\\LeParc_AI_LoRA_XL_v2.1-000020.safetensors", "image": "loras/SDXL\\GraphicD\\LeParc_AI_LoRA_XL_v2.1-000020.png"}, "strength_model": 0.96, "strength_clip": 0.99, "example": "[none]", "model": ["39", 0], "clip": ["39", 1]}, "class_type": "LoraLoader|pysssss"}, "11": {"inputs": {"model_name": "Best\\4x_UniversalUpscalerV2-Sharper_103000_G.pth"}, "class_type": "UpscaleModelLoader"}, "12": {"inputs": {"upscale_by": 3.0, 
"seed": 413547168058634, "steps": 65, "cfg": 21.200000000000003, "sampler_name": "dpmpp_2m_sde_gpu", "scheduler": "exponential", "denoise": 0.4, "mode_type": "Chess", "tile_width": 1024, "tile_height": 1024, "mask_blur": 8, "tile_padding": 32, "seam_fix_mode": "None", "seam_fix_denoise": 1.0, "seam_fix_width": 64, "seam_fix_mask_blur": 8, "seam_fix_padding": 16, "force_uniform_tiles": "enable", "image": ["77", 0], "model": ["119", 0], "positive": ["133", 0], "negative": ["135", 0], "vae": ["9", 0], "upscale_model": ["11", 0]}, "class_type": "UltimateSDUpscale"}, "13": {"inputs": {"vae_name": "diffusion_pytorch_model_XL09_BaseVAE.safetensors"}, "class_type": "VAELoader"}, "14": {"inputs": {"model_name": "bbox/face_yolov8m.pt"}, "class_type": "UltralyticsDetectorProvider"}, "15": {"inputs": {"threshold": 0.5, "dilation": 10, "crop_factor": 2.0, "drop_size": 10, "bbox_detector": ["14", 0], "image": ["98", 0]}, "class_type": "BboxDetectorSEGS"}, "16": {"inputs": {"model_name": "sam_vit_l_0b3195.pth", "device_mode": "AUTO"}, "class_type": "SAMLoader"}, "17": {"inputs": {"detection_hint": "center-1", "dilation": 0, "threshold": 0.93, "bbox_expansion": 0, "mask_hint_threshold": 0.7, "mask_hint_use_negative": "False", "sam_model": ["16", 0], "segs": ["15", 0], "image": ["98", 0]}, "class_type": "SAMDetectorCombined"}, "18": {"inputs": {"segs": ["15", 0], "mask": ["17", 0]}, "class_type": "Segs & Mask"}, "19": {"inputs": {"guide_size": 1024.0, "guide_size_for": false, "max_size": 1024.0, "seed": ["20", 0], "steps": 35, "cfg": 8.5, "sampler_name": "dpmpp_2m_alt", "scheduler": "karras", "denoise": 0.25, "feather": 5, "noise_mask": true, "force_inpaint": true, "wildcard": "laughing", "image": ["98", 0], "segs": ["18", 0], "model": ["39", 0], "clip": ["39", 1], "vae": ["13", 0], "positive": ["21", 0], "negative": ["135", 0]}, "class_type": "DetailerForEachDebug"}, "20": {"inputs": {"seed": 165369254788705}, "class_type": "CR Seed"}, "21": {"inputs": {"conditioning_1": ["22", 
0], "conditioning_2": ["133", 0]}, "class_type": "ConditioningCombine"}, "22": {"inputs": {"text": "laughing, professional face, photography, detailed, macro crisp quality ", "clip": ["39", 1]}, "class_type": "CLIPTextEncode"}, "23": {"inputs": {"add_noise": "disable", "noise_seed": ["20", 0], "tile_width": 1024, "tile_height": 1024, "tiling_strategy": "random", "steps": 115, "cfg": 16.0, "sampler_name": "dpmpp_2m_sde_gpu", "scheduler": "exponential", "start_at_step": 65, "end_at_step": 10000, "return_with_leftover_noise": "disable", "preview": "enable", "model": ["59", 0], "positive": ["132", 0], "negative": ["134", 0], "latent_image": ["24", 0]}, "class_type": "BNK_TiledKSamplerAdvanced"}, "24": {"inputs": {"pixels": ["12", 0], "vae": ["9", 0]}, "class_type": "VAEEncode"}, "25": {"inputs": {"images": ["126", 0]}, "class_type": "PreviewImage"}, "26": {"inputs": {"output_path": "[time(%Y-%m-%d)]/PNG-HR-workflow/", "filename_prefix": ["41", 0], "filename_delimiter": "_", "filename_number_padding": 4, "filename_number_start": "false", "extension": "png", "quality": 100, "lossless_webp": "false", "overwrite_mode": "false", "show_history": "false", "show_history_by_prefix": "true", "embed_workflow": "true", "show_previews": "true", "images": ["93", 0]}, "class_type": "Image Save"}, "27": {"inputs": {"images": ["12", 0]}, "class_type": "PreviewImage"}, "28": {"inputs": {"output_path": "[time(%Y-%m-%d)]/JPG-To-Share/", "filename_prefix": ["44", 0], "filename_delimiter": "_", "filename_number_padding": 4, "filename_number_start": "false", "extension": "jpeg", "quality": 86, "lossless_webp": "false", "overwrite_mode": "false", "show_history": "true", "show_history_by_prefix": "true", "embed_workflow": "false", "show_previews": "true", "images": ["88", 0]}, "class_type": "Image Save"}, "29": {"inputs": {"output_path": "[time(%Y-%m-%d)]/Base-Gen-PNG/", "filename_prefix": ["47", 0], "filename_delimiter": "_", "filename_number_padding": 4, "filename_number_start": "false", 
"extension": "png", "quality": 100, "lossless_webp": "false", "overwrite_mode": "false", "show_history": "false", "show_history_by_prefix": "true", "embed_workflow": "true", "show_previews": "false", "images": ["90", 0]}, "class_type": "Image Save"}, "30": {"inputs": {"lora_name": {"content": "SDXL\\clothes\\xl_burlesque_dress-1.0.safetensors", "image": "loras/SDXL\\clothes\\xl_burlesque_dress-1.0.png"}, "strength_model": 1.02, "strength_clip": 0.99, "example": "[none]", "model": ["10", 0], "clip": ["10", 1]}, "class_type": "LoraLoader|pysssss"}, "31": {"inputs": {"text": ["111", 0]}, "class_type": "ttN text"}, "32": {"inputs": {"text": ["112", 0]}, "class_type": "ttN text"}, "33": {"inputs": {"width": 1024, "height": 1024, "aspect_ratio": "3:4 portrait 896x1152", "swap_dimensions": "Off", "upscale_factor1": 1.0, "upscale_factor2": 1.0, "batch_size": 1}, "class_type": "CR Aspect Ratio SDXL"}, "34": {"inputs": {"text": ["35", 0]}, "class_type": "ShowText|pysssss"}, "35": {"inputs": {"int_": ["33", 0]}, "class_type": "CR Integer To String"}, "36": {"inputs": {"text": ["37", 0]}, "class_type": "ShowText|pysssss"}, "37": {"inputs": {"int_": ["33", 1]}, "class_type": "CR Integer To String"}, "38": {"inputs": {"text1": ["39", 3], "text2": "Refiner-HR_", "separator": "_"}, "class_type": "Concat Text _O"}, "39": {"inputs": {"ckpt_name": "SDXL\\2DandArt\\wanxiangxl_v50.safetensors"}, "class_type": "Checkpoint Loader w/Name (WLSH)"}, "40": {"inputs": {"text": ["51", 0], "old": "\\", "new": "_"}, "class_type": "Replace Text _O"}, "41": {"inputs": {"text": ["40", 0], "old": ".", "new": "_"}, "class_type": "Replace Text _O"}, "42": {"inputs": {"text1": ["39", 3], "text2": "Refiner-HR-Share_", "separator": "_"}, "class_type": "Concat Text _O"}, "43": {"inputs": {"text": ["53", 0], "old": "\\", "new": "_"}, "class_type": "Replace Text _O"}, "44": {"inputs": {"text": ["43", 0], "old": ".", "new": "_"}, "class_type": "Replace Text _O"}, "45": {"inputs": {"text1": ["39", 3], 
"text2": "-Base-", "separator": "_"}, "class_type": "Concat Text _O"}, "46": {"inputs": {"text": ["55", 0], "old": "\\", "new": "_"}, "class_type": "Replace Text _O"}, "47": {"inputs": {"text": ["46", 0], "old": ".", "new": "_"}, "class_type": "Replace Text _O"}, "48": {"inputs": {"text": ["44", 0]}, "class_type": "ShowText|pysssss"}, "49": {"inputs": {"text": ["41", 0]}, "class_type": "ShowText|pysssss"}, "50": {"inputs": {"style": "%Y-%m-%d-%H%M"}, "class_type": "Time String (WLSH)"}, "51": {"inputs": {"text_a": ["38", 0], "text_b": ["50", 0], "linebreak_addition": "false"}, "class_type": "Text Concatenate"}, "52": {"inputs": {"style": "%Y-%m-%d-%H%M"}, "class_type": "Time String (WLSH)"}, "53": {"inputs": {"text_a": ["42", 0], "text_b": ["52", 0], "linebreak_addition": "false"}, "class_type": "Text Concatenate"}, "54": {"inputs": {"style": "%Y-%m-%d-%H%M"}, "class_type": "Time String (WLSH)"}, "55": {"inputs": {"text_a": ["45", 0], "text_b": ["54", 0], "linebreak_addition": "false"}, "class_type": "Text Concatenate"}, "56": {"inputs": {"images": ["19", 0]}, "class_type": "PreviewImage"}, "57": {"inputs": {"mask": ["17", 0]}, "class_type": "MaskToImage"}, "58": {"inputs": {"images": ["57", 0]}, "class_type": "PreviewImage"}, "59": {"inputs": {"multiplier": 0.38, "model": ["3", 0]}, "class_type": "ModelSamplerTonemapNoiseTest"}, "60": {"inputs": {"vae_name": "diffusion_pytorch_model_XL09_reinstate_Base.safetensors"}, "class_type": "VAELoader"}, "61": {"inputs": {"start_x": 0.12, "start_y": 0.11, "end_x": 0.88, "end_y": 0.89, "images": ["4", 0]}, "class_type": "ImageTransformCropRelative"}, "62": {"inputs": {"lora_name": {"content": "SDXL\\body\\popovy_SDXL-fashion-doll-000006.safetensors", "image": "loras/SDXL\\body\\popovy_SDXL-fashion-doll-000006.png"}, "strength_model": 1.17, "strength_clip": 1.08, "example": "[none]", "model": ["30", 0], "clip": ["30", 1]}, "class_type": "LoraLoader|pysssss"}, "63": {"inputs": {"b1": 1.0, "b2": 0.75, "s1": 0.9, "s2": 0.75, 
"model": ["62", 0]}, "class_type": "FreeU"}, "64": {"inputs": {"strength": 0.8, "segs": ["66", 0], "control_net": ["65", 0], "segs_preprocessor": ["87", 0]}, "class_type": "ImpactControlNetApplySEGS"}, "65": {"inputs": {"control_net_name": "OpenPoseXL2.safetensors"}, "class_type": "ControlNetLoader"}, "66": {"inputs": {"bbox_threshold": 0.5, "bbox_dilation": 0, "crop_factor": 5.0, "drop_size": 10, "sub_threshold": 0.5, "sub_dilation": 0, "sub_bbox_expansion": 0, "sam_mask_hint_threshold": 0.7, "bbox_detector": ["67", 0], "image": ["19", 0]}, "class_type": "ImpactSimpleDetectorSEGS"}, "67": {"inputs": {"model_name": "bbox/hand_yolov8n.pt"}, "class_type": "UltralyticsDetectorProvider"}, "68": {"inputs": {"images": ["72", 0]}, "class_type": "PreviewImage"}, "69": {"inputs": {"images": ["72", 6]}, "class_type": "PreviewImage"}, "70": {"inputs": {"images": ["77", 0]}, "class_type": "PreviewImage"}, "71": {"inputs": {"images": ["77", 6]}, "class_type": "PreviewImage"}, "72": {"inputs": {"guide_size": 256.0, "guide_size_for": true, "max_size": 768.0, "seed": 0, "steps": 20, "cfg": 8.0, "sampler_name": "dpmpp_2m_sde", "scheduler": "karras", "denoise": 0.6, "feather": 5, "noise_mask": true, "force_inpaint": true, "wildcard": "", "refiner_ratio": 0.5, "image": ["19", 0], "segs": ["64", 0], "basic_pipe": ["76", 0], "refiner_basic_pipe_opt": ["82", 0]}, "class_type": "DetailerForEachDebugPipe"}, "73": {"inputs": {"ckpt_name": "SDXL\\AllAround\\juggernautXL_version5.safetensors"}, "class_type": "CheckpointLoaderSimple"}, "74": {"inputs": {"text": ["31", 0], "clip": ["73", 1]}, "class_type": "CLIPTextEncode"}, "75": {"inputs": {"text": ["32", 0], "clip": ["73", 1]}, "class_type": "CLIPTextEncode"}, "76": {"inputs": {"model": ["73", 0], "clip": ["73", 1], "vae": ["73", 2], "positive": ["74", 0], "negative": ["75", 0]}, "class_type": "ToBasicPipe"}, "77": {"inputs": {"guide_size": 360.0, "guide_size_for": true, "max_size": 768.0, "seed": 0, "steps": 29, "cfg": 8.0, "sampler_name": 
"dpmpp_2m_sde_gpu", "scheduler": "karras", "denoise": 0.4, "feather": 5, "noise_mask": true, "force_inpaint": true, "wildcard": "", "refiner_ratio": 0.2, "image": ["72", 0], "segs": ["72", 1], "basic_pipe": ["72", 2]}, "class_type": "DetailerForEachDebugPipe"}, "78": {"inputs": {"images": ["77", 4]}, "class_type": "PreviewImage"}, "79": {"inputs": {"images": ["77", 3]}, "class_type": "PreviewImage"}, "81": {"inputs": {"ckpt_name": "SDXL\\OEM\\sd_xl_refiner_1.0_0.9vae.safetensors"}, "class_type": "CheckpointLoaderSimple"}, "82": {"inputs": {"model": ["81", 0], "clip": ["81", 1], "vae": ["81", 2], "positive": ["83", 0], "negative": ["84", 0]}, "class_type": "ToBasicPipe"}, "83": {"inputs": {"text": ["31", 0], "clip": ["81", 1]}, "class_type": "CLIPTextEncode"}, "84": {"inputs": {"text": ["32", 0], "clip": ["81", 1]}, "class_type": "CLIPTextEncode"}, "87": {"inputs": {"detect_hand": true, "detect_body": false, "detect_face": false, "resolution_upscale_by": 1}, "class_type": "DWPreprocessor_Provider_for_SEGS //Inspire"}, "88": {"inputs": {"amount": 0.75, "image": ["93", 0]}, "class_type": "ImageCASharpening+"}, "89": {"inputs": {"mimic_scale": 4.0, "threshold_percentile": 1.0, "mimic_mode": "Power Up", "mimic_scale_min": 4.0, "cfg_mode": "Power Up", "cfg_scale_min": 2.0, "sched_val": 4.8, "separate_feature_channels": "enable", "scaling_startpoint": "MEAN", "variability_measure": "AD", "interpolate_phi": 1.0, "model": ["62", 0]}, "class_type": "DynamicThresholdingFull"}, "90": {"inputs": {"upscale_model": ["91", 0], "image": ["97", 0]}, "class_type": "ImageUpscaleWithModel"}, "91": {"inputs": {"model_name": "Deblur\\1x_ArtClarity.pth"}, "class_type": "UpscaleModelLoader"}, "93": {"inputs": {"upscale_model": ["94", 0], "image": ["126", 0]}, "class_type": "ImageUpscaleWithModel"}, "94": {"inputs": {"model_name": "Deblur\\1x_ArtClarity_strong.pth"}, "class_type": "UpscaleModelLoader"}, "95": {"inputs": {"images": ["90", 0]}, "class_type": "Preview for Image Chooser"}, 
"96": {"inputs": {"batch_size": 1, "latent": ["5", 0]}, "class_type": "CR Latent Batch Size"}, "97": {"inputs": {"amount": 0.0, "image": ["1", 0]}, "class_type": "ImageCASharpening+"}, "98": {"inputs": {"id": 4421395, "choice": "1", "mode": "Only pause if batch", "images": ["95", 0]}, "class_type": "Image Chooser"}, "99": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "artist": "No Artist", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "ArtistStylerAdvanced"}, "100": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "composition": "No option", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "CompositionStylerAdvanced"}, "101": {"inputs": {"text_positive_g": "sharp focus, extremely fine details", "text_positive_l": "", "text_negative": "", "focus": "No option", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "FocusStylerAdvanced"}, "102": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "milehigh": "no style", "negative_prompt_to": "L only", "log_prompt": true}, "class_type": "MilehighStylerAdvanced"}, "103": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "mood": "No option", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "MoodStylerAdvanced"}, "104": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "environment": "No option", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "EnvironmentStylerAdvanced"}, "105": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "subject": "No option", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "SubjectStylerAdvanced"}, "106": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "lighting": "No option", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "LightingStylerAdvanced"}, "107": {"inputs": {"text_a": 
["102", 2], "text_b": ["105", 2], "linebreak_addition": "false", "text_c": ["100", 2], "text_d": ["101", 2]}, "class_type": "Text Concatenate"}, "108": {"inputs": {"text_a": ["102", 5], "text_b": ["100", 5], "linebreak_addition": "false", "text_c": ["105", 5], "text_d": ["101", 5]}, "class_type": "Text Concatenate"}, "109": {"inputs": {"text_a": ["99", 2], "text_b": ["104", 2], "linebreak_addition": "false", "text_c": ["103", 2], "text_d": ["106", 2]}, "class_type": "Text Concatenate"}, "110": {"inputs": {"text_a": ["99", 5], "text_b": ["104", 5], "linebreak_addition": "false", "text_c": ["103", 5], "text_d": ["106", 5]}, "class_type": "Text Concatenate"}, "111": {"inputs": {"text_a": ["117", 0], "text_b": ["107", 0], "linebreak_addition": "false", "text_c": ["109", 0]}, "class_type": "Text Concatenate"}, "112": {"inputs": {"text_a": ["118", 0], "text_b": ["108", 0], "linebreak_addition": "false", "text_c": ["110", 0]}, "class_type": "Text Concatenate"}, "113": {"inputs": {"text": ["111", 0]}, "class_type": "ShowText|pysssss"}, "114": {"inputs": {"text": ["112", 0]}, "class_type": "ShowText|pysssss"}, "117": {"inputs": {"text": "beautiful geometric patterns of very fine color lines are projected onto an abstracted dark female form, full body in a dynamic dance pose, revealing the secrets of life in fine detail. 
In style of ••• SPAM ••• and style of Kehinde Wiley and Gustav Klimt\n\n"}, "class_type": "Text _O"}, "118": {"inputs": {"text": "cartoon\nembedding:SDXL\\Neg\\unaestheticXLv31\nblurry\nundetailed\nhorror \nugly \n"}, "class_type": "Text _O"}, "119": {"inputs": {"weight": 1.0, "noise": 0.0, "ipadapter": ["121", 0], "clip_vision": ["120", 0], "image": ["77", 0], "model": ["89", 0]}, "class_type": "IPAdapterApply"}, "120": {"inputs": {"clip_name": "ip-adapter-image-encoder.safetensors"}, "class_type": "CLIPVisionLoader"}, "121": {"inputs": {"ipadapter_file": "ip-adapter-plus_sdxl_vit-h.bin"}, "class_type": "IPAdapterModelLoader"}, "122": {"inputs": {"radius": 1.21, "strength": 1.0, "images": ["93", 0]}, "class_type": "VividSharpen"}, "123": {"inputs": {"output_path": "[time(%Y-%m-%d)]/JPG-To-Share/vivid/", "filename_prefix": ["44", 0], "filename_delimiter": "_", "filename_number_padding": 4, "filename_number_start": "false", "extension": "jpeg", "quality": 86, "lossless_webp": "false", "overwrite_mode": "false", "show_history": "true", "show_history_by_prefix": "true", "embed_workflow": "false", "show_previews": "true", "images": ["122", 0]}, "class_type": "Image Save"}, "124": {"inputs": {"mode": "always", "volume": 0.30000000000000004, "any": ["95", 0]}, "class_type": "PlaySound|pysssss"}, "125": {"inputs": {"mode": "always", "volume": 0.4, "any": ["88", 0]}, "class_type": "PlaySound|pysssss"}, "126": {"inputs": {"tile_size": 1024, "samples": ["23", 0], "vae": ["9", 0]}, "class_type": "VAEDecodeTiled"}, "127": {"inputs": {"output_path": "[time(%Y-%m-%d)]/JPG-To-Share/vivid-plus-CAS/", "filename_prefix": ["44", 0], "filename_delimiter": "_", "filename_number_padding": 4, "filename_number_start": "false", "extension": "jpeg", "quality": 86, "lossless_webp": "false", "overwrite_mode": "false", "show_history": "true", "show_history_by_prefix": "true", "embed_workflow": "false", "show_previews": "true", "images": ["128", 0]}, "class_type": "Image Save"}, "128": 
{"inputs": {"radius": 0.86, "strength": 0.7000000000000001, "images": ["88", 0]}, "class_type": "VividSharpen"}, "132": {"inputs": {"mode": "Noodle Soup Prompts", "noodle_key": "__", "seed": ["20", 0], "token_normalization": "none", "weight_interpretation": "comfy++", "text": ["31", 0], "clip": ["3", 1]}, "class_type": "CLIPTextEncode (BlenderNeko Advanced + NSP)"}, "133": {"inputs": {"mode": "Noodle Soup Prompts", "noodle_key": "__", "seed": ["20", 0], "token_normalization": "none", "weight_interpretation": "comfy++", "text": ["31", 0], "clip": ["62", 1]}, "class_type": "CLIPTextEncode (BlenderNeko Advanced + NSP)"}, "134": {"inputs": {"mode": "Noodle Soup Prompts", "noodle_key": "__", "seed": ["20", 0], "token_normalization": "none", "weight_interpretation": "comfy++", "text": ["32", 0], "clip": ["3", 1]}, "class_type": "CLIPTextEncode (BlenderNeko Advanced + NSP)"}, "135": {"inputs": {"mode": "Noodle Soup Prompts", "noodle_key": "__", "seed": ["20", 0], "token_normalization": "none", "weight_interpretation": "comfy++", "text": ["32", 0], "clip": ["62", 1]}, "class_type": "CLIPTextEncode (BlenderNeko Advanced + NSP)"}, "136": {"inputs": {"sharpness_multiplier": 70.0, "sharpness_method": "anisotropic", "tonemap_multiplier": 0.7000000000000001, "tonemap_method": "reinhard_perchannel", "tonemap_percentile": 91.0, "contrast_multiplier": 0.15, "combat_method": "subtract", "combat_cfg_drift": 0.0, "rescale_cfg_phi": 0.0, "extra_noise_type": "perlin", "extra_noise_method": "add", "extra_noise_multiplier": 0.21, "extra_noise_lowpass": 100, "divisive_norm_size": 0, "spectral_mod_mode": "hard_clamp", "spectral_mod_percentile": 5.0, "spectral_mod_multiplier": 0.0, "affect_uncond": "Sharpness", "model": ["63", 0]}, "class_type": "Latent Diffusion Mega Modifier"}}
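(A sketch of pulling just the model references out of a prompt dump like the one above, once it has been parsed with `json.loads`. The key names used here, "inputs", "ckpt_name", "lora_name", and "content", come straight from the dump; nothing else is assumed about the graph:)

```python
# Sketch: walk the ComfyUI prompt graph and collect every checkpoint and
# LoRA filename. Handles both plain strings and the {"content": ...} form
# that the pysssss loader nodes use.
def model_names(prompt):
    """Return checkpoint/LoRA names found in a parsed prompt dict."""
    names = []
    for node in prompt.values():
        inputs = node.get("inputs", {})
        for key in ("ckpt_name", "lora_name"):
            val = inputs.get(key)
            if isinstance(val, dict):       # pysssss form: {"content": ..., "image": ...}
                val = val.get("content")
            if isinstance(val, str):
                names.append(val)
    return names
```

Feeding the dump above through this would yield the juggernautXL and refiner checkpoints plus the three LoRA files, which is exactly the set of values one would want XnView to index.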
"model": ["62", 0]}, "class_type": "FreeU"}, "64": {"inputs": {"strength": 0.8, "segs": ["66", 0], "control_net": ["65", 0], "segs_preprocessor": ["87", 0]}, "class_type": "ImpactControlNetApplySEGS"}, "65": {"inputs": {"control_net_name": "OpenPoseXL2.safetensors"}, "class_type": "ControlNetLoader"}, "66": {"inputs": {"bbox_threshold": 0.5, "bbox_dilation": 0, "crop_factor": 5.0, "drop_size": 10, "sub_threshold": 0.5, "sub_dilation": 0, "sub_bbox_expansion": 0, "sam_mask_hint_threshold": 0.7, "bbox_detector": ["67", 0], "image": ["19", 0]}, "class_type": "ImpactSimpleDetectorSEGS"}, "67": {"inputs": {"model_name": "bbox/hand_yolov8n.pt"}, "class_type": "UltralyticsDetectorProvider"}, "68": {"inputs": {"images": ["72", 0]}, "class_type": "PreviewImage"}, "69": {"inputs": {"images": ["72", 6]}, "class_type": "PreviewImage"}, "70": {"inputs": {"images": ["77", 0]}, "class_type": "PreviewImage"}, "71": {"inputs": {"images": ["77", 6]}, "class_type": "PreviewImage"}, "72": {"inputs": {"guide_size": 256.0, "guide_size_for": true, "max_size": 768.0, "seed": 0, "steps": 20, "cfg": 8.0, "sampler_name": "dpmpp_2m_sde", "scheduler": "karras", "denoise": 0.6, "feather": 5, "noise_mask": true, "force_inpaint": true, "wildcard": "", "refiner_ratio": 0.5, "image": ["19", 0], "segs": ["64", 0], "basic_pipe": ["76", 0], "refiner_basic_pipe_opt": ["82", 0]}, "class_type": "DetailerForEachDebugPipe"}, "73": {"inputs": {"ckpt_name": "SDXL\\AllAround\\juggernautXL_version5.safetensors"}, "class_type": "CheckpointLoaderSimple"}, "74": {"inputs": {"text": ["31", 0], "clip": ["73", 1]}, "class_type": "CLIPTextEncode"}, "75": {"inputs": {"text": ["32", 0], "clip": ["73", 1]}, "class_type": "CLIPTextEncode"}, "76": {"inputs": {"model": ["73", 0], "clip": ["73", 1], "vae": ["73", 2], "positive": ["74", 0], "negative": ["75", 0]}, "class_type": "ToBasicPipe"}, "77": {"inputs": {"guide_size": 360.0, "guide_size_for": true, "max_size": 768.0, "seed": 0, "steps": 29, "cfg": 8.0, "sampler_name": 
"dpmpp_2m_sde_gpu", "scheduler": "karras", "denoise": 0.4, "feather": 5, "noise_mask": true, "force_inpaint": true, "wildcard": "", "refiner_ratio": 0.2, "image": ["72", 0], "segs": ["72", 1], "basic_pipe": ["72", 2]}, "class_type": "DetailerForEachDebugPipe"}, "78": {"inputs": {"images": ["77", 4]}, "class_type": "PreviewImage"}, "79": {"inputs": {"images": ["77", 3]}, "class_type": "PreviewImage"}, "81": {"inputs": {"ckpt_name": "SDXL\\OEM\\sd_xl_refiner_1.0_0.9vae.safetensors"}, "class_type": "CheckpointLoaderSimple"}, "82": {"inputs": {"model": ["81", 0], "clip": ["81", 1], "vae": ["81", 2], "positive": ["83", 0], "negative": ["84", 0]}, "class_type": "ToBasicPipe"}, "83": {"inputs": {"text": ["31", 0], "clip": ["81", 1]}, "class_type": "CLIPTextEncode"}, "84": {"inputs": {"text": ["32", 0], "clip": ["81", 1]}, "class_type": "CLIPTextEncode"}, "87": {"inputs": {"detect_hand": true, "detect_body": false, "detect_face": false, "resolution_upscale_by": 1}, "class_type": "DWPreprocessor_Provider_for_SEGS //Inspire"}, "88": {"inputs": {"amount": 0.75, "image": ["93", 0]}, "class_type": "ImageCASharpening+"}, "89": {"inputs": {"mimic_scale": 4.0, "threshold_percentile": 1.0, "mimic_mode": "Power Up", "mimic_scale_min": 4.0, "cfg_mode": "Power Up", "cfg_scale_min": 2.0, "sched_val": 4.8, "separate_feature_channels": "enable", "scaling_startpoint": "MEAN", "variability_measure": "AD", "interpolate_phi": 1.0, "model": ["62", 0]}, "class_type": "DynamicThresholdingFull"}, "90": {"inputs": {"upscale_model": ["91", 0], "image": ["97", 0]}, "class_type": "ImageUpscaleWithModel"}, "91": {"inputs": {"model_name": "Deblur\\1x_ArtClarity.pth"}, "class_type": "UpscaleModelLoader"}, "93": {"inputs": {"upscale_model": ["94", 0], "image": ["126", 0]}, "class_type": "ImageUpscaleWithModel"}, "94": {"inputs": {"model_name": "Deblur\\1x_ArtClarity_strong.pth"}, "class_type": "UpscaleModelLoader"}, "95": {"inputs": {"images": ["90", 0]}, "class_type": "Preview for Image Chooser"}, 
"96": {"inputs": {"batch_size": 1, "latent": ["5", 0]}, "class_type": "CR Latent Batch Size"}, "97": {"inputs": {"amount": 0.0, "image": ["1", 0]}, "class_type": "ImageCASharpening+"}, "98": {"inputs": {"id": 4421395, "choice": "1", "mode": "Only pause if batch", "images": ["95", 0]}, "class_type": "Image Chooser"}, "99": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "artist": "No Artist", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "ArtistStylerAdvanced"}, "100": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "composition": "No option", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "CompositionStylerAdvanced"}, "101": {"inputs": {"text_positive_g": "sharp focus, extremely fine details", "text_positive_l": "", "text_negative": "", "focus": "No option", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "FocusStylerAdvanced"}, "102": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "milehigh": "no style", "negative_prompt_to": "L only", "log_prompt": true}, "class_type": "MilehighStylerAdvanced"}, "103": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "mood": "No option", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "MoodStylerAdvanced"}, "104": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "environment": "No option", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "EnvironmentStylerAdvanced"}, "105": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "subject": "No option", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "SubjectStylerAdvanced"}, "106": {"inputs": {"text_positive_g": "", "text_positive_l": "", "text_negative": "", "lighting": "No option", "negative_prompt_to": "Both", "log_prompt": true}, "class_type": "LightingStylerAdvanced"}, "107": {"inputs": {"text_a": 
["102", 2], "text_b": ["105", 2], "linebreak_addition": "false", "text_c": ["100", 2], "text_d": ["101", 2]}, "class_type": "Text Concatenate"}, "108": {"inputs": {"text_a": ["102", 5], "text_b": ["100", 5], "linebreak_addition": "false", "text_c": ["105", 5], "text_d": ["101", 5]}, "class_type": "Text Concatenate"}, "109": {"inputs": {"text_a": ["99", 2], "text_b": ["104", 2], "linebreak_addition": "false", "text_c": ["103", 2], "text_d": ["106", 2]}, "class_type": "Text Concatenate"}, "110": {"inputs": {"text_a": ["99", 5], "text_b": ["104", 5], "linebreak_addition": "false", "text_c": ["103", 5], "text_d": ["106", 5]}, "class_type": "Text Concatenate"}, "111": {"inputs": {"text_a": ["117", 0], "text_b": ["107", 0], "linebreak_addition": "false", "text_c": ["109", 0]}, "class_type": "Text Concatenate"}, "112": {"inputs": {"text_a": ["118", 0], "text_b": ["108", 0], "linebreak_addition": "false", "text_c": ["110", 0]}, "class_type": "Text Concatenate"}, "113": {"inputs": {"text": ["111", 0]}, "class_type": "ShowText|pysssss"}, "114": {"inputs": {"text": ["112", 0]}, "class_type": "ShowText|pysssss"}, "117": {"inputs": {"text": "beautiful geometric patterns of very fine color lines are projected onto an abstracted dark female form, full body in a dynamic dance pose, revealing the secrets of life in fine detail. 
In style of ••• SPAM ••• and style of Kehinde Wiley and Gustav Klimt\n\n"}, "class_type": "Text _O"}, "118": {"inputs": {"text": "cartoon\nembedding:SDXL\\Neg\\unaestheticXLv31\nblurry\nundetailed\nhorror \nugly \n"}, "class_type": "Text _O"}, "119": {"inputs": {"weight": 1.0, "noise": 0.0, "ipadapter": ["121", 0], "clip_vision": ["120", 0], "image": ["77", 0], "model": ["89", 0]}, "class_type": "IPAdapterApply"}, "120": {"inputs": {"clip_name": "ip-adapter-image-encoder.safetensors"}, "class_type": "CLIPVisionLoader"}, "121": {"inputs": {"ipadapter_file": "ip-adapter-plus_sdxl_vit-h.bin"}, "class_type": "IPAdapterModelLoader"}, "122": {"inputs": {"radius": 1.21, "strength": 1.0, "images": ["93", 0]}, "class_type": "VividSharpen"}, "123": {"inputs": {"output_path": "[time(%Y-%m-%d)]/JPG-To-Share/vivid/", "filename_prefix": ["44", 0], "filename_delimiter": "_", "filename_number_padding": 4, "filename_number_start": "false", "extension": "jpeg", "quality": 86, "lossless_webp": "false", "overwrite_mode": "false", "show_history": "true", "show_history_by_prefix": "true", "embed_workflow": "false", "show_previews": "true", "images": ["122", 0]}, "class_type": "Image Save"}, "124": {"inputs": {"mode": "always", "volume": 0.30000000000000004, "any": ["95", 0]}, "class_type": "PlaySound|pysssss"}, "125": {"inputs": {"mode": "always", "volume": 0.4, "any": ["88", 0]}, "class_type": "PlaySound|pysssss"}, "126": {"inputs": {"tile_size": 1024, "samples": ["23", 0], "vae": ["9", 0]}, "class_type": "VAEDecodeTiled"}, "127": {"inputs": {"output_path": "[time(%Y-%m-%d)]/JPG-To-Share/vivid-plus-CAS/", "filename_prefix": ["44", 0], "filename_delimiter": "_", "filename_number_padding": 4, "filename_number_start": "false", "extension": "jpeg", "quality": 86, "lossless_webp": "false", "overwrite_mode": "false", "show_history": "true", "show_history_by_prefix": "true", "embed_workflow": "false", "show_previews": "true", "images": ["128", 0]}, "class_type": "Image Save"}, "128": 
{"inputs": {"radius": 0.86, "strength": 0.7000000000000001, "images": ["88", 0]}, "class_type": "VividSharpen"}, "132": {"inputs": {"mode": "Noodle Soup Prompts", "noodle_key": "__", "seed": ["20", 0], "token_normalization": "none", "weight_interpretation": "comfy++", "text": ["31", 0], "clip": ["3", 1]}, "class_type": "CLIPTextEncode (BlenderNeko Advanced + NSP)"}, "133": {"inputs": {"mode": "Noodle Soup Prompts", "noodle_key": "__", "seed": ["20", 0], "token_normalization": "none", "weight_interpretation": "comfy++", "text": ["31", 0], "clip": ["62", 1]}, "class_type": "CLIPTextEncode (BlenderNeko Advanced + NSP)"}, "134": {"inputs": {"mode": "Noodle Soup Prompts", "noodle_key": "__", "seed": ["20", 0], "token_normalization": "none", "weight_interpretation": "comfy++", "text": ["32", 0], "clip": ["3", 1]}, "class_type": "CLIPTextEncode (BlenderNeko Advanced + NSP)"}, "135": {"inputs": {"mode": "Noodle Soup Prompts", "noodle_key": "__", "seed": ["20", 0], "token_normalization": "none", "weight_interpretation": "comfy++", "text": ["32", 0], "clip": ["62", 1]}, "class_type": "CLIPTextEncode (BlenderNeko Advanced + NSP)"}, "136": {"inputs": {"sharpness_multiplier": 70.0, "sharpness_method": "anisotropic", "tonemap_multiplier": 0.7000000000000001, "tonemap_method": "reinhard_perchannel", "tonemap_percentile": 91.0, "contrast_multiplier": 0.15, "combat_method": "subtract", "combat_cfg_drift": 0.0, "rescale_cfg_phi": 0.0, "extra_noise_type": "perlin", "extra_noise_method": "add", "extra_noise_multiplier": 0.21, "extra_noise_lowpass": 100, "divisive_norm_size": 0, "spectral_mod_mode": "hard_clamp", "spectral_mod_percentile": 5.0, "spectral_mod_multiplier": 0.0, "affect_uncond": "Sharpness", "model": ["63", 0]}, "class_type": "Latent Diffusion Mega Modifier"}}
-
- Author of XnView
- Posts: 45193
- Joined: Mon Oct 13, 2003 7:31 am
- Location: France
Re: How to display Extra\Prompt in different views?
so you would like to view prompts text in 'Info'?
Pierre.
-
- Posts: 11
- Joined: Sat Jul 29, 2023 6:01 pm
Re: How to display Extra\Prompt in different views?
Hi Pierre,
Yes, that would definitely be helpful. It would also be useful to be able to search my image catalog for things inside the prompt field. As an example, I'd like to sort my images by which checkpoint model I used. But as you can see, the prompt can be quite large and cluttered, so I'm not sure what the best approach is.
But whatever is done, it would be helpful to everyone who is using Stable Diffusion and in particular - those that use ComfyUI - as there is nothing out there that can catalog and sort images based on prompt - or anything that lets them view the prompt information easily.
-
- Posts: 2
- Joined: Mon Jun 15, 2015 6:56 pm
Re: How to display Extra\Prompt in different views?
For ComfyUI, we'd like everything in the PNG's tEXt chunk pulled out and represented as JSON or as key/value pairs. The "prompt" keyword followed by a NUL byte marks the start of the valid data. ComfyUI pads the end of the block with what appear to be garbage characters.
ComfyUI, SDWEB, and InvokeAI all store generation data inside the PNG, but all three use a different data format. It's basically the same information but organized differently.
I can send an image from all three, if you'd like.
As Eric mentioned, there's nothing out there for organizing and cataloging images as we need. The apps purporting to are rudimentary and aren't fit for purpose.
There are a couple of relevant datapoints for sorting/filtering:
ckpt_name - There can be, and often are, multiple values.
noise_seed - There can be multiples (they're probably always identical, but I wouldn't bet a paycheck on it).
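As a sketch of how those datapoints could be gathered once the "prompt" JSON is parsed, something like this collects every occurrence of a named input across the node map (illustrative only; the sample node IDs in the usage mirror the dump above):

```python
def collect_input(workflow: dict, field: str) -> list:
    """Gather every (node_id, value) for a named input, e.g. 'ckpt_name' or 'seed'.

    Only plain scalar values are kept; structured inputs (like the
    {'content': ..., 'image': ...} dicts some loaders use) are skipped.
    """
    found = []
    for node_id, node in workflow.items():
        value = node.get("inputs", {}).get(field)
        if isinstance(value, (str, int, float)):
            found.append((node_id, value))
    return found

# usage, given a parsed workflow dict:
#   collect_input(workflow, "ckpt_name")  -> all checkpoint names, one per loader node
```

A tool that wants a single sort key would still have to decide which of the multiple values to prefer, which is exactly the question raised below.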
-
- Posts: 11
- Joined: Sat Jul 29, 2023 6:01 pm
Re: How to display Extra\Prompt in different views?
Thank you for jumping in and adding great info!
Already having the capability to view the information is wonderful, but it could be enhanced with search and display functions. The display could be limited to a few items that are always present in the generation data, like the ckpt_name and seed, as pointed out.
Search: In an ideal world, I'd be able to search for many of the terms inside the two big fields Extras/Prompt and Extras/Workflow.
For example, I might like to search for images I made with a certain Lora or even a certain upscale sampler model or even a combination of terms.
Thank you
Eric
-
- Author of XnView
- Posts: 45193
- Joined: Mon Oct 13, 2003 7:31 am
- Location: France
Re: How to display Extra\Prompt in different views?
In your example, there is more than 1 'ckpt_name', so which one to display?
Pierre.
-
- Author of XnView
- Posts: 45193
- Joined: Mon Oct 13, 2003 7:31 am
- Location: France
Re: How to display Extra\Prompt in different views?
See issue for current status and some details.
Pierre.