Hints
- Below is the full list of hints as copied from `html/locale_en.json`
- The best way to submit additions or changes is to create a PR with changes to `html/locale_en.json`; alternatively, you can use discussions or the issue tracker to suggest changes and they will be applied manually
- Add or edit the hints displayed for each UI item
- Do not change the `label` field directly; if you have a suggestion for a better name, place it in the `localized` field so changes can be tracked
- If you want to validate your JSON file before submitting, save it locally and run `python cli/validate-locale.py`
- If you want to add additional hints, just follow the template and create a new entry whose `label` field matches exactly what is visible in the UI (see the example entry below)
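For illustration, a minimal sketch of what a new entry could look like (the label and hint values below are hypothetical placeholders, not taken from the actual locale file); a real entry keeps the same `id`/`label`/`localized`/`hint` structure, uses a `label` that matches the UI text exactly, and goes into the appropriate section array of `html/locale_en.json`:

```json
{"id":"","label":"Example Button","localized":"","hint":"Short tooltip describing what this UI element does"}
```

After adding or editing entries, the file can be checked locally with `python cli/validate-locale.py` before opening a PR.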
Full list of existing hints:
{"icons": [
{"id":"","label":"🎲️","localized":"","hint":"Use random seed"},
{"id":"","label":"♻️","localized":"","hint":"Reuse previous seed"},
{"id":"","label":"🔄","localized":"","hint":"Reset values"},
{"id":"","label":"⬆️","localized":"","hint":"Upload image"},
{"id":"","label":"⬅️","localized":"","hint":"Reuse image"},
{"id":"","label":"⇅","localized":"","hint":"Swap values"},
{"id":"","label":"⇦","localized":"","hint":"Read parameters from last generated image"},
{"id":"","label":"⊗","localized":"","hint":"Clear prompt"},
{"id":"","label":"🗁","localized":"","hint":"Show/hide extra networks"},
{"id":"","label":"⇰","localized":"","hint":"Apply selected styles to current prompt"},
{"id":"","label":"⇨","localized":"","hint":"Apply preset to Manual Block Merge tab"},
{"id":"","label":"⇩","localized":"","hint":"Save parameters from last generated image as style template"},
{"id":"","label":"🕮","localized":"","hint":"Save parameters from last generated image as style template"},
{"id":"","label":"⇕","localized":"","hint":"Sort by: Name asc/desc, Size largest/smallest, Time newest/oldest"},
{"id":"","label":"⟲","localized":"","hint":"Refresh"},
{"id":"","label":"✕","localized":"","hint":"Close"},
{"id":"","label":"⊜","localized":"","hint":"Fill"},
{"id":"","label":"⌾","localized":"","hint":"Load model as refiner model when selected, otherwise load as base model"},
{"id":"","label":"🔎︎","localized":"","hint":"Scan CivitAI for missing metadata and previews"},
{"id":"","label":"☲","localized":"","hint":"Change view type"},
{"id":"","label":"📐","localized":"","hint":"Measure"},
{"id":"","label":"🔍","localized":"","hint":"Search"},
{"id":"","label":"🖌️","localized":"","hint":"LaMa remove selected object from image"},
{"id":"","label":"🖼️","localized":"","hint":"Show preview"},
{"id":"","label":"✎","localized":"","hint":"Interrogate image using BLIP model"},
{"id":"","label":"✐","localized":"","hint":"Interrogate image using DeepBooru model"},
{"id":"","label":"↶","localized":"","hint":"Apply selected style to prompt"},
{"id":"","label":"↷","localized":"","hint":"Save current prompt to style"}
],
"prompts": [
{"id":"","label":"Prompt","localized":"","hint":"Describe image you want to generate"},
{"id":"","label":"Negative prompt","localized":"","hint":"Describe what you don't want to see in generated image"}
],
"common keywords": [
{"id":"","label":"fp16","localized":"","hint":"Number representation in 16-bit floating point format"},
{"id":"","label":"fp32","localized":"","hint":"Number representation in 32-bit floating point format"},
{"id":"","label":"bf16","localized":"","hint":"Number representation in alternative 16-bit floating point format"}
],
"tabs": [
{"id":"","label":"Text","localized":"","hint":"Create image from text"},
{"id":"","label":"Image","localized":"","hint":"Create image from image"},
{"id":"","label":"Control","localized":"","hint":"Create image with additional control"},
{"id":"","label":"Process","localized":"","hint":"Process existing image"},
{"id":"","label":"Interrogate","localized":"","hint":"Run interrogate to get description of your image"},
{"id":"","label":"Train","localized":"","hint":"Run training"},
{"id":"","label":"Models","localized":"","hint":"Convert or merge your models"},
{"id":"","label":"Agent Scheduler","localized":"","hint":"Enqueue your generate requests and run them in the background"},
{"id":"","label":"Image Browser","localized":"","hint":"Browse through your generated image database"},
{"id":"","label":"System","localized":"","hint":"System settings and information"},
{"id":"","label":"System Info","localized":"","hint":"System information"},
{"id":"","label":"Settings","localized":"","hint":"Application settings"},
{"id":"","label":"Script","localized":"","hint":"Additional scripts to be used"},
{"id":"","label":"Extensions","localized":"","hint":"Application extensions"}
],
"action panel": [
{"id":"","label":"Generate","localized":"","hint":"Start processing"},
{"id":"","label":"Enqueue","localized":"","hint":"Add task to background queue in Agent Scheduler"},
{"id":"","label":"Stop","localized":"","hint":"Stop processing"},
{"id":"","label":"Skip","localized":"","hint":"Stop processing current job and continue processing"},
{"id":"","label":"Pause","localized":"","hint":"Pause processing"},
{"id":"","label":"Restore","localized":"","hint":"Restore parameters from current prompt or last known generated image"},
{"id":"","label":"Clear","localized":"","hint":"Clear prompts"},
{"id":"","label":"Networks","localized":"","hint":"Open extra network interface"}
],
"extra networks": [
{"id":"","label":"ui position","localized":"","hint":"Location of extra networks"},
{"id":"","label":"cover","localized":"","hint":"cover full area"},
{"id":"","label":"inline","localized":"","hint":"inline with all additional elements (scrollable)"},
{"id":"","label":"sidebar","localized":"","hint":"sidebar on the right side of the screen"},
{"id":"","label":"Default multiplier for extra networks","localized":"","hint":"When adding extra network such as Lora to prompt, use this multiplier for it"},
{"id":"","label":"Model","localized":"","hint":"Trained model checkpoints"},
{"id":"","label":"Style","localized":"","hint":"Additional styles to be applied on selected generation parameters"},
{"id":"","label":"Styles","localized":"","hint":"Additional styles to be applied on selected generation parameters"},
{"id":"","label":"Lora","localized":"","hint":"LoRA: Low-Rank Adaptation. Fine-tuned model that is applied on top of a loaded model"},
{"id":"","label":"Embedding","localized":"","hint":"Textual inversion embedding is a trained embedded information about the subject"},
{"id":"","label":"Hypernetwork","localized":"","hint":"Small trained neural network that modifies behavior of the loaded model"},
{"id":"","label":"UI disable variable aspect ratio","localized":"","hint":"When disabled, all thumbnails appear as squared images"},
{"id":"","label":"Build info on first access","localized":"","hint":"Prevents server from building EN page on server startup and instead build it when requested"},
{"id":"","label":"Show built-in styles","localized":"","hint":"Show or hide build-it styles"},
{"id":"","label":"LoRA use alternative loading method","localized":"","hint":"Alternative method uses diffusers built-in LoRA capabilities instead of native SD.Next implementation (may reduce LoRA compatibility)"},
{"id":"","label":"LoRA use merge when using alternative method","localized":"","hint":"When loading LoRAs, immediately merge weights with underlying model instead of applying them on-the-fly"},
{"id":"","label":"LoRA memory cache","localized":"","hint":"How many LoRAs to keep in network for future use before requiring reloading from storage"}
],
"gallery buttons": [
{"id":"","label":"show","localized":"","hint":"Show image location"},
{"id":"","label":"save","localized":"","hint":"Save image"},
{"id":"","label":"delete","localized":"","hint":"Delete image"},
{"id":"","label":"replace","localized":"","hint":"Replace image"},
{"id":"","label":"➠ text","localized":"","hint":"Transfer image to text interface"},
{"id":"","label":"➠ image","localized":"","hint":"Transfer image to image interface"},
{"id":"","label":"➠ inpaint","localized":"","hint":"Transfer image to inpaint interface"},
{"id":"","label":"➠ sketch","localized":"","hint":"Transfer image to sketch interface"},
{"id":"","label":"➠ composite","localized":"","hint":"Transfer image to inpaint sketch interface"},
{"id":"","label":"➠ process","localized":"","hint":"Transfer image to process interface"}
],
"extensions": [
{"id":"","label":"Install","localized":"","hint":"Install"},
{"id":"","label":"Search","localized":"","hint":"Search"},
{"id":"","label":"Sort by","localized":"","hint":"Sort by"},
{"id":"","label":"Manage extensions","localized":"","hint":"Manage extensions"},
{"id":"","label":"Manual install","localized":"","hint":"Manually install extension"},
{"id":"","label":"Extension GIT repository URL","localized":"","hint":"Specify extension repository URL on GitHub"},
{"id":"","label":"Specific branch name","localized":"","hint":"Specify extension branch name, leave blank for default"},
{"id":"","label":"Local directory name","localized":"","hint":"Directory where to install extension, leave blank for default"},
{"id":"","label":"Refresh extension list","localized":"","hint":"Refresh list of available extensions"},
{"id":"","label":"Update all installed","localized":"","hint":"Update installed extensions to their latest available version"},
{"id":"","label":"Apply changes","localized":"","hint":"Apply all changes and restart server"},
{"id":"","label":"install","localized":"","hint":"install this extension"},
{"id":"","label":"uninstall","localized":"","hint":"uninstall this extension"},
{"id":"","label":"User interface","localized":"","hint":"Review and set current values as default values for the user interface"},
{"id":"","label":"Set new defaults","localized":"","hint":"Set current values as default values for the user interface"},
{"id":"","label":"Benchmark","localized":"","hint":"Run benchmarks"},
{"id":"","label":"Models & Networks","localized":"","hint":"View lists of all available models and networks"},
{"id":"","label":"Restore defaults","localized":"","hint":"Restore default user interface values"}
],
"txt2img tab": [
{"id":"","label":"Sampling method","localized":"","hint":"Which algorithm to use to produce the image"},
{"id":"","label":"Steps","localized":"","hint":"How many times to improve the generated image iteratively; higher values take longer; very low values can produce bad results"},
{"id":"","label":"Tiling","localized":"","hint":"Produce an image that can be tiled"},
{"id":"","label":"full quality","localized":"","hint":"Use full quality VAE to decode latent samples"},
{"id":"","label":"detailer","localized":"","hint":"Run processed image through additional detailer model"},
{"id":"","label":"hidiffusion","localized":"","hint":"HiDiffusion allows creation of high-resolution images using your standard models without duplicates/distortions and improved performance"},
{"id":"","label":"HDR Clamp","localized":"","hint":"Adjusts the level of nonsensical details by pruning values that deviate significantly from the distribution mean. It is particularly useful for enhancing generation at higher guidance scales, identifying outliers early in the process and applying mathematical adjustments based on the Range (Boundary) and Threshold settings. Think of it as setting the range within which you want your image values to be, and adjusting the threshold determines which values should be brought back into that range"},
{"id":"","label":"HDR Maximize","localized":"","hint":"Calculates a 'normalization factor' by dividing the maximum tensor value by the specified range multiplied by 4. This factor is then used to shift the channels within the given boundary, ensuring maximum dynamic range for subsequent processing. The objective is to optimize dynamic range for external applications like Photoshop, particularly for adjusting levels, contrast, and brightness"},
{"id":"","label":"Enable refine pass","localized":"","hint":"Use a similar process as image to image to upscale and/or add detail to the final image. Optionally uses refiner model to enhance image details."},
{"id":"","label":"Denoising strength","localized":"","hint":"Determines how little respect the algorithm should have for image's content. At 0, nothing will change, and at 1 you'll get an unrelated image. With values below 1.0, processing will take less steps than the Sampling Steps slider specifies"},
{"id":"","label":"Denoise start","localized":"","hint":"Override denoise strength by stating how early base model should finish and when refiner should start. Only applicable to refiner usage. If set to 0 or 1, denoising strength will be used"},
{"id":"","label":"Hires steps","localized":"","hint":"Number of sampling steps for upscaled picture. If 0, uses same as for original"},
{"id":"","label":"Upscaler","localized":"","hint":"Which pre-trained model to use for the upscaling process."},
{"id":"","label":"Upscale by","localized":"","hint":"Adjusts the size of the image by multiplying the original width and height by the selected value. Ignored if either Resize width to or Resize height to are non-zero"},
{"id":"","label":"Force Hires","localized":"","hint":"Hires runs automatically when Latent upscale is selected, but its skipped when using non-latent upscalers. Enable force hires to run hires with non-latent upscalers"},
{"id":"","label":"Resize width to","localized":"","hint":"Resizes image to this width. If 0, width is inferred from either of two nearby sliders"},
{"id":"","label":"Resize height to","localized":"","hint":"Resizes image to this height. If 0, height is inferred from either of two nearby sliders"},
{"id":"","label":"Refine sampler","localized":"","hint":"Use specific sampler as fallback sampler if primary is not supported for specific operation"},
{"id":"","label":"Refiner start","localized":"","hint":"Refiner pass will start when base model is this much complete (set to larger than 0 and smaller than 1 to run after full base model run)"},
{"id":"","label":"Refiner steps","localized":"","hint":"Number of steps to use for refiner pass"},
{"id":"","label":"Refine CFG Scale","localized":"","hint":"CFG scale used for refiner pass"},
{"id":"","label":"Rescale guidance","localized":"","hint":"Rescale CFG generated noise to avoid overexposed images"},
{"id":"","label":"Refine Prompt","localized":"","hint":"Prompt used for both second encoder in base model (if it exists) and for refiner pass (if enabled)"},
{"id":"","label":"Refine negative prompt","localized":"","hint":"Negative prompt used for both second encoder in base model (if it exists) and for refiner pass (if enabled)"},
{"id":"","label":"Width","localized":"","hint":"Image width"},
{"id":"","label":"Height","localized":"","hint":"Image height"},
{"id":"","label":"Batch count","localized":"","hint":"How many batches of images to create (has no impact on generation performance or VRAM usage)"},
{"id":"","label":"Batch size","localized":"","hint":"How many image to create in a single batch (increases generation performance at cost of higher VRAM usage)"},
{"id":"","label":"cfg scale","localized":"","hint":"Classifier Free Guidance scale: how strongly the image should conform to prompt. Lower values produce more creative results, higher values make it follow the prompt more strictly; recommended values between 5-10"},
{"id":"","label":"Guidance End","localized":"","hint":"Ends the effect of CFG and PAG early: A value of 1 acts as normal, 0.5 stops guidance at 50% of steps"},
{"id":"","label":"CLIP skip","localized":"","hint":"Clip skip is a feature that allows users to control the level of specificity of the prompt, the higher the CLIP skip value, the less deep the prompt will be interpreted. CLIP Skip 1 is typical while some anime models produce better results at CLIP skip 2"},
{"id":"","label":"Initial seed","localized":"","hint":"A value that determines the output of random number generator - if you create an image with same parameters and seed as another image, you'll get the same result"},
{"id":"","label":"Variation","localized":"","hint":"Second seed to be mixed with primary seed"},
{"id":"","label":"Variation strength","localized":"","hint":"How strong of a variation to produce. At 0, there will be no effect. At 1, you will get the complete picture with variation seed (except for ancestral samplers, where you will just get something)"},
{"id":"","label":"Resize seed from width","localized":"","hint":"Make an attempt to produce a picture similar to what would have been produced with same seed at specified resolution"},
{"id":"","label":"Resize seed from height","localized":"","hint":"Make an attempt to produce a picture similar to what would have been produced with same seed at specified resolution"},
{"id":"","label":"Override settings","localized":"","hint":"If you read in generation parameters through 'Process Image tab' and individual generation parameters should deviate from your system settings, this box will be populated with those settings to override your system configuration for this workflow"}
],
"img2img tab": [
{"id":"","label":"Fixed","localized":"","hint":"Resize image to target resolution. Unless height and width match, you will get incorrect aspect ratio"},
{"id":"","label":"Crop","localized":"","hint":"Resize the image so that entirety of target resolution is filled with the image. Crop parts that stick out"},
{"id":"","label":"Fill","localized":"","hint":"Resize the image so that entirety of image is inside target resolution. Fill empty space with image's colors"},
{"id":"","label":"Mask blur","localized":"","hint":"How much to blur the mask before processing, in pixels"},
{"id":"","label":"original","localized":"","hint":"keep whatever was there originally"},
{"id":"","label":"latent noise","localized":"","hint":"fill it with latent space noise"},
{"id":"","label":"latent nothing","localized":"","hint":"fill it with latent space zeroes"}
],
"control tab": [
{"id":"","label":"Guess Mode","localized":"","hint":"Removes the requirement to supply a prompt to a ControlNet. It forces Controlnet encoder to do it's 'best guess' based on the contents of the input control map."},
{"id":"","label":"Control Only","localized":"","hint":"This uses only the Control input below as the source for any ControlNet or IP Adapter type tasks based on any of our various options."},
{"id":"","label":"Init Image Same As Control","localized":"","hint":"Will additionally treat any image placed into the Control input window as a source for img2img type tasks, an image to modify for example."},
{"id":"","label":"Separate Init Image","localized":"","hint":"Creates an additional window next to Control input labeled Init input, so you can have a separate image for both Control operations and an init source."}
],
"process tab": [
{"id":"","label":"Process Image","localized":"","hint":"Process single image"},
{"id":"","label":"Process Batch","localized":"","hint":"Process batch of images"},
{"id":"","label":"Process Folder","localized":"","hint":"Process all images in a folder"},
{"id":"","label":"Scale by","localized":"","hint":"Use this tab to resize the source image(s) by a chosen factor"},
{"id":"","label":"Scale to","localized":"","hint":"Use this tab to resize the source image(s) to a chosen target size"},
{"id":"","label":"Input directory","localized":"","hint":"Folder where the images are that you want to process"},
{"id":"","label":"Output directory","localized":"","hint":"Folder where the processed images should be saved to"},
{"id":"","label":"Show result images","localized":"","hint":"Enable to show the processed images in the image pane"},
{"id":"","label":"Resize","localized":"","hint":"Resizing details. Higher resolutions require additional processing memory."},
{"id":"","label":"Crop to fit","localized":"","hint":"If the dimensions of your source image (e.g. 512x510) deviate from your target dimensions (e.g. 1024x768) this function will fit your upscaled image into your target size image. Excess will be cropped"},
{"id":"","label":"Refine Upscaler","localized":"","hint":"Select secondary upscaler to run after initial upscaler"},
{"id":"","label":"Upscaler 2 visibility","localized":"","hint":"Strength of the secondary upscaler"}
],
"models tabs": [
{"id":"","label":"Calculate hash for all models","localized":"","hint":"Calculates hash for all available models which may take a very long time"},
{"id":"","label":"Weights Clip","localized":"","hint":"Forced merged weights to be no heavier than the original model, preventing burn in and overly saturated models"},
{"id":"","label":"ReBasin","localized":"","hint":"Performs multiple merges with permutations in order to keep more features from both models"},
{"id":"","label":"Number of ReBasin Iterations","localized":"","hint":"Number of times to merge and permute the model before saving"},
{"id":"","label":"cpu","localized":"","hint":"Uses cpu and RAM only: slowest but least likely to OOM"},
{"id":"","label":"shuffle","localized":"","hint":"Loads full model in RAM and calculates on VRAM: Less speedup, suggested for SDXL merges"},
{"id":"","label":"cuda","localized":"","hint":"Loads models into VRAM automatically unloading current model: fastest option but unlikely to handle SDXL Models without OOM"},
{"id":"","label":"Base","localized":"","hint":"Text Encoder and a few unaligned keys (1 value)"},
{"id":"","label":"In Blocks","localized":"","hint":"Downsampling Blocks of the UNet (12 values for SD1.5, 9 values for SDXL)"},
{"id":"","label":"Mid Block","localized":"","hint":"Central Block of the UNet (1 value)"},
{"id":"","label":"Out Block","localized":"","hint":"Upsampling Blocks of the UNet (12 values for SD1.5, 9 values for SDXL)"},
{"id":"","label":"Preset Interpolation Ratio","localized":"","hint":"If two presets are selected, interpolate between them"}
],
"train tab": [
{"id":"","label":"Initialization text","localized":"","hint":"If the number of tokens is more than the number of vectors, some may be skipped.\nLeave the textbox empty to start with zeroed out vectors"},
{"id":"","label":"Select activation function of hypernetwork","localized":"","hint":"Recommended : Swish / Linear(none)"},
{"id":"","label":"Select Layer weights initialization","localized":"","hint":"Recommended: Kaiming for relu-like, Xavier for sigmoid-like, Normal otherwise"},
{"id":"","label":"Enter hypernetwork Dropout structure","localized":"","hint":"Recommended : leave empty or 0~0.35 incrementing sequence: 0, 0.05, 0.15"},
{"id":"","label":"Create interim images","localized":"","hint":"Save an image to log directory every N steps, 0 to disable"},
{"id":"","label":"Create interim embeddings","localized":"","hint":"Save a copy of embedding to log directory every N steps, 0 to disable"},
{"id":"","label":"Use current settings for previews","localized":"","hint":"Read parameters (prompt, etc...) from txt2img tab when making previews"},
{"id":"","label":"Shuffle tags","localized":"","hint":"Shuffle tags by ',' when creating prompts"}
],
"settings menu": [
{"id":"settings_submit","label":"Apply settings","localized":"","hint":"Save current settings, server restart is recommended"},
{"id":"restart_submit","label":"Restart server","localized":"","hint":"Restart server"},
{"id":"shutdown_submit","label":"Shutdown server","localized":"","hint":"Shutdown server"},
{"id":"settings_preview_theme","label":"Preview theme","localized":"","hint":"Show theme preview"},
{"id":"defaults_submit","label":"Restore defaults","localized":"","hint":"Restore default server settings"},
{"id":"sett_unload_sd_model","label":"Unload model","localized":"","hint":"Unload currently loaded model"},
{"id":"sett_reload_sd_model","label":"Reload model","localized":"","hint":"Reload currently selected model"}
],
"settings sections": [
{"id":"","label":"Execution & Models","localized":"","hint":"Settings related to execution backend, models, and prompt attention"},
{"id":"","label":"Compute Settings","localized":"","hint":"Settings related to precision, cross attention, model compilation, and optimizations for computing platforms"},
{"id":"","label":"Inference Settings","localized":"","hint":"Settings related image inference, token merging, FreeU, and Hypertile"},
{"id":"","label":"Diffusers Settings","localized":"","hint":"Settings related to Diffusers backend"},
{"id":"","label":"System Paths","localized":"","hint":"Settings related to location of various model directories"},
{"id":"","label":"Image Options","localized":"","hint":"Settings related to image format, metadata, and image grids"},
{"id":"","label":"image naming & paths","localized":"","hint":"Settings related to image filenames, and output directories"},
{"id":"","label":"User Interface Options","localized":"","hint":"Settings related to user interface themes, and Quicksettings list"},
{"id":"","label":"Live Previews","localized":"","hint":"Settings related to live previews, audio notification, and log view"},
{"id":"","label":"Sampler Settings","localized":"","hint":"Settings related to sampler selection and configuration, and diffuser specific sampler configuration"},
{"id":"","label":"Postprocessing","localized":"","hint":"Settings related to post image generation processing, face restoration, and upscaling"},
{"id":"","label":"Control Options","localized":"","hint":"Settings related the Control tab"},
{"id":"","label":"Training","localized":"","hint":"Settings related to model training configuration and directories"},
{"id":"","label":"Interrogate","localized":"","hint":"Settings related to interrogation configuration"},
{"id":"","label":"Networks","localized":"","hint":"Settings related to networks user interface, networks multiplier defaults, and configuration"},
{"id":"","label":"Licenses","localized":"","hint":"View licenses of all additional included libraries"},
{"id":"","label":"Show all pages","localized":"","hint":"Show all settings pages"}
],
"settings": [
{"id":"","label":"base model","localized":"","hint":"Main model used for all operations"},
{"id":"","label":"refiner model","localized":"","hint":"Refiner model used for second-pass operations"},
{"id":"","label":"Cached models","localized":"","hint":"The number of models to store in RAM for quick access"},
{"id":"","label":"Cached VAEs","localized":"","hint":"The number of VAE files to store in RAM for quick access"},
{"id":"","label":"VAE model","localized":"","hint":"VAE helps with fine details in the final image and may also alter colors"},
{"id":"","label":"Load models using stream loading method","localized":"","hint":"When loading models attempt stream loading optimized for slow or network storage"},
{"id":"","label":"xFormers","localized":"","hint":"Memory optimization. Non-Deterministic (different results each time)"},
{"id":"","label":"Scaled-Dot-Product","localized":"","hint":"Memory optimization. Non-Deterministic unless SDP memory attention is disabled."},
{"id":"","label":"Prompt padding","localized":"","hint":"Increase coherency by padding from the last comma within n tokens when using more than 75 tokens"},
{"id":"","label":"Original","localized":"","hint":"Original LDM backend"},
{"id":"","label":"Diffusers","localized":"","hint":"Diffusers backend"},
{"id":"","label":"Autocast","localized":"","hint":"Automatically determine precision during runtime"},
{"id":"","label":"Full","localized":"","hint":"Always use full precision"},
{"id":"","label":"FP32","localized":"","hint":"Use 32-bit floating point precision for calculations"},
{"id":"","label":"FP16","localized":"","hint":"Use 16-bit floating point precision for calculations"},
{"id":"","label":"BF16","localized":"","hint":"Use modified 16-bit floating point precision for calculations"},
{"id":"","label":"Full precision for model (--no-half)","localized":"","hint":"Uses FP32 for the model. May produce better results while using more VRAM and slower generation"},
{"id":"","label":"Full precision for VAE (--no-half-vae)","localized":"","hint":"Uses FP32 for the VAE. May produce better results while using more VRAM and slower generation"},
{"id":"","label":"Upcast sampling","localized":"","hint":"Usually produces similar results to --no-half with better performance while using less memory"},
{"id":"","label":"Attempt VAE roll back for NaN values","localized":"","hint":"Requires Torch 2.1 and NaN check enabled"},
{"id":"","label":"DirectML memory stats provider","localized":"","hint":"How to get GPU memory stats"},
{"id":"","label":"DirectML retry ops for NaN","localized":"","hint":"Retry specific operations if their output was NaN. This may make your generation slower"},
{"id":"","label":"Olive use FP16 on optimization","localized":"","hint":"Use 16-bit floating point precision for the output model of Olive optimization process. Use 32-bit floating point precision if disabled"},
{"id":"","label":"Olive force FP32 for VAE Encoder","localized":"","hint":"Use 32-bit floating point precision for VAE Encoder of the output model. This overrides 'use FP16 on optimization' option. If you are getting NaN or black blank images from Img2Img, enable this option and remove cache"},
{"id":"","label":"Olive use static dimensions","localized":"","hint":"Make the inference with Olive optimized models much faster. (OrtTransformersOptimization)"},
{"id":"","label":"Olive cache optimized models","localized":"","hint":"Save Olive processed models as a cache. You can manage them in ONNX tab"},
{"id":"","label":"File format","localized":"","hint":"Select file format for images"},
{"id":"","label":"Include metadata","localized":"","hint":"Save image create parameters as metadata tags inside image file"},
{"id":"","label":"Images filename pattern","localized":"","hint":"Use following tags to define how filenames for images are chosen:<br><pre>seq, uuid<br>date, datetime, job_timestamp<br>generation_number, batch_number<br>model, model_shortname<br>model_hash, model_name<br>sampler, seed, steps, cfg<br>clip_skip, denoising<br>hasprompt, prompt, styles<br>prompt_hash, prompt_no_styles<br>prompt_spaces, prompt_words<br>height, width, image_hash<br></pre>"},
{"id":"","label":"Row count","localized":"","hint":"Use -1 for autodetect and 0 for it to be same as batch size"},
{"id":"","label":"Update JSON log file per image","localized":"","hint":"Save image information to a JSON file"},
{"id":"","label":"Directory name pattern","localized":"","hint":"Use following tags to define how subdirectories for images and grids are chosen: [steps], [cfg],[prompt_hash], [prompt], [prompt_no_styles], [prompt_spaces], [width], [height], [styles], [sampler], [seed], [model_hash], [model_name], [prompt_words], [date], [datetime], [datetime<Format>], [datetime<Format><Time Zone>], [job_timestamp]; leave empty for default"},
{"id":"","label":"Inpainting conditioning mask strength","localized":"","hint":"Determines how strongly to mask off the original image for inpainting and img2img. 1.0 means fully masked (default). 0.0 means a fully unmasked conditioning. Lower values will help preserve the overall composition of the image, but will struggle with large changes"},
{"id":"","label":"Clip skip","localized":"","hint":"Early stopping parameter for CLIP model; 1 is stop at last layer as usual, 2 is stop at penultimate layer, etc"},
{"id":"","label":"Images folder","localized":"","hint":"If empty, defaults to three directories below"},
{"id":"","label":"Grids folder","localized":"","hint":"If empty, defaults to two directories below"},
{"id":"","label":"Quicksettings list","localized":"","hint":"List of setting names, separated by commas, for settings that should go to the quick access bar at the top instead the setting tab"},
{"id":"","label":"Live preview display period","localized":"","hint":"Request preview image every n steps, set to 0 to disable"},
{"id":"","label":"Approximate","localized":"","hint":"Cheap neural network approximation. Very fast compared to VAE, but produces pictures with 4 times smaller horizontal/vertical resolution and lower quality"},
{"id":"","label":"Simple","localized":"","hint":"Very cheap approximation. Very fast compared to VAE, but produces pictures with 8 times smaller horizontal/vertical resolution and extremely low quality"},
{"id":"","label":"Progress update period","localized":"","hint":"Update period for UI progress bar and preview checks, in milliseconds"},
{"id":"","label":"Euler a","localized":"","hint":"Euler Ancestral - very creative, each can get a completely different picture depending on step count, setting steps higher than 30-40 does not help"},
{"id":"","label":"DPM adaptive","localized":"","hint":"Ignores step count - uses a number of steps determined by the CFG and resolution"},
{"id":"","label":"DDIM","localized":"","hint":"Denoising Diffusion Implicit Models - best at inpainting"},
{"id":"","label":"UniPC","localized":"","hint":"Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models"},
{"id":"","label":"sigma negative guidance minimum","localized":"","hint":"Skip negative prompt for some steps when the image is almost ready, 0=disable"},
{"id":"","label":"Filename word regex","localized":"","hint":"This regular expression will be used extract words from filename, and they will be joined using the option below into label text used for training. Leave empty to keep filename text as it is"},
{"id":"","label":"Filename join string","localized":"","hint":"This string will be used to join split words into a single line if the option above is enabled"},
{"id":"","label":"Tensorboard flush period","localized":"","hint":"How often, in seconds, to flush the pending Tensorboard events and summaries to disk"},
{"id":"","label":"Interrogate: minimum description length","localized":"","hint":"Interrogate: minimum description length (excluding artists, etc..)"},
{"id":"","label":"CLIP: maximum number of lines in text file","localized":"","hint":"CLIP: maximum number of lines in text file (0 = No limit)"},
{"id":"","label":"Escape brackets in deepbooru","localized":"","hint":"Escape (\\) brackets in deepbooru so they are used as literal brackets and not for emphasis"},
{"id":"","label":"Filter out tags from deepbooru output","localized":"","hint":"Filter out those tags from deepbooru output (separated by comma)"},
{"id":"","label":"Upscaler tile size","localized":"","hint":"0 = no tiling"},
{"id":"","label":"Upscaler tile overlap","localized":"","hint":"Low values = visible seam"},
{"id":"","label":"GFPGAN","localized":"","hint":"Restore low quality faces using GFPGAN neural network"},
{"id":"","label":"CodeFormer weight parameter","localized":"","hint":"0 = maximum effect; 1 = minimum effect"},
{"id":"","label":"Token merging ratio for txt2img","localized":"","hint":"Enable redundant token merging via tomesd for speed and memory improvements, 0=disabled"},
{"id":"","label":"Token merging ratio for img2img","localized":"","hint":"Enable redundant token merging for img2img via tomesd for speed and memory improvements, 0=disabled"},
{"id":"","label":"Token merging ratio for hires","localized":"","hint":"Enable redundant token merging for hires pass via tomesd for speed and memory improvements, 0=disabled"},
{"id":"","label":"Diffusers pipeline","localized":"","hint":"If autodetect does not detect model automatically, select model type before loading a model"},
{"id":"","label":"Sequential CPU offload (--lowvram)","localized":"","hint":"Reduces GPU memory usage by transferring weights to the CPU. Increases inference time approximately 10%"},
{"id":"","label":"Model CPU offload (--medvram)","localized":"","hint":"Transferring of entire models to the CPU, negligible impact on inference time while still providing some memory savings"},
{"id":"","label":"VAE slicing","localized":"","hint":"Decodes batch latents one image at a time with limited VRAM. Small performance boost in VAE decode on multi-image batches"},
{"id":"","label":"VAE tiling","localized":"","hint":"Divide large images into overlapping tiles with limited VRAM. Results in a minor increase in processing time"},
{"id":"","label":"Attention slicing","localized":"","hint":"Performs attention computation in steps instead of all at once. Slower inference times, but greatly reduced memory usage"},
{"id":"","label":"Execution Provider","localized":"","hint":"ONNX Execution Provider"},
{"id":"","label":"ONNX allow fallback to CPU","localized":"","hint":"Allow fallback to CPU when selected execution provider failed"},
{"id":"","label":"ONNX cache converted models","localized":"","hint":"Save the models that are converted to ONNX format as a cache. You can manage them in ONNX tab"},
{"id":"","label":"ONNX unload base model when processing refiner","localized":"","hint":"Unload base model when the refiner is being converted/optimized/processed"},
{"id":"","label":"inference-mode","localized":"","hint":"Use torch.inference_mode"},
{"id":"","label":"no-grad","localized":"","hint":"Use torch.no_grad"},
{"id":"","label":"model compile precompile","localized":"","hint":"Run model compile immediately on model load instead of first use"},
{"id":"","label":"force zeros for prompts when empty","localized":"","hint":"Force full zero tensor when prompt is empty to remove any residual noise"},
{"id":"","label":"require aesthetics score","localized":"","hint":"Automatically guide model towards higher-pleasing results, applicable only to refiner model"},
{"id":"","label":"include watermark","localized":"","hint":"Add invisible watermark to image by altering some pixel values"},
{"id":"","label":"watermark string","localized":"","hint":"Watermark string to add to image. Keep very short to avoid image corruption."},
{"id":"","label":"show log view","localized":"","hint":"Show log view at the bottom of the main window"},
{"id":"","label":"Log view update period","localized":"","hint":"Log view update period, in milliseconds"},
{"id":"","label":"PAG layer names","localized":"","hint":"Space separated list of layers<br>Available: d[0-5], m[0], u[0-8]<br>Default: m0"}
],
"scripts": [
{"id":"","label":"X values","localized":"","hint":"Separate values for X axis using commas"},
{"id":"","label":"Y values","localized":"","hint":"Separate values for Y axis using commas"},
{"id":"","label":"Z values","localized":"","hint":"Separate values for Z axis using commas"},
{"id":"","label":"Override `Sampling method` to Euler","localized":"","hint":"(this method is built for it)"},
{"id":"","label":"Loops","localized":"","hint":"How many times to process an image. Each output is used as the input of the next loop. If set to 1, behavior will be as if this script were not used"},
{"id":"","label":"Final denoising strength","localized":"","hint":"The denoising strength for the final loop of each image in the batch"},
{"id":"","label":"Denoising strength curve","localized":"","hint":"The denoising curve controls the rate of denoising strength change each loop. Aggressive: Most of the change will happen towards the start of the loops. Linear: Change will be constant through all loops. Lazy: Most of the change will happen towards the end of the loops"},
{"id":"","label":"Tile overlap","localized":"","hint":"For SD upscale, how much overlap in pixels should there be between tiles. Tiles overlap so that when they are merged back into one picture, there is no clearly visible seam"}
]
}
```