No-code platform to build and deploy web apps and APIs from ComfyUI workflows. Allows sharing workflows, scaling AI operations, and customizing apps without complex setup.
Turns ComfyUI workflows into web apps and APIs without writing code, making it easy to develop, deploy, and share workflows.
Users can upload their workflows, choose parameters, and run apps on scalable cloud infrastructure with minimal setup.
Enables sharing workflows with others, even if they don't have ComfyUI, via easy-to-use web apps or custom API endpoints.
Run apps and APIs on powerful cloud infrastructure instantly, providing scalable solutions for AI operations.
Users can download the source code for ViewComfy web apps, allowing for personalized customization.
Deploy APIs with a single click on any hardware you choose, simplifying the process of managing and deploying APIs.
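As an illustration of how a deployed workflow API is typically consumed, the sketch below assembles a JSON run request and shows where it would be POSTed. The endpoint URL, payload shape, and parameter names are assumptions for illustration, not ViewComfy's actual API.

```python
import json

def build_run_request(workflow_params: dict) -> str:
    """Assemble a JSON payload for a hypothetical deployed-workflow endpoint.

    The payload shape (a single "params" key) is an assumption for illustration.
    """
    return json.dumps({"params": workflow_params})

payload = build_run_request({"prompt": "a red bicycle", "steps": 20})

# With a real deployment you would POST this payload, e.g. (hypothetical URL):
# import urllib.request
# req = urllib.request.Request("https://<your-deployment>/api/run",
#                              data=payload.encode(),
#                              headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```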
Create beautiful user interfaces for your workflows without writing code, enabling quick and easy visualization of your processes.
Ensure your applications run efficiently with minimal delays during startup, providing an optimized and seamless experience.
Allows you to choose the LoRA and set its strength, which influences the output during video generation. A strength of 0 ignores the LoRA entirely, while a strength of 1 applies its full effect.
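Conceptually, the strength scales how much of the LoRA's weight deltas are mixed into the base model. A minimal numeric sketch of that blending (not the actual model code):

```python
def apply_lora(base_weight: float, lora_delta: float, strength: float) -> float:
    """Blend a LoRA weight delta into a base weight.

    strength = 0 leaves the base weight untouched (LoRA ignored);
    strength = 1 applies the LoRA's full effect.
    """
    return base_weight + strength * lora_delta

apply_lora(1.0, 0.5, 0.0)  # -> 1.0, LoRA ignored
apply_lora(1.0, 0.5, 1.0)  # -> 1.5, full effect
```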
Node responsible for loading the Hunyuan video model. Offers a bf16 model and an fp8 model, with the latter using less GPU memory but producing slightly lower output quality.
Lets you set the prompt for video creation using a specific format for subject, action, scene, style, and quality description.
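A small helper can assemble a prompt from those parts. The field order follows the format described (subject, action, scene, style, quality description); the comma separator is an assumption:

```python
def build_video_prompt(subject: str, action: str, scene: str,
                       style: str, quality: str) -> str:
    """Join the prompt parts in the recommended order, skipping empty fields."""
    parts = [subject, action, scene, style, quality]
    return ", ".join(p for p in parts if p)

build_video_prompt("a corgi", "running", "on a beach at sunset",
                   "cinematic", "high quality")
```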
Node where video generation occurs, letting you specify the width, height, and number of frames. Provides recommended settings for Hunyuan video based on resolution and aspect ratio.
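Hunyuan-style video models typically require a frame count of the form 4k + 1 (e.g., 49 or 129), because of the model's temporal compression; that constraint is an assumption based on common Hunyuan video setups. The helper below snaps a requested length to the nearest valid value:

```python
def nearest_valid_frames(requested: int) -> int:
    """Snap a frame count to the nearest value of the form 4k + 1.

    Assumes (frames - 1) must be divisible by 4, as is common
    for Hunyuan video workflows.
    """
    k = max(0, round((requested - 1) / 4))
    return 4 * k + 1

nearest_valid_frames(48)   # -> 49
nearest_valid_frames(129)  # -> 129
```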
Use ViewComfy Cloud to deploy ComfyUI workflows as an API or a web app, removing the need for coding and dependency management. Scales across multiple GPUs.
Access a large library of models that can be linked to your deployment. The platform guides you through downloading models if they are not in the library.
Customize the user interface, group parameters by categories, add preview images, and manage output types to tailor the interface to your needs.
Select hardware type and number of GPUs for deployment, optimizing the setup for specific workflows. Automatically scales GPU usage based on demand.
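Demand-based GPU scaling usually amounts to sizing the pool from the job queue. A minimal queue-based heuristic (a sketch, not ViewComfy's actual scaling logic):

```python
import math

def gpus_needed(queued_jobs: int, jobs_per_gpu: int, max_gpus: int) -> int:
    """Scale the GPU count with demand, capped at the configured maximum."""
    if queued_jobs <= 0:
        return 0  # scale to zero when idle (assumed policy)
    return min(max_gpus, math.ceil(queued_jobs / jobs_per_gpu))

gpus_needed(10, 4, 8)   # -> 3
gpus_needed(100, 4, 8)  # -> 8 (capped at max_gpus)
```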
Manage deployments through the ViewComfy dashboard, install new custom nodes, and edit workflows, offering flexibility and efficient handling.
Allows users to generate images using Stable Diffusion 3.5 in ComfyUI. Users need to download the SD 3.5 files and add the Stable Diffusion 3.5 checkpoint to their models folder to enable this feature.
Enables users to load specific checkpoints for the Stable Diffusion 3.5 model within ComfyUI. Users can load checkpoints to make customizations on the generated images.
Provides settings for adjusting image dimensions, specifically width and height for better performance with Stable Diffusion 3.5. Suggested dimensions include multiples of 8, like 1024x1024.
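The divisibility requirement can be enforced by snapping each dimension to the nearest multiple of 8:

```python
def snap_to_multiple_of_8(value: int) -> int:
    """Round a dimension to the nearest multiple of 8 (minimum 8)."""
    return max(8, int(round(value / 8)) * 8)

snap_to_multiple_of_8(1023)  # -> 1024
snap_to_multiple_of_8(517)   # -> 520
```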
Allows fine-tuning of image generation by setting a noise offset parameter. A value close to 1 is suggested for improved results.
Enables users to input both positive and negative prompts to manipulate image generation.
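Under the hood, positive and negative prompts are combined by classifier-free guidance: the prediction is pushed away from the negative conditioning and toward the positive one. A scalar sketch of the standard formula:

```python
def cfg_combine(uncond_pred: float, cond_pred: float, guidance_scale: float) -> float:
    """Classifier-free guidance: steer away from the negative (unconditional)
    prediction toward the positive one, scaled by guidance_scale."""
    return uncond_pred + guidance_scale * (cond_pred - uncond_pred)

cfg_combine(0.0, 1.0, 7.0)  # -> 7.0
cfg_combine(0.5, 0.5, 7.0)  # -> 0.5 (identical predictions: guidance has no effect)
```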
Selects the Stable Diffusion SDXL model used for upscaling. Any SDXL model can be used, offering broad compatibility.
Allows adding LoRAs to introduce new details to the image during generation. Multiple LoRAs can be stacked for customized results.
Includes nodes for both positive and negative prompts, enhancing the upscaling process by improving image description and results.
Uploads the image intended for upscaling, serving as the initial input for the process.
Core upscaling node that adjusts image size using a controlled scale factor. It works best with scale values between 2 and 4, with the accompanying strength setting typically between 0.3 and 0.6.
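Clamping the settings to those ranges and computing the output size can be sketched as follows (range values taken from the description above; treating the 0.3–0.6 figure as a denoise strength is an assumption):

```python
def upscale_settings(width: int, height: int, scale: float, denoise: float):
    """Clamp scale to [2, 4] and denoise to [0.3, 0.6], then compute output size."""
    scale = min(4.0, max(2.0, scale))
    denoise = min(0.6, max(0.3, denoise))
    return int(width * scale), int(height * scale), denoise

upscale_settings(512, 512, 2.0, 0.4)  # -> (1024, 1024, 0.4)
upscale_settings(512, 512, 5.0, 0.9)  # -> (2048, 2048, 0.6), both clamped
```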
Allows users to create custom image-to-image transformations without writing code by using a visual node-based editor to set up workflows.
Sets up a Jupyter Lab environment to manage and interact with ComfyUI workflows directly from a web-based interface.
Enables users to select specific hardware configurations for the application through a pod deployment system, tailored to their processing needs.
Provides tools for setting starting images and parameters for transformation, allowing for fine-tuned results.