
Search results

  1. March 13, 2023 · Check whether the checkpoint is compatible with the StyleModel/ClipVision model. An SD1.5 StyleModel/ClipVision model cannot be mixed with an SDXL checkpoint.

  2. Load Style Model node. The Load Style Model node can be used to load a style model. A style model provides the diffusion model with a visual hint about the style the denoised latent should take. Inputs: style_model_name (the name of the style model). Outputs: STYLE_MODEL (a style model used to give the diffusion model a visual hint of the desired style).

    • Overview
    • Important updates
    • What is it?
    • Example workflow
    • Installation
    • How to
    • Troubleshooting
    • Diffusers version

    ComfyUI reference implementation for IPAdapter models.

    IPAdapter implementation that follows the ComfyUI way of doing things. The code is memory efficient, fast, and shouldn't break with Comfy updates.

    2024/02/02: Added experimental tiled IPAdapter. It lets you easily handle reference images that are not square. Can be useful for upscaling.

    2024/01/19: Support for FaceID Portrait models.

    2024/01/16: Notably increased quality of FaceID Plus/v2 models. Check the comparison of all face models.

    2023/12/30: Added support for FaceID Plus v2 models. Important: this update again breaks the previous implementation. This time I had to make a new node just for FaceID. The base IPAdapter Apply node will work with all previous models; for all FaceID models you'll find an IPAdapter Apply FaceID node. When using v2 remember to check the v2 options otherwise it won't work as expected! As always the examples directory is full of workflows for you to play with.

    2023/12/28: Added support for FaceID Plus models. Important: this update breaks the previous implementation of FaceID. Check the updated workflows in the example directory! Remember to refresh the browser ComfyUI page to clear up the local cache.

    2023/12/22: Added support for FaceID models. Read the documentation for details.

    IPAdapter models are very powerful for image-to-image conditioning. Given a reference image you can create variations augmented by text prompts, ControlNets, and masks. Think of it as a one-image LoRA.

    The example directory has many workflows that cover all IPAdapter functionalities.

    Download or git clone this repository inside the ComfyUI/custom_nodes/ directory, or use the Manager. Beware that the Manager's automatic update sometimes fails, so you may need to upgrade manually.

    The pre-trained models are available on Hugging Face; download them and place them in the ComfyUI/models/ipadapter directory (create it if not present). You can also use any custom location by setting an ipadapter entry in the extra_model_paths.yaml file.
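    For illustration, such an entry in extra_model_paths.yaml might look like the fragment below; the section name and base_path are placeholders to adapt to your own installation:

```yaml
# extra_model_paths.yaml -- illustrative entry only;
# "my_models" and the base_path are placeholders
my_models:
  base_path: /path/to/your/models
  ipadapter: ipadapter
```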

    IPAdapter also needs the image encoders CLIP-ViT-H-14-laion2B-s32B-b79K and CLIP-ViT-bigG-14-laion2B-39B-b160k, which you may already have. If you don't, download them, but be careful: both files have the same name! Rename them to something easy to remember and place them in the ComfyUI/models/clip_vision/ directory.

    The following table shows the combination of checkpoint and image encoder to use for each IPAdapter model. Any tensor size mismatch you get is likely caused by a wrong combination.

    FaceID requires insightface; you need to install it in your ComfyUI environment. Check this issue for help.

    Once the dependencies are satisfied, you need:

    There's a basic workflow included in this repo and a few examples in the examples directory. Usually it's a good idea to lower the weight to at least 0.8.

    The noise parameter is an experimental exploitation of the IPAdapter models. You can set it as low as 0.01 for an arguably better result.

    More info about the noise option

    Basically the IPAdapter sends two pictures for the conditioning: one is the reference, the other (which you don't see) is an empty image that can be considered a kind of negative conditioning.
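    As an illustration of that idea, the sketch below builds such a (reference, empty) pair with NumPy. The function and the way noise is injected are hypothetical, not ComfyUI's actual implementation:

```python
import numpy as np

def build_conditioning_images(reference: np.ndarray, noise: float = 0.0):
    """Return the (positive, negative) image pair described above.

    Hypothetical sketch: the positive image is the user's reference;
    the negative starts as an empty (black) image, optionally perturbed
    by uniform noise scaled by the `noise` parameter.
    """
    negative = np.zeros_like(reference, dtype=np.float32)
    if noise > 0:
        rng = np.random.default_rng(0)  # fixed seed for reproducibility
        negative = noise * rng.random(reference.shape, dtype=np.float32)
    return reference.astype(np.float32), negative
```

    With noise set to a small value such as 0.01, the negative image stays nearly empty but is no longer a perfectly flat black, which is the knob the noise option exposes.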

    Please check the troubleshooting before posting a new issue.

    If you are interested I've also implemented the same features for Huggingface Diffusers.

  3. Class name: StyleModelApply. Category: conditioning/style_model. Output node: False. Repo Ref: https://github.com/comfyanonymous/ComfyUI. The StyleModelApply node is designed to integrate an image's style into the generative model. It takes the output of a vision model and applies a style model to the conditioning process, allowing stylized outputs to be created.

  4. Apply Style Model node. The Apply Style Model node can be used to provide further visual guidance to a diffusion model specifically pertaining to the style of the generated images.


  6. The StyleModelLoader node is designed to load a style model from a specified path. It focuses on retrieving and initializing style models that can be used to apply specific artistic styles to images, thereby enabling the customization of visual outputs based on the loaded style model.

  7. Embark on an intriguing exploration of ComfyUI and master the art of working with style models from ground zero. This detailed step-by-step guide places spec...

    • 20 minutes
    • 16.4K
    • Amir Ferdos