
AI Art Tutorial for Beginners: A Clay-Style Filter Better Than Remini

Author: 心在旅行 | Published: 2024-05-16

Original tutorial: https://docs.loopin.network/zh/tutorials/stablediffusion/comfyui-how-to-implement-clay-style-filters

ComfyUI is a node-based interface that is easier to operate and fine-tune than other WebUIs, which makes it well suited to reproducing and adjusting image effects. Once you are familiar with it, you can also build custom workflows.

🚀 New! Want to try it first? Check out our AI clay-style filter 👉 https://docs.loopin.network/playground/comfyui-clay-demo

What is ComfyUI?

ComfyUI is a node-based interface that works well with the Stable Diffusion XL 1.0 model. It lets you see at a glance what different models (such as the Base model and the Refiner model) produce. Its high degree of control makes it easy to reproduce and fine-tune image results, and once you are comfortable with it you can build your own workflows.

How to Install ComfyUI on LooPIN

1. Provision a GPU instance

Visit the LooPIN liquidity pool and use $LOOPIN tokens to purchase GPU time. Choose a suitable GPU model, such as an RTX 3080.

2. Redeem the GPU resources

Select the amount of $LOOPIN tokens, choose the number of GPUs with the slider, then confirm and complete the transaction.

3. Open Jupyter Notebook

Once the transaction succeeds, go to the Server section under Rented Servers, access your remote server, and launch Jupyter Notebook (this usually takes 2-4 minutes).

4. Verify the GPU

In Jupyter Notebook, open a terminal window and run the nvidia-smi command to check that the GPU is active.

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3080        Off |   00000000:01:00.0 Off |                  N/A |
|  0%   39C    P8             21W /  350W |      12MiB /  12288MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+

Install ComfyUI

1. Download and install ComfyUI

git clone https://github.com/comfyanonymous/ComfyUI.git && cd ComfyUI

2. Install the Python dependencies

pip install xformers!=0.0.18 -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu121 --extra-index-url https://download.pytorch.org/whl/cu118 --extra-index-url https://download.pytorch.org/whl/cu117
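Optionally, you can confirm that a CUDA-enabled PyTorch build was installed with a quick check in the notebook. This is a minimal sanity-check sketch; it only assumes the pip step above completed.

# Sanity check (optional): verify that PyTorch sees the rented GPU.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))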

3. Install ComfyUI-Manager

cd /workspace/ComfyUI/custom_nodes

git clone https://github.com/ltdrdata/ComfyUI-Manager.git

4. Download and install Cloudflared (Cloudflare Tunnel)

wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb

dpkg -i cloudflared-linux-amd64.deb

5. Launch ComfyUI

Create a new Jupyter Notebook in the ComfyUI directory and use a Cloudflare tunnel to expose the port:

import subprocess
import threading
import time
import socket

def iframe_thread(port):
    # Poll until ComfyUI starts accepting connections on the given port.
    while True:
        time.sleep(0.5)
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        result = sock.connect_ex(('127.0.0.1', port))
        if result == 0:
            break
        sock.close()
    print("\nComfyUI finished loading, trying to launch cloudflared (if it gets stuck here cloudflared is having issues)\n")

    # Open a Cloudflare tunnel to the local ComfyUI port and print the public URL.
    p = subprocess.Popen(["cloudflared", "tunnel", "--url", "http://127.0.0.1:{}".format(port)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    for line in p.stderr:
        l = line.decode()
        if "trycloudflare.com " in l:
            print("This is the URL to access ComfyUI:", l[l.find("http"):], end='')

threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()

!python main.py --dont-print-server

Wait for Cloudflared to print the public tunnel URL, then open it to access ComfyUI.

Install the Clay-Style Workflow

1. Load the clay-style workflow

We use XIONGMU's open-source ComfyUI workflow ( https://openart.ai/workflows/xiongmu/image-to-clay-style/KRjSiOFyPSHO5QCQ4raV ).

2. Install the required nodes

After loading the workflow via Load in ComfyUI, open the Manager in the lower-right corner and click Install Missing Custom Nodes to install the missing nodes (11 in total). When the installation finishes, click Restart to restart ComfyUI.

Download the Required Models

1. Checkpoint (base) model

wget https://huggingface.co/RunDiffusion/Juggernaut-XL-v9/resolve/main/Juggernaut-XL_v9_RunDiffusionPhoto_v2.safetensors -O /workspace/ComfyUI/models/checkpoints/Juggernaut-XL_v9_RunDiffusionPhoto_v2.safetensors

2. LoRA models

wget -c https://files.loopin.network/docs/tutorials/comfyui/DD-made-of-clay-XL-v2.safetensors -O /workspace/ComfyUI/models/loras/DD-made-of-clay-XL-v2.safetensors

wget -c https://files.loopin.network/docs/tutorials/comfyui/CLAYMATE_V2.03_.safetensors -O /workspace/ComfyUI/models/loras/CLAYMATE_V2.03_.safetensors

3. ControlNet model

wget https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-canny-rank256.safetensors -O /workspace/ComfyUI/models/controlnet/sai_xl_canny_256lora.safetensors

4. IPAdapter models

mkdir /workspace/ComfyUI/models/ipadapter

wget https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl_vit-h.safetensors -O /workspace/ComfyUI/models/ipadapter/ip-adapter_sdxl_vit-h.safetensors

wget https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter-plus_sdxl_vit-h.safetensors -O /workspace/ComfyUI/models/ipadapter/ip-adapter-plus_sdxl_vit-h.safetensors

wget https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter-plus-face_sdxl_vit-h.safetensors -O /workspace/ComfyUI/models/ipadapter/ip-adapter-plus-face_sdxl_vit-h.safetensors

wget https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl.safetensors -O /workspace/ComfyUI/models/ipadapter/ip-adapter_sdxl.safetensors

5. CLIP Vision models

wget https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors -O /workspace/ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors

wget https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors -O /workspace/ComfyUI/models/clip_vision/CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors
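Before loading the workflow, it can help to confirm that every model file landed where ComfyUI expects it. The sketch below is only a quick sanity check; the paths mirror the -O targets of the wget commands above.

# Verify that every downloaded model file exists and is not empty.
import os

model_files = [
    "/workspace/ComfyUI/models/checkpoints/Juggernaut-XL_v9_RunDiffusionPhoto_v2.safetensors",
    "/workspace/ComfyUI/models/loras/DD-made-of-clay-XL-v2.safetensors",
    "/workspace/ComfyUI/models/loras/CLAYMATE_V2.03_.safetensors",
    "/workspace/ComfyUI/models/controlnet/sai_xl_canny_256lora.safetensors",
    "/workspace/ComfyUI/models/ipadapter/ip-adapter_sdxl_vit-h.safetensors",
    "/workspace/ComfyUI/models/ipadapter/ip-adapter-plus_sdxl_vit-h.safetensors",
    "/workspace/ComfyUI/models/ipadapter/ip-adapter-plus-face_sdxl_vit-h.safetensors",
    "/workspace/ComfyUI/models/ipadapter/ip-adapter_sdxl.safetensors",
    "/workspace/ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",
    "/workspace/ComfyUI/models/clip_vision/CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors",
]

for path in model_files:
    size_mb = os.path.getsize(path) / 1e6 if os.path.exists(path) else 0
    status = "OK" if size_mb > 1 else "MISSING OR EMPTY"
    print(f"{status:>16}  {size_mb:9.1f} MB  {path}")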

Test Image Generation

In the ComfyUI interface:

Upload the source image (click "choose file to upload" under Load Image).

Click the Queue Prompt button to start generating.

After 5-10 seconds, the generated image will appear in the interface.
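If you prefer scripting to clicking, ComfyUI can also accept jobs through its local HTTP API. The sketch below is only an illustration: it assumes the clay workflow has been exported in API format (via the Save (API Format) option, available once dev mode is enabled in the settings) to a placeholder file named workflow_api.json, and that the server is listening on port 8188.

# Optional: queue the clay workflow through ComfyUI's HTTP API instead of
# the Queue Prompt button. workflow_api.json is a placeholder filename for
# the workflow exported in API format.
import json
import urllib.request

with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # the response includes the queued prompt_id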

Summary

This tutorial showed how to deploy Stable Diffusion's ComfyUI on the LooPIN platform and try out the clay-style filter. In future posts we will explore more of ComfyUI's advanced features and applications.

