
Blender to ComfyUI workflow using IP Adapters and AnimateDiff





Frame generation:



Rendering the CG with a single HDRI light source. Outputting a PNG image sequence cropped to 512x512 for efficiency. Also rendering depth from Blender, which was normalised in Nuke.
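For reference, a minimal sketch of the render settings via Blender's Python API, assuming the default 'ViewLayer' name and a placeholder output path. Enabling the Z pass makes depth available, though actually writing it to disk also needs a compositor File Output node or a multilayer EXR:

```python
import bpy

scene = bpy.context.scene

# 512x512 square output, written as a PNG sequence
scene.render.resolution_x = 512
scene.render.resolution_y = 512
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = '//renders/frame_'  # hypothetical output path

# Enable the Z pass; exporting it to disk additionally needs a
# File Output node in the compositor or a multilayer EXR
scene.view_layers['ViewLayer'].use_pass_z = True

bpy.ops.render.render(animation=True)  # render the full frame range
```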


Running two ControlNets on the source, using Canny edge and the CG depth map, to keep the composition close to the source.
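ComfyUI's Canny node handles this preprocessing internally; for reference, a minimal sketch of the equivalent step with OpenCV, with illustrative thresholds and placeholder file names:

```python
import cv2

# Edge map that feeds the first ControlNet
frame = cv2.imread("frames/frame_0001.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(frame, 100, 200)  # low/high hysteresis thresholds
cv2.imwrite("control/canny_0001.png", edges)
```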


Influencing the model with various IP Adapters for each version, mostly at a strength of 0.40. Passing the model and latent image through the AnimateDiff node using the mm-Stabilized_high motion model; other models yielded a lot of inconsistency, and this one worked best. Keeping the context length at 16 for a smooth blend between frames.
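The post does all of this through ComfyUI nodes, but as a rough analogue, here is a hedged sketch of the same idea in the diffusers library. The model IDs are common public checkpoints standing in for the ones above (the adapter here is not the mm-Stabilized_high module, which is a ComfyUI motion model), and the prompt and reference image are placeholders:

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
from diffusers.utils import export_to_gif, load_image

# Motion adapter + SD 1.5 base; stand-ins for the checkpoints used above
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

# IP Adapter influence, matching the ~0.40 strength noted above
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.4)

style_ref = load_image("style_reference.png")  # hypothetical reference image
frames = pipe(
    prompt="a prompt describing the shot",
    ip_adapter_image=style_ref,
    num_frames=16,  # analogous to the context length of 16
).frames[0]
export_to_gif(frames, "preview.gif")
```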


Upscaling:


Used SD upscale and latent upscale to add more resolution. Latent upscale gave the best results but added a lot of detail that was not in the initial image generation. Finally, encoding the images to an H.264 video.
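A hedged sketch of that encode step, calling ffmpeg from Python; the frame rate, input pattern, and output name are placeholders, with the H.264 stream written into an .mp4 container:

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "24",               # placeholder frame rate
    "-i", "upscaled/frame_%04d.png",  # hypothetical input pattern
    "-c:v", "libx264",                # H.264 codec
    "-pix_fmt", "yuv420p",            # broad player compatibility
    "out.mp4",
], check=True)
```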




Speed is one bottleneck in this workflow. AnimateDiff works great with a batch size of 16 or above, but it slows down the sampler quite a lot. The best approach is to generate at 512 and upscale. Frame interpolation might also be put to good use, filling in the in-between frames for smoother results (a naive sketch of the idea follows).
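As a sketch of the interpolation idea only: a naive 50/50 blend of neighbouring frames with OpenCV. Proper interpolators such as RIFE or FILM use optical flow and give far smoother results; file names here are placeholders.

```python
import cv2

a = cv2.imread("frames/frame_0001.png")
b = cv2.imread("frames/frame_0002.png")
mid = cv2.addWeighted(a, 0.5, b, 0.5, 0.0)  # simple cross-fade, no flow
cv2.imwrite("frames/frame_0001_mid.png", mid)
```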


There was some unwanted light and shadow play, which might be coming from the IP Adapters; using less influence or more negative prompts might help.


