Grayscale Render into AI Art







Frame generation:



Rendering the CG with one HDRI light source and outputting a PNG image sequence cropped to 512x512 for efficiency. Also rendering a depth pass from Blender, which was normalised in Nuke.
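A minimal Blender Python sketch of that render setup, assuming the default "ViewLayer" name and an HDRI already wired into the World shader; the output path is a placeholder:

```python
import bpy

scene = bpy.context.scene

# Output a 512x512 PNG image sequence.
scene.render.resolution_x = 512
scene.render.resolution_y = 512
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = '//renders/frame_'  # placeholder, relative to the .blend

# Enable the Z (depth) pass so it can be exported and normalised in Nuke later.
scene.view_layers["ViewLayer"].use_pass_z = True

bpy.ops.render.render(animation=True)
```

The Nuke normalisation itself is typically just a Grade or Expression node remapping the depth range into 0-1.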


Running two ControlNets on the source, using Canny edge and the CG depth map to keep the composition close to the source.
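The workflow here runs in a node graph, but for reference this is a hedged equivalent of the two-ControlNet setup using the Hugging Face diffusers API; the model IDs, file paths, prompt, and conditioning scales are illustrative stand-ins:

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Two ControlNets: Canny edges plus the CG depth map.
controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

canny_image = Image.open("canny_0001.png")  # edges from the grayscale render
depth_image = Image.open("depth_0001.png")  # normalised CG depth from Nuke

frame = pipe(
    prompt="stylised character, cinematic lighting",
    image=[canny_image, depth_image],
    controlnet_conditioning_scale=[1.0, 1.0],  # per-ControlNet strength
).images[0]
```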


Influencing the model with various IPAdapters for each version, mostly at a strength of 0.40. Passing the model and latent images through the AnimateDiff node using the mm-stabilized_high motion module; other motion modules yielded a lot of inconsistency, and this one worked best. Keeping the context length at 16 for a smooth blend between frames.
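A rough diffusers sketch of the same idea, under stated assumptions: the motion adapter repo below is a stand-in (mm-stabilized_high ships as an AnimateDiff checkpoint rather than this diffusers adapter), the reference image path is a placeholder, and num_frames=16 loosely plays the role of the 16-frame context:

```python
import torch
from PIL import Image
from diffusers import AnimateDiffPipeline, MotionAdapter, DDIMScheduler

# Placeholder motion module; swap in mm-stabilized_high if you have it
# converted for diffusers.
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# IP-Adapter reference image at roughly the 0.40 strength mentioned above.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.4)

style_image = Image.open("style_ref.png")  # placeholder reference
frames = pipe(
    prompt="stylised character, cinematic lighting",
    num_frames=16,  # mirrors the 16-frame context length
    ip_adapter_image=style_image,
).frames[0]
```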


Upscaling:


Used SD upscale and latent upscale to add more resolution. Latent upscale gave the best results but added a lot of detail that was not in the initial generation. Finally, encoded the image sequence into an H.264 video.
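A hedged sketch of both steps: a simple 2x latent upscale (resize the latents, then re-denoise img2img-style) and the final H.264 encode via ffmpeg. Paths, frame rate, and tensor shapes are placeholders:

```python
import subprocess
import torch
import torch.nn.functional as F

# latents: [frames, 4, 64, 64] batch from the sampler (random placeholder here).
latents = torch.randn(16, 4, 64, 64)
up_latents = F.interpolate(latents, scale_factor=2, mode="nearest")  # 512px -> 1024px

# ...re-denoise `up_latents` at low strength, decode with the VAE, write PNGs,
# then encode the sequence to H.264:
subprocess.run([
    "ffmpeg", "-y", "-framerate", "24", "-i", "frame_%04d.png",
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "output.mp4",
], check=True)
```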




Speed is one bottleneck in this workflow. AnimateDiff works great with a batch size of 16 and above but slows down the sampler quite a lot. The best approach is to generate at 512 and upscale. Frame interpolation might also be useful to fill in the in-between frames for smoother results.
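As one hedged option for that interpolation, ffmpeg's motion-compensated minterpolate filter can double the frame rate; dedicated ML interpolators such as RIFE or FILM usually hold up better on stylised frames. Filenames and rates below are placeholders:

```python
import subprocess

# Interpolate a 24 fps render up to 48 fps with motion compensation.
subprocess.run([
    "ffmpeg", "-y", "-i", "output.mp4",
    "-vf", "minterpolate=fps=48:mi_mode=mci",
    "output_48fps.mp4",
], check=True)
```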


There was some unwanted light and shadow play, which might be coming from the IPAdapters; using less influence or more negative prompts might help.





Beeble SwitchLight AI Studio for relighting and compositing



The goal was to see how far I could push it, using it for extreme relighting and for compositing the FG into a different environment.


Workflow:


◾ Used stock footage from Pexels. The character is in a studio-lit environment with multiple light sources, and there is a fair amount of spec and highlights on her, which made it more challenging for the AI to generate PBR passes.


◾ Used the SwitchLight AI background-removal tool to extract the FG from the footage. There is some flicker, but a temporal consistency slider can be tweaked for a more stable output.


◾ To generate the background, I used Leonardo AI with a few prompts; nothing fancy there. The generated background was then fed into SwitchLight to generate an HDRI map using the open-source DiffusionLight ML model.


◾ The generated HDRI and the FG (without BG) were put into the SwitchLight AI Virtual Production tool to relight the FG. Generated PBR passes and brought them into Nuke to put together a base comp (see the recombine sketch after this list).


◾ The base comp was then fed back into Leonardo AI to generate ideas and details, which were rolled back into Nuke using traditional compositing. Finally, added some lens effects and final touches.
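For reference, a minimal NumPy sketch of the usual math behind a PBR base comp and the FG-over-BG merge; the pass names are generic assumptions, and SwitchLight's actual outputs may be structured differently:

```python
import numpy as np

# All arrays are float32 HxWx3 images in linear light (alpha is HxWx1).

def base_comp(albedo: np.ndarray, diffuse: np.ndarray, specular: np.ndarray) -> np.ndarray:
    """Recombine PBR passes: beauty ~= albedo * diffuse lighting + specular."""
    return albedo * diffuse + specular

def over(fg: np.ndarray, alpha: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Standard over operation: FG composited onto BG with the extracted alpha."""
    return fg * alpha + bg * (1.0 - alpha)
```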


