Along the lines of the success of Automatte, I would love to see an object removal node. Matte in to define the area, etc.
Midjourney type thing?
Not a Flame node, but I've come across a very reliable object removal / inpaint workflow with ComfyUI, using the MiniMax Object Remove. It's posted in the AI Toolbox thread on the Flame Discord.
Maybe a similar model/training could be added to Flame in a future version? ADSK would have to engineer and train it to meet release standards, as they did with Automatte.
I've been using Weavy, but the best it seems to be able to accept is H.264, not ACES, so it's not much use to me really. Is Comfy better?
There are some additional options in Comfy to read EXR sequences. But in general, none of the AI tools have a good high-res, non-destructive workflow. It would be ideal if we could feed in a 16-bit float EXR sequence in ACEScg and get the result back the same way.
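For reference, the file in/out half of that is already doable with the classic `OpenEXR`/`Imath` Python bindings; it's everything in between that breaks down. A minimal read/write sketch (filenames are placeholders, and this assumes an RGB half-float EXR):

```python
import numpy as np
import OpenEXR
import Imath

# Read a half-float EXR into a float32 numpy array (assumes R/G/B channels).
exr = OpenEXR.InputFile("plate.0001.exr")
dw = exr.header()["dataWindow"]
w, h = dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1
half = Imath.PixelType(Imath.PixelType.HALF)
img = np.stack(
    [np.frombuffer(exr.channel(c, half), dtype=np.float16).reshape(h, w)
     for c in ("R", "G", "B")],
    axis=-1,
).astype(np.float32)

# ... hand `img` to the model here ...

# Write the result back out as a 16-bit float EXR.
header = OpenEXR.Header(w, h)
header["channels"] = {c: Imath.Channel(half) for c in ("R", "G", "B")}
out = OpenEXR.OutputFile("plate.0001_fixed.exr", header)
out.writePixels({c: img[..., i].astype(np.float16).tobytes()
                 for i, c in enumerate("RGB")})
out.close()
```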
Some experiments exist from what I've seen, but nothing you can just drop in and run with. Or at least none that anyone has shared.
Even if you could get files in and out like that, it's still a question whether the model could produce any results in that fashion, since most of them have been trained on H.264 material. Keep in mind that for AI models to work, the input has to match the material they were trained on in terms of character and technical representation.
There is the VAE_HDR node, but it's specific to a model, not a general-purpose solution AFAIK.
I believe that will remain the domain of proprietary tools for now. BorisFX trains on 32-bit footage; I'm not sure if ADSK has disclosed theirs, but I assume it's better than 8-bit H.264. So yes, in theory it's possible, but not widely available for the broader set of models out there.
If I missed anything, which is quite possible, it would be great to see pointers to details.
I have actually managed to get 16-bit floating point EXR out of Comfy now. I used that HDR VAE node. Still very much testing it out though. It produced a different-looking image to the normal VAE node. We will get there.
That's good to hear. Yes, I've managed to read/write EXR sequences, but the rest is still resisting a clean round-trip. And agreed that it's a matter of time before it's solved. It's a critical ingredient in high-end VFX workflows.
There's also the question/option of mapping the image into a Rec709-friendly log curve for AI processing and then reversing it, to get a bigger dynamic range through AI models as an alternate path. Kind of ACES for AI (call it AICES?). Essentially an HDR working colorspace that's AI friendly. More work to be done.
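To make the idea concrete, it's basically just a shaper: encode scene-linear ACEScg into a log curve that lands in a 0-1, SDR-friendly range the model is comfortable with, run the AI pass, then invert. A toy sketch; the curve here is a made-up log2 shaper for illustration, not any standard, and a real one would want proper black handling:

```python
import numpy as np

# Hypothetical log2 shaper: maps scene-linear values spanning
# MIN_STOP..MAX_STOP stops around mid grey into [0, 1].
MID_GREY = 0.18
MIN_STOP, MAX_STOP = -8.0, 8.0  # 16 stops of range, an arbitrary choice

def lin_to_shaper(lin, floor=2.0 ** MIN_STOP * MID_GREY):
    """Encode scene-linear -> 0..1 log, so SDR-trained models see sane values."""
    stops = np.log2(np.maximum(lin, floor) / MID_GREY)
    return (stops - MIN_STOP) / (MAX_STOP - MIN_STOP)

def shaper_to_lin(enc):
    """Invert the shaper to recover scene-linear after the AI pass."""
    stops = enc * (MAX_STOP - MIN_STOP) + MIN_STOP
    return MID_GREY * 2.0 ** stops

# Round-trip check on some HDR-ish values:
lin = np.array([0.001, 0.18, 1.0, 2.5, 12.0])
print(shaper_to_lin(lin_to_shaper(lin)))  # ~= lin (above the floor)
```

The catch, as noted above, is that the model still has to produce sensible output for log-encoded input it was never trained on.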
This is a Comfy setup I have for object removal that writes the fixed image back to the original folder.
You export a PNG to a folder and put the full path of the image in the text box. Then paint over the object in the paint node (you have to clear the canvas first). I couldn't get it to export an alpha correctly, so I added a colour correct that you unmute afterwards so only black goes through the paint node, which creates alpha wherever you painted. Then it uses FLUX to remove the object. Then, crucially, it writes the file back to the original location with the suffix "fixed" in the name and an incrementally increasing number, so you can render multiple options (sketch of that naming logic below). It's got a difference matte, compare, and stuff like that.

TBH it's much quicker to do this in Photoshop, but I just wanted to create this setup. The weakest link is the paint node; I need a better solution for that element.
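For what it's worth, the "write back with a suffix and an incrementing number" part is easy to replicate outside Comfy too. A minimal sketch of that naming logic (paths hypothetical):

```python
from pathlib import Path

def next_fixed_path(src):
    """Given /shots/plate.png, return /shots/plate_fixed_001.png,
    _fixed_002, ... picking the first number not already on disk."""
    src = Path(src)
    n = 1
    while True:
        candidate = src.with_name(f"{src.stem}_fixed_{n:03d}{src.suffix}")
        if not candidate.exists():
            return candidate
        n += 1

print(next_fixed_path("/shots/plate.png"))  # e.g. /shots/plate_fixed_001.png
```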
@RufusBlackwell If you're also on the Logik Discord, we set up a separate channel called "ai-toolbox" with individual threads to make it easy to share Comfy setups within the community. That way they're all in one place.
After much pleading, I finally got access to NB in PS today and was able to round-trip things in ACEScg mostly non-destructively. Highlights peaked in the 2.5-ish range and came back within 0.01 or so. Everything sitting in the mids/shadows came back within 0.001. Only one shot so far, and obviously the values weren't too crazy, but it seems promising. My Photoshop knowledge is decidedly wanting, so it may be possible to do even better.
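For anyone wanting to sanity-check their own round-trip the same way, the comparison is just per-pixel error stats between the original and the returned plate. A rough sketch, assuming both are already loaded as scene-linear float arrays (e.g. via the OpenEXR snippet earlier in the thread):

```python
import numpy as np

def roundtrip_report(original, returned):
    """Per-pixel absolute error between two scene-linear float images."""
    err = np.abs(returned - original)
    hi = original >= 1.0  # rough highlights vs mids/shadows split
    print(f"peak input value: {original.max():.3f}")
    if hi.any():
        print(f"max error above 1.0: {err[hi].max():.4f}")
    print(f"max error below 1.0: {err[~hi].max():.4f}")
```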
Interesting and good to know. Any indication of the effective bit depth?
This is the way.
That is not what I was doing. Thank you for sharing. That indeed does seem to be the way.
With the newer Nano Banana Pro, I believe it was trained on 2048x2048 images now, so you can make your selections bigger than I did in that test (1024x1024). I find working in patches preserves the original resolution, and you can keep everything at your plate size. If I select a whole 4K image, it feels like Adobe downsamples it somewhere and it gets mushy.
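If it helps anyone, the patch idea is literally just crop, process, paste back at the same offset, so the untouched pixels never go near the model. A sketch with Pillow, where `process` is a stand-in for whatever model pass you're running:

```python
from PIL import Image

def fix_patch(plate_path, box, process, out_path):
    """Crop `box` (left, top, right, bottom) out of the full-res plate,
    run it through `process`, and paste the result back in place."""
    plate = Image.open(plate_path)
    patch = plate.crop(box)       # e.g. a 2048x2048 region
    fixed = process(patch)        # stand-in for the AI pass (returns an Image)
    plate.paste(fixed, box[:2])   # paste back at the same offset
    plate.save(out_path)

# usage: fix_patch("plate.png", (1024, 512, 3072, 2560), my_model, "plate_fixed.png")
```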
How does one use Photoshop for moving images, though? I thought it was just for stills?
You can make your hero frames in Photoshop and then drive movement and animation with a different model like Runway or Kling.
Yeah, sorry, we've strayed from the main topic, Jon; that's a solve for stills only.
On topic: when it's released, episode 2 of this series will have what I hope will be a clear explanation of the WAN inpainting approach in Comfy, which is more what you're asking for. But yeah, it still has the SDR limitations - or whatever log(ish) workaround you want to attempt.
On that note of HDR, "Radiance" seems to be the beginnings of some really solid colour workflow nodes for ComfyUI. The viewer node is great, very much modelled on Flame/Nuke, but we still need actual genAI models trained on HDR, as far as I understand, to make use of these kinds of things, right?
Giving this a go right now and still getting noticeably softer results when going up to 2048x2048. My understanding was the 1024x1024 limitation is about Photoshop, not Nano Banana?
Also getting some color banding more generally with the method in your video. The import dialog in my version of Photoshop was slightly different from what's in your video, so I'm hoping I got it all set up correctly. Maybe I'm missing something.
Will have to keep experimenting. Thanks again!
That's annoying about the resolution. If you run the basic included NanoBanana Pro workflow in Comfy, it's quite happy to work at higher resolutions. It's quite quick and practical to copy and paste between Photoshop and Comfy: copy an area from Photoshop, select the Load Image node in Comfy and hit paste, do the processing, then copy the output by right-clicking the Save Image node in Comfy. Cleaning up the results is much easier in Photoshop; you can again work on sections, and when NanoBanana annoyingly scales or shifts the image, you can do a quick difference blend and realign it. Haven't tried this with ACEScct.
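If the realign step gets tedious: the shift is often a simple translation, so something like phase correlation can recover it automatically. A sketch assuming scikit-image and a pure translation (real results may also have subtle scaling, which this won't catch):

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def realign(original, shifted):
    """Estimate the (row, col) translation between two grayscale float
    arrays and roll the shifted image back into register."""
    shift, _, _ = phase_cross_correlation(original, shifted)
    dy, dx = np.round(shift).astype(int)
    return np.roll(shifted, (dy, dx), axis=(0, 1))
```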
The banding must mean it still gets reduced to 8 bits somewhere, hey.
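One quick way to test that theory: count the distinct code values in the returned image. If something got squeezed through 8 bits along the way, you'll see at most ~256 levels per channel. A rough sketch, assuming the image is loaded as an RGB float array:

```python
import numpy as np

def effective_bit_depth(img):
    """Count unique values per channel; log2 gives a rough effective depth."""
    for i, name in enumerate("RGB"):
        levels = np.unique(img[..., i]).size
        print(f"{name}: {levels} levels (~{np.log2(levels):.1f} bits)")
```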
