Object removal

Along the lines of the success of Automatte, I'd love to see an object removal node: matte in to define the area, etc.

A Midjourney-type thing?

Not a Flame node, but I’ve come across a very reliable object removal / inpaint workflow with ComfyUI, using the MiniMax Object Remove. It’s posted in the AI Toolbox thread on the Flame Discord.

Maybe a similar model/training could be added to Flame in a future version? ADSK would have to engineer and train it to meet release standards, as they did with Automatte.


I've been using Weavy, but the best it seems to accept is H.264, not ACES, so it's not much use to me really. Is Comfy better?


There are some additional options in Comfy to read EXR sequences. But in general, none of the AI tools have a good high-res, non-destructive workflow. It would be ideal if we could read a 16-bit float EXR sequence in ACEScg and get the result back the same way.
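For the read side, a minimal sketch of what that could look like outside Flame, using the OpenImageIO Python bindings (the path pattern is hypothetical):

```python
import OpenImageIO as oiio

# Minimal sketch: read one frame of a 16-bit float ACEScg EXR
# sequence into a float numpy array. The path pattern is made up.
def read_frame(frame):
    path = f"/shots/plate/plate.{frame:04d}.exr"
    buf = oiio.ImageBuf(path)
    pixels = buf.get_pixels(oiio.FLOAT)  # H x W x C numpy array
    # No colour transform applied: values stay scene-linear ACEScg,
    # which is the non-destructive part that matters.
    return pixels
```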

Some experiments exist from what I've seen, but nothing you can just drop in and run with. Or at least none that anyone has shared.

Even if you could get files in and out like that, it's still a question whether the model could produce usable results from them, since most models have been trained on H.264 material. Keep in mind that for AI models to work, the input has to match the training material in character and technical representation.

There is the VAE_HDR node, but it's specific to one model, not a general-purpose solution AFAIK.

I believe that will remain the domain of proprietary tools for now. BorisFX trains on 32-bit footage; I'm not sure if ADSK has disclosed anything, but I assume it's better than 8-bit H.264. So yes, in theory it's possible, but not widely available across the broader set of models out there.

If I missed anything, which is quite possible, pointers to the details would be great.


I have actually managed to get 16-bit floating point EXR out of Comfy now. I used that HDR VAE node. Still very much testing it out, though. It produced a different-looking image than the normal VAE node. We will get there.


That's good to hear. Yes, I've managed to read/write EXR sequences, but the rest is still resisting a clean round-trip. And agreed that it's only a matter of time before it's solved. It's a critical ingredient in high-end VFX workflows.

There's also the question/option of mapping the image into a Rec709-friendly log curve for AI processing and then reversing that, to push a bigger dynamic range through AI models as an alternate path. Kind of ACES for AI (call it AICES?). Essentially an HDR working colorspace that's AI-friendly. More work to be done.
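A minimal sketch of that idea, assuming a simple log2-based curve (the constants are placeholders, not a published standard):

```python
import numpy as np

PEAK = 16.0    # assumed maximum scene-linear value to preserve
OFFSET = 0.01  # lifts blacks so the log stays finite

LOG_MIN = np.log2(OFFSET)
LOG_MAX = np.log2(PEAK + OFFSET)

def encode_log(linear):
    """Scene-linear (0..PEAK) -> normalized 0..1 for the AI model."""
    return (np.log2(linear + OFFSET) - LOG_MIN) / (LOG_MAX - LOG_MIN)

def decode_log(encoded):
    """Exact inverse: bring the model output back to scene-linear."""
    return 2.0 ** (encoded * (LOG_MAX - LOG_MIN) + LOG_MIN) - OFFSET

# Round-trip sanity check
x = np.array([0.0, 0.18, 1.0, 2.5, 16.0])
assert np.allclose(decode_log(encode_log(x)), x)
```

The catch remains the one raised above: even with a lossless curve, the model still has to behave sensibly on log-encoded input it wasn't trained on.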

This is a Comfy setup I have for object removal that writes the fixed image back to the original folder.

You export a PNG to a folder and put the full location of the image in the text box. Then paint over the object in the paint node (you have to clear the canvas first). I couldn't get it to export an alpha correctly, so I added a colour correct that you unmute afterwards so only black goes through the paint node, which creates full alpha wherever you painted. Then it uses FLUX to remove the object.

Crucially, it then writes the file back to the original location with the suffix "fixed" in the name and an incrementing number, so you can render multiple options (a sketch of that naming logic is below). It's got a difference matte, compare, and stuff like that. TBH it's much quicker to do this in Photoshop, but I just wanted to create this setup. The weakest link is the paint node; I need a better solution for that element.
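Roughly, the write-back naming could work like this (a hypothetical stand-in for what the save node does, not the actual Comfy code):

```python
import os

def next_fixed_path(src_path):
    """Given /shots/plate.png, return /shots/plate_fixed_001.png,
    _002, and so on, picking the first number not already on disk."""
    folder, fname = os.path.split(src_path)
    stem, ext = os.path.splitext(fname)
    n = 1
    while True:
        candidate = os.path.join(folder, f"{stem}_fixed_{n:03d}{ext}")
        if not os.path.exists(candidate):
            return candidate
        n += 1
```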


@RufusBlackwell If you're also on the Logik Discord, we set up a separate channel called 'ai-toolbox' with individual threads to make it easy to share Comfy setups within the community. That way they're all in one place.


After much pleading, I finally got access to NB in PS today and was able to round-trip things in ACEScg mostly non-destructively. Highlights peaked in the 2.5-ish range and came back within 0.01 or so. Everything sitting in the mids/shadows came back within 0.001. Only one shot so far, and obviously the values weren't too crazy, but it seems promising. My Photoshop knowledge is decidedly wanting, so it may be possible to do even better.
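For anyone who wants to measure their round-trip the same way, a small sketch using the OpenImageIO Python bindings (file names are hypothetical):

```python
import numpy as np
import OpenImageIO as oiio

def read_exr(path):
    # Returns an H x W x C float numpy array, no colour transform.
    return oiio.ImageBuf(path).get_pixels(oiio.FLOAT)

before = read_exr("plate.exr")            # original ACEScg plate
after = read_exr("plate_roundtrip.exr")   # after the PS round-trip

err = np.abs(after - before)
highlights = before > 1.0
print("max error in highlights:  ", err[highlights].max())
print("max error in mids/shadows:", err[~highlights].max())
```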


Interesting and good to know. Any indication of the effective bit depth?

Did you try the OCIO avenue in Photoshop? This is my workflow; was yours any different?
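On the effective bit depth question, one crude probe, assuming values normalized to 0-1: quantize at increasing bit depths and see where the data stops changing.

```python
import numpy as np

def effective_bit_depth(channel, max_bits=16):
    """Quantize a 0-1 float channel at increasing bit depths and
    return the first depth where quantization no longer changes it.
    Crude, but it exposes an 8-bit bottleneck immediately."""
    for bits in range(1, max_bits + 1):
        levels = 2 ** bits - 1
        quantized = np.round(channel * levels) / levels
        if np.allclose(quantized, channel, atol=1e-7):
            return bits
    return max_bits  # at least this many
```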


This is the way.


That is not what I was doing. Thank you for sharing. That indeed does seem to be the way.


With the newer Nano Banana Pro, I believe it was trained on 2048x2048 images now, so you can make your selections bigger than I did in that test (1024x1024). I find working in patches preserves the original resolution, and you can keep everything at your plate size. If I select a whole 4K image, it feels like Adobe downsamples it somewhere and it gets mushy.
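A sketch of that patch round-trip with Pillow (coordinates and file names are made up):

```python
from PIL import Image

PATCH = 2048       # assumed model-friendly window size
x, y = 900, 450    # top-left corner of the area to fix

plate = Image.open("plate_4k.png")
patch = plate.crop((x, y, x + PATCH, y + PATCH))
patch.save("patch_for_model.png")      # hand this to the model

fixed = Image.open("patch_fixed.png")  # model output, same size
plate.paste(fixed, (x, y))
plate.save("plate_fixed.png")          # full plate res preserved
```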


How does one use Photoshop for moving images, though? I thought it was just for stills.

You can make your hero frames in Photoshop and then drive movement and animation with a different model like Runway or Kling.

Yeah, sorry, we've strayed from the main topic, Jon; that's a solve for stills only.

On topic: when it's released, episode 2 of this series will have what I hope is a clear explanation of the WAN inpainting approach in Comfy, which is more what you're asking for. But yeah, it still has the SDR limitations, or whatever log(ish) workaround you want to attempt.

On that note of HDR, 'Radiance' seems to be the beginnings of some really solid colour workflow nodes for ComfyUI. The viewer node is great, very much modelled on Flame/Nuke, but as far as I understand we still need genAI models actually trained on HDR to make use of these kinds of things, right?


Giving this a go right now and still getting noticeably softer results when going up to 2048x2048. My understanding was that the 1024x1024 limitation is about Photoshop, not Nano Banana?

Also getting some color banding more generally with the method in your video. The import dialog in my version of Photoshop was slightly different from what's in your video, so I'm hoping I got it all set up correctly. Maybe I'm missing something. :thinking:

Will have to keep experimenting. Thanks again!


That's annoying about the resolution. If you run the basic included Nano Banana Pro workflow in Comfy, it's quite happy to work at higher resolutions.

It's quite quick and practical to copy and paste between Photoshop and Comfy: copy an area from Photoshop, select the Load Image node in Comfy and hit paste, do the processing, then copy the output by right-clicking the Save Image node in Comfy.

Cleaning up the results is much easier in Photoshop; you can again work on sections, and when Nano Banana annoyingly scales or shifts the image, you can do a quick difference blend and realign it. I haven't tried this with ACEScct.
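If you'd rather automate the realignment than eyeball the difference blend, here's a sketch with scikit-image's phase correlation. It only recovers translation, not scaling, and the single-channel patch arrays are hypothetical:

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

# ref/out: H x W float arrays of the original patch and the
# model output that came back slightly shifted.
ref = np.load("original_patch.npy")
out = np.load("model_output_patch.npy")

offset, error, _ = phase_cross_correlation(ref, out, upsample_factor=10)
realigned = nd_shift(out, offset)  # sub-pixel shift back into place
```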

The banding must mean it still gets reduced to 8 bits somewhere, hey. :frowning:
