Z Depth Estimation

Hi,

Have you tried this one? It's like our ML Depth Map, but it works like a charm in Nuke.

Is there a way to make this one a Matchbox? Or are there legal issues?

@lewis You are my hero

Yeah, I've used it several times since v1 on the standalone side. It's command line, but it's pretty straightforward to use. I think it's good, but since it's photo based the output flickers, which may not work in some cases; for stills it's quite powerful… Matchbox? I don't think it's possible to make a Matchbox out of this. It would need a Python/PyTorch implementation, like Andriy did with TimeWarpML, which is based on ECCV2022-RIFE and translated to work inside Flame without any coding in the terminal. So we need someone with Python/PyTorch skills to convert this and make it work with Flame.
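
For anyone who wants to try the standalone route in Python, the inference roughly boils down to something like this, here via the Hugging Face transformers pipeline (the model id and file names are my assumptions, so double-check against the repo):

```python
# Minimal sketch of running Depth Anything v2 on a single still via the
# Hugging Face transformers depth-estimation pipeline. The model id and
# file names below are assumptions -- check the hub for current names.
from PIL import Image
from transformers import pipeline

pipe = pipeline(task="depth-estimation",
                model="depth-anything/Depth-Anything-V2-Small-hf")

image = Image.open("plate_0001.png")    # hypothetical input frame
result = pipe(image)                    # dict with the estimated depth
result["depth"].save("depth_0001.png")  # "depth" is a PIL image
```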

Also, there’s a much better depth estimation tool called ChronoDepth, which is video/image-sequence focused; the output is temporally consistent and has [IMO] the least flickering of all the depth estimation tools I’ve tried. The only “downside” is that it requires a lot of VRAM, but if your Flame box has a 48 GB card, which many of you guys have, you’ll be good to go. Don’t worry about the flickering in the demo outputs: it seems they processed fewer frames than the footage actually has, which is why it flickers there. The --num_frames=10 flag sets the number of frames ChronoDepth will process and make consistent before it jumps to the next window, not the number of frames the footage has.
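
If it helps, here’s a rough sketch of that windowing idea in Python (the paths and the stub function are mine, not ChronoDepth’s actual code):

```python
# Illustrative sketch of ChronoDepth-style windowed processing: the
# --num_frames flag sets how many frames get made consistent together
# before the tool jumps to the next window. Paths and the stub below
# are hypothetical, not ChronoDepth's actual implementation.
from pathlib import Path

NUM_FRAMES = 10  # corresponds to --num_frames=10


def run_depth_window(window):
    # Hypothetical stand-in for the per-window model inference.
    print(f"depth pass: {window[0].name} .. {window[-1].name}")


frames = sorted(Path("footage").glob("*.png"))  # hypothetical sequence
for start in range(0, len(frames), NUM_FRAMES):
    window = frames[start:start + NUM_FRAMES]
    # Frames inside a window are temporally consistent with each other;
    # any residual flicker tends to show up at window boundaries.
    run_depth_window(window)
```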

I tried DepthAnything v2 in NUKE.
The results are much better than Flame’s ML Depth Map and NUKE’s MiDaS, with less flicker and more detail.

Flame might be able to use DepthAnything v2 (or the NUKE implementation of it) via PyBox, but it will likely demand a beefier GPU, as it consumes a lot of GPU resources. Honestly, I want Flame to have more flexible ML features such as NUKE’s CopyCat and Cattery.
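
For what it’s worth, here’s a rough sketch of the kind of per-frame loop a PyBox setup would have to drive externally; the pybox_v1 glue itself isn’t shown, and the model id, paths, and fp16 choice are all my assumptions:

```python
# Rough sketch of batch-processing a frame sequence with Depth Anything v2
# on the GPU -- the sort of external process a PyBox node could call out to.
# Model id and paths are assumptions; fp16 is used to ease VRAM pressure.
from pathlib import Path

import torch
from PIL import Image
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1  # GPU if available
pipe = pipeline(task="depth-estimation",
                model="depth-anything/Depth-Anything-V2-Small-hf",
                device=device,
                torch_dtype=torch.float16 if device == 0 else torch.float32)

out_dir = Path("depth")  # hypothetical output folder
out_dir.mkdir(exist_ok=True)
for frame in sorted(Path("plates").glob("*.png")):  # hypothetical plates
    depth = pipe(Image.open(frame))["depth"]  # per-frame depth, PIL image
    depth.save(out_dir / frame.name)
```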

@Hiroshi - I concur, this tool is incredibly fast and temporally coherent.
