Image to z-depth

If I wanted to use camera depth of field but I don't have a Z layer,
how could I transform a normal ramp into Z information?
I have a feeling there was something like that in Nuke, but in Flame? (Old Flame 2016, without all the fancy AI additions.)
Or maybe an outside solution to create a fake z-depth?

You can use Runway ML to create a z-depth pass from a live action clip. I actually haven't used that particular tool, but all the other tools are v good. If you don't have a Runway account I'll happily run it through for you and do a test (if you're able to share the clip).

If you have a normal map, you might be able to get a depth map out of it using Action together with the depth map normalizer matchbox.


How many shots? Want me to run a couple through a new version of Flame’s ML Depth Extract for ya?

Flame's ML depth map really helped me on House of the Dragon.

I just used Flame’s ML depth on a project and it worked reasonably well. Have to mind the parameters a bit, and watch the matte. I had to roto an area where the matte was weak, so I could keep the parameters in the sweet spot for the rest. But definitely solid tool in the right situation.

Blue channel on a normal map is the facing ratio, which could be used as a hacky depth map.
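To illustrate the blue-channel trick: assuming a camera-space normal map encoded the usual way (each channel mapping a [-1, 1] component to 0-255), the blue channel is the Z component pointing at the camera, so remapping it back gives a rough facing-ratio matte. A minimal numpy sketch (function name and encoding assumption are mine, not a Flame API):

```python
import numpy as np

def facing_ratio_depth(normal_map: np.ndarray) -> np.ndarray:
    """Turn a camera-space normal map (H, W, 3, uint8 0-255)
    into a rough grayscale facing-ratio matte.

    Surfaces facing the camera encode normal Z ~ +1 (blue near 255);
    grazing surfaces fall toward 128.  Remapping that range to [0, 1]
    gives a crude matte usable as a hacky depth/DoF mask.
    """
    blue = normal_map[..., 2].astype(np.float32) / 255.0
    # Undo the [-1, 1] -> [0, 1] encoding of the Z component,
    # clipping away back-facing values.
    return np.clip(blue * 2.0 - 1.0, 0.0, 1.0)
```

It's only a facing ratio, not true scene depth, so it breaks down the moment two surfaces at different distances face the camera equally, but for a quick DoF cheat it can be enough.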

Facing Ratio. Sounds like dope band name.


I have 2 shots. I've tried Runway ML in free mode, but it's not precise enough. Do you think I could send you those shots @randy for a quick test please?


It's not a 3D shot. It's a drone shot where I would like to lose the real-life look and push it toward a tilt-shift mockup look.
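For a tilt-shift miniature look specifically, a full ML depth pass may be overkill: a simple vertical ramp with a sharp in-focus band often sells the effect. A sketch of generating such a ramp as a fake Z pass (function name and parameters are my own, purely illustrative):

```python
import numpy as np

def tilt_shift_ramp(height: int, width: int,
                    focus_row: float = 0.5, band: float = 0.1) -> np.ndarray:
    """Vertical ramp usable as a fake z-depth for a tilt-shift look.

    focus_row: vertical position of the focus band (0 = top, 1 = bottom).
    band:      height of the fully-in-focus region, as a fraction of frame.
    Pixels inside the band map to 0 (sharp); the value grows linearly
    with distance from the band, driving more blur above and below.
    """
    rows = np.linspace(0.0, 1.0, height).reshape(-1, 1)
    dist = np.abs(rows - focus_row)
    depth = np.clip((dist - band / 2) / (1.0 - band / 2), 0.0, 1.0)
    return np.repeat(depth, width, axis=1)
```

Feeding that ramp into the defocus as the Z input (and cranking saturation/contrast) is the classic fake-miniature recipe; it only fails where tall objects like the chimney cross the focus band.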

Sure. Send me a link

thank you so much @randy
Shot 1 is here
Shot 2 is here - I'm not sure how it can work for this little ladder
Shot 3 - if there is a problem with shot 2, maybe this one would be better? -

I've tested it with Depth Scanner and I have a feeling it's not possible to get a clean Z from shots 2 and 3 - the depth difference between the BG and the chimney is too big. I'd have to clean out the chimney, do the Z, then add the chimney back :nauseated_face: :face_with_head_bandage:

I had a similar issue recently where I was adding fog to a city. ML got me tantalisingly close but in the end I had to create the depth map manually using lots of roto. I’d imagine for a DoF effect the map would have to be even more accurate.

Out of curiosity…

Realizing you're looking for a Flame solution, but since you're possibly considering Runway ML to process this externally - are some of these situations a case where CopyCat could help? Manually do a few frames and train a dataset on them. It seems like a good use case where the fully automated tools have failed. Also, if you had multiple z-depth tools, is there any use in averaging their mattes rather than picking just one? Or combining them by zones, where each one does better?
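The averaging / zone-combining idea above can be sketched simply: given several depth passes normalized to the same range, either average them (optionally weighted) or blend them through per-tool zone masks. A minimal numpy sketch, with all names being my own (this is not a Flame or Nuke API):

```python
import numpy as np

def combine_depth_maps(depths, weights=None, zone_masks=None):
    """Blend several depth passes (same H x W, same value range) into one.

    - With zone_masks: each 0-1 mask marks where its depth map is
      trusted; masks are normalized to sum to 1 per pixel, then used
      as blend weights.
    - Otherwise: a plain (optionally weighted) average.
    """
    depths = np.stack(depths, axis=0).astype(np.float32)
    if zone_masks is not None:
        masks = np.stack(zone_masks, axis=0).astype(np.float32)
        total = masks.sum(axis=0, keepdims=True)
        masks = masks / np.maximum(total, 1e-6)  # avoid divide-by-zero
        return (depths * masks).sum(axis=0)
    if weights is None:
        return depths.mean(axis=0)
    w = np.asarray(weights, dtype=np.float32).reshape(-1, 1, 1)
    return (depths * w).sum(axis=0) / w.sum()
```

One caveat: ML depth tools rarely agree on absolute scale, so the passes would need to be matched (e.g. normalized against shared landmarks) before any averaging makes sense; zone-based combining sidesteps that somewhat since only one map dominates each region.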