Depth of field (DOF)

Lewis Saunders, thx as always. Your insight is always much appreciated. The lentil.xyz plugin looks like a great solve for me in Houdini for the near term.

@Tim_miller

We had a very long discussion about DOF here… Physical Defocus

I can recommend frischluft.com - Lenscare Description… well worth the money, and the only way to really get DOF correct for 3D rendered assets with a z-depth matte.

Have a good read through the link above; I think I went through everything there. I used the Lenscare matchbox on the robot job and it did an excellent job.

For those interested in DoF as related to sensor size,
I found these PDFs from Panavision really helpful to understand what’s going on.
They also have a video series called Panalab; the Five Pillars of Anamorphic episodes are worth a watch.
Another site to check out is yedlin.net
Steve Yedlin has posted a bunch of interesting research there that bridges the VFX and camera worlds.

Cheers-
Andy D.

Panavision PRIMO 70 The Look.pdf (1.9 MB)
Panavision Sensor Size FOV.pdf (1.6 MB)
Panavision Sensor size Perspective.pdf (871.8 KB)

Revisiting this post. I tried all the different DOF nodes I could find except Frischluft, and checked out @imag4media’s web pages too (many thanks).

The thing I’m missing is something easy. My experience with depth of field is a single-lens reflex camera. It would be lush if Physical Defocus had field of view and lens size ganged together; surely one dictates the other, and the ratio between them is the neg size. I don’t know, it all just seems overly complex.

I want some things blurry. Some things not. Choose how harsh to feather in between and gain up or down.

If you’ve got a zDepth (either from CG or Action), then what you want is the Physical Defocus node.

Check out the docs on it. It’s really versatile and pretty quick.
World Scale is the one I change to taste.
Look at slices to see falloff.

I usually drive it with a FoV that I’ve generated from the cam data:
degrees(2 * atan(sensorHoriz/(2 * focalLength)))/lensSqueeze

Plug in your own values for sensorHoriz, focalLength and lensSqueeze.

I plug these into the appropriate fields with an expression link in Physical Defocus. (Physical Glare, Lens Flares and Action can sometimes source the same data.)
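
If it helps to sanity-check that expression outside Flame, here’s the same maths as a minimal Python sketch. The sensor and focal values below are placeholders I’ve made up; plug in your own camera data:

```python
import math

# Placeholder values -- substitute your own camera data.
sensor_horiz = 23.76   # sensor width, same units as focal length (mm here)
focal_length = 35.0    # lens focal length in mm
lens_squeeze = 1.0     # anamorphic squeeze factor (1.0 for spherical lenses)

# Horizontal field of view in degrees, matching the expression above.
fov = math.degrees(2 * math.atan(sensor_horiz / (2 * focal_length))) / lens_squeeze
print(f"Horizontal FoV: {fov:.2f} degrees")  # ~37.50 for these values
```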

Hope that helps.

A

Physical Defocus was definitely getting some nice results. It was annoying that FOV and lens size were two separate values. Maybe I needed more slices. World Scale I couldn’t wrap my head around.

It made me yearn for my Olympus OM-1.

When you say docs, do you mean the online manual in Flame help?

To get FoV from Action, make sure you’re using Cam3D and that the filmBack is correct for the sensor you’re matching. Then if you put in the focalLength on that camera, it’ll output the FoV. FoV is a linkable channel, but focalLength isn’t. Or just use the wonky expression above to get the same data.

Docs: the online manual, or the helpful vids on Flame Learning.

FORGOT to mention:
Action’s filmBack is measured in inches, which is annoying.
That’s why I use my wonky expression.
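
For anyone converting a camera report (usually in mm) to Action’s inches, it’s just a divide by 25.4. A tiny sketch, with a made-up sensor width:

```python
# Action's filmBack wants inches; camera reports are usually in mm.
sensor_horiz_mm = 23.76                    # placeholder sensor width in mm
film_back_in = sensor_horiz_mm / 25.4      # 1 inch = 25.4 mm
print(f"filmBack: {film_back_in:.4f} in")  # 0.9354 in for this value
```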

@johnt this is your future self. They’re all shite.

Just had a job with a bit of CG and zdepth, trying to match the DOF in the shot plates, which were Alexa 35mm F2.8. I could not get 3D Blur, Depth of Field or Ylens to match. The Physical Defocus matchbox tested my patience because it’s fiddly and I’d forgotten how to use it.

In the end I used a combination of Sapphire Z Defocus and ordinary blur to help with chroma blur bleeding. Such a faff. It never gets any easier. I wish there was a one-click AI thing that looks at the shot and just does it.

Mmm, how does that work? I often have issues with zdepth when using mattes with blur; the matte threshold setting on Action’s output options is too coarse.

It didn’t work very well, but it worked better than without the pixel spread. For that particular job, a close-up of a rabbit with whiskers, I ended up creating a DOF matte using a grad and the rabbit matte. That’s all I can remember at this point.

I meant: could you explain that tip? :wink:

Aren’t the blur artifacts at the matte edge related to the fact that depth passes aren’t (and shouldn’t be) anti-aliased? You can end up with a beauty pass that’s anti-aliased while your depth matte isn’t, so the blur lands on the wrong pixels; but if you anti-alias the depth pass, you get weird depth spikes from the fake in-between depth values.

There are lots of posts on how to handle this, as there is no perfect answer, but pixel spreading the depth pass (or eroding it) is the most common work-around.
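
Outside Flame, here’s a rough Python/scipy sketch of what that work-around amounts to. The function name is mine, and it assumes near = smaller depth values: spreading is done with a local minimum filter so the nearer depth eats the aliased edge pixels (use grey_dilation instead if your pass is inverted):

```python
import numpy as np
from scipy import ndimage

def spread_depth(depth: np.ndarray, pixels: int = 1) -> np.ndarray:
    # Local minimum filter: pushes the nearer (smaller) depth values
    # outward by `pixels`, past the anti-aliased edge of the beauty.
    size = 2 * pixels + 1  # 3x3 neighbourhood for a 1-pixel spread
    return ndimage.grey_erosion(depth, size=(size, size))

# Example on a dummy pass:
depth = np.random.rand(1080, 1920).astype(np.float32)
spread = spread_depth(depth, pixels=1)
```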

For reference: https://youtu.be/TkrbmaZoUSQ?si=qltBYjKpPZPs4Dd8&t=240

Hi Kily! Allklier’s link above explains it really well!

This is great for one layer. Where it gets tricky is when you have a full CG environment, e.g. a room with lots of things in it, and you want a nice falloff from the focal point. The z-depth is still going to have the same problem, but you can’t just grow it.

I don’t know the best solution, but I just blur the zdepth (rough sketch of the idea below). Interested to hear others’ solutions. I imagine @cnoellert will say deep compositing…
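
For what it’s worth, the “just blur it” version is trivially this (sigma to taste; it’s technically wrong at occlusion boundaries but often good enough):

```python
import numpy as np
from scipy import ndimage

depth_pass = np.random.rand(1080, 1920).astype(np.float32)  # placeholder pass
soft_depth = ndimage.gaussian_filter(depth_pass, sigma=2.0)  # softens slice steps
```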

OK, a little bump again. I can understand fixing the edge by applying Pixel Spread using the alpha, spreading the pixels around the limit of the zdepth, like in Hugo’s tutorial.

But what about when the issue is in between the slices inside the z-depth, like this?

[screenshot of the z-depth slice issue]

Well, the pixel spread is a work-around, not a technically correct solution, and there are definitely scenarios where it will break. Or you may need to paint/mask the depth matte to correct things.

Looks like in your example the pixel spread may be wider than needed. Generally you only want 1 or 1.5 pixels, just enough to get past the aliased pixels.

And it only works if you’re working with separate plates where the occluded/far pixels exist. If you use a depth matte on a single plate, you have a separate problem: the blur creates a halo because it cannot see behind your foreground object.

Here’s a cool deep dive into the topic: https://www.youtube.com/watch?v=HTM59OFuQfQ&list=PL0ex-VZr1W8hxhoC8-JJucxk9DTVWDtcy

No, in the example there is no pixel spread applied. It’s the result of the zdepth output by a multi-layer Action “as-is”, run through any of the blur nodes (3D Blur, Depth of Field…). The issue happens when the original layer has an (even slightly) blurred matte.

That is the issue I was talking about above. It makes Action’s zdepth output in Flame unusable for me.

Ah, didn’t catch that. I’ll have to experiment more with that.

Can’t quite replicate your scenario. Here’s what I played with:

The upper plate was sent into the back via the z-axis by about 1,000.

In Physical Defocus I get what I’d expect, setting the focal point on either the background or the foreground: a nice sharp edge on the foreground and a nicely blurred background.

If I crank up the blur on the matte of the foreground, it behaves OK until roughly 10 on the blur value; then it just stops working altogether.

Physical Defocus was in ‘Custom’ mode, and the only other value I changed was setting the normalization to ‘min-max’.
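
For reference, ‘min-max’ normalization just remaps the pass so its smallest value lands at 0 and its largest at 1. A minimal sketch of the idea (not Flame’s actual code):

```python
import numpy as np

def normalize_min_max(depth: np.ndarray) -> np.ndarray:
    # Remap depth values to the 0..1 range.
    z_min, z_max = float(depth.min()), float(depth.max())
    return (depth - z_min) / (z_max - z_min)
```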

@lewis did you ever persist with your defocus project?