Depth of field (DOF)

There seems to be a real lack of knowledge among Flame artists when it comes to DOF.
Many artists (myself included) seem to be winging it, especially when we get a z-depth pass from CGI.
I’ve recently learned a lot more about it than ever before, but I still feel like there is a lot I don’t know.
Is there an in-depth article or tutorial about it besides the Flame ones?

Let’s discuss!
I’ve had to Pixel Spread the z-depth pass so that the blur doesn’t suddenly shoot through the roof when it gets to the edge of the matte. IS THIS NORMAL???
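
Roughly what that pixel spread is doing, as a toy Python/NumPy sketch (the function and the scipy-based nearest-fill are just an illustration, not Flame’s actual implementation):

```python
import numpy as np
from scipy import ndimage

def spread_depth(depth, matte, threshold=0.5):
    """Replace depth values outside the matte with the nearest valid depth,
    so a depth-driven blur doesn't sample garbage at the matte edge."""
    valid = matte >= threshold
    # coordinates of the nearest valid pixel, for every pixel in the frame
    _, idx = ndimage.distance_transform_edt(~valid, return_indices=True)
    return depth[tuple(idx)]
```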

Why do I crank the blur up to 100 and the image is barely blurry?
How do you deal with whiskers over white in the z-depth pass? The DOF seems to destroy the whiskers.

2 Likes

It’s a tricky problem that is rarely solved well in 2D (maybe PG Bokeh does?). Outside of deep passes you have only one depth value per x/y screen position, so the solutions provided in Flame all fall short for things with transparency, anti-aliasing, and/or general softness.

This is further complicated by Flame’s three in-software offerings all being different.

  • 3D Blur is the oldest, but you can input your own kernel.
  • DoF is newer and has some pixel-spread-like tools to help with your whiskers issue. It’s not bulletproof, but it can help. You can customize the kernel but not input your own.
  • Action Physical Defocus is the newest, though I find it lacking on all fronts; a sort of less intuitive 3D Blur.
  • My personal favorite is the matchbox Y_lens_blur. It works very well with gradients (though not depth maps) and is super fast. Its noisy bokeh is also quite pretty and often feels more lens-like than the other options.

My general feeling on depth of field is that you only need depth maps in very specific instances; most other times, blurring the image evenly (with a bokeh/lens blur) is a better approach. Outside of macro shots, an object being both in and out of focus is rare, photographically speaking.
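
To make that concrete: an even lens blur is just a convolution with an aperture-shaped kernel instead of a Gaussian. A minimal Python/NumPy sketch, with a hexagon standing in for a sampled bokeh (purely illustrative):

```python
import numpy as np
from scipy.signal import fftconvolve

def hexagon_kernel(radius):
    """Rough hexagonal aperture; a stand-in for a sampled bokeh image."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    ang = np.arctan2(y, x)
    # polar equation of a regular hexagon: distance to the edge at each angle
    edge = radius * np.cos(np.pi / 6) / np.cos((ang % (np.pi / 3)) - np.pi / 6)
    return (np.hypot(x, y) <= edge).astype(np.float64)

def bokeh_blur(img, kernel):
    """Uniform lens blur: highlights bloom into the kernel's shape."""
    k = kernel / kernel.sum()
    return np.stack([fftconvolve(img[..., c], k, mode="same")
                     for c in range(img.shape[-1])], axis=-1)
```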

The only real “trick” to using any of the Flame-supplied nodes is increasing the number of slices. It’s not a perfect system, and back when I was more passionate about this stuff I kept asking for an “exponential slice” system, because I’d often increase the slices just to get a nice transition from totally sharp to just a little out of focus; by the time the slices are going from huge to more huge, I don’t care. That may have kind of happened with the Action node, but that node also adds some measure of softness to the sharp layer, so I’ve never employed it professionally.
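
For what it’s worth, that exponential-slice idea is easy to sketch. Hypothetically, something like this, which packs most of the boundaries into the sharp-to-slightly-soft transition (not how Flame’s slicing actually works):

```python
import numpy as np

def exponential_slices(near, far, n_slices, bias=4.0):
    """Slice boundaries biased toward 'near', so the transition out of
    focus gets most of the slices; bias > 1 increases the packing."""
    t = np.linspace(0.0, 1.0, n_slices + 1)
    w = (np.power(bias, t) - 1.0) / (bias - 1.0)  # remaps 0..1, denser near 0
    return near + w * (far - near)
```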

Lastly, with render power being what it is these days, tricky DOF should be done on the 3D side and rendered in.

5 Likes

2D DOF sucks.

PG Bokeh is the best I’ve seen, though, even without deep.

What I really hate is when the scale and values are all over the place: the stuff I used to do to normalize z-depth and whatnot. Ugh, all the hacks.

Nowadays I usually try to get a simple depth pass that is just linear value = distance from the camera. That way I can put the metadata from the camera (per-frame focus distance), as well as sensor size and stuff like that, into OpticalZDefocus (or whatever that’s called in Nuke) and I’m 95% there.
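
For reference, the maths a tool like that runs on is plain thin-lens geometry, which is why having real camera metadata gets you most of the way. A toy Python sketch (the parameter defaults are placeholders, not anyone’s pipeline values):

```python
import numpy as np

def coc_pixels(depth, focus, focal=50.0, fstop=2.8,
               sensor_w=36.0, image_w_px=1920):
    """Thin-lens circle of confusion, in pixels, for a linear depth pass
    where value = distance from camera; all distances in mm."""
    aperture = focal / fstop                        # entrance pupil diameter
    coc_mm = (aperture * focal / (focus - focal)
              * np.abs(depth - focus) / depth)
    return coc_mm / sensor_w * image_w_px
```

The per-frame focus distance from the metadata drives `focus`, and the result is the per-pixel blur size you’d feed a lens blur.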

I stay away from doing CG comps in Flame; nothing there is made to work with absolute float values. I don’t know, guessing lens geometry is boring when you have maths :rofl:.

I don’t agree with baking DOF in in every situation; it’s hard to fine-tune in CG. For full CG I usually bake it in, but even then it can be tricky if you have a bunch of layers and stuff from different renderers thrown together…

And then add some motion blur to the mix. Ugh.

1 Like

I agree with Andy, there really aren’t solid options for doing DOF photorealistically in Flame (that I’ve found). Also agree that y_lensblur is the best for doing creative selective focus on top of a flat plate or comp. There are others like MD_dof, crok_dof_cfx, etc., all of which are real-time, but they have edge issues as well, depending on the context. If it’s a quick shot and you can get away with stuff, these or the ADSK tools might suffice. But yeah, if you’re lingering on a bunch of 3D elements with rack focuses, true DOF, etc., it’s best to get it from 3D.

I haven’t gotten to play with Action Physical DOF yet, but am eager to try it.

I’d also be aware of your Action camera’s near and far settings; it’s generally best to keep them as narrow (close to the objects in z-space) as possible. This makes a significant difference no matter which DOF tool you’re using.
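
A quick, hypothetical back-of-the-envelope for why that matters: if the depth pass lands in integer code values, the fraction of the near/far range your subject occupies is all the precision you get.

```python
def depth_levels(bits, near, far, subj_near, subj_far):
    """Distinct code values an integer depth output spends on the subject;
    tighter near/far planes leave more levels for the subject."""
    return int((2 ** bits - 1) * (subj_far - subj_near) / (far - near))

# a subject spanning 200 units of z:
depth_levels(12, 0, 10000, 900, 1100)   # wide planes  -> 81 levels
depth_levels(12, 800, 1200, 900, 1100)  # tight planes -> 2047 levels
```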

1 Like

A slight deviation of the thread: I was playing about with my iPhone in Londinium a few weeks ago, shooting HDR footage and testing the handheld depth of field in FilmicPro on a wall. I tracked in some perspective-grid text and was trying to match in the DoF manually. The rack focusing was chaotic, and I was thinking that it would be great to have an ML node in Flame that, like the DeGrain and ReGrain, could read the DoF in the image and apply it to objects in Action, or post-Action. Even if you could set what is max in-focus and max out-of-focus manually, as a kind of helper-benchmark-bracket, it would be nice to have Flame then be able to read the image’s DoF movements after that and apply the changes to whatever you pipe out. A little like MLHumanBodyExtraction, but ML_DoF_Extraction? You could then use it inside of Flame to generate z-depth for whatever Flame generates too?

Cheers
Tony

1 Like

My tuppence: they’re all hacks. When I compared PG Bokeh to Flame there didn’t seem to be much difference, to be honest.

Haven’t tried Frischluft. Wouldn’t mind a go.

I can get pretty good results with Depth of Field. All the usual criteria noted above apply. One thing not mentioned is the z-depth: use the histogram in the Colour Corrector to set black and white points before connecting to Depth of Field-type nodes.
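
That histogram step is effectively a normalize. A minimal sketch of the same idea in Python/NumPy (the percentile choices are arbitrary):

```python
import numpy as np

def normalize_depth(z, low_pct=0.1, high_pct=99.9):
    """Set black and white points from the depth histogram so the full
    0-1 range is used before feeding a Depth of Field-type node."""
    lo, hi = np.percentile(z, [low_pct, high_pct])
    return np.clip((z - lo) / max(hi - lo, 1e-8), 0.0, 1.0)
```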

The Y_lensblur matchbox is nice for certain things and worth a try, but in my experience it falls apart quickly.

Sampling a bokeh and connecting it to 3D Blur works wonders on occasion.

95% of the time DOF does the trick. Getting the pull-focus animation timed right is the hard part, as is getting the highlights to the right value to match the blooming.

2 Likes

Thanks Andy! I guess the reason I’m having more trouble than usual is that it’s macro photography where the CGI subject is moving and is both in and out of focus. I haven’t had to do this in a long time.

1 Like

Thanks GPM! I find that Action Physical DOF, like Stingray motion blur, is great for quick pre-vis, but when you need the final render it’s better to do it somewhere else.

Hi TonyRichards! I like that idea.

1 Like

Hi Johnt,

Yes, setting the histogram so that you can see the different shades of the z-depth is key! Up until someone showed me that, I thought CG was just sending me white frames! HAHAHA

2D DoF is a hassle and always a hack. Sometimes I even get an anti-aliased z-depth pass with motion blur. Lol. I’m still waiting on a (non-deep) way to combine post DoF AND motion blur. Won’t happen. As others mentioned, in non-deep images there’s just not enough information. Maybe some smart machine-learning-based tool could solve that, but I have yet to run into an ML tool with enough temporal consistency to reaaaally nail things.
Anyway. Frischluft is by far my favorite, even though it’s old and slow.

1 Like

And having said all that, today I used Y_Lens. Comme ci, comme ça, Rodders.


Y_Lens is my favorite for z-depth.

1 Like

Y_lensblur is totally my go-to for super-shallow depth of field phone comps. I just hack together a “depth map” from garbage masks and then I’m good to go!
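
That hack is simple enough to sketch. Hypothetically, with each garbage mask assigned a depth value (names and values are illustrative; masks are float mattes in 0..1):

```python
import numpy as np

def depth_from_masks(masks, depths, far=1.0):
    """Stack garbage masks into a fake depth map: start at the far value
    and paint each nearer layer's depth where its matte is solid."""
    fake = np.full_like(masks[0], far)
    for m, d in sorted(zip(masks, depths), key=lambda p: -p[1]):
        fake = np.where(m > 0.5, d, fake)  # nearer layers overwrite farther
    return fake

# e.g. person at 0.2, wall at 0.6, everything else stays far at 1.0:
# fake_z = depth_from_masks([person_mask, wall_mask], [0.2, 0.6])
```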

1 Like

The old 3D Blur actually tries to do this, I think… it has a motion vector input to combine the depth-of-field blur with the motion blur and do both in a single step? Not that the result is pretty, but it avoids worrying about which to do first, defocus or motion…

Having both aliased (correct depth values but pixellated edges) and AA’d (smooth edges, but depth values across the edge are wrong) Z is definitely a good shout if you can get them, so you can use the AA’d depth to split the image into FG and BG layers yourself instead of relying on automatic slicing. You can also take the blue channel of a camera-space position pass to use instead of standalone Z, because that’s usually smooth.
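
A toy version of that split, with a Gaussian standing in for a proper lens blur (the soft-matte ramp is a simplification, and a real comp would also need edge in-fill behind the FG):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_and_defocus(img, z_aa, split_depth, bg_sigma):
    """Use the anti-aliased depth as a soft FG matte, defocus only the
    BG, then comp the sharp FG back over it."""
    ramp = max(split_depth * 0.05, 1e-8)        # softness of the split
    fg = np.clip((split_depth - z_aa) / ramp, 0.0, 1.0)[..., None]
    bg = gaussian_filter(img, sigma=(bg_sigma, bg_sigma, 0))
    return img * fg + bg * (1.0 - fg)
```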

There’s been a lot of cool research into lens simulation in the last few years that might lead to some new ways to do this stuff. In CG there’s already https://www.lentil.xyz/, which is both way more accurate and faster than Arnold’s built-in depth of field. I’m trying to keep track of it and eventually make a 2D equivalent; you can follow my progress on https://twitter.com/dearlensform, but the day job has taken over for the last couple of months. I did go through how Unreal Engine and Eevee do their DoF, which is a 2D process that cuts a lot of corners but is crazy fast: https://twitter.com/dearlensform/status/1468301578377568264

12 Likes

Amazing @lewis :star_struck:

1 Like

Lentil is indeed super cool and I’m excited that you’re working on a 2d equivalent. Please keep us posted!

2 Likes

[image: lenti_01, a Lentil test render]

Installed it in Houdini and gave it a whirl. The above was just 3 samples with a sample multiplier of 5, but it’s pretty impressive. What blows my mind is that things like this aren’t being developed by Autodesk in-house… you would imagine they would be all over this.

5 Likes

Yeah, it’s crazy. It’ll be interesting to see if something emerges from Weta via Unity, since this stuff would be a natural complement to what they revealed about their PhysLight system.

There’s another bunch of adjacent research into physically correct flares which also seems like it’s absolutely begging to be implemented. Animal did something with it in-house: https://twitter.com/dearlensform/status/1466137019931934720 …but this even older technique also looks absolutely amazing and does the beautiful folding/distortion of flare elements that you can’t get from Video Copilot (or Action): https://twitter.com/dearlensform/status/1466127962760228868

3 Likes

Nothing like seeing a gorgeous flare and the sentence “these flares can be rendered at hundreds of frames per second on current hardware” in a paper that’s 11 years old.

Jesus this would be great in Action. Seems like a very natural fit.

4 Likes