Is there a "more correct" anamorphic workflow?

Howdy all,

I recently finished a campaign where all the footage was anamorphic (in my commercial setting it doesn’t happen every day), and it made me really question my workflow and wonder if there is a “more correct” way of dealing with anamorphic footage than what I was doing.

In the past I’ve done the desqueezing in the timeline with a resize or an action, albeit hesitantly. This time I resized the footage on import from the mediaHub into a library and set that library as my destination for the conform. To be clear, the footage was 4480x3096 from telecine. I basically eye-matched to the picture reference to determine the ratio for the resize, set the resize to Fill, and came up with something unexpected and seemingly random: 4480x1715. I assumed that if the anamorphic was 2:1 there would have been some very simple math to determine my resize resolution, but apparently I failed to solve for x correctly. I’m still dismayed by that. The upside, as far as I could tell, was that all of my plates were then desqueezed, so when I doled out shots to other artists their heads didn’t explode when they got squeezed plates, and the guesswork was eliminated in that regard. I didn’t really see a downside, except that I like to learn the proper way to do things and I’m not sure this is it.
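For reference, the simple math I expected to work, assuming the lens really was a 2:1 squeeze (my assumption, never confirmed with production), looks roughly like this, and it lands nowhere near 1715:

```python
# Back-of-the-envelope check, assuming a 2:1 anamorphic squeeze.
# Desqueezing by squashing the height while keeping the width should give:
src_w, src_h = 4480, 3096          # telecine plate resolution
assumed_squeeze = 2.0

desqueezed_h = src_h / assumed_squeeze
print(f"{src_w}x{desqueezed_h:.0f}")   # 4480x1548, nowhere near the 4480x1715 I eye-matched
```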

This isn’t as heady a topic as some of the color management or python scripting topics that I’ve read and love on the forum, but as someone who learned mostly by trial by fire out on an island of sorts, I’d appreciate any refined advice from this experienced crew.

Thanks!

1 Like

Just load the images with their own Aspect Ratio. Once in Flame, activate this button:

2 Likes

Hey @schmagen

I find that with commercials, when I am the final step of the process, removing the anamorphic nature of the pixels makes everything much easier to deal with. In the past I have worked on long form projects where changing the plate size was not an option.

Things might have improved but I would regularly find bugs when working on anamorphic footage. My gmasks would suddenly spring out of shape or my paint nodes would get messed up.

Not all anamorphic lenses are 2:1. I have been stung in the past, so now I find out from production what type of lens was used. You can’t always trust the offline or the transcodes to have done it correctly. There was one job where all of the transcodes and the offline assumed the lenses were 2.0x, but it was an ARRI Alexa Mini LF at 4480x3096 with Chameleon FF 1.79x anamorphic lenses.

As previously mentioned, if I am the final step in the process and I need to deliver standard square pixels, I prefer to do the resize on import using the custom resolution in mediaHub. Since my deliveries are often smaller than the native resolution, I prefer squeezing the height rather than stretching the width. This way I am not stretching the image and creating any new information.
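As a rough sketch of what I mean, using the 1.79x Chameleon job above as the example squeeze, the two directions work out like this; only the height squash avoids interpolating new pixels:

```python
# Two ways to get to square pixels from a 4480x3096 anamorphic plate,
# using a 1.79x lens squeeze as the example.
src_w, src_h = 4480, 3096
squeeze = 1.79

stretched_w = round(src_w * squeeze)   # 8019 - stretching the width invents new pixels
squashed_h = round(src_h / squeeze)    # 1730 - squashing the height only throws detail away
print(f"stretch width: {stretched_w}x{src_h}")
print(f"squash height: {src_w}x{squashed_h}")
```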

When getting shots tracked I am normally against modifying the plates in any way, but I have been applying the same transform to the tracking shots as well as the lens grids. This is where I believe I am sacrificing some accuracy.

There was some discussion on our last anamorphic job that we would also lose some of the anamorphic nature of our shots: defocus and blurs, for example. I figured that we could replicate this by having all x blur values double the y.
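A minimal sketch of that idea, done in numpy/scipy outside of Flame just to show the shape of it (the exact 2x relationship is only right if the lens really is a 2:1 squeeze):

```python
# Faking anamorphic defocus on an already-desqueezed plate:
# blur more in x than in y by the lens squeeze factor (2x for a 2:1 lens).
# In Flame this would simply be a blur with the x value set to double the y value.
import numpy as np
from scipy.ndimage import gaussian_filter

def anamorphic_blur(image, sigma_y, squeeze=2.0):
    # gaussian_filter takes one sigma per axis: (rows, cols) = (y, x)
    return gaussian_filter(image, sigma=(sigma_y, sigma_y * squeeze))

plate = np.random.rand(1080, 1920)           # stand-in for a desqueezed plate
soft = anamorphic_blur(plate, sigma_y=4.0)   # sigma 8 in x, 4 in y
```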

I guess it comes down to the type of work you will be doing. I might consider doing an entire job in anamorphic, if it was a big CG character job, to maintain authenticity.

Regardless of whether you keep the pixels anamorphic or squash them to make them square, the distortion that anamorphic lenses cause can often double the tracking and cleanup budget. A solid lens distortion pipeline is also required.

6 Likes

My head hurts a little when I get deep into the maths of it all, which is why I highly recommend doing some research into the particular type of lens that was used.

The traditional 2x anamorphic squeeze came from film: it was a way of getting a wider image onto a 4-perf 35mm frame, which is squarish in shape. The image was then un-squeezed when the projector put it on the screen. The maths behind the ratio stems from the size of the negative and the target ratio.

Different cameras and different lenses mean that it is no longer tied to a specific ratio.
I found that with my Chameleon anamorphic lenses, taking the full 4:3 sensor of an Alexa camera to a target ratio of 2.39:1 needed an anamorphic squeeze ratio of 1.7925. The numbers and the maths got messy, but I luckily had access to the raw files, which had the correct metadata.

Cooke has released a range of anamorphics with a 1.8 squeeze ratio, which is where I believe you got your vertical height of 1715 from, but again the maths isn’t as clean as I would like it to be. Yuck.
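For what it’s worth, here is roughly where those two numbers come from; this is just my working, so treat the rounding with suspicion:

```python
# Where the messy decimals come from:
# 1) squeeze ratio needed to take the Alexa's 4:3 sensor to a 2.39:1 delivery
sensor_aspect = 4 / 3
target_aspect = 2.39
print(round(target_aspect / sensor_aspect, 4))   # 1.7925

# 2) vertical height of a 4480x3096 plate desqueezed by squashing the height
#    with a 1.8x (Cooke-style) squeeze
print(round(3096 / 1.8))   # 1720, within a few lines of the eye-matched 1715
```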

Maybe someone can help shine more light on these painful decimals and ratios.

2 Likes

It’s a fuck on.

As Rich Betts points out, some of the nodes misbehave with anamorphic, especially paint. Pressing the view-with-ratio button doesn’t fix the bug with paint; you need to resize before and after the paint to be safe.

Added to that, the lens distortion is a bugger too and needs to be removed for cg. Removing it also helps with tracking and comping, since you don’t have to deal with bending at the edges.

Now if we could get that Disney research thing to remove chromatic aberration and reapply it afterwards, that would help too.

3 Likes

I’m sure it’s not as multifaceted as the Disney system (those people are amazing), but I’ve been using Julik’s SynthEyes matchbox to remove chromatic aberration. You zero out all the values and then add VERY small values to the red and/or blue and you can usually kill most of it, at least with uncomplicated lenses. Then copy the node and invert it and it’ll re-apply.

And I echo Rich’s approach. If you are the last person in the chain, remove the anamorphic squeeze by squishing vertically. Otherwise make sure you’ve got your pixel aspect assigned correctly on import, but as John notes there are some nodes (or parts of nodes) that do not correctly deal with anamorphics. I believe it’s gotten much better, but you may still run into issues someplace. It’s been a while since I had to round-trip anamorphics.

5 Likes

I generally remove anamorphic squeeze for tracking, roto, and CG plates by scaling horizontally to full resolution. A framing chart is essential so hopefully your camera department shot one. I always aspire to composite on the pristine plate without any modification (resize, degrain, etc.) unless there is a good reason (reprojection, odd resizes or stabilizations, extreme looks/grain, etc. etc.). If you’re in a rush and don’t care about purist sensibilities just unsquash, degrain, and munge the whole thing into a 1:1 grain free digital quantization and be done with it.

2 Likes

These are some great things to consider. I appreciate all of your replies!

Very helpful. I’m about to work on a few shots on a long-form project that get round-tripped; I’m not the last step.

Similar to the stabilize - paint - unstabilize workflow, one option could be to render de-squeezed plates, work on them in the traditional workflow, then re-squeeze them, possibly with a comp/mask to only affect the changed area. Or is there a gotcha or no-no in this? Things like aliasing or artifacts. That could get around the tools that don’t do well with the ratio on the fly. In my case the ratio is 2:1, so it would be an even pixel operation.

1 Like

The only gotcha, other than perhaps some of the filtering goofiness you alluded to above, is that you’re working on a plate that is significantly larger than you really need. And I like to go fassst if I can. I’ve worked on plenty of anamorphic shows; I just keep that use ratio on by default and most times never even think about it, and I don’t really have problems, even when it comes to tracking. If I’m remembering correctly, the last time I was having a tracking issue I made an unsqueezed plate and used that just for tracking, but not for my actual compositing (I think the actions and gmask tracers were able to just account for the resolution difference because it was the same aspect ratio, and it was fine; could be wrong here though, it’s been a second since I had to do that). I almost always prerender my paint though. That might just be superstition at this point, I don’t know, but it’s probably good practice to prerender Flame paint all the time anyway?

1 Like

Thx for the perspective. It’s a combination of work: quite a bit of paint that can be isolated to a region of interest, stabilization, and lots of comps. It was the paint issues folks mentioned that made me think of this.

1 Like

Totally makes sense. Definitely, cropping/ROI is something I don’t do enough of (thanks for the reminder)!!! And yeah, the paint can be wonky, but it’s so hard to tell sometimes what the root of the wonkiness is, and I’ve found it generally to be ok(?), but I prerender that stuff out as soon as I’m happy and am always duplicating the node before altering it. It’s like paint in linear space too… I always forget, until I get bit, that I should just paint in log.

Follow-up as I’m working through this…

This workflow is panning out well. It’s a bit quirky to render the shots back out to the anamorphic resolution. In the export you have to resize just the horizontal to half the resolution and then reset the w/h ratio to the original. That causes it to render at the squeezed resolution with a 2:1 pixel ratio. Nuke makes this a bit easier: the Reformat node handles pixel aspect ratio separately and pre-populates the format from when you read the file.

Regarding the RoI, one nice side benefit of using a perspective grid with UV to stabilize a shot is that the stabilized shot is automatically cropped to the stabilized region, and then resetting the UV to vertices expands it back out. So no separate RoI operations are needed. You can even render this out and round-trip it through Silhouette for paint.

One should keep the general dimensions of the stabilized region similar to the shot, otherwise the rendered file is stretched, and then on putting it back together you may end up with artifacts from the paint. My first pass was unusable because of that.

The other question is for @andymilkis. I used the grain/degrain flow from the recent Beauty tutorial, which subtracts/adds the grain. Simple workflow, and it re-uses the original grain. However, I ended up with some issues where the paint job replaced a dark background with a bright one. The grain became very pronounced and distracting. I ended up having to use a fresh grain plate with a lumakey to modulate it. Are there any other good tricks for re-using the original grain while accounting for significant luma shifts in the in-between paint?

You might want to have a look at this thread: How to use Highpass

1 Like

Thx. Yes, the method I took from Andy’s tutorial is what that thread refers to as Grain Theft. I’ve used frequency separation a lot over the years too. And the Silhouette implementation is a very nice time saver.

Apologies for the poor description of the issue I ran into. The technique works great. The problem was that the amount of grain present (or stolen, in this case) differs by luma zone of the shot. If you put it back on material within the same zone it looks perfect, but if the underlying image changed during paint it won’t match as well. In this case I was painting out a person and rebuilding the background forest from adjacent pixels, leading to mismatched zones: bright and dark areas swapping places. Kind of the same as with film grain applications, where you change it separately for H-M-L zones. There may not be any procedural trick, or it could get complicated, as you would have to take a luma key of the before and after, do a diff between them, and then use that to modify the stolen grain up or down to account for the paint shifts. I’ll have a play with that once the job is delivered and I have a bit of downtime.
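If I do get around to it, the rough shape would be something like this numpy sketch of the idea above; scaling the grain linearly with the luma change is an assumption I’d want to sanity-check on real plates:

```python
# Scale the stolen grain by how much the paint changed the local luminance,
# so a patch that went from dark to bright gets correspondingly stronger grain
# (and vice versa).
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])   # luma weights, good enough for a key

def luma(rgb):
    return rgb @ REC709

def regrain_with_luma_compensation(original, degrained, painted, eps=1e-4, max_gain=4.0):
    grain = original - degrained                        # the stolen grain (add/subtract style)
    gain = (luma(painted) + eps) / (luma(degrained) + eps)
    gain = np.clip(gain, 1.0 / max_gain, max_gain)      # keep the modulation sane
    return painted + grain * gain[..., None]            # broadcast gain across RGB
```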

The grain theft is not a magic bullet, because of the issues with varied luma, the grain mismatches that can occur, and ghosting from high-contrast areas. The very simple way to fix this is to source the grain you need for the patch from another part of the plate, doing this on the grain pass that’s already been subtracted and will be added back at the end. This is not always an option though, and the more complex the background the less likely it is to truly succeed. That’s when you just pull out the old regrain node, cycle your RGB channels, and match ’em up.

1 Like

Instead of adding and subtracting you can divide and multiply.

Divide and multiply can create their own issues, but they will generally give better results than adding and subtracting when the luminance of the plate has changed significantly because of the cleanup work.

I’ve never taken the time to truly think about the math of it, but I suppose it’s because divide and multiply are relative operations.
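A minimal sketch of the difference in numpy terms (not Flame nodes; the epsilon is just there to keep the divide safe):

```python
# Add/subtract vs divide/multiply regrain.
import numpy as np

EPS = 1e-4   # keeps the divide from blowing up on near-black pixels

def regrain_additive(original, degrained, painted):
    grain = original - degrained      # absolute grain offset
    return painted + grain            # same offset no matter what the new luminance is

def regrain_multiplicative(original, degrained, painted):
    grain = (original + EPS) / (degrained + EPS)   # grain as a relative factor
    return painted * grain                         # grain amplitude follows the new luminance
```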

More generally though I find myself doing what you’ve already done. Stick with add and subtract and then use a lumakey to adjust accordingly. Or as Britt mentioned just sourcing it from somewhere else. I love shooting a grey card on set just to get extra generic grain specifically to cover these kinds of situations.

1 Like