Is there a "more correct" anamorphic workflow

Howdy all,

I recently finished a campaign where all the footage was anamorphic (in my commercial setting it doesn’t happen every day), and it made me really question my workflow and wonder if there is a “more correct” way of dealing with anamorphic footage than what I was doing.

In the past I’ve done the desqueezing in the timeline with a resize or an action, albeit hesitantly. This time I resized the footage on import from the MediaHub into a library and set that library as my destination for the conform. To be clear, the footage was 4480x3096 from telecine. I basically eye-matched to the picture reference to determine the ratio for the resize, set the resize to Fill, and came up with something unexpected and seemingly random: 4480x1715. I assumed that if the anamorphic was 2:1 there would have been some very simple math to determine my resize resolution, but apparently I failed to solve for x correctly. I’m still dismayed by that. The upside was that all of my plates were then desqueezed, so when I doled out shots to other artists, their heads didn’t explode when they got squeezed plates and the guesswork was eliminated in that regard. I didn’t really see a downside, except that I like to learn the proper way to do things and I’m not sure this is it.
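For what it’s worth, the “simple math” does exist once the true squeeze ratio is known. A minimal sketch in Python (the helper is hypothetical, not anything in Flame):

```python
# Hypothetical helper, not a Flame API: a Fill-style desqueeze that keeps
# the native width and divides the height by the squeeze ratio.

def desqueeze_height(width, height, squeeze):
    """Desqueeze by squashing the height; width stays native."""
    return width, round(height / squeeze)

# If the glass really were 2:1, the 4480x3096 plate would land at:
print(desqueeze_height(4480, 3096, 2.0))   # (4480, 1548)
# Nowhere near the eye-matched 4480x1715, so the squeeze wasn't 2:1.
```

The fact that 2:1 gives 1548 rather than anything near 1715 is the first hint that the lenses used a different ratio, which later replies in this thread dig into.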

This isn’t as heady a topic as some of the color management or python scripting topics that I’ve read and loved on the forum, but as someone who learned mostly by trial by fire out on an island of sorts, I’d appreciate any refined advice from this experienced crew.



Just load the images with their own aspect ratio. Once in Flame, activate the view-with-ratio button in the viewer.


Hey @schmagen

I find that with commercials, when I am the final step of the process, removing the anamorphic nature of the pixels makes everything much easier to deal with. In the past I have worked on long form projects where changing the plate size was not an option.

Things might have improved but I would regularly find bugs when working on anamorphic footage. My gmasks would suddenly spring out of shape or my paint nodes would get messed up.

Not all anamorphic lenses are 2:1. I have been stung in the past so now I find out from production what type of lens was used. You can’t always trust the offline or the transcodes to have done it correctly.
There was one job where all of the transcodes and the offline assumed the lenses were 2.0x, but the footage was shot on an Arri Alexa Mini LF at 4480x3096 with Chameleon FF 1.79x anamorphic lenses.

As previously mentioned, if I am the final step in the process and I need to deliver standard square pixels, I prefer to do the resize on import using the custom resolution in MediaHub. Since my deliveries are often smaller than the native resolution, I prefer squeezing the height rather than stretching the width. That way I am not interpolating and creating new information.
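A quick sketch of why the height squash is the conservative choice (the 1.8x ratio here is assumed for illustration, using the plate size from this thread):

```python
# Two ways to square the pixels of a 4480x3096 plate with an assumed
# 1.8x anamorphic squeeze. Numbers are from this thread, not a Flame API.
w, h, squeeze = 4480, 3096, 1.8

squash  = (w, round(h / squeeze))   # (4480, 1720): discards rows, invents nothing
stretch = (round(w * squeeze), h)   # (8064, 3096): interpolates brand-new columns

print(squash, stretch)
```

If the delivery is smaller than native anyway, the squashed version already contains everything you need, while the stretched one nearly doubles the pixel count with synthesized data.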

When getting shots tracked I am normally against modifying the plates in any way, but I have been applying the same transform to the tracking shots as well as the lens grids. This is where I believe I am sacrificing some accuracy.

There was some discussion on our last anamorphic job that we would also lose some of the anamorphic nature of our shots, defocus and blurs for example. I figured that we could replicate this by having all x blur values double the y.
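The x-doubles-y rule of thumb as arithmetic (a sketch of the idea, not any node’s actual parameters; it assumes the plate was desqueezed from 2:1 glass):

```python
# Sketch: on a plate desqueezed from a 2:1 anamorphic, feeding any
# blur/defocus an x value double the y approximates the oval bokeh
# the lens would have produced.

def anamorphic_blur(base_y, squeeze=2.0):
    """Return (x, y) blur widths for a plate desqueezed by `squeeze`."""
    return base_y * squeeze, base_y

print(anamorphic_blur(5.0))   # (10.0, 5.0)
```

For a non-2:1 lens you would swap in the real squeeze ratio rather than assume 2.0.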

I guess it comes down to the type of work you will be doing. I might consider doing an entire job in anamorphic, if it was a big CG character job, to maintain authenticity.

Regardless of whether you keep the pixels anamorphic or you squash them to make them square, the distortion that anamorphic lenses cause can often double the tracking and cleanup budget. A solid lens distortion pipeline is also required.


My head hurts a little when I get deep into the maths of it all, which is why I highly recommend doing some research into the particular type of lens used.

The traditional 2x anamorphic squeeze came from film: a way of getting a wider image out of a 4-perf 35mm frame, which was squarish in shape. The image was then un-squeezed when the projector put it on the screen. The maths behind the ratio stems from the size of the negative and the target ratio.

Different cameras and different lenses mean that it is no longer tied to a specific ratio.
I found that with my Chameleon anamorphic lenses, the full 4:3 sensor of an Alexa wanting to get to a target of 2.39:1 needed an anamorphic squeeze ratio of 1.7925. The numbers and the maths got messy, but luckily I had access to the raw files, which had the correct metadata.
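That 1.7925 isn’t arbitrary; it falls straight out of the two aspect ratios. A quick check:

```python
# Where 1.7925 comes from: squeeze = target aspect / sensor aspect.
sensor_aspect = 4 / 3      # full 4:3 Alexa sensor
target_aspect = 2.39       # 2.39:1 delivery

print(round(target_aspect / sensor_aspect, 4))   # 1.7925
```

So the squeeze ratio is just the delivery aspect divided by the sensor aspect, and any change to either number moves the ratio.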

Cooke has released a range of anamorphics with a 1.8 squeeze ratio, which is where I believe your vertical height of 1715 came from, but again the maths isn’t as clean as I would like it to be. Yuck.
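Checking the 1715 from the original post against a clean 1.8x squeeze (plain arithmetic, nothing Flame-specific):

```python
# A true 1.8x desqueeze of the 3096-pixel height, versus the eye-matched 1715.
print(round(3096 / 1.8))        # 1720, five pixels off the eye-matched 1715
print(round(3096 / 1715, 4))    # 1.8052, the ratio the 1715 result actually implies
```

The implied ratio of roughly 1.805 sits close to a nominal 1.8x lens, which is consistent with an eye-match landing a few pixels off the exact value.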

Maybe someone can help shine more light on these painful decimals and ratios.


It’s a fuck on.

As Rich Betts points out, some of the nodes misbehave with anamorphic, especially Paint. Pressing the view-with-ratio button doesn’t fix the bug with Paint; you need to resize before and after Paint to be safe.

Added to that, the lens distortion is a bugger too and needs to be removed for CG. It also helps with tracking and comping, since you don’t have to deal with bending at the edges.

Now if we could get that Disney research thing to remove chromatic aberration and reapply it after, that would help too.


I’m sure it’s not as multifaceted as the Disney system (those people are amazing), but I’ve been using Julik’s Syntheyes matchbox to remove chromatic aberration. You zero out all the values and then add VERY small values to the red and/or blue, and you can usually kill most of it, at least with uncomplicated lenses. Then copy the node, invert it, and it’ll re-apply.

And I echo Rich’s approach. If you are the last person in the chain, remove the anamorphic squeeze by squishing vertically. Otherwise make sure you’ve got your pixel aspect assigned correctly on import, but as John notes there are some nodes (or parts of nodes) that do not correctly deal with anamorphics. I believe it’s gotten much better, but you may still run into issues someplace. It’s been a while since I had to round-trip anamorphics.


I generally remove anamorphic squeeze for tracking, roto, and CG plates by scaling horizontally to full resolution. A framing chart is essential so hopefully your camera department shot one. I always aspire to composite on the pristine plate without any modification (resize, degrain, etc.) unless there is a good reason (reprojection, odd resizes or stabilizations, extreme looks/grain, etc. etc.). If you’re in a rush and don’t care about purist sensibilities just unsquash, degrain, and munge the whole thing into a 1:1 grain free digital quantization and be done with it.


These are some great things to consider. I appreciate all of your replies!