Managing Timelines with Mixed Tagged Color Spaces

Hi, I was hoping someone could help with a timeline color space conundrum. I have a timeline where the majority of clips are tagged ACEScg and a handful are Rec709. I’d like to color manage the Rec709 clips so that they look correct when the timeline is exported to ProRes through a Color Transform LUT (ACES_to_HD-video_1.0). I’ve tried a few view transforms, but the Rec709 clips don’t come out matching the native Rec709. Any ideas?


yo yo yo @scottj. I would attack this a different way. Don’t worry about setting it up on export; you need to set it up so it looks right on your timeline. That’s like 95% of it. Think of it this way: if you had a timeline full of Scene Linear Rec 709, you’d have View Transforms that, when a Scene Linear Rec 709 clip appears, make it look like Rec 709 in the player. And, upon export, it’d look wrong, as it’s actually still Scene Linear Rec 709. But if you put an explicit Color and/or Input Transform above each type of clip beneath it (one flavor for the ACES stuff, one flavor for the Rec stuff), with the Tagged Color Space set to Rec 709, then THAT bakes in the look you want to achieve on export.
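For intuition, here is a rough sketch of the kind of pixel math an explicit per-clip transform "bakes in" when taking scene-linear ACEScg to display-ready Rec.709 video. This is only the commonly published primaries matrix plus a simple display gamma, NOT Flame's actual Input Transform or the full ACES RRT/ODT tone mapping; the function name is made up for illustration.

```python
# Commonly published ACEScg (AP1, D60) -> linear Rec.709 (D65, Bradford-adapted) matrix.
AP1_TO_REC709 = [
    [ 1.70505, -0.62179, -0.08326],
    [-0.13026,  1.14080, -0.01055],
    [-0.02400, -0.12897,  1.15297],
]

def acescg_to_rec709_video(rgb, gamma=2.4):
    """Bake a scene-linear ACEScg pixel into gamma-encoded Rec.709 video.
    Simplified: matrix + clip + display gamma, no ACES tone mapping."""
    lin = [sum(AP1_TO_REC709[i][j] * rgb[j] for j in range(3)) for i in range(3)]
    lin = [min(max(c, 0.0), 1.0) for c in lin]   # display-referred: clip out-of-range
    return [c ** (1.0 / gamma) for c in lin]     # simple gamma, not full BT.1886

# The matrix rows each sum to ~1.0, so AP1 white maps to Rec.709 white:
print(acescg_to_rec709_video([1.0, 1.0, 1.0]))
```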

Does that make sense?


Hey Randy! Thanks for the response. Yes, it does make sense, though I must admit the many variables between color transforms and input transforms get a bit confusing. I was using the ACES 1.0 policy, and the ACES-tagged clips displayed correctly. I guess my problem was in not getting the correct color transform above the tagged Rec709 clips, then made more complicated by a Color Transform on export. Ugh.

Input transforms mean “actually change pixels.”

Viewing Transforms mean “if this is tagged this, then change the viewer to that.”

Colour Transforms are like Input Transforms but stackable and more transparent under the hood.

If you post your specific Color space options on the timeline clips themselves, someone will surely chime in with what you want to color transform them to make them look correct on export.

A view transform set to “from source” and “from rules” to “rec709” on export should convert everything correctly, though having fucked this up in the past, what I like to do now is make a layer for the scene-linear clips, then a gap effect that converts them all to rec709 using a view transform. Then I put the rec709 material above that. That way it’s ALL rec709 (and you can check by bypassing the viewer), but you don’t need to convert your comps to rec709.

…that said, I feel the logarithmic ACEScct is a superior timeline format to ACEScg–Lanczos resizes do some ugly ass things to scene-linear.


Thanks Andy. Would you set a new rule for ACEScg renders coming out of Nuke converting them to ACEScct?

Yeah. Just an input transform from ACEScg to ACEScct. It’s lossless (or, at least as lossless as anything needs to be–always hard to describe the bleeding edge of nigh-absolutism), so there’s not any danger in fucking things up.

And then your clips are in log, so if you need to resize them or grade them you won’t hate life. Haha.
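To see why the ACEScg-to-ACEScct move is effectively lossless, here is a sketch of the ACEScct encode/decode curves (constants per the Academy's S-2016-001 spec): it is an invertible float-to-float curve with a linear toe, not a quantizing bake, so round-tripping scene-linear values recovers them to within floating-point noise.

```python
import math

X_BRK = 0.0078125                  # linear-segment breakpoint (scene-linear)
Y_BRK = 0.155251141552511          # encoded value at the breakpoint
A, B = 10.5402377416545, 0.0729055341958355

def lin_to_acescct(x):
    """Scene-linear -> ACEScct. The linear toe handles zero and small values."""
    if x <= X_BRK:
        return A * x + B
    return (math.log2(x) + 9.72) / 17.52

def acescct_to_lin(y):
    """ACEScct -> scene-linear (inverse of the above)."""
    if y <= Y_BRK:
        return (y - B) / A
    return 2.0 ** (y * 17.52 - 9.72)

# Round-trip a few scene-linear values, from black through mid-grey to HDR:
for v in (0.0, 0.001, 0.18, 1.0, 8.0):
    assert abs(acescct_to_lin(lin_to_acescct(v)) - v) < 1e-9
```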


Here’s a softball for you @andy_dill. What does Lanczos do to scene linear images?


It FUCKS THEM UP @randy!

So, the reason we all like the Lanczos (pronounced “lunk zosh”) algorithm is that it adds a small amount of sharpening, as you can see in its kernel here: [screenshot: Lanczos kernel plot]

Those little dips below zero are what causes sharpening to happen. A sharpen filter looks at a pixel, brightens it, then darkens the ones around it relative to how much brighter it got.
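The kernel is easy to write down if you want to see those dips numerically. A minimal sketch of the Lanczos-windowed sinc; the key point is that some of the weights are below zero:

```python
import math

def lanczos(x, a=2):
    """Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

print(lanczos(0.0))  # 1.0 -- full weight at the sample itself
print(lanczos(1.5))  # ~ -0.064 -- a negative lobe: this is the sharpening
```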

You can see this working if you load up the ole FILTER node and load in the Sharpen setting. It brightens the center and pushes all the neighbors down by negative one.
[screenshot: FILTER node Sharpen matrix]

This all evens out when every pixel is looked at, but if a very bright pixel is near a less bright one, this can push values negative.
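A toy 1-D sharpen makes this concrete. The weights (-1, 3, -1) sum to 1, so flat regions pass through untouched, but a dim pixel sitting next to a very bright scene-linear value gets shoved far below zero:

```python
def sharpen_1d(signal):
    """Toy 1-D sharpen: center-heavy kernel with negative neighbor weights."""
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = 3 * signal[i] - signal[i - 1] - signal[i + 1]
    return out

flat = [0.1, 0.1, 0.1, 0.1, 0.1]
hdr  = [0.1, 0.1, 1000.0, 0.1, 0.1]   # e.g. the sun in a scene-linear frame

print(sharpen_1d(flat))  # flat regions are unchanged: weights sum to 1
print(sharpen_1d(hdr))   # the spike's neighbors are pushed far below zero
```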

It’s kind of like jumping in a pool: you get a bigger splash from the 20-foot platform.

This manifests in our images as weird black or otherwise off-color dots on the edges of detail that only appear AFTER a resize using a Lanczos or Shannon filter. To avoid the problem but still keep the benefits of the Lanczos filter, convert the image to log, then resize it.

Look at the edges of the greenhouse around the sunspot in the image below.

Log resizes correctly because all of its values are stored between 0 and 1, so values are never high enough to push their neighbors into the negative.
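You can reproduce both behaviors in a few lines. Below, a scene-linear HDR spike is Lanczos-resampled at a fractional position directly, then again through a log container. The log curve here is a made-up log2 shaper with a 10-stop floor plus the 0..1 clamp any integer log file format applies; it stands in for ACEScct but is NOT the real ACEScct curve:

```python
import math

def lanczos(x, a=2):
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def sample(signal, pos, a=2):
    """Lanczos-reconstruct the signal at a fractional position."""
    lo = math.floor(pos) - a + 1
    num = den = 0.0
    for i in range(lo, lo + 2 * a):
        w = lanczos(pos - i, a)
        v = signal[i] if 0 <= i < len(signal) else 0.0
        num += w * v
        den += w
    return num / den

FLOOR = 2.0 ** -10   # 10 stops below 1.0
def to_log(x):   return (math.log2(x + FLOOR) + 10.0) / 20.0
def from_log(y): return 2.0 ** (min(max(y, 0.0), 1.0) * 20.0 - 10.0) - FLOOR

spike = [0.0, 0.0, 0.0, 1000.0, 0.0, 0.0, 0.0]   # "the sun", scene-linear

linear_result = sample(spike, 4.5)                              # resample in linear
log_result = from_log(sample([to_log(v) for v in spike], 4.5))  # resample via log

print(linear_result)  # large negative overshoot: the black-dot artifact
print(log_result)     # clamped log container: stays non-negative
```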

This is the primary reason why I believe that all timelines should be logarithmic. Log is HDR, easier to grade, and doesn’t artifact when resized.


Thanks for this awesome explanation @andy_dill. The jumping-from-higher/splash analogy is great with the visual refs you have provided. My only question is about switching everything to log. It seems like a lot of these artifacts are pretty easily dealt with by lowering the Shannon crisp/soft default value from .083 to something like .05, or, if you would rather carpet bomb the timeline as opposed to going case by case, you can go to Preferences under the Tools / TL FX menu and switch from Shannon to Mitchell. Going from 4K or 2K to HD, the softness is almost imperceptible. If you go Log or ACEScct for everything, it just seems like a lot of hoops to jump through every time you want to go to comp in Batch: converting everything to linear and then back again at the end of your Batch flow graph. Curious to know your thoughts…


I don’t mind the conversions—it’s a node at the beginning and a node at the end. You can even make the read and write nodes do that, but I’m not a fan of secret colorspace stuff.

2021 is the year of Log. Just like every other year, unless you’re working on already-graded stuff. Log is queen. Alexa LogC, to be specific. Of course, this is just my very biased opinion. Can’t be that wrong.


Ever since Kodak released Cineon it’s been the year of log; it’s just taken 20+ years for folks to realize.


Typical Kodak: they invented it but weren’t around for its heyday.


OT but they’re having a bit of a resurgence. People are shooting film and the sheer volume of patents they hold for imaging is staggering.

That being said, I would maintain that developing a log-based digital image container where the 10 bits available were exactly the 10 bits required to print calibrated at standard Kodak aims was fucking brilliant. Adding motion-picture data to the header for DPX was just the icing on the cake.


@cnoellert, yes yes yes. As a side note, and to date myself: Cineon was the first compositing software I learned. What a beautiful beast it was. Happy New Year, everyone.