I know there are great videos from Andy Dill and John Ashby. But would someone consider putting it into simple terms, please?
Input transform, view transform, color transform, which one to use when…
Scene-linear for comp, logarithmic for color… So for keying, I’m guessing scene-linear?
To convert AlexaWideGamut to ACEScg, Andy used “Input Transform” in the Color Management node. Why not Color Transform or View Transform?
I can go with the way Andy or John works, but I just need some fundamental understanding of the underlying logic.
This is a big question, but let’s try to make it simple.
Both input transform and view transform are color transforms. Color transforms are operations that convert your pixels from one color space to another. The best analogy would be transferring a bottle of liquid into a glass and back.
The difference between an input transform and a view transform is that a view transform usually includes some kind of “look” to make the result pleasing to the eye. The bad news is that this look cannot be reversed. An input transform is a plain math operation and can be reversed at any time (or converted to another color space).
So an input transform should be used to convert your sources to a suitable colorspace (lin for comp, log for grading, etc.). A view transform should be used for previewing and, in some cases, for delivery.
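To make the reversibility point concrete, here’s a minimal Python sketch of what a gamut-only input transform boils down to: a 3x3 matrix per pixel, undone exactly by the inverse matrix. The matrix values below are made-up placeholders, not the real AlexaWideGamut-to-ACEScg coefficients.

```python
import numpy as np

# Illustrative 3x3 gamut matrix (placeholder values, NOT the real
# AlexaWideGamut -> ACEScg coefficients).
M = np.array([
    [0.98, 0.01, 0.01],
    [0.02, 0.96, 0.02],
    [0.01, 0.03, 0.96],
])

def input_transform(rgb):
    """Gamut A -> gamut B on linear RGB: a plain matrix multiply."""
    return rgb @ M.T

def inverse_input_transform(rgb):
    """Undo the conversion exactly with the inverse matrix."""
    return rgb @ np.linalg.inv(M).T

pixel = np.array([0.18, 0.25, 4.0])            # scene-linear, values can exceed 1.0
round_trip = inverse_input_transform(input_transform(pixel))
print(np.allclose(pixel, round_trip))          # True: nothing was lost
```

A view transform, by contrast, usually ends in a tone curve that clips or flattens values, which is why you can’t get them back afterwards.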
Think of it in terms of “working space” - which you want to be scene linear in almost all circumstances when compositing (there are exceptions that break this rule for certain tasks, but then you want to go right back to scene linear afterwards) - and “viewing space”, i.e. what the final image should look like. This could be a chain of LUTs that take it all into Rec709. You are still working in scene linear, but you are looking at a Rec709 image on the monitor.
You never want to work in straight Rec709; there are people here who do, but they are wrong to do so. That’s some legacy Flame mentality that needs to die and go the way of the Modular Keyer.
Switch to log for any kind of sharpening operation and for resizing (this helps avoid those black edges caused by NaNs / negative values), then switch back to scene linear. When doing this, use a 1D transfer (lin2log) - this way you aren’t switching gamuts.
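To illustrate why a 1D lin2log transfer doesn’t touch the gamut, here’s a small Python sketch using the ARRI LogC (v3, EI 800) curve. The constants are quoted from memory, so double-check them against ARRI’s docs before relying on them.

```python
import numpy as np

# ARRI LogC v3 (EI 800) constants, quoted from memory; verify against
# ARRI's documentation before relying on them.
CUT, A, B, C, D, E, F = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809

def lin_to_logc(x):
    """1D transfer: scene-linear -> LogC. Per-channel, so the gamut is untouched."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x > CUT, C * np.log10(A * x + B) + D, E * x + F)

def logc_to_lin(t):
    """Exact inverse: LogC -> scene-linear."""
    t = np.asarray(t, dtype=np.float64)
    return np.where(t > E * CUT + F, (10 ** ((t - D) / C) - B) / A, (t - F) / E)

lin = np.array([0.0, 0.18, 1.0, 8.0])   # scene-linear, including values above 1.0
print(np.allclose(lin, logc_to_lin(lin_to_logc(lin))))  # True: round trip is lossless
```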
For comping graphics, @andy_dill showed a cool little tip using log space.
Log is sometimes also good for capturing fine hair detail.
For any tracking operation in Flame it’s best to use a 0-1 space (including the Planar Tracker and Motion Vectors). Just keep a Color Management node floating around that does a tonemap operation, flip it on for tracking, then turn it off or break out the connection. (I think there is a LUT in the Color Management node specifically called ToneMap. You could also use an ACEScg > Rec709 LUT; this is also tonemapping but does a gamut switch, which may or may not be desired. The ToneMap keeps you in the same gamut and is reversible, should you need to do a temporary fix and revert afterwards.) It’s a minor annoyance, but it’s not a deal breaker.
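I don’t know what curve Flame’s ToneMap LUT actually applies, but the idea of a reversible, gamut-preserving squash into 0-1 can be sketched with something like the classic Reinhard operator (purely illustrative, not Flame’s math):

```python
import numpy as np

def tonemap(x):
    """Reinhard-style tonemap: squashes scene-linear [0, inf) into [0, 1),
    per channel, so the gamut is untouched."""
    x = np.asarray(x, dtype=np.float64)
    return x / (1.0 + x)

def inverse_tonemap(y):
    """Exact inverse, so you can revert after a temporary fix."""
    y = np.asarray(y, dtype=np.float64)
    return y / (1.0 - y)

hdr = np.array([0.05, 0.18, 4.0, 60.0])       # scene-linear values, some way over 1.0
sdr = tonemap(hdr)                            # everything now sits in 0-1 for the tracker
print(np.allclose(hdr, inverse_tonemap(sdr))) # True: fully reversible
```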
My feelings: ignore every option in the Color Management node other than View Transform and Input Transform (and maybe “tag only” for fixing tags).
View Transforms are like tinted sunglasses. They exist to display your footage correctly without changing pixel values. They should be loaded in the Preferences > Color Management > Viewing Rules tab and not used in your comp.
The only time you should apply them is on the timeline for rec709 postings and deliveries. The one in-comp exception would be the inverted view transform to get your video material into a bigger colorspace in a WYSIWYG manner.
Input Transforms convert color from one space to another. They are your in-comp go-tos. A LogC plate comes in, you convert to ACEScg to comp some stuff, then back to LogC to do some color (or a graphics comp). Input Transforms are great because they are lossless (though I would avoid video color spaces), so you can use as many as you like in a comp and still get back to the original plate’s color.
I detailed this more in the post above, but I try very hard to only use input transforms inside comps, and to use only view transforms in my viewing rules. The rest of it I avoid because it’s all less modern and more complicated.
I’m inclined to say log may work a little better due to its 0-1 nature, but it also may work a little worse due to its low contrast. Sometimes I’ll apply a log-to-video transform or just CC the footage to be more contrasty. I’ve yet to find a hard and fast rule.
Input Transforms are the simplest. From A, To B. They’re also correct, modern, and source specific, but my favorite thing about them is how uncomplicated they are. Color space is hard enough. Haha.
One cool thing about input and view transforms: all the steps they’re taking are listed over in the right panel, so if you want to build up the same transform using the Color Transform > Custom stack, you can. This can be useful if you have a show LUT that works on LogC footage but your working space is ACEScg: you can build up an “ACEScg to LogC” stack, then import the show LUT, then export that whole stack out as a single transform to load into the Viewing Rules.
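Conceptually, stacking transforms like this is just function composition; here’s a hypothetical Python sketch (the function names are placeholders, not Flame APIs):

```python
# Hypothetical sketch of stacking transforms -- plain Python functions,
# not Flame APIs.
def compose(*transforms):
    """Chain transforms left to right into a single callable 'stack'."""
    def stack(pixel):
        for t in transforms:
            pixel = t(pixel)
        return pixel
    return stack

# Placeholders for the real operations (identity here, just so it runs).
acescg_to_logc = lambda p: p   # would be: the ACEScg -> LogC conversion
show_lut       = lambda p: p   # would be: the show LUT, which expects LogC input

# One exported transform that behaves like the whole stack.
viewing_transform = compose(acescg_to_logc, show_lut)
print(viewing_transform([0.18, 0.18, 0.18]))
```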
I just watched Andy’s video again and things clicked into place. The trick he uses to comp rec709 material is to take the ACEScg-to-rec709 view transform and INVERT it. That’s because we do not have a rec709 -> ACEScg view transform in the pop-up menu. And this works perfectly for my workflow involving Phantom Cine files, which, regardless of my transfer mode, somehow turn up as rec709 for me in Flame.
Do you have any suggestions about Phantom Cine file workflows in Flame? Currently we transfer Cine files using Seance on a Mac, then usually do a transcode render in Resolve to ProRes 4444 to use in Flame. But this crushes the blacks. I am trying the MediaReactor plugin from DrasticTech at the moment, and this gives back the detail in the blacks. But which colorspace should I tag the material as? Rec709 “seems” to provide a good result, but does it keep all the latitude?
The last time I worked with Phantom was several years ago… but I’d try the following:
1. Resolve should open Phantom CINE files.
2. It can debayer them to three curves: rec709, log1 or log2.
3. I’d try to convert them to something Flame can interpret correctly colourspace-wise (AlexaLogC, or ACEScc with RCM or ACES colour science) and output to ProRes 4444.
4. Tag those ProRes files accordingly in Flame.
Sorry, I couldn’t find any Phantom CINE samples with Google. If you can provide me some frames, I can look at them more closely.
A Colour Policy provides a way to save colour management settings so they can easily be used to configure new projects. A Colour Policy contains the following:
Input Rules: A way of automatically tagging sources with a colour space based on the file name (see the sketch after this list).
Viewing Rules: A way to have viewports automatically apply the correct viewing transform for a given colour space and display.
Project Working Space: The default Working Colour Space for the project.
Action Colour Space: The default Colour Space for Action output. This also defines the colour space used to convert Substance generated textures, as well as textures brought in via Action’s Import node.
User Colour Spaces: User-defined Colour Spaces, Viewing Transforms and Displays, created by you or your facility that are made available to the project.
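For intuition, Input Rules behave roughly like pattern-to-tag matching; a hypothetical Python sketch (the patterns and tag names are examples, not Flame’s actual rule syntax):

```python
import fnmatch

# Hypothetical sketch of Input Rules: filename patterns mapped to colour
# space tags. These patterns and names are examples, not Flame syntax.
INPUT_RULES = [
    ("*_logc*.exr",   "ARRI LogC / AlexaWideGamut"),
    ("*_acescg*.exr", "ACEScg"),
    ("*.mov",         "Rec.709 video"),
]

def tag_source(filename, default="Unknown / tag manually"):
    """Return the colour space tag for the first matching pattern."""
    for pattern, colour_space in INPUT_RULES:
        if fnmatch.fnmatch(filename.lower(), pattern):
            return colour_space
    return default

print(tag_source("shot010_logc.0001.exr"))  # ARRI LogC / AlexaWideGamut
```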
I’m trying to think of examples: if you change the scale of an image and there is interpolation, might dark colours be introduced between colours if it’s done in rec709, whereas if it’s done in linear the blurs would be clean?
Just a guess: I think the resize node may do two things: a resize and also a filter/sharpen operation. Ideally the resize would be done in linear, and the sharpening done in video/log.
Nope, sharpening/filtering is part of the resize algorithm; it can’t be separated.
I personally think that the Flame developers made some clever hacks in the early years (before float and linear found their way into Flame) that give the best possible quality in gamma-corrected space. And these hacks didn’t work as expected in scene linear.
I mean, Flame still shows its “works best in video” roots, but this specific thing is intrinsic.
It’s due to the sharpening kernel in Lanczos and other algorithms having a negative dip (which is what creates the sharpening). If you get a high enough spike (50+) it’ll push those dips on the neighboring pixels down aggressively enough to cause artifacts.
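You can see those negative lobes directly; here’s a quick Python sketch convolving a bright scene-linear spike with a toy 1D Lanczos-2 kernel (my own illustration, not Flame’s resizer):

```python
import numpy as np

def lanczos(x, a=2):
    """1D Lanczos-a kernel: sinc windowed by sinc; its side lobes go negative."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

# Sample the kernel at half-pixel offsets, as a resize might.
taps = lanczos(np.arange(-2.0, 2.5, 0.5))
taps /= taps.sum()                      # normalize so flat areas stay flat

signal = np.zeros(21)
signal[10] = 50.0                       # a scene-linear spike, e.g. a specular highlight
out = np.convolve(signal, taps, mode="same")
print(out.min())                        # < 0: the negative lobes undershoot, which is
                                        # what shows up as dark/black edge artifacts
```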