Timeline in 10bit or 12bit..?

That's a classic one, that's such a weird design decision.

You can also scroll horizontally in the left panel to see each timeline's metadata.

Maybe there should be a feature request to remove the 8/10-bit options entirely? Would there be any actual use case for these two? I can't think of anything…

I mean, even better, Flame would just be 32-bit float entirely, like Resolve…

It could have been a decision made about speed & efficiency in the past, which would have definitely made sense when the majority of people were working on graded material. It could still make sense now in that workflow to a fair degree, much like being happy with DWAA/DWAB compression (in linear workflows, of course). There are compromises, but they are liveable: the end result doesn't suffer enough to outweigh the benefit of the compromise, in this case being faster to render.

When working on ungraded camera original though, not so much.

At the very least, eliminating 8-bit feels like a good idea. The number of times I've picked up someone else's project and discovered all the timelines are 8-bit because they dropped the EDL on the offline instead of the other way around is truly disturbing.

2 Likes

We don’t have to eliminate it. But at least it shouldn’t be the default out of the box anymore, which is where it goes wrong easily. And maybe even add a warning - ‘are you sure you only want 8 bits, you know those extra bits aren’t that expensive anymore??’

3 Likes

Just adding a colored stripe under the timebar or something as a reminder. Or, yes, we always get those triangles warning us about 32-bit stuff; maybe do that for 8 and 10 too…

I guess the reason is “faster processing”

Wouldn't this also impact Flame benchmark performance then?

Super odd. I need to understand where it matters. So you can have a 16-bit timeline but the project set to 8-bit, what happens? Or is this just for Batch?

There doesn't seem to be any set bit depth in published batches either, looking at the batch file itself… Hmm, so it's like a project-based override that gets set on importing/starting the batch?

Some answers for you @finnjaeger

I tried to be more systematic to understand where it plays a role, and the more I dig the weirder it gets… Note this was a single experiment. There may be more to it, or I may have missed details.

Project Bit Depth

I created five identical projects with an Arri test clip: one each for 8, 10, 12, 16fp, and 32fp. In each I created a sequence and a batch group with that clip.

As expected, the project bit depth drives the default for the timeline, and it drives the default for any node you add to Batch if the node has different bit-depth versions.

The timeline you can change after the fact. Batch seems to be fixed. If you get a node wrong, you have to change the project default and then replace the node. Painful.

I added the media, a ColorMgmt node, a Blur node, an Action, and an Image node, and then did a significant exposure push while taking a look at the scopes.

In the 8 bit project:

Media: 10bit (per footage)
ColorMgmt: 8bit
Blur: no info
Action: 8bit
Image: 8bit

In the 12bit project:

Media: 10bit (per footage)
ColorMgmt: 16fp
Blur: no info
Action: 16fp
Image: 16fp

It appears several nodes don’t have a 10 or 12bit mode, and jump to 16fp immediately.

In fact, it seems there are only three bit depths Batch operates in: 8-bit, 16fp, and 32fp. If your project is set to 10-bit or 12-bit, Batch just jumps up to 16fp. 8-bit seems to be some sort of legacy mode; one more reason to semi-deprecate it and only show it with a warning.

Processing Depth

I used the exposure push in Image and the scopes to look for banding, as an indicator of the bit depth at which textures are being processed in each project (Batch & timeline).

To my surprise, when I rebuilt the projects I had skipped the Blur node, and it all worked. Even at 8-bit I was able to push the image and didn't get any banding. Then I added the Blur node, and bam, shit happens.

Even though Action and Image indicate in Batch that they're operating at 8-bit, internally their textures must be set to a higher bit depth (16fp or 32fp, as we know from other apps?).

I then confirmed in the 12-bit project that the Blur node can operate correctly at 12-bit by switching to that project, and I was able to do the exposure shift without banding. So it's not that the Blur node is incompatible with higher bit depths. But if your project default is 8-bit, you may get problems. And since the Blur node doesn't even have a bit depth moniker in the node tree, it's actually not possible to see on inspection, making it all the more important to never create 8-bit projects unless you really need to.
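For anyone who wants to poke at the same idea outside Flame, here is a minimal sketch of the banding test in plain Python/NumPy. It is not Flame code, and the gain value is just an assumption standing in for the "significant exposure push":

```python
import numpy as np

def push_after_quantize(bits, gain=8.0):
    """Quantize a dark gradient at a given bit depth, push exposure, count surviving steps."""
    ramp = np.linspace(0.0, 0.125, 4096)            # dark gradient, like underexposed footage
    if bits == "half":
        stored = ramp.astype(np.float16).astype(np.float64)
    else:
        levels = 2**bits - 1
        stored = np.round(ramp * levels) / levels   # integer quantization
    pushed = np.clip(stored * gain, 0.0, 1.0)       # the exposure push
    return len(np.unique(pushed))                   # distinct code values left after the push

for depth in (8, 10, 12, "half"):
    print(depth, push_after_quantize(depth))
# 8-bit leaves only ~33 distinct steps across the pushed range (visible as banding on scopes),
# while 10/12-bit and half float keep hundreds to thousands, so the ramp stays smooth.
```

That's essentially what the scopes reveal in Flame: if any node in the chain quantizes to 8-bit before the push, the steps are baked in and no later float processing can bring them back.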

Timeline bit depth

Intrigued by my Blur/Image surprise in Batch, I tried to recreate my problem from a year ago. In the 8-bit project, on the timeline, I used Image to see if I would get banding. And to my surprise, I didn't. I could do whatever I wanted in Image; it seems to be operating at 16fp as a minimum. I tried the diamond keyer and selectives, and it all still worked.

But then I was able to recreate it: adding a ColorMgmt TL-FX in front of Image for my input transform killed it and I saw banding, which is what I had last year.

So doing the input transform in the media's 'color mgmt' works fine, but adding 'color mgmt' as a TL-FX breaks it. What?

Turns out inside the ColorMgmt transform there is a bit depth selector. If you set that to 8-bit it breaks; if you set it to 10, 12, or 16fp you're good.

Except, another trap door: in Batch and in the 'Format Options' version of the color management node you can make that change. If, however, you add the ColorMgmt TL-FX, the bit depth defaults to the project bit depth and is grayed out for inexplicable reasons. WTF?

Conclusion

In summary: you're crossing a strip of land mines laid out in a semi-random pattern. Some have little flags; others are invisible unless you have a smoke test, like the scopes, that flags them (not always trivial).

Your best bet is to never, ever use the 8-bit project bit depth. Not worth it. Work at 10-bit at a minimum, but you're better off just going 16fp, as most nodes seem to only do 8-bit, 16fp or 32fp anyway. So working on a 10-bit or 12-bit timeline may be a bit of an illusion in terms of performance gain; you may actually get 16fp processing either way.

And now you wonder why juniors need extra time to learn Flame vs. some other apps?

2 Likes

This is super gold, thanks for checking this out.

Well said that it's a minefield. I will stick to 16-bit float; that seems to be where it's at.

However, if you work with log sources and all your processing is 16-bit float… that's not great either; you need 32-bit float for that.

ugh.

Some more fun:

Just opened my 8-bit project to figure something out. Noticed that everything was 16fp in Batch, but that was because I had changed the dropdown inside ColorMgmt to 16fp. That impacts anything downstream.

Made me think of how I always deal with the warning about NeatVideo only outputting 32fp: I always add a Resize node. That's another node that's good for translating bit depth.

So I added the Resize node before ColorMgmt. But it didn't influence the downstream nodes, because ColorMgmt has its own bit depth selector (which by now I had returned to the 8-bit default).

Moral of the story: you can change the bit depth processing by using nodes that allow bit depth selection. Just be aware that if they're daisy-chained you need to check every single one of them.

The conclusion stands: just work in 16fp and save yourself the headache. Or 32fp for that matter, based on Finn's note.

1 Like

Batch only processes at 8int or 16float. All sources over 8 automatically process at half.

3 Likes

The conclusion is correct, but your supposition is slightly off. The main reasons for half timelines are the timelineFX image pipeline and, obviously, not quantizing the result of whatever is in the stack down to a lesser bit depth for export. There is a performance gain to be had in 10- or 12-int timelines over half, but it's not in processing, it's in storage: certain sw config setups can yield significantly smaller files, which helps with playback at larger resolutions. Although with the advent of 2025 and the compression choices for half, those gains are much smaller.

Ultimately, if your project setup is 10 or higher, everything will get pushed to float for processing, and the end results will be quantized to whatever bit depth the render node is set to, provided you don't explicitly bang it to random bit depths along the way. The same follows for the timelineFX pipeline, with the addition of a resize at the end of the chain to force quantization into the timeline bit depth and size.

4 Likes

Batch can also internally process at 32-bit precision for certain nodes like Action.

1 Like

Yes, but the number of nodes that can handle 32 is limited, and it will likely break a lot of things. Just add Neat and forget to shift it down to 16…

:boom:

1 Like

It breaks many things.
I'm only interested when the multi-channel, openclip workflow becomes a real thing.

2 Likes

@cnoellert Thanks for the clarification/corrections. These were empirical data points and interpretations. But always good to add the depth of this forum to refine the conversation and understanding.

2 Likes

Thanks for the detailed reply; you surely put some time into it, so I truly appreciate the insight… Quite a bit to unpack though…

May I try to go over some points? From my perspective, the gamma function (linear/non-linear) has nothing to do with the colour primaries (RGB/Rec.709/ACES…) or with how the data is encoded, which is what probably needs to be defined a bit better.

The encoding of the pixel may use integers, as we have been doing for ages when dealing with DPX/Cineon files, Rec.709 video and sRGB pictures. Therefore, the more bits used to encode the image the better, at the cost of enormous files: first 8-bit, then 10, then 12-bit, until the point where more than 12-bit log is not really adding much due to scanner tech, display systems, blah blah blah.

With the development of ACES a new set of primaries has been defined, creating a "bigger canvas", and the encoding in the ILM implementation of EXR is 16-bit half float, meaning every pixel is encoded using a part of the bits for integers and another part for decimals (and a sign), allowing, thanks to some clever maths, a much wider gamut than Rec.709, etc…

So, it is not about linear or non-linear, but about how the pixels are encoded, and this is where I suspect selecting Legacy + 16-bit hf does NOT add any more quality to the whole process.

I also suspect Flame's internal mechanisms for dealing with both advanced colour management and these new formats have forced them to alter how it operates internally and, with that, to convert the material to 16-bit hf so operations happen in a consistent, predictable manner.

Nuke, when using ACES, follows a very simple and elegant approach: it converts everything, using the supplied ACES profiles, into a unified linear workflow with ACEScg (in my case) as the colourspace of all operations.

So, when you do the colour management in Batch, you are effectively doing by hand the interpretation Nuke does (if you select the same settings, that is).

What is important to understand is that the maths behind it all assumes a linear gamma; therefore, linearising the material is highly recommended, even if you have managed to work in log and Rec.709 for years and produce amazing results. (respect)
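As an aside, the "normalise everything into a linear working space" idea can be sketched with PyOpenColorIO, the library most OCIO-based apps build on. This is only an illustration, not Flame's or Nuke's internal code, and the color space names are assumptions that depend entirely on the config you load:

```python
import PyOpenColorIO as ocio

# Use whatever config $OCIO points at (an ACES config is assumed here).
config = ocio.GetCurrentConfig()

# Hypothetical IDT: interpret camera log material into the linear working space.
# Both names depend on the config; ACEScg may appear as "ACES - ACEScg", etc.
processor = config.getProcessor("ARRI LogC3 (EI800)", "ACEScg")
cpu = processor.getDefaultCPUProcessor()

pixel = [0.18, 0.18, 0.18]
print(cpu.applyRGB(pixel))   # the same pixel, now in linear ACEScg, ready for predictable comp math
```

Doing the ColorMgmt/input transform by hand in Batch is conceptually the same operation; the difference is only whether the app applies it for you automatically.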

But anyway, I would love to get more familiar with the Flame mechanisms, but our setup is simple:

Always ACES, we normalise all the material on ingest (when caching) and we always render ACEScg EXRs.

When delivering, then we do the conversion to rec709 and what not.

I hope this makes sense…

Create projects with a minimum of 16-bit.
This has been normal for a minute or two.
Ignore import values for AAF/XML (import preferences).
Attempt to maintain a consistent color management pipeline for timelines.
Promote to 32-bit when and if multi-track/multi-channel openclip is real.
Export color managed pipelines using a consistent transform.

1 Like

(delayed response as it overlapped with Logik Live)

It's true that ACES defined new color primaries, and thus the color gamut covered is bigger, so more precision is needed to differentiate to the same degree. But that isn't the crux of the int vs. float debate.

There is a perception that ACES = linear. That is mostly true in a VFX context, because the ACEScg color space uses the ACES AP1 primaries with a linear gamma. That is not to be confused with the ACEScc and ACEScct color spaces, which use a log curve. So ACES is not synonymous with linear.

That said, I did overlook one aspect. The ACEScc and ACEScct color spaces exist to facilitate color grading operations and to keep them similar to what artists are used to. However, they are rarely stored in files, so they aren't really relevant to encoding for storage. These are primarily working color spaces.

The discussion of int vs. float for storage is not related to color primaries, but to the precision allocated to the values in each channel. How those values are distributed, and where the optimal pairing of precision with the most meaningful data lies, depends on the gamma curve used. There's a good discussion of this by Charles Poynton available on fxphd, and in many forum threads.

Log curves compress certain parts of the luminance range to fit it into existing value ranges. That means additional precision in the flat parts of the curve has considerable impact.

The 16fp encoding is not like an ASCII string with digits and a decimal point, but an exponent/fraction encoding which allocates the available precision unequally, in a way that benefits the range between 0 and 1. The issue with 16fp is that the fraction only makes up 10 of those 16 bits, and so it can have less precision than 12-bit integer in some circumstances; a limitation that goes away with 32fp.
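A quick way to see that limitation concretely, counting representable code values with plain NumPy (nothing Flame-specific):

```python
import numpy as np

# Count the distinct code values available between 0.5 and 1.0 (the brightest stop of a 0-1 signal).
probe = np.linspace(0.5, 1.0, 2_000_001)                 # dense sampling of the interval

half_codes  = np.unique(probe.astype(np.float16))        # distinct half-float values in that range
int12_codes = np.unique(np.round(probe * 4095) / 4095)   # distinct 12-bit integer values

print(len(half_codes), len(int12_codes))                 # ~1025 vs ~2048
```

In the top stop, half float only offers 1024 steps (the 10-bit fraction within a single exponent), while a 12-bit integer encoding offers roughly twice as many. Below about 0.25 the situation reverses and half float becomes far finer, which is why it pairs so well with linear-light data where most scene information sits low in the range.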

Regarding Batch and bit depth: as I wrote in one of the posts, and you reiterated, it is quite possible to emulate Nuke's behavior by using ACEScg as the working color space, forcing a color management IDT for all media, and then exporting to ACEScg/EXR or some other format with the appropriate ODT applied.

However, that is not the default behavior, and the implications of bit depth for certain operations have to be understood by the artist. Flame's default behavior by no means implies that all operations will be linear, as they are in many other apps. From some of the observations in the later part of this thread, you can see that Flame is a mixture of legacy, leading edge, and everything in between. Plenty of rope to trip over.

With that, I’m going back to what @finnjaeger has hammered home for years now :slight_smile:

Non-Linear gamma: 12bit or 32fp
Linear gamma: 16fp, consider 32fp

If you linearize everything to ACEScg, 16fp is adequate.

Thank you @allklier once again for the consolidation of thoughts, now your previous post makes a lot more sense.

I am particularly interested in forcing a Nuke-style workflow in which material is always normalised, always linearized, and we can comp in a predictable way. Ideally something I can automate, so that when Flame starts, all the correct defaults for this workflow style are put in place.

QUESTION
What would be your preferred approach?

  • Do everything on ingest/cache (meaning, unification of material)
  • Do everything inside batch (error prone?)
  • Any other way?

Thanks a lot.

It’s both a philosophical and practical question.

The way I would frame it is this: do you/should you

A: Drive the family minivan (aka Resolve). Suitable for millions (literally), totally adequate for getting around and taking kids to work. You don't have to be poor; you could also be frugal, or just not really care. Bill Gates is known to drive cheap cars and even minivans.

B: Drive the Mercedes sports model or Tesla if you’d like (aka Nuke). Zippy, plenty of features. Does require some more driving skills so you don’t get yourself into trouble. Good for the 50-99% of the population.

C: Drive the Ferrari (aka Flame). Hand crafted. Generally need to know how to drive with a clutch and manual transmission (though there are automatics, a bit weird). Doesn’t fit into every garage. For the advanced drivers (or the rich people).

Puns aside, the comparison has a point. There are very few Flame artists in total, and they generally have significant tenure and experience. There are many more Nuke artists of all natures, and there are literally millions of folks using Resolve.

You can do something meaningful in Resolve after watching 1-5 YT videos. It won't be grand, but it serves its purpose. There are plenty of advanced features in Resolve, and very senior artists use it every day for top-shelf work. But they do so in well-defined pipelines and in repeatable, predictable environments and products. They're grading long-form films rather than sending folks to the moon. Way fewer curve balls in that game than what might land on a Flame artist's plate.

Nuke is much more advanced and does require more training. But it also simplifies critical aspects. Channel routing is more transparent. Nuke Studio aside, it's shot work, and the 3D environment is more tightly integrated with the 2D environment. Colorspace management has been simplified. If you need a few hundred Nuke artists, they're easy to find and can do fantastic work at scale.

Flame is the one tool you want to be on if Adam Savage comes in the door after coffee. You don't know at all what the day will bring, or what requests will come up. There is a way of doing it. It may not be the most elegant, it may cost you a few hairs, but at the end of the day there will be smiles. That flexibility comes at a cost: you need the experience to cross a minefield safely, or your time will not be extended. And as a result there are few of us, and Flame is not an early-career tool. Because you do need to understand concepts like color spaces, bit depth, encoding, and more to survive.

That was a long pre-amble to answer your question.

The best way of looking at why Flame is the way it is, as best as I can tell:

First of all, today's Flame is the accumulation of several powerful tools that all have their historical quirks. And Flame has done a somewhat adequate job at harmonizing them, but less so than anyone else. I doubt Nuke always linearized its workspace in the early days (I wasn't a user back then, speculating here). But at some point they made wholesale changes and said 'this is the new way'. Flame tends to keep all the antiques around and hands you a few wrenches and dust towels to make do. The unwillingness to let go of the past is a nod to its senior artists, but also its Achilles heel. As evidenced again in Sunday's discussion on how many folks are still using the antiquated color warper nodes, among other examples. And the frowns when centralized configs came along.

It's that same belief that I can find as the only reason for not harmonizing footage: the unwillingness to apply any unnecessary step to the footage, the fear that an extra colorspace transform, which does move bit values around at the risk of small degradations of the image, could be harmful to the end result. Thus we only tag color spaces and do one final transform at the end (unless forced, and we understand the implications).

[separate tangent] This reluctance to translate pixels for color is at interesting odds with every Flame artist's propensity to stabilize/unstabilize for various tasks, which, due to its need for filtering, is vastly more destructive to pixels than any unified color pipeline would be. [/end tangent] (see @ytf's point on that below)

Having laid the ground work for actually answering your question (if you’re still reading):

I believe simplifying workflows and reducing the land mine count is good for any artist, junior or very senior. Everyone makes mistakes, everyone needs to move fast. Unless there are substantial reasons or severe penalties, Nuke's approach is preferable to Flame's. And Resolve's approach (choosing a color philosophy during project setup, beyond just OCIO configs) may be the right middle ground if you need flexibility.

In the past our hardware limitations may have warranted skipping unnecessary color space transforms, but that's no longer the case. A color space transform is at most a 3x3 matrix for the primaries and some basic log/power functions for the gamma. Not a heavy lift on today's systems.
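To make that concrete, here is a minimal sketch of what such a transform amounts to. The matrix is an approximate linear Rec.709-to-ACEScg conversion quoted from memory, and the 2.4 power curve is a stand-in transfer function, so treat the numbers as illustrative rather than authoritative:

```python
import numpy as np

# Approximate linear Rec.709 -> ACEScg (AP1) primaries matrix (illustrative coefficients).
REC709_TO_ACESCG = np.array([
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9164, 0.0134],
    [0.0206, 0.1096, 0.8698],
])

def gamma24_to_linear(encoded):
    """Stand-in transfer function: a plain 2.4 power curve (real EOTFs are slightly more involved)."""
    return np.power(encoded, 2.4)

def to_acescg(encoded_rec709):
    """The whole 'color space transform': one 1D curve per channel plus one 3x3 matrix per pixel."""
    linear = gamma24_to_linear(encoded_rec709)
    return linear @ REC709_TO_ACESCG.T

frame = np.random.rand(1080, 1920, 3).astype(np.float32)   # one HD frame of random pixels
converted = to_acescg(frame)                                # trivial work for a modern CPU/GPU
```

Give or take a more carefully implemented curve, that is all the work the old caution was avoiding.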

As outlined, you can do all of that in Flame, but it requires setting up a more rigid pipeline (Phil's Logik Projekt might be a vehicle) and then adhering to it. The antithesis of Flame work, though it also exists in different shapes in many shops.

The tool won't necessarily constrain you, so it comes down to significant discipline, which may get human pushback or get bent in the heat of battle.

But yes, I think working in an ACEScg workflow, not necessarily through caching, but just in real time by setting up your media nodes with the right color management, is the way to go. Then pair it with the appropriate storage encoding (to settle the int vs. float discussion), and survey the nodes you're using and discard any that are not suitable for a linear workflow or that don't support 32-bit processing (if you choose that). I suggest 16fp is adequate for most work, but 32fp is not unreasonable for premium work.

And I find that Flame's secret weapon is that it's the only top-shelf tool that allows you to work in a timeline and do shot work in an integrated workflow. I almost always set up my timeline and then escape to batch groups where called for. It's a brilliant set of tools that has no limits.

It's superior to Nuke Studio by a mile, and because of its open clip, connected conform, and other advanced workflows it remains superior to Resolve, which does come a bit closer than Nuke to this.

And there may be other helpful color spaces to consider. Baselight has some interesting T-Log spaces, and has made a new color space as part of its look dev tools that has a lot of promise for the modern visual craft.

All that said, we should encourage our colleagues to let go of some legacy, and we should continue engaging with the devs to evolve Flame in what are yet better ways to work - for processing, storage, and color management.

2 Likes

I find that a hollow argument; every Flame artist who does that should only be re-comping the changed pixels, thus they're only being filtered once, the same as if there were no stabilizing, only tracking. And if the stab/unstab is done on x/y position only, it can be done as integers with no filtering at all.

1 Like