We create our timelines in Flame at 10-bit. The entire clip is processed from start to finish at 12-bit in grading, comp, etc. (ACES excluded), but then it gets cut down to 10-bit in the timeline at the end. So now we are considering whether we should also create the timelines at 12-bit.
How do you do that?
When I import my AAF/XML, for conform, I change the bit-depth via MediaHub. This will make my timeline 12-bit but you could also just reformat the timeline changing the bit-depth.
Thanks for the answer, but I know how to do it; maybe I described it inaccurately. We are considering whether we should create our timelines at 12-bit instead of 10-bit, and I am interested in how others deal with this.
I usually work with 16-bit timelines and convert them to 12-bit when mastering, or keep them at 12, depending on the project and input. But never 10 anymore.
It depends a bit on the nature of your grade, but in general terms it's a good idea to use a higher bit depth on your timeline than what you deliver.
When you do multiple color (or other) operations, there is a chance that in-between values will be computed. If your bit depth is the same as the final render, these temporary in-between values can be truncated, and in some cases degrade your result.
If you deliver 8-bit web files, work at least at 10-bit. If you deliver 10-bit ProRes, work at 12-bit. And working in 16fp is always a good choice regardless. With today's GPUs, unless you do particularly heavy processing, 16fp should be well within the parameters of your hardware and give you a lot more flexibility for retaining intermediate image content in your color pipeline.
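A rough way to see that truncation outside of Flame (just a numpy sketch with made-up gain values, nothing to do with Flame's internals): darken a gradient, store it at the working depth, then brighten it back. If the working depth is the same as the delivery depth, the in-between values are already gone.

```python
# Hypothetical sketch: quantizing intermediates at delivery depth vs. keeping
# float intermediates. Operations and values are arbitrary, for illustration only.
import numpy as np

ramp = np.linspace(0.0, 1.0, 4096)   # a smooth gradient

def quantize(x, bits):
    levels = 2**bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

# Working depth == delivery depth: quantize after every operation
a = quantize(ramp * 0.25, 10)        # darken, store as 10-bit
a = quantize(a * 4.0, 10)            # brighten back, store as 10-bit again

# Higher-precision working depth: quantize only once, at delivery
b = quantize(ramp * 0.25 * 4.0, 10)

print("unique levels, 10-bit intermediates:", np.unique(a).size)   # roughly 256
print("unique levels, float intermediates :", np.unique(b).size)   # roughly 1024
```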
All the timelines at MILL are 16-bit.
Depends on your delivery needs. For TV spots I load my footage at source resolution and bit depth, do all my work at 16fp and camera resolution (since that's what Flame does), but all my timelines are 10-bit. Since the source footage is usually 12-bit, I have that range to do anything I need. Since my deliveries are, at best, HD ProRes HQ, I think anything greater than 10 is a waste. In fact, it's only in the last couple of years I'd get colour at 12-bit.
When I load my roughcuts, I load them as 10-bit. They are the base for my timelines. All work gets dropped on top of it. If I wanted to change that, I would reformat just the roughcut and copy/paste everything on top of it again.
That's interesting. For deliveries beyond broadcast I get it, but for straight-up commercial work it's never going out at more than a ProRes 422 HQ.
What's the reasoning, if I might ask @TimC?
I'm with Tim. Same exact workflow.
It depends a bit on what you are doing on the timeline.
If you are just comping, or bringing in open clips from batch, then 10-bit should be totally sufficient. Where it would come up short is if you use the Image TL-FX, which means color happens in your final bit depth, not an intermediate bit depth.
I think that only depends on where you place it in the order of effects. For instance, ahead of an Action will give you original res, while after Action gives you TL res. Also, there's the green image node on the far left, which happens ahead of everything and can't be re-ordered.
For the longest time my philosophy was to make sure all of my timelines were built at 12-bit, since ProRes 4444 (which is 12-bit) would often be my largest bit-depth export, and I thought it made sense to align those two.
Recently I've moved to 16-bit timelines after I started comping exclusively in linear. I ran into a pretty specific issue on a few occasions where, when working with camera raw footage converted to linear, the linear EXRs did not display correctly in my 12-bit timelines. I think it has to do with values that are mapped down from their 16-bit values to the 12-bit timeline and it doesn't quite align correctly. I don't pretend to understand all of it, but because of that circumstance, and because I'm often working on the camera raw first and then the graded footage later, I've just adopted 16-bit timelines across the board and haven't looked back.
A pure technical reason for choosing 16 or 12 bit:
16-bit is floating point
12-bit is integer
Depending on your content, one or the other has more precision.
log/Rec709 → 12-bit integer has more precision than 16-bit float
linear → 16-bit float is your only choice
This also extends to your caching format:
Integer can be cached as uncompressed/DPX or ProRes XXX
Floating-point data can only be cached as EXR.
So basically: files from grading → integer 12-bit
If you have linear EXRs/ACES stuff on the timeline → 16-bit float
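To put rough numbers on that (my own numpy sketch, nothing Flame-specific): a normalized 12-bit integer steps uniformly across 0 to 1, while half-float spends its precision unevenly across the range.

```python
# Assumed illustration: step size of a normalized 12-bit integer vs. the spacing
# of 16-bit half-float at a few sample values.
import numpy as np

int12_step = 1.0 / (2**12 - 1)   # uniform step of ~0.000244 across 0..1

for v in [0.01, 0.1, 0.25, 0.5, 1.0, 4.0]:
    half_step = float(np.spacing(np.float16(v)))
    print(f"value {v:5.2f}   12-bit int step {int12_step:.6f}   half-float step {half_step:.6f}")

# From roughly 0.5 upward the 12-bit integer step is finer, which is why log or
# Rec709 material that fills 0..1 holds up better as 12-bit integer. Scene-linear
# data goes well above 1.0, which integer formats clip, so float is the only option there.
```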
The only comping I do on a timeline is either a straight up gmask or titles. I keep my timeline as close to my delivery format as possible. All bit depth and colour space conversions are done in the batch render node. And 99% of all the titles and graphics I get are 8 bit. All of this satisfies my short-form only needs.
My timelines always depend on the workflow of the project, but usually (99%) we have heavy VFX work, and all our plates are exported and worked in ACEScg 16-bit, meaning our timelines are always 16-bit. If by any chance I make a mistake in the bit depth of the timeline, 12 or 10 bits, I can see it quickly and easily as it clamps the values over 1. For me it is always 16-bit, as the grade is done after VFX.
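A quick way to see that clamp outside of Flame (hypothetical pixel values, just numpy): scene-linear values above 1.0 survive a half-float representation but collapse once you force them into a normalized 10/12-bit integer.

```python
# Assumed illustration of clipping over-1.0 linear values in an integer timeline.
import numpy as np

linear = np.array([0.18, 0.9, 1.0, 4.5, 12.0], dtype=np.float32)  # highlights above 1.0

as_half  = linear.astype(np.float16)                               # 16-bit float: range preserved
as_12bit = np.round(np.clip(linear, 0.0, 1.0) * 4095) / 4095       # 12-bit integer: clipped at 1.0

print("half-float :", as_half)
print("12-bit int :", as_12bit)
```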
Agreed. Handy for that last minute colour correct in a title gap bfx.
But @TimC and I do have the luxury of in-house colourists who take pride in their work and fix many issues that could crop up. We don't need to rely on cowboys from Company 3, thank goodness.
My understanding is that if you work with a non-ACES workflow, that is, integer-based encoding, 12-bit is able to store more precision than 16-bit half-float.
If you are using ACES, then 16-bit half-float is superior, as this is a floating-point encoding rather than integer.
Therefore, my understanding is that if you work with legacy color policy management you should pick 12-bit, and if you use ACES you would pick 16-bit half-float.
Please let me know if I am missing something here.
Am I correct?
That is mostly correct to my knowledge (also see @finnjaeger's post earlier).
With the caveat that this only applies to 16fp. If you use 32fp, both are good regardless. Of course that's also more bits.
The reasoning behind this actually has less to do with ACES vs. non-ACES, and more with whether you use a log-encoded color space vs. a linear color space. And that explains the "mostly": ACEScg is linear and would fall into what you described as "ACES", whereas ACEScc is log-based and would fall into the "non-ACES" part of your description.
At issue is whether the available code value precision is allocated to the area of the image where it's most beneficial.
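To put very rough numbers on that (my own back-of-envelope figures, not from the Flame docs): a normalized 12-bit integer has a uniform step of 1/4095, about 0.00024, everywhere in its 0 to 1 range. Half-float spends its bits unevenly: around 0.1 the step is roughly 0.00006, between 0.5 and 1.0 it widens to about 0.00049, and above 1.0 it keeps doubling. So a log-encoded image that fills 0 to 1 actually gets finer steps in its upper half from 12-bit integer, while scene-linear values above 1.0 can only be held in a float encoding at all.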
And to make matters more complicated, in Flame we often only tag the color space, but don't convert it until the very end or until a node with color management is encountered.
If you are working in ACES 1.1 with ACEScg as the working space, and load an Alexa clip and tag it as such, all nodes will still operate on that image in LogC (and thus on log-encoded images), until you add a color management node that translates it into ACEScg.
Similarly, in Batch, nodes will default to the bit depth that is selected in the project settings, unless they have reason to do otherwise. See the example at the end:
That is different from Nuke, where the Read node will translate it immediately into the working color space and everything operates in the same plane. Or Resolve, which does all its internal processing at high bit depth.
There are pros and cons to these approaches. But it means you have to pay more attention to these tradeoffs in Flame than usual, as you may be in a mixed environment. In Batch it can be worth keeping ColorSpace in the node icon enabled.
If you prefer Nuke's way of working, enable "ColorMgmt" in the clip/media node and change the mode to "Input Transform". Then each media node will include the transform to your selected working color space and the rest of the Batch node tree will be uniform in terms of color space.
But keep in mind that we're talking about precision in small fractions. This will mostly matter in high-end work or in processes like keying. It is not as bad as the banding in 8-bit footage that we all hate. Good to know, but not a show stopper for many.
Batch example in Flame 2025. Project originally set up with 8-bit:
Arri footage comes in as 10-bit, Paint also defaults to 16fp, but then the ColorMgmt clip is back to 8-bit.
Changed the project settings to 16fp, and also set the bit depth of the Arri input clip to 16-bit. Now everything runs at 16fp. I had to create a new batch group though; the original batch group did not update once I changed the project settings.
The Flame user guide states that the project bit depth is what drives any image rendering on the graphics card. There is probably more complexity to it: what the graphics card uses to display, what the texture bit depth is for the internal pipeline, etc.
There doesn't seem to be a batch group setting for bit depth; that just depends on the project settings when the node was added. For timelines, the bit depth is what you pick when you create them, and you can reformat them after the fact.
In that first batch group with the 8-bit project default, I added a Master Grade and pushed the exposure; you can see how the scope falls apart with banding. That didn't happen in the second batch group after I changed the project default bit depth.
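For anyone who wants to reproduce that scope behaviour without a Flame in front of them, here is a rough stand-in (assumed values, not the actual Master Grade math): quantize a dark gradient to 8 bits first and then push the exposure, and the code values spread apart into visible bands.

```python
# Hypothetical sketch of the banding above: 8-bit quantize before the push
# vs. pushing at higher precision and quantizing last.
import numpy as np

grad = np.linspace(0.0, 0.25, 2048)   # dark, smooth gradient
push = 3.0                            # stand-in for the exposure push

def quantize(x, bits):
    levels = 2**bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

banded = quantize(grad, 8) * push     # 8-bit project default, then the grade
clean  = quantize(grad * push, 8)     # grade at higher precision, quantize last

print("distinct levels after push, 8-bit first:", np.unique(banded).size)  # ~65
print("distinct levels after push, float first:", np.unique(clean).size)   # ~192
```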
So it's all far from trivial and not always transparent. A sequence of pot holes to navigate…
And I've fallen into one of them in the past: having a timeline that was too thin and then struggling with corrections.
All this makes Flame a powerful app, but also demanding on the driver. It's the Ferrari, not the Mercedes. We're earning that premium rate over the Resolve and Nuke folks with sweat and sometimes tears.
Wow, I did not know the project bit depth even matters. I just always set it to 32-bit; I assumed it was for "default timeline settings". I need to check that out, that's super dangerous. Wow.
Most other apps now just do everything in 32-bit float, Resolve and Nuke for example, and you can't change that, because why would you. Sadly, in Flame a lot of nodes can't handle 32-bit float.
What happens when you export the batch and import it back in?
From the docs…
Select a default project bit depth. The bit depth for images rendered by the graphics card. If working with a mix of 8 and 16-bit resolutions, select 16-bit FP graphics display. Even when working only with 8-bit images, results will be better with 16-bit FP graphics rendering when transparencies, blending, and gradients are part of an effect. 16-bit FP rendering takes longer. Projects from previous versions of the application with a graphics bit depth higher than 8-bit are mapped to 16-bit FP. Also, if your output is ultimately an 8-bit format, having the best possible quality immediately prior to output produces the best results.
Exactly. Wasn't totally on my radar either until I was making examples for the post above. I think it acts as the default for batch nodes, unless the node has specific defaults.
It's just easy to dig a hole you don't realize you have. I remember one project where I thought the footage was badly captured, because it was giving me a terrible time in color when trying to match. I just couldn't push it far.
And I checked the timeline multiple times. However, I checked it by bringing up the "reformat" dialog for the timeline, thinking that it would show me the current resolution and then let me pick a new one and change it. Turns out that dialog doesn't populate with current data at all, just defaults. You need to Opt-Click on the timeline to check the current bit depth. So I had good footage, and was fighting 8-bit log the entire time. And by then I was far enough into the project that switching it would have reset too much. Very painful.