ProRes - Legal vs Full

Anyone know the technical differences between Legal and Full on a QuickTime 444 export?

According to the QuickTime specs, ProRes is not tied to any specific level. You can encode full range if you really want to, you just need to make sure everything in your pipeline supports it.

For some reason Autodesk tells you to import Alexa ProRes as full range (YUV headroom), which is not the default behaviour of Baselight or Resolve. Not sure what you mean by technical differences; one uses video levels and the other uses full/data levels.
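To make the levels difference concrete, here is a minimal sketch, assuming the common BT.709 10-bit convention (legal/video luma runs 64-940, full/data uses the whole 0-1023 scale), of how the same normalized value lands in the file and what the classic mismatch looks like:

```python
# Minimal sketch, assuming BT.709 10-bit conventions:
# legal/video range puts black at code 64 and white at 940,
# full/data range uses the whole 0..1023 scale.

def encode_10bit(value, full_range):
    """Map a normalized [0.0, 1.0] value to a 10-bit code value."""
    if full_range:
        return round(value * 1023)            # full/data: 0..1023
    return round(64 + value * (940 - 64))     # legal/video: 64..940

def decode_10bit(code, full_range):
    """Map a 10-bit code value back to a normalized value."""
    if full_range:
        return code / 1023
    return (code - 64) / (940 - 64)

# Reference black and white in both conventions:
print(encode_10bit(0.0, True), encode_10bit(1.0, True))    # 0 1023
print(encode_10bit(0.0, False), encode_10bit(1.0, False))  # 64 940

# The mismatch this thread keeps running into: a full-range file
# interpreted as video range overshoots at both ends.
print(round(decode_10bit(0, False), 3))     # -0.073  (crushed black)
print(round(decode_10bit(1023, False), 3))  # 1.095   (blown-out white)
```

So the pixels themselves don't change, only the scaling on interpretation does, which is exactly why a mismatched player shows a contrast/gamma-looking shift rather than obviously broken images.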

I’d love a clearer picture here. I had some ProRes renders from Baselight at full range which really tripped me up. Up to this point I haven’t really looked at or changed the headroom settings, even when importing ProRes rushes from Alexa. I’m now worried that I’m missing a trick here!

But I am curious, and I just had something weird happen: when I exported a log file back to ProRes 444 at full range, there appeared to be some sort of gamma shift when viewing on a Mac. So some color changes were happening on export, and I'm not sure why. I would think it would just set larger numerical values in the file, not actually change how the file is meant to be viewed.

Don't trust QuickTime for anything you are doing anyhow; there is a lot more happening than just video/data levels: NCLC tags, gamma tags, Flame writing wrong tags, etc.

That said, QuickTime interprets all ProRes as video levels, so exporting full range will give you a shift in QuickTime even if all the metadata is the same.


There isn't really a clean picture, just conventions and no standards. I always treat all ProRes as video levels unless specified otherwise; Resolve does it the same way. To me ProRes is a YUV-based codec, not an RGB codec like DPX, so it should be video range.

Can't remember off the top of my head whether QuickTime supports full/video metadata or if that was only MXF… (update: it does not have that metadata).

Dolby Vision masters are supposed to be full range.

For ProRes rushes from Alexa, Autodesk says full range, Resolve and Baselight say video range… fun stuff…

Regarding the difference between full and video range: internally, Flame processes RGB data at full range and then puts that full-range data into a container at either video or data levels. It doesn't really change the contents, it just scales the signal on input and output. The thing is, your video player needs to know how to decode it so it can send the appropriate signal to your GPU, and your GPU sends video/data levels over whatever cable is used, DisplayPort, HDMI, or whatever.

So the whole pipeline always needs to be looked at, just like when you send an SDI signal from an AJA card to a monitor: everything has to match in levels.

There are also overshoots that can happen when going RGB → YUV, where you need the additional "YUV headroom". That's also why you can end up with out-of-gamut colors when delivering TV masters. (Simplified.)
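To put a number on that overshoot, here is a rough sketch using just the standard BT.709 luma coefficients (pure matrix math, ignoring range scaling and chroma subsampling): the Y'CbCr cube is bigger than the RGB cube, so perfectly storable Y'CbCr triplets can decode to R'G'B' values outside [0, 1].

```python
# Sketch of why "YUV headroom" exists, using BT.709 luma weights.
KR, KB = 0.2126, 0.0722
KG = 1.0 - KR - KB

def rgb_to_ycbcr(r, g, b):
    """Normalized R'G'B' [0..1] -> Y' [0..1], Cb/Cr [-0.5..0.5]."""
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2 * (1 - KB))
    cr = (r - y) / (2 * (1 - KR))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform; can produce values outside [0, 1]."""
    r = y + 2 * (1 - KR) * cr
    b = y + 2 * (1 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b

# Any valid RGB round-trips fine:
print(ycbcr_to_rgb(*rgb_to_ycbcr(1.0, 0.0, 0.0)))  # ~ (1.0, 0.0, 0.0)

# But a Y'CbCr triplet that never came from valid RGB (produced by
# filtering, grading, or chroma subsampling) can overshoot on decode:
r, g, b = ycbcr_to_rgb(0.5, 0.0, 0.5)
print(round(r, 3))  # 1.287 -> out-of-range / out-of-gamut red
```

Those decoded values above 1.0 (or below 0.0) are exactly the "illegal" excursions a broadcast QC pass will flag on a TV master.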

Thanks Finn, this was very helpful. It's interesting that browsers like Firefox don't have color management; that was an interesting thing. It sounds more like a giant metadata problem with QuickTime than anything. I was thinking Full was more of a log color space and Legal was more video, but I was wrong on that assumption. Hopefully Autodesk fixes the metadata in QuickTime in a future release; that would help things a lot.

Finn, just curious: in Flame's export there is a YUV encoding option (Auto Color / Rec601 / Rec709 / Rec2020). Do you know if that changes the metadata within the QuickTime file to correct for the bad tags? Or what do those selections do to the file?

Not sure what you mean here. I haven't really checked all of Flame's options and the resulting tags, but a few things:

When working in the "simple linear workflow", it will tag the QuickTime on output according to the selected output "LUT", to some degree. Select sRGB and it exports 1-13-1, so proper sRGB tags. When you render 709 it tags it as 1-1-1, which is roughly gamma 1.96. The same happens on "untagged" exports, like with "legacy" color management.

Now, what I just figured out today from a friend using older macOS Mojave on an iMac Pro: a 709 gamma 2.4 tagged QuickTime (1-2-1) looks exactly like a 1-1-1 tagged QuickTime in QuickTime Player… So this is a pretty new issue, which makes sense as it really only started popping up in 2020… I don't want to go down the rabbit hole of old macOS versions though.
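For anyone decoding those triplets: the three numbers are the primaries / transfer function / matrix indices stored in QuickTime's 'nclc' color atom, and the code points follow the ITU-T H.273 tables. A small sketch covering only the handful of codes mentioned in this thread (`describe_nclc` is just an illustrative helper, not a Flame or QuickTime API):

```python
# Illustrative lookup for the NCLC "primaries-transfer-matrix" triplets
# discussed above. Code points per the ITU-T H.273 / ISO 23001-8 tables;
# unknown codes are passed through as raw numbers.

PRIMARIES = {1: "BT.709"}
TRANSFER = {1: "BT.709 (decoded as roughly gamma 1.96 by many players)",
            13: "sRGB (IEC 61966-2-1)"}
MATRIX = {1: "BT.709"}

def describe_nclc(tag):
    """Turn a tag like '1-13-1' into a human-readable description."""
    p, t, m = (int(x) for x in tag.split("-"))
    return (f"primaries={PRIMARIES.get(p, p)}, "
            f"transfer={TRANSFER.get(t, t)}, "
            f"matrix={MATRIX.get(m, m)}")

print(describe_nclc("1-13-1"))  # the sRGB export mentioned above
print(describe_nclc("1-1-1"))   # the Rec.709 export
```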

Sorry for jumping into this thread late, I guess I missed it :wink:

Ah! the famous Legal vs Full Range debate :wink:

Thanks for pointing out the documentation error about ARRI Alexa ProRes and Full Range. This will be fixed shortly; for now, stick with Include YUV Headroom disabled for this format.

As stated earlier in this thread, the QuickTime format does not have metadata to define the range of the image, and the general consensus seems to be that most camera vendors generating Apple ProRes media files (in both QuickTime and MXF containers) use Legal range. In Flame Family products, the Include YUV Headroom option is per format or container, not per compressor, so you need to be careful when importing a selection of QuickTime files from various sources, since imported content might be wrong. You can still edit this option in the Pre-Processing options after importing, but that can still be confusing.

Some camera vendors record Full Range ProRes in the MXF format, and thankfully there is metadata in those files, so you should be good to go.

There are also workflows that record the SDI or HDMI output of the camera to an external recorder, which adds another level of complexity. If you Google this topic you will see many threads; depending on the camera, gamma and color space, recorder type, etc., everything seems to be possible these days!

At least MXF does not show these issues, and hopefully more and more acquisition workflows will move toward this container format. ARRI started recording into the MXF container with the ALEXA Mini LF, and the latest ARRI Alexa 35 camera follows the same modern recording format.

Good luck!


Thank you!

Yeah, there is additional testing needed when it comes to SDI/HDMI outs and matching that to whatever the file is…

What camera records ProRes full range? Would be interesting to know…

Argh now I need to go down that rabbit hole again and try it out and test and make lists…

Btw, I found a DekTec card that allows me to record and play back SDI signals “as is” with full metadata… will need to buy this and then create a library of cloned camera signals.

Also want to add: Alexa seems to definitely be video range.

“While ARRIRAW, ProRes and DNxHD are legal range encoded, the LUTs are applied to RGB images in the software and these are full range. So please download LUTs as following for Log C to Rec709 conversion:”

Sony Venice / Venice 2 can record ProRes media files, and based on the gamma selection you could get either Legal or Full Range. But since these recordings are in an MXF container AND our MXF import option for YUV Headroom is set to Auto mode, your import will be good. I am looking for a public document about this to share with the community. But then you are able to export either Legal or Full Range based on your deliverable requirements, and there you are back in confusion land!

I checked with ARRI and they confirmed that Alexa ProRes is always legal range:

“ARRI cameras SDI output is legal range as well as the ProRes encoded files. There’s no switch to change that.*

*ALEXA Classic + XT cameras have a menu setting to change the SDI to full range. ProRes is/was always legal range.”


I have been wondering, why do we have video/legal data range?

I know it is inherited technology and we carry it around because it is locked into a very common standard.

But what were the engineers doing with the values 0-15 and 236-255?

Was it so that they could carry another data stream like early metadata? Was it to stop noise from creating values that might clip outside of the maximum 0-255 range? Did early CRT technology not like super dark blacks and bright whites?

Most of it had to do with broadcast limitations. For instance, a full white screen at 100 IRE would cause buzzing in the audio. 0-255, however, was not a measurement used for TV signals; they used a scale of -40 to 100. For active picture, anything below 7.5 in NTSC and 0 in PAL and SECAM was illegal for broadcasting. -40 black was the sync signal in the vertical interval. It wasn’t even until the 80s that we had devices that could create black in the raster area that fell below 0. We called it SuperBlack and would use it as a keying background so that we didn’t need mattes for graphics that contained regular black.
Chroma also had its limitations. Too much and it would tear on a TV screen. Too low, and it got VERY noisy. We monitored this on scopes because in the edit suite things could look fine (except for the noise), but it would fall apart on broadcast.
For the most part we are still carrying around baggage from a system developed 100 years ago.


You might find this video interesting in this regard. It’s a recording of the SMPTE DC section meeting a few months ago on the ongoing discussion of getting rid of fractional frame rates. It starts with a long background on the origins of the fractional rates.

As is fairly well known, it was a fix for audio carrier and frequency separation. What is fascinating is that there were two equally valid choices: one in favor of the frame rate and one in favor of the audio. In today’s complex pipelines you would of course have optimized for the even frame rate. But as Merrill Weiss reasons, back then there was no recorded signal; everything went straight from camera to airwaves, so there was much less concern about the implications of an odd frame rate. Thus you can’t blame them for creating what is a ginormous mess these days.
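For the curious, the factor itself is tiny. A quick sketch of the arithmetic: NTSC color kept the 4.5 MHz sound carrier and slid the line rate down by a factor of 1000/1001, which turned 30 fps into the familiar 29.97 (and the same slip produces every other fractional rate we still carry around):

```python
# The 1000/1001 slip behind all the fractional frame rates.
from fractions import Fraction

mono_rate = Fraction(30)  # original black-and-white NTSC frame rate
ntsc_rate = mono_rate * Fraction(1000, 1001)

print(ntsc_rate)          # 30000/1001
print(float(ntsc_rate))   # 29.97002997...

# The same factor produces the other fractional rates:
print(float(Fraction(24) * Fraction(1000, 1001)))  # 23.976...
print(float(Fraction(60) * Fraction(1000, 1001)))  # 59.94...
```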

Great watch. I was on the Zoom call as I know one of the other speakers from a different weekly meeting where we had batted around this question quite a bit already.