Grain pipeline

I want to talk about grain management in a commercial pipeline. So far I have seen these two cases:

  1. grade then comp
    Colorist does whatever, degrains whatever shots they feel like in timeline res (maybe), and stuff comes out inconsistent and crazy. Maybe the colorist adds grain for artistic reasons to some shots in shot res.

Stuff goes into comp, we degrain the graded (and possibly already degrained, or extra-grained, or whatever) footage, do our comp work and then regrain it to look like the original.

In finishing maybe someone adds another layer of “creative grain” onto the timeline.

  2. comp then grade

Plates are degrained, then comp work is done, then they are regrained.

This then goes into grading, where again the colorist does whatever: degrains this, adds grain there, blah blah.

This goes into finishing, where we might add another layer of creative grain.

To me this is all pretty horrible. So what I have been doing instead on the last 10 jobs or so, without seeing any drawbacks, is:

I first degrain everything as part of my conform. I just go through with Neat Video set to maximum quality and let it render out new degrained plates, usually in Resolve due to its metadata handling.

Then if comp is first, which it usually is for us, they use these plates and DON'T regrain.

Those degrained comps then go into grading, and grading does not need to degrain during color sessions, which is a big performance benefit.
An added bonus: for remote streaming, degrained footage is a lot easier on the encoder.

In finishing we then add artificial grain in timeline resolution. That way we get coherently sized grain across ALL shots even if they have been resized, and we only degrain once in the whole pipeline.
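To make the "coherent grain size" point concrete, here is a minimal sketch of the idea, in plain NumPy. The `timeline_grain` helper is hypothetical (not any real tool's API), and the luminance weighting is just one common choice:

```python
import numpy as np

def timeline_grain(frame, strength=0.02, seed=None):
    """Add zero-mean Gaussian 'grain' generated at the frame's own resolution.

    Because the noise is created at delivery (timeline) resolution, every
    shot gets grain of the same apparent size, no matter how much the
    underlying plate was scaled or reframed upstream.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, strength, size=frame.shape)
    # Weight the grain by luminance so shadows aren't swamped; real tools
    # expose per-channel and per-tone controls for this.
    lum = frame.mean(axis=-1, keepdims=True)
    return np.clip(frame + noise * (0.3 + 0.7 * lum), 0.0, 1.0)

# Grain the final timeline-res frame, not the source-res plate.
frame = np.full((1080, 1920, 3), 0.5, dtype=np.float32)
grained = timeline_grain(frame, strength=0.02, seed=1)
```

The point of the sketch is only where the noise is generated: at timeline resolution, after all resizes, so it never gets filtered by a scale.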

I don't understand why we obsess over “plate grain”. We literally resize 4-6K to HD all day, filtering everything like crazy by “cropping in” or whatever, losing all that “special Alexa/Venice/whatever grain” anyhow.

If we grade first, I still do this, and then export degrained-graded footage from grading into comp. Again, just one single degrain at the beginning of the plate's journey.

(I get it, if you only do comp then yes, you want to degrain and regrain and just do your job…)


I did this on my last job. It was shot in low light and I was worried about shots getting an aggressive grade.

I had control over the quality of the denoise. Comp didn’t worry about the grain. The colourist got to set the grain/noise with the clients during the grade.

I hadn’t considered doing the grain at timeline resolution :thinking:

If we do grade first I don’t want any creative grain added on VFX shots.

I like the idea of minimizing the grain operations, rather than stacking them. Repeated and uncoordinated grain operations would most certainly take away fidelity.

And I like the idea of the grain being handled by someone who cares/knows about grain, rather than just slapping stuff on for no reason.

Putting grain at timeline resolution rather than source resolution is an interesting question. Though in reality, in the past you stacked various grains too: the camera original, then various intermediate prints and operations, as well as video noise.

Add to this that some artistic grains get added rather carelessly.

So I think I stay in the camp of retaining the original grain of the shot, as it’s more likely to retain qualities influenced by lighting and camera details.

Assuming that you have access to one of the variations of DasGrain, I would suggest an alternate setup: it can export the grain plate of your original footage. Save that along with the shot, and then at the end of the pipeline use that saved grain plate to regrain the specific shot properly. Yes, you may get variations between shots from different cameras and vastly different setups, but again, this would have been the case in the past as well, when you mixed film stocks or had very different lighting conditions.

That stays true to the original grain, isn’t a huge workflow burden, and accomplishes all the benefits you are looking for. In theory you wouldn’t need to preserve a grain plate for each shot, but only one per setup.
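For what it's worth, the additive model behind this kind of grain-plate round trip can be sketched in a few lines. This is illustrative NumPy only; the actual DasGrain gizmo also normalizes the grain against the plate's tone response, which is omitted here:

```python
import numpy as np

def extract_grain(original, degrained):
    # The grain plate is just the difference image between the original
    # plate and its degrained version.
    return original - degrained

def regrain(image, grain_plate):
    # Re-apply the stored plate on top of a finished comp or grade.
    return image + grain_plate

rng = np.random.default_rng(0)
degrained = np.full((4, 4, 3), 0.4)
original = degrained + rng.normal(0.0, 0.01, degrained.shape)

plate = extract_grain(original, degrained)  # save this alongside the shot
comp = degrained * 1.1                      # stand-in for comp/grade work
final = regrain(comp, plate)
```

Under this simple additive model, regraining the untouched degrained plate reconstructs the original exactly, which is why saving the difference image per shot (or per setup) is so cheap and faithful.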

For me, applying authored timeline-sized grain is a great benefit in a world where we can do that, and not have to worry about the past where we had to deal with layered/mixed grain from all over the place.

I don't see the benefit in following “in the past we did this because it didn't work any other way”. We also didn't reframe 90% of the shots, add timewarps to half of them, and all that jazz. I haven't had a job where “plate grain” could even be preserved into the end product, given the extent of what happened to the plates; they all got filtered to oblivion anyhow.

It's a bit of a different way of working and maybe a bit scary, but I think it's beneficial to do a final grain pass in timeline resolution, whether that's scanned grain or DasGrain-extracted/scattered grain.

I think the main factor here lies more in how that grain is generated, if one is pedantic, which one should be. I am working on some nice low/mid/strong presets for different resolutions.


So why not generate one or two grain plates from the source footage, from representative clips? Then resize them to timeline resolution and preserve them, and apply them at the end, instead of using, for example, CineGrain plates sampled from actual film stock. Following your line of thinking, we should not apply film grain but modern Alexa/RED/Venice digital grain, properly preserved and handled.


Totally, that's what I mean by “how we generate this grain”.

It's an artistic choice in the end, and however you get to it can be your little secret sauce you sprinkle on the footage.

But yes, DasGrain, a synthetic grain plate, and then resizing and using that → a very valid approach for sure!

Is it all utter overkill for commercials being delivered at some low-bitrate h264? Probably :joy:

And yes, you can choose to “preserve” camera grain, use scanned grain, whatever works with the film really. I think this can be part of finishing, and maybe should be.

Same with sharpening too, but that's another can o' worms.

Just as much as filming some high-end beauty shots at 8K, retouching with precision, and then rendering it down into a 1024x1024 GIF < 10MB, which means 80%+ compression. :rofl: Done this twice last year. What do you mean we can control the colors? Well, there are only 256 to go around for the whole file.




Try that in the feature film world.


How is that usually handled there? I only worked on the comp/VFX side of things in longform/features, never on the finishing/DI side.

Is it all plate grain kept till death, and then the whole thing is filtered/exported to whatever format for DCI or whatevs?

The funny thing is, why are we still even referring to it as grain?! Considering in 99% of cases the image capture is done digitally, it is noise, not grain.

I find it quite amusing that you used to spend over a million dollars on top-of-the-line hardware grain removal tools when dealing with film on telecine and film scanners, and now people want to spend thousands of dollars on “Livegrain” to put noise back into a digital source because it feels too clean. It's funny how you add a bit of blur and grain and suddenly it “feels so much better and filmic”, even though people wanted to shoot IMAX because it offered better definition with less noise, but it was too expensive and hard to handle.

They spend all this money only for the noise to be removed during compression when it is streamed/broadcast (in the best-case scenario), and then a simulated (not identical) version introduced when decompressing at the viewer's end. Unless it is a codec that doesn't support this approach at all, which instead turns it into blocky noise. Admittedly, it does hold up reasonably well on 12-bit RGB DCPs, but when you are watching a heavily compressed 8-bit streaming file?!

“Grain” has to be one of the biggest wanks in the industry!!


I should add that I am not mocking Finn's or anyone else's workflow. It is an accepted part of what we need to do, and I actually think the idea of limiting denoise-then-renoise to a single occurrence is an excellent way to maintain maximum fidelity. I also agree wholeheartedly that it makes more sense to add noise/grain at timeline resolution, as this has the best chance of reaching the viewer the way it was intended (since there is no resampling of noise during scaling).

So a big thumbs up from me on the workflow.

That’s a fair observation.

Some of it is cultural. I understand that between different countries there are actually quite different customs for how skin is treated, and a long tradition of keeping things the way they've been. Yesterday I was privy to a discussion about trends in Asia and how they no longer reshape figures these days.

The logic seems to be: if that's what people are used to seeing, it will be safer for us to stay in that same swim lane rather than defining a new look. For example, there was some backlash against a few films that were captured and finished at 120fps as being too crisp for lack of motion blur.

All the same mechanism.

And by that measure we do see a lot of grain-free imagery, as in motion graphics, animations, and ‘digital’ content etc. And we see a lot of very noisy imagery from iPhones shooting in near-dark settings. All of this is around us every day on social media.

So a ‘designed’ grain that is more than zero and less than crazy seems like a quality bar people hang their hat on: not too digital, but also of high quality.

I think in the end it matters more that it's a controlled process than the exact amount or nature of it. And then we as post people, who work hard to avoid mistakes, blemishes, or anything that takes away from perfect execution, and to retain the original as much as possible (or the way the camera saw it), worry about how to make that happen, and alas you get this discussion.

I guess it obviously matters for theatrical presentation, but doesn’t Netflix want degrained masters that it adds grain to in the client so it can save money on streaming bandwidth?

AFAIK the degrain and regrain are actually part of the encoder/decoder, or rather of AV1.

AV1 basically does a degrain, then a regrain on the other side by analyzing the grain, I guess similar to the scatter tool in DasGrain.

If you take a DasGrain node that has analyzed values, it's a few kilobytes, and it can replicate the source's grain profile really well for hours… I guess that's the idea here, looking at the schematic. So Netflix would need degrained and grained masters; I don't think they want that, but rather to just degrain “before” encoding.
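That "few kilobytes that replicate the grain for hours" idea is parametric grain synthesis: ship a handful of coefficients, regenerate statistically similar noise on the other side. AV1's film grain tool uses an autoregressive model for this; the toy sketch below shows the principle only (a simple 2-tap AR filter with made-up coefficients, not the actual AV1 math):

```python
import numpy as np

def synthesize_grain(h, w, ar_coef=0.35, strength=0.02, seed=0):
    """Generate a correlated 'grain' field from a few scalar parameters."""
    rng = np.random.default_rng(seed)
    white = rng.normal(0.0, strength, (h, w))
    grain = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            left = grain[y, x - 1] if x > 0 else 0.0
            up = grain[y - 1, x] if y > 0 else 0.0
            # Simple 2-tap autoregression: each grain sample correlates
            # with its neighbours, which controls apparent grain size.
            grain[y, x] = white[y, x] + ar_coef * (left + up)
    return grain

g = synthesize_grain(64, 64)
```

The decoder-side appeal is that `ar_coef` and `strength` (a few bytes) stand in for megabytes of per-frame noise, which is the same economy the analyzed DasGrain node exploits.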

AV1 is cool


Makes me wonder what digital cameras actually do to the captured image. Do they all add their flavor of noise/grain to simulate what we perceive as cool? Or is it part of the image-capturing sensor? If it is applied, wouldn't it be great if cameras could turn off noise addition, and/or give us a desktop tool that adds this grain when we want it (assuming their grain is better than the software we all use)? It would minimize us having to denoise all the time; we would get clean, crisp images natively. Either way, grain is the least of anyone's problems.

Except for the dumb Textures stuff Arri added to the new Alexa, it is an inherent part of the image capture process, dependent on both the sensor and the amount of light hitting it.


I still don't understand how that's in the raw file…


If only it were that easy…

No, digital camera noise is inherent to modern sensors. Think of it as rounding errors in the Analog/Digital converters of the sensor chip, and just basic inaccuracies. They are worse in the shadows than in the highlights.

In fact, cameras go to great lengths to minimize digital noise. So what we get is the optimal case, not the worst case.
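One common way to see why sensor noise is relatively worse in the shadows: photon arrival is Poisson-distributed (shot noise), sitting on top of a constant read-noise floor, so SNR collapses at low signal levels. A quick sketch; the electron counts and read-noise figure are illustrative, not any real sensor's spec:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_db(mean_electrons, read_noise_e=3.0, n=200_000):
    # Photon shot noise is Poisson-distributed around the true signal...
    signal = rng.poisson(mean_electrons, n).astype(float)
    # ...plus a roughly Gaussian read-noise floor from the sensor electronics.
    signal += rng.normal(0.0, read_noise_e, n)
    return 20 * np.log10(signal.mean() / signal.std())

shadow_snr = snr_db(20)         # deep shadow: ~20 electrons collected
highlight_snr = snr_db(20_000)  # bright highlight: ~20,000 electrons
```

Because shot noise grows only with the square root of the signal, the highlight ends up with a far higher SNR than the shadow, which is why noise visually "lives" in the dark parts of the frame.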

The one exception is the new Alexa 35. It too has noise as any other sensor, but that camera can add ‘textures’ in camera, which can be film grain patterns. But that’s a whole topic in itself.

I saw the demo of the guy with the 70s mustache and the baked-in worse-than-16mm grain and it drove me into a blind rage. WHYYYYYYYYY