Versioning rant

warning, it's a rant

every single project I touch this year has over like 300 versions.

My brain can't do the same thing for days anymore. I'm tired of it, no passion left. I can't hand this to juniors or anyone really, because nobody has the attention span to properly rename 600 timelines because clients are changing the naming scheme mid-project. So I am sitting here and my wrist hurts from doing the same 4 clicks for hours.

every. single. time.

my current job is literally 70 edits at 1.5 min in length on average, 70 XMLs to conform, very little overlap, 16 TB of footage. Estimated like 600 deliverables…

This is pretty much a feature-length film… and that's not even counting stuff like clean/subtitle/altCharts etc.

It's the worst part of my job. I like creating and managing workflows, but I can't automate anything because clients keep changing stuff and I am just running behind trying to fix everything with code…

If this is what commercial finishing has become I want no part of this.

I have clients shooting stuff like a modular system where you have 20 intros, 40 different middle parts, 20 outros and then 5 different tag-ons or whatever, and then they just mix and match them in their edits. Not like they have an automated system or anything, it's all manual, usually in Resolve, because you can't have a single person dealing with all this stuff.

and of course: clean version, ProRes, H.264, subtitle version, English, German, French, Spanish, TVC, webmix, stems…

We don't have any tools to manage this. It's an absolute nightmare, and every semi-flexible workflow I can think of just becomes extremely painful because the client always finds something to completely throw a wrench in whatever I try to do.

I just want to do it right, but I just can't. We need new tools, new workflows, new everything to deal with this mess.

anyone else have to deal with this? How do you deal with this stupid stuff day in and day out?

I want to go back to having like 5 versions that I can spend time with and massage to be the best they can be. I want to do finishing, not versioning :frowning:


Seems like a junior should and must have the attention span and patience to do this grunt work that you shouldn’t have to, no? No one wants to do it, but that’s what entry-level is for…


my juniors are great, and maybe renaming was a bad example (I use scripts for this nowadays anyhow), but they are not trained for QC like I am. I can smell a jittering timewarp a mile away and can spot black edges on a bad repo in my sleep. It takes years to train that and develop an eye for it.

but nobody can QC 600 versions of the same damn thing. I need an AI to do this for me… and there is always something, whether it's a missing font on one of the render nodes, a bad resize filter, the audio running 1 frame too long… a wonky grading matte or something stupid like that.
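For what it's worth, the renaming scripts mentioned above can be tiny. Here is a minimal Python sketch; the naming scheme, the regex, and the example name are all invented for illustration, not any real client convention:

```python
import re

def rename_timeline(name, old_scheme, new_scheme):
    """Rewrite a timeline name from one client naming scheme to another.

    old_scheme is a regex with named groups, new_scheme a replacement
    template. Both patterns below are purely illustrative.
    """
    return re.sub(old_scheme, new_scheme, name)

# Hypothetical example: client switches from "JOB_v01_EN_16x9" to "JOB-EN-16x9-v01"
old = r"^(?P<job>\w+)_v(?P<ver>\d+)_(?P<lang>[A-Z]{2})_(?P<ar>\S+)$"
new = r"\g<job>-\g<lang>-\g<ar>-v\g<ver>"

print(rename_timeline("SPOT_v01_EN_16x9", old, new))  # SPOT-EN-16x9-v01
```

Looping that over 600 timeline names is then one list comprehension instead of hours of clicking.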


Yeah I find that it takes a good level of skill and responsibility to do versioning.

The attention to detail in the face of something so incredibly monotonous and repetitive. You can’t afford to have any mistakes.

Sitting through and trying to QC what your juniors have done is almost as much work as doing it all yourself.

I don't know anyone who loves this aspect of the job, except for company owners. Surely there is some easy money in it, or why else would all the agencies try to take it back and do it all internally?


Not sure there is easy money in it anymore. Budgets are stretched really thin as it is. And there are more channels to fill with content. There’s a disconnect between marketing budgets and content creation but that’s another chat.

@finnjaeger I feel some of your pain. I think my maximum was about 100-150 masters.

What would make the process work better? You mentioned resizes: would it be better to resize on the desktop once then populate the edits? Same for timewarp?


at the point where you hit 100+ versions I don't think Flame in its current state is the right tool to handle it. We need 4+ people dealing with conforms and grading and everything at the same time; Resolve is the only choice to even get close to efficiency.

There needs to be a completely new workflow developed to handle this. Nothing we are used to doing works with this new way of doing content; resizes are just one little thing.

  1. It starts with offline. They have no tools to make sure they re-use shots as often as possible, so they might use a slightly different take in version 78 vs all the others. Editorial has to think of ways to streamline the offline process, whatever that entails; I can't think of much, but I suppose you have to do a nested approach here as well, where you create a modular system of smaller timelines that then get put together at the very end or something like that.

  2. The XML/AAF workflow does not work anymore. If I have to import 200 XMLs I am already losing more than a day just clicking the same 4 buttons; same with offline. It's not like you can batch-export XMLs from any NLE that I know of. So realistically you have to do offline & online in the same software… conform isn't really an option anymore.

  3. Nested proceduralism. This is where it's at for versioning. I think every clip should be in its own "source" timeline which is then nested/precomped into all the edits you have. This is painful right now with resolutions etc.

see, it's not perfect; you can run around in circles with this. I do believe the only way to get a grip on this is to not even bother with regular timelines or the concept of an edit anymore. It has to be a modular delivery system where we deliver intro/middle/outro A/B/C/D all clean, and then author a metadata track for each that holds information on how to crop it for social media, where the logo fade-in starts, etc. Then you upload that to a website of some sort where the clients can just generate the films they need on the fly with 3 clicks (or have them auto-generated to fit a certain demographic or whatever).

sort of like Dolby Vision but for versioning. A man can dream, right?
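To make that modular-delivery dream a bit more concrete, here is a rough data-model sketch: clean modules plus per-platform framing metadata, assembled into a delivery "recipe" on demand. Every module filename and platform spec below is invented for illustration:

```python
# Hypothetical module library: clean intro/middle/outro renders.
MODULES = {
    "intro":  {"A": "intro_A.mov", "B": "intro_B.mov"},
    "middle": {"A": "mid_A.mov"},
    "outro":  {"A": "outro_A.mov"},
}

# Per-platform delivery metadata (like a Dolby Vision trim pass, but for framing).
PLATFORMS = {
    "tv_16x9": {"width": 3840, "height": 2160, "crop": None},
    "ig_9x16": {"width": 1080, "height": 1920, "crop": "center"},
}

def build_edit(intro, middle, outro, platform):
    """Return a delivery 'recipe': ordered module files plus framing metadata."""
    return {
        "sequence": [MODULES["intro"][intro],
                     MODULES["middle"][middle],
                     MODULES["outro"][outro]],
        "framing": PLATFORMS[platform],
    }

recipe = build_edit("B", "A", "A", "ig_9x16")
print(recipe["sequence"])  # ['intro_B.mov', 'mid_A.mov', 'outro_A.mov']
```

The point of the sketch: once the modules are clean and the framing lives in metadata, "generating a film" is a dictionary lookup, not a conform.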

I don't know where this will go, but it's just not 1 TVC anymore. We have to fit all lengths and aspect ratios for all platforms and safety margins and sound mixes and languages and and and.


Thanks for the reply.

The fact of using different takes is something I experienced on the job I'm on. Only 11 versions. In most cases I just tell the client, they say it should be the same, and we fix it.

Agree that scaling up to 600 makes it much less easy to manage, and I imagine the offline editor has just the same media-management problem.

Offline and online in the same software: possible, but then it changes the nature of the business. Perhaps on a project-by-project basis this sort of collaboration could reap benefits, however I imagine the politics and finance might be tricky to navigate, especially responsibility for things. Not insurmountable. Or you could hire an offline editor and try to bring the whole project in house. Again, it might have repercussions with offline companies etc.

We used to have a tool at The Mill made by Paul Crisp to scan EDLs and name the shots. Much like the linked timeline conform in Flame, but better. It was done through a webpage; it would name shots the same even if they had different timewarps. It would name the shots and make new EDLs with the consolidated names. The coup de grâce was that you could import new EDLs and it would check them for new shots and extended shots within a tolerance of frames. It was very good. It wasn't very popular because every element in the timeline was given a new shot name, so shared plates could also be tackled once; however, this was confusing for Nuke people and CG. I think it was almost perfect, and I trusted it much more than the Flame linked conforms, which I still struggle with (timewarps).

I think nested proceduralism exists already with big Batch. It isn't automated; you can do it yourself as you hint at, but it is time consuming. The part that makes this all the more difficult or time consuming is the translation from offline to online: that part where you familiarize yourself with how the job was put together, figure out where the work is and the shared shots. In short, making a C-mode for VFX.

Paul's tool was a very good halfway house. I saw a demo from NVIDIA a couple of years ago which connected different software to the same shot, so they could all understand if a shot was lengthened or changed in some way. And then there's OpenTimelineIO, which I don't know much about but wonder if it would assist in the translation between offline and online.

In my head, the data is all there and flame has the capability to take advantage of this. It would take a readjustment in workflow but it seems possible.

Thoughts @fredwarren @YannLaforest @finnjaeger ?

I am at the moment building the pipeline at Rascal using Shotgrid. The way it recreates the timeline and sucks in the renders (open clip) is very useful. If this could be scaled up to include the issues above, I think we could have a solution: a closer integration between Shotgrid and Flame with well-thought-out templates for commercials/versioning. It's great that we have the latitude to build our own, but it would be even better to have it there already. I am more than willing to collaborate with Autodesk to create this.


I will do a write-up of my current workflow, which is mainly based on overwriting files in different stages and using Resolve as a hub. Maybe this gives you an idea of what I really want.

It goes something like this (I can do a more elaborate write-up):

→ Conform all sequences from offline in resolve or flame

→ create sources sequence (in flame or resolve)

→ Ingest sources sequence into NukeStudio, creating a Nuke comp and render node for each shot

→ render all shots as "version0" from NukeStudio so I have placeholders for each render and can check color-management issues

→ (insert pattern browsing here if going to do finishing in flame)

→ Re-ingest all the v0 renders into NukeStudio and write them all out as versionless (so NS becomes the versioning hub for Resolve).

→ Reconform the sources sequence to the published renders; every clip in my timeline is now a comp render

→ Grade the comp renders in the sources sequence

→ publish sources sequence graded (versionless)

→ reconform all timelines to the published graded shots

Rinse and repeat once comp updates are rolling in…

Render comp v2 → version up in NS → export versionless, overwriting the old version → export sources sequence with grades on it, overwriting the old render again → they all magically appear in all the timelines…

it's a funky workflow, but it keeps everything pretty darn flexible. It shouldn't have to be that way, really…
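The versionless-publish step in the middle of this can be scripted. Here is a sketch of the idea, assuming a made-up `SHOT###_v###.exr` naming convention and plain file copies; a real pipeline would publish through your asset system rather than `shutil`:

```python
from pathlib import Path
import shutil

def publish_versionless(versioned_dir: Path, publish_dir: Path) -> list[Path]:
    """Copy the highest-versioned render of each shot over a fixed,
    versionless filename, so every timeline pointing at the versionless
    path picks up the update automatically on the next refresh.
    """
    latest = {}
    for f in sorted(versioned_dir.glob("*_v*.exr")):
        shot, _, ver = f.stem.rpartition("_v")   # "SHOT010_v003" -> ("SHOT010", "003")
        if shot not in latest or int(ver) > latest[shot][0]:
            latest[shot] = (int(ver), f)

    published = []
    publish_dir.mkdir(parents=True, exist_ok=True)
    for shot, (_, src) in latest.items():
        dst = publish_dir / f"{shot}.exr"        # versionless, always overwritten
        shutil.copy2(src, dst)
        published.append(dst)
    return published
```

This is the whole trick of the "NS as versioning hub" approach: downstream apps never see a version number, they just see the same path get newer pixels.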

Like many things in post pipelines, there are inefficiencies that are allowed to persist because the incentives aren’t aligned. An old rule is ‘you can win any argument if you focus on what’s in it for the other party; no one cares about what’s in it it for you’.

Unless editorial has an incentive to re-use shots (and tools that make this easy), they're not going to spend the extra time doing so, because it adds work for them. It may make sense in the bigger picture, as the overall budget benefits. And editorial is in the same boat, because often the camera department has no incentive to film fewer takes, or to run cameras at a resolution sufficient for the end result rather than at 8K just because they can and nobody told them not to.

Thus the solution has to start with tool improvements for editorial that makes it easier for them to manage their timelines. Right now we’re creating sources sequences in Flame after the fact. In fact, for this type of work, the NLE should be managing the sources sequence so it’s easier for the editors to see and then deviate only from that when there are reasons for it.

Short of that being a central NLE function, there are tools that could help. PlumePack for Premiere has some functions for consolidating projects. If that could be expanded to analyze projects and create a report, it could potentially flag issues earlier in the editorial cycle and long before they turn over. Nuendo has a decent re-conform tool; that could be another pattern. There are some 3rd-party AAF tools that touch on some of this space. None of them are perfect matches, but they give ideas of where this could go.

But in part editorial is in the same hell-hole with so many different timelines. You almost want to have a totally different tool - more like a document management system, where you make some of these edits and variations not as individual edits, but in terms of some meta data that manages 600 deliverables which are all variations of the same story - different languages, different crops, different supers, etc. etc. From a data modeling perspective this would be an easy problem to solve. And so are a lot of the GFX, they’re often just variations with minor differences easily saved as metadata.

This has grown organically and gotten out of hand. But the problem is distributed without a single owner and the one place where it all comes together at the client and budget, they generally don’t have visibility into details and what’s driving the madness.

Back when I worked at Amazon in the oughts, every year Jeff would pick 10 issues that were causing major pain, but were so cross-functional that nobody would own the fix naturally. So he would assign an SME to each of these issues that owned fixing it and working across the company to do so. For 18 months I had one of these assignments to fix high value customer experience. I was reporting to retail, but worked with call centers, fulfillment, transportation, merchandising, payments, and data warehouse to get to the bottom of these things. That worked because it was the same company, and the blessing from the top that this was a priority. Once you have multiple businesses in the whole deal, that becomes a lot harder. Even more so when you’re using slow moving external tools you can’t easily change. You need a working group of stakeholders to sit down at a table and say this is nuts, we need to do this better. So I guess @finnjaeger’s rant may be the beginning of that.

There could be a place in the market for production companies that integrate this vertically from set to delivery that at least have all the stakeholders under one roof and can optimize this and deliver a better and less financially wasteful product to the clients. Maybe some of them are, not sure? Don’t have full visibility there.


Some great points @allklier. The owning of the problem being the key. In this regard and having watched this video:

It does feel that OTIO could be the lingua franca that could push it through. The ability to have easy communication between artists and departments no matter the location, with offline and online all integrated through OTIO and a shot-management system such as Shotgrid, is exciting because of the efficiencies I feel could be made.

I was slightly disappointed that the only Autodesk contribution mentioned was RV. I had hoped there would be Shotgrid, Maya and Flame; instead we had ftrack, Blender and Nuke Studio in the discussion. There's an opportunity here to be at the leading edge, and not only that, to potentially solve the very gripe @finnjaeger is spotlighting. I hope that it's already happening @YannLaforest .


AI would be very useful. Or at least some kind of easy way to ingest a spreadsheet/CSV file of data. It's possible to do now, but it takes a while to set up.
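The CSV-ingest part is almost free in Python. The columns below are invented, but any producer spreadsheet with a header row would parse the same way:

```python
import csv
import io

# Hypothetical deliverables sheet a producer might send over.
SHEET = """deliverable,language,aspect,duration
SPOT_A,EN,16x9,30
SPOT_A,DE,16x9,30
SPOT_A,EN,9x16,15
"""

def load_versions(csv_text):
    """Parse a deliverables spreadsheet into dicts a versioning script can loop over."""
    return list(csv.DictReader(io.StringIO(csv_text)))

rows = load_versions(SHEET)
print(rows[0]["deliverable"], rows[0]["aspect"])  # SPOT_A 16x9
```

The slow part isn't the parsing, it's agreeing with the client on the column names and keeping them stable; the code is the easy half.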

This comment is based on @finnjaeger's comment above and this TikTok post:

this reminds me of this project

100,000 individual versions.

Just as a general thought for versioning… I was thinking we could have different resolutions in the different versions in the timeline, and the versions could feed into each other.

So for example your main edit, version 1 in the timeline, could be square 3840x3840, untitled.

Then version 2 could be UHD 3840x2160, with version 1 feeding into the bottom of it and a choice for resizing. In this example it would be set to crop edges so you get a full UHD image on that version 2 of the timeline.

Version 3 could be UHD but portrait, also set to centre-crop, etc. etc.

You would then set the titles etc as appropriate for each version.

The advantage would be that if you changed the main edit, the other versions would all be changed too.

Just a thought.
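The "versions feeding into each other" idea boils down to computing a centre-crop per derived aspect ratio. A small sketch of that math, no Flame API involved and all numbers illustrative:

```python
def center_crop(src_w, src_h, dst_w, dst_h):
    """Compute the centre-crop window that derives one aspect ratio from
    another, as in the cascading-versions idea above.
    Returns (x, y, w, h) in source pixels."""
    scale = max(dst_w / src_w, dst_h / src_h)  # fill the destination, crop the rest
    w = round(dst_w / scale)
    h = round(dst_h / scale)
    x = (src_w - w) // 2
    y = (src_h - h) // 2
    return x, y, w, h

# 3840x3840 square master -> UHD 16x9: keep full width, crop the height
print(center_crop(3840, 3840, 3840, 2160))  # (0, 840, 3840, 2160)
# same master -> 9x16 portrait: keep full height, crop the width
print(center_crop(3840, 3840, 1080, 1920))  # (840, 0, 2160, 3840)
```

If the master changes, only these derived windows need re-rendering, which is exactly the "change version 1, the rest follow" behaviour described above.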


Yes, exactly stuff like this. I am dreaming of a timeline where I can just define crops for different aspect ratios, as you said. Just imagine a type of Dolby Vision metadata track but for different output aspect ratios! This, or some other way that makes more sense.

The other thing: imagine a render-layer setup like Action has, but for timeline tracks, where you could say track layers 1/3/6 = clean output, 1/3/6/7/A4/A5 = French version or whatever.

OR even, let's say you drag a timeline into Batch: imagine you could use each track as an output of that node to create versions downstream in Batch?!

for me the biggest thing right now would be how to generate multiple deliverables from a single timeline. Having fewer timelines and some kind of template where I just say "all V5 tracks are subtitles"… would be great.

While writing this down I just realized NukeStudio does have that feature where I can make render presets that use certain tracks :thinking: :thinking:
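That "track layers 1/3/6 = clean output" template could be modeled as nothing more than named track sets. A hypothetical sketch (track numbers and set names invented, mirroring the examples above):

```python
# Each deliverable is just a named selection of timeline tracks.
TRACK_SETS = {
    "clean":  [1, 3, 6],
    "french": [1, 3, 6, 7, "A4", "A5"],  # extra title track + FR audio stems
}

def enabled_tracks(deliverable, all_tracks):
    """Return the per-track visibility flags a render preset would apply."""
    wanted = set(TRACK_SETS[deliverable])
    return {t: t in wanted for t in all_tracks}

flags = enabled_tracks("french", [1, 2, 3, 4, 5, 6, 7, "A4", "A5"])
print(flags[7], flags[2])  # True False
```

A render loop would then iterate over `TRACK_SETS`, toggle visibility per the flags, and fire one export each, instead of maintaining one duplicated timeline per deliverable.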

Wouldn't one timeline make everything more complicated at some point of production?

You have 5 TVCs, 20s timelines each. The client makes late changes in all of them. Now you have 4 20s TVC timelines and 1 21s OLV one. Easy to keep track of, even with 400 sequences.

You have 1 20s timeline with 5 versioning tracks. The client again makes late changes in all of them. Now you have 4 even tracks and one that doesn't match the base cut anymore. With 80 sequences and 5 sub-versionings each, would you now split the new edits back out from the clean/language tracks until you are at 200 sequences again?

Versions are getting worse and worse, with not even the same edits anymore for different languages. I guess that won't get better in the future, so there should be a much more modular system that is not timeline-based at all.


I don't usually have to deal with all this… but just to throw out another idea…

Context Variables are a core feature in Gaffer and Katana (under a different name there). It's a method/workflow that lets you adjust parameters downstream to modify attributes upstream.

Say you have a timeline where many parameters are just attributes that can be modified with variables. Segments can have attributes like "source clip", "length" or even "visibility". Text effects can have attributes like "placement" and "language". The timeline can have "format" or "fps". You name it. At any point in the project, attributes can be set to use a "Context Variable". Also, at any point these variables can be modified to re-set attributes upstream.

In Gaffer you can go a step further by using the Spreadsheet node, where you can have a single context variable affecting multiple parameters. So you can have a context variable called TikTok that defines how a bunch of attributes and parameters change upstream.

There are other interesting concepts in Gaffer regarding proceduralism, like Edit Scopes, where you can do a bunch of editing on your node graph but keep it contained within a specific scope… like a scene name.

I think these are worth understanding and mimicking for certain workflows. Not a new concept really… it has been in Houdini for ages. There are a couple of other concepts from the USD world that could be worth understanding: "Purpose" and "Variant" are used to modify the resulting product, and are attributes that can be set at any point in the evaluation of the scene graph. You can probably guess how they would work.

Here is an example of a context variable for an input image. The context variable is modified right at the end of the graph, affecting the very first node:

Here is another example of how a single context variable named "movie preset" can trigger a bunch of adjustments for upstream nodes:

More on Context Variables
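For readers without Gaffer at hand, the Context Variable / Spreadsheet idea can be mimicked in a few lines of plain Python. The presets and attribute names below are invented for illustration; the point is only the evaluation pattern of one downstream variable re-setting many upstream attributes:

```python
# "Spreadsheet": one context variable value maps to many parameter overrides.
SPREADSHEET = {
    "tiktok": {"format": (1080, 1920), "title_placement": "upper", "language": "EN"},
    "tvc":    {"format": (3840, 2160), "title_placement": "lower_third", "language": "DE"},
}

def resolve(attributes, context):
    """Evaluate upstream timeline attributes under a downstream context:
    the preset chosen at the end of the graph overrides values set upstream."""
    preset = SPREADSHEET[context["delivery_preset"]]
    resolved = dict(attributes)   # upstream defaults
    resolved.update(preset)       # downstream context wins
    return resolved

timeline = {"fps": 25, "format": (1920, 1080)}
print(resolve(timeline, {"delivery_preset": "tiktok"})["format"])  # (1080, 1920)
```

One timeline definition, N contexts, N resolved outputs: that is the whole versioning pitch of the mechanism in miniature.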


Yeah, this is cool stuff. We need to somehow get that merged in a smart way with timeline-based editing… there needs to be a change; no tool is sufficient at the moment to deal with this.

I should've included another example featuring the task list node, where at render time Gaffer iterates through all the different "versions" (in post lingo), doing all necessary changes to the graph and rendering all the different variations with one click.

I know this Gaffer example may be a bit of a blue-sky thing for Flame, but I think it's a really clever concept worth following.

On the same note, the USD methodology as a workflow is something worth looking at. Concepts like Layering, Purpose and Variants can absolutely be applied to the editorial, post and finishing world.

It would really help if Autodesk opened up the Python API a bit more. Just being able to access the API externally would be a big step towards building clever workflows.

This should be the nr. 1 priority for Autodesk. Even if you do a really good Flame job, the client is going to remember the time spent versioning.

My low-end solution is this. It could be implemented in Flame without disrupting anything big. Not the perfect solution, but good stress relief until a proper solution comes along.


2 questions…

Don't bite my head off… Is it really 600 masters? At the facility I'm at, there are producers that decipher the deliverables, and usually they can turn 600 deliverables into actually only 80-120 slated masters. The rest are just unslated variations of encode specs from those slated masters. 600 unique masters would be a MASSIVE campaign to me. Multiple leads for sure.

Second… I feel like connected source segments & connected segments work really well… what might be missing is a Python tool that makes it easier to remove/group connections. Like right now, if you want to make a new connected group, you have to go to each individual segment in each timeline, remove the connection, then create a new connection without overwriting. It's a bit cumbersome hopping between the sequences. What if you could select various segments in a table view of all selected sequences to make new grouping easier? Somewhat like the Conform tab table view.
