I would take it one step further and just do remote sharing. We all have ADSK accounts now, and Flame should be able to add users to specific libraries to wire stuff directly in and out, like working in a facility but remotely. So I could invite @AdamArcher to see my “out” library and it would open that up to you via web, not just internally.
Here is the feature request.
And PS: I know they have a lot of more pertinent requests, but it's nice to have it on the roadmap for future decisions and discussions.
Nuke has 10 legs up on Flame when it comes to remote collaboration in this way, especially for remote work:
→ no libraries, no databases, just script files on a server/Dropbox/whatever
→ easy pathing, path translation, whatever, no problemo
→ can have 400 Nuke versions installed
→ shared plugin/gizmo location through a simple env variable
→ easy Deadline integration, also for remote clients: they can use their license, work on it, and then still render on the central farm → cool!
→ almost no version compatibility issues, even backwards to a degree
→ no archiving needed, no chance of stuff being doubled or versions mismatching, etc.
→ simple RLM server for sharing licenses with freelancers via VPN if needed (and if applicable in the ToS)
→ can easily copy/paste nodes/setups as text across chat apps like Slack, which helps a lot with supervising
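To illustrate the env variable point above: a minimal sketch of sharing a plugin/gizmo folder via NUKE_PATH, which Nuke scans at startup for init.py/menu.py and gizmos. The mount path here is made up; substitute your own shared storage.

```shell
# Hypothetical shared mount; every artist (local or remote over VPN)
# points at the same folder and gets the same toolset.
export NUKE_PATH="/mnt/shared/nuke_tools"

# Nuke reads this at launch; nothing else to configure.
echo "Nuke will scan: $NUKE_PATH"
```

Remote freelancers just need the mount (or a synced copy) and the same one-line export in their shell profile.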
I agree though, for Flame a check-in/check-out system like UE/Git/Perforce is probably more the way to go. I was thinking of doing Git with Nuke scripts too, which would also be kinda cool.
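The Git-with-Nuke-scripts idea works because .nk files are plain text, so they diff, merge, and version like source code. A rough sketch, with made-up file names and node contents:

```shell
# Create a throwaway repo for the comp scripts.
repo="$(mktemp -d)"
cd "$repo"
git init -q

# A .nk script is just text; a trivial stand-in node for illustration.
printf 'Blur {\n size 10\n name Blur1\n}\n' > comp_v001.nk

# Commit it like any other source file (identity set inline for the demo).
git add comp_v001.nk
git -c user.name=artist -c user.email=artist@example.com \
    commit -q -m "first pass of the comp"

git log --oneline
```

From there, branching per shot or per artist and diffing versions comes for free, which is exactly the check-in/check-out behavior Perforce-style tools give you.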
Two sides to every discussion, and I think with Flame you can do all of these, but it comes with its own set of issues and/or limitations. I think the reason people choose to work cached is the interactivity; in a big facility you can see issues arise, hence why people do client sessions in Flame and not in Nuke. But maybe that was the tenth leg you were going to mention.
Though I would love to be able to copy/paste scripts, that would be amazing.
Sure, Flame has its set of advantages or we wouldn't even use Flame. My point is that sharing work and working collaboratively across the planet with multiple artists is a breeze with Nuke, and it currently isn't with Flame (unless you count Teradici, which is not economical for my use case, for example).
Just the fact that we can't conform/relink media from a saved batch on another system… At least I can open batches, but I have to manually replace all the footage, which is a pain, even when the remote Flame used the exact same path to the media as I did. That's one of the simpler things that, as a Nuke person, makes you question life (and no, I don't want to talk about archiving, haha).
I don't disagree with you at all, it should be easier. Have you used the storage path translation in the prefs? I have found that helps a lot. Yes, let's not talk about archiving, that just makes me have seizures.
Yeah, but that doesn't work with sending saved batches, only with archives. The saved batches I have inspected deeply only point to a media UID that's known only to the original Flame, so you can't resolve the original media path or change it in any way. I think we would have to use Read File nodes in batch for that to work? Apparently that's deprecated though?
Yes, use the Read File nodes on super fast storage and that should solve your problem. I have an 8-stick NVMe RAID that I work off of, and no reason to cache.
I heard from someone that Read File nodes have different issues though? Need to test this.
Not sure about the issues, but maybe speed related, since it's just more tied to the network. Maybe I am missing something, but it should be more Nuke-like in its workflow.
We use them all the time. As far as I can tell, there is no difference between using a Read File node and soft-importing a clip.
As an aside, in our last conversation with the ADSK team they mentioned that they were thinking of dropping that node, as it's beyond old and they didn't realize anyone still uses it. Luckily they haven't; that would upend our workflow.
Doesn't Read File have a max bit depth of 16-bit?
That would be an absolute dealbreaker.
Nope. We use it to read 32-bit UV maps all the time.