What I would like to see, considering that most Flame freelancers already have their own setup (and don’t want to go down the Flame in the cloud road).
A post-production facility could host a central Flame project server that a Flame artist can remote-connect into. From that server a remote Flame/Flare artist can download their media, either in full resolution in the original file format/bit depth/etc., or in a proxy format with selectable resolution/bit depth/codec, etc. They work on their shots locally (Flame deals with any filepath translation). When the artist is happy, they connect back to the central Flame project server, which also doubles as a Burn server and can render the result at full resolution from the original media in the main location, with the option to then download a proxy or full-resolution version of the render for quality control. Essentially a remote Burn server scenario.

What would be really cool is if it had some clever UDP file transfer protocol to maximise transfer speed between the central server and the remote Flame (think Signiant/Catapult/Aspera type speeds, but all done within the familiarity of the Flame environment).
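The filepath-translation step in that workflow could be as simple as a per-project mapping table between server roots and local cache roots. A minimal sketch, where all paths and the `PATH_MAP` mapping itself are hypothetical examples, not anything Flame actually exposes:

```python
# Minimal sketch of per-project filepath translation between a central
# server and a remote artist's local media cache. All paths hypothetical.

PATH_MAP = {
    "/mnt/facility/projects/jobA/": "/Volumes/LocalCache/jobA/",
}

def to_local(server_path: str) -> str:
    """Translate a central-server path to the remote artist's local path."""
    for server_root, local_root in PATH_MAP.items():
        if server_path.startswith(server_root):
            return local_root + server_path[len(server_root):]
    return server_path  # unmapped paths pass through unchanged

def to_server(local_path: str) -> str:
    """Inverse translation, used when submitting back to the Burn server."""
    for server_root, local_root in PATH_MAP.items():
        if local_path.startswith(local_root):
            return server_root + local_path[len(local_root):]
    return local_path
```

The same table works in both directions, so a batch setup saved locally can have its media paths rewritten automatically when it goes back to the main location for the full-res render.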
This would all use the shared library system in Flame so you wouldn’t necessarily need any kind of tie in with Shotgrid or anything like that. It would be cool to have permissions on a central shared library based around Autodesk user ID so you could lock down access to folders or libraries within a parent shared library for the project.
So you’d have a Flame or even Flame Assist in the host studio that could build Media folders or distribute batch groups or whatever way you wanted to do it to prepare media for remote Flame Artists.
Here’s a scenario and way of working where Flame could find its own remote artist niche workflow. One that would be attractive to a lot of studios that wouldn’t need to rely on a pipeline or TDs as heavily. Something that utilises the fact that there is a large international network of highly talented Flame artists with their own license & hardware.
Call me crazy but I reckon it would work brilliantly.
Avid has built something similar with Nexis Edge. Their implementation makes proxies more seamless than they used to be, both in the app and for the remote user. So with the right workflow design it’s totally feasible.
However, there is little difference between proxy and full res in editorial; in fact, working in full res often adds no benefit there. That’s different if you have to key, track, mask, do paint work, etc.
In my mind it’s still better to bring the eyeballs to the data, than the data to the eyeballs. In the current long-distance network environment it’s more predictable. On the cost side it would be an interesting comparison - generally cloud compute power is more expensive than local except for very short-term spikes. But network egress is also a major cost factor, so sending the data out could end up being more costly still. All depends on the specific scenarios.
But to make remote app workflows viable, more work remains on the tools as well as on the apps. Some apps are easier than others to use remotely, or in pooled configs. And then there is the issue of all the other tools we have in the workflows, like broadcast displays, scopes, control surfaces, tablets, multi-screen setups, etc. A long way to go.
All of which work remotely, as I demonstrated over two years ago: remote broadcast monitoring, remote Tangent and DaVinci panels, multiple monitors in Flame, Resolve and even ancient Lustre, and remote scopes.
It is way easier to remote into a machine, than to have some magic co-ordinator.
Yes, several folks have demonstrated it in various scenarios. But I believe it does take quite a few steps and parts to pull together; it’s definitely not plug and play yet, which is a barrier for wider adoption. And the more pieces there are, the more things can break or need maintenance. We just tried on Discord to get @johnag up and running with NDI output, and it was hit and miss.
I’ve actually got a more labour-intensive version of this suggestion, focused on the data management and delivery side of things, and it works well in terms of workflow. It’s solely for remote VFX work, though; it wouldn’t cut it for finishing.
Anyway, I think I am going to try and script parts of my suggested workflow. The Burn part is the real issue, but I haven’t looked through the Python implementation in Flame at all, so there might be a clever way of achieving that side of things using Python. Something like a watchfolder for batch setups, or a script that points to a folder containing setups, works out media path translation on a per-folder basis, then automatically submits that to render.
I don’t understand though, if you have a main office, why not just remote into those machines and then you have all your stuff in a single infrastructure?
Because if you are using freelancers with their own infrastructure, you may not have the internal infrastructure at the main studio to host multiple Flame artists working remotely. You wouldn’t use Flame in the Cloud if you had a whole lot of workstations sitting around not being used; it exists to scale up infrastructure you don’t already have.
My solution is trying to utilise Flame freelancers who have their own infrastructure already, so you’re not having to pay AWS for infrastructure you could already utilise.
From what I’ve seen, most people with “Flame At Home” (FaH) have sub-par infrastructure compared to on-premises. You are paying a premium for a freelancer to work slower at home.
It has worked really well for us. I just factor in a 25% overhead in time and cost for what it would take to do the job remotely as opposed to on premises. I haven’t had a single situation where I’ve been told my quotes are too expensive or that we work too slowly. When I factor in that I am not paying for inactive infrastructure or employees, it is way more cost effective. I’ve always paid freelancers market rate and generally get great work back.
I’m not sure I get it. I use remote freelancers all the time, all of whom have various levels of gear. We have a service we download and upload things to, and it’s worked without a hitch. At least no significant ones that I can remember.
Could be, I guess. But I see a lot of points of failure in integration and performance, with a tremendous amount of time required from the devs to make something that doesn’t have fundamental flaws and doesn’t need constant maintaining and upgrading.
i.e. when there are plenty of really critical things needed in what Flame is supposed to do: composite and make pretty pictures.
You are 100% correct on this, hence no feature request from me.
I’d like to see the micro-freeze issue fixed, proper M2 support, some kind of API to get tools such as KeenTools integrated, Text & Paint, render engine integration, etc., before something like this, as it is such a limited use case.
Totally understood. Also, when anyone throws up a long-shot dream here, it generates replies, which generates a trending thread, which our ADSK friends then have to take their time to look at and even respond to…
We are doing something very similar when we use Unreal Engine. Not trivial, not cheap, but it does work. That said, these check-in/check-out mechanisms require all artists to follow a few rules and to have a very good connection to upload big assets. Not only that, the server performance and capacity needed don’t make it very cheap either. So it is certainly possible (I suspect in Flame too), but it needs some standardisation.
Another approach would be to use a P2P filesystem. Again, not trivial, but I am sure it would be more performant.
Check this out
To be fair though, I am yet to see Fred write in his implemented features list after a release “from Flame Logik”. I think it can be healthy to gauge opinion before submitting an official feature request through the proper channels for the very reason you mention. Or someone might say something that makes you feel differently. But anyway…