Flame on the Cloud

Using Flame on the Cloud for the last six months has been a game changer for me. I create and use the Flame AWS virtual machine only when I’m booked. Google Cloud also works well. Autodesk is showcasing AWS at NAB. I can choose to upgrade my hardware and storage configuration per project. It is completely freeing for independent Flame artists, and you don’t need to invest in storage or hardware.
You can tell your clients you have a machine in any part of the world, and not have to lug hardware around. You can work off a laptop. All you need is a good hardwired internet connection and a laptop. I would recommend buying a nice monitor for use at home or the office, and renting one on location if needed. Transfer speeds are blazing fast. Talk to these guys at NAB or your regular Flame systems integrator to get you set up. I use Gunpowder.tech. I highly recommend them. Tell them you heard about them from me. Have a Good Friday!


Those are all great points… I do think, though, that the vast majority (almost all) of Flame artists interested in doing this will need to use a managed service, as self-configured FoC is rather complicated.

1 Like

It solves a bunch of problems whilst causing a few all at the same time. For some artists and some instances, it makes a lot of sense. For some, it doesn’t. To unpack all this a bit, @andymilkis and I are gonna do a hardware focused Logik Live soon.


Where does your data go between jobs? Do you have storage you’re always paying for whether or not you’ve got a workstation spun up? I don’t have any AWS experience, so sorry if this is a dumb question!


I already have some big issues handling 2–10 TB projects on a local machine and server. Having no worries about the machine sounds nice, but I literally have no clue how data management would work this way. Not only with NDA projects, but especially with big projects that need a lot of data going in and out at max speed.

1 Like

And no one is talking about the cost. Let’s say you purchase a full-spec Lenovo P620 with a 5995WX and an A6000. This is the best you can buy right now, and it’s about $15K USD. Amortize that over 3 years, and that is about $417/month. Let’s add another $250 to account for electricity and cooling. So for about $667/month you have the fastest machine available, 24/7, with no confusion. What is the average cost of equivalent capability on AWS, even for just 40 hrs/week?
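To make the comparison concrete, here is a back-of-the-envelope sketch. The workstation figures come from the post above; the cloud hourly rate is purely an assumption for illustration, since actual AWS pricing varies widely by instance type and region.

```python
# Back-of-the-envelope comparison: owned workstation vs. on-demand cloud.
# The workstation numbers come from the thread; the cloud rate is an
# ASSUMPTION for illustration -- check current AWS pricing for real figures.

workstation_price = 15_000        # full-spec Lenovo P620 (5995WX + A6000), USD
amortization_months = 36          # 3-year amortization
power_cooling = 250               # assumed monthly electricity + cooling, USD

owned_monthly = workstation_price / amortization_months + power_cooling
print(f"Owned workstation: ~${owned_monthly:,.0f}/month")    # ~$667/month

cloud_rate = 5.00                 # ASSUMED USD/hour for a comparable GPU instance
hours_per_month = 40 * 52 / 12    # 40 hrs/week averages ~173 hrs/month
cloud_monthly = cloud_rate * hours_per_month
print(f"Cloud at 40 hrs/week: ~${cloud_monthly:,.0f}/month")
```

At these assumed rates the cloud side lands somewhat higher before storage and egress, which is the poster's point — though the comparison flips if you only need the machine a few weeks a year.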

Cloud does NOT make sense for the vast majority of situations, and it adds complexity in configuration and system administration.


Agreed. For the vagabond it’s the answer they’ve been waiting for. Same for the artist in CPH you really want as a collaborator on your next show… or selfishly when you’re back in Stockholm over the summer and close to an Amazon data center.

I can also see the argument for on demand resource expansion without relying on available rental or purchase inventory—especially right now.

Of course if you don’t have insane LA rent for brick and mortar you can spin up a lot of instances… so that’s another aspect worth factoring in to the financial equation.


The point is that Alan is as right as Jeff, each from his own perspective. The important thing is that Autodesk gives us the chance to choose the best option for us, according to our needs. And this is really great.


That’s the point I was trying to make @kily. While it’s not a solution for everyone it is an alternative that has a place.

Alternatives are always welcome.


Alan, I am an on-demand one-man band. It could be Flame this month, a feature to grade in Resolve the next month, and directing, VFX supervising, or creative editing the month after. As I suggested, use your favorite systems integrator. In my case, I became familiar with the Gunpowder guys when I was doing Super Bowl spots. On my next gig, days later, I fired up Flame on the AWS cloud with the help of the Gunpowder guys. I tried to do it myself, but that didn’t work. I do shots and shows, not IT. I live by the right tool for the right job.

‘I live by the right tool for the right job’

Amen to that @jeffolm - have you come from the future to teach us your ways?

1 Like

Thank you Gareth: " @jeffolm - have you come from the future to teach us your ways?" I think both. I started on Flame in 1995, factory trained in Montreal that same year. Regarding the future: on-demand cloud computing for heavy lifting, in our business and all businesses, is going to be a continuing trend.

1 Like

I would think a solution for data stored between jobs could be Amazon S3 storage… it can be easily secured and is super cheap. The main cost is I/O to and from the storage, so it’s ideal for cold storage in general.
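For a sense of scale, here is a hedged sketch of what parking a mid-size project in S3 might cost. Every per-GB price below is an assumption (approximate list prices that vary by region and storage class); check the AWS pricing pages for real numbers.

```python
# Illustrative cost of parking a project in S3 between jobs.
# All per-GB prices are ASSUMPTIONS (approximate, region-dependent).

project_tb = 5                     # assumed project size, TB
project_gb = project_tb * 1000

s3_standard_per_gb = 0.023         # assumed USD per GB-month, S3 Standard
cold_tier_per_gb = 0.004           # assumed USD per GB-month, a colder archive tier
egress_per_gb = 0.09               # assumed USD per GB pulled back out to the internet

print(f"S3 Standard:  ~${project_gb * s3_standard_per_gb:,.0f}/month")
print(f"Cold tier:    ~${project_gb * cold_tier_per_gb:,.0f}/month")
print(f"Full restore egress (one-time): ~${project_gb * egress_per_gb:,.0f}")
```

This matches the point above: monthly storage is cheap, but pulling the whole project back out of the cloud is where the real cost sits.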

Yes, that is a good option. You can actually get nice performance using Resolve off AWS S3 with LucidLink.

But what no one ever provides is the real-world cost of these virtual Flame/Resolve/Nuke setups.

Since you have been using this for at least six months, can you please share your ALL-IN costs (including Gunpowder fees, storage fees, network ingress/egress fees, and compute fees)?

My previous gig paid for it for 5 months.
I’m only a few weeks in on this gig.
I built a pad into my rate for this short gig.
It was a matter of doing that, or getting a machine shipped to me.
I have been using Windows VMs for years.
They are not very expensive.
There are only egress fees when you output from an AWS machine to another region or outside the AWS cloud.
I will update you, Alan, after the smoke clears on this gig.

The previous gig, where the company paid, do they not have a Flame of their own on premises? Were you working collaboratively with other people, or was it a one man solo only project?

That’s what I’m curious about as well. I find it hard to imagine the financial benefits for a larger facility, especially with color and 3D but I realize that’s not who this is for yet.

Personally, I still like a broadcast monitor, so I’d be curious to know the cost of pumping out an NDI signal over the course of a normal working day.

NDI is terrible, and almost every NDI device, tool, and software package is 8-bit only. Additionally, the NDI data load is not insignificant, even at HD. You’d really want a secondary encoder to grab the NDI stream before it leaves AWS and transcode it to an H.264/5 stream.

1 Like

Fair points… but now it’s yet another machine in the cloud thrown into the mix. So there are still added costs if one wants a broadcast display.