Flame Benchmark Archive

Darn. I forgot to grab the Flame Benchmark Archive from Facebook. Anybody have that archive for Flame performance benchmarking lying around?



Here is the link to my GDrive-hosted spreadsheet.

And this is the link to the flame archive hosted on my GDrive.


Here’s a dumb question on the bench: when rendering, are people selecting everything on the timeline and hitting render-sel, or only rendering the top layer with nothing selected? I’ve been doing the former…

Whatever is faster. Right? :slight_smile:

We…and by we I mean @Ton…were kicking around the idea in Discord of making an easier benchmark that also includes some ML tools. The old benchmark archive was, I THINK, timeline-based in order to include the old Smoke on Mac/CFX/batchless machines out there, which, if memory serves me right, may now be obsolete.

Love mail? Hate mail?




On a rainy Sunday morning I finally took the time to check my website (Autodesk Flame – SpeedCheck – Brylka – TooDee – PanicPost) and the old Flame Speed Check from 2018. I am on the Flame beta, and the first thing I tried to render was the old benchmark.

I am sorry that I never updated the benchmark. I also found the old benchmark results in a Google Sheet from I don’t know when :slight_smile:

So I am all the happier to find, via the Flame M1 thread, that the benchmark is still in use and has been updated as well. That is very cool.
I would like to update the Flame archive on my website too.
Is the Flame_Benchmark_2020.2.zip the latest version that everyone is using?

I saw that it has some updated shaders and is now UHD@23.976 (16-bit fp).

I would like to add a line or two to the disclaimer, and since the sequence is now in float and I am a nerd for HDR (hdr.toodee.de), I think I need to render a UHD version of the benchmark :sunglasses:

best regards


I know it’s outdated now, but my old benchmark with the latest beta is a lot faster now. :nerd_face:


…and here is a fresh render of the 2020.2 benchmark that I downloaded from the Google Sheet link.
I rendered it for fun in an ACES project and exported an HDR version. I wanted to try out the HDR UI function in Flame for the first time. :sunglasses:

Has anyone tested 2023.3 vs. 2024 on Apple Silicon?
I ran the benchmark on my M1 Ultra, but 2024 is actually slower…?!

2023.3 = 6m51s
2024 = 6m54s

I checked that I am using frame based rendering and only selected the top track to render.
At first I thought my TB-NVMe was a bottleneck, but using the internal SSDs gives the same result.
My setup: Mac Studio M1 Ultra (64 cores) with 128GB RAM, running macOS 13.2

What particularly puzzles me is that in the Google spreadsheet, an M1 Max (32 cores) outperforms my setup by over a minute.

Flame config:
Uncompressed / EXR (PIZ)
Legacy colo(u)r policy

Anything I’m missing here?

It’s amazing how fast things are changing, and will change in the next year or two with Apple CPU support; the next hurdle will be graphics-based. With NVMe storage being so fast, I have adopted working completely uncached, and archives are now MB, not hundreds of GB, which is amazing. It takes a bit to get used to that workflow, but once the switch is done it sort of makes sense. I figure in five years most will be working this way, saving a ton of duplicate file space, not to mention the uncompressed archives.


This is the way.


That’s what I said five years ago, and here we are, with people still manually kicking off archives through the GUI on the nightly. We’ve still got a long way to go…


I think the price and speed of NVMe today are a different story from the SSD speeds of five years ago.

The other issue is generally a structural one on the software side. There needs to be a one-button setup for a completely unmanaged workflow, so that working unmanaged is as simple as working managed has always been.

We’re close, but not there yet.


Agreed, but working in a facility off a server is the biggest hurdle to overcome, IMHO. The servers aren’t fast enough for, say, 5 Flames to play back at once. With more people working from home as their own islands it has a chance, but facility adoption is still a ways off.

I was speaking more to the culture. Hard to teach an old dog new tricks and such.

Uncached workflow with centralized S+W is pretty definitively the best way to work if you can get it configured, but no matter how many times Alan shouts it from the mountaintop, I don’t see anyone else working that way, and it’s been possible for almost a decade at this point.


Everything has its use cases; I think working as an island is better suited to this than facilities are. I worked at a facility where people were working soft-imported and the server went down for a few hours. Nothing like seeing about 15 Flame artists standing around doing nothing for several hours to see the fault in this. If you are an island, it works out a bit better.


Most of the facilities I work for want you to work fully cached on a local framestore for precisely that reason. If the main server dies you can keep working.

I agree that it’s definitely a risk, but people are vastly underestimating the time lost daily to a cached workflow. You only need to save a few minutes a day to make up for the occasional downed server.


Sure, but the cost of the hardware to let, say, 15 Flame artists play back 4K without hiccups would, I am guessing, be a bigger investment than the cached workflow. That’s always the thing with Flame and workflows: what works for a 2–3 artist facility doesn’t always scale, and cost can be a factor. One of the great things about Flame is that it has so many variable paths to scale and change for what works at various facilities. I was only speaking of the work-from-home island workflow, where uncached works really nicely for a few reasons.

100%. That’s why I think there needs to be a bit of an internal shift in how the unmanaged workflow functions, with some tweaks that make it accessible application-wide.

I can mock this up later, but first off, we need a global variable (token) for the project location, and then allow all pathing and other tokens to resolve relative to that project token. So if the project token resolves to


…then we can start setting paths using it, like


Which might resolve to:


Once that’s in place, Flame should by default create all setup directories under a user-definable location set in preferences, allowing us to use that token. For example


…which would allow all project metadata to be generated directly into a more useful location than the current defaults, and in a templated form. Those are the first structural changes that need to be made to support a more open workflow.
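The token idea above could be sketched roughly like this. Note that the token name `project`, the helper `resolve`, and every path here are illustrative assumptions, not actual Flame tokens (the original examples did not survive in the post):

```python
# Hypothetical sketch of token-based path resolution.
# Token names ("<project>", "<user>") and all paths are assumptions,
# not actual Flame tokens.

def resolve(path: str, tokens: dict) -> str:
    """Expand every <token> occurrence in a path using the token table."""
    for name, value in tokens.items():
        path = path.replace(f"<{name}>", value)
    return path

tokens = {
    "project": "/mnt/projects/my_show",  # the global project-location token
    "user": "jdoe",
}

# All other paths resolve relative to the project token:
setups = resolve("<project>/setups/batch/<user>", tokens)
print(setups)  # /mnt/projects/my_show/setups/batch/jdoe
```

The point of the single global token is that everything downstream (setups, renders, caches) derives from one place, so relocating the project means changing one value.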

The second structural change needed to facilitate an open workflow would be a rendering-mode toggle that enables either “legacy” mode or “modern” mode. To explain a little further: “legacy” toggled on means things function exactly as they do today. When you render a render node, it renders an S+W-managed clip that is completely opaque to the project filesystem. The world continues as people are used to.

Conversely, toggling to “modern” would default renders to a user-definable location, also set as a preference. To continue the example from above, that might be:


The file format rendered would be defined by the cache settings at startup. So in the instance of ProRes 4444/EXR PIZ, if you flip-flop a 16-bit clip on the desktop, which would normally create a new source on the desktop, it would instead render a half-float EXR to the path above and import an uncached open clip of the render to the desk, referencing the result. The second we can treat all renders (except timeline cache renders) as unmanaged renders referenced only on the desk, a whole world opens up. Batch can largely already be made to work this way, with the added benefit that multiple iterations can be stored in a single reference: the open clip. With this new structure there would be no reason one couldn’t use the same strategy for every module, allowing versioning of clips from modules using open clips the way Batch does. This is the second structural change that needs to be made.
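As a rough illustration of the versioned, unmanaged-render idea, here is a sketch where each render lands at a predictable, versioned path and a single reference (standing in for an open clip) tracks every iteration. The class name, path layout, and naming convention are all assumptions for illustration, not the actual open clip format:

```python
# Hypothetical sketch: every render goes to a versioned path on disk,
# and one small record (a stand-in for an open clip) references all
# iterations. Paths and naming are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class OpenClipRef:
    """Stand-in for an open clip: one reference holding all versions."""
    name: str
    versions: list = field(default_factory=list)

    def add_render(self, project_root: str, ext: str = "exr") -> str:
        """Record a new iteration at the next versioned path."""
        version = len(self.versions) + 1
        path = (f"{project_root}/renders/{self.name}"
                f"/v{version:03d}/{self.name}.{ext}")
        self.versions.append(path)
        return path

clip = OpenClipRef("shot010_comp")
clip.add_render("/mnt/projects/my_show")   # v001
clip.add_render("/mnt/projects/my_show")   # v002
print(clip.versions[-1])
# /mnt/projects/my_show/renders/shot010_comp/v002/shot010_comp.exr
```

The design point is that the desk holds only the reference, while the filesystem holds every iteration transparently, which is exactly what makes the renders non-opaque to the project.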

The last structural change needed is to path translation, so that it actually does what it should. An artist working unmanaged should be able to define a localized path to a project using the aforementioned token, and that pathing should be updatable at any time with every reference to media, setups, models, whatever, remaining intact. As it is today, it’s too easy to break a whole project, clips, anything, with incorrect pathing. In a truly unmanaged workflow, all references to media need to point at a central location. This is only possible with the changes listed above.
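Path translation of the kind described, where a localized prefix swaps in for the central project root without breaking references, might look like the sketch below. The function name and every path are hypothetical assumptions:

```python
# Hypothetical sketch of prefix-based path translation: media references
# are stored against a central project root, and each site or artist
# remaps that root to a local mount. All paths are illustrative.

def translate(path: str, central_root: str, local_root: str) -> str:
    """Remap a stored reference from the central root to a local mount."""
    if path.startswith(central_root):
        return local_root + path[len(central_root):]
    return path  # leave non-project paths untouched

stored = "/mnt/projects/my_show/sources/plate_A/plate_A.0001.exr"
local = translate(stored, "/mnt/projects/my_show",
                  "/Volumes/home_nvme/my_show")
print(local)
# /Volumes/home_nvme/my_show/sources/plate_A/plate_A.0001.exr
```

Because only the prefix changes and the relative structure under the root is preserved, every clip, setup, and model reference survives the move, which is the “nothing breaks” property being asked for.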

Beyond that, the caching mechanism works well enough, by and large, to let this whole system work. You cache a clip from the network, you get playback. If you’re working locally on a home machine, you can work uncached with a reference point to the same location where your project metadata is housed. This system would also allow centralization of a ton of other data that is currently spread all over the place: things like user profiles, color policies, or even Matchbox shaders. It can also scale well, either by machines or by artists, using existing tokens.

In my mind this is the way, @fredwarren, and what I was trying to explain (poorly) a couple of months back at that dinner.