Better multilayer Action and/or workflow

A quick question for the forum.

I can appreciate that integrating multilayer EXR support across all nodes is a big task. As an interim solution, who would be satisfied with the ability to import a multilayer EXR and have it automatically connect all its layers as inputs into an Action?
Even better if it were smart enough to detect what type of element each layer is and automatically create/connect the correct nodes in Action, as well as set the correct blend modes?

As much as I like the idea of multichannel workflows, it can sometimes be more time consuming to select which layer controls what than to quickly pipe into the appropriate input.

As a second option, could there be some kind of multilayer input just into Action that then breaks it apart into different media layers within Action? To be honest, that’s the main node where multichannel would be the most helpful. I don’t necessarily need it throughout my batch tree. Could we also have a separate node within Action for when you have mattes packed into the RGB channels?

I will write a feature request for this but wanted to open it up to thoughts/suggestions before I submit it.

Respectfully, no, not interested…

I’d rather we spend the time to move into the 21st century. There are a lot of potential UI tricks we could use to make a multichannel workflow feel fast.

Imagine dragging a connection out of a node and hitting a hotkey: a pulldown pops up with the layer/channel sets under the cursor, and your selection of channels is what gets connected to the node you drop that connection on. In the node, the layer selection would be set to your choice from the previous step, but you could change your mind and set it to something else after the fact.

We need the ability to process more than RGBA data as it winds its way through comps.

What you’re talking about is almost a Python script for building beauty renders in Action.

I love your comment @cnoellert; it was exactly the kind of response I was hoping for. Don’t get me wrong, this is what I’d like too. This is also where I’d like to see AI actually help. I shouldn’t have to tell Flame what each layer is or what it is for; it should be smart enough to know what is a depth map, a motion vector, a normals map, etc., and in the case of multiple versions of a pass, either make a best guess or do something clever. That is definitely what I would like to see.

Maybe if you did want to apply a matte, a contact sheet immediately opens for you to select which matte (separated out when mattes are contained within RGB), and so on with surfaces, normals, motion vectors, diffuse, depth, displacement, etc. Essentially a super fast way of working with multiple channels. I don’t want connecting one thing to another to be replaced by making selections through a drop-down menu, which is just connecting in a different way. There needs to be a smart multichannel workflow. What could that look like?

I guess what I am really hoping for is a conversation as to what a multi channel workflow would look like in Flame.

Interesting topic. I’m working on a multilayer EXR splitter for renders coming from different sources (Blender, C4D…); the layer names vary for every app.
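For what it’s worth, a name-normalisation table is one way to sketch that problem. The alias lists below are hypothetical examples, not a complete survey of real renderer output, since every app (and version) names its AOVs differently:

```python
# Hypothetical mapping of per-app AOV layer names to canonical pass types.
# The aliases are illustrative; real Blender/C4D/V-Ray names vary by
# version and render settings.
CANONICAL_PASSES = {
    "depth":   ["Z", "depth", "ViewLayer.Depth.Z"],
    "normals": ["N", "normal", "ViewLayer.Normal"],
    "motion":  ["motion", "Vector", "motionvector"],
    "diffuse": ["diffuse", "DiffCol", "diffuse_color"],
}

def classify_layer(layer_name: str) -> str:
    """Guess which canonical pass an EXR layer name refers to."""
    leaf = layer_name.split(".")[-1].lower()  # last dotted component
    for pass_type, aliases in CANONICAL_PASSES.items():
        for alias in aliases:
            a = alias.lower()
            if leaf == a or layer_name.lower() == a:
                return pass_type
            # Substring match only for longer aliases, so single-letter
            # names like "N" or "Z" don't match everywhere.
            if len(a) > 3 and a in layer_name.lower():
                return pass_type
    return "unknown"
```

A splitter could run this over the channel list and fall back to "unknown" for anything it can’t classify (cryptomatte layers, light groups, and so on), which is roughly the kind of best-guess behaviour discussed above.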


So much of it is about assumptions. That’s largely how Nuke deals with it. When you defocus, the default depth source is a channel called depth. Mattes assume you want the alpha of the main layer set. Nuke isn’t magic and has the same issues; it just makes intelligent guesses about what you might want to do.

We had a really nice exchange at the Logik after dark regarding this with @fredwarren. I threw down quite a few suggestions regarding all of this. Based on that convo there seems to be a concern regarding Flame artists relearning to deal with multichannel pipes and b-tree comps—that if we were to fuck with batch by changing the fundamental way artists construct their work that folks will riot. I think there are ways to make large scale changes but still retain the basic feel of the rgb+a workflow we have today.

In my mind, the way that could work, in addition to direct manipulation like I outlined above, would be by extending some of the concepts put forward with EXR channel display and naming, and combining that with a series of rules, similar to what one does in color management but for defining default connection behavior. For example, a legacy config would by default only allow manual direct routing, mimicking what we have today. The next step would be defining rules and creating those assumptions like Nuke has.

Translated to Flame language, I could see a workflow in Action with maps where, say, you add a ppass map to your surface and it defaults to your layer’s ppass, or UV for UVs, and so on, so that layer selection happens not only at the input level but maybe at the map level as well, letting you even override your diffuse with a layer set from a different Action input that’s not its default beauty. All of this could be defined by rules in prefs.

Same with a comp node. The matte inputs could be set to the RGBA alpha by default, or overridden by a direct connection if folks want legacy behavior, or by a selection to change the incoming layer set.
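A minimal sketch of what such a prefs rules table might look like. Everything here is invented for illustration, not real Flame API; it’s just the data model implied by the rules idea above:

```python
# Hypothetical default-routing rules in the spirit of the prefs idea above.
# Nothing here is real Flame API. Each rule maps a node input to the
# layer/channel it should grab by default from the incoming layer set.
DEFAULT_ROUTING = [
    # (node type, input name, default source within the incoming layer set)
    ("comp",    "matte",   "rgba.alpha"),
    ("defocus", "depth",   "depth"),   # Nuke-style assumption: a channel called "depth"
    ("surface", "uv_map",  "uv"),
    ("surface", "normals", "normal"),
]

def default_source(node_type, input_name):
    """Return the default layer/channel for an input, or None for
    legacy behaviour (manual direct routing only)."""
    for rule_node, rule_input, source in DEFAULT_ROUTING:
        if rule_node == node_type and rule_input == input_name:
            return source
    return None  # no rule: fall back to today's manual connection
```

A "legacy" prefs config would simply ship an empty rules table, so every input returns None and behaves exactly as it does today, which is how both paradigms could co-exist.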

This is all the long way of suggesting I think both paradigms can co-exist and we can benefit from having the flexibility.

The next part of this convo is about the analogous nature of a timeline to multichannel layer sets and channels (layers and tracks) and all the insane shit one could do if those timeline/editorial principles were blown wide open. Imagine a procedural batch node for combining, manipulating and ultimately editing multichannel data in the same way you edit timelines… in fact what if they were the same process? What if editing Multichannel procedurally in batch was the same as editing a timeline procedurally in batch and vice versa? Let that sink in.

I kind of think there needs to be some adapt-or-die mentality, for both Flame artists and Flame itself. I think there are ways to introduce new approaches without drastically changing the overall feel of Flame.

I think back to the 2013 Anniversary Edition. So many Flame artists hated the changes, but it was a necessary revamp and Flame is so much better because of it. What we are talking about with a multichannel workflow is nowhere near as radical a change, and Flame would vastly benefit from it. If they can do it for Fusion (and the user base is pretty excited about that, and about deep), then it can be done in Flame. I’m really thankful to the dev team for taking the time to recode for Vulkan and Metal, as well as native Apple Silicon. I think we are starting to see the benefits of what was a necessary rebuild of the engine room. We’ve seen some great new tools and improvements. Time for some further updates to embrace more contemporary approaches to visual effects.

I’d also like to see some line-in-the-sand versions of Flame after which archive compatibility is not guaranteed: essentially long-term editions, maintained for a number of years, into which old archives can still be restored. That would free newer versions of Flame to revamp their approaches even where doing so removes the ability to restore older archives.

Whatever the multilayer approach, it needs to feel fast. Speed has always been the thing about Flame that keeps me on it. Flame needs to embrace multichannel to keep the feeling that you can do things faster in Flame IMHO.

Interpreting multi layer files correctly would be the first step in this journey.
It’s currently broken.
The second step would be fixing multi channel open clip workflow.
It’s currently broken.
The third step would be to enable a Write File node to export channels or parts into separate EXR files, complete with independent compression per channel/part.
Otherwise there will be no performance advantage: you’re stuck decompressing every channel/part of your 360-megabyte-per-frame OpenEXR on the CPU, and it needs to play back at 24 or 30 frames per second for 2000 frames.
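Taking the numbers in that example at face value, the bandwidth math makes the point concrete:

```python
# Sustained decode throughput needed for the example above:
# 360 MB per frame, played back at 24 fps and at 30 fps.
frame_mb = 360
for fps in (24, 30):
    gb_per_s = frame_mb * fps / 1000  # decimal GB/s
    print(f"{fps} fps -> {gb_per_s:.1f} GB/s sustained decode")
# 24 fps needs ~8.6 GB/s and 30 fps ~10.8 GB/s, before any actual
# compositing work — which is why per-channel/part compression and
# selective reads matter.
```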
I think I submitted some bug/improvement stuff at some point.
All of the above is workable if you keep the layer count in your containers to a minimum and use Batch to multiplex the passes/parts/channels to do what you want.
I just want that brief window of opportunity back where it was possible to export those hard earned motion vectors instead of adding dead weight to archives.

This is a very good point too. I have also read people suggesting that multilayered workflows can cause slowdowns in Nuke when you have too many channels. I’m assuming each node has to check whether a contained layer should be affected or passed through, which on a heavy comp with a large node tree could be a pain. I’m also imagining how messy such a large comp could potentially get in Flame.

One thing about the current way of working is you have absolute control of what goes where. On the flip side you have to connect absolutely everything you want to use but at the same time you are only connecting things you need.

I think this conceptually could be a very interesting topic of discussion. We would need to understand the complexities and trade offs of a multilayered workflow vs the benefits.
