Tracked Clone Brushes

Still relatively new to Flame, coming from Nuke and SilhouetteFx.

Doing some beauty work, and watched Andy’s excellent tutorial. I like his technique of offsetting a gmask for cloning and point tracking that.

I very much come from a workflow of tracked clone brush and auto paint (via Silhouette & Nuke).

I’ve worked out a Flame equivalent (thanks to a post I found from Grant), where I use a Paint node in Batch to do the clone on frame 1. Then I have a separate Action node with an Axis object to get a point tracker, and I link the x/y channels in the animation editor. That works quite well, but it’s a multi-step process and requires organization to know which tracker goes with which brush stroke.
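For anyone unfamiliar with the linked-channels trick above, a toy sketch of what it amounts to (this is just the math, not Flame's API, and the function name is made up for illustration): the stroke painted on frame 1 gets offset each frame by the tracker's motion relative to frame 1.

```python
import numpy as np

# Hypothetical illustration only: linking a point tracker's x/y channels
# to a clone stroke means offsetting the frame-1 stroke position by the
# tracker's delta from frame 1 on every subsequent frame.

def tracked_stroke_position(stroke_xy_f1, tracker_xy, frame):
    """Offset a frame-1 stroke position by the tracker delta at `frame`."""
    delta = tracker_xy[frame] - tracker_xy[0]
    return stroke_xy_f1 + delta

tracker = np.array([[100.0, 200.0],   # tracker position on frame 1
                    [102.0, 198.5],   # frame 2
                    [105.0, 197.0]])  # frame 3
stroke = np.array([140.0, 220.0])     # clone stroke painted on frame 1

print(tracked_stroke_position(stroke, tracker, 2))  # [145. 217.]
```

This is why the manual Axis link works: an Axis driven by the tracker channels applies exactly this per-frame delta to whatever sits under it.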

Is there a simpler way? It’s too bad that the paint node doesn’t allow you to attach a point tracker to the x/y coordinates directly like the Axis object does. That would round this out quite nicely. Maybe a feature request?



Well, there’s always the route of stabilizing the image first, placing your clone brush where you want it, doing the fix, then inverting that stabilization to get back to the original motion of the plate and comping that back over.

I do have one other opinion on this that may be controversial and/or unpopular: live painting like that can be an unnecessarily heavy process in Flame and can really bog a batch down. A lot of the time you could just as easily matte the area you want to fix and use a front/matte offset to achieve pretty much the same thing; again, you could stabilize beforehand if you want. Use a gmask and 2D Transform into an Action, and you’re blazing fast.

Welcome! So, Flame doesn’t have a “trackable paint brush” like you’re used to in Nuke or Silhouette. There are a lot of Silhouette and Mocha users here, and of course the Silhouette Paint plugin is available within Flame as an OFX plugin. You’ll see a lot of Flame users stabilize, paint, and then invert the stabilization, whether that’s a 1-point, 2-point, planar, motion vector, or camera track.
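The stabilize/paint/invert round trip described above can be sketched in a few lines. This is a toy model with a pure integer translation standing in for the track (real stabilizes are sub-pixel, perspective, or mesh warps, but the round-trip logic is the same): paint in stabilized space, and the inverse transform carries the fix back to plate space on every frame.

```python
import numpy as np

# Toy sketch of stabilize -> paint -> invert-stabilize. An integer
# translation stands in for the tracked motion; np.roll plays the role
# of the stabilize/unstabilize transforms.

def stabilize(img, shift):
    # Remove the plate's motion so the feature sits still.
    return np.roll(img, (-shift[0], -shift[1]), axis=(0, 1))

def unstabilize(img, shift):
    # Reapply the original motion (the inverse transform).
    return np.roll(img, (shift[0], shift[1]), axis=(0, 1))

frame = np.zeros((8, 8))
shift = (2, 3)                 # how far the plate moved on this frame

stab = stabilize(frame, shift)
stab[4, 4] = 1.0               # "paint" a fix once, in stabilized space
result = unstabilize(stab, shift)

print(np.argwhere(result == 1.0))  # the fix lands at [[6 7]] in plate space
```

Because the fix rides the inverse of the same transform, it stays locked to the feature regardless of how the plate moves.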

Here’s a thread that covers a few techniques.



Thanks @BrittCiampa and @randy.

Good point. It’s a common problem: you try to re-create what you know instead of using the platform as it was intended. Touché.

I’m familiar with the stabilize/process/reverse workflow. Somehow I never took to it as much as I should have. Maybe this will be the incentive to do so.



I can attest to this. I’ve tried Sequence paint on a stabilized plate, and in some cases I was really happy with the results, lighting changes and all. But I’ve seen render times balloon very quickly, especially with a lot of strokes. And if you have to chain multiple stabilize/sequence-paint/unstabilize setups together for fixes across the shot, renders can get pretty long. You could render those paint strokes out and bring them back in, but it’s something to keep in mind when you make changes.

Also welcome! Recognize the name from LGG and MixingLight.


This is a little OT, but relevant still I think, and excuse me if it’s overly obvious and this is no deep comping philosophy.

But in regards to pre-renders, having them chained throughout a batch, and then having a nightmare when you have to change what you did on the very first PR: I try to think vertically as much as I can, as opposed to horizontally. What I mean is, if I’m fixing five different things in a shot, I organize it so I do all the fixes independently, as if each one is the only thing I’m fixing in the shot. Each fix gets its own little micro setup, one above the other, and they all feed with mattes into one Action that combines the fixes together for what will be the final. Obviously this only works if the fixes aren’t sequentially dependent on each other.

The advantage, though, is this: say a client sees a little edge poking out on the roto, or something on what would be my first PR. If that PR is in a chain feeding subsequent PR renders, I’m going to be doing a lot of re-rendering just to fix one aspect. But if it’s its own little micro setup feeding into that final Action, I just fix that one PR and render out the shot again.

Like I said, this probably isn’t as novel a concept as I think it is, and a lot of people probably think/work this way; I mainly just think about it when I DIDN’T work that way and I’m waiting for my third PR render to finish so I can render the fourth, and so on and so forth.


Paint can get heavy, but sometimes it’s the only thing that can get you through. I find I’m doing just about every cleanup these days with a stabilize->paint->invert stabilization rather than tracking in patches, mostly because if you do it right (Rec Clone and Drag set to sequence, and @allklier make sure you disable “Consolidate” on the left or Paint will act dumb and weird), you get lighting and focus changes for free.

It does get heavy, which sucks, but if it works, it works. If I know I’m gonna be doing a bunch of high-rez paint, I like to throw a Resize onto the stabilized plate set to Center/Crop and crop it down to either 1920x1080 or 1280x720. Leaving it set to Center/Crop is the key to avoiding filtering artifacts. The Paint is much faster at the lower rez, and then when you’re done you can pipe its fill and matte into another Resize before the stabilize, t-click the stabilization node to set that Resize up (again Center/Crop), and it should slot right back into your invert-stabilize Action or 2D Transform.
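The reason Center/Crop matters here can be shown in a quick sketch (function names are illustrative, not Flame's): a crop is pure pixel selection, so pasting the painted region back is bit-exact, whereas any scale would resample and soften the plate.

```python
import numpy as np

# Illustration of why Center/Crop avoids filtering artifacts: a crop is
# just a slice, so the crop-and-paste-back round trip is lossless. A
# scale would interpolate pixels and could never be undone exactly.
# These helpers are illustrative, not Flame API.

def center_crop(img, h, w):
    """Return the centered h x w window and its top-left origin."""
    H, W = img.shape[:2]
    y, x = (H - h) // 2, (W - w) // 2
    return img[y:y + h, x:x + w].copy(), (y, x)

def paste_back(plate, patch, origin):
    """Put the (possibly painted) patch back where it came from."""
    out = plate.copy()
    y, x = origin
    out[y:y + patch.shape[0], x:x + patch.shape[1]] = patch
    return out

rng = np.random.default_rng(0)
plate = rng.random((2160, 3840))              # stand-in for a 4K plate
patch, origin = center_crop(plate, 1080, 1920)
# ... paint on `patch` at the lower resolution ...
restored = paste_back(plate, patch, origin)

print(np.array_equal(restored, plate))  # True: the round trip is exact
```

With a scaling Resize instead, that final equality check would fail everywhere, which is exactly the filtering softness you're trying to keep out of the plate.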


I hear you! I guess I’ve never been 100% sold on the inherent advantage of sequential painting over just sourcing something with a 2D Transform and a matte into an Action. If I need a really articulated matte that would be easier to paint, I just paint the matte, freeze it, and then use it as the matte for my sourcing. I feel like I’m missing something and/or missing out!

Edit: I’m very guilty of not cropping something down when I can. It’s good to get a reminder, no matter what process you’re doing!

Interesting perspective @BrittCiampa and @CoryJohnson. I don’t necessarily care too much about render times with Paint. Of course, I try the procedural techniques first: blurs, front/matte offsets, A2Beauty, Crok_beauty, etc. But inconveniencing some electrons while being able to step away, do something else, or render in the background or on Burn makes it an easy pill to swallow.


For cloning, sure, but the Drag brush is just so key for removing things from smooth, gradated areas (taking stuff off car metal, laptop logos, phone tracker marks with reflections moving through them), as is the ability to switch from Sequence to individual frames for when the talent swipes across the tracker mark and you have to clean the tracker out of a motion-blurred finger, etc. I feel like I use it on just about every cleanup I do.


I was raised in the Danny Yoon school of render optimization, 'tis engrained in me now!


Worked through it, and am happy with the perspective grid stabilize/paint/unstabilize workflow.

Re: render times, CPU horsepower is reasonably affordable these days, unless you’re stuck working on a laptop. And getting a little eye/fresh-air break is certainly not bad.

SilhouetteFx is slow to render with paint. Running it on a 32-core Xeon system, it’s not uncommon for me to max out all cores and still only get 5fps render times. One of the things I’m looking forward to with Flame is doing fewer cross-app round trips (not a big fan of the plugin versions, too obscure and difficult to version-manage) and having an app that can handle all aspects of finishing.

And better workflow habits. The notes on pre-comp renders and organizing them are well taken.

Thanks all for the quick education and tips.


I do that diff comparison for my own OCD. To get around it, I comp the painted frame over the original using its matte, so I’m not using stabilized-and-unstabilized pixels where they haven’t been touched.

Stabilise, paint, invert stabilise is fine if you then diff your result with the plate and use the resulting matte to comp over the plate. That way, the only difference will be your paint work…
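That diff-then-comp step can be sketched in a few lines (the threshold value and function name here are illustrative assumptions, not anything from Flame): difference the unstabilized result against the original plate, threshold it into a matte, and comp through that matte so untouched pixels stay bit-identical to the original.

```python
import numpy as np

# Sketch of the "diff then comp" safety step: only pixels that actually
# changed (beyond a small epsilon for resampling error) are taken from
# the painted result; everything else comes from the original plate.

def diff_matte_comp(plate, painted, eps=1e-3):
    matte = (np.abs(painted - plate) > eps).astype(plate.dtype)
    return matte * painted + (1.0 - matte) * plate, matte

plate = np.full((4, 4), 0.5)
painted = plate.copy()
painted[1, 2] = 0.8                  # the actual paint fix
painted += 1e-5                      # tiny filtering error everywhere

comp, matte = diff_matte_comp(plate, painted)
print(matte.sum())                   # 1.0 -> only the real fix survives
print(comp[0, 0] == plate[0, 0])     # True: untouched pixels untouched
```

The epsilon is the judgment call: too low and the round-trip filtering error leaks into the matte, too high and subtle paint work gets thrown away.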


I think for this reason it would be nice to get a point tracker in the Paint node, as I asked in my original post. If you have numerous paint strokes to take care of, then stab/paint/unstab/comp is effective. But if you have just one or two little details to take out, it would be more efficient to quickly paint and point-track, which by its nature would be isolated. Less time, less node-tree real estate.

That makes this request an efficiency improvement, not a must-have.

This. Don’t filter the whole plate. Batch Paint automatically outputs a matte for all your strokes. Just pipe its output into the inverted stab and Comp it over your original plate.


Excellent point; that makes this very seamless.

Expanding on the stabilize/paint/unstabilize workflow, here’s one more variation using the recent addition of the power mesh to Mocha Pro.

Sandwich the Paint node between two duplicated Mocha Pro nodes: the first with a tracked power mesh in stabilize/unwarp render mode, the second a duplicate in stabilize/warp mode.

This may have a benefit over the perspective grid in that it’s more granular for morphing textures like facial regions. And it has a benefit over the paint-and-motion-vectors technique in that you don’t freeze a frame; you’re using all frames, so lighting changes can propagate. As the Mocha Pro nodes render live (albeit a bit slowly; caching definitely recommended), it also remains a non-destructive process. Though comping it over the background with a mask is still recommended.

In my example tree below, the mask comes from the Mocha mesh; I should refine the alpha operation to further cut out only the paint strokes. And the Mocha UI didn’t have the colorspace set right, but that’s not material to the result.

Screenshot from 2022-08-26 09-17-56

Any thoughts on this? Pitfalls I’m overlooking?

I was toying with the idea of exporting the alembic data and see if the unwrap could be accomplished with a camera projection, instead of the Mocha module renders. But haven’t arrived at a workable solution yet.

This is cool! I’ve yet to really dig into the power mesh side of Mocha, but I’m keen to! Regarding the last bit about exporting the Alembic: if you’re looking to unwrap UVs in Flame, there’s no real way to do that. There are some hacks that’ll get you close, but it’s no simple process, and a simple unwrap-UV function is something I personally would love to see in Flame one day.


Here’s a rendered example:

I cloned out about a quarter of her brow. All cloning was done only on the first of 310 frames with duration set to sequence, no further adjustment. It holds the morphing skin reasonably well. Render time for the 4K clip @ 310 frames was about 7 minutes.