Tracked Clone Brushes

I used to use this a lot, but I hardly ever use desktop processes any more, so it sort of fell out of my toolkit.

So, regarding powermesh: when you export an Alembic, Mocha projects UVs onto the surface at whatever frame you select. In my case, frame 1001.

We import the .abc into Action, make a new output set to UV + Matte, and we'll see that the Alembic does in fact have UVs. Don't forget to set your result camera to the included camera, and double-check your start frame: Mocha tends to start at 0 as opposed to 1, so the animation will be a frame off.

Next, we generate a full frame of unperturbed UVs at frame size from another Action, by just adding an image and changing the default output to UV. We take that full frame of UVs and comp our powermesh UVs over them using the matte output, with a little edge matte shrink.
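For anyone curious what that full frame of unperturbed UVs actually contains as numbers, here's a rough NumPy sketch of the idea. This is purely illustrative, nothing Flame runs; the 4448 × 3096 frame size is the one from my example:

```python
import numpy as np

# Illustrative only: a full frame of unperturbed UVs as numbers.
# 4448 x 3096 is the frame size used in this example.
W, H = 4448, 3096

# Red (U) ramps 0 -> 1 left to right; green (V) ramps 0 -> 1
# bottom to top, since UV maps start at the lower-left corner.
u = np.linspace(0.0, 1.0, W, dtype=np.float32)
v = np.linspace(0.0, 1.0, H, dtype=np.float32)

uv = np.empty((H, W, 2), dtype=np.float32)
uv[..., 0] = u[np.newaxis, :]        # U in the red channel
uv[..., 1] = v[::-1, np.newaxis]     # V in green (array row 0 is the top)
```

The bottom-left pixel reads (0, 0) and the top-right reads (1, 1), which is exactly what the back of the comp should look like.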

The output of that comp looks the same as the unperturbed UVs we have plugged into the back of the comp. That's how we know the rest position is the frame we specified when exporting from Mocha. If we move a few frames forward, we can see the UVs of the powermesh being pushed and pulled around.

Then it's time for some simple vector math. We need a difference vector to know where to push our pixels. We know that our rest position is our first frame, and we know that our first frame looks exactly the same as a full frame of unperturbed UVs, so to get the difference we just subtract the full-frame unperturbed UVs from our moving UVs (the output of the comp node), creating our difference vectors.
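As a toy example of that subtraction (just NumPy to illustrate the math, not part of the Flame setup; the values are made up):

```python
import numpy as np

# A toy 2x2 "frame" of UVs to illustrate the subtraction.
# rest   = the unperturbed full-frame UVs
# moving = the comp output at some later frame
rest = np.array([[[0.25, 0.25], [0.75, 0.25]],
                 [[0.25, 0.75], [0.75, 0.75]]], dtype=np.float32)

moving = rest.copy()
moving[0, 0] += [0.02, -0.01]   # pretend one area of the mesh drifted

# The difference vector: moving UVs minus unperturbed UVs.
diff = moving - rest
```

Wherever the mesh hasn't moved, the difference is zero, so the pixel spread leaves those pixels alone; the non-zero vectors tell it where to push.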

Once we have the difference vector we take our original frame and add it to the front of a pixel spread and take our difference vectors and add them to the pixel spread vector input like so…

Then we adjust the parameters of the pixel spread to accommodate our specific frame size. The max distance parameter is the largest dimension, which in my case is 4448 pixels, so that becomes our distance. Threshold is salt and pepper to taste. All UV maps start at the lower-left corner, so the vector origin is 0,0. Our X vector is defined by red, at full strength, since the X dimension is our largest dimension and the greatest distance a pixel would ever need to travel. Our Y vector is our green channel, but we can't put in a value of 1 because our frame is not 4448 high, it's 3096. Instead we scale the value of the Y vector by the ratio of the frame's height to width, for a value of roughly 0.7. This means the pixels will only ever be able to move in Y by 4448 × 0.696, or 3096 pixels. If we scrub, we can immediately see that the area of the powermesh is staying still. If we see a lot of tearing on the edges, we activate under to tuck the tearing edges under the spread.
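The X/Y scaling is easy to sanity-check with a few lines of plain Python (hypothetical numbers matching my 4448 × 3096 frame, nothing tool-specific):

```python
# Sanity check of the pixel spread scaling, using my frame size.
width, height = 4448, 3096

# Max distance = the largest dimension of the frame.
max_distance = max(width, height)        # 4448

# X vector at full strength: red = 1.0 can move a pixel the full
# max distance, because width is the largest dimension.
x_scale = 1.0

# Y vector scaled by height/width so green = 1.0 tops out at the
# frame height instead of overshooting.
y_scale = height / width                 # ~0.696

max_y_travel = max_distance * y_scale    # 3096 pixels
```

So the 0.7 in the UI is just height/width rounded a touch; the exact ratio is 3096/4448 ≈ 0.696.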

Now that the powermesh area is still, we can sequence paint to our heart's content. We then add our stabilized and painted output into the first layer of a new Action whose back is the original clip. We take our original powermesh UVs and plug them into a second layer of that Action.

From there it's just the standard Action setup: a single image node of our stabilized and painted layer, with a UV map from our original powermesh driving the diffuse's position, and we're done.
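Conceptually, that last Action step is just an inverse warp: every output pixel looks up where to sample the stabilized, painted frame via the powermesh UV map. Here's a minimal nearest-neighbour sketch in NumPy; the function name and arrays are hypothetical, just to show the idea:

```python
import numpy as np

def uv_warp(painted, uv):
    """Sample `painted` (H, W, C) at the positions in `uv` (H, W, 2).
    Nearest-neighbour only; UV origin at the lower-left corner."""
    H, W = painted.shape[:2]
    x = np.clip(np.rint(uv[..., 0] * (W - 1)).astype(int), 0, W - 1)
    # Flip V because array row 0 is the top of the frame.
    y = np.clip(np.rint((1.0 - uv[..., 1]) * (H - 1)).astype(int), 0, H - 1)
    return painted[y, x]
```

Feed it an identity UV map and you get the image back unchanged; feed it the animated powermesh UVs and the paint follows the mesh.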

EDIT: I had to delete some of the screenshots for content reasons, I realized. Sorry. I'll add them back later, blurred or something.


Very cool, thanks for all this detail. Will try it out.


This is dope @cnoellert! Never used the vector input in pixel spread before. Nice to get a breakdown of a use case!


Sick.


Nomination for content of the month right here.


Amazing solution. Works really well.

Only minor thing: I had to add a matte input to the Pixel Spread node to make it work (I'm on 2023, so that could be new).

Thanks again.


It'll error until you change it to vector warp, and then you should be fine, at least in 2022.3.

Glad it’s working for you

Ah, that explains it… Makes sense.

Thx


Just want to say that I'm happy I wasn't the only nukediot who was trying to track my clone brush/autopaint in :woozy_face: