Camera tracking and UV unwrap techniques


Hi @cnoellert - eight times out of ten when I use this setup, the Pixel Spread node seems to break and not work. The footage doesn't unwrap and just looks like a spiky mess. I can't troubleshoot it because I don't understand it. Is there somewhere that explains all of this?

Why 4096? Why 1 red, 1 green? Why the other settings in Pixel Spread?

Here’s a simplified setup for 2025.

First we lay down two color sources, one set to bars and the other set to STMap. Make sure the STMap is 32-bit.
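For anyone wondering what's actually in that STMap color source: it's just a ramp image where red encodes horizontal position and green encodes vertical position, 0 to 1 from the lower left. Here's a rough NumPy sketch of the idea (an illustration only, not Flame code; `make_stmap` is a made-up helper name):

```python
import numpy as np

def make_stmap(w, h):
    """Identity STMap: red = x position, green = y position, both 0..1,
    with (0, 0) in the lower-left corner. 32-bit float so there is
    enough precision to address every individual pixel."""
    xs = np.linspace(0.0, 1.0, w, dtype=np.float32)
    ys = np.linspace(0.0, 1.0, h, dtype=np.float32)
    red, green = np.meshgrid(xs, ys)
    # Image rows are stored top-down, but the STMap's green channel
    # runs bottom-up, so flip the green ramp vertically.
    return np.dstack([red, green[::-1]])

stmap = make_stmap(1920, 1080)
```

This is also why the 32-bit part matters: at 8 or even 16 bits there aren't enough distinct values to address every pixel of an HD frame, and the unwrap gets steppy.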

Next we lay down an Action and do some animation. In the output section we make sure we have a matte for our main output, and we add an additional output layer for the UV of all objects, which is just one surface in our case. I've also turned on 32-bit rendering for the Action outputs.

Next we lay down a Comp node and set it to subtract. This node creates the vectors pointing in the direction we want to unwrap. We subtract where we are from where we want to go, so the UV output from Action goes into the back input and the STMap we originally created goes into the front.
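Per pixel, that subtract is doing something like the following NumPy sketch (tiny frame, top/bottom row-order flip ignored for brevity; the shifted "UV pass" here is just a stand-in for Action's real UV render):

```python
import numpy as np

h, w = 4, 4  # tiny frame for illustration

# Identity STMap: red = x / width, green = y / height.
ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
stmap = np.dstack([xs / (w - 1), ys / (h - 1)])

# Stand-in for the UV output layer from Action: here just the
# identity STMap nudged by a quarter of the frame.
uv_pass = np.clip(stmap + 0.25, 0.0, 1.0)

# Subtract comp: back (UV pass) minus front (STMap) gives, for each
# pixel, a vector pointing from where the pixel is to where it
# should go -- exactly the "where we want to go minus where we are".
vectors = uv_pass - stmap
```

Pixels that are already where they belong get a zero-length vector, which is why the result of this comp looks mostly flat grey when the geo barely moves.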

Next we add a Pixel Spread and pipe our Action render result into the front and our vector from the Comp node into the vector input. The node will error until we set the mode to vector warp. Settings-wise this is where things get a little different, and I'll try to explain. We'll set the origin to 0,0 to start with. We do this because STMaps start in the lower left at pixel coordinates 0,0, so we need to tell Flame that's where our coordinate space is warping from. We'll then set a distance of 1. This is the maximum distance we want to travel; it represents the length of the vector we want to push our pixel along. That said, it's a little misleading, because the length of the vector gets multiplied by the gain values in the vector gain column:

distance * gain(x,y) = maximum distance a pixel can travel along the X axis or the Y axis

Why are the gain numbers different? It's actually pretty simple. Our frame's width in this example is 1920, so if I want a pixel to have the ability to travel the full frame's width, it needs to be able to move 1920 pixels in X:

1 (or distance) * 1920 (or gain(x)) = 1920

We then do the same for the Y gain, allowing a pixel to travel 1080 pixels along the Y. With those parameters in place, our frame unwraps.
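Putting the distance and gain numbers together, the per-pixel travel works out like this (a sketch of the math as described above, not Flame's actual implementation; `pixel_offset` is a made-up helper name):

```python
# Vector warp offset math: each pixel is pushed along its vector,
# scaled per axis by distance * gain.
distance = 1.0
gain_x, gain_y = 1920.0, 1080.0  # frame width and height in pixels

def pixel_offset(vec_x, vec_y):
    """Travel in pixels along each axis for one vector sample."""
    return (vec_x * distance * gain_x, vec_y * distance * gain_y)

# A vector of (1.0, 1.0) can push a pixel a full frame in both axes:
dx, dy = pixel_offset(1.0, 1.0)       # (1920.0, 1080.0)

# A vector of (0.5, -0.25) pushes half a frame in X, a quarter back in Y:
dx2, dy2 = pixel_offset(0.5, -0.25)   # (960.0, -270.0)
```

So the gains are just converting the normalized 0-1 vector values back into real pixel distances for this frame size; a UHD frame would want 3840 and 2160 instead.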

The next bits are just proof of concept. We clone the green over in a Sequence Paint…

…and then use the newly minted STMap node and our UV output from the original Action to move our painted piece and matte back into the correct position, with our choice of filtering modes…

…and lastly comp it over our action render output to complete the setup.
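For the curious, the STMap re-wrap at the end is just a per-pixel lookup: for every output pixel, read the UV pass and fetch the unwrapped texture at that coordinate. A nearest-neighbour NumPy sketch of the idea (`stmap_rewrap` is a made-up name; Flame's filtering modes do this with much nicer interpolation):

```python
import numpy as np

def stmap_rewrap(texture, uv):
    """Sample `texture` at normalized UV coordinates, nearest-neighbour.

    texture: (h, w, c) float image -- the unwrapped/painted frame
    uv:      (H, W, 2) image; red = x in 0..1, green = y in 0..1,
             lower-left origin as in an STMap
    """
    h, w = texture.shape[:2]
    x = np.clip((uv[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
    # Flip Y: image rows are stored top-down, STMap green runs bottom-up.
    y = np.clip(((1.0 - uv[..., 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return texture[y, x]
```

Feeding it an identity UV image returns the texture unchanged; feeding it the distorted UV pass from Action drops the painted pixels back onto the moving surface.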

Hope this helps man.


This refers to something I was trying to do today: I did a camera analysis, created a geo, and then projected onto it. Would the technique outlined above be able to unwrap the projection, do you think, à la Jet Li?

It depends on the complexity of the geo and its underlying UVs, and on what the camera move does. One technique you can use to circumvent most of those issues is what one often does in Nuke: not only project your RGB pixels onto the geo in question, but also project a UV map onto the geometry using a frame hold on the camera where the texel density works best for whatever task you need to complete.

Then you can use that resulting UV map to unwrap the same way as above. Does that make sense? The Jet Li demo is just a Nuke card in 3D space: unwrapping the card, painting, and putting it back. If you can get simple geo with a simple UV layout (like an image plane, or Nuke's card) into the right place in the scene, it's a great technique. Not as bulletproof as Nuke's version, but it can work a treat.


Appreciate it as always @cnoellert

I watched the Jet Li thing and thought that's what the lad did. It did seem a little too simple the way he explained it. The bit he missed out was the weeks of modelling and tracking in CG that went on before it hit Nuke.