Camera tracking and UV unwrap techniques

Checking out the new Camera Analysis now. Question for you @La_Flame. I’m really keen to reproduce the PFTrack Texture Extraction technique but in Flame.

So the Perspective Grid does what I want. Track something, set it to 2D, Invert, BOOM. You can paint on the texture of the perspective grid and un-invert downstream.

Question is, can you do this with the new Camera Analysis? I can make a Perspective Grid the child of the Flame 2022 tracker's Axis, and the Perspective Grid now moves on the same plane as the ground. Awesome. But when I go to hit the Invert button, nothing happens. Any idea how we can merge the two tools, or how I can effectively paint on an extracted texture of a Perspective Grid or geometry generated by Camera Analysis?

In the past the way I’ve gotten around this with flat surfaces has been to

  1. Position an image correctly in 3D space to the surface you want to unwrap.

  2. Project the scene onto that image with your preferred method.

  3. Parent a new camera to that image.

  4. Switch the render camera to your new camera and position it so it records as much or as little as you want to work on.

  5. Paint that output to your heart's content.

  6. Duplicate your first action, adding your painted output as a layer.

  7. Project painted output from the position of the camera parented under your correctly aligned image plane using your preferred method.

…guessing you know this already but figured I’d write it down anyway.
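For a flat surface, those seven steps amount to an invertible warp: unwrap the plane into a convenient view, paint there, and warp back. A toy NumPy sketch of that round trip, where the homography values are made up and just stand in for whatever your aligned camera/image-plane pairing produces:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography (homogeneous divide)."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]

# hypothetical homography mapping the skewed on-screen quad of the
# surface to an axis-aligned "unwrapped" rectangle (steps 1-4)
H = np.array([[1.2, 0.1, -30.0],
              [0.0, 1.1,  10.0],
              [0.0, 0.0,   1.0]])

corners = np.array([[100.0, 80.0], [500.0, 90.0],
                    [520.0, 400.0], [90.0, 380.0]])

unwrapped = apply_homography(H, corners)              # paint happens here (step 5)
back = apply_homography(np.linalg.inv(H), unwrapped)  # steps 6-7: re-project

assert np.allclose(back, corners)  # the round trip lands exactly where it started
```

The point of the assert is the whole appeal of the technique: because the warp is invertible, you know your paint will go back onto the plate correctly.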


This sounds like a much smarter version of what I do, which is to position spheres in space, feed out their matte, and track those with a downstream Perspective Grid (set to 4-point), paint, invert the grid, etc.

It felt smart, but I’ve always suspected there was a smarter way.


Like most things, it's whatever works. I like the projection technique since it takes the guesswork out of things. You know it will go back correctly, and you have control over texel density by virtue of your camera position and resolution. @kirk
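On the texel-density point: for a camera looking square-on at the surface, the texels-per-scene-unit you record is just the render resolution divided by the footprint the frame covers on the plane, so moving the camera closer or rendering bigger both buy you density. A rough back-of-the-envelope sketch (the numbers here are arbitrary):

```python
import math

def texel_density(res_px, fov_deg, distance):
    """Texels per scene unit for a camera square-on to a plane.

    The plane footprint covered by the frame is 2 * d * tan(fov / 2),
    so density = resolution / footprint.
    """
    footprint = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    return res_px / footprint

d_far = texel_density(1920, 45.0, 1000.0)
d_near = texel_density(1920, 45.0, 500.0)

# halving the distance doubles the texel density
assert abs(d_near - 2.0 * d_far) < 1e-9
```

Same idea holds for resolution: doubling the render width at a fixed position doubles the density, which is why the recording-camera trick gives you fine control over how much detail you paint at.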


Oh! That’s a cool other option. Nice tip. I’m totally going to try that if I get into a corner where the solve doesn’t work.


So that’s how the Flamists™ do it, eh?

Thanks for the notes @cnoellert. I’ve always thought about doing it that way, but for some reason always got tripped up…UNTIL NOW. Thank you for that write up. I was neglecting a step and then…Law and Order dun dun…I was caught out by the Projector Default Z set to 500. Doh. Thank you for that!

So, I did precisely what you mentioned. And I just need to get my head wrapped around manually moving a camera around to view something. I’ve always struggled with the Flame 3D system and bailed early on, but the new Camera Analysis has me optimistic, as there is now a way of generating actually helpful geometry.


@randy, there’s another method to do what you’re after based on a technique @DannyYoon adapted for flame some time back. It’s pretty clever but not for the faint of heart…

Here’s a demo setup I put together if you’re interested.


Ever since I saw that Jet Li Nuke Demo of UV unwrapping I’ve wanted it in flame. Such a powerful tool.


@cnoellert Funny, but I’ve never used the UV Unwrap that much. To give credit where it’s due…I started with a rig that Lewis Saunders made:

From Logik Facebook:

Projection Map to UV Map Setup

Hey guys…there was a recent thread about UV unwrapping and Lewis Saunders reposted his awesome setup.

I improved it by increasing the UV texture resolution and adding a blur node to get rid of the artifacts in the original setup. Also, the action setup has examples of an image plane and the original Sphere (which is hidden). Just to show different scenarios of projecting onto an image plane or a 3d object. There’s also a Paint node in there as a starting place for you to fix the UV texture.

Why is this cool? Ever get stuck when you’re using a tracked 3d camera and projection maps, and then during the move you start seeing nasty warps and/or cropping? This setup might help! You can unwrap the projection texture and clone or paint the problem areas.

Thanks Lewis!

The key to the unwrap is that you literally subtract the UV pass from a UV Gradient still. That gives you the “difference” vectors that will unwrap the footage via Pixel Spread’s vector mode. It’s simple math! That’s the genius of Lewis’ setup.
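To make the subtraction concrete, here's a toy NumPy version of the vector math, not the Flame setup itself. Pixel Spread in vector mode pulls each output pixel from the source displaced by the per-pixel vector, so with vectors = pass - gradient, each output pixel lands exactly where the pass points. The "flip" pass below is just a stand-in for whatever the real rig renders:

```python
import numpy as np

def pixel_spread(src, vectors):
    """Pixel Spread, vector mode: each output pixel pulls the source
    pixel displaced by the per-pixel (dx, dy) vector."""
    h, w = src.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + vectors[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + vectors[..., 1]).astype(int), 0, h - 1)
    return src[sy, sx]

h, w = 4, 6
ys, xs = np.mgrid[0:h, 0:w]

# identity UV Gradient still: pixel (x, y) simply stores (x, y)
gradient = np.stack([xs, ys], axis=-1).astype(float)

# toy "UV pass": a horizontal flip, standing in for the positions
# rendered through the tracked camera in the real setup
uv_pass = gradient.copy()
uv_pass[..., 0] = (w - 1) - gradient[..., 0]

vectors = uv_pass - gradient          # Lewis' subtraction trick
src = np.arange(h * w).reshape(h, w)  # toy footage

unwrapped = pixel_spread(src, vectors)
assert np.array_equal(unwrapped, src[:, ::-1])  # each pixel lands where the pass says
```

Because x + (pass_x - x) = pass_x, the displacement cancels the pixel's own position and the spread samples straight from the pass coordinates, which is exactly why subtracting the gradient is all the "math" the rig needs.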

One type of shot this is good for is if the plane you’re fixing has smoke or moving shadows, but I guess you could just use the perspective grid trick for that.

Maybe if I have time, I will post a setup using the new 3d tracking in 2022, but you can just look at the UVUnwrap rig and it has an example of using Action 3D geo.


I use it in PFTrack all…the…time…and it’s amazing. And it’s so simple and fast. I’ve downloaded the setup @DannyYoon and @cnoellert. I’m following along and must be doing something silly, as my Pixel Spread is yielding close but not quite correct results. I’ll check it out in the morning, but the “extract the problem area with an image” part has me a little stumped. All you really need to do, I think, is select a damn part of the frame, right? Like a roto, or key, or projection, and output the UV and matte, which then subtract from the UV Gradient, right?

Thanks for posting.


It’s interesting, because I use unwrapping all the time in Nuke (ScanlineRender-wise, that is) and almost never in Flame, because it tends to be such a pain. Just throwing out a camera, recording the image from the angle that makes the most sense, and projecting that back on tends to have the most mileage for 3D solves, but most of the time the Mocha unwrap shot is the way. In Nuke the setup is so simple there’s no reason not to use it…

Different strokes…

Another fun thing I’ve done with putting 3D tracking markers in space is pin downstream bicubics to them, copy to uvs, and then warp/straighten things without a lot of hassle.

Well, once. I did this once. And it was only after scouring the internet for @john-geehreng’s tutorial on Distort node stabilization. He’d shown a version of it a while back at a user group and I hadn’t had the presence of mind to take notes. It seems like that functionality has since crept into other tools, though.

More to the point, the new 3D tracker is super cool. I’m excited about it!

Hi Randy,

The only change in that workflow is the Camera Tracker that puts out a tracked camera and Axis. Everything should be the same.

We would need to look at the workflow closer to see what is different.

But I see that others may have already found a solution for you.



Thanks @La_Flame! Yes, there are other solves, but the Perspective Grid 2D/Invert workflow is sooo slick…for some reason I can’t figure out how to make it work in a camera-tracked scene…any wisdom? I must be doing something wrong.



“solves” is a verb.



Thanks for the demo Chris!


That’s funny, I didn’t realize RISD had an English program.



They did!—and are the reason it took me 5.5 years to finish college—I failed those classes like it was my job.

The language nerd bit came later, lying dormant until I saw “architect” used as a verb.


So no camera solves, only camera solutions? :wink: