Not Randy's Black Belt Camera Projection

Thanks @BrittCiampa! I have not played around with adding cards. My time in syntheyes these days is pretty limited, and I tend to track shots before knowing precisely how they’re going to be tackled (or by whom, frankly).

Worth a look, though!

And as mentioned somewhere else, Danny’s UV unwrap technique combined with a Syntheyes object track and fairly accurate geometry can do some very exciting things, cleanup-wise.

As a technique, this is a great jumping off point to all kinds of tricks. And I very much SHOULD have mentioned earlier that it builds upon techniques demonstrated long ago by @ALan and @La_Flame, so thank you to both of those fine folks.

2 Likes

@kirk I think you’ll dig it! Once you’re in “add card” mode in the perspective view (I set it to “locked” for this because it’s just easier for me to see the plane), just lasso-select locators on the planar surface, and boom: a perfectly (hopefully) aligned card in like 10 seconds.

3 Likes

Yo @kirk I’ll take some of that sugar… :wink:

7 Likes

Ooh! I think my days of adding a plane and rotating it in 3 dimensions until it lines up with four tracker points are coming to an end! Thanks, Britt!

2 Likes

It’s worth trying the Flame 3D track. With the point cloud it creates, the axis you use for the card is automatically aligned, no need to do any 3 dimensional rotation. You select a number of points in the point cloud that sit in the right place, Flame averages out the positions and automatically lines up the card. Plus you don’t have to leave Flame and import something. Worst case it’s super close and you can fine tune it.
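
In case it helps to picture what that averaging amounts to, here’s a rough sketch of the usual math for fitting a card to a handful of selected point-cloud positions. This isn’t Flame’s actual code, just the standard centroid-plus-best-fit-plane approach, with made-up example points:

```python
import numpy as np

def fit_plane(points):
    """points: (N, 3) array of selected tracker positions.
    Returns (centroid, normal) of the best-fit plane."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)               # card position = average of the points
    _, _, vt = np.linalg.svd(pts - centroid)  # principal axes of the point spread
    normal = vt[-1]                           # smallest-variance direction = plane normal
    return centroid, normal

# Example: four roughly coplanar tracker points (hypothetical values)
pts = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.02], [1.0, 1.0, -0.01], [0.0, 1.0, 0.0]]
center, normal = fit_plane(pts)
print(center, normal)
```

The centroid gives you where the card sits and the normal gives you its orientation, which is why selecting a few well-placed points is usually all it takes.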

May not work for all shots. SynthEyes and 3DE are better trackers. But for your average everyday stuff it may totally be good enough.

1 Like

I’ll second that. Flame’s camera trackers are massively underrated in my opinion. Both trackers have built-in functionality to align a plane with any number of points. In practice I find that the new tracker usually struggles to align the plane correctly, but the mono analyzer is almost always rock solid.

Also I find there is usually no need to align the card perfectly with the object. I’ll often pipe the stabilized image into another action and rectify with a perspective grid instead.

Fantastic technique. I didn’t know it. I’m always stabilizing (2D Transform or perspective grid) even for easy tasks; everything is easier with a pre-stabilized shot, and I’d always missed a stabilizing technique based on a 3D camera track. The point is (maybe this comment should go in the “Today I learned” thread) that today, after almost 20 years, I realized there are two camera nodes: a “camera” and a “3D camera” :face_with_open_eyes_and_hand_over_mouth:

In your defense, the 3d camera node didn’t go in until the rewrite I think? It hasn’t been there the whole time.

2 Likes

Yeah, it got introduced to add compatibility with FBX cameras if I’m not mistaken, sometime around 2012.

There’s a nice hack using @MikeV’s Axis to Locator python script for those moments when you don’t have locators in the right place.

  1. Use the @lewis Find a point technique to locate the four corners of your surface (or really just three).
  2. Convert each of the axes you’ve located to a point locator using the python script.
  3. Add a new axis per locator and snap each axis’s position to its locator.
  4. Box-select all the axes you’ve snapped to the locators and convert those to locators.
  5. A single locator node is created; change it to a plane transform and parent an image beneath it.

MikeV’s script creates locators based on the selected axes’ positions relative to their parent, not in world space. Luckily, the locator that is created is auto-parented to the same parent axis as the selected axis at creation. You can then create a new axis and snap it in world space to that locator, thereby generating the world-space coordinate you need at each corner to correctly complete the planar transform. Select all those world-space axes, convert them to a single point cloud and you’re home. It sounds convoluted, but it works really well and means you’re not at the mercy of whatever arbitrary points you happen to have as locators in the point cloud.
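
If it helps to picture what the world-space snap is actually doing, the coordinate math boils down to pushing the locator’s parent-relative position through the parent axis’s transform. A minimal sketch of that step in plain numpy (not the actual script or the Flame API, and the example matrix is made up):

```python
import numpy as np

def local_to_world(parent_matrix, local_pos):
    """parent_matrix: 4x4 world transform of the parent axis.
    local_pos: (x, y, z) position stored relative to that parent.
    Returns the world-space position you snap the new axis to."""
    p = np.array([*local_pos, 1.0])   # homogeneous coordinates
    return (parent_matrix @ p)[:3]

# Hypothetical parent axis: translated and rotated 90 degrees around Y
parent = np.array([
    [ 0.0, 0.0, 1.0, 10.0],
    [ 0.0, 1.0, 0.0,  0.0],
    [-1.0, 0.0, 0.0,  5.0],
    [ 0.0, 0.0, 0.0,  1.0],
])
print(local_to_world(parent, (2.0, 0.0, 0.0)))  # -> world-space corner position
```

That per-corner world position is exactly what the snapped axes capture before you collapse them into the single point cloud.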

7 Likes

@kirk thank you so much for the video. I’m an idiot; it was just the camera node, not the camera3d, that I totally overlooked.

I think we’d all love to select some points and create a card like in Nuke, but that workaround from @cnoellert sounds interesting too.

Stoked on this. Thank you

3 Likes

Happens a lot! To very smart people! That’s why I made a point of mentioning it.

I’m glad you got it working. After showing it to a colleague a few months back I was very pleased to hear him yell out his door, “IT’S LIKE CHEATING!!!” I hope your experience is similar.

2 Likes

Since a video is better than many words on Discord - here’s a demonstration with the Camera Analysis 3D tracker inside Flame: Flame - 3D Projection with internal 3D tracker - YouTube

Maybe it adds a useful tidbit to the conversation.

5 Likes

While perusing the net I found this little gem of a toolset for Nuke, which, amongst a lot of super useful concatenation hacks and other transformation converters, would allow a card to be converted to a cornerpin which you could then invert. So smooth….

Lovely what having access to the matrix data for transforms allows you to do.
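
To make that a bit more concrete: card-to-cornerpin is essentially projecting the card’s four corner points through the camera’s world and projection matrices into pixel space, and using the results as the pin’s “to” corners. A minimal sketch of that math, with an assumed camera and focal term rather than anything from the toolset itself:

```python
import numpy as np

def project_to_pixels(world_pt, world_to_cam, projection, width, height):
    """Project a world-space card corner into pixel coordinates."""
    p = np.array([*world_pt, 1.0])
    clip = projection @ (world_to_cam @ p)   # into clip space
    ndc = clip[:3] / clip[3]                 # perspective divide -> [-1, 1]
    x = (ndc[0] * 0.5 + 0.5) * width         # NDC -> pixels
    y = (ndc[1] * 0.5 + 0.5) * height
    return x, y

f = 1.5  # hypothetical focal term, chosen for the example
proj = np.array([
    [f, 0,  0.0,  0.0],
    [0, f,  0.0,  0.0],
    [0, 0, -1.0, -0.2],
    [0, 0, -1.0,  0.0],
])
corners = [(-1, -1, -5), (1, -1, -5), (1, 1, -5), (-1, 1, -5)]  # card corners in world space
pins = [project_to_pixels(c, np.eye(4), proj, 1920, 1080) for c in corners]
print(pins)  # the four cornerpin "to" points
```

Do that per frame and invert the cornerpin, and you have the stabilize/comp/restore round-trip without leaving 2D.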

5 Likes

The Matrix utility is awesome in Nuke, you can do a lot with it, but remember you also get a UV output from the ScanlineRender node. For how common this type of work is in Flame projects, it does seem like a way overdue feature in Action.

The interesting thing is that for surfaces the output is already there when you are in UV/Vertex mode. Adsk just need to pipe it to Action’s output.
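
For anyone who hasn’t leaned on a UV pass before, the idea is simple: each output pixel stores the texture coordinate it should pull from, and an STMap-style lookup then resamples the source through it. A rough nearest-neighbour sketch of that lookup (conventions around filtering and whether V needs flipping vary between apps, so treat this as illustrative only):

```python
import numpy as np

def apply_st_map(source, uv):
    """source: (H, W, 3) image; uv: (H, W, 2) with U/V in [0, 1].
    Returns the source resampled through the UV pass (nearest neighbour).
    Note: some apps store V flipped; adjust for your convention."""
    sh, sw = source.shape[:2]
    u = np.clip((uv[..., 0] * (sw - 1)).round().astype(int), 0, sw - 1)
    v = np.clip((uv[..., 1] * (sh - 1)).round().astype(int), 0, sh - 1)
    return source[v, u]

# An identity UV pass leaves the image untouched; a warped pass re-projects it
src = np.random.rand(1080, 1920, 3)
uu, vv = np.meshgrid(np.linspace(0, 1, 1920), np.linspace(0, 1, 1080))
identity_uv = np.dstack([uu, vv])
assert np.allclose(apply_st_map(src, identity_uv), src)
```

Which is why exposing that pass from Action would cover so much of this stabilize-and-restore work in one go.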

2 Likes

This feature request makes exactly this point and is 6 years old. It has 30 votes. Maybe it needs a little love and persuasion?

7 Likes

31 now…

2 Likes

Voted.

2 Likes

Voted.

1 Like

Thanks for this Kirk! Really appreciate it!

2 Likes