I’ve got some footage shot with an extreme fisheye lens, and I need to replace the artwork on a billboard in the shot. I’d like to find some way to un-fisheye the footage to make the replacement easier, then re-fisheye the result. I understand that such extreme lensing will mean quality loss from all the stretching, etc., but I’m not concerned about that. Any suggestions? I’m also open to tracking solutions outside of Flame. I have the lens info (iZugar MKX-22) but no distortion grid.
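Conceptually, the un-fisheye / re-fisheye step is an invertible radial remap of each pixel's distance from the optical centre. A minimal sketch, assuming a simple equidistant model (image radius r = f·θ), which is an assumption — I don't have the MKX-22's actual distortion polynomial:

```python
import numpy as np

# Equidistant fisheye approximation: image radius r_fish = f * theta,
# where theta is the angle off the optical axis. A rectilinear (pinhole)
# lens would instead give r_rect = f * tan(theta).

def fisheye_to_rectilinear(r_fish, f):
    """Un-fisheye: map a fisheye image radius to the rectilinear radius."""
    theta = r_fish / f
    return f * np.tan(theta)

def rectilinear_to_fisheye(r_rect, f):
    """Re-fisheye: the exact inverse of the mapping above."""
    theta = np.arctan2(r_rect, f)
    return f * theta

f = 800.0  # hypothetical focal length in pixels
r = np.array([0.0, 100.0, 400.0])
round_trip = rectilinear_to_fisheye(fisheye_to_rectilinear(r, f), f)
# round_trip matches r, so the comp survives the out-and-back remap.
```

In practice you'd drive a full 2D ST-map or UV remap with these radial functions; the point is just that the forward and inverse mappings are exact, so the only loss is resampling quality.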
This got me wondering whether you could project your billboard GFX onto a transparent sphere, tracked to match the forward motion of the lens and shaped to replicate the lens curvature?
Not sure how it would work on moving footage, but I just tried the FX nodes / Map Convert, and the Input Format option gave some encouraging results on the screengrab, using Spheric input and Cubic output.
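For anyone curious what that Spheric-to-Cubic conversion is doing under the hood: each cube face is resampled by casting a ray through every face pixel and looking up the corresponding longitude/latitude in the spherical image. A tiny sketch of the front-face mapping (my own illustration of the math, not Map Convert's internals):

```python
import numpy as np

def front_face_to_latlong(u, v):
    """Map front cube-face coords (u, v) in [-1, 1] to (longitude, latitude)
    in radians on a spherical (lat/long) image."""
    x, y, z = u, v, 1.0                   # ray through the front face plane at z = 1
    lon = np.arctan2(x, z)                # angle left/right of the forward axis
    lat = np.arctan2(y, np.hypot(x, z))   # angle above/below the horizon
    return lon, lat

# The centre of the front face looks straight ahead:
lon, lat = front_face_to_latlong(0.0, 0.0)  # -> (0.0, 0.0)
```

The other five faces are the same formula with the axes swapped and signed, which is why the conversion is lossless apart from filtering.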
I have a “repo-master” setup for extended bicubics. It has 2 Action nodes, each with a 16-celled extended bicubic: you can tweak one and the other reverses the tweak.
Another possibility is to use SynthEyes. There are a couple of tutorials on their site on tracking fisheye and latlong footage, and also one on calculating the distortion of a fisheye lens.
This looks like one half of a 360 camera. I had a shot recently where we had to do a bunch of work on something like this. Not what you want to hear but Nuke has some great tools for dealing with footage like this.
Converting between maps would be the way to go. When I get into the office I can look up what the settings are for the sensor and FOV.
You might also want to check out Jeron’s 360 VR setup that he posted on FB a loooong time ago. You might be able to hijack that to output something trackable and then re-map back to the original image.
So…in Nuke you can use a SphericalTransform node with a Rectilinear projection to basically aim at something and have it undistorted and looking normal. You can link these so one undoes what the other did and brings you back to your original.
We don’t really have a tool quite like that but I had a look at what Jeron did ages ago and then stuck your footage in. It’ll output the 6 cameras…so in theory you could animate the “aim_here” node while viewing your front camera to get you something nicer to work with. You should be able to invert aim_here and get back to where you started. Below is the setup.
EDIT: I just tried the inverse and it’s not working like I thought. I should have checked before posting. You can offset one axis and it works, but the second you add another offset it throws things off. I’ve updated the setup below. You have to adjust each axis on its own, chain those together, and then do the inverse. A bit of a pain, but I checked it this time and it works. In this case, you’d want to work on output2.
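That one-axis-at-a-time behaviour matches how rotations compose: single-axis rotations don’t commute, so the inverse has to undo them in reverse order, which is exactly what chaining each axis separately and inverting that chain achieves. A quick NumPy illustration (angles are arbitrary):

```python
import numpy as np

def rot_x(a):
    """Rotation matrix about the x axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    """Rotation matrix about the y axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Forward chain: rotate about x, then about y.
A = rot_x(0.4) @ rot_y(0.7)

# Correct inverse: undo y first, then x (reverse order, negated angles).
inv = rot_y(-0.7) @ rot_x(-0.4)
# A @ inv is the identity, but applying rot_x(-0.4) @ rot_y(-0.7)
# (same order as the forward chain) does NOT undo A.
```

That’s why a single node with combined offsets on two axes won’t invert cleanly, while a chain of single-axis nodes inverted in reverse order will.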