I was bored on a Saturday night and decided to watch The Revenant (because who doesn’t wanna see a bear maul Leo DiCaprio?), and there’s some heavy lens distortion in there that got me thinking: why do we have such an archaic lens undistort/redistort workflow in Flame?
When I have to work with really heavy distortion, I default to SynthEyes and STMaps, and that’s fine I suppose. But it is a curiosity to me that Flame just doesn’t have a good option to undistort and redistort.
Is this something that deserves some feature request love, or is it just accepted as not being part of the software we use? Granted, I don’t often use a distortion workflow for the work I’m doing, but I’d probably use it more if it were within Flame. And I’d like to add: I love the forward thinking regarding machine learning (and I can’t wait to see where it gets to), but I gotta say, I’ve definitely needed to undistort a plate more often than I’ve ever needed a depth matte of a face. At the end of the day, I think I’m just curious why this facet of compositing seems to be conspicuously overlooked in Flame. Any thoughts? Or perhaps there is a good workflow that I just don’t know about?
Completely agree that this would make a much bigger day-to-day impact than some of the other ML tools, and I would imagine that ML could be used to solve lens distortion.
The one nice thing about the STMap workflow is that it’s usable across different apps (e.g. Nuke), but if there were a user-friendly way to generate, apply, and export STMaps for distortion from Flame, that would be a huge improvement over today.
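Part of why STMaps travel so well between apps is that the mechanics are trivial: the map is just an image whose red and green channels hold normalised source coordinates, and applying it is a per-pixel lookup. A minimal numpy sketch (nearest-neighbour only, and assuming Nuke’s bottom-up V convention):

```python
import numpy as np

def apply_stmap(src, stmap):
    """Warp src through an STMap (nearest-neighbour for brevity).

    src:   H x W x C float array (the plate)
    stmap: H x W x 2 float array; channel 0 = U, channel 1 = V,
           both normalised 0-1 with V measured from the bottom
           (the usual Nuke convention)."""
    h, w = stmap.shape[:2]
    u = stmap[..., 0] * (w - 1)
    v = (1.0 - stmap[..., 1]) * (h - 1)  # flip V: array rows run top-down
    x = np.clip(np.round(u).astype(int), 0, w - 1)
    y = np.clip(np.round(v).astype(int), 0, h - 1)
    return src[y, x]
```

A real implementation would filter (bilinear at minimum) rather than snap to the nearest pixel, but the lookup itself is all an STMap is, which is why every package can apply one.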
This is really driving me crazy at the moment. We totally need a better solution!
Using lens distortion grid analysis from Nuke and getting two STMaps back:
–I can remove the distortion and get an acceptable round trip, but without overscan protecting all of the pixels.
–I can have overscan and a round trip, but the removal of the distortion doesn’t match the one without overscan.
It is SO UNSATISFYING in Flame. I thought I had a technique that was working, but anamorphic really throws that under a bus. It must be accumulating degrees of error, and anamorphic really pushes it.
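That “degrees of error” hunch is easy to demonstrate: an undistort built as only an approximate inverse of the redistort never composes back to a perfect identity, and the residual grows towards the edges of frame, exactly where anamorphic distortion is strongest. A toy numpy sketch with a made-up radial model (the coefficient k is invented for illustration, not from any real lens):

```python
import numpy as np

def radial(uv, k):
    """Push normalised UVs radially about the centre: r' = r * (1 + k*r^2)."""
    c = uv - 0.5
    r2 = (c ** 2).sum(axis=-1, keepdims=True)
    return 0.5 + c * (1 + k * r2)

h, w = 100, 100
ys, xs = np.mgrid[0:h, 0:w]
uv = np.dstack([xs / (w - 1), ys / (h - 1)])

k = 0.1
undistorted = radial(uv, -k)         # naive inverse: just negate the coefficient
round_trip = radial(undistorted, k)  # then re-apply the forward distortion

# Negating the model is only a first-order inverse, so the round trip
# drifts from identity, worst at the corners.
err_px = np.abs(round_trip - uv).max() * (w - 1)
print(f"max round-trip error: {err_px:.3f} px")
```

Proper STMap pairs are built by numerically inverting the model rather than negating it, which is why a well-matched pair round-trips cleanly and a mismatched pair doesn’t.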
Based on your schematic alone it looks like your overscan UV map isn’t the same resolution as your overscanned plate, which it ought to be. Are you importing it using display window or data window?
Something was jacked there with your STMaps, so I remade them (quickly), double-checked the output in Flame, and then packaged it all up and sent it back, including the .nk file and media. I haven’t bothered to double-check filtering in the Action nodes, so it would be wise to take a look. One thing that I found odd was the pixel offset I needed to add to the diffuse to get things to line up: 3.0, -3.0. Strange. Anyway, it’s all here @PlaceYourBetts →
I have experimented with display and data window (that was a revelation when I discovered it!), but I have been able to use a display resolution that is smaller than my overscan resolution. It uses the coordinates of the original res and pushes the image out beyond, into the larger canvas area (input 2).
I’ll investigate a little further today; it got a little late for my geriatric self yesterday. If it’s Flame and its UV handling, then we can alert the troops.
I find the UVWarp32 matchbox to be very good, but the internal tools are, um, less than ideal. I’ve had some luck with the Lens Distort node. I’ve also had a lot of not-luck.
Things Flame should be able to do:
–Undistort and redistort plates correctly and easily.
–Generate UV maps from the Lens Distort node, should we want to be leads on a project and share the maps with non-Flame software.
This is one task where I’ve never really understood why there isn’t a good inbuilt tool in Flame.
If there is a really solid way of dealing with lens distortion with inbuilt tools could it please be shared?! I’ve always gone to external tools when I’ve needed to do it (there is this software called Nuke that does it really well, not sure if you have heard of it before).
I’m certainly not someone who does it very often, but there are shots where you just need to do it, especially when an element you’ve tracked on enters frame through one of the corners of the frame.
3DE creates a little series of Matchboxes that overscans, undistorts, redistorts, then crops back to native res. About the neatest way to do it in Flame. Makes no sense that you can’t do that natively, the hopeless Lens Distort tool notwithstanding.
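The overscan those Matchboxes add can actually be derived from the undistort STMap itself: any UV values outside 0–1 say how far the warp reaches beyond frame, and therefore how much padding protects every pixel. A sketch along those lines (overscan_pad is a hypothetical helper, assuming normalised UVs with V measured from the bottom):

```python
import numpy as np

def overscan_pad(undistort_stmap, w, h):
    """Return (left, right, bottom, top) padding in pixels so that an
    undistort through this STMap keeps every source pixel. UV values
    outside 0-1 mark how far the warp samples beyond the frame."""
    u, v = undistort_stmap[..., 0], undistort_stmap[..., 1]
    pad_l = max(0.0, -u.min()) * w
    pad_r = max(0.0, u.max() - 1.0) * w
    pad_b = max(0.0, -v.min()) * h
    pad_t = max(0.0, v.max() - 1.0) * h
    return tuple(int(np.ceil(p)) for p in (pad_l, pad_r, pad_b, pad_t))
```

Round the pads up to whatever resolution granularity your pipeline wants (even numbers, multiples of 16, etc.); the point is the STMap already knows how much overscan it needs.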
Thanks for sharing that Nuke script. I learnt a lot.
I didn’t know that the STMap output was carrying both the undistort and redistort values. The Shuffle node is really cool. We were doing something a little strange, in hindsight.
I found something that works now.
We can expect our STMaps to be supplied at source res, with no accounting for the loss of pixels due to the undistort, and it is this bit I have been trying to solve.
Again, Nuke does it very nicely by keeping data in the bounding box, so it was a very nice discovery to find that those UD STMaps have hidden data. You get a change in resolution if you select Data Window instead of Display Window on EXR import:
This STMap will now undistort and retain all pixels
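The resolution change is down to how EXR headers work: every EXR carries both a displayWindow (the nominal frame) and a dataWindow (the pixels actually stored), and both boxes are inclusive. When the data window is bigger than the display window, importing by Data Window hands you those hidden overscan pixels. A quick sketch with hypothetical numbers:

```python
def window_size(box):
    """Pixel size of an EXR window given ((min_x, min_y), (max_x, max_y)).
    EXR box coordinates are inclusive, hence the +1."""
    (x0, y0), (x1, y1) = box
    return (x1 - x0 + 1, y1 - y0 + 1)

# Hypothetical values: a 1920x1080 display window whose data window
# hides 60 px of overscan on every side.
display_window = ((0, 0), (1919, 1079))
data_window = ((-60, -60), (1979, 1139))

print(window_size(display_window))  # (1920, 1080)
print(window_size(data_window))     # (2040, 1200)
```

So the “hidden data” was always in the file; the Display Window import was simply cropping it back to the nominal frame.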
The redistort STMap doesn’t like to play with that new resolution, so we use Action to apply the UV and then apply the overscan UD as a diffuse map, like so:
I do find this workflow @cnoellert showed me to be solid. We’ve had a few times where it hasn’t worked due to incorrect STMaps (they didn’t work in Nuke either).
I did not know this. Is there anything in particular that makes Nuke better than Mocha Pro for this? Since I already have it, I’d probably go with Mocha.
If you’ve got lens grids Nuke’s pretty failsafe… you can basically one-click your way to a pretty good result.
Mocha’s process is a little more involved but with some effort can give workable results. I’ve used both and in a pinch (laziness really) I’ve undistorted in Mocha and it works just fine.