That’s pretty interesting and hopefully just the beginning.
Back when I was staff, I had the idea to train a model on our 3D renders and source footage to do pretty much this, with the main objective of taking gen-AI content (or stock footage, for that matter) and bringing it into Log space with more detail in the highlights and shadows.
Sadly, I never followed through with it; I think the main issue was not being able to output high bit depths, since the route wasn't via Nuke.
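For what it's worth, the "bring it into Log space at high bit depth" step doesn't strictly need Nuke. Here's a rough sketch in Python/NumPy of what I mean, assuming an 8-bit sRGB frame as input; the log curve is a toy stand-in for a real published camera curve (ARRI LogC, Sony S-Log3, etc.), and the high-bit-depth file write (e.g. EXR via OpenEXR or OpenImageIO) is only mentioned in a comment, not implemented:

```python
import numpy as np

def srgb_to_linear(srgb):
    """Invert the sRGB transfer function (IEC 61966-2-1) to get linear light."""
    srgb = np.asarray(srgb, dtype=np.float32)
    return np.where(srgb <= 0.04045, srgb / 12.92, ((srgb + 0.055) / 1.055) ** 2.4)

def linear_to_log(lin, offset=0.01):
    """Toy log encoding mapping linear [0, 1] to [0, 1].

    Real camera curves have published formulas; this illustrative curve
    just compresses highlights in the same general way.
    """
    lin = np.asarray(lin, dtype=np.float32)
    lo, hi = np.log2(offset), np.log2(1.0 + offset)
    return ((np.log2(lin + offset) - lo) / (hi - lo)).astype(np.float32)

# A stand-in 8-bit "gen AI" frame: 0-255, display-referred sRGB.
frame_8bit = np.random.default_rng(0).integers(0, 256, (4, 4, 3), dtype=np.uint8)

# Keep everything float32 from here on so recovered highlight/shadow
# detail isn't quantized back to 8 bits; writing this array out as a
# half/float EXR would preserve that precision for comp.
log_frame = linear_to_log(srgb_to_linear(frame_8bit / 255.0))
```

The point being that the bit-depth bottleneck is just a matter of staying in float and writing a float image format at the end, independent of whether the model itself runs inside Nuke.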
I feel like there’s an opportunity out there, and I wonder if any companies are already exploring it. Then again, I’m guessing that if you did spend all the time and resources to build it, odds are you wouldn’t go around sharing it.
Interesting, we were talking about exactly this as a possibility in the Patrons Q&A chat on Sunday. I hope it becomes available as a service at some point; it would be great for AI work.