NeRF development

There are some incredible NeRF tools out there. This is a new one that looks pretty mad!! HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces

I’ve posted something similar before, but how awesome would it be to have some kind of tool/allowance within Flame to build 3D environments that could also take in camera data. I get the feeling that this will more likely integrate with something like Unreal, but how cool would it be for a VFX Supervisor to whip around an environment with a camera when there is a lot of cleanup work? Or to utilise NeRF between two separate camera takes to enable better blending between them with a new camera? Or to have some kind of tool that can help remove an object/person/whatever from an environment more cleverly by comparing what the camera is seeing with the NeRF-created environment and replacing it? Then there could be things like relighting, depth-based compositing, etc. based on NeRF data.

I’d love some better depth-based and volumetric tools within Flame. Some of you will say, get CG to do that, but there are now so many situations where it would be helpful to have better tools to composite within a 3D environment. Nuke and Fusion both seem to have this better covered than Flame does.

I’m going to make a bit of a prediction here: at some stage in the next few years, Unreal Engine will become a competitor to Flame & Nuke in the compositing world. With realtime tools such as EmberGen able to send VDBs into Unreal and get realtime results, plus a few more improved 2D compositing tools, it could be a compelling option. I know of one developer who has gone to SideFX to work on better 2D tools for Houdini, but I have seen a few things and posts that make me think Unreal might be doing something similar. As it is a close-to-realtime engine, it would be an intriguing proposition.

Which leads me back to Flame. In the 90s and early 00s, Flame used to bring out tools that blew your mind. I’d love it to bring out some left-field tools that do that again and bring back some buzz about Flame. It would be great to be a market leader on some of these new developments. I’m just not sure there is the time or the resources for Flame dev that there used to be. Nothing against those folks though, they are still doing an amazing job with Flame.

1 Like

I’m in the middle of a sizable comp job that includes a moco camera track, multi-pass, lots of CG elements with AOVs, depth, and lots of cleanup and object tracking.

For unrelated reasons we ended up doing it in Nuke. And it has served us well.

One thing I’m looking to do is find the Flame equivalent of all the things we had to do, for my own education and practice. I know some equivalents exist, but they aren’t as straightforward. I know some don’t work, like Keentools.

But I agree that, on the surface, Nuke is better positioned for this, and I can totally see Unreal pulling ahead with all the energy and mindshare behind it.

That said, sharing the Nuke script within the team was challenging; we ended up with a serious bug that crashed Nuke and required some work to be redone. Foundry reproduced it and will fix it in a future version. And render and playback performance is abysmal.

I would love for Flame to be usable for this next time.

2 Likes

I agree @AdamArcher.

Having some additional tools for dealing with point clouds, whether it be NeRF, Gaussian Splats, .ply files or lidar scans, would be great and would allow for some interesting effects.

I’ve used Atomize for hacking lidar scans into Flame before and it’s amazingly fast to render, but it struggles when you try to apply colour information. So maybe allowing the import of .ply with vertex colours would be a good first step.
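For what it’s worth, the colour data is already sitting in those files. Here’s a minimal sketch of what an importer would need to read, assuming the plyfile Python package and a made-up file name; it just pulls positions and per-vertex colours:

```python
# Minimal sketch: reading a .ply point cloud with per-vertex colours.
# "plyfile" and "scan.ply" are illustrative assumptions, not part of any Flame workflow.
import numpy as np
from plyfile import PlyData

ply = PlyData.read("scan.ply")
verts = ply["vertex"]

# XYZ positions as an (N, 3) array
xyz = np.column_stack([verts["x"], verts["y"], verts["z"]])

# Per-vertex colours, if the scan carries them (usually 8-bit red/green/blue)
if all(c in verts.data.dtype.names for c in ("red", "green", "blue")):
    rgb = np.column_stack([verts["red"], verts["green"], verts["blue"]]) / 255.0
else:
    rgb = np.ones_like(xyz)  # no colour channels: fall back to white points

print(f"{len(xyz)} points, colour range {rgb.min():.2f}-{rgb.max():.2f}")
```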

1 Like

NeRF and Gaussian Splats are currently being used heavily in VFX. You may see some articles coming up mentioning them in the VFX breakdowns of big-budget features. I have seen some of the examples Adam mentions working really well. Whether it is within a comp app or a standalone tool, it is a system worth understanding and learning.

2 Likes

I’m almost certain that it would be more efficient to look at Gaussian splatting, since Action is 3D and supports particles with sprites and such.

Or maybe there’s a way that we can do volumetric shading on vertices and exploit that?

USD support in Flame would be valuable.

But it’s way faster to just launch unreal engine.

I believe I just read about a tool to gather real-time data from a drone with a 360 cam, turn it into a Gaussian splat and review it immediately in Unreal.

I might have been hallucinating that - if I find the article and the github repo I’ll post it.

The next time I get a decent bit of time (there’s the problem there) I’d really like to sit down with Unreal and explore it a lot further.

I kind of feel that on a lot of shots, it would make more sense to do some 2D work in an application (Flame/Fusion/Nuke/AE) to provide elements to be composited together within a 3D/CG engine, instead of the other way around, with the 2D elements existing as cards. Imagine a pipeline where you could create & publish elements that would live-update within Unreal or a similar 3D engine. I’m thinking this could be something leveraged using USD in the future, if Unreal doesn’t build something themselves.

There have been multiple shots recently, on a couple of features we are doing VFX on, where I would have loved to have 2D compositing as well as volumetric effects living within an interactive 3D environment. It can just feel so clunky sending one thing to another tool that outputs something that goes somewhere else, which you then try to composite into a 2D environment.

One thing I certainly would have LOVED being able to use would be a NeRF scan of a couple of the environments. It would have made some tricky shots a whole lot easier!!!

I do have to say that I think Flame is the ultimate tool for 2D cleanup work. It is fast and powerful. For more complex shots, especially in a 3D environment, Flame feels a bit clunky in comparison to Nuke or Fusion. Just navigating around the 3D environment is a pain in Flame. Plus the inability to easily unwrap tracked geometry in camera for cleanup work, and other such missing tools, just makes it harder. Definitely a case of horses for courses I guess?!!

One day Adam, one day. In the meantime, Nuke and Fusion are good alternatives for having a 3D environment that supports volumetrics: Fusion with its newly introduced support for VDB caches, and Nuke with Eddy and Plume… VRay? EmberGen is a good companion for generating VDBs and is fast and fun. If you need to finish in Flame, you can create the holdouts in Fusion or Nuke… or Gaffer or Blender.
NeRF/Gaussian come with their own set of challenges, especially for getting a sharp final result. It is getting better every day, but for now you have to use every trick up your sleeve to get a good, sharp final. My experience, anyway.

3 Likes

Good synthetic depth of field will also help to hide a multitude of sins.

Much of the desktop/hobbyist/research material does not account for cinematic style or artistic stylization, since the focus (forgive the unintentional pun) is on a simulacrum of reality.

A little bit of color modification, some vignetting, some heavy bokeh and of course 3D relighting…

We’ll see how far it gets pushed.

You’re much more likely to make these kinds of decisions as a Flame artist, and less likely in some other disciplines where practitioners often rely on instruction or direction.

1 Like

What’s “desktop/hobbyist/research material”?

Anything that falls outside the realm of extensively modified material used for visual effects purposes.
Words like unvarnished, ungraded…
Many people are experimenting with this technology.
Many people don’t agonize over the modification or stylization of the result.
Many people are generating results on a single home computer/cloud instance.
Something like that.

1 Like

Can somebody with a usable NeRF workflow (whatever the software is) post it here please? Bonus points for Blender (or Flame :joy:) being part of the pipe, as I already know how to use those.

1 Like

I think that’s a very interesting discussion. You guys (@AdamArcher and @milanesa) are looking at this from feature-film VFX and complex worlds, while I’m looking at it more from commercial applications (beauty, food, product, etc.). Slightly different scenarios.

There is definitely a lot of complexity in having to create assets in separate apps (3D & 2D) and then bringing them together and bridging the two worlds. In Nuke there are a lot of tools that help in mapping 3D positions and 2D assets with a solved camera track (e.g. CameraToTrack, RayRender, Light shadows, projections). However, with some of them comes the burden of having to bring in some 3D geo. That is easy if it’s all CG and already exists. But in cases where it was shot IRL and captured by camera, additional rebuilding is required. Photogrammetry or NeRF can help with that effort significantly.

It also feels like there are a lot of 2D tools that help with the refinement and cleanup. In a few experiments I did bringing 2D cards into 3D apps, I found the support for refining the 2D elements very limited. It probably doesn’t matter on small elements, but if you had a larger 2D scene with its own depth, you would ideally have to merge depth data to properly defocus as your 2D card gets moved around, if realism matters. It can get complex quickly, as the toy sketch below shows.
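To make the depth point concrete, here is a minimal toy sketch of that merge-then-defocus step; the array names, the naive box blur and the simple focus model are all assumptions for illustration, not any app’s actual API:

```python
# Toy depth compositing: a 2D card carrying its own depth map is z-merged
# into a CG render, then defocused per pixel. Purely illustrative numpy.
import numpy as np

def z_merge(fg_rgb, fg_z, bg_rgb, bg_z):
    """Keep whichever sample is closer to camera at each pixel."""
    closer = (fg_z < bg_z)[..., None]   # smaller z = nearer the camera
    rgb = np.where(closer, fg_rgb, bg_rgb)
    z = np.minimum(fg_z, bg_z)
    return rgb, z

def defocus(rgb, z, focus_z, strength=4.0):
    """Crude depth-of-field: blur radius grows with distance from the focal plane."""
    out = rgb.copy()
    radius = np.clip(np.abs(z - focus_z) * strength, 0, 8).astype(int)
    height, width = z.shape
    for y in range(height):
        for x in range(width):
            r = radius[y, x]
            if r:
                patch = rgb[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                out[y, x] = patch.reshape(-1, 3).mean(axis=0)
    return out

# Usage: the card rendered with its own depth, merged over the CG background,
# then defocused so it stays consistent as the card is moved in depth.
# comp, comp_z = z_merge(card_rgb, card_z, cg_rgb, cg_z)
# final = defocus(comp, comp_z, focus_z=2.5)
```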

While it would be so much more intuitive to stay in the 3D environment, it seems like the tool gap may be bigger, and I’m not sure if there’s as much energy to solve it, as the 3D apps have sexier problems to chase.

In one test I had a 2D scene and was trying to replace one component with CG, so I was rebuilding part of the scene in 3D to run sims and interact. But I also had to bring in the 2D texture so that refractions could take the 2D captured detail into account in the 3D render (honey drizzle over fruit), and then bring that all back into 2D for integration. So it’s a lot of lining up and thinking about back-and-forth workflow, which eats up time, when part of the objective is to save time while also increasing post-shoot flexibility to realign elements as the shot matures.

Some of this is easier if components are fully opaque. Where you have realistic light interactions and any level of transparency in the materials, the complexity goes up. It also goes up if there is a lot of precise layering of the 2D and 3D elements, including some possible wrapping around and pixel-accurate intersections.


2 Likes

https://www.jawset.com/

NerfStudio is a popular one
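Since the question above asked for an actual workflow, here is the rough shape of a Nerfstudio pass as I understand it, written as a small Python driver. The ns-* commands are Nerfstudio’s documented entry points, but the folder names and the config path are made-up placeholders and exact flags can change between releases, so treat it as a sketch rather than a recipe:

```python
# Sketch of a basic Nerfstudio capture -> train -> review loop.
# Paths are hypothetical; check `ns-process-data --help` etc. for your install.
import subprocess

shoot = "captures/kitchen_plate"        # folder of stills (or extracted video frames)
processed = "nerf/kitchen_processed"    # camera poses + resized images end up here

# 1. Solve cameras and prepare the dataset (COLMAP runs under the hood)
subprocess.run(["ns-process-data", "images",
                "--data", shoot, "--output-dir", processed], check=True)

# 2. Train the default nerfacto model on the processed data
subprocess.run(["ns-train", "nerfacto", "--data", processed], check=True)

# 3. Open the web viewer on the trained run and fly a virtual camera around it.
#    The real config path is printed at the end of training; this one is illustrative.
subprocess.run(["ns-viewer", "--load-config",
                "outputs/kitchen_processed/nerfacto/<run>/config.yml"], check=True)
```

From there it can also export point clouds or meshes, which would be one route back towards the .ply-into-Flame idea earlier in the thread.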

2 Likes

http://scan2fx.com
There is a Flame exporter

Hi. I’m on a Mac too, but I only see Nerf Studio installations for Windows & Linux.

It’s funny, I don’t want the Flame dev team to spend a whole heap of time porting Flame to Windows, so I’m actually anti the idea on those grounds.

However, there are quite a few amazing tools that are Windows-only, and I really can’t believe I’m about to say it, but if there were a Windows version of Flame (not angling for one, for the reason above) then I would potentially be using it on a Windows-based system. Sure, there is dual boot, but it is such a pain in the arse rebooting the system every time you have to switch between apps. I’d probably still remain mainly on Linux on a dual-boot system, but when you’re jumping between tools a lot, Windows would be handy.

Nuke or Fusion will have to be my compositor of choice when doing that I guess.

Windows VM

2 Likes

Just have 2+ systems…

Dual boot is too disruptive for a workflow, especially if you have to integrate something that runs on one system into something on another and you have to iterate.

Yes, a VM can work. Definitely a great option for general apps. But when it comes to heavy-duty apps and GPU dependencies - do you run into corner cases? Or support walking away if something goes wrong, saying that’s not a reference system?

It’s easy to work that way with remote access apps on a local network. Work on one system, and keep an app open that lets you access the other system.

These days I mostly work on my Mac Studio physically, but I remote into my Linux Flame via VNC and my Windows system via Parsec. I have windows open next to each other, filesystems cross-mounted. Wacom tablets work regardless, and you can forward a broadcast signal via NDI. Very fluid workflow.

Yes, a bit of extra cost. But if this is more than just a test or experiment, may be worthwhile.

As Adam said, I’d rather have the Flame devs focus on more unique features. A Windows port would mean a whole year without innovation.

Nerf-XL

ummm, Laguna Seca…

and a 25 km² cityscape