If you have two sources, the first a video and the second its depth map, is there a possibility to combine them into a 3D space?
You could kinda cheat using a displacement map in Action.
Well, technically Action is a 3D environment…with textured square polygons seen through a camera. What are you looking to achieve?
I guess you could use the depth as your z coordinate and then a simple UV map to generate your x and y, in a sort of poor man's pmap. There used to be a nice trick for point position to 3D space in Flame, or am I conflating?
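The depth-as-z idea can be sketched outside of Flame. A minimal numpy illustration of the "poor man's position pass": pixel UVs supply x/y, the depth value supplies z. The pinhole model and the `fov_deg` parameter here are assumptions for illustration, not anything a Flame node exposes.

```python
import numpy as np

def depth_to_points(depth, fov_deg=45.0):
    """Turn a depth map into a per-pixel 3D position 'pass'.

    Illustrative sketch only: pixel UVs give x/y, depth gives z,
    with x/y scaled by depth via an assumed field of view so points
    spread out with distance (simple pinhole model).
    """
    h, w = depth.shape
    # Normalized device coordinates in [-1, 1] for each pixel
    u = np.linspace(-1.0, 1.0, w)
    v = np.linspace(-1.0, 1.0, h)
    uu, vv = np.meshgrid(u, v)
    # Scale x/y by depth so the frustum widens with distance
    tan_half_fov = np.tan(np.radians(fov_deg) / 2.0)
    x = uu * depth * tan_half_fov * (w / h)   # aspect-corrected
    y = vv * depth * tan_half_fov
    z = depth
    return np.stack([x, y, z], axis=-1)       # shape (h, w, 3)

points = depth_to_points(np.full((270, 480), 5.0))
print(points.shape)  # (270, 480, 3)
```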
In short, the answer is yes.
However like everyone is alluding to, there are multiple ways of doing something but before you make any decision, you need to know what you want before anyone can make any suggestions.
My client would like to provide video + depth from a volumetric camera,
and would like to generate a 3D space and move around the main actor (let's say 30-40 degrees).
He could provide a clean BG.
What I need to do is extract the actor from the plate, generating the alpha from zDepth.
Would it be possible to use a different (more sophisticated) tool than just the luminance of zDepth? The edges won't be clean if I start to play with its luminosity.
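For reference, one way to get cleaner edges than a hard luminance key is to ramp the alpha over a band around the near/far cut points instead of thresholding hard. A hypothetical numpy sketch; the `near`/`far`/`softness` parameters are illustrative, not a Flame tool.

```python
import numpy as np

def depth_matte(depth, near, far, softness=0.1):
    """Build an alpha matte from a z-depth pass with a soft edge.

    Alpha is 1 inside [near, far] and ramps smoothly to 0 over a
    `softness` band outside it (a smoothstep), avoiding the harsh
    stair-stepped edges a hard luminance threshold produces.
    """
    def smoothstep(e0, e1, x):
        t = np.clip((x - e0) / (e1 - e0), 0.0, 1.0)
        return t * t * (3.0 - 2.0 * t)

    a_near = smoothstep(near - softness, near, depth)       # ramp in
    a_far = 1.0 - smoothstep(far, far + softness, depth)    # ramp out
    return a_near * a_far
```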
And I suppose the effect he wants is a 3D illusion like the 3D iPhone photos, but without the smear behind.
I suppose I can't do it in version 2016, can I? What node do you use to generate it?
Oh god no…not with this technique. I mean, theoretically you could get away with something for stereoscopic, where the distance between the two cameras is about an eye's width apart…but 30-40 degrees of movement? I'm afraid not. And this has nothing to do with Flame 2016. Displacement has existed in Flame forever, but what the client is asking is much more involved than a particular node.
To do what the client is asking, depending on how close to camera the subject is, might require a digi double human, or at the very least a fully matchmoved human proxy geo with a ton of projections and tons of comp to clean it up. The bg, depending on how complex the camera move is, with lighting/surface changes due to the camera move, could be either a simple reproject or a full on build.
If you'd like a more precise solution to share with your client, and you're able to provide a few more details about the required location, time of day, and interior/exterior, we can help provide one.
I meant that going from depth to point position won't be possible in 2016.
I'm not aware of any way to use a Z depth in the same way that you'd use an object or world position pass. A depth pass is calculated from the point of view of the camera, while those types of position passes give absolute values within a 3D scene. (There is such a thing as a camera position pass, but that's basically a z-depth pass with more functionality.) I don't think a volumetric camera is going to give you anything useful beyond reference positions that you can use to create 3D geo to project your foreground onto. This all depends on how big the camera move is and how detailed the foreground is. You might be able to get away with something as simple as roto-ing your foreground and projecting that onto an extended bicubic that you model for your needs, or, if it's something really elaborate, you're talking rotomation of a body in something like Maya and projecting your foreground onto that.
Displace the image with the displacement set to “camera displace” which will mimic the correct positioning of a point pass. Film the result with a second camera.
You’ll still have the depth smear though.
For a cheeky version just animate the x-shift in the displacement to give the same feel, and use the depth to cut the image into slices so you can rebuild the smeary area.
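The slicing part of that cheeky version can be sketched in a few lines of numpy; the `bounds` list and the premultiplied output are assumptions for illustration, not a Flame node.

```python
import numpy as np

def slice_by_depth(image, depth, bounds):
    """Cut an image into depth slices.

    `bounds` is a list of (near, far) depth ranges; each slice keeps
    the image where depth falls inside its range, so the slices can
    be shifted independently and the smeary area rebuilt behind them.
    """
    slices = []
    for near, far in bounds:
        # Hard in/out matte for this depth band
        alpha = ((depth >= near) & (depth < far)).astype(image.dtype)
        slices.append(image * alpha[..., None])  # premultiplied slice
    return slices
```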
Thank you guys for all the answers.
It was very helpful.