I know Flame cannot currently do this, but if it COULD, would it be something you'd see a use for?
Specifically, you could feed a "Weight" image into a TW node: any pixel at 1.0 would timewarp by the full value you had set, while a pixel at 0.5 would timewarp half as much.
I was thinking it could be useful for adding a sort of turbulence to smoke plates and weird stuff like that.
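For anyone who wants to poke at the idea outside Flame, here's a minimal numpy sketch of what that weighting could mean, assuming the clip is just a frame stack and using nearest-frame sampling for simplicity; `weighted_timewarp` and every other name here is made up, not a Flame API:

```python
import numpy as np

def weighted_timewarp(frames, weight, base_frame, offset):
    """Sample each pixel from frame `base_frame + weight * offset`.

    frames: (T, H, W, C) float array, the source clip.
    weight: (H, W) float array in 0..1, the "Weight" image.
    offset: full timewarp offset in frames; weight 1.0 applies all
            of it, 0.5 half, 0.0 none.
    """
    t, h, w, c = frames.shape
    # Per-pixel source frame, rounded to nearest and clamped to the clip.
    src = np.clip(np.rint(base_frame + weight * offset), 0, t - 1).astype(int)
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return frames[src, rows, cols]

# Warp frame 10 by up to +6 frames, scaled per pixel by the weight image.
frames = np.random.rand(24, 4, 4, 3)   # stand-in for a 24-frame clip
weight = np.random.rand(4, 4)          # stand-in for the weight image
out = weighted_timewarp(frames, weight, base_frame=10, offset=6)
```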
Wouldn’t that be the same as a ‘designer’ motion vector map? We usually get motion vector maps out of camera analysis. But I could see a generator for those that could produce some interesting results.
For starters, if you took a fractal generator and had it build an MVM, it might be interesting to see the result.
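As a rough stand-in for that experiment, here's a toy generator that sums a few octaves of smoothed random noise into a 2-channel vector field; plain numpy/scipy for illustration, not a Flame or Nuke node:

```python
import numpy as np
from scipy.ndimage import zoom

def fractal_vectors(h, w, octaves=4, amplitude=8.0, seed=0):
    """Build an (H, W, 2) per-pixel x/y displacement field, in pixels."""
    rng = np.random.default_rng(seed)
    field = np.zeros((h, w, 2))
    for o in range(octaves):
        # Coarse random grid, bilinearly upsampled to full resolution.
        gh = gw = 2 ** (o + 2)
        coarse = rng.uniform(-1, 1, size=(gh, gw, 2))
        up = zoom(coarse, (h / gh, w / gw, 1), order=1)
        field += up[:h, :w] * (amplitude / 2 ** o)  # halve energy per octave
    return field

mvm = fractal_vectors(256, 256)
```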
This is related to a long-standing wish I’ve had: to be able to have an effect multiplied by a gradient.
For example, you have a matte with a nice feather and you want to apply it to a grassy area. It would be nice to deform that feather in proportion to the value of the feather: 0 (black) equals no effect, 1 (white) equals 100% effect.
I think there might be other creative applications.
I’ve never figured out how to do it.
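One way to prototype it outside Flame: displace the matte by a noise field whose strength is scaled per pixel by the matte's own value, so black stays put and white gets the full push. A minimal numpy/scipy sketch, with every name hypothetical:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deform_by_own_value(matte, noise_x, noise_y, max_push=10.0):
    h, w = matte.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    strength = matte * max_push            # 0 -> no push, 1 -> full push
    src_r = rows + noise_y * strength
    src_c = cols + noise_x * strength
    # Bilinear lookup of the matte at the displaced coordinates.
    return map_coordinates(matte, [src_r, src_c], order=1, mode="nearest")

h, w = 128, 128
matte = np.linspace(0, 1, w) * np.ones((h, 1))      # feathered ramp
noise_x = np.random.default_rng(1).uniform(-1, 1, (h, w))
noise_y = np.random.default_rng(2).uniform(-1, 1, (h, w))
out = deform_by_own_value(matte, noise_x, noise_y)
```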
@hBomb42, is the After Effects plug-in like the Sapphire Time spark?
Interesting. In some cases you can accomplish this with a mask that is multiplied with the effect, but that only affects the effect's output. You can automate certain effect parameters, but only for the full frame.

The ideal case would be if you could connect anything that has an automation lane and have it varied by a matte for each pixel as it's processed: where your matte pixel is black, the automation value would sit at the defined minimum of the range, and where it's white, at the maximum (and that range is part of the automation setup, not the whole allowable range). That exists in synths like Massive X, where you can connect an LFO output to any other parameter and then specify the mapped range of movement.
There’s a difference between driving the effect parameters vs. just multiplying the result with a matte.
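A toy illustration of that difference, assuming a grayscale image and a 0-to-1 matte: (a) blurs at full strength and mixes the result back through the matte, while (b) lets the matte drive the blur amount at each pixel (crudely, by blending between pre-blurred levels):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.rand(64, 64)
matte = np.tile(np.linspace(0, 1, 64), (64, 1))   # left: 0, right: 1

# (a) Masked output: one full-strength blur, mixed back through the matte.
masked = matte * gaussian_filter(img, sigma=8) + (1 - matte) * img

# (b) Parameter driven: the matte chooses the blur amount per pixel.
levels = np.stack([gaussian_filter(img, s) for s in (0, 2, 4, 8)])
idx = matte * (len(levels) - 1)                    # fractional level index
lo = np.floor(idx).astype(int)
hi = np.minimum(lo + 1, len(levels) - 1)
frac = idx - lo
rows, cols = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
driven = (1 - frac) * levels[lo, rows, cols] + frac * levels[hi, rows, cols]
```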
I’m absolutely thinking it would be most powerful as an expression.
My suggestion would be more akin to an input-connection lookup in Houdini, like opinput(".", 0), which would look at the first connected source of the current node for whatever aspect you were going to reference…
If we just call our fictitious expression pixeleval() and say it takes 3 arguments:
Node - the input node to consider
Node input - which of that node's inputs to use, mux-style
Channel - choice of R, G, B, or Y
Then you could use an expression on a blur node, for example on the width channel, calling our function and referencing the luma channel of the first connected image on the current node as:
pixeleval(".", 0, y)
All nodes have unique names, so you could even reference an upstream node where you specifically design your gradient: add a mux to its output, name it "muxGradient", and reference that:
pixeleval("muxGradient", 0, y)
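To make the proposal concrete, here's a mock of how such a pixeleval() could behave against a toy node graph in Python; the Graph class, the Rec. 709 luma weights, and the whole calling convention are assumptions layered on the idea above, not anything Flame exposes:

```python
import numpy as np

CHANNELS = {"r": 0, "g": 1, "b": 2}

class Graph:
    def __init__(self):
        self.nodes = {}    # name -> (H, W, 3) image
        self.inputs = {}   # name -> list of upstream node names

    def pixeleval(self, node, input_idx, channel, current="."):
        """Per-pixel lookup: resolve `node` ("." = the current node),
        follow its Nth input, and return one channel as an (H, W) map."""
        name = current if node == "." else node
        src = self.nodes[self.inputs[name][input_idx]]
        if channel == "y":  # Rec. 709 luma, one plausible reading of "Y"
            return src @ np.array([0.2126, 0.7152, 0.0722])
        return src[..., CHANNELS[channel]]

g = Graph()
g.nodes["muxGradient"] = np.random.rand(4, 4, 3)
g.nodes["blur1"] = np.zeros((4, 4, 3))
g.inputs["blur1"] = ["muxGradient"]

# blur1's width driven per pixel by the luma of its first input:
width = 5.0 * g.pixeleval(".", 0, "y", current="blur1")
```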
For me, moving it to an expression makes it more powerful, since it can then drive any parameter regardless of matte inputs; the two approaches coexist. Plus, by having it capable of referencing any input on any node, you alleviate the need for extra node connections. It operates "under the hood."
You would wrap pixeleval() with a range-mapping function when needed to provide the value. It would let you specify that the 0…1 of the input should be remapped to a range of 0.3…0.7 or whatever suits you, and it would also deal with inputs whose value range is 0…255 or some other common range. This could all be done inline in the expression, but that just creates busy work and clutter.
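Something like this, riffing on the hypothetical pixeleval() above; remap() is a made-up name:

```python
def remap(value, in_min=0.0, in_max=1.0, out_min=0.0, out_max=1.0):
    """Map `value` from in_min..in_max onto out_min..out_max."""
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

# A 0..1 matte driving a parameter that should only move in 0.3..0.7:
#   width = remap(pixeleval("muxGradient", 0, "y"), 0, 1, 0.3, 0.7)
# Same idea for an 8-bit source:
#   width = remap(pixeleval("muxGradient", 0, "y"), 0, 255, 0.3, 0.7)
```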
In Massive X it looks like this: the mapped input from an LFO, etc. affects just the yellow range of the control, with the minimum being the resting position of the main control and the maximum being the upper end of the yellow range.
Making it a separate wrapper function makes it optional to use and doesn’t complicate the parameter range. Or maybe such a function already exists? Haven’t checked.
Not TW related, but there is a pretty cool Nuke vid using a time-offset ST_Map to displace stock footage. I've used this technique for trailing smoke/particles, or to get some cheap interaction with a stock plate without resorting to a bunch of grid/spline warps.
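The core of the ST_Map trick is just a per-pixel coordinate lookup; here's a small numpy/scipy sketch where the "time offset" is faked with a sine warp on the map, purely to show the mechanism:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_stmap(img, stmap):
    """img: (H, W) float; stmap: (H, W, 2) with s, t coords in 0..1."""
    h, w = img.shape
    cols = stmap[..., 0] * (w - 1)   # s -> x
    rows = stmap[..., 1] * (h - 1)   # t -> y
    return map_coordinates(img, [rows, cols], order=1, mode="nearest")

h, w = 64, 64
s, t = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
wobble = 0.02 * np.sin(t * 40.0)    # stand-in for an animated/offset map
out = apply_stmap(np.random.rand(h, w), np.dstack([s + wobble, t]))
```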
That's basically how the fit functions work in Houdini. There's an unbound version called fit, which works exactly as your example above, and then there are two shorthand variations called fit01 and fit10, which assume a normalized input range (0→1 and 1→0 respectively) and only require the output min/max.
All three are invaluable. To @andy_dill's point, that's basically the 2D histogram node, but having range-mapping tools in expressions is arguably a must.
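For anyone who hasn't used them, here's the fit family expressed in Python, as I understand the VEX semantics (the real VEX versions also clamp the input to the source range, which is omitted here):

```python
def fit(v, omin, omax, nmin, nmax):
    """Remap v from omin..omax onto nmin..nmax."""
    t = (v - omin) / (omax - omin)
    return nmin + t * (nmax - nmin)

def fit01(v, nmin, nmax):
    return fit(v, 0.0, 1.0, nmin, nmax)   # normalized input

def fit10(v, nmin, nmax):
    return fit(v, 1.0, 0.0, nmin, nmax)   # inverse-normalized input

assert abs(fit(0.5, 0, 1, 0.3, 0.7) - 0.5) < 1e-9   # midpoint to midpoint
assert fit01(0.25, 0, 100) == 25.0
assert fit10(0.25, 0, 100) == 75.0
```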
While not time related, y_ixops uses its strength input to control whatever transform you've adjusted. Sadly, I feel it's a matchbox that has flown under the radar.