We tried using Pybox for DasGrain and could never get the input clips to be recognized properly.
Simply bounding the Voronoi segments by which of the scattered points the current pixel is closest to should let you create multiple flat fields with no blending around the extents of the segments… The blending comes from the distance fields, which are just the length of the vector from the current pixel position to the closest scattered point.
#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;

float random (vec2 st) {
    return fract(sin(dot(st.xy,
                         vec2(12.9898,78.233)))*
        43758.5453123);
}

void main() {
    vec2 st = gl_FragCoord.xy/u_resolution.xy;
    st.x *= u_resolution.x/u_resolution.y;
    vec3 color = vec3(.0);
    int seed = 26;

    // Cell positions
    vec2 point[5];
    point[0] = vec2(0.83,0.75);
    point[1] = vec2(0.60,0.07);
    point[2] = vec2(0.28,0.64);
    point[3] = vec2(0.260,0.210);
    point[4] = vec2(0.71,0.81);

    float m_dist = 1.; // minimum distance
    int pointNum = 5;

    // Iterate through the point positions
    for (int i = 0; i < 5; i++) {
        float dist = distance(st, point[i]);
        // Keep the closer distance
        if (dist < m_dist) {
            m_dist = min(m_dist, dist);
            pointNum = i;
        }
    }

    // Fill each distance field with a random value based on the number of the closest point
    color = vec3(random(vec2(float(pointNum + seed))));

    if (m_dist < .005) {
        color += .25;
    }

    gl_FragColor = vec4(color,1.0);
}
I don’t know how DasGrain works internally, but I can imagine that once you’ve bound a group of pixel coordinates by the closest scattered point, you could use that int to pull a frame from an input and grab its texture color into each individual group at the same pixel coordinates. At that point you would have a grain pattern drawn from a series of different frames at random locations on your output. Something like the sketch below is what I’m picturing.
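Rough sketch only: the grainFrame0/1/2 sampler names and the grainForCell helper are made up for illustration (they’re not part of DasGrain or Matchbox). The idea is to wire a few different frames of the grain source in as inputs and let each Voronoi cell’s id decide which one it samples, at the same pixel coordinates:

// Hypothetical sketch: grainFrame0/1/2 are assumed Matchbox inputs holding
// a few different frames of the normalized grain source.
uniform sampler2D grainFrame0;
uniform sampler2D grainFrame1;
uniform sampler2D grainFrame2;

// Pick a grain frame per Voronoi cell and sample it at the same coordinates.
vec3 grainForCell(vec2 coords, int cellId) {
    int pick = int(mod(float(cellId), 3.0));
    if (pick == 0) {
        return texture2D(grainFrame0, coords).rgb;
    } else if (pick == 1) {
        return texture2D(grainFrame1, coords).rgb;
    }
    return texture2D(grainFrame2, coords).rgb;
}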
From there, what would need to happen is generating random coordinates for a fixed number of points, with a seed that updates each frame. I don’t think GLSL supports variable-length arrays, so we set a max size and then break out of the loop once we hit our desired number of points. I just cobbled this together and it seems to do the trick…
#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;

float random (vec2 st) {
    return fract(sin(dot(st.xy,
                         vec2(12.9898,78.233)))*
        43758.5453123);
}

void main() {
    vec2 st = gl_FragCoord.xy/u_resolution.xy;
    st.x *= u_resolution.x/u_resolution.y;
    vec3 color = vec3(.0);

    // Seed value should be time based
    int seed = 15;

    // User definable number of scatter points
    int maxPoints = 50;

    // Cell positions, capped at 100
    vec2 point[100];
    for (int i = 0; i < 100; i++) {
        // Stop once maxPoints have been generated
        if (i >= maxPoints) {
            break;
        }
        float x = random(vec2(float(i + seed)));
        float y = random(vec2(float(i - seed)));
        point[i] = vec2(x,y);
    }

    float m_dist = 1.; // minimum distance
    // Id of the closest point
    int pointNum = 0;

    // Iterate through the point positions
    for (int i = 0; i < 100; i++) {
        // Only look at the points that were generated
        if (i >= maxPoints) {
            break;
        }
        float dist = distance(st, point[i]);
        // Keep the closer distance and define the point group
        if (dist < m_dist) {
            m_dist = min(m_dist, dist);
            pointNum = i;
        }
    }

    // Fill each distance field with a random value based on the number of the closest point
    color = vec3(random(vec2(float(pointNum + seed + 1))));

    // Mark the scattered point positions (debug only)
    if (m_dist < .005) {
        color += .25;
    }

    gl_FragColor = vec4(color,1.0);
}
Seems to do what is needed…
Drawing the position of each scattered point is just there for debugging, btw.
Getting a randomized grain from a static small patch is only one of the problems. Even harder is analyzing the grain response curve and adapting that for the shot.
Just solving one issue at a time, man.
The screenshot of the shapes from the code reminded me of this article I read the other day about Einstein shapes:
From what I recall, adsk_time is available for Matchboxes, so that’d take care of your time-based seed…
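For example (just a sketch, assuming adsk_time comes in as a float uniform the way Matchbox normally exposes it), the hard-coded seed in the shader above could become:

// Assumed: Matchbox supplies the current frame as a float uniform.
uniform float adsk_time;

// ...then inside main(), instead of "int seed = 15;":
int seed = int(adsk_time);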
Definitely. What I’m curious about is whether one could use the three temporal calls:
vec3 prev = texture2D( adsk_previous_frame_input1, coords ).rgb;
vec3 next = texture2D( adsk_next_frame_input1, coords ).rgb;
vec3 curr = texture2D( input1, coords ).rgb;
as a hack to temporally vary the normalized grain samples as well… That would give you at least three different frames per output frame: the current one plus two others to grab samples from.
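Something like this fragment (a sketch only, meant to sit inside main() of the shader above, and assuming the sampler names from the snippet quoted here are declared as uniforms) could pick which temporal read each Voronoi cell pulls its grain from:

// Sketch: pointNum is the closest-point id from the earlier loop; each cell
// is assigned one of the three temporal reads based on that id.
int pick = int(mod(float(pointNum), 3.0));
vec2 coords = gl_FragCoord.xy / u_resolution.xy;
vec3 grain;
if (pick == 0) {
    grain = texture2D(input1, coords).rgb;                     // current frame
} else if (pick == 1) {
    grain = texture2D(adsk_previous_frame_input1, coords).rgb; // previous frame
} else {
    grain = texture2D(adsk_next_frame_input1, coords).rgb;     // next frame
}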
Did anyone crack this problem with the Das flame grain?
I think the energy left the room once BorisFX offered a genuine implementation. It’s easy enough to do now.
From talking to the BorisFX team: they worked directly with the developer of DasGrain, and in fact they added some improvements that even Nuke doesn’t have (at least for now).
Ooh. What’s that plugin called?
I’ve been making my own hacky workaround that isn’t scalable, so I’ve stayed quiet.
It’s part of SilhouetteFX, which can run as an OFX Plugin.
I’ve used it that way, though for some odd reason it can bring Batch to a crawl, even if the node is disabled. So it’s best to bring it in, render, and then delete it. The Silhouette tree is always the same, so you can easily re-add it.
It’s pretty expensive to buy Silhouette just for a grain tool.
True.
I think a lot of folks have the whole BorisFX suite already. You also get an alternate paint tool that can be quite valuable for retouch.
Long-term it would be nice if BorisFX or someone could take that code and put it into a standalone OFX. I think people would pay decent money for it. When Silhouette went OFX early on, it was just the paint tool on its own, not the whole app, and it had a discounted price. So there is precedent. You should help put the bug in their ear; they’re definitely receptive.
They had a huge flaw in their memory management implementation. It is mostly fixed in version 2023.0.4.
That would explain it. Didn’t have time to trace it down and file a ticket. Thanks for chasing that down.
I’ve talked at length to Marco and Paul about that. Those guys are great, but they make Silhouette and only Silhouette. It would have to be a completely separate team at Boris taking Primatte & DasGrain and turning them into standalone OFX plugins. I see no indication this will happen.
You also get Primatte in there, which in my opinion is the best keying technology if you know how to use it properly, which few do.
Agreed there. We had a discussion with Boris himself at the MPE event in NYC a few weeks ago. He might be receptive to that if we can show him the business opportunity. It may be worth reaching out to him instead of just Marco and Paul if you want.
Yup, I worked with ADSK guys to track that down, and then Paul turned the bug fix around in less than 24 hours.
Looks like this version is not yet on their site, but if you need it, reach out to Marco.