CopyCat node in Nuke!

Hi. Just saw some quick videos of Nuke’s CopyCat node.
The principle is amazing!!! I hope the Autodesk team keeps an eye on that.
The idea of working on a few frames and giving them to the ML node as a training set so it can take care of the whole sequence is just so great…

It has potential, but the training times for the roto and fixes in the tutorial videos I watched are crazy. Five hours to train a roto, then hope it works… Great start, but not really practical as of yet.

Yes sure.
I just think about the ability to train your own model on your own shot…

You clean 5 frames before you leave (beauty or object removal, for example), let the ML train all night, and give it a try the next morning. If it works, great, you win. If not, no time lost, you just do it the regular way…

Would love to play with that. :wink:

What would be groundbreaking is if some kind of AI/ML-based database existed in the cloud, so that everyone’s efforts were pooled in a way that everyone could benefit. ML results improve the more times they are run.

If each ML attempt had a tag, say rotoscoping, depth matte, speed, pimple removal, etc., it could get crazy good within a couple of years.

3 Likes

Agreed… would be so nice. It would also be a great way to let people who don’t know ML in depth contribute anyway. (My case, obviously.)

I’ve been obsessing over this for the last couple of weeks. The thing is AMAZING if you understand the limitations and potential.

@Skulleyb let me reframe it for you. I just did some roto with it last week. I’m working on a promo with 4 different talent. I took one shot of each person: 4K shots, 130 frames each. I made 4 super-precise roto frames per clip and let CopyCat do the rest. It took me about 30 mins to do the 4 frames, and the training was 4 hours. I have 2 machines, so I could train two at the same time. The results were shockingly good. The best part is that I have 5 more shots of each person, and I can use the trained models on those other shots. I might need to roto a couple more key frames, but it’s worth a shot.
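If you want to script up which frames to roto for the training set, a minimal sketch in plain Python (the function name and even-spacing strategy are my own assumptions, not anything CopyCat prescribes — it just tends to help to cover the whole range, including first and last frames):

```python
def pick_training_frames(first, last, count):
    """Spread `count` key frames evenly across [first, last] inclusive,
    always including the first and last frame of the clip."""
    if count <= 1:
        return [first]
    step = (last - first) / (count - 1)
    return [round(first + i * step) for i in range(count)]

# A 130-frame clip (frames 1-130) with 4 roto key frames:
print(pick_training_frames(1, 130, 4))  # → [1, 44, 87, 130]
```

You would then roto those frames by hand and feed them to CopyCat as its ground truth.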

Now…ask me about the beauty work :grinning:

5 Likes

I’ve seen these mattes up close and personal from Andy, and I can confirm they are indeed SHOCKINGLY good. It’s a really cool use case for the project we’re working on, but obviously it wouldn’t be the best route for every project. It’s perfect for an overnight render: you wake up in the morning and if it’s good, you’ve saved several hours; if it’s bad, you’ve wasted 30 minutes.

2 Likes

I used it for matchgrading some replacement shots, and it saved me from dragging them through grading again.

- Input: old ungraded shot
- Ground truth: old graded shot

I just frameheld about 10 frames and let it train. The result was pretty much spot on; I got some minor artifacts in places, but for a TVC it would be enough. Pretty cool.
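The matchgrade setup above boils down to matched (Input, GroundTruth) pairs: each held frame of the ungraded plate must have the graded plate held on the same frame. A plain-Python sanity check of that pairing (the function name and return shape are my own, not any Nuke API):

```python
def copycat_pairs(input_holds, groundtruth_holds):
    """Match FrameHold frames from the ungraded plate (Input) against the
    graded plate (GroundTruth). Frames held on only one side are returned
    separately so the setup can be fixed before burning a night of training."""
    pairs = sorted(set(input_holds) & set(groundtruth_holds))
    orphans = sorted(set(input_holds) ^ set(groundtruth_holds))
    return pairs, orphans

pairs, orphans = copycat_pairs([1001, 1010, 1020], [1001, 1010, 1021])
print(pairs)    # [1001, 1010] -> usable training pairs
print(orphans)  # [1020, 1021] -> held on only one plate, no counterpart
```

Cheap insurance against a mismatched framehold quietly poisoning the training set.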

4 Likes

That’s another really cool application! Great idea!

I think I made a suggestion for this on the Flame improvement site, and so far it only has two votes: FI-02518.

1 Like

Good luck… we are still trying to get MotionVector tracking in Flame that doesn’t crash the machine or constantly require recalculation, years after its release.

2 Likes

I just added a vote to this.

Thank you! You are a legend!

The Flame improvements/beta improvements tools could do with some improvements themselves…

It’s kind of weird that you can’t just see the whole list at one time, what has votes, what doesn’t have votes, what has cobwebs, (what has a pulse)…

I mean, what differentiates a feature request from an improvement and when is it appropriate to request an improvement for a release version (which by definition is too late) or an improvement for a beta which may or may not see the light of day?

It’s weird.

I’ve yet to understand it.

I don’t disagree with you, but I just do what I can to contribute and help what seems like a small team over at ADSK see what troubles we have and what tools we want. I’m not sure of a better option, but if you talk with them, they always seem receptive to suggestions.

@andymilkis#8930 or anyone else: do you have to use Nuke’s roto tool for CopyCat, or could you just supply a matte? Photoshop’s smart object detection is really good, so you could make your training frames using another program’s machine learning. :slight_smile:

You can just supply it a matte.

Probably not a nice thing to say, but at Asylum we had a full-on Roto Dept. I remember when production started outsourcing. I was so pissed I told them we should outsource Production… But I have to say the costs are reasonable and there’s no babysitting. For roto it’s send it and forget it, until my un-outsourced producer tells me it’s ready. Rarely do I have to send any back. Besides, ML is kind of like outsourcing everything to an XPU… I need to keep working for another decade at least.

2 Likes

Hi @andymilkis, did you have to work within Nuke, or did you find a way to do it partially in Flame? Do you think that through a Pybox node in Flame it would be possible to do at least part of the job (the roto, or just the final result)?

For the training I think you’re gonna want to work in Nuke. Once you have the model trained it should be pretty straightforward to set up a pybox to feed it your shot and get the result frames back. It’s def on my list to try soon!
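The apply step could be driven from the Flame side by launching Nuke headless. A rough sketch, with heavy assumptions: `-t` is Nuke’s terminal mode and `.cat` is the model file CopyCat writes (both real), but `apply_model.py` is a hypothetical script you’d still have to write (Read → Inference → Write over the frame range), and all paths are placeholders:

```python
import subprocess

def apply_copycat_model(nuke_exe, apply_script, model_cat, src_seq, out_seq,
                        first, last, dry_run=False):
    """Launch Nuke in terminal mode (-t) with a Python script that is assumed
    to load the trained .cat model into an Inference node and render the
    range. This wrapper only builds and runs the command line."""
    cmd = [nuke_exe, "-t", apply_script, model_cat,
           src_seq, out_seq, str(first), str(last)]
    if dry_run:
        return cmd  # inspect the argv without actually launching Nuke
    subprocess.run(cmd, check=True)
    return cmd

print(apply_copycat_model("Nuke", "apply_model.py", "roto_modelA.cat",
                          "src.%04d.exr", "out.%04d.exr", 1001, 1100,
                          dry_run=True))
```

A Pybox handler could call something like this in its render method and hand the resulting frames back to Flame.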

1 Like