Runway ML with amazing roto!

This looks very interesting for quick and easy rotos.


Fascinating to see automatic roto getting better and better.


I’m blown away by how awesome it looks, but also by how amazing that interface is. For such groundbreaking technology, I’m so impressed that it looks so nice and easy to use.

This looks great. Sent off about 20 shots for roto this week, so I’ll give this a go and see how they compare.


It’s a nice idea but I just tried it with something that would be simple for a human to do (a building against a blue sky - undistorted and denoised) and the results I’m getting are not usable. Oh well…

There’s a bunch of other ML tools on that site that seem intriguing, though. And there’s something in there about using it to create your own ML tools within their system, so there’s definitely a lot of potential there.

I wonder whether it is trained to recognize humans and that’s why buildings don’t work?

Well, I expected that to be the case, but there’s nothing saying that it’s only for people - it just says to pick the object to extract. And it does a good job selecting the object in a single frame - it’s just that once it processes the video, the edges jump from frame to frame and smaller features pop on and off.

Yeah, it’s still early days, but I imagine over the next few years it could become a very useful and time-saving tool. Obviously there are issues around how the shots are uploaded and stored, which makes it a non-starter for a lot of professionals, but hopefully that will change.

Trying to get these guys on an upcoming Logik Live. Stay tuned!



Yes, this has a lot of potential. I tried a version of Rotobot in Flame and it was useless and super slow.
It definitely looks like it’s done most of its learning with human figures, and that’s probably good because people wiggle around a lot.
I think what RunwayML is missing is refining controls such as denoising to reduce the chatter between frames, and feathering of some sort for the motion-blurred parts. I guess machines still have a lot to learn (pun intended), but still, very impressive and helpful in certain scenarios.
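Until controls like that show up in the tool itself, the frame-to-frame chatter can be knocked down externally with a simple temporal median filter on the exported matte sequence. A minimal sketch in plain Python (the `temporal_median` helper and the tiny in-memory sequence are illustrative assumptions; real mattes would come from image files, which is omitted here):

```python
from statistics import median

def temporal_median(mattes, radius=1):
    """Reduce frame-to-frame chatter in a matte sequence by taking,
    for each pixel, the median over a small temporal window.
    mattes: list of frames, each a list of rows of float values 0..1."""
    n = len(mattes)
    h, w = len(mattes[0]), len(mattes[0][0])
    out = []
    for t in range(n):
        # Window of frames around t, clamped at the sequence ends.
        window = mattes[max(0, t - radius):min(n, t + radius + 1)]
        frame = [[median(f[y][x] for f in window) for x in range(w)]
                 for y in range(h)]
        out.append(frame)
    return out

# A one-frame pop on a single pixel gets steadied by its neighbours.
seq = [[[0.0]] for _ in range(5)]
seq[2] = [[1.0]]  # the flickering frame
smoothed = temporal_median(seq, radius=1)
print(smoothed[2][0][0])  # 0.0 - the pop is suppressed
```

The trade-off is that a median over time also eats legitimate fast motion, which is exactly why per-shot refining controls inside the tool would be preferable to a blanket filter.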

Just tested out Green Screen on a 150-frame shot of a dude dancing in a dark blue shirt in front of a dark blue curtain, and there’s quite an opaque haze overall.
Well… yes… mindblown… Not 100% but as a starting point it saves me hours of roto!

One question remains: in the fabulous Logik event they talked about an offline version of Green Screen. I have the Creators subscription but can’t find anything related to a downloadable tool… Am I missing something?