MotionVector Tips for Difficult Shots

Picking back up on the tracking thread we just had.

I’ve been working my way through some tricky cleanup and relying on MV for part of it, learning more of the ins and outs.

Things like using a projector instead of just parenting the surface, possibly deforming the projection surface, setting multiple reference frames, getting proper motion blur response, hold-outs, etc.

But I still have some issues. The latest shot is around 300 frames, working on a person’s shirt, and they move their arm mid-shot in front of the area of interest. A hold-out (black pixels) helped but didn’t totally solve it. I can set a second reference frame on the other end of the shot and provide a second plate, but you lose some of the pixel-level tracking, so that hasn’t really worked for me either.

I could split the clip into two segments, one before the movement and one after, having the pixels push towards that transition from either end, but I might end up with a jump in the center.
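One way I can imagine softening that jump is to render both halves a few frames past the transition and cross-fade them over the shared frames. A minimal numpy sketch of the idea (the frame lists here are just placeholders for the two MV renders):

```python
import numpy as np

def crossfade_overlap(seg_a, seg_b, overlap):
    """Blend the tail of segment A into the head of segment B.

    seg_a, seg_b: lists of HxWx3 float arrays that share `overlap`
    frames around the transition; placeholders for the two MV
    renders meeting mid-shot.
    """
    blended = []
    for i in range(overlap):
        t = (i + 1) / (overlap + 1)  # 0..1 ramp across the shared frames
        blended.append((1.0 - t) * seg_a[-overlap + i] + t * seg_b[i])
    return seg_a[:-overlap] + blended + seg_b[overlap:]
```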

Any other tips or tricks for when a long shot disrupts the pixel harmonies?

Front and back of clip look like this:

[Screenshot 2024-01-25 at 6.35.14 PM]

Around the first third there is a twist and arm occlusion:

[Screenshot 2024-01-25 at 6.35.27 PM]

I’ve always broken clips into sections when doing this type of shot. And I might break the sections into different nodes. I’d potentially freeze the last good frame of the first section, warp it to feel right for the start of the second section, then start fresh Motion Vectors from the first good frame of the second section. Basically daisy chaining the sections instead of trying to do it all inside the one node.

2 Likes

why?

1 Like

Not for the standard use cases, but the way I understand it, it gives you a few more options if you break away from the camera alignment. Older tutorial here: https://youtu.be/k7v7dd9TA7I?si=14FHV9icjPYeTuMh. There are several parts to this playlist. There’s also the ability to project onto a bicubic and distort the projection.

Hey Jan,
This is part one of a three-part series doing a shot very similar to yours in Silhouette. Hope this helps:

4 Likes

Oh cool, will watch it in the morning! Thx.

I use Mocha Mesh and it works much, much better. Probably not the answer you were looking for.

2 Likes

That’s a totally fine answer. I’ve used Mocha Mesh several times, though not on this job yet. Nuke CopyCat and InPaint took care of a few of the shots, but not all of them.

I don’t mind mixing and matching, whatever looks right and is efficient. It’s just over 5,000 frames across 20 shots that need to be cleaned up. Some have a single logo, some as many as three separate ones.

I have enjoyed understanding the intricacies of MV though. All about knowing the tools well.

2 Likes

Some progress, no silver bullets.

I tried Mocha Mesh. It did poorly compared to Flame MV. It’s a white t-shirt with significant movement and folding throughout the shots, with areas disappearing and re-appearing. Not enough contrast / detail for Mocha to latch onto, I guess.

Dividing up into segments has helped quite a bit.

The other thing I worked out: in areas where there are artifacts on the edges because of all the movement, rather than relying on the cleanplate patch alone, I use it a bit more loosely on a difficult edge and keep it just clear of the artifacts with the mask. Then I expand the repositioned patch with Pixel Spread to enhance the edge and comp that on top of the MV result. That hides the artifacts.
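If it helps to picture that comp outside of Flame, here’s a rough Python/OpenCV sketch of the idea (cv2.inpaint is only a stand-in for Pixel Spread, and the patch, its matte, and the MV result are assumed to already be loaded as arrays):

```python
import cv2
import numpy as np

def spread_and_comp(patch, patch_matte, mv_result, grow_px=8):
    """Grow the patch matte past the artifact zone, fill the new edge
    band from the patch's own edge colours, and comp over the MV result.

    patch       : HxWx3 uint8, the repositioned clean patch
    patch_matte : HxW   uint8, 255 inside the artifact-free area
    mv_result   : HxWx3 uint8, the MotionVector comp underneath
    cv2.inpaint only stands in for Flame's Pixel Spread here.
    """
    kernel = np.ones((grow_px * 2 + 1, grow_px * 2 + 1), np.uint8)
    grown = cv2.dilate(patch_matte, kernel)        # matte pushed past the artifacts
    band = cv2.subtract(grown, patch_matte)        # the new edge band to fill
    spread = cv2.inpaint(patch, band, grow_px, cv2.INPAINT_TELEA)

    soft = cv2.GaussianBlur(grown, (0, 0), grow_px / 2.0)  # soften the comp edge
    alpha = (soft.astype(np.float32) / 255.0)[..., None]
    return (spread * alpha + mv_result * (1.0 - alpha)).astype(np.uint8)
```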

And lots of elbow grease.

1 Like

paint every Nth frame, deal it out, TW-ML back to original length.

7 Likes

Interesting approach. I’m halfway through the last clip already, but I’ll definitely try it out to understand it and have it in the toolbox for next time.

This approach can also be really handy when dealing with occlusion screwing up the MVs. Pick the last good frame of the first section, find the first good frame of the next section, MLTW between them to the required length, and use that for the in-betweens (obviously from the render of the sections with your MV patch baked in).
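For anyone curious what that deal/retime bookkeeping looks like outside of Flame, here’s a rough Python/OpenCV sketch. Farneback flow plus a blend is only a crude stand-in for TW-ML, and the frames are assumed to already be in memory as uint8 arrays:

```python
import cv2
import numpy as np

def interp_pair(a, b, t):
    """Crude stand-in for an ML timewarp / fluid morph: warp two frames
    toward each other with dense optical flow and blend at fraction t."""
    ga = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
    gb = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(ga, gb, None, 0.5, 3, 25, 3, 5, 1.2, 0)
    h, w = ga.shape
    grid = np.dstack(np.meshgrid(np.arange(w), np.arange(h))).astype(np.float32)
    warped_a = cv2.remap(a, grid - flow * t, None, cv2.INTER_LINEAR)
    warped_b = cv2.remap(b, grid + flow * (1.0 - t), None, cv2.INTER_LINEAR)
    return cv2.addWeighted(warped_a, 1.0 - t, warped_b, t, 0)

def redeal(kept, step):
    """kept = the painted every-Nth frames; rebuild the full-length clip."""
    out = []
    for a, b in zip(kept[:-1], kept[1:]):
        out.append(a)
        out.extend(interp_pair(a, b, i / step) for i in range(1, step))
    out.append(kept[-1])
    return out

def bridge(last_good, first_good, gap_len):
    """Fill the occluded gap between two clean frames with in-betweens."""
    return [interp_pair(last_good, first_good, (i + 1) / (gap_len + 1))
            for i in range(gap_len)]
```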

2 Likes

to really be pro… paint->stabilize->crop->deal->TW_ML->invert stab->comp

4 Likes

Thanks @ALan. Just tried that, and it’s by far the best result on one of the difficult clips.

One issue I’m running into - for a few reasons I’m doing this on my Mac Studio. The stabilized clip is 6020x2529. MLTW is running but warning about memory overflow and running on a single CPU thread, which is excruciating. It claims that it only has 31GB of free RAM, even though the OS reports 97GB free of the 128GB on the system.

I had to scale the clip down to 720p to get it to run at all.

The original footage is UHD. I could forgo the stabilization and be very patient at UHD, I guess, with 1 instead of 12 CPU threads running. Or I could run it at 720p and then up-rez just that ROI area with a soft mask.
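The up-rez option in rough numpy/OpenCV terms (the 720p result, the UHD plate, and a soft ROI matte are all assumed to already exist):

```python
import cv2
import numpy as np

def comp_upres_roi(uhd_plate, fix_720, soft_matte_uhd):
    """Scale the 720p fix back up and comp it only inside a soft ROI matte.

    uhd_plate      : HxWx3 uint8 original plate
    fix_720        : hxwx3 uint8 low-res MLTW result
    soft_matte_uhd : HxW uint8 matte with blurred edges, at plate resolution
    """
    h, w = uhd_plate.shape[:2]
    up = cv2.resize(fix_720, (w, h), interpolation=cv2.INTER_CUBIC)
    a = (soft_matte_uhd.astype(np.float32) / 255.0)[..., None]
    return (up * a + uhd_plate * (1.0 - a)).astype(np.uint8)
```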

I’m just puzzled by the memory situation. Is that something in Python that can be tweaked to make more memory available for it? On the Mac with unified RAM, in theory that shouldn’t be the old VRAM issue.
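For what it’s worth, back-of-envelope numbers show how steep the resolution penalty is before MLTW even does anything clever (I have no idea what it actually allocates internally):

```python
def frame_mb(width, height, channels=3, bytes_per_sample=4):
    """Rough float32 footprint of a single frame, in MB."""
    return width * height * channels * bytes_per_sample / 2**20

print(frame_mb(6020, 2529))  # ~174 MB per stabilized 6K frame
print(frame_mb(3840, 2160))  # ~95 MB at UHD
print(frame_mb(1280, 720))   # ~10.5 MB at 720p
```

If the retime holds several scaled copies and feature maps per frame pair on top of that, the stabilized 6K frames would add up very quickly where 720p stays tiny.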

You are missing the crop step.

It’s cropped from the original, but then stabilization blows it back up because of the movement in the clip…

But I can add a 2nd crop for just the paint area, that should help…

Stab then crop

1 Like

Also, you always want to do your paint first, in case you need to change your crop or stabilization.
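Spelled out end to end, the order looks roughly like this in Python terms. Every helper and matrix here is a placeholder; the only point is the sequence, and where the inverse stabilization and the final comp sit:

```python
import cv2
import numpy as np

def clean_shot(frames, stab_matrices, roi, step, ml_retime, paint):
    """Illustrative order only: paint -> stabilize -> crop -> deal ->
    ML retime -> invert stabilization -> comp back over the plate.

    frames        : list of HxWx3 uint8 plates
    stab_matrices : per-frame 2x3 affines that stabilize the work area (assumed given)
    roi           : (x, y, w, h) crop around the paint area in stabilized space
    step          : paint/keep every `step`-th frame
    ml_retime     : callable standing in for TW_ML (dealt frames -> full length)
    paint         : callable standing in for the per-frame paint fix
    """
    x, y, w, h = roi
    H, W = frames[0].shape[:2]

    painted = [paint(f) if i % step == 0 else f            # 1. paint first (only the kept frames)
               for i, f in enumerate(frames)]
    stab = [cv2.warpAffine(f, M, (W, H))                   # 2. stabilize
            for f, M in zip(painted, stab_matrices)]
    crops = [f[y:y + h, x:x + w] for f in stab]            # 3. crop tight around the paint area
    dealt = crops[::step]                                  # 4. deal out every Nth frame
    retimed = ml_retime(dealt, len(frames))                # 5. ML timewarp back to full length

    roi_matte = np.zeros((H, W), np.uint8)
    roi_matte[y:y + h, x:x + w] = 255
    out = []
    for plate, patch, M in zip(frames, retimed, stab_matrices):
        canvas = np.zeros_like(plate)
        canvas[y:y + h, x:x + w] = patch
        Minv = cv2.invertAffineTransform(M)                # 6. invert the stabilization
        unstab = cv2.warpAffine(canvas, Minv, (W, H))
        matte = cv2.warpAffine(roi_matte, Minv, (W, H))
        a = (matte.astype(np.float32) / 255.0)[..., None]
        out.append((unstab * a + plate * (1.0 - a)).astype(np.uint8))  # 7. comp over the plate
    return out
```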

2 Likes

1 1/2 steps forward, not there yet…

I tried the every-Nth-frame deal and MLTW to fill in the middle. At some point I had painted 31 of the 378 frames in the shot. The paint part isn’t super complicated, but the results were still pretty messy.

In the latest variation on the theme, I’m dividing the shot into even smaller segments, painting and rendering the segments, and using the fluid morph option in MLTW. It’s a bit of an undertaking on this shot if I proceed for the full length, but I can refine individual segments that way; it’s not all or nothing.

But in the first test section, the results still fell short. MLTW seems to do OK where pixels get compressed, but not so well where pixels get revealed through the transition.

Here’s the original shot (one of 20 total, and one of 5 with this particular setup). The goal is to remove the Nike swoosh (pretty easy) and the bigger logo on the chest. But the movement and occlusions are so pronounced that it’s really hard to get anything to look halfway realistic. I understand that when you stare at individual frames it’s always easier to see the problems than when it plays at speed. But still.

Original: Frame.io
Current test section: Frame.io

The current batch takes this 19-frame section and divides it into 3 segments for fluid morphing.

Stabilization doesn’t matter in this short section.

The problem is at the end where the arm swings back and pixels get revealed. And at one of the segment transitions there’s a noticeable jump, even though the same frame is the end of one fluid morph and the beginning of another one.

It’s just a lot of rapid motion, lots of occlusions, and fabric folding on top of each other.

I’m trying to avoid having to just blur the whole section. I had some limited success with Nuke InPaint, but where the arm and the frame boundary are it struggles, and I’d have to paint in stand-in textures on those frames for InPaint to behave reasonably.

Any additional ideas on how to wrangle this one?

put a link to the full rez camera raw footage. Is it the whole take you are trying to fix?