Changing an actor’s performance with deepfakes

Hey everybody,

I hope the hive mind has some ideas or experience to help solve the following problem:

I want to change an actor’s performance (in terms of what he says). I found this nice paper, but the authors haven’t published any code (I guess due to ethical concerns about misuse of this tech):

I don’t have much experience with deepfakes yet. From what I’ve seen, tools like DeepFaceLab put the face of actor A onto the performance of actor B. Of course I could bring this back onto actor A using traditional comp techniques, but I think there must be a smarter, machine-learning-based way.
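
For what it’s worth, the “comp it back onto actor A” step can already be surprisingly light in Python with OpenCV’s Poisson blending. A minimal sketch, assuming you already have a generated face frame and a matte per frame (all file names here are made up):

```python
# Hypothetical sketch: composite an ML-generated face crop back onto
# the original plate with OpenCV's Poisson blending (cv2.seamlessClone).
# File names and the matte are placeholders for whatever your pipeline produces.
import cv2
import numpy as np

plate = cv2.imread("actor_a_plate.0001.png")        # original frame of actor A
generated = cv2.imread("generated_face.0001.png")   # ML output, same resolution
mask = cv2.imread("face_matte.0001.png", cv2.IMREAD_GRAYSCALE)  # white = replace

# The centre of the masked region is where seamlessClone anchors the blend.
ys, xs = np.nonzero(mask)
center = (int(xs.mean()), int(ys.mean()))

# NORMAL_CLONE keeps the generated texture; MIXED_CLONE favours plate gradients.
comp = cv2.seamlessClone(generated, plate, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("comp.0001.png", comp)
```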

I can think of different “classical” ways to approach this, from 2D tracking and motion vectors to GeoTracker in Nuke, etc., but I’m really interested in the machine-learning-based approach (see the sketch below for where the classical route would start, just for reference).
Does anybody have experience with this and can point me in the right direction?
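
For reference, the classical 2D-tracking route boils down to per-point motion vectors, e.g. via pyramidal Lucas-Kanade in OpenCV. The seed points below are hypothetical mouth-corner coordinates; in practice they’d come from a tracker or a face landmarker:

```python
# Hypothetical sketch of the classical route: track mouth corners across
# frames with pyramidal Lucas-Kanade optical flow (the same idea a 2D
# tracker in Nuke uses). Coordinates and file names are placeholders.
import cv2
import numpy as np

prev = cv2.imread("plate.0001.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("plate.0002.png", cv2.IMREAD_GRAYSCALE)

# Seed trackers on the mouth corners (hand-picked here, shape (N, 1, 2)).
pts = np.array([[[812.0, 604.0]], [[901.0, 611.0]]], dtype=np.float32)

# Pyramidal LK returns the moved points plus a per-point success flag.
new_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, pts, None, winSize=(21, 21), maxLevel=3)

for p0, p1, ok in zip(pts.reshape(-1, 2), new_pts.reshape(-1, 2), status.ravel()):
    if ok:
        print("track moved by", p1 - p0)  # offset to drive a matchmove/comp
```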

Thanks in advance! :slight_smile:
Cheers,
Claus

PS: Deepfaking the audio would be interesting, too … :smiley:

From my limited experience, a deepfake will get you 80 to 90% of the way there. But that last 10 to 20% is damn near impossible to achieve unless you write custom tools from scratch and have the time and development cycles to make that happen.

I’ve seen several high-profile deepfake jobs pulled from major VFX studios and executed in traditional ways.