I hope the hive mind has some good ideas or experience with the following problem:
I want to change an actor’s performance (in terms of what they say). I found this nice paper, but the authors haven’t published any code (I guess due to ethical concerns about abuse of this tech):
I don’t have much experience with deepfakes yet. From what I’ve seen, tools like DeepFaceLab put the face of actor A onto the performance of actor B. Of course I could bring this back onto actor A using traditional comp techniques, but I think there must be a smarter, machine-learning-based way.
I can think of different “classical” ways to approach this, from 2D tracking to motion vectors, GeoTracker in Nuke, etc., but I’m really interested in the machine-learning-based approach.
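For context, the motion-vector idea mentioned above boils down to classic block matching: for each block of the driven frame, exhaustively search the reference frame for the best-matching offset, and use those vectors to drive a warp. A minimal NumPy sketch (function name and parameters are my own, purely illustrative; a real comp pipeline would use dense optical flow or Nuke’s SmartVector instead):

```python
import numpy as np

def block_motion(ref, tgt, block=8, search=4):
    """Exhaustive block matching: for each block-sized patch of tgt,
    find the (dy, dx) offset into ref with the lowest absolute error."""
    h, w = tgt.shape
    vecs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            patch = tgt[y0:y0 + block, x0:x0 + block]
            best_err, best_v = np.inf, (0, 0)
            # brute-force search window of +/- `search` pixels
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    sy, sx = y0 + dy, x0 + dx
                    if sy < 0 or sx < 0 or sy + block > h or sx + block > w:
                        continue
                    err = np.abs(ref[sy:sy + block, sx:sx + block] - patch).sum()
                    if err < best_err:
                        best_err, best_v = err, (dy, dx)
            vecs[by, bx] = best_v
    return vecs

# Example: tgt is ref shifted down 2 px and right 3 px, so interior
# blocks should report the vector (-2, -3) back into ref.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
tgt = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(block_motion(ref, tgt)[1, 1])  # -> [-2 -3]
```

This obviously only captures per-block translation; the appeal of the learned approaches is that they model the full non-rigid deformation of the mouth region plus the re-rendering, which is exactly what this brute-force version can’t do.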
Does anybody have experience with this and can point me in the right direction?
Thanks in advance!
PS: Deepfaking the audio would be interesting, too …