Do you have questions about the ML TW Tool?

Hi Everyone…

Andriy (@talosh) is coming on Logik Live this Sunday to talk about the ML Timewarp tool. I wanted to ask if anyone had questions for Andriy or for the developers of the ML model that he is using. The developers of the model asked for questions in writing, since English is not their first language. So if you have any questions about the model or Andriy’s implementation, please post them here and we’ll get them answered on Sunday!

My only question is, how the hell can we pay him back and thank him enough for this?

First, thanks for all the hard work on this.

I do have a few questions. They may have been answered in the thread already, but it’s too long now and there have been a few versions of the tool released.

  1. Do the timewarp node parameters matter for the ML timewarp results? For example, if it is set to Mix or Motion.

  2. What do the model versions do? Do we always want to use the latest model for the best results?

  3. Could a timewarp like this be implemented as a Flame node? It could be handy to tie it into expressions for comp retiming.

  4. Not necessarily a timewarp question, but would it be possible to bring ML into a Flame artist’s day-to-day workload? Specifically episodic work: same people, same faces week in, week out, to help with beauty work or body extraction for mattes.

Thank you again. It is such an amazing asset to us all.

I would like to know if we as a community can help to train / enhance the model. I don’t know how the training for this model works, but since a lot of us deal with high-framerate footage, I would assume it could be used to train the model (as a reference for how the result should look after a timewarp).
Since most of this footage can’t be shared with the developers due to NDAs and other restrictions, I wonder if there is another (easy and artist-friendly) way to train the model locally and send that training data back, so it can be folded into the model that is shared with everyone else.
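
For context on how high-frame-rate footage could serve as training reference, here is a minimal sketch assuming a RIFE-style setup where the model learns from frame triplets (the folder name and layout are made up for the example): every three consecutive frames of high-FPS material give two inputs plus the real in-between frame to score the model’s guess against.

```python
# Sketch: turning a high-framerate image sequence into training triplets
# for a RIFE-style interpolation model. The "hfr_plate" folder is hypothetical.
from pathlib import Path

def make_triplets(frames):
    """Each triplet is (input A, ground-truth middle, input B): the model
    interpolates between A and B, and the real middle frame scores the result."""
    return [(frames[i], frames[i + 1], frames[i + 2])
            for i in range(len(frames) - 2)]

frames = sorted(Path("hfr_plate").glob("*.exr"))
for first, middle, last in make_triplets(frames):
    print(f"inputs: {first.name} + {last.name}  ground truth: {middle.name}")
```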

I would also like to know about the differences between the existing models. Are they meant for different scenarios, objects, etc.? Or is the latest always the best for everything?

Same here. I can’t help on the Python or ML side as I don’t have any of those skills, but is there a way for me to contribute training data or something else?

Great questions. Thank you! I’ll pass them along.

Hi guys, we were short of time yesterday to answer all the questions, so I’ll try to put the answers in writing here.

  • Do the timewarp node parameters matter for the ML timewarp results? For example, if it is set to Mix or Motion.
    – No, currently the tool just reads the speed or timing curves out of the node, so only those matter. If there’s a specific case where you’d need other parameters, please let me know. (There’s a short sketch of what reading the timing curve amounts to after these answers.)

  • What do the model versions do? Do we always want to use the latest model for the best results?
    – It is in two parts: one part is a “brain” and the other part is the knowledge for that brain. Currently v1 and v2 are slightly different “brains”, and minor versions are different training sets. Depending on the actual footage, different versions might give better results. Unfortunately it is difficult to include them all in a single archive due to the size, so they will be distributed as separate archives. (A sketch of this architecture/weights split follows after these answers.)

  • Could a timewarp like this be implemented as a Flame node? It could be handy to tie it into expressions for comp retiming.
    – It looks like the best way to do it is to create an OFX plugin that takes the image and settings and brings them to the ML engine. It is something that will require more hardcore C++ coding skills; maybe someone can help with putting it together.

  • Not necessarily a timewarp question, but would it be possible to bring ML into a Flame artist’s day-to-day workload? Specifically episodic work: same people, same faces week in, week out, to help with beauty work or body extraction for mattes.
    – It likely depends on the implementation, but it should be possible in general. (See the rough person-matte sketch after these answers.)

  • I would like to know if we as a community can help to train / enhance the model.
    – The best way is to create an issue on the model’s GitHub page and provide the developers with a screenshot of the problematic area and, if possible, the original footage to reproduce the issue. This way it will be possible to fix this and similar issues in upcoming model releases. The upstream model issues page is here:
    Issues · hzwer/arXiv2020-RIFE · GitHub

  • How do we thank Zhewei Huang and his team for making and improving the RIFE algorithm?
    – Check out his GitHub page for options to support their development work. Another idea I had: if you happen to use it on a more or less big title in advertising, episodic work, or film, just let him know on GitHub or via mail (598460606@163.com).
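
As a companion to the first answer above, here is a minimal sketch of what reading a speed/timing curve amounts to (the function and the sample curve values are hypothetical, not the tool’s actual API): each output frame maps to a fractional source-frame position, and the fractional part becomes the blend ratio handed to the interpolation model.

```python
# Minimal sketch: turning a timewarp timing curve into interpolation
# requests for an ML frame-blending model. The curve values are hypothetical;
# the real tool reads them out of the Timewarp node setup.
import math

def interpolation_requests(timing_curve):
    """For each output frame, find the two source frames that bracket the
    fractional source position, plus the blend ratio between them."""
    requests = []
    for output_frame, source_pos in enumerate(timing_curve):
        frame_a = math.floor(source_pos)   # nearest source frame before
        frame_b = math.ceil(source_pos)    # nearest source frame after
        ratio = source_pos - frame_a       # 0.0..1.0 position in between
        requests.append((output_frame, frame_a, frame_b, ratio))
    return requests

# A 50% slowdown: every output frame advances half a source frame.
for request in interpolation_requests([1.0, 1.5, 2.0, 2.5, 3.0]):
    print(request)   # e.g. (1, 1, 2, 0.5): blend frames 1 and 2 halfway
```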
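
To picture the “brain vs. knowledge” split from the second answer in PyTorch terms (the module and file names below are invented for illustration): a major version changes the network definition itself, while a minor version only swaps the trained weights loaded into it.

```python
# Sketch of the architecture/weights split, with hypothetical names.
# The "brain" is the network class; the "knowledge" is the trained state dict.
import torch
from model_v2 import IFNet                # hypothetical v2 architecture ("brain")

model = IFNet()                           # same code for every v2.x release
state = torch.load("flownet_v2.3.pkl",    # hypothetical v2.3 weights file
                   map_location="cpu")
model.load_state_dict(state)              # swapping v2.3 for v2.4 happens here
model.eval()
```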
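
And on the mattes question, one hypothetical starting point (a generic sketch, not part of the ML Timewarp tool): an off-the-shelf segmentation network can already pull a rough person matte out of a frame, which an artist could then refine in Flame.

```python
# Rough person-matte extraction with a pretrained segmentation model.
# Generic sketch only; the frame path is hypothetical.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.segmentation.deeplabv3_resnet101(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame_0001.png").convert("RGB")
batch = preprocess(frame).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"][0]             # [21, H, W] class scores

PERSON = 15                                      # Pascal VOC "person" class
matte = (logits.argmax(0) == PERSON).float()     # 1.0 wherever a person is seen
```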

If nobody objects, I’d like to merge this thread into the original ML Timewarp thread to keep things together for history’s sake.
