This is a “CopyCat”-style model and a set of scripts that let you train it on your own “source/target” data.
The training script is command-line only at the moment. To train your model, create a folder somewhere for your dataset and, inside it, two more folders named “source” and “target”. Export your training data there as uncompressed EXR sequences with no alpha channel. The script assumes that the “source” and “target” sequences have the same dimensions and the same number of frames, and that the EXRs are uncompressed.
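To illustrate the expected layout, here is a minimal sanity check you could run before training. It is just a sketch and not part of the actual script; the folder names come from the description above, and the path at the bottom is hypothetical:

```python
import os

def check_dataset(dataset_dir):
    """Sanity-check a dataset folder before training.

    Assumes the layout described above: dataset_dir containing
    'source' and 'target' subfolders with matching exr sequences.
    Illustrative only, not part of the actual training script.
    """
    source = sorted(f for f in os.listdir(os.path.join(dataset_dir, 'source'))
                    if f.lower().endswith('.exr'))
    target = sorted(f for f in os.listdir(os.path.join(dataset_dir, 'target'))
                    if f.lower().endswith('.exr'))
    if not source or not target:
        raise ValueError('source and target folders must both contain exr frames')
    if len(source) != len(target):
        raise ValueError(f'frame count mismatch: {len(source)} source vs {len(target)} target')
    print(f'{len(source)} frame pairs found - looks good')

check_dataset('/path/to/my_dataset')  # hypothetical path
```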
When you run the script, it creates a third folder, “preview”, inside the dataset folder, where you can monitor the progress of the model (hopefully) getting smarter.
Training does not take a lot of GPU RAM and can run in the background, so you can continue to use Flame.
One of the simple tests I’ve been using while writing it is to give it the same sequence in colour as the target and in black-and-white as the source, and teach it to colourise frames.
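If you want to reproduce that test, here is a rough sketch of deriving a black-and-white “source” from an existing colour “target” sequence. It assumes the classic OpenEXR/Imath Python bindings and plain RGB channels; the function name and the Rec.709 luma weights are my own choices, not part of the tool:

```python
import os
import numpy as np
import OpenEXR
import Imath

def make_bw_source(target_dir, source_dir):
    """Write a greyscale copy of every exr in target_dir into source_dir.

    Illustrative only; assumes RGB exrs and the classic OpenEXR
    python bindings. Output is written uncompressed, as the
    training script expects.
    """
    os.makedirs(source_dir, exist_ok=True)
    pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)
    for name in sorted(f for f in os.listdir(target_dir) if f.lower().endswith('.exr')):
        exr = OpenEXR.InputFile(os.path.join(target_dir, name))
        dw = exr.header()['dataWindow']
        w = dw.max.x - dw.min.x + 1
        h = dw.max.y - dw.min.y + 1
        r, g, b = (np.frombuffer(exr.channel(c, pixel_type), dtype=np.float32).reshape(h, w)
                   for c in ('R', 'G', 'B'))
        # Rec.709 luma as a simple stand-in for a proper bw conversion
        luma = (0.2126 * r + 0.7152 * g + 0.0722 * b).astype(np.float32).tobytes()
        out_header = OpenEXR.Header(w, h)
        out_header['compression'] = Imath.Compression(Imath.Compression.NO_COMPRESSION)
        out = OpenEXR.OutputFile(os.path.join(source_dir, name), out_header)
        out.writePixels({'R': luma, 'G': luma, 'B': luma})
        out.close()
```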
Trained model data is saved every 1000 iterations into “homefolder/flameSimpleML_models/” as a .pth file.
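If you’re curious what’s inside a saved checkpoint, a generic way to peek at a .pth file is shown below. The exact contents depend on how the script structures its saved state, and the file name here is hypothetical, so treat this as a sketch:

```python
import os
import torch

# Hypothetical file name; use whatever .pth your training run actually saved
ckpt_path = os.path.expanduser('~/flameSimpleML_models/my_model.pth')
checkpoint = torch.load(ckpt_path, map_location='cpu')

# A .pth is usually a state dict of tensors, or a dict wrapping one;
# listing keys and shapes is a quick way to see what was saved
if isinstance(checkpoint, dict):
    for key, value in checkpoint.items():
        shape = tuple(value.shape) if torch.is_tensor(value) else type(value).__name__
        print(key, shape)
```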
To apply a model, select it from the menu and navigate to that folder to load it. You can use “F1 / F4” to compare before / after.
This is a very first release. I’ve been testing it mostly on Linux with Flame 2023.3, and it has had some very limited testing on a Mac Mini M2 with the Flame 2025 tech preview. It might work on Intel Macs if you sort out the PyTorch and NumPy dependencies (give me a shout if you’d like to try).
I’ve used CopyCat a bit on a current job. I’ll try to rerun some of those scenarios with this script to test and compare. It might be a few weeks before I get to it, though.
The reason for those potential problems is data being used to train ML models without consent, not the models per se. This tool lets you train on your own data.
I think that’s exactly right: the honeymoon phase with ML, where everyone just swooned instead of asking questions, is over. Now the corporate lawyers are saying ‘not so fast’, and they are definitely right to do so.
These AI tools have two components: the technology and the data. The technology is evolving and getting better, but the breakthrough in the last year has been training on very large datasets. And it’s those datasets that are quickly becoming the issue. We briefly discussed in the original thread where the training data for these video tools comes from, and there are definitely valid issues. Many of those datasets were created for research purposes, not commercial use, and consent requirements are very different for those two scenarios.
It’s the equivalent of us thumbing our noses at stock agencies like Getty and ActionVFX and just grabbing files from the Internet for our next Super Bowl ad. Except many of the data owners don’t know their material has been used, and they don’t have the resources to do anything about it, unlike Getty.
That said, I think it would be helpful for companies like ADSK to have some of these conversations more publicly rather than just withholding tools without context. That would be the first step toward creating the broader awareness required to marshal the resources to fix it.
In the meantime, tools that you can train yourself (@talosh’s effort here and Nuke’s CopyCat) can step into the gap to a certain degree. But make no mistake, it’s not the same thing. I use CopyCat regularly and with success, but it’s a different use case from what pre-trained models on large datasets can accomplish, so it’s not a replacement. Some of these quite useful tools only work if trained on massive datasets, and that is not something it’s economical for individual artists to do.
I can give you some insight (even though a good part of this has already been answered).
The biggest reason we stopped delivering ML tools for the moment is that we are indeed looking at all the legal implications of using datasets that may or may not (that’s the whole point for now; nobody really knows) be illegal to use. We now have customers asking us for lists of the datasets used to train our tools, and if those datasets are not explicitly free of licence restrictions, they won’t use our product. That said, it doesn’t mean we are not working on new tools and searching for ways to deliver them in a totally legal way.
This may not sound right in writing, but keep in mind that if Andriy can deliver this tool, it’s because Autodesk takes the time to develop and maintain platforms for vendors and users to do so. The goal here is not to take credit for what has been done, but please realise that if you can use OpenFX tools from vendors, shaders from other artists, or ML tools from Andriy, it’s because we took the time to implement and maintain OpenFX, Matchbox, Lightbox, Pybox, Python, Custom Actions, etc. This is no different from what other vendors do (think of Gizmos in Nuke). It gives multiple vendors and users the opportunity to come up with more tools than we could deliver ourselves, and lets users decide which ones they want instead of us deciding for them. And of course, developing and maintaining these platforms doesn’t come for free; it takes a lot of time to do so.