flameSimpleML - Flame Machine Learning Source/Target tool with bespoke training

Been having a blast with this. Wanted to see if anyone has tips for getting more out of it, for someone who is new to ML. Like, is there a “safe” number of epochs to run to get good results, or is it more about watching for the min/avg/max loss to settle below a certain threshold? And learning rate… are there any telltale signs that you should turn it up or down?

The default learning rate is set very close to its maximum so that it can learn quickly on a set of similar inputs. If the input data changes a lot, the model might effectively erase what it has learned after every pass. In that case it is worth trying a lower value, say 0.0001 instead of the 0.0034 default.
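
(Not the tool’s actual code, just a generic PyTorch illustration of what that number controls; the one-layer model below is made up.)

```python
import torch

# Hypothetical stand-in for the tool's network.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# A lower lr means smaller weight updates per step, so one unusual batch
# can't wipe out what earlier batches taught the model.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # vs the 0.0034 default
```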

It is difficult to say what number of epochs is good because it depends a lot on the task and on the overall length of the dataset.
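
As a generic illustration (again, not the tool’s code), one practical alternative to a fixed epoch count is to watch the averaged loss and stop once it plateaus, along these lines:

```python
# Instead of a fixed epoch count, watch a smoothed loss and stop once it
# stops improving. `window` and `tol` are made-up values to tune per task.
def plateaued(losses, window=200, tol=1e-5):
    """True once the mean loss over the last `window` epochs is no longer
    meaningfully better than the mean over the window before it."""
    if len(losses) < 2 * window:
        return False
    recent = sum(losses[-window:]) / window
    previous = sum(losses[-2 * window:-window]) / window
    return previous - recent < tol
```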

What kind of things have you been doing?

We do a lot of film remastering, so I’ve mostly been feeding it old damaged film scans + the restored versions. It picked up on gate hair fixes really fast, and I was able to get it removing tracking marks from screen insert plates pretty quickly as well. (I also tried to get it to comp the screen by giving it a second source with the insert, but no go; maybe it needs more reps.)

A fun quick one was finding a plate that had a few focus racks and feeding it sharp frames and blurred frames to get it to recreate nice lens blurs.

I just let it start training on an episode’s worth of wig fixes I did last year; gonna let it run all weekend and see what I get.

6 Likes

For more complex tasks with lots of variance, I would try giving it a lower learning rate and leaving it for at least a couple of days.

1 Like

Please let us know how the wig fixes came out.

The wig fixes weren’t a total bust, but it didn’t fully “pick up” my fix. After about 8k epochs it definitely seems to know that we’re doing something to the hairline, but it’s basically just doing a very, very soft blur. The training loss has stabilized, so I think I need to take a different approach with the dataset. This was all in the LogC I worked in at the time, which is pretty flat; I might try running the same frames in linear and rec709 and see if it makes a difference.

It actually does some dynamic range compression for values over 1 and below 0, so it should be safe to give it linear as well. As an idea, you can give it a high-pass with very cranked-up contrast as the alpha (channel 4) on both input and target, so it has an additional guide on what to focus on. I’m not sure I have properly checked whether it works with a 4-channel target, but the model definitely can.
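
In numpy terms that guide channel might look something like this. Rough, untested sketch: the sigma and gain numbers are made up and would need tuning by eye, and the luma average is just a quick stand-in for a proper matte.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_highpass_alpha(rgb, sigma=4.0, gain=8.0):
    """Pack a high-contrast high-pass of the luma into channel 4.

    Apply the same call to both the input and the target frames so the
    model gets an identical "look here" guide on each side."""
    luma = rgb[..., :3].mean(axis=-1)                  # quick luma stand-in
    highpass = luma - gaussian_filter(luma, sigma)     # high-pass = image - blur
    alpha = np.clip(0.5 + gain * highpass, 0.0, 1.0)   # crank the contrast
    return np.concatenate([rgb[..., :3], alpha[..., None]], axis=-1)
```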

1 Like

Thank you Andriy, this tip was really helpful, and I was able to train a wig-fixing model that gets pretty damn close to what I did.

Here’s what worked for me:
- trained on denoised versions of the source and target
- converted both from log to scene-linear (see the sketch after this list)
- took a 2D histo, cranked the contrast on the detail pass from ls_lumps that I used for the fix, and put the before and after in the alpha channels of the source and target files
- cropped in a bit closer on the hairline to keep the batch size smaller; the training set that worked was 5 RGBA source/target frames at 1024x768
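
For anyone who wants to script that log-to-linear step outside of Flame, something like this should be close. Untested sketch: it assumes the footage is ARRI LogC3 at EI 800 (the published decode constants below), which may not match your material; inside Flame a colour management node does the same job.

```python
import numpy as np

# Published ARRI LogC3 (EI 800) log-to-linear decode constants.
CUT, A, B, C, D, E, F = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809

def logc3_to_linear(t):
    """LogC3 code values (0..1 floats) -> scene-linear."""
    t = np.asarray(t, dtype=np.float32)
    return np.where(
        t > E * CUT + F,                        # breakpoint ~0.1496
        (np.power(10.0, (t - D) / C) - B) / A,  # log segment
        (t - F) / E,                            # linear toe
    )
```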

I kicked that off on Friday night; it was just shy of 28k epochs when I got in this morning, and I applied it to a few of the full clips. They would all need some further manual cleanup on certain frames, but it really is doing a great job overall. Will check the results again at 40k.

3 Likes

Hey @talosh, thank you so much for bringing this to Flame!

I’m having trouble on a Mac M2 Ultra: macOS 14.4.1, Flame 24.2.1, OS Python 3.11.7.

I get this warning: “_multiarray_umath.cpython-310-darwin.so” can’t be opened because Apple cannot check it for malicious software. This software needs to be updated. Contact the developer for more information.

Then I go to the Security tab and I get: “libopenblas64_.O.dylib” was blocked from use because it is not from an identified developer.

I then click Allow Anyway but it loops back to the first warning…

Any idea? Thanks so much!

Hey all…

This is a very impressive effort from Talosh, especially considering he is doing it in his free time, alone, as a hobby, and open source. I spent a good amount of time playing with this. Given the same inputs and a quarter of the training time, Nuke’s CopyCat produced vastly better results.

Not trying to shit on this effort, but this was our experience.

Alan

2 Likes

Hi Alan, I’m not too surprised that CopyCat greatly outperforms it. If you can share more details here or in a private message, I might have a look at what can be done to make it a bit better.

2 Likes

Hi Luc, it looks like Flame 24.2.1 uses Python 3.11, while the dependencies bundled with the tool are built for Python 3.10. I don’t think installing them globally for the Python bundled with Flame is a good idea, but it is easy to install them in a virtual env and then copy the files over to the tool location. I can try to provide step-by-step instructions a bit later.
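
In the meantime, the rough shape of it is below. Untested sketch: the package list and both paths are guesses, not confirmed locations.

```python
# 1) In a terminal, build a venv with the same Python version Flame
#    24.2.1 uses (3.11) and pip install the dependencies there:
#      python3.11 -m venv /tmp/sml_venv
#      /tmp/sml_venv/bin/pip install numpy torch
# 2) Then copy the 3.11 builds over the bundled 3.10 ones:
import shutil

SITE = "/tmp/sml_venv/lib/python3.11/site-packages"
TOOL = "/opt/Autodesk/shared/python/flameSimpleML/packages"  # hypothetical path

for pkg in ("numpy", "torch"):
    shutil.copytree(f"{SITE}/{pkg}", f"{TOOL}/{pkg}", dirs_exist_ok=True)
```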

1 Like

Thanks, Talosh! Are you planning to connect ComfyUI to Flame?

5 Likes

@LucJob That’s a really cool idea!! Hope it will be possible :star_struck:

2 Likes