flameSimpleML - Flame Machine Learning Source/Target tool with bespoke training

Been having a blast with this. Wanted to see if anyone has tips on getting more out of it, for someone who is new to ML. Is there a “safe” number of epochs to run to get good results, or is it more about watching for the loss to settle below a certain min/avg/max threshold? And for learning rate: are there any telltale signs that you should turn it up or down?

The default learning rate is set very close to its maximum so the model can learn quickly on a set of similar inputs. If the input data changes a lot, it might erase itself after every pass. In that case it is good to try a lower value, say 0.0001 instead of the 0.0034 default.
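To make the “erase itself” point concrete, here is a toy sketch in plain Python — nothing to do with flameSimpleML’s internals, and with made-up learning rates — of a single weight trained with SGD on two very different alternating targets:

```python
# Toy illustration: one weight w, squared-error loss, SGD updates against
# targets that alternate every step (mimicking a dataset whose shots look
# very different from each other). Learning rates here are illustrative only.

def train(lr, targets, steps=200):
    w = 0.0
    for i in range(steps):
        t = targets[i % len(targets)]
        grad = 2.0 * (w - t)   # d/dw of (w - t)^2
        w -= lr * grad
    return w

mixed = [0.0, 1.0]  # two very different "shots"

# A large step size chases whichever target it saw last and oscillates...
w_fast = train(0.9, mixed)
# ...while a small step size settles near a compromise between both.
w_slow = train(0.01, mixed)
```

With the small rate the weight ends up near 0.5 (serving both targets); with the large one it keeps overwriting what it learned from the previous target — the same failure mode as a near-maximum learning rate on a highly varied dataset.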

It is difficult to say what number of epochs is good, because it is very dependent on the task and on the size of the dataset.
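Since a fixed epoch count is hard to pick, one generic alternative (not anything built into flameSimpleML — the patience and threshold numbers are made up) is to watch for the loss to plateau:

```python
# Generic early-stopping sketch: stop once the best loss hasn't improved
# by at least min_delta for `patience` consecutive epochs.

def should_stop(loss_history, patience=500, min_delta=1e-4):
    if len(loss_history) <= patience:
        return False
    best_before = min(loss_history[:-patience])
    best_recent = min(loss_history[-patience:])
    # No meaningful improvement over the last `patience` epochs -> plateau.
    return best_recent > best_before - min_delta

# Usage: inside the training loop, after appending this epoch's loss:
#   if should_stop(losses): break
```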

What kind of things have you been doing?

We do a lot of film remastering, so I’ve mostly been feeding it old damaged film scans plus the restored versions. It picked up on gate-hair fixes really fast, and I was able to get it removing tracking marks from screen insert plates pretty quickly as well. (I also tried to get it to comp the screen by giving it a second source with the insert, but no go; maybe it needs more reps.)

A fun quick one was finding a plate that had a few focus racks and feeding it sharp frames and blurred frames to get it to recreate nice lens blurs.

I just let it start training on an episode’s worth of wig fixes I did last year, gonna let it run all weekend and see what I get.


For more complex tasks with a lot of variance, I would try giving it a lower learning rate and leaving it for at least a couple of days.


Please let us know how the wig fixes came out.

The wig fixes weren’t a total bust, but it didn’t fully “pick up” my fix. After about 8k epochs it definitely seems to know that we’re doing something to the hairline, but it’s basically just doing a very, very soft blur. The training loss has stabilized, so I think I need to take a different approach with the dataset. This was all in the LogC that I worked in at the time, and it’s pretty flat; I might try running the same frames in linear and Rec.709 and see if it makes a difference.

It actually does some dynamic range compression for values over 1 and below 0, so it should be safe to give it linear as well. As an idea, you can give it a highpass with very cranked-up contrast as alpha (channel 4) on both input and target, so it has an additional guide on where to focus. I’m not sure I have properly checked that it works with a 4-channel target, but the model definitely can.
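A hypothetical numpy sketch of that alpha-guide idea — the blur radius, gain, and Rec.709 luma weights are my assumptions, and in practice you’d build this with Batch nodes rather than Python:

```python
import numpy as np

def box_blur(img, radius=4):
    # Cheap box blur by averaging shifted copies; a stand-in for a real blur node.
    p = np.pad(img, radius, mode="edge")
    k = 2 * radius + 1
    acc = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            acc += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / (k * k)

def add_highpass_alpha(rgb, gain=8.0):
    """rgb: float image of shape (H, W, 3) in [0, 1].
    Returns (H, W, 4) where the alpha is a contrast-cranked highpass
    of the luminance, centered on 0.5 and clipped to [0, 1]."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709 weights (assumption)
    high = luma - box_blur(luma)                      # highpass = detail only
    alpha = np.clip(0.5 + gain * high, 0.0, 1.0)      # crank contrast, keep it legal
    return np.dstack([rgb, alpha])
```

The same function would be applied to both the input and the target frames, so the extra channel stays consistent across the pair.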


Thank you Andriy, this tip was really helpful, and I was able to train a wig-fixing model that gets pretty damn close to what I did.

Here’s what worked for me:
- trained on denoised versions of the source and target
- converted both from log to scene-linear
- took a 2D histo and cranked the contrast on the detail pass from ls_lumps that I used for the fix, and put the before and after in the alpha channels of the source and target files
- cropped in a bit closer on the hairline to keep the batch size smaller; the training set that worked was 5 RGBA source/target frames at 1024x768
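For the log-to-linear step, this is the underlying math, sketched in Python with the published ARRI LogC3 (EI 800) constants — the curve/EI is an assumption on my part since the footage isn’t specified, and in Flame you’d just use colour management rather than code:

```python
import math

# ARRI LogC3 (EI 800) constants, as published by ARRI.
CUT, A, B, C, D, E, F = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809

def logc3_to_linear(t):
    """Decode a LogC3-encoded code value to scene-linear."""
    if t > E * CUT + F:
        return (10.0 ** ((t - D) / C) - B) / A
    return (t - F) / E

def linear_to_logc3(x):
    """Encode scene-linear to LogC3."""
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F
```

Sanity check: scene-linear 18% grey (0.18) encodes to roughly 0.391 in LogC3 EI 800, and the two functions round-trip.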

I kicked that off on Friday night; it was just shy of 28k epochs when I got in this morning and applied it to a few of the full clips. They would all need some further manual cleanup on certain frames, but it really is doing a great job overall. Will check the results again at 40k.
