Multi-channel Inference support

With the amazing release of Inference support, I'd love to see this added in future releases, so I've submitted a feature request. If you think it's a good idea, please help by upvoting it here:

https://feedback.autodesk.com/project/feedback/view.html?cap=5afe6c84-5cb3-447a-b36c-cbd7f0688f84&uf=b92a0c3a-3253-47e3-bde9-39d3ed1b61e4&nextsteps=1&nextstepsf=8fbadb72-c1b4-44ad-b55e-888e3fdc17e4&nextstepsuf=b92a0c3a-3253-47e3-bde9-39d3ed1b61e4

Why:

The current Inference node works with RGB channels, but there are models that benefit from multi-channel support. One example is ViTMatte, a model built on the well pre-trained ViT encoder, which has proven to be extremely useful. I've been using it on projects and you can see a breakdown of one here: Frame.io

ViTMatte uses the well pre-trained ViT encoder with a lightweight head to produce key-quality mattes; think of it as a new form of trimap-based matting for details such as hair and other tiny edges.
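To make the multi-channel part concrete, here is a minimal onnxruntime sketch of how a ViTMatte-style model is usually fed outside of Flame. The model path, tensor names, and the 4-channel (RGB + trimap) layout are assumptions based on common ViTMatte exports, so treat it as an illustration rather than a drop-in recipe:

```python
# Sketch: feeding a ViTMatte-style ONNX model a 4-channel input
# (RGB image + 1-channel trimap). File names and tensor names are placeholders.
import numpy as np
import onnxruntime as ort
from PIL import Image

sess = ort.InferenceSession("vitmatte.onnx", providers=["CPUExecutionProvider"])

image = np.asarray(Image.open("plate.png").convert("RGB"), dtype=np.float32) / 255.0
trimap = np.asarray(Image.open("trimap.png").convert("L"), dtype=np.float32) / 255.0

# Concatenate RGB + trimap into a single 4-channel tensor, NCHW layout.
x = np.concatenate([image, trimap[..., None]], axis=-1)   # H x W x 4
x = np.transpose(x, (2, 0, 1))[None, ...]                 # 1 x 4 x H x W

input_name = sess.get_inputs()[0].name
alpha = sess.run(None, {input_name: x})[0]                # 1 x 1 x H x W matte

Image.fromarray((alpha[0, 0] * 255).clip(0, 255).astype(np.uint8)).save("matte.png")
```

The point being: the Inference node currently only lets you feed the RGB channels, so there is no way to pass the trimap (or any other extra channels) alongside them.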

ONNX Multi-channel here:


11 Likes

Also:

FI-03326 - Machine Learning Timewarp - multi channel capabilities

5 Likes

Upvoted both requests.

1 Like

Upvoted! Thanks for all your efforts with the ML stuff in Flame @tpo

FLAME ON!!!

5 Likes

@fredwarren Hi Fred! Is the ONNX output being quantized within Flame? I've tried some models in Flame and they don't give the same results as outside (on Colab or locally); the output accuracy in Flame is not as high as when running outside Flame.

Is there a way to run them at the highest precision level?
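For reference, this is roughly how I'm running the same model outside of Flame for the comparison (a minimal onnxruntime sketch; the model path and file names below are placeholders):

```python
# Rough comparison setup: run the same ONNX model locally in full float32
# and compare its output against a render exported from Flame.
# "model.onnx" and the .npy file names are placeholders.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

x = np.load("input_frame.npy").astype(np.float32)            # same frame fed to Flame
local_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]

flame_out = np.load("flame_render.npy").astype(np.float32)   # the Inference node result
print("max abs diff:", np.abs(local_out - flame_out).max())
print("mean abs diff:", np.abs(local_out - flame_out).mean())
```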

Best,
Cristhian

2 Likes

I believe the Inference node converts all input to 32-bit log internally.

1 Like

The two things you should know about the process are:

1- Linear clips are converted to Log.
2- All the processing is done in 32-bit.
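
If you are comparing against a run outside of Flame, that means the model sees something like the following (a simplified sketch; the log10-based encode is only a generic stand-in, not the exact curve used internally):

```python
# Simplified sketch of the two points above: the linear input is log-encoded
# before inference, and everything runs in 32-bit float.
import numpy as np

def lin_to_log(linear: np.ndarray) -> np.ndarray:
    # Generic placeholder log encode (maps 0 -> 0, 1 -> 1); substitute the
    # curve your own pipeline actually uses.
    return np.log10(np.clip(linear, 1e-6, None) * 9.0 + 1.0)

frame_linear = np.load("frame_linear.npy").astype(np.float32)    # placeholder file
model_input = lin_to_log(frame_linear).astype(np.float32)        # 32-bit, log-encoded
# ...run the ONNX model on model_input, then invert the encode on the output if needed.
```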

If possible, please contact our support team and give them your model (and possibly a clip) so we can reproduce the issue on our side and have a look.

3 Likes

Thanks Fred for the insight!

Case submitted! Hopefully we can fix this.

You were right, Alan. I've also tried applying Log2Linear and Linear2Log conversions, as well as 8-bit, 10-bit, 12-bit, 16-bit, and 32-bit inputs, and nothing worked!