With the amazing release of Inference support, it would be great to add this in a future release, so I've submitted a feature request. If you think it's a good idea, please help by upvoting here:
The current Inference node works with RGB channels, but there are models that would benefit from multi-channel support. One example is ViTMatte, a model built on a “new type of AI”, the ViT encoder, which has proven more and more to be extremely useful. I've been using it on projects, and you can see a breakdown of one here: Frame.io
ViTMatte uses the well pre-trained ViT encoder with a lightweight head to produce key-quality mattes; think of it as a new form of trimap-based matting for details such as hair and other fine edges.
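For anyone who wants to try it outside Flame first, here is a minimal sketch of running ViTMatte through the Hugging Face transformers API. The checkpoint name and file paths are just examples, and you need to supply your own image and trimap:

```python
import torch
from PIL import Image
from transformers import VitMatteImageProcessor, VitMatteForImageMatting

# Example checkpoint; other ViTMatte variants exist on the Hub.
ckpt = "hustvl/vitmatte-small-composition-1k"
processor = VitMatteImageProcessor.from_pretrained(ckpt)
model = VitMatteForImageMatting.from_pretrained(ckpt)

# Placeholder paths: an RGB plate plus a single-channel trimap
# (black = background, white = foreground, grey = unknown).
image = Image.open("plate.png").convert("RGB")
trimap = Image.open("trimap.png").convert("L")

inputs = processor(images=image, trimaps=trimap, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Predicted alpha matte, shape (batch, 1, H, W), values in [0, 1].
alpha = outputs.alphas[0, 0].numpy()
```

The trimap is exactly the extra channel the feature request is about: the model consumes RGB plus the trimap as a fourth input channel, which a single RGB input can't carry.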
@fredwarren Hi Fred! Is the ONNX output being quantized within Flame? I've tried some models in Flame and they don't give the same results as outside (in Colab or locally); the accuracy in Flame is not as high as when running outside of Flame.
Is there a way to run them at the highest precision level?
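One way to check on your own side whether reduced precision alone could explain the gap: build an fp16 copy of the model and compare it against the fp32 original in ONNX Runtime. A sketch, assuming onnx, onnxruntime, and onnxconverter-common are installed; the model path and input shape are placeholders:

```python
import numpy as np
import onnx
import onnxruntime as ort
from onnxconverter_common import float16

# Load the fp32 model and make an fp16 copy to mimic reduced precision.
model = onnx.load("model.onnx")
onnx.save(float16.convert_float_to_float16(model), "model_fp16.onnx")

# Placeholder input shape; match your model's expected input.
x = np.random.rand(1, 3, 512, 512).astype(np.float32)

sess32 = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
sess16 = ort.InferenceSession("model_fp16.onnx", providers=["CPUExecutionProvider"])

name = sess32.get_inputs()[0].name
y32 = sess32.run(None, {name: x})[0]
y16 = sess16.run(None, {name: x.astype(np.float16)})[0]

print("max abs diff fp32 vs fp16:", np.abs(y32 - y16.astype(np.float32)).max())
```

If the fp16 run matches what you see in Flame, precision would be the suspect; if not, the difference is likely elsewhere (for example in the colour transform described below).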
The two things you should know about the process are:
1- Linear clips are converted to Log (a rough illustration of what that remap does is sketched below).
2- All the processing is done in 32-bit.
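To make point 1 concrete, here is a rough Cineon-style linear-to-log encode in NumPy. This is only an illustration of the kind of remap involved, not necessarily the exact curve Flame applies:

```python
import numpy as np

def lin_to_log(lin, ref_white=685.0, ref_black=95.0, gamma=0.6):
    """Cineon-style scene-linear -> normalized log code values.
    Illustrative only; Flame's internal curve may differ."""
    offset = 10.0 ** ((ref_black - ref_white) * 0.002 / gamma)
    cv = ref_white + np.log10(lin * (1.0 - offset) + offset) * gamma / 0.002
    return cv / 1023.0

linear = np.linspace(0.0, 1.0, 5)
print(lin_to_log(linear))  # 0.0 -> ~0.093 (code 95), 1.0 -> ~0.670 (code 685)
```

A model that was trained (or tested in Colab) on linear or sRGB frames sees very different pixel statistics after a remap like this, which alone can shift its output even with full 32-bit processing.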
If possible, please contact our support team and give them your model (and possibly a clip) so we can reproduce the issue on our side and have a look.