Inference Experience / Problems

Before I file a support ticket, I thought I’d start a thread to collect the experiences folks have had with the new Inference node / ONNX models.

I just tried to use the DepthAnythingv2 model, and it’s not going smoothly.

This is on a timeline, with a gap effect added as a BFX. The footage is 4K Canon Log, transcoded to ProRes.

Pretty simple BFX: back node, color mgmt (input transform), Inference, 2D Histogram, and y_lens.

It works most of the time while in Batch. Occasionally I get ‘no result’ and ‘can’t start inference engine’ errors. At first I thought it was a conflict with one of the old ADSK ML nodes.

Got it to work, but then every time I render the timeline, I get an out-of-memory error…

The smaller model should handle 4K footage, but I also tried resizing to HD inside the BFX; same result.

I’ll now convert it to a batch group and see if I can render it out there, just in case it’s a memory-management issue inside a timeline render with BFX.

PS: Running this on Linux with 24 GB VRAM, Flame 2025.2.

On the Nuke side, this model quickly runs out of memory above 2K on an A6000.


Got it to work by doing it in a batch group, switching Inference to fp16, and enabling the ML engine cache.

Definitely requires a bit of experimentation. Good to collect experiences.
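If anyone wants the model weights themselves in fp16 (rather than just the node setting), an offline conversion of the ONNX file is possible too. A minimal sketch, assuming the onnx and onnxconverter-common Python packages; the file names are made up:

```python
# Convert an ONNX model's float32 weights to float16 offline.
import onnx
from onnxconverter_common import float16

model = onnx.load("depth_anything_v2.onnx")           # hypothetical input file
model_fp16 = float16.convert_float_to_float16(model)
onnx.save(model_fp16, "depth_anything_v2_fp16.onnx")  # smaller, fp16 weights
```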


I have not had issues with DepthAnythingv2, but I have noticed that ViTMatte in Nuke can run at significantly higher resolutions than in Flame’s Inference node on the exact same machine. I get the same error in the shell when I try it on anything above roughly 1024x1024.

The ViTMatte ONNX needs to be rewritten/re-converted with a higher or adaptive input size; the same was done for the Nuke Cattery version and some of the ONNX models here.
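Roughly, a re-export with adaptive (dynamic) spatial dimensions looks like this in PyTorch. A minimal sketch only: the stand-in network and the RGB+trimap input layout are assumptions, not the actual ViTMatte architecture.

```python
import torch
import torch.nn as nn

# Stand-in network; substitute the real matting model loaded with its weights.
model = nn.Sequential(nn.Conv2d(4, 8, 3, padding=1), nn.Conv2d(8, 1, 3, padding=1))
model.eval()

# Fixed-size dummy input used only for tracing (RGB + trimap assumed).
dummy = torch.randn(1, 4, 1024, 1024)

torch.onnx.export(
    model,
    dummy,
    "vitmatte_dynamic.onnx",
    input_names=["input"],
    output_names=["alpha"],
    # Declaring height/width dynamic lets the runtime accept other resolutions
    # instead of baking the 1024x1024 tracing size into the graph.
    dynamic_axes={
        "input": {2: "height", 3: "width"},
        "alpha": {2: "height", 3: "width"},
    },
    opset_version=17,
)
```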


I managed to get ViTMatte running on 2428x1363 footage with a 24 GB GPU. The thing is, your VRAM needs to be otherwise empty in order to run this!

I’ll need testers to see how high it can go on 48 GB VRAM GPU cards.

But the key point is that this is not a fixed-size input, and honestly, 2428x1363 is quite a decent resolution!



Hi everyone,
I was trying to run the inference model realSR_BSRGAN_s64w8_SwinIR-L_x4_GAN.
It’s not working; I’m getting the following error message, repeated for each attempt:

***ErrorMsg: Non-zero status code returned while running Reshape node. Name:‘/Reshape_2’ Status Message: /home/sansrem/git/onnxruntime/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:45 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) input_shape_size == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,200,300,240}, requested shape:{1,25,8,37,8,240}

Could not perform inference: Error [Non-zero status code returned while running Reshape node. Name:‘/Reshape_2’ Status Message: /home/sansrem/git/onnxruntime/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:45 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) input_shape_size == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,200,300,240}, requested shape:{1,25,8,37,8,240}
]
Could not perform inference
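Looking at the numbers in the message, the requested shape {1,25,8,37,8,240} reads like an 8x8 window partition (the model name has w8 in it): 25*8 = 200 matches the height, but 37*8 = 296 doesn’t match the width of 300. If that reading is right, both spatial dimensions would need to be divisible by 8. A quick check of the arithmetic:

```python
# Checking the shapes from the error message against an 8x8 window partition.
input_shape = (1, 200, 300, 240)  # from the error: N, H, W, C
window = 8                        # the "w8" in the model name, presumably

_, h, w, c = input_shape
print(h % window == 0)  # True:  200 = 25 * 8
print(w % window == 0)  # False: 300 = 37 * 8 + 4, so the reshape fails
```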

Any ideas on a workaround?

I’m facing similar problems with some other inference models as well.

Thanks in advance

This is due to VRAM capacity.

How much VRAM do you have?
What is your input rez? 1080p? 2K? 4K? Keep in mind you are using a 4x upscaler, so if you are using a 2K clip it’ll upscale to 8K, which needs a lot of VRAM.
What’s the VRAM usage right before running this model?
Try running it with VRAM usage otherwise empty.

I’d say first try a low-rez source, say a colour-source grain at 720p, then apply this 4x upscaler and see how high you can go.
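To put rough numbers on it (illustrative only, and just the output buffer; the intermediate activations inside the network are usually many times larger):

```python
# Back-of-the-envelope: output size of a 4x upscale of a 2K frame.
w, h = 2048, 1080
scale = 4
out_pixels = (w * scale) * (h * scale)   # 8192 x 4320 ~= 35.4 Mpx
out_bytes = out_pixels * 3 * 4           # RGB, fp32 = 4 bytes per channel
print(out_bytes / 2**30, "GiB")          # ~0.4 GiB for the output alone
```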


Hi @cristhiancordoba
How do I find out how much VRAM I have?
I’m upscaling a small image.

See the Resource Manager.
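Outside of Flame, on a Linux/NVIDIA box you can also query it from the driver with the standard nvidia-smi tool, e.g. from Python:

```python
# Print each GPU's name, total VRAM, and current VRAM usage via nvidia-smi.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total,memory.used", "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
```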

Thanks @cristhiancordoba
I found it

Hi Cristhian, how did you manage to load it into Flame? It gives me an error message.

Are you on 2025.1 rather than 2025.2? Alpha channel support was introduced with 2025.2.

thanks

Hi, I was trying, with no luck, the ViTMatte inference node that @cristhiancordoba prepared, and after a few tests by me and him, Cristhian discovered that it seems to be a problem with 2025.2. On 2025.1 it seems to work fine. Has anyone had the same issue, or can anyone guess a possible cause?

Yes, @cristhiancordoba pinged me to test it, and I tried it on two systems on 2025.2.1 with mixed results, none of them usable.

I think there are quite a few variables that still have to be sorted out with the Inference node. It’s a really good starting point and concept, but when you look at color spaces and the other variables that come into play (how a model was trained versus what you are feeding it), it’s a non-trivial problem, and I don’t think all of it has been solved.

If you compare it with the recent match grain node, color space matters, but it’s mostly the input color space in that case, which is more contained.

With Inference, it seems the image needs to be fed in a way that’s identical to how the training data was fed to the model. Since the training happened at a different time, by different people, it may not be as transparent, and right now the node doesn’t have enough controls to help you match things up.

For example, yesterday on ViTMatte I got vastly different results when I fed it scene-linear vs. Rec709 (converted, not tagged). Clearly this type of detail matters.
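As an illustration of the kind of mismatch I mean, here is a minimal sketch of converting a scene-linear frame through the Rec.709 OETF before handing it to a model, on the assumption (which is exactly the unknown part) that the model was trained on display-referred Rec.709 imagery:

```python
import numpy as np

# Rec.709 OETF (ITU-R BT.709): V = 4.5*L for L < 0.018, else 1.099*L^0.45 - 0.099
def linear_to_rec709(x: np.ndarray) -> np.ndarray:
    x = np.clip(x, 0.0, 1.0)
    return np.where(x < 0.018, 4.5 * x, 1.099 * np.power(x, 0.45) - 0.099)

frame_linear = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in frame
frame_for_model = linear_to_rec709(frame_linear)  # what the Inference input sees
```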

Mind you, these notes apply particularly to the matte-extraction models, where high precision matters and where training and inference are time-shifted (as opposed to CopyCat, where they’re correlated).

I believe the Inference node will eventually get there after two or three more releases, but in the meantime I would consider it experimental: it may work or it may not.

Also, it changes between releases and may not be backwards compatible. Without a clear understanding of what the differences are, it’s hard to navigate.

What would be worth a look is how Cattery fares on these aspects, and whether it has a more stable/predictable approach. Or is it the same quicksand?


Hi everyone,
which inference model do you find best for human full-body matte extraction?
Is there any difference between using the CPU and the GPU besides performance?

Thanks in advance