Hi All
Presenting Depth Anything V2 for Flame
I just converted the Depth Anything V2 models to work inside of Flame with high-rez calculations. You can use them inside of Flame 2025.1+.
There are two models: Base, which has 97.5M parameters, and Large, which has 335.3M parameters and so captures more detail. Why am I uploading both? Because one may work better for you in terms of detail and performance. Why performance? Because with the larger backbone and bigger parameter count you need more VRAM to run it, i.e. users with 12GB of VRAM are quite limited.
depth_anything_v2_vitb = Base
depth_anything_v2_vitl = Large
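To get a feel for the VRAM difference between the two, here is a rough back-of-the-envelope sketch (parameter counts are the ones from this post; the math covers weights only, since activation memory grows with resolution, so treat these numbers as floors, not actual usage):

```python
# Approximate memory footprint of the model weights alone.
# fp32 = 4 bytes/param, fp16 = 2 bytes/param. Real inference VRAM
# is higher because activations scale with the processing rez.
PARAMS = {
    "depth_anything_v2_vitb": 97.5e6,   # Base
    "depth_anything_v2_vitl": 335.3e6,  # Large
}

for name, n in PARAMS.items():
    fp32_gb = n * 4 / 1024**3
    fp16_gb = n * 2 / 1024**3
    print(f"{name}: ~{fp32_gb:.2f} GB fp32 / ~{fp16_gb:.2f} GB fp16 (weights only)")
```

So the weights themselves are small; it is the high-rez activations that eat the VRAM.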
Here is the download:
depth_anything_v2_vitb
depth_anything_v2_vitl
They have already been submitted to the Logik Portal.
Please note these are INFERENCE NODES, not ONNX nodes.
If anyone needs these inference nodes to work at even higher rez, feel free to ask, but I would need someone's help with a high-RAM machine in order to compile them. In my case, with 128GB of RAM I was not able to compile at higher rez, so I think we would need 256GB+ of RAM. Also, take into consideration that if you go with higher rez, you'll need really high VRAM (24GB is not enough!).
PS: I only converted these models; I am NOT their owner/creator.
Also, I think we will need to find a host for these inference/ONNX models, as this list will grow quickly, so we'll need some hosting like the Matchbook site for Matchbox shaders.
Enjoy.