Thanks for the great share. So, do you know a good upscaling workflow that works well? I've done a few projects with AI, but I run into issues like texture loss and blurry images when upscaling. I've also seen working workflows that can produce 16-bit EXR images; I think everything is getting deeper.
@andymilkis By noisy, do you mean bad random noise, or that the model created new elements in the patched area that look like artifacts? I went over that in the video: this model can reason about and create elements to make a seamless transition that links to what's outside the mask region. For example:
if you mask a person for cleanup, and that person's shadows fall outside the mask, the chances of the model just creating a new person are huge, since that would be the most logical way to generate a seamless blend that interacts with the shadows. That is where the seed values + mask control come into play.
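On the mask-control side, one common workaround (a general inpainting trick, not necessarily this model's exact workflow) is to dilate the mask so it also covers the person's shadow; the model then inpaints the shadow region too, instead of inventing a new person to explain the leftover shadow. A minimal pure-numpy sketch (the `dilate` helper is hypothetical, not a ComfyUI node):

```python
import numpy as np

def dilate(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Grow a binary mask by one pixel per iteration (4-connected neighbors)."""
    m = mask.astype(bool)
    for _ in range(iterations):
        # OR the mask with itself shifted up/down/left/right.
        m = m | np.roll(m, 1, 0) | np.roll(m, -1, 0) | np.roll(m, 1, 1) | np.roll(m, -1, 1)
    return m

# A single masked pixel grows into a plus-shaped region of 5 pixels:
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
grown = dilate(mask, iterations=1)
print(grown.sum())
```

Note that `np.roll` wraps around the image edges, so for a real mask you would pad it first (or use `scipy.ndimage.binary_dilation`).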
Try grabbing the sample footage and replicating the results I showed; that will be the fastest way to find out whether it's an installation issue or a mask issue. Attached.
@cnoellert @Sean1985 sadly this is an 80GB++ VRAM model, since my port uses the native weights. Some report it working on 48GB, but with hacks. The ComfyUI team has been working on a quantized version that would allow smaller cards, but so far the results show heavy degradation due to GGUF or lower-precision weights :(((
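For anyone wondering where figures like 80GB come from, here is a back-of-the-envelope sketch of weight memory versus precision; the 40B parameter count is my own illustrative assumption, not a confirmed spec of this model:

```python
def weight_memory_gb(num_params: float, bits_per_param: float) -> float:
    """Rough VRAM needed just to hold the weights (ignores activations, caches, etc.)."""
    return num_params * bits_per_param / 8 / 1e9

# Hypothetical 40B-parameter model, for illustration only:
params = 40e9
for name, bits in [("fp16/bf16 (native port)", 16),
                   ("fp8", 8),
                   ("~4.5-bit GGUF quant", 4.5)]:
    print(f"{name}: ~{weight_memory_gb(params, bits):.1f} GB")
```

This is why a quantized build fits on smaller cards: halving the bits per weight roughly halves the weight memory, at the cost of the precision loss mentioned above.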
@CoryJohnson hahaha, a bit of diversification lol :) Believe it or not, I don't know how to use Resolve; it's such a non-intuitive app for me.
@dnzyc there are a few; I have my own with custom-trained LoRAs and fine-tunes. I found the best results come from custom training using V2V.
Hi Thiago. Sorry, I should have been more specific. I meant that the area I was matting, a macro shot of sand on a beach, was returning results with a very visible noise pattern that changed slightly over time. My guess is that the footage I was using was something outside of the model's training. I'm going to run more tests with different kinds of shots.
Hi brothers
it's great, this is how I imagined AI making the work easier in ComfyUI. But unfortunately I can't run it on my graphics card, only if I reduce the samples and shorten the length of the shot, and I don't want to do that.
The ComfyUI team is getting there; I'm not following or using their version, but it's out there too.
They are building to be open to quantized and/or lower-precision versions of the models, while mine is a direct port, to stay as true as possible to the quality of the original research results.
This model works best with .mp4 files. Does a sequence of 16-bit EXR files work well, or do I need to convert the EXR files to Rec.709 recordings? Using LoadEXR in ComfyUI, the model works well with the ACES color space.
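If you do end up baking linear footage down to Rec.709, the transfer-curve part of that conversion looks like this. A minimal numpy sketch of the standard Rec.709 OETF only; it assumes the pixels are already in Rec.709 primaries (a real ACES-to-709 conversion also needs a gamut transform, e.g. via OCIO):

```python
import numpy as np

def linear_to_rec709(lin: np.ndarray) -> np.ndarray:
    """Apply the Rec.709 OETF to scene-linear values clipped to [0, 1].

    Linear segment below 0.018, power curve (exponent 0.45) above it,
    per the ITU-R BT.709 specification.
    """
    lin = np.clip(lin, 0.0, 1.0)
    return np.where(lin < 0.018,
                    4.5 * lin,
                    1.099 * np.power(lin, 0.45) - 0.099)

# Endpoints map cleanly: 0.0 -> 0.0 and 1.0 -> 1.0.
print(linear_to_rec709(np.array([0.0, 0.01, 0.18, 1.0])))
```

Keep in mind this is lossy relative to keeping 16-bit EXRs: you give up the extra dynamic range, which is presumably why people want the EXR path in the first place.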
I'm a noob at this and not all that technical. How would the pull request work? Do I need to re-download the repo from GitHub? I'll google it to figure it out.