This might not be helpful, but this is how I prep almost any shot now. I pull it in and run it through Neat Video to denoise it, then I convert it from its original colourspace to REC709. I then export it as an EXR sequence. I then run that EXR sequence through Topaz Video AI, upscaling to 8K with the Artemis HQ algorithm. I export as an EXR sequence from Topaz, so you can then pull an amazingly high-quality 8K image sequence back into Flame. Then in Flame I resize down to UHD. This may seem pointless, but it's not, because the process of upscaling to 8K in Topaz created the extra fidelity. So then my beautiful clean, sharp, noiseless 4K image is ready to work on. That process would work a treat with either of your source image options.
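To see why the 8K-then-down-to-UHD step isn't pointless, here's a minimal stdlib sketch of a 2× box-filter reduction. Flame's Resize node will use a better resampling filter (Lanczos etc.), so this is just to illustrate the principle: every output UHD pixel is built from four upscaled 8K samples, which is where the extra fidelity gets folded in.

```python
def box_downscale_2x(img):
    """Downscale a 2D grid of pixel values by 2x, averaging each 2x2 block.

    Illustrative only: real resizers use windowed filters, not a plain
    box average, but the 4-samples-into-1 folding is the same idea.
    """
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            total = (img[y][x] + img[y][x + 1] +
                     img[y + 1][x] + img[y + 1][x + 1])
            row.append(total / 4.0)
        out.append(row)
    return out


# A 4x4 "8K" patch collapses to a 2x2 "UHD" patch, with any per-sample
# variation from the upscaler averaged into the final pixel.
patch = [
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9],
    [0.5, 0.5, 0.3, 0.3],
    [0.5, 0.5, 0.3, 0.3],
]
result = box_downscale_2x(patch)
```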
If you do this, zoom in close and see the insane difference in quality, especially at the edges, between the source image and the processed image. You'll never work from the camera RAW files again.
Yeah, it does seem to. I've got a project on at the moment shot in ARRI Log C, quite high key. At first I upscaled it in Topaz as is, but I noticed it was smoothing some details it shouldn't. So I converted to ACEScg, reduced the exposure a bit in ACES, then converted to REC709 and upscaled in REC709, and got better results.
I would imagine the algorithm was trained on normal REC709 footage, so that's probably why the results are better.
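The tone curves alone show why a Log C clip can confuse a model trained on display-referred footage: the log encoding packs the scene into a flatter, lower-contrast code range than REC709. Here's a rough sketch comparing the two curves at 18% grey, using the published ARRI LogC3 (EI800) constants and the BT.709 OETF; the AWG-to-REC709 gamut matrix is deliberately ignored (tone curves only), and you'd want to double-check the constants against ARRI's white paper before relying on them.

```python
# ARRI LogC3 parameters at EI800 (from ARRI's published formula).
A, B, C, D = 5.555556, 0.052272, 0.247190, 0.385537
E, F, CUT = 5.367655, 0.092809, 0.010591


def logc3_to_linear(t):
    """ARRI LogC3 (EI800) code value -> scene-linear."""
    if t > E * CUT + F:
        return (10.0 ** ((t - D) / C) - B) / A
    return (t - F) / E


def rec709_oetf(x):
    """ITU-R BT.709 opto-electronic transfer function (scene-linear -> code)."""
    if x < 0.018:
        return 4.5 * x
    return 1.099 * x ** 0.45 - 0.099


# 18% grey sits near code 0.391 in LogC3; re-encoded through the 709 curve
# it lands near 0.409, but highlights diverge massively -- the log curve
# compresses them far harder, which reads as "softness" to the upscaler.
grey_linear = logc3_to_linear(0.391)   # ~0.18 scene-linear
grey_709 = rec709_oetf(grey_linear)
```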