Here is my Nuke workflow that works perfectly; I'm looking to see if it's possible to re-create this in Flame:
write out a file (EXR sequence)
read the file back in (we have a read-from-write hotkey)
connect the read node to a publish node, write a comment in there, and say which colorspace the daily should have.
the publish node reads the incoming read node's file path and submits a Nuke job via Deadline that exports a .jpeg sequence to a tmp location on the renderfarm.
ffmpeg then launches as a dependent process to make an H.264 from the jpeg sequence and a possible .wav audio file that's located in the shot folder; it's a simple bash script.
this H.264 is uploaded to our shot tracker for review.
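For reference, the dependent ffmpeg step above can be sketched in a few lines of Python instead of bash. This is a hedged example, not the actual script: the paths, frame rate, and encode settings are placeholders, and in the real pipeline this would run as a Deadline dependent job.

```python
import subprocess

def build_ffmpeg_cmd(jpeg_pattern, out_mp4, fps=24, wav=None):
    """Build an ffmpeg command that wraps a jpeg sequence (plus an optional
    .wav from the shot folder) into an H.264 .mp4 for review."""
    cmd = ["ffmpeg", "-y", "-framerate", str(fps), "-i", jpeg_pattern]
    if wav:
        # mux the shot audio; -shortest stops at the shorter stream
        cmd += ["-i", wav, "-c:a", "aac", "-shortest"]
    cmd += ["-c:v", "libx264", "-pix_fmt", "yuv420p", "-crf", "18", out_mp4]
    return cmd

# e.g. subprocess.run(build_ffmpeg_cmd("/tmp/daily.%04d.jpg",
#                                      "/tmp/daily.mp4",
#                                      wav="/shots/sh010/audio.wav"),
#                     check=True)
```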
I am pretty sure I can do this even more simply in Flame, maybe:
Write File node writes out an image sequence
read the file sequence back in ("add to reels" checked, which will just ingest it back, so that's good)
create a custom publish node that has a comment field and will render out an H.264 in the right colorspace
some magic to push that H.264 with the comments up to the shot tracker
This might be of use here as well…
Basically, can I parse the path of an upstream clip/read-file node, or do I have to include it in a custom write-file node? What are my options here?
Publishing out plates is what Jeff does, and we do the same thing, but now I want to publish dailies of shots to a shot-tracking system like ftrack, so it's a different part of the pipeline.
You could still do this with publishing if you maintain a timeline of all the shots up to date. Unless I’m still not understanding what you’re looking for.
It's more about how I can read the file path (or whatever) using Python to be able to extract all the needed info, create an H.264, and upload it to my shot tracker.
Like when you hit render in ShotGrid, you get a video you can scrub on the ShotGrid website.
@finnjaeger - you need to hire a workflow designer or something - or you need to get a heavy weight workflow architect like @bryanb to solve your problem.
these things are never one-liners - except for the days when you can write a one-liner…
Check said render and, if all good, have a custom export profile that will create an H.264 with a view transform (Source to Rec709, for example).
Post-export hook that deals with updating Ftrack via its API.
This export profile could first create a PySide GUI where you can input your comment.
All the other bits, like which job, film/episode/whatever you want to call it, and shot number, could be automatically inferred from your naming convention…or you could have it all populated by the GUI.
As @philm mentioned…it’ll require quite some heavy lifting.
@kyleobley - tags and color coding are going to contribute greatly to this level of automation for all future flame workflows - yet more reasons to get current.
The post-export hook is the missing piece for me here; that's the logical way of getting this to work, I think. Nice, will give that a try.
Heavy lifting is fine; we already built a publish tool for the timeline that creates thumbnails in our shot tracker and creates shots and stuff. Very nice.
@finnjaeger You can pass extra variables along from the custom export to the post-export to send whatever else you need, fyi.
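The userData handoff described above could look something like this: the pre/custom-export hook stashes the comment (in production it would come from the PySide dialog) into the userData dict, and the post-export hook reads it back. Hook names and signatures are illustrative, not taken from any specific Flame version — check the hook reference for yours.

```python
def pre_export(info, userData, *args, **kwargs):
    """Custom/pre-export hook: stash anything the post-export step needs.
    In production the comment would come from a PySide input dialog."""
    userData["comment"] = "check grain in the last 10 frames"
    userData["colorspace"] = "Rec709"

def post_export(info, userData, *args, **kwargs):
    """Post-export hook: the same userData dict comes back, so the comment
    and colorspace tag are available when pushing the H.264 upstream."""
    return {
        "path": info.get("resolvedPath"),
        "comment": userData.get("comment"),
        "colorspace": userData.get("colorspace"),
    }
```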
@philm Great shout about tags. One could definitely leverage that to help know where to send things. That said, with a proper pipeline I’d argue that you should be able to get all the information you need from the name of the render/clip.