I’m trying to create a virtual black segment in a sequence with Python. Any ideas?
Also… did I miss documentation somewhere that has answers to stuff like this? I have the Python API page from the online manual. Not sure where this would be an attribute.
I don’t think of it so much as an online manual, but rather more like an immersive mystery novel in which the clues are all there, but you need to find the solution.
Thanks Tim… I thought it was just me.
I don’t believe we have the ability to manipulate sequences via Python in that way at the moment.
Bob is right. It is not possible to assign a colour to a gap segment, so you may want to submit a Flame Feedback request for this. That being said, you could probably work around it with a script that uses other functions, such as Insert (via flame.execute_shortcut() ), but it would be much more complicated.
As for the documentation, I suggest you have a look at this Logik Live episode showing how to get all the latest documentation directly from the Python Console. That is the best way.
Thanks Fred. I watched that Logik episode just a few days ago. Should I be looking at the `__doc__` strings to get descriptions of attributes?
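For anyone following along: in Python, the documentation string of any object lives in its `__doc__` attribute, so you can print it straight from a console. The standard-library lines below actually run anywhere; the `flame` lines are commented out because they only run inside Flame's Python Console, and the specific names there are illustrative assumptions, not verified API:

```python
import inspect

# The generic mechanism: every documented Python object carries __doc__.
print(len.__doc__)               # docstring of a builtin
print(inspect.getdoc(str.join))  # getdoc() also cleans up indentation

# Inside Flame's Python Console the same introspection applies to the
# API objects, e.g. (assumed names, run only inside Flame):
# import flame
# print(flame.execute_shortcut.__doc__)
# for name in dir(flame.PySegment):   # list available attributes/methods
#     print(name)
```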
So I can’t create the virtual black, but I did make a script that takes a source on the desktop (Black) and inserts it into each selected sequence at the head/tail. I use this to add black leader/trailer on edits going out for client postings.
If anyone is interested I’ll share.
Have you found a way to also generate the black on the desktop via the Python API, or is this something you do manually before starting your script?
This is something that has been on my to-do list for a long while now. I’d be happy to see your approach!
Yes, you have to start with the virtual black already created on the desktop, and it has to be 1 second long. I tried to use 1 frame with ins and outs set to make it fit, but there is some weirdness when you try to set the in and out points after the last frame of picture.
You also have to make sure the track patching on the timeline is where you want the black to go. We can’t control patching with python (without a lot of hacky trickery).
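The approach described above can be roughly sketched like this. Only `find_clip_by_name()` is real, runnable Python; the insert function is a guarded sketch whose `flame.*` attribute paths (desktop layout, `current_time`, `selected`) are assumptions I have not verified, and as noted, track patching still has to be set by hand:

```python
def find_clip_by_name(clips, name="Black"):
    """Return the first clip whose name matches, or None.

    Pure helper so it can be tested outside Flame; `clips` is any
    iterable of objects with a .name attribute (PyClip qualifies).
    """
    for clip in clips:
        if str(clip.name) == name:
            return clip
    return None


def insert_black_head_and_tail(selection):
    """Hypothetical sketch -- only meaningful inside Flame."""
    import flame  # available only in Flame's bundled Python

    desktop = flame.projects.current_project.current_workspace.desktop
    black = find_clip_by_name(getattr(desktop, "clips", []), "Black")
    if black is None:
        return  # the 1-second Black source must already exist
    for sequence in selection:
        # Park at the head, select the black source, then Insert --
        # the same shortcut the UI uses (attribute names are assumed).
        sequence.current_time = sequence.start_time
        black.selected = True
        flame.execute_shortcut("Insert")
        # ...then repeat at the tail; patching must already be correct.
```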
_BB_black_head_and_tail.py (2.4 KB)
You could probably have the Black in a directory somewhere and import it when needed. The location would need to be tweaked if you share the script, but at least it could work without having to create it and select it.
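A sketch of that idea: resolve a pre-rendered black clip stored next to the script, then hand the path to Flame's import call. The path resolution is plain Python; the filename, the destination reel path, and the `flame.import_clips()` usage are assumptions that would need checking inside Flame:

```python
import os

def black_clip_path(script_file, filename="black_1sec.mov"):
    """Path of a pre-rendered black clip stored next to the script.

    Pass __file__ as script_file from inside the actual script.
    The filename is hypothetical; ship whatever clip you prerender.
    """
    script_dir = os.path.dirname(os.path.abspath(script_file))
    return os.path.join(script_dir, filename)


# Inside Flame (sketch, attribute paths not verified):
# import flame
# desktop = flame.projects.current_project.current_workspace.desktop
# reel = desktop.reel_groups[0].reels[0]
# flame.import_clips(black_clip_path(__file__), reel)
```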
Hey @bryanb, thanks a lot for sharing!
@fredwarren I’ve thought about this workaround, too. The image could be placed next to the script; since retrieving the file path of the current script is easy, we could also get the image’s file path. However, I see other potential problems:
- we can’t control the interpretation of a clip on import, so correct tagging of the colourspace would depend on the user’s input rules, which may lead to undesired results (especially if the element is the first in the timeline and dictates the colourspace, right?)
- we don’t have control over size, length, fps, etc., and we would need different prerendered images just to cover the most common cases
I have created a feature request, as I think having this available in the python api would really help for automated deliverables.
Good point @claussteinmassl
The request you have submitted is a duplicate of FI-01942 so I have flagged it as such.
Oh sorry, I didn’t see that one in my search. Thanks for merging them!