Possibly the future of Python for Flame? Very simple but opens up the possibilities.
All I entered was:
create a folder for autodesk flame in linux

import os

folder_name = "/opt/Autodesk/Flame"

# Check if the folder already exists
if not os.path.exists(folder_name):
    os.makedirs(folder_name)

This code uses the os module to create a new folder at the specified path (in this case "/opt/Autodesk/Flame", a typical installation path for Autodesk Flame). The os.path.exists() function checks whether the folder already exists before attempting to create it; if it does not, os.makedirs() creates it.

Please note that the folder will only be created if you run the script with appropriate permissions; otherwise you may get a permission-denied error.
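As a small refinement of the same idea (the function name here is mine, not ChatGPT's): os.makedirs() accepts exist_ok=True, which avoids the race between the exists() check and the creation, and the permission problem can be caught explicitly instead of crashing the script:

```python
import os

def make_folder(path):
    """Create a folder, reporting rather than crashing on permission errors."""
    try:
        # exist_ok=True means no error if the folder is already there,
        # and no race between an exists() check and the creation
        os.makedirs(path, exist_ok=True)
        return True
    except PermissionError:
        print(f"Permission denied: {path} (try running with elevated privileges)")
        return False
```

Creating anything under /opt will still need root, of course; the try/except just turns the traceback into a readable message.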


Stumbled across this video on YT: Using ChatGPT AI to Create Animations (After Effects GAMECHANGER!) - YouTube

It’s using ChatGPT to write AE animation scripts. Does anyone know if Matchbox scripts use the same language?


I think AE scripting is JavaScript (ExtendScript); Matchbox is GLSL. I’ve used ChatGPT to write some basic Matchbox code, but don’t expect too much of it. It’s very capable at writing code in most languages, but what you want is usually complex, and then there’s the Flame API on top of that. The same goes for Python. You also still need to compile and test everything yourself.

You will definitely need some knowledge of your own to complement it, or, if your knowledge is limited (as mine is), ChatGPT makes a superb tutor. You can ask it to dumb explanations down for you in a way you understand.


this chatGPT is crazy!
find similar videos in a folder using python and convolutional neural networks

Finding similar videos using convolutional neural networks (CNNs) in Python requires a few more steps than using image processing or video processing libraries. One way to do this is to use a pre-trained CNN model to extract feature vectors from the videos, and then use a distance metric such as cosine similarity to compare the feature vectors and find similar videos.
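For anyone wondering what the cosine-similarity step actually does: it reduces to a one-line calculation on the two feature vectors, measuring the angle between them regardless of their magnitude. A minimal NumPy sketch (the function name is mine):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity: dot product of the vectors divided by their norms."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Identical directions give 1.0, orthogonal vectors give 0.0; the 0.95 threshold used below is just "almost the same direction".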

Here’s an example of how you might use the pre-trained 3D ResNet model to extract feature vectors from videos and find similar videos using Python:

import os
import cv2
import torch
import torchvision
from torchvision import transforms
from sklearn.metrics.pairwise import cosine_similarity

def extract_features(video_path):
    # define the transforms for the video frames
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Resize((112, 112)),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ])

    # read the video and extract frames
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        frames.append(transform(frame))
    cap.release()

    # extract a feature vector using the pre-trained 3D ResNet model
    model = torchvision.models.video.r3d_18(pretrained=True)
    model.eval()
    with torch.no_grad():
        # stack frames into a (batch, channels, time, height, width) clip
        clip = torch.stack(frames, dim=1).unsqueeze(0)
        features = model(clip)
    return features.squeeze(0).numpy()

def find_similar_videos(folder):
    video_features = {}
    for root, dirs, files in os.walk(folder):
        for file in files:
            video_path = os.path.join(root, file)
            features = extract_features(video_path)
            video_features[video_path] = features

    # calculate cosine similarity between the feature vectors
    for video1 in video_features:
        for video2 in video_features:
            if video1 != video2:
                features1 = video_features[video1]
                features2 = video_features[video2]
                similarity = cosine_similarity([features1], [features2])[0][0]
                if similarity > 0.95:
                    print(f'{video1} is similar to {video2}')

This code uses a pre-trained 3D ResNet model to extract a feature vector from each video in the specified folder, then compares the feature vectors using cosine similarity. It prints any pair of videos whose cosine similarity is greater than 0.95.
It’s worth noting that this is a simple example; you could use more sophisticated methods such as video hashing or other video feature extraction techniques to compare the videos and detect similarities. Also, the pre-trained model was trained on a large generic dataset, so it may not work well on certain types of footage, in which case you may have to fine-tune the model on your own dataset.
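As a taste of the "video hashing" idea mentioned above, here is a toy difference-hash ("dHash") sketch for a single grayscale frame — purely illustrative, not production code, and the function names are mine. Similar frames produce hashes with a small Hamming distance:

```python
import numpy as np

def dhash(gray_frame, size=8):
    """Toy difference hash: compare adjacent pixels of a downscaled frame."""
    h, w = gray_frame.shape
    # crude nearest-neighbour downscale to (size, size + 1) with no extra deps
    rows = np.linspace(0, h - 1, size).astype(int)
    cols = np.linspace(0, w - 1, size + 1).astype(int)
    small = gray_frame[np.ix_(rows, cols)]
    # each bit records whether brightness increases left-to-right
    diff = small[:, 1:] > small[:, :-1]
    return diff.flatten()  # 64 booleans for the default size

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))
```

Hashing one frame per second and comparing Hamming distances is a cheap first pass before reaching for a CNN.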


It’s cracking me up.
ChatGPT invented its own Flame API

Write a python hook for Autodesk Flame that will create a slideshow for a selection of clips


import flame

def create_slideshow(clips):
    # Create a new comp
    comp = flame.createComp("Slideshow")

    # Add the clips to the comp
    for clip in clips:
        comp.append(clip)

    # Set the transition duration between clips
    comp.transition_duration = 1

    # Set the comp to loop
    comp.loop = True

    # Play the comp
    comp.play()

# Example usage:
selected_clips = flame.batch.getSelectedClips()
create_slideshow(selected_clips)

This script creates a new comp named “Slideshow”, appends the selected clips to it, sets the transition duration between clips to 1s, sets the comp to loop, and plays the comp.

Please note that this is just an example and the exact implementation may vary depending on the version of Autodesk Flame you are using and your specific requirements.


This is a great implementation. I would like to know how complex it would be to integrate ChatGPT inside Flame?

There is more information here:
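For what it’s worth, the core of such an integration is just an HTTP call from Python to the OpenAI chat completions endpoint; the Flame-side wiring (hooks, UI, where the result goes) is the real work. A minimal request-building sketch, with the function name being mine and no actual network call shown:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, api_key, model="gpt-3.5-turbo"):
    """Build an authenticated HTTP request for the OpenAI chat completions API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
```

Sending it would be `urllib.request.urlopen(req)` and parsing the JSON response; dropping that into a Flame Python hook is where version-specific details come in.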


Seems like a lot of work just to have ChatGPT be aware that you’re asking it about Nuke. Is there some other benefit to this that I’m not seeing?