So you think AI isn't going to take your job?

You joke, but feeding an image to GeeBeeTeeShat and asking it to reduce the image to a prompt is a pretty common way to get an idea of what an AI wants to receive.
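For what it's worth, that image-to-prompt round trip is easy to script. A minimal sketch, assuming the OpenAI Python SDK and a vision-capable model; the model name and the "describe this as a prompt" instruction are my own choices, not anything anyone in this thread actually used:

```python
import base64

def build_image_to_prompt_request(image_path: str) -> dict:
    """Build a chat request asking a vision model to reverse-engineer
    an image into a text-to-image prompt. The network call itself is
    left commented out below so this stays runnable offline."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": "gpt-4o",  # assumption: any vision-capable model works
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "Describe this image as a single text-to-image "
                            "prompt: subject, style, lighting, lens, mood."
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ],
    }

# Usage (requires the openai package and an API key):
# from openai import OpenAI
# resp = OpenAI().chat.completions.create(**build_image_to_prompt_request("frame.png"))
# print(resp.choices[0].message.content)
```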

Video generators require prompts as discussed (even those with image references), so it's quite common practice to generate a still in one framework, iterate a little, then use that as your reference frame: you carry over the original prompt that generated the reference while adding scene, action and camera cues for the text encoder, feeding another model to bring it all to life.
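That still-then-video handoff is basically prompt concatenation plus an image reference. A toy sketch of the assembly step; every field name here is my own invention rather than any particular video model's API:

```python
from dataclasses import dataclass

@dataclass
class VideoJob:
    """The two things a typical image-to-video model wants: the
    reference frame, plus a text prompt for its text encoder."""
    reference_frame: str  # path to the still you iterated on
    prompt: str

def build_video_prompt(still_prompt: str, scene: str, action: str, camera: str) -> str:
    # Reuse the prompt that generated the reference still, then append
    # the scene / action / camera cues the video model needs.
    return ", ".join([still_prompt, scene, action, camera])

# Hypothetical example values for illustration only:
job = VideoJob(
    reference_frame="lemon_still_v3.png",
    prompt=build_video_prompt(
        "macro shot of a lemon slice, studio lighting",
        "on a wet marble countertop",
        "slice slowly rotating",
        "slow push-in, shallow depth of field",
    ),
)
```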

However ass-backwards that all seems.

1 Like

If you watch the LogikLive where Rufus is describing the tools he used on the lemon slide, he does call out that he had ChatGPT write the prompts for the other AI tools, and that this was very helpful indeed. A PromptTranslator / Interpreter of sorts.

2 Likes

Good article here. Using one of my gift shares from NYT for it so click away one and all https://www.nytimes.com/2025/05/30/opinion/silicon-valley-ai-empire.html?unlocked_article_code=1.LU8.2WGm.7LVkqkFT1Fu0&smid=url-share

3 Likes

I’m now getting asked to do full TVCs in AI. A lot.

I can’t see how AI can be a push-button service. You’re putting together entire scenes and making really complex client changes. Can we change the time of day? Can I have 15 options for the glasses she’s wearing? Can we change the performance? Can we lose that BG extra? Can he be wearing this football top? Can you make him older? Can you replace this person with this specific actor but keep the same performance? Can you make the make-up more plain? Can you keep it consistent across multiple shots? There’s so much that can be done, and it’s only gonna get more complex.

I didn’t ask for AI and I have massive reservations about its impact. But there’s nothing I can really do about it. Until it kills us all there’s some really interesting work to be had in this. I’m booked solid for months now.

If we can get ComfyUI and Flame as aligned as possible then we can create and finish the most insane projects.

9 Likes

Fascinating. Thanks for sharing.

This is to a large degree expected. Everyone has seen the demos, everyone thinks they can finally make big beautiful TVCs for much less, they can finally make TVCs with concepts that are harder with traditional workflows.

The question is - how many of the requests you listed are realistically feasible to their satisfaction today, or within the next two years? If the answer is 80%, then yes, this is the future. Sell the cameras, buy the GPUs. If the answer is that they’re all disappointed that this level of control is not achievable with AI, or not without equally expensive efforts, then this all just ends up being the equivalent of a teenage phase.

Either way, it’s great to hear you’re booked for months, and you get to make serious money with either outcome.

If you read the WSJ article about them using Veo3, and the many prompts they had to execute, and their blooper reel, it does seem great results can be had through brute force. I heard similar takes on a Zoom call with an ASC cinematographer who is involved in a project where he and the director are supervising a room full of prompt/AI people to source the right elements. Neither of which was cheap or fast.

Crazy times. Not sure where this is going, but we can’t afford to stand still.

2 Likes

Which AI tools do you tend to use on jobs?

First of all I’m glad you have work Rufus!! I’ve seen your stuff and watched the LogikLive and it’s so creative and well done! Kudos!

Here’s what gives me pause. I think there’s always this evasive argument that wants to seat itself at both poles: generative AI is time- and effort-intensive, so it’s not a question of just pushing a button; but “this is the worst it will ever be,” and the technology is improving at such mind-boggling rates that each successive leap adds more solves that are, in essence, push-button solutions. So the question is: how long before the final red “go” button?

Using chatbots to generate prompts for generative AI images feels closer and closer to redundancy in terms of the need for technical creatives who make a livable wage and aren’t just fighting over dregs and peanuts, competing with high school kids on Fiverr. This is always the route of automation. It’s not that the English textile workers (ya know, those dummy Luddites) didn’t know how to use the automated looms or cropping machines; it’s that they realized children could learn how to use them, and that’s exactly what happened, in astonishingly disgusting and shamefully exploitative ways. There was no skilling up necessary for the cottage-industry weavers or the guild-member croppers, just the de-skilling of a generation by sitting them in automated factories.

So if the process of making generative AI remains time- and effort-intensive and requires highly skilled individuals for that work, then we’re in the “it’s just a new tool” world. But if the promise of the AI big tech bros to their venture capital investors and the big companies they pitch to is to be believed, this time and effort will be greatly reduced. And in a world where it’s always the worst it will ever be (and that applies not just to the number of fingers on hands but also to the ease of creating usable iterations out of the box), it’s not even necessarily that there are fewer workers (although I imagine a contraction in this scenario), but that the work is increasingly de-skilled, more and more folks are able to compete for the same number of positions, and wages suck for those jobs, invariably of the gig variety and seeking the cheapest possible labor sources.

I’m gonna close it out with this and then probably be done with the AI channel on here: I think there is a general sense of lack of control in society, and the feeling that everything is being upended by a few trillion-dollar tech companies, for something that is hard to see as necessary for the betterment of most people, in the name of some elusive progress that neither you nor I define but Sam Altman seems sure of, gives a sense of vertigo. For generations we’ve all been told adapt or die, adapt or die, adapt or die, but man, it sure would be nice to live a little bit. We get 100 years max. And you can’t settle in a world where every inch is a frontier to be exploited, because you yourself, your body, your movements, your searches, your dreams and desires, your GPU and your motherboard are the frontier for rendition and extraction. So we go on. And we try to keep up. And we struggle to adapt lest we die at the hands of the select elite who have marched into town and said these are the conditions for you now, the market has mandated it, submit or perish. And we try, because we have no choice. We adapt or die. Adapt towards what? Being able to push the big red button? I selfishly hope I’m the one who gets to, but I’m not sure I would be, and I’m still not convinced there’s anything for me to do in this environment that would up those chances.

10 Likes

Midjourney is great for creating stills, but you run up against problems when you really need granular controls. So then ComfyUI is what you need, but it’s a much, much bigger learn. I guess it’s the equivalent of learning Batch, but with a completely different type of tools. However, it’s absolutely fascinating to use. One aspect that makes it a good learn is that you are working directly with the AI models, LoRAs, LLMs, ControlNets, as opposed to the online services where it’s all hidden away in a black box. So ComfyUI is great for learning the theory.

The creation of videos is an order of magnitude more complex. At the moment I still use Kling and Higgsfield, cos they do a great job, but they’re essentially toys: they only spit out MP4s that you have to clean up with Topaz. You can create video in ComfyUI, it’s just not that great yet, and slow as f. But my guess is that will change. We need to be able to create really high quality video in log or floating point, without the clipped highlights. That might mean making it the current way and having a second model to repopulate the clipped areas. I dunno.

We also need way more control of what’s happening in the video.

The other reason ComfyUI is way better in a professional workflow is the ability to create workflows. I waste so much time uploading and downloading and setting the settings on Midjourney, Runway, Kling, Leonardo, Higgsfield, Magnific. It’s such an annoying way to work. And all of those involve a subscription.

I want to bring as much of that into ComfyUI as possible, and have a really slick workflow: generating stuff in ComfyUI and finishing in Flame.
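On the uploading/downloading grind: ComfyUI does expose a small local HTTP API, so the queue-and-fetch loop can be scripted instead of clicked through. A rough sketch, assuming a default local install on port 8188 and a workflow exported with "Save (API Format)"; the node id `"6"` in the usage note is a placeholder, since yours will differ:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # assumption: default local ComfyUI port

def set_positive_prompt(workflow: dict, node_id: str, text: str) -> dict:
    """Patch the text input of a CLIP Text Encode node before queuing.
    Works on a copy so the loaded workflow stays untouched."""
    patched = json.loads(json.dumps(workflow))  # cheap deep copy
    patched[node_id]["inputs"]["text"] = text
    return patched

def queue_prompt(workflow: dict, client_id: str = "flame-pipeline") -> bytes:
    """POST an API-format workflow to ComfyUI's /prompt endpoint.
    The response JSON contains a prompt_id you can poll /history with."""
    payload = json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Usage (requires a running ComfyUI instance):
# wf = json.load(open("my_workflow_api.json"))
# queue_prompt(set_positive_prompt(wf, "6", "macro shot, lemon slice, studio light"))
```

The point is less this particular snippet than the design it enables: once queuing is a function call, a whole batch of client options (15 pairs of glasses, say) becomes a loop instead of an afternoon of uploads.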

11 Likes

Just catching up on this thread. Well said!

We all need wifeGPT (Gloriously Proper Thinking).

3 Likes

AI Bigfoot Generator

Forest Salt Lick - All Natural

Skydive Bank Robbery

Let’s gooooooo

2 Likes

Long overdue. Not surprised Disney would be the ones to do it.

Disney Layoffs

Paramount Layoffs

1 Like

It’s all so interesting. I’m working with a big (local) brand and Google currently to (try to, it’s an experiment) create an ad using Veo (and LoRAs). The interesting part is that while they’re ‘ditching’ the agency and production company, they do see the need for post-production/VFX. So while Veo can do amazing things and Google is like ‘we know what sells, we’ve got that covered’, we’re still in the game to polish, fix and manipulate things in post. Curious to see where this’ll lead…

5 Likes

Maybe Disney should start creating original stuff again? And not live-action-ish versions of things we’ve already seen, or part 2, 3, 4 of something? Maybe AI can come up with something original??? hahahah

1 Like