So you think AI isn't going to take your job?

The film was made with 1,963 Midjourney prompts that yielded 7,852 images, which were edited and then rendered by more than 900 computers. These renders were then processed by Runway AI, with the final film covering 25,957 frames at 1,000 MB per frame. The trickiest scene was Isaac Newton and his apple among the planets: it took the team more than 20 attempts to get Newton's scene right, producing more than 9,800 frames.
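A quick back-of-the-envelope check on those production numbers. One caveat: the four-images-per-prompt figure below is Midjourney's default 4-up grid, assumed here rather than stated in the article.

```python
# Production stats quoted above; the per-prompt grid of 4 is an assumption
# (Midjourney's default), not a figure from the article.
prompts = 1_963
images = 7_852
frames = 25_957
mb_per_frame = 1_000

images_per_prompt = images / prompts          # 4.0 -- consistent with a default 4-up grid
total_tb = frames * mb_per_frame / 1_000_000  # ~26 TB of rendered frames
print(images_per_prompt, total_tb)
```

At roughly 26 TB of frames, the "more than 900 computers" figure starts to make sense.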


I think AI could take projects like that. And in almost 20 years I’ve maybe made one or two spots like that. The rest have been celebrities or pack shots or in some Walgreens somewhere or removing camel genitals or polishing up someone’s shiny new thing.

Plus, I dunno about you, but most of my clients have trouble writing a script, let alone 1,963 detailed Midjourney prompts that form a forgettable piece of content.

It’s so high on the novelty scale but not so much on the usability scale.


You can't really art-direct this stuff yet, and the quality is all over the map. I'm sure it will get better at some point. Also, no one wants to pay for any of this; ASML has the cash to finance this novelty. The compute power required to do this on even a moderate advertising schedule would be bananas.


I’m looking forward to the day when I’m doing face and head replacements of real celebrities onto AI images.

There are just too many proper nouns in the world and too many people using cameras for us not to be busy.


Can someone watch the AI video and summarize it? I’ve got no more desire to watch prompt-generated videos than I do to read self-published Amazon books.

The monkeys in a room typing Shakespeare also type a lot of nonsense, and it still falls on some poor human to sift through it all.


On the one hand… all this ad makes me think is that ASML has never had to advertise and now that Canon is trying to eat their lunch, they tried and failed to make a splashy ad.

On the other hand… if any compositing of this quality was delivered by any of us for a real job, we’d be pixelf’d to oblivion.

On the third improbably rendered hand with seven fingers, in five years time we might be out of jobs.


Zoom in, spin around, dramatic violin music with a lady who has a British accent, zoom into next scene. Spin around, dramatic violin music with a lady who has a British accent, zoom into next scene. Spin around, dramatic violin music with a lady who has a British accent, zoom into next scene. Spin around, dramatic violin music with a lady who has a British accent, zoom into next scene. Spin around, dramatic violin music with a lady who has a British accent, zoom into next scene. Spin around, dramatic violin music with a lady who has a British accent, zoom into next scene.
Push out into a bunch of things from the Future… end


Proof that super-computing is no more capable of figuring out what our clients are saying than we are. And yet more proof that a client’s level of acceptance plummets when they do it themselves. At least the computer has figured out that if you cover it up with a bunch of nonsensical flashy light effects, no one might notice how bad the picture is.


We like to laugh at the amateurish effects, etc. but bear in mind that when I started in this business there were still people that thought the flatbed wouldn’t be replaced . . . And how about that “digital will never replace film” crowd.


Indeed, @ytf. My sarcastic review of the video aside, I think @ALan has a real point in posting that video, and the Hollywood Reporter article shows the direction things are moving, very quickly.

As far as the visual quality of the video goes, I will say this: if they had started making it last week instead of whenever they did, and used Google’s just-announced “time diffusion yada yada yada” video generator, which looks way better than Runway and is pretty temporally smooth/stable (for AI-generated video, at least), it would look better. But still very obviously AI-generated.

Also, Midjourney and the like are continually getting closer to non-uncanny-valley photorealism with each release.

So how long until the video is acceptably photoreal for the geniuses at the ad agencies?

Then when they decide they don’t need so many Art Directors what will that mean?

I’m just saying that these things are moving at an extreme pace that is almost impossible to keep up with.

Also, there’s more real-world usability being incorporated into 3D apps. Say, for example, in a year or two we have a mainly prompt-driven Houdini? Blender?
What happens to Maya and Max at that point?
I know there’s a lot of talk about the legal rights-holder issues with these models.
But the developing case law is looking like these models will keep what they’ve trained on and the basic process will stand. Will Autodesk and overly cautious legacy companies (as far as ML and AI go) go under?

  • The New York Times lawsuit against OpenAI is odd, in that a close look at what the NYT actually did in ChatGPT, versus what it is claiming, looks borderline baseless at best (it seems the NYT manipulated the results).

GPT-5 is said to be being trained to be critical of its initial thought and to question/challenge it iteratively, to come up with something it’s much more sure of before giving the result.

-Then there’s a new company looking into making ML hardware that is itself a transformer network, not traditional ARM or general-purpose-GPU-type ML compute, so removing the software part of the transformer network.

  • Caveat being they still need to get some real backing/funding, so still potential vaporware.

-Then IBM has announced (and I think published some papers on?) their analog neuromorphic computer. Here the networks themselves are hardware, modeled after and functioning like our human brains rather than in a binary digital way. They’ve been working on this for a couple of decades now. I specifically remember reading about it (I don’t believe they used the term “neuromorphic” then) when they were working on mimicking a mouse brain in hardware.

  • Not vaporware, but I would imagine that IF it does prove commercially feasible and not a one-off machine, it may be at least a few years before we see them in real-world use.

My understanding is that incorporating new AI- and ML-specific hardware will reduce the amount of power needed by orders of magnitude, which will reduce the amount of money needed to run all these LLMs, etc.

I’m obviously assuming a lot and, like most, am pretty ignorant and get my information from sources I have to trust at least a little. But technology always advances, and industries will adopt technology that keeps them up and profitable way more than they’ll even think about keeping workers/people, or, more apt to the convo, humans.


I keep seeing this claim that the tech is moving so fast, too fast to keep up with, such that we’ll all blink soon and it will be too late. But comparing this to early demos of Runway and Midjourney and whatnot, it seems… just iterative? Improved, sure, but all the core issues are still there.


Ok, yeah, I get what you’re saying, but what Midjourney can do now is better than what that video shows. As I mentioned, Google’s new “space time displacement” (or whatever they call it) is addressing the frame-to-frame (temporal) stability that is one of the core issues. It ain’t there yet, but it’s definitely much better.
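For what it’s worth, the temporal instability being described can be quantified crudely. A toy sketch of the idea (my own illustration, not any tool mentioned in the thread): score a clip by the mean absolute difference between consecutive frames, so a locked-off, stable clip scores near zero and the frame-to-frame “boiling” typical of AI-generated video scores high.

```python
import numpy as np

def flicker_score(frames):
    """Crude temporal-instability metric: mean absolute difference
    between consecutive frames. Near 0 = stable; higher = more
    frame-to-frame 'boiling'."""
    frames = np.asarray(frames, dtype=float)
    return float(np.mean(np.abs(np.diff(frames, axis=0))))

# Synthetic check: a static clip vs. pure noise (10 frames of 8x8 "video").
static = np.ones((10, 8, 8))
noisy = np.random.default_rng(1).random((10, 8, 8))
print(flicker_score(static))  # 0.0
print(flicker_score(noisy))   # well above zero
```

Real evaluations of video generators use fancier versions of the same idea: measure how much the picture changes between frames beyond what the motion explains.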

As an aside, I was never really impressed with Runway for what we do. I think I had higher expectations, maybe. But I have definitely heard firsthand of people using it to generate useful assets in design, etc.
Also, we should all remember that, push come to shove, Runway’s goal (zeroing in on them as an example) is in reality to put us out of business, basically.

Hmm, Midjourney looked pretty wacky at best in its early days. It’s pretty night and day in my view.
Again, Runway: not impressed / higher expectations, with its video side of things at least. So I won’t argue that one.

I don’t think the iterative improvement of, say, Midjourney is going to stop, and it wasn’t that long ago.

What I mean by “fast, hard to keep up” is overall AI development, not just what they used to make the video.

I’m just saying things are changing fast, people are already losing their jobs, and the future seems poised for much better versions of what we’re seeing, plus new stuff we haven’t seen yet.

Love seeing Jeffrey Katzenberg of “Quibi” fame opining about what the future looks like.

There are meaningful and interesting conversations to be had about ML/AI and its place in the world, but so, so, so fucking much of the discourse is a rich-guy echo chamber. It’s exhausting to play defense against the boners these guys have for killing jobs. Every article is framed the way a rich guy thinks: fewer people working means more money for me; this is good.

I hate it.


I think the first thing that will happen is AI will allow one artist to do the work of 10. When I started doing Flame, it took like 3 days to manually stabilize a clip. Now that’s like the first 5 minutes of every composite. So instead of working in teams, one artist will do the whole commercial. But there will be 10x more commercials, so we will all still have jobs… we will just have to do 10x more work in a day, probably for less money. That has been the trend and I expect it will continue. Films will have 10x more shots, but the crews will stay the same size and people will work faster, using AI to automate certain tasks like de-grain, re-grain, roto, clean-up, and generating art graphics and backgrounds. Maybe it will free up some of our time on the boring stuff so we can go back to creating art.
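The stabilization example is a good one: the “first 5 minutes of every composite” version is essentially automatic motion estimation. A minimal sketch of the underlying idea using phase correlation, written in NumPy as my own illustration (not what Flame actually runs):

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the (dy, dx) translation of `frame` relative to `ref`
    via phase correlation -- the core trick behind one-click stabilization."""
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(frame)
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap circular shifts into signed offsets.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Synthetic check: shift a frame by (3, -5) pixels and recover the offset.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, (3, -5), axis=(0, 1))
print(estimate_shift(ref, moved))  # (3, -5)
```

Invert the estimated shift on each frame and the clip is stabilized; the manual three-day version was doing exactly this, by eye, point by point.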


“So instead of working in teams one artist will do the whole commercial”
Just like in the old days then!


Like I do now . . .

It’s television commercials. 'nuff said.


So then…it’s already started. :slight_smile:

“It’s exhausting to play defense against the boners these guys have for killing jobs.”


Love this.

Actually, AI is much better at replacing their jobs than ours. Most executive presentations and speeches, when you read them, are made up of the same 250 expressions run through a randomizer, with an inspirational stock photo in the background and PowerPoint iconography of arrows pointing toward the upper right corner…

These are just greedy power trips. Nothing inspirational about any of this. But I put my money on my kids’ generation. They don’t buy much into that crap.

That was actually our coffee-chat conversation this morning: how out of touch the current executive class has become. Boeing is a prime example among many. Yes, you should care about your shareholders, but you should also care about the people creating the value. One without the other is a relatively short journey with a bitter end. There was a good article on Google and the employee class’s broad loss of trust in the current crop of leadership.

From an internal Google post: “Thank you to our corporate overlords for our new annual tradition”. The post refers to the latest round of layoffs. […] those overlords like referring to them as a “reduction in force” [actually an old MSFT term], because nothing makes you sound honest like weasel speak instead of plain English.

from: How leaders at Google et al lost the room | by Andy Walker | Jan, 2024 | Medium (may be behind paywall)

The less-talked-about aspect in this is: yes, to some degree it’s the rich guys filling their pockets. But the other major offenders here are actually the General Counsels of all these companies. All this weasel speak, the “we’ll call it something else but everyone knows what it is”, the “we can’t give you any answers”, is primarily legal risk mitigation. And nobody with any empathy has stood up to the General Counsels and said, “I hear you, and no, this is not how we’re doing this.”