So you think AI isn't going to take your job?

On one hand it's very impressive. It would be interesting to learn how many versions they generated for each shot before they landed it.

But when you scrub through slowly and repeatedly you see all the little issues. The balloon color and quality vary quite a bit. Other colors aren't consistent from shot to shot, like the blue pants. There are people and window reflections that don't match the environment, and sometimes physics isn't accounted for, as in the subway shot. The balloon knot and string are all over the place.

And that is with a story that picks a different setting for each shot with just some components making up the connective tissue. The continuity bar is about as low as you can set it.

So: an impressive accomplishment. Useful for real filmmaking? Nope.

I saw someone mention that some OpenAI execs went to LA for meetings…


I would imagine the pitch is, “we’ll give you shares in our definitely-gonna-be-huge-AI-company,” and not, “here is how you can solve VFX problems you currently have.”


Brands are strictly prohibiting the use of AI in agency work.


This is another good reality check on how far AI has come (with quotes from Yann LeCun, one of the three godfathers of today's AI):

His point is well taken: AI systems are still objectively horrible at learning tasks and are far from as intelligent as we make them out to be. This isn't the first time LeCun has highlighted this issue. In an interview with the Observer, he stated that current AIs have about as much computing power as a common housecat's brain but are far less clever, since they still can't understand the physical world, plan complex actions, or reason at any real depth. As such, according to LeCun, for AI to reach superhuman levels of intelligence, or even just human-level intelligence (also known as Artificial General Intelligence, or AGI), requires more than just scaling up the current AI technology. Something new is needed to enable this deeper level of thought, reasoning, and planning.

from: Medium Article by Will Lockett

Given that MSFT is finding it challenging to scale up their ML training grids due to power grid limitations and the availability of fast Internet connections, it seems like the current version of AI is already hard to sustain (financially and logistically), making the more capable versions everyone is dreaming of even further out of reach. (Source unverified, but passes the sniff test.)

I know it all sounds cool and magic. I think the reality is a bit more nuanced.

According to the latest stats, the entire tech industry only grew by 700 jobs in 2023, a tiny shadow of decades of platinum career paths and job security. Not all of this is pinned on the hopes of AI, but a good chunk of it is.

One of the topics that interests me, and our coffee conversation today: AI (or more accurately ML) is criticized for just being a rehash/mash-up of the past. But one can argue that the same is true of most humans as well. Except every once in a while someone goes completely against the grain and breaks out into what becomes a new thing.

Example: a few weeks ago on a color job for a client, references were made to Impressionism. Long story short, if you look up the topic, there's an interesting quote:

Impressionism emerged in France at the same time that a number of other painters […] were also exploring plein-air painting. The Impressionists, however, developed new techniques specific to the style. Encompassing what its adherents argued was a different way of seeing […]

The public, at first hostile, gradually came to believe that the Impressionists had captured a fresh and original vision, even if the art critics and art establishment disapproved of the new style.

The question is: can ML (the common-housecat version or a near-future upgrade) only rehash the past, or does it have the opportunity to break out the same way? Are AI hallucinations the pathway for that?

If not, then AI will get stuck doing grunt work, but will not solve all of our problems or take all your jobs.

In the meantime some jobs are definitely going away, as this recent controversy over VO actors in the UK shows:

Sorry for the delay - we have had the approval from the BBC to use the AI generated voice so we wont need Sara anymore.

There were immediate calls for protections, which I'm sure will be short-lived and misguided if history is to be believed. Rather than protections and bans, we should look for managed offramps.

It all comes down to budgets. It’s not that everyone loves AI. But it is the most promising solution to a long simmering problem. As everyone gets squeezed from above, they look for any means to deliver at lower cost. And creative labor cost is a big ticket item. AI seems like the panacea.

Ask Boeing how squeezing the supply chain until the doors fall off pans out. The real solution is for the people at the top to stop having unrealistic expectations, and for the people in the middle to stop promising things they don't know how to make with the money available.


Does this mean I can’t turn on Mask Humans when I do a camera track?


My question is: how is labor cost the big-ticket item when Reed Hastings of Netflix just bought an entire mountain? All these miraculous savings from AI: who is in dire need of them? I'm confused here. Is it the oligarch billionaires and the shareholders?

I know the film and TV industry is not killing it right now, but a lot of that feels like self-inflicted wounds by the big decision makers over the past decade: trying to create an unsustainable monoculture of superhero movies, investing far too much in the "money will be coming in any day now, trust me, just need to make more and more stuff, quantity over quality baby" model that drove the streaming wars, and a general disregard for the viewing intelligence and viewing preferences of audiences.

If we dig into that last point, the idea that "these pigs will just eat any slop we throw at them" absolutely feeds into "so let's make it AI slop." But the issue is there is already an infinite slop trough in YouTube, Instagram, TikTok, etc. Seems to me like investing in more slop, and in making slop cheaper, is not the answer that leads to sustained longevity. Just a way for the people at the top to ransack the ship as it's sinking. And that's a shame. There need to be real hard questions about what the future of large-budget TV and film is going to look like. And degrading the quality even further, just so Reed Hastings can buy an entire ocean next and shareholders can get a few new race horses for their stables, doesn't feel like that to me.


The labor cost pressure isn't so much at the top as in the middle. How many production companies have you worked with who say 'we need X but only have $Y' and expect you to make it happen regardless? The people at the top have lofty dreams, which can only happen if the people in the middle squeeze the layer below, forcing them to deliver against unrealistic expectations. With the layer below trying to stay alive, AI seems like a way of making it happen within budget. And since we all want to keep our jobs, we keep bidding on these jobs hoping that this time will be better than last time; maybe we get lucky.

Of course that teaches the people at the top that they can have that cake, which just makes them want more of it.

Luck is not a strategy.


Agree there. A race to the bottom, and I'm not sure where the finish line is. Ugh.

Slightly outside the video realm (though lots of cameras were involved): just saw this story on Amazon abandoning the touchless checkout at their grocery stores that made big news years ago.

Apparently it required 1,000 people in India watching and rewatching the videos in semi-realtime. As the story says:

Though it seemed completely automated, Just Walk Out relied on more than 1,000 people in India watching and labeling videos to ensure accurate checkouts. The cashiers were simply moved off-site, and they watched you as you shopped.

Creep factor aside, it's a good example that magic technology is sometimes a whole lot less magic, and just as manual, out of sight. Just like LeCun's ding about the millions of hours of labeled video training data.


I laughed so, so hard at this. Hilarious! I’d always assumed it was something to do with RFID tags.


Well, then you'd need 1,000 people attaching tags. Either way, if you're not reading UPC codes, it takes labor. You can move it around, hide it in the backroom or overseas, but there is no magic.

Actually, Amazon invented (or re-invented) the infrastructure for this type of thing. Back then it was known as 'Mechanical Turk', a marketplace for online task rabbits. They've presumably renamed it to something more PC by now.

The amazing thing about the truth about the store is that it mirrors the story of the original Mechanical Turk, where something pretended to be a machine but was secretly operated by people.


Yikes. Sometimes this all sucks. Feels like making bandages out of our own skin a little bit.


Not surprising yet fascinating.

Two things stand out, one specific to AI the other more to cloud infrastructure in general.

In the old days of a software startup, your series A and series B investments may have been $10M, more or less. And most of that money went to paying a group of developers and some general overhead. A pretty reasonable business model.

Gen-AI companies like this need $100M/yr, as in this case, just for cloud infrastructure, before they do anything unique with it, and before they even have any test results. And it's the same infrastructure cost every other Gen-AI startup needs.

That puts you in a different risk category (more like space flight, where you have to fly and blow up a few rockets before you can sell the first revenue flight). It also means you need to generate a lot more revenue quickly, on a product that is mostly hype with some early adoption, but few success stories or power users who can't live without it and are willing to pay for and keep costly subscriptions.

How many of you have, and keep(!), $28/mo/user subscriptions for Gen-AI tools? And how many of those subscriptions have to exist before they can collect $100M/yr in revenue? (Answer: roughly 300K.) And that's just for one of them. :thinking: Looks like in their case they could only find 1/10th of that. Oops.
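The back-of-the-envelope math behind that 300K figure, assuming the $28/mo price point quoted above:

```python
# Break-even subscriber count for a Gen-AI startup whose cloud bill
# alone is ~$100M/yr, at a $28/mo per-user subscription.
cloud_cost_per_year = 100_000_000          # USD/yr, figure from the post
subscription_per_month = 28                # USD per user per month
revenue_per_user_year = subscription_per_month * 12   # $336/user/yr

subscribers_needed = cloud_cost_per_year / revenue_per_user_year
print(round(subscribers_needed))           # ~297,619, i.e. roughly 300K
```

And that only covers the infrastructure bill; payroll, margin, and everything else pushes the real number higher.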

Do that in an economic environment where money isn’t as cheap as it used to be. That’s an all hands on deck situation to make it to the end of the tunnel.

Reminds me a bit of the streaming wars. Lots of investment money, until someone asked some questions. I hear lots of stories and know people who are out of work, because there is suddenly less content being made. Like the TikTok video that was all over the web the other week of Hollywood producers driving for Uber Eats.

Be prepared for more Jenga games.

That’s why Gen AI is so fluid with developments every day. Everyone is racing to make the math work and be the stand-out success before the sky falls.

If anyone should be worried about their job, I think ours isn’t near the top of the list, or at least not on account of Gen-AI. Streaming wars, different story.

The second issue is cloud infrastructure. I’m a big fan of the cloud in general, as it’s a more efficient and elastic resource that allows companies and individuals access to what can power their business. It’s good for business overall.

But it has changed some behaviors. In the old days if you needed an extra server, you had a pretty clear idea of how much it cost to put in the rack. And once it was there, there weren’t any surprises in terms of additional bills.

The other principle that shows up here - flexibility usually comes at a cost. The same compute unit in the cloud costs more on the spot than your on-prem version. Done right you can come out ahead due to the flexibility in scaling up and down. But if you have a very steady demand curve, you’re paying for flexibility you don’t really need.
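That trade-off is easy to sketch. A toy comparison below, where all prices are made-up illustrative numbers (not real vendor rates): on-prem you pay for the box 24/7 whether it's busy or idle; cloud on-demand costs more per hour but you only pay while running.

```python
# Toy cost model: on-prem vs on-demand cloud for the same compute unit.
# Prices are invented for illustration, not actual vendor rates.
onprem_cost_per_hour = 0.40    # amortized hardware + power, paid 24/7
cloud_cost_per_hour = 1.00     # on-demand rate, billed only while running
hours_in_year = 24 * 365       # 8,760

def yearly_cost(avg_utilization):
    """Return (on-prem, cloud) yearly cost for a workload that needs
    the machine avg_utilization of the time (0.0 to 1.0)."""
    onprem = onprem_cost_per_hour * hours_in_year              # idle or not
    cloud = cloud_cost_per_hour * hours_in_year * avg_utilization
    return round(onprem), round(cloud)

print(yearly_cost(1.0))    # steady 24/7 demand: (3504, 8760), on-prem wins
print(yearly_cost(0.25))   # bursty, busy 25% of the time: (3504, 2190), cloud wins
```

The crossover point moves with the price ratio, but the principle holds: a flat demand curve means you're paying the cloud premium for elasticity you never use.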

Cloud infrastructure pricing is complex. It's easy to make costly operational mistakes, and hard to estimate the actual cost. That makes it hard for companies who are used to selling you software at a fixed price (like a Flame sub), who suddenly have to bundle in all the compute cost that used to sit on the customer's infrastructure bill. We have seen companies fail to make that transition, with ugly results. It's easy to think: well, it's in the cloud, so it must be easy. Just magic. It ain't magic; it's the same stuff, just elsewhere in the supply chain.


Interesting article about the insanity of the AI company’s need for training material.

Mind you, the further toward the bottom of the barrel they go, the less likely the model will hold up to high-end post-production standards.


It occurred to me the other day that if they are training with low-rez internet shit, everything will resemble low rez internet shit.


And judging from the first couple of AI spots, they look like they are heavily influenced by low-rez internet shit.
