Layoffs and firings at Christmas time have been an ad agency tradition for as long as I can remember. Now that it’s hitting mega-corps like Google, it’s drawing more notice.
I keep thinking that AI is really good at creating boilerplate, generic stuff (a great custom stock-image generator) but not so great at creating anything truly original. That’s where a clever prompt will be the key. Maybe the art directors of the near future will just be smart prompt-writers.
Call me if you guys need that kind of work; we use AI for advanced face replacement, puppeteering, and ageing/de-ageing (amongst others).
But you REALLY have to read my self-published Amazon book on buying a used Zeppelin. It’s packed with quality information for those in the market, not to mention a stirring read. Will Smith was interested in optioning it as a vehicle for Jaden!
The time lapse feel of it is really what saves it though, right? It still seems incredibly temporally unstable. People can argue that the bizarre, every-frame-painted, time-lapse cartoon look will become a “look.” Great, we’ll use AI for it. But we also have no idea what the legal landscape for this stuff is going to be. There are active lawsuits. There are obvious copyright issues inherent in the technology and the way it’s trained. I think diving in head first is a true gamble. Could end up S.O.L.
Perhaps there’s a little bit of all of us in that spot. If I can figure out which pixel is mine I’ll sue.
A lot of grunt work will be removed, but AI’s ultimately a tool designed for humans to drive. It’s no different than the last time a whole ass industrial revolution hit our industry - When CGI took over and tools like Flame became prominent in the first place. If Phil Tippett could find work after stop motion was (sorta) phased out, I’d like to think the most skilled among us will manage too.
The people who’ll be hit hardest - and already are - are our friends overseas who do our roto. But again, that’s grunt work personified. I’ve done several jobs in the past year entirely with RunwayML mattes. It’s perfect for jobs where you can get away with a somewhat boil-y blob matte, and it turns out that’s more jobs than you’d think. I’m sure in five years it’ll be in a much cleaner spot.
Much of our job involves fixing images that were captured from a camera, right? They’re 80% there, just need some clean up, pretty sky, etc… I can see a future where we’re doing the same thing, but the images originate from AI - but they’re still only 80% there, and still need some tweaks.
wow, this is scary
That “Understand the potential dangers” headline is dumb. We can’t convince our uncles that Joe Biden isn’t a clone; nobody needs video proof. That ship has sailed.
doubly rich that the article is in the NYT, an outlet that ran so many articles on Clinton’s email scandal it may have turned the tide of that election. Like sure, the AI video is going to be the thing that sways a narrow election, and not the nineteen front-page articles a national newspaper writes on how old and incoherent one of the people running for president is.
It’s so far beyond that though. It’s fabricated pornographic images without consent, it’s incredibly sophisticated scams pulled off with ease, it’s misinformation campaigns blurring the lines between not just politics and conspiracy, but ANY information. It’s a perceptual break with being able to discern reality on a potentially massive scale. Real footage dismissed as AI-generated, AI footage presented as real. Having cameras in our pockets was at one point viewed as a way to increase accountability through visibility. We’ll have to see how that fares in this new age of battling over whether we can trust anything we see on a screen. I find it all pretty disconcerting, way beyond any realm of how it will affect my job.
To be clear, I think the toothpaste’s outta the tube and I don’t think there’s any going back with this much money on the line. I’m using AI stuff from Photoshop and whatnot. It’s a helpful tool. Make my little matte paintings. But I’m not gonna act like I don’t have concerns here.
We already have ethical dilemmas to deal with.
When do we choose AI to do matchmove work over paying someone to do it?
When do we choose AI to do roto over paying someone to do it?
When do we choose AI to do texture creation instead of paying someone to do it?
AI apps over MoCap?
I know I’ve crossed a lot of the boundaries above already; haven’t you? It’s hard to pick where to draw the line, but the line will likely get drawn when people you know start to lose jobs, rather than, as until now, someone in a country where the grunt work was cheaper. That’s hard to deal with already.
On the flip side though, it has allowed me/us to do additional shots that the production would otherwise have had to drop for lack of money. It may also help smaller studios that actually look after their artists compete with the behemoths. I wonder if worker conditions could improve at the behemoths too, with artists working a regular five-day, regular-hours position instead of stupid hours seven days a week to meet a deadline. Only if management is ethically minded, of course.
It really is a double-edged sword.
The nice(?) thing about AI becoming a popular method for dullards to make imagery is we as Flame users are uniquely suited towards fixing visual “errors” in non-traditional ways.
For decades now we have been pulling impossible tracks, patching all manner of mess, and cleaning up motion vectors.
Should this AI dreck truly take hold we’ll have more work than ever patching up shots.
I propose legislation that all AI be trained on nothing but re-runs of Gilligan’s Island and The Brady Bunch.
As I’m watching this discussion and several parallel threads over on CML (AI and the role of the Cinematographer, Another Take on AI) with all their varying degrees of freakout, naysayers, and anything in-between, I’m just struck with the fact that
a) This is developing at lightning speed (yesterday’s ‘near future’ is now ‘today’)
b) This is impacting white collar workers who aren’t used to this type of displacement
c) We generally failed handling this when it impacted blue collar workers over the last 50+ years
We really don’t have a reference point for how to evaluate and approach this. It’s a freight train going downhill at breakneck pace, powered by billions of $$$ in investment funds with wet dreams of a swarm of unicorns. That means whatever brakes there may be on the train will prove insufficient. Sparks flying and all.
At the same time I’m seeing people call for ‘protections’. It’s a normal human feeling. If I can’t fix it, and I didn’t cause it, well there ought to be someone that makes sure things remain fair.
Well, when steam engines, trains, airplanes, and computers came along, did we protect the horses? Or the trains when planes provided better options? Or typists when WordPerfect came around?
No, and it would be absurd to do so.
But then what does that mean? Is the world about to end? Well, maybe. Though I think we’re hard at work on some other scenarios to do that; we don’t need AI in the mix there. At no point in recent memory has the news been so dominated by pictures of war, division, and the dismantling and failure of things. But I digress.
It does, though, lower our tolerance for dealing with the AI threat. You get disaster fatigue. What does another doomsday scenario matter when there are so many on the table already? On the bright side: the ozone hole (a doomsday scenario a few decades ago) did recover.
But all is not lost. As has come up in our colorist Discord - all those AI images, including Sora, are for the most part trained on Rec709/sRGB assets. They don’t fit our current pipelines. You can’t easily use them in an Amazon Prime Dolby Vision project. Now, maybe we can retrain them, or modify the results with another set of AI tools. But training data that are ethically sourced are already difficult to get at scale, much less training materials in LogC. And what if that Mammoth really needs to hold the still unreleased Heinz Ketchup bottle redesign? Clearly the current models can’t achieve that.
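The Rec709/sRGB-vs-log mismatch is easy to see numerically. Here’s a rough sketch of my own (not anything from a real grading pipeline): decode an sRGB-encoded value to linear with the standard sRGB transfer function, then encode it with the published ARRI LogC3 (EI 800) curve. It ignores primaries and gamut entirely, which is part of why you can’t just drop an sRGB-trained AI frame into a LogC/Dolby Vision workflow.

```python
import math

def srgb_to_linear(v: float) -> float:
    """Inverse sRGB transfer function: display-referred value (0-1) -> linear."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def linear_to_logc3(x: float) -> float:
    """Encode a linear value with the ARRI LogC3 (EI 800) curve, published constants."""
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d = 0.247190, 0.385537
    e, f = 5.367655, 0.092809
    if x > cut:
        return c * math.log10(a * x + b) + d
    return e * x + f
```

Linear 18% grey encodes to roughly 0.391 in LogC3 but roughly 0.461 in sRGB, and the two curves diverge far more in the highlights, so anything trained on Rec709/sRGB assets needs a real transform (plus gamut mapping) before it can sit next to camera log footage.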
But could they in 2 years? Which based on the inverse Moore’s law of AI, means they have that sorted by Fall.
Back to we really don’t know how this all pans out, nor do we have a model to help us think about it. So maybe we should slow down a bit.
Human nature is to catastrophize everything. Look around you and count the signs (I mean printed, as in traffic signs, building signs, stuff printed on your Amazon box and latest production packaging). I haven’t done the math, but there is probably a 10:1 ratio of ‘Caution’, ‘you will die’, ‘do not park here’, ‘you will go to jail if you do that’, etc. etc. vs. signs that are positive in nature, ‘you can find parking two blocks ahead’, etc.
So it’s easy to fall into a deep hole reading all of this. So far humanity has survived a lot of bat shit crazy stuff. Let’s think positive that this will be no different. Not all of the doomsday will materialize and some new cool stuff will be around the corner. At a minimum a whole lot of people will be needed to write prompts, train models, fix stuff the model didn’t grok, develop and test the next model, and get the important imagery there the last 20%. Which has been our speciality all along. Do the stuff everyone else fails at.
Take a deep breath and be happy!!! At least it’s another day on the right side of the grass.