So you think AI isn't going to take your job?

With all the fidelity 1024 uprezzed mp4 has to offer. It’ll be a new gif renaissance.

6 Likes

Looks impressive, but in my experience almost everything I’ve gotten from Runway has fallen far short in the quality department for anything other than social media content. I agree, Ben, that the clock is ticking, but I just heard on the Decoder podcast about the exponential compute cost for things like clip length and resolution. The guest was asked why Veo 3 limits generative clips to 8 seconds, and the answer was that doubling the length squares the compute required. The same must be true for any data metric (resolution, bit depth, etc.), and then deploying that at scale has gotta cost more than $279/month. We’re about to find out!
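For what it’s worth, a back-of-envelope sketch of why clip length is so punishing: if generation cost is dominated by self-attention, compute grows with the square of the token count, and token count grows roughly linearly with clip length. The `tokens_per_second` figure below is made up purely for illustration.

```python
# Toy sketch (assumption: cost is dominated by self-attention, which is
# O(n^2) in sequence length; tokens_per_second is a made-up illustration).
def relative_cost(seconds, tokens_per_second=1_000):
    tokens = seconds * tokens_per_second
    return tokens ** 2  # attention FLOPs scale with the square of tokens

# Doubling an 8s clip to 16s quadruples the attention cost:
print(relative_cost(16) / relative_cost(8))  # -> 4.0
```

Whatever the exact exponent ends up being in practice, the point stands: length and resolution don’t scale linearly with compute, and neither does the bill.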

6 Likes

All of the AI companies are betting on tech’s continual adherence to Moore’s law with advancements in AI acting as a force multiplier on each iteration developed by its human counterparts—AI will make more and better AI possible and then investors recoup their losses. Capital wins, play the long game.

It’s a familiar argument—the same that often gets used with regards to the environment. Consume more, develop more, don’t just stay on the path but make the path an 8 lane super-highway because that’s the only way we’ll figure out how to solve the problems of over consumption and develop the technology that will enable us to heal the earth.

It’s the new housing bubble. Someone soon will tell the emperor they’re not wearing any clothes and start calling in debts.

Note to self: short AI.

3 Likes

already happening:

openai gets into bed with oracle.

microsoft gets soft and pulls out.

big stargate only 20% of proposed size.

There’s an analog but I just can’t piece it together…

At least the DOE put us on the right track before everything else became some dumb artificial reality show.

Science, right?

4 Likes

Exactly.

One of the emperor calls may be this story from the LA Times a few days ago.

On July 17th a judge authorized a class action against Anthropic for illegally downloading 6M copyrighted books from shadow libraries for its training. If the jury awarded the maximum penalty, Anthropic would be on the hook for $1.05T in damages. That would put them out of their misery in no time, and change the formula all the AI companies have to use to run their business.

As it stands none of them are profitable despite all attempts. If they actually had to pay for training data, there’s no path to success on the horizon, regardless of how many dreams Altman has.

Stop engines.

Their entire business model was ‘shoot first, ask questions later’, and hoping that by the time people take issue, they’re ‘too big to fail’ and they get to live another day. Maybe not.

The AI quality-and-availability-of-training-data problem could be fixable, IMHO, while at the same time helping to offset the financial displacement countless people will feel once AI is doing their jobs.

Humans can opt in to leasing out their data and be paid as a foundational component of a UBI. There have already been big breakthroughs in LLM model partitioning (I forget the exact term), basically being able to turn off sections of inference when the underlying training data is no longer licensed for use. It’s still early days, but it seems someone has seen the writing on the wall. No longer want to ride the gravy train? Wait till the end of the month, opt out, and the checks stop.

On a personal note, I’ll say that I think quality models will only ever be generated when the actual data itself has value. The data of value is the stuff we tend to hold closest to our collective chests, and that being the case we can either be people who get paid for our possessions or people who are stolen from.

Two stone with one bird. I’m going on break.

That’s already happening. Every week I get emails which essentially read like this ‘Don’t let the TB of video files in your library sit idle. Sign here, we’ll take all the files, prep them for model training, and you get paid.’ The modern version of stock libraries.

Of course most of the hundreds of TB of video files I have on LTO aren’t unencumbered, so there’s no option for me to do that.

The question remains: if the big AI companies had to pay for that content, what does that do to the economics? It makes sense for smaller, targeted models that feed specific industry sectors and have specific clients lined up.

We also see a lot of training in the private sector, where companies have internal models, which will take one of the big foundation models, and then augment with their own data.

True. One challenge, though, is that while many of us have our own quality data, the volume isn’t big enough to train models with it, short of large Fortune 500 companies that have massive internal data lakes.

These models are all statistical probabilities in the weights. Which also means you need to reach statistical significance when you want to ask them a broader range of questions, not just a very narrow case like what’s common with CopyCat.

Back in the days when I built websites, the number of people who came and said ‘I want customer reviews and recommendations like Amazon’ was endless. But Amazon’s recommendations work because they have endless data points to run stats on. Pick a random Amazon SKU and the site will say ‘4K people bought this in the last week’. Compare that to, say, Underwood Ranches (the real replacement for Sriracha), who sell maybe 10K units in a year, mostly at wholesale. There’s just not enough data for those folks to do anything involving statistical significance.

So for most smaller operations high quality private data is insufficient to be meaningful.
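A toy sketch of that sample-size problem: the margin of error on an average star rating shrinks only with the square root of the number of reviews, so a low-volume SKU never gets a tight estimate. The standard deviation below is a made-up illustration, not real ratings data.

```python
import math

def rating_margin(n_reviews, stddev=1.2, z=1.96):
    """Approximate 95% margin of error on a mean star rating.

    stddev is an assumed spread of individual ratings; z=1.96 is the
    usual normal-approximation multiplier for a 95% interval.
    """
    return z * stddev / math.sqrt(n_reviews)

# A high-volume SKU vs a 10K-units/year shop with a handful of reviews:
print(round(rating_margin(4000), 2))  # -> 0.04 stars
print(round(rating_margin(12), 2))    # -> 0.68 stars
```

A ±0.04 estimate is something you can rank and recommend on; ±0.68 on a 5-star scale is mostly noise, which is the Underwood Ranches problem in one line of arithmetic.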

@cnoellert get a copy of “More Everything Forever” by Adam Becker if you haven’t already. I think you’d dig it. That and “In the Age of Surveillance Capitalism” by Shoshanna Zuboff have been my two favorite non-fiction reads this year.

2 Likes

Yeah, I’m not expecting this to wipe many of us out just yet, but it’s only a matter of time before they sort out the quality and clip lengths. The bell has rung, unfortunately, and if this works out to be a lot cheaper than paying artists then it’s game over for so many.

2 Likes

Studios like Lionsgate will not consider US$200,000/month a big deal; more likely they will be dumping US$2,000,000/month or possibly even $20,000,000/month, and they have a back catalog of training material…

1 Like

I feel like people are missing a bit of a trick here.

There’s an absolute ton of work coming in AI, and Flame Artists are perfectly placed to take advantage of it. Any AI content built to a proper client brief — with revisions, feedback, and approvals — is essentially a very complex VFX shot. And right now, there’s a massive demand for that skill set.

If you want to take advantage of it, you should be furiously learning ComfyUI.

It’s basically the cutting edge of the biggest VFX revolution in our lifetime.

I’ve been chatting with a director on a large-scale AI-heavy project, and he lamented the lack of department heads in these projects. Right now, the AI artist ends up doing everything — set design, DoP, art direction. But long term, that’s not how high-end work should run. You want those individual departments feeding into the pipeline, and a dedicated AI/VFX artist bringing it all together. You don’t want some grizzled old VFX artist designing hair and makeup, do you? You want a proper stylist offering ideas and that being fed into the eventual end product.

That central role — managing and integrating all those creative inputs — is the new online artist.

I’ve currently got three major AI/VFX projects on the go, and the work is some of the most creatively rich and technically interesting stuff I’ve done. Honestly you should be getting stuck in.

20 Likes

Great insight @RufusBlackwell but you need to remember that collaboration has always been difficult for the fearful.

5 Likes

Yeah it’s pretty incredible how perfectly positioned we are. We sit in front of the greatest image manipulator on the planet, powered by workstation class Nvidia hardware, at the end of a very specific image manufacturing pipeline, and happen to be the best problem solvers and creative minds in post.

It’s like what… 3 lines of code to install Comfy? 2 lines in a terminal to set up a venv? 7 clicks to download and move Flux Dev into /models?
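For anyone counting those lines, here’s a hedged sketch of roughly what that terminal dance looks like on Linux with an Nvidia card. The exact commands vary by platform and release, so treat the ComfyUI README as authoritative; everything below it is an assumption.

```shell
# Rough sketch of a manual ComfyUI install (Linux, Nvidia GPU).
# Check the ComfyUI README for the PyTorch/CUDA install line that
# matches your driver; the wheel choice below is an assumption.
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
python3 -m venv venv && source venv/bin/activate
pip install torch torchvision torchaudio   # pick the wheel matching your CUDA
pip install -r requirements.txt
# Drop model weights (e.g. Flux Dev) into the matching folders under models/
python main.py   # serves the node UI on http://127.0.0.1:8188 by default
```

That really is about the extent of it, which is the point: the barrier to entry for a Flame artist sitting on workstation-class Nvidia hardware is an afternoon, not a career change.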

Fuck, even the workflows under Browse Templates are enough to move the needle as a compositor’s companion.

7 Likes

I appreciate that sentiment, and it carries a lot of weight, not least because you have demonstrated it in action.

What makes this a bit harder to do is the overall debate on AI and the way most companies, the investors, the CEO class and everyone out there is approaching AI.

If AI were just the latest cool tech (like the iPhone, the Mac, or even some aspects of social media — let’s say LinkedIn, to stay away from the controversies of IG), then it would be a lot easier to embrace with open arms. We’ve all done this so many times in our careers. With every new generation of compute power, newer versions of Flame or your favorite app, our workflows and opportunities have changed, usually for the better.

But all of this happens against a backdrop where AI is hyped into mega-bubble territory by the Silicon Valley crowd, which claims it will solve all the world’s problems by tomorrow, when clearly this is far from the case (in a CMU study, AI bots’ performance on basic tasks was somewhere between 7–26%, and the rest was hallucinations and other apologetic crap); where the big companies are salivating over eliminating as many jobs as possible (already no one is filling entry-level positions, and Altman is claiming that AI is replacing lawyers and doctors); not to speak of the daily dose of ‘here is the latest video AI barfed out without a human making it’.

It’s emotionally very hard to reconcile that and embrace the opportunity even if we absolutely should. How can we say that on a human level we find this all repulsive, yet on a professional level we can’t wait to exploit this to keep putting food on the table? That’s somewhat incongruent.

Technological innovation in the past has displaced people from existing roles, but it always provided new roles, and everyone who was flexible enough had a seat at the table. The farmers became factory workers, and their children became knowledge workers. Generally there was enough work to go around.

The AI wave is built on the premise of replacing humans en masse without explaining how those humans should earn a living instead. It’s an inhumane endeavor by a few entitled people without regard for humanity. That cannot be ignored. Some things, however popular they may be, one should not get on board with or normalize just because it’s convenient.

To be clear, AI inherently is not a bad technology. The new models and tools are fantastic productivity enhancers if used correctly and responsibly. There are a lot of great use cases for AI tools, some of which you have shown. And I do embrace these tools in those scenarios.

And we also have to separate the contributions and progress made by AI researchers, and the advances by companies like Nvidia in providing the compute to support them, from the problem, which lies entirely in the commercial exploitation of that research and progress in unsavory ways.

9 Likes

Just tried Runway’s Aleph for the first time. :scream:

1 Like

What did you do with it?

I think you are having some selective memory here about how new technology sometimes arrives at our doorsteps…

I don’t say this to assess blame or decide who is the worse offender, etc. That is a much longer discussion. These changes just happen.

1 Like

To add a poignant indicator of how deep we are in our own shit: the linked Guardian article is surrounded by advertisements for life insurance.

special.

tell your parents.

add it to your reel.

1 Like

Are you on Mac or Linux? Pinokio is probably the simplest implementation and runs on all 3 platforms:

Also has a bunch of other AI tools that are quite interesting.

On a Mac you can learn Comfy, but it is quite slow because it’s all optimised for Nvidia. Right now I’m running Flame on a Mac, then using Mimic PC to rent a workstation that’s running Comfy. Bit of a weird setup, but I move around a lot and it essentially lets me run 2 fat systems simultaneously.

There are loads of different implementations of Comfy, and it can be quite fiddly, but it’s a really interesting learn.

2 Likes

The links above are for Mac only and are a forum member’s way of accessing Comfy from within Flame.

It’s not applicable to my operating system or use case, so I can’t speak to that.

Comfy is one of the most well documented AI apps on the planet and there’s thousands of Discords, classes, YouTube videos, and Medium articles going through it in detail.

The GitHub repo is a good place to start.

1 Like