Cory Doctorow, the genius behind the theory of the enshittification of the internet (look it up), has what I feel is an interesting take on what AI might shape up to be and what we'll all be left with afterwards.
He's extremely thoughtful; looking forward to reading it.
Good article.
I hadn’t thought about the leftovers from prior bubbles and how they either did little or fostered new good things.
An excellent read
I think our commercial clients are still high-stakes and risk-intolerant, despite what feels like a drop in quality/budget.
That was a fabulous read. Thanks!
Finally got around to reading it over coffee.
Here's what I just wrote when I re-shared it via LinkedIn. Apologies for just cutting and pasting here; I need to get back to my deadline.
A very good and thoughtful perspective on AI and the likely bubble it represents.
As an engineer I'm fascinated by how it works and what it will be at some future point. As a post-production artist I benefit from the technology and see both the hyped and the valuable versions of it. Thinking like a business person, it's clear that the business model of today's version of AI is unlikely to survive the influx of investor dreams, when the accountants finally have to reconcile it all.
Not with the current generation of technology, anyway. That is not to say that future versions of the technology can't change the cost-benefit formula to make it work.
There are big players stuck between a rock and a hard place, Google being one of the big ones. The bubble will take some of them down on the way out.
As humans, we also have to think about the impact of eliminating jobs. There is no utopian world where we all have six-figure engineering jobs. It's like the law of averages: when you want to be above average (every parent says that to their kids), you have to remember that at least half of us need to be below average to make the numbers work. Always appreciate those folks and treat them with the same respect as your peers. If the goal of AI is to eliminate the bottom 50% of the world, we're not seeing the forest for the trees.
This was a great read. Thanks for sharing!
I'll say that what really sticks out to me, and what I'm still processing, is the idea that things will actually become more expensive to achieve, as a result of the required human intervention and QC added on top of the inevitably rising costs of the higher-level models, without necessarily netting much of a positive change in quality. The radiology example really shows this well.
And then what we're left with at the end of the day is hype, a few people becoming new billionaires, a few others knowing a coding language that will never be all that broad, and an endless sea of misinformation-laden content, scams, nonconsensual pornographic images, and a few uncanny-valley meme generators.
Makes me question a lot of things. Makes me feel a little rueful and uncertain about the future. I start thinking about single use plastic water bottles and coffee pods, things that embedded themselves sneakily in our world, for the sake of convenience, that turned out to not be such a great thing but are going nowhere now.
It's a bit more complex. One of the main costs he refers to is the cost of query processing. An old-style Google search cost fractions of a penny each time someone searched. With AI-assisted search, each query can actually cost more than $1 in compute time (the actual number keeps moving, but it's in that territory). Which of us would be willing to pay $1 every time we searched the internet? Right, we don't have to, because investors subsidize that cost, which distorts the market.
I do pay a subscription to a search engine so I don't have to use Google. For $10/mo I used to get 600 searches. That's an acceptable cost for my privacy (they've since made that plan 'unlimited').
When I was doing a big roto job the other day, where we were training CopyCat for 20 hours, I computed the electricity consumed in the training. It wasn't inconsiderable, but still less than $5, and all of this roto would have cost way more, even overseas, if done the old way.
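To put rough numbers on both of those costs, here's a quick back-of-envelope sketch in Python, just for illustration. The GPU wattage, electricity rate, and the "fraction of a penny" figure are assumptions I'm plugging in; only the ~$1 AI query estimate, the $10/600 search plan, and the 20-hour training run come from the comments above.

```python
# Back-of-envelope numbers for the two costs discussed above.
# Per-search figures come from the comments; GPU wattage, electricity rate,
# and the classic-search cost are assumptions for illustration only.

# --- Search cost comparison ---
classic_search_cost = 0.002          # assumed "fraction of a penny" per classic Google search
ai_search_cost = 1.00                # rough figure cited for an AI-assisted query
paid_engine_monthly = 10.00          # $10/mo subscription
paid_engine_searches = 600           # searches included at that price

per_search_paid = paid_engine_monthly / paid_engine_searches
print(f"Paid search engine: ~${per_search_paid:.3f} per search")
print(f"AI-assisted search is ~{ai_search_cost / per_search_paid:.0f}x that, "
      f"and ~{ai_search_cost / classic_search_cost:.0f}x a classic search")

# --- CopyCat training electricity ---
gpu_watts = 350                      # assumed draw of a single training GPU, in watts
hours = 20                           # training time from the roto job above
rate_per_kwh = 0.15                  # assumed electricity rate, $/kWh

kwh = gpu_watts / 1000 * hours
cost = kwh * rate_per_kwh
print(f"~{kwh:.1f} kWh of electricity, roughly ${cost:.2f} at ${rate_per_kwh}/kWh")
```

Even with generous assumptions, the training electricity comes in at a dollar or two, which is exactly why the "it's very situational" point holds.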
So it’s very situational. But that gets lost in the hype machine of the Internet.
The other aspect that is actually more impactful on the cost side, and where his Cruise example is quite apt, is that AI tools in post are notoriously unpredictable. You do have to spend more time checking the result for artifacts than with traditional procedural techniques. And if you find an issue, it's often not as simple as moving a keyframe into position; it frequently requires a much bigger workaround.
The biggest risk to post though is similar to what happened with DSLR cameras and photographers. Once the market got a taste of how good the tools had become in some cases, the clients no longer wanted to pay professional rates, because they didn’t understand the value model anymore.
As AI tools proliferate, there will be the expectation that all kinds of tasks we do should be fast and cheap, because 'there are those AI tools' (insert heavy Texan accent here). With that, the traditional budgets will disappear because of perception. And once the budgets are gone, but the tools don't deliver and we have to fix the results the old-school way, it will be harder to get paid for that work.
That’s my biggest concern in the short-term.
Most AI tools I have used work properly and produce usable results 50% of the time at best. And that's before you worry about the biases in the training data.
Agreed, and I see your points here! Eventually, though, these AI companies need to make a profit for the investors that are subsidizing these costs, right? And that's going to get passed on to consumers. But as enshittification posits, so many things will be left decimated by this AI takeover if it really takes off (the Uber vs. taxis situation) that we'll end up viewing this as a necessary service we have to pay for, because that's the new paradigm of seeking digital information and creating digital content. Maybe we can get into the question of "can't we just use Google again, good old normal, not-expensive Google?" I think about cell phones. It's very, very difficult to find a dumb phone nowadays. There's no market for a machine that just makes calls and texts. It's been subsumed by the smartphone industry. And now a cellphone, a smartphone, is becoming an essential piece of technology. There's no real going back. And if you're a kid who grew up never even needing to read a map, good luck without an app. I dunno. I might just be a curmudgeon here, and oversimplifying. But I'm never a fan of allowing Silicon Valley to just go wild with the thing they love so much: disruption. Again, curmudgeon, but disruption isn't necessarily a good thing…
That's the ideal scenario. But that means the costs have to come down to a level acceptable to consumers by the time investors are finished subsidizing them. And that is unlikely to happen. In that case the bubble will burst.
The comparison might be the streamer wars. The same type of money was chasing streaming dominance and funded all kinds of content. However, content creation costs got decoupled from the revenue consumers were actually willing to pay. Investors made up the difference, until earlier this year. And now that profitability matters, suddenly there are way fewer shows, and my prediction in the summer that the industry would shrink by a third seems to have come to pass.
Consumers weren't willing to pay 3x their old cable bills for all that extra content to be available to them. They enjoyed it while it lasted, but it doesn't work economically, especially with the higher cost of living and all the other headwinds.
And consumers won’t be willing to pay for AI based search and content more than they have so far, which btw for most consumers is $0. Very few people pay for search results.
In the AI case, the biggest impact will be on AI-related jobs, but there will be some collateral damage for certain.
And Google is screwed (to be PC). Their entire business has grown to the size it is on the cash cow of paid search, with huge margins on those old searches. Now, to remain competitive, they should embrace AI-based search, but it would kill their #1 profit-making product. Yet not doing AI will leave them behind in the market. It's a no-win situation for Google. Not that I'm sad about that, TBH.
I completely disagree with this article, which says much and yet really nothing at the same time.
How is this actually at all like the DotCom bubble aside from investors hopping on the new hot thing?
There is for sure a hype bubble, of course. But I would say it's rather different in two main ways: 1) the money flying around in the real economy, and 2) the nature of the technology itself.
Money flying around the "real" economy: I was a valet at two very high-end restaurants in Seattle during the DotCom boom. There was money being spent everywhere, as in people spending their newfound riches on cars, in restaurants, at little local shops, and on buying tons of shoes at Niketown.
This is in no way like that; sorry, but it's a ridiculous comparison.
The crypto bubble is the only really close comparison. It was like a very mini DotCom bubble.
But the crypto, housing, and financial bubbles were all overt financial crimes, not investors buying into what they hoped would be the next big thing even if, looked at critically, it made no sense.
I understand what the author's getting at, but I simply don't buy it.
He says towards the end:
“Just take one step back and look at the hype through this lens. All the big, exciting uses for AI are either low-dollar (helping kids cheat on their homework, generating stock art for bottom-feeding publications) or high-stakes and fault-intolerant (self-driving cars, radiology, hiring, etc.).”
These are both incorrect.
ChatGPT is being widely used in education already, for example. My wife's a teacher, and everyone uses it to help lessen the insane workload nowadays.
His self-driving car example is the most apt, but only in isolation. Tesla subjectively has the most refined self-driving, and even they do not feel it is ready for autonomy.
So yes, the self-driving example was an overreach and a symptom of the hype around the technology. BUT, that in no way means the technology is not still evolving, with a real probability of it happening.
“generating stock art for bottom-feeding publications”
- Really, he thinks that's all that will become of AI-generated art? New models released literally last week have taken another big step toward photorealism. Adobe is now pushing its resources all-in on AI stock art. This guy seems completely disconnected from what's actually happening.
"or high-stakes and fault-intolerant (self-driving cars, radiology, hiring, etc.).”
He's being disingenuous at best with the radiology and hiring examples. AI actually just made another big leap in its medical capabilities, including the aforementioned radiology.
AI is most certainly going to be a part of medicine, helping doctors with evaluations and determining treatment courses.
Hiring: it's already being used, buddy…
“Every business plan has the word “AI” in it, even if the business itself has no AI in it. Even as two major, terrifying wars rage around the world, every newspaper has an above-the-fold AI headline and half the stories on Google News as I write this are about AI.”
- Maybe he missed the memo that AI is being used in both of these wars? Because it is. I just don't get this statement. What does the news cycle have to do with the actual viability of something?
Also, the dude is in the tech sector and writes about AI, so of course his Google News feed will show that. Oh, and how is that, you say? Because of AI.
As to the whole cost of running these large models: again, he is either being incredibly ignorant, in denial, or, most likely, just obnoxious.
These large models can be, have been, and are being used to train much smaller models.
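To be clear about what "using large models to train much smaller ones" typically means, here's a minimal, generic knowledge-distillation sketch in PyTorch. The toy networks, random data, temperature, and training loop are all illustrative assumptions, not details from the article or from any particular lab's pipeline.

```python
# A minimal sketch of the idea: a small "student" network learns from the
# soft outputs of a larger, already-trained "teacher" (knowledge distillation).
# Models and data here are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))  # "large" model
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))    # much smaller model

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so the student sees more signal

for step in range(200):
    x = torch.randn(64, 32)                 # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)         # the big model labels the data
    student_logits = student(x)

    # KL divergence between the softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final distillation loss: {loss.item():.4f}")
```

The point is simply that the expensive model's outputs can become training signal for something far cheaper to run.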
Also, this is actual technology (physical and compute), unlike the DotCom boom.
As such, the physical and compute technology is advancing and will continue to advance.
As for any sort of regulation: Sam Altman has already sweet-talked Congress, the White House, and the podcast world.
He has since successfully reinstated himself at OpenAI, only now the small interim board is a bunch of neoliberals in lockstep with the establishment. So I think any sort of regulation will be all good for the industry and will let them do whatever they want.
Anyways sorry for the extended rant, LOL
Also, I can still get an Uber regardless of its dubious financial nature. That in itself negates his entire argument.
You're all good! I'm not all that passionate about this to be honest. I'm pretty resigned to just trying to make do in the hellscape we're positioning ourselves into. Looking forward to the insurance company that owns the hospital I go to denying coverage for my surgery based on their AI algorithms after another AI algorithm disputed my doctor's plan of care (I need the surgery after being run over by a self-driving car, faultless in the matter, an AI bug, and it doesn't even matter anyway because it's worth being able to watch AI-generated garbage while having my car drive me places so I can catch up on the millions of TV shows generated in every moment that my children are hopelessly addicted to).
I can recognize this is not what Cory's argument is referencing, but I can't say I'm a huge fan of the direction we're headed, and simultaneously I don't see it going away anytime soon! Not sure what that's going to look like, hopefully not the above example! Mainly just trying to get a grip on whether this is Pandora's box, or Pandora the streaming internet radio.
Yeah, I've gone deep down the AI rabbit hole these last few months… ehh
Yeah I basically agree with all of this. It’s an interesting article, but I think the conclusions are wrong.
His stance is interesting, though he has no citations to back anything up. I looked him up too and saw he is a blogger and sci-fi author. I filed this in my head as opinion.
I have used AI a lot. I think we are all interested in the picture side of it. IMHO it is a tool that is difficult to control for pictures. Using words to ask a computer to produce the picture in your head relies on the two of you being in sync. I'm not plugged into the Matrix, it seems.
Nevertheless, it has been useful and has assisted in many regards: a little bit of a picture here, a little bit of roto there, a few words elsewhere.
It is only as good as the datasets it has, and right now it is learning all the time from what people ask it to do and which of the (usually four) options they choose. The progress has been swift in a year.
What will the future hold? I think it will be absorbed into the toolbox. I am optimistic that humanity will still want variety and this will assist but not be the sole vendor.
I took a bit of time to consider the best way of putting it…
The very nature of bubbles is that there are lots of aspirations and lots of seemingly rational explanations of why this effort will defy all the odds and be a breakthrough. And a lot of very smart people who will construct complex and seemingly logical arguments to support the narrative.
In some ways this kind of thinking is required to push boundaries, to convince yourself that this is not wasted effort, or sunk millions of investment money. The very nature of VC money is to believe in moonshots and count on the statistical probability that one in ten of the bets will have a good return while the rest fizzle out. And that one in a million will be a unicorn. That's their investment thesis.
And for quite some time the results appear to be gravity-defying, further convincing everyone that this actually will all hold true despite the naysayers, creating a bit of a herd effect that then leads to shutting down any cautious voices or what-ifs.
Until it doesn’t.
And sometimes shutting down cautious voices has to be done hard to protect the aspiration.
Films have been made about it. There were fewer than a handful of people who argued that the pre-2008 housing market was a house of cards, while the whole country convinced itself otherwise, based on very sophisticated investment risk models that few could wrap their heads around but that were flawed nevertheless.
Does Cory Doctorow have all the hard evidence that this is a bubble, so that people will believe his cautionary tale? No. The evidence doesn't exist yet. It's a discussion based on reason and history: some of the supposed proof that this is the big one may be flawed, even if we cannot yet connect the dots.
Will 99.999% of people disagree with him, because they've convinced themselves that this gold rush is for real? Absolutely. That's the nature of how this game works and has worked the last n times bubbles existed.
Does that mean he's right? No. Does that mean he's wrong? Neither. Is it worth considering his point of view? Yes.
It's a good exercise in judgement to consider that there may be something to his argument, so you don't fall victim to believing that this moonshot will be the one despite the massive odds of history.
Actually, the crazier the moonshot, and the more hype there is to sustain, the more risk that something lurks in the shadows.
Even bright minds like the CEOs of the streaming businesses and major networks convinced themselves that this was all a good idea until gravity (err, profitability) came back into consideration.
I think Tesla is a good example of an aspirational story of the unthinkable being possible. And much progress has been made that others thought impossible. But there is also considerable evidence that not all is well. There is significant discussion and evidence that the autopilot, and how it's being used, suffers from the aspiration/reality disconnect, in what some may consider forward-thinking and others quite dangerous. We won't solve this question here, and I'm not trying to argue it. The outcome will be told in the future, either by a major sustained breakthrough or by a dismal body count.
The concern with the current AI push, the autopilot, and even back in the 2008 housing crisis, is that the benefits are realized by a few, while there is significant collateral damage to innocent bystanders. All in the name of progress and profits. Where you stand on this is an age-old question of individualism vs. community and how to balance the two. We all have to pick our position on that spectrum, and we won't all agree on where we land.
That's kind of doing him dirty, haha. Doctorow has been around for a long time and has written a lot of salient stuff. He used to run the site BoingBoing.net, has a good blog currently, and has written a variety of books. I like him, as he's one of the few people I've found who think and write about tech instead of pasting press releases. One of the good guys, if you will.
His theory of “enshittification” explains why Amazon is trash now, as well as why other companies and apps seem to “fall off” after a while:
Perhaps it is doing him dirty. However, I'm in the middle of a master's degree, and unless your theories are supported by other academics, they don't carry weight. I applied the same rigour here.
I’m not saying I dismiss him. But until I see supporting evidence, I file under opinion.
It is an opinion article. My point is he’s got good opinions.