Oops, haha… Well in that case…
I think you can attend virtually for free?
Who needs AI?
David Li, Arenova Capital managing partner and Dream Machine chairman, says he sees opportunity in the devaluation of VFX houses and in scaling operations as industry behemoths raise prices and ad monetization kicks in.
There may or may not have been drugs, alcohol and loud music during this exchange of word salad, also strippers, dragsters and promises of a monorail or two…
Oh, and there’s some integration of the letters A & I, at some point.
They’ll be making ads, from their iPads, from their yachts in a fjord somewhere…
It’s so exciting.
Start the clock on the evergreen question: “how long till private equity ruins it?”
ETC was also recently acquired.
Perhaps AI really means Alternative Investments…
I really do NOT understand the massive wave of VC/PE acquisitions of VFX companies right now.
Can someone please explain why and why now?
Purchase Private Propaganda Pipelines: Profit.
It’s been a spectacularly successful business model for the past 30 years, including talent and technology mergers, which are always trivial and immediately compatible…
But seriously, the handprints on the wall of my cave indicate that subscriptions are no longer sufficient for pouring 12 thousand million dollars a year on malcontent, and there will be a big move to insert advertising everywhere.
So for some people it makes sense to buy advertising factories because the demand for adverts will increase and all adverts are formulaic and the same…
Easy money…
MonoRailCoin anyone?
The THR article does state that many acquisition deals died on the vine due to industry contraction, but it seems like this Dream Machine conglomerate sees a pot of gold while streamers struggle to turn a profit, as @philm postulates. Seems a very strange repurposing of long-form VFX talent to crank out ads on streaming platforms. Maybe they just got over their skis and didn’t realize it until now…
“David Li, Arenova Capital managing partner and Dream Machine chairman, says he sees opportunity in the devaluation of VFX houses and in scaling operations as industry behemoths raise prices and ad monetization kicks in.”
This explanation isn’t specific to VFX, but I think it may have something to do with the current wave of M&A.
There is more than $4tn in dry powder sitting around at private equity and venture capital firms. That number has doubled in the past five years, and had already doubled in the ten years prior. That period of time coincides roughly with the emergence and rapid growth phases of the tech industry. Now that the tech industry has entered a mature phase, most of the low-hanging fruit for disruption has been picked and opportunities for organic growth have slowed. In lieu of these growth opportunities, M&A activity has increased (with some slowing as a result of post-COVID economic uncertainty).
I would venture that this is causing people to buy dumb things they don’t properly understand simply because they can/have to.
To bring it back to the thread, it is almost certainly one of the driving forces behind the current feeding frenzy around AI, which may also be driving these purchases under the tenuous theory that AI will result in massive increases to productivity and efficiency in our industry.
It’s obviously a lot more complicated than that, but that’s my general theory of the case. Also, I eat crayons so this could have nothing to do with anything.
Interesting development in the larger picture. Last year Goldman Sachs was all in on GenAI.
This year’s report is a lot more mixed. Two of the four top analysts have a negative view of GenAI potential.
Headline: Gen AI: too much spend, too little benefit?
Full Report - pertinent summary view is on page 3. It’s 33 pages total with lots of charts and detail, if you’re bored between renders.
Some excerpts:
The promise of generative AI technology to transform companies, industries, and societies continues to be touted, leading tech giants, other companies, and utilities to spend an estimated ~$1tn on capex in coming years, including significant investments in data centers, chips, other AI infrastructure, and the power grid. But this spending has little to show for it so far beyond reports of efficiency gains among developers.
He estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks. […], arguing that AI model advances likely won’t occur nearly as quickly—or be nearly as impressive—as many believe. […] So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.
Sidebar: US GDP is currently $28.27T, 0.9% of that is $254B. So we will spend $1T on a return of $254B?? That’s a quarter on every dollar. Woohoo! (caveat - this is US GDP, the capex spend is presumably global - but still)
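If you want to sanity-check that sidebar, here’s a quick back-of-the-envelope in Python. It only uses the figures quoted in the excerpts and the sidebar above (GDP figure, the 0.9% cumulative uplift, the ~$1T capex, and the quarter-of-exposed-tasks claim), so treat it as a napkin sketch, not a forecast:

```python
# Back-of-envelope check of the sidebar math, using only the numbers quoted above.

us_gdp = 28.27e12   # ~$28.27T US GDP, per the sidebar
gdp_uplift = 0.009  # 0.9% cumulative GDP growth over the next decade (Acemoglu estimate)
capex = 1e12        # ~$1T estimated AI capex

incremental_gdp = us_gdp * gdp_uplift
print(f"Incremental GDP: ${incremental_gdp / 1e9:.0f}B")              # ~$254B
print(f"Return per dollar of capex: ${incremental_gdp / capex:.2f}")  # ~$0.25, i.e. "a quarter on every dollar"

# The same excerpt also implies how much of the economy is "AI-exposed":
# if automating a quarter of exposed tasks touches less than 5% of all tasks,
# then exposed tasks make up less than 0.05 / 0.25 = ~20% of all tasks.
print(f"Implied share of AI-exposed tasks: {0.05 / 0.25:.0%}")         # ~20%
```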
Jim Covello goes a step further, arguing that to earn an adequate return on the ~$1tn estimated cost of developing and running AI technology, it must be able to solve complex problems, which, he says, it isn’t built to do. He points out that truly life-changing inventions like the internet enabled low-cost solutions to disrupt high-cost solutions even in its infancy, unlike costly AI tech today. And he’s skeptical that AI’s costs will ever decline enough to make automating a large share of tasks affordable. And he questions whether models trained on historical data will ever be able to replicate humans’ most valuable capabilities.
The rosier outlook from one of the other analysts:
He estimates that gen AI will ultimately automate 25% of all work tasks and raise US productivity by 9% and GDP growth by 6.1% cumulatively over the next decade. While Briggs acknowledges that automating many AI-exposed tasks isn’t cost-effective today, he argues that the large potential for cost savings and likelihood that costs will decline over the long run—as is often, if not always, the case with new technologies—should eventually lead to more AI automation.
Their concluding statement:
So, what does this all mean for markets? Although Covello believes AI’s fundamental story is unlikely to hold up, he cautions that the AI bubble could take a long time to burst, with the “picks and shovels” AI infrastructure providers continuing to benefit in the meantime.
That said, looking at the bigger picture, GS senior multi-asset strategist Christian Mueller-Glissmann finds that only the most favorable AI scenario, in which AI significantly boosts trend growth and corporate profitability without raising inflation, would result in above-average long-term S&P 500 returns, making AI’s ability to deliver on its oft-touted potential even more crucial.
There’s your grain of salt…
Unfortunately but not surprising at all.
At the pace AI is evolving, winners and losers are decided faster than the lawyers can keep up. So it’s shoot now, ask questions later, or be left behind.
It’s the part about this AI feeding frenzy I’m not crazy about.
There was a related story about OpenAI’s new Strawberry model, supposedly the model that the group behind the coup that briefly ousted Altman was so concerned about, because he’s known to move fast and cut corners on safety in order to win the AI war. Strawberry is the newly leaked long-inference model that supposedly has at least S1 reasoning capabilities.
And OpenAI’s CTO always awkwardly refuses to answer the question about their training data, because the answer is probably the same as here.
We should all be bothered by this crap.
Totally. But the toothpaste is outta the tube. No going back now. If anything, the sad part is that as legislation catches up, it merely stifles competition and allows for monopolization by the biggest players by default. It’s like the Oklahoma land rush, the Sooners and the Boomers: those who cheated the race end up the winners, and those looking to compete don’t have a chance.
@BrittCiampa - yep.
@allklier - yep.
This is the same as the age-old “you can’t use that on your showreel despite the fact that you did it, but we can use it on our showreel despite the fact that you did it…”
It’s exhausting just thinking about it…
(and that, sigh, is the strategy…)
Yes indeed.
Not AI-related, but related to regulations. An interesting tidbit has emerged about the CrowdStrike outage: there is discussion about why security apps like this continue to be allowed low-level API access, and how these apps actually represent new risks in their own right.
Apple has been trying to ban kernel extensions for some time now (painfully so, as many of us have experienced). Interestingly enough, MSFT isn’t allowed to do that due to a settlement in the EU that requires them to provide third-party vendors equal access to such APIs.
Obviously there’s much more complexity to it than that. But it’s interesting how well-intentioned regulations can sometimes cause as much harm as they try to mitigate.
If people were just nice to each other, we wouldn’t have to worry about that type of thing.