Vimeo just updated their terms and made several AI-specific changes. They linked to this statement from their CEO.
Under these new terms the oft-used Vimeo90K dataset would be in violation.
@mux - it was inevitable, and we did it to ourselves…
Not sure if this will link, but . . .
Some interesting data and perspectives on the energy problems of AI. Now this relates to ChatGPT and general purpose LLMs, not the video variety. But one can assume that there are reasonable parallels.
At current standards, the world's energy grid won't be able to meet the expected demand for AI products.
If we observe the distribution of issues AI will face in the foreseeable future, it's a very long tail, particularly given the industry's challenges in meeting that demand.
Assuming the status quo remains the same technologically, we might soon face a real GPU shortage. According to Meta, in a future where most humans use LLMs for just 5% of their day, we would need one hundred million NVIDIA H100 GPUs to serve GPT-4 for that purpose at an acceptable latency of 50 tokens/second, and this assumes a very short average sequence length.
NVIDIA is expected to deliver a total number of GPUs in 2024 in the low eight figures, an order of magnitude away from that scenario, and this also assumes that no geopolitical event takes place.
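A quick back-of-envelope check of that claim (the population and uniform-usage inputs below are my own assumptions, not figures from the article) suggests it implies each H100 sustaining on the order of a couple hundred tokens per second:

```python
# Rough sanity check of the "100 million H100s" claim above.
# The inputs are illustrative assumptions, not figures from the article.
users = 8e9              # approximate world population
usage_fraction = 0.05    # "5% of their day" spent receiving LLM output
tokens_per_second = 50   # per-user generation rate quoted above
gpus = 100e6             # the claimed one hundred million H100s

# Average fleet-wide token rate if usage were spread evenly over the day.
fleet_tokens_per_s = users * usage_fraction * tokens_per_second

# Sustained throughput each GPU would need for the claim to balance.
per_gpu_tokens_per_s = fleet_tokens_per_s / gpus

print(f"fleet: {fleet_tokens_per_s:.1e} tokens/s")      # ~2e10
print(f"per GPU: {per_gpu_tokens_per_s:.0f} tokens/s")  # ~200
```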
Google is searched 8.5 billion times per day, and a GenAI-enhanced search could cost an average of 9 Wh. Assuming at least 60% of all searches will be based on GenAI generations, that gives a total annual energy demand for the service of 17 TWh, or a 2 GW data center.
For reference, xAI's upcoming 100,000 NVIDIA H100 GPU cluster, the largest in the world, will require 140 MW to run, so we are talking about a data center 14 times larger than the biggest we intend to build in 2024.
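Those figures check out with simple arithmetic; a minimal sketch that just re-derives the numbers quoted above:

```python
# Re-deriving the search-energy figures quoted above.
searches_per_day = 8.5e9   # Google searches per day
genai_share = 0.60         # assumed share served by GenAI
wh_per_search = 9          # average energy per GenAI search, in Wh

wh_per_year = searches_per_day * genai_share * wh_per_search * 365
twh_per_year = wh_per_year / 1e12
avg_power_gw = wh_per_year / (365 * 24) / 1e9  # sustained power draw

print(f"{twh_per_year:.1f} TWh/year")      # ~16.8, i.e. the ~17 TWh above
print(f"{avg_power_gw:.1f} GW sustained")  # ~1.9, i.e. the ~2 GW data center
print(f"{avg_power_gw * 1000 / 140:.0f}x a 140 MW cluster")  # ~14x xAI's build
```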
But if envisioning the energy constraints from a GPU perspective is already daunting, things get much worse once we look at expected global AI demand and, crucially, at the next generation of frontier AI.
Assuming the current compute and memory cost complexity of models continues, which is mostly quadratic in sequence length, the estimate we did in the previous segment may fall short. Very short.
While LLMs have conquered memorization, their reasoning capabilities are very, very modest. Most people consider search-augmented LLMs the solution. These LLMs explore the solution space instead of directly responding to your request, generating up to millions of possible responses before settling on one.
This paradigm not only increases average token production by orders of magnitude, but will probably also require verifiers: additional models that validate each thought the generating model produces during its search for the solution.
If this is the future of AI, then the numbers we saw above will indeed fall short, with some requests far exceeding the 9 Wh mark we discussed earlier.
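To make that concrete, here is a toy illustration (the candidate count and verifier overhead are invented for the example, not taken from the article) of how search-style inference multiplies the per-request energy, on top of attention cost growing roughly with the square of the context length:

```python
# Toy numbers only: how candidate search inflates the ~9 Wh/request figure.
baseline_wh = 9          # assumed energy of one direct response (from above)
candidates = 1_000       # hypothetical responses explored per request
verifier_overhead = 0.5  # hypothetical extra work to score each candidate

search_wh = baseline_wh * candidates * (1 + verifier_overhead)
print(f"~{search_wh / 1000:.1f} kWh per search-augmented request")  # ~13.5 kWh

# Each pass also gets pricier as the "thought" chain grows, since attention
# compute scales roughly quadratically with context length:
for context in (1_000, 2_000, 4_000):
    print(f"{context} tokens -> ~{(context / 1_000) ** 2:.0f}x the attention cost")
```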
Nonetheless, according to the International Energy Agency, data center demand in the US, the EU, and China is expected to grow to approximately 710 TWh annually by 2026. For reference, that's almost as large as France and Italy's combined energy consumption in 2022 (720 TWh).
Full article here…
That's not only environmentally unsustainable. Once the investors are done paying for the party trick, all of this has to get paid out of the rate for the work. I just don't see the ROI for businesses materializing there. They would have to offset it with an equally valuable efficiency improvement, or pass the cost on to consumers, who are strapped to the max as it is.
When the glove doesn't fit… Shut the servers down, pop the bubble and get back to business.
This is really interesting and not surprising! I've been reading "The Ego Tunnel" by Thomas Metzinger (awesome book by the way) and he says this while discussing theories of consciousness.
…which is effectively the proving ground of natural selection. Or in more familiar words…
"One of God's own prototypes. A high-powered mutant of some kind never even considered for mass production. Too weird to live, and too rare to die."
We can't stop here. This is bat country!
Turbocharging global warming to make emails people won't read and videos people won't watch. Awesome.
@andy_dill - oh Andy…
There's no such thing as global warming…
It's fake news…
Made by AI…
Oh…
Wait…
Expanding a bit on what I wrote at the bottom of my last post, because it's worth considering as we weigh the chances of AI taking a meaningful foothold.
So we all know that AI is expensive to build and more importantly to run:
You have to build a suitable training set. No small undertaking. And it's getting more expensive as the early "scrape the Internet" efforts no longer meet legal requirements. Finding enough cleared material is more laborious and expensive.
You have to train the model, which is a massive compute exercise both in terms of equipment and power consumption. There's a separate but not often talked about part of training: the training material needs to be categorized and tagged, which is a human data-entry effort undertaken by armies of low-wage workers overseas.
You have to parse prompts and run inference, a separate massive compute exercise and cost.
So how are we paying for all this, once the investors move on?
All this incremental cost has to be covered by one of two ways:
A: The price of the final product goes up and is paid for by the customer.
B: Efficiency gains can be attained - same product & price to customer, for less effort (AI vs. human effort)
It's pretty clear that in the current economic environment, where prices have risen dramatically since the pandemic, there is very little slack left in the system for even higher prices - whether that is business customers or end consumers.
That means efficiency gains are the primary source to pay for this.
This falls into two subcategories - work that existed but AI can do faster/better, and work that was previously impossible for human workers to accomplish (very large data analysis).
For this to work, the final incremental cost of AI has to be less than what it would cost humans to do the same job. The first question is: with the huge cost of AI, is that even possible, or are the numbers already out of whack? Workers already don't get paid much, so there is not much to subtract from, and you have to stay above zero - this varies by industry.
If the math works, and human work gets replaced by AI within the same cost envelope, that might work for a while. However, in the past the market has often come back and said "hey, you are using new and easier tools - we won't pay the same price for the product".
We've seen this price pressure in countless cases - when professional photography was devalued because everyone had a camera, for example. Or the various edit, color, and VFX tasks that the intern at the client's office can now do in-house. The scratch VO, etc.
So the bar is not what the client is willing to pay today, but what he's willing to pay tomorrow. And is that still enough to cover all the costs of AI?
And all of that assumes that the results from AI are acceptable beyond the "wow!" phase. The jury is out on that one. If the output requires re-work or patching to close quality gaps, there's less efficiency saving left to pay for AI, and the math gets tighter.
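As a purely hypothetical illustration of that break-even logic (every number here is invented, just to show how re-work eats into the margin):

```python
# Invented numbers: break-even between human cost and fully loaded AI cost.
human_cost_per_task = 40.0  # loaded cost of a person doing the job, in $
ai_cost_per_task = 12.0     # inference + training allocation per task, in $
rework_fraction = 0.25      # share of AI outputs needing a human fix-up pass
rework_cost = 30.0          # cost of each fix-up pass, in $

effective_ai_cost = ai_cost_per_task + rework_fraction * rework_cost
savings = human_cost_per_task - effective_ai_cost
print(f"effective AI cost ${effective_ai_cost:.2f}, saving ${savings:.2f} per task")
# That saving is also the ceiling on how far the client can push the price
# down tomorrow before the AI route stops paying for itself.
```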
Of course, this is all based on the parameters of current AI solutions. As so many AI fans say, this will all be fixed in six months. Well, if this were a linear problem, that might be a reasonable statement. But we know that this is not a linear problem. It may take longer, or there may be more complex trade-offs in getting the cost down (smaller specialized models, but also more fragmented audiences).
So many questions remain…
We have seen a few select examples where people posted prices for larger AI video projects, and the numbers weren't off the chart. But it's also not clear whether those numbers were discounted / investor-supported / we-need-to-find-a-market pricing, or actual, realistic, fully loaded cost (training allocation, full-cost inference, etc.).
It would be really insightful to get more information on the cost of tangible tasks. We see the overall industry power bills, but it's hard to map that to what this or that task would cost in the long run.
If anyone knows of any real accounting examples that pass the sniff test, please share.
I had the IRS automated system hang up on me, and I don't have an accent. It kept giving me two choices, but I didn't want either of those things, so I asked for a real person, an agent. It kept saying "We seem to be having a communication problem." and then it hung up after about 4 or 5 attempts. This is a nightmare when you need to get help and there is no way to get through.
Or it's the perfect bureaucracy.
A great opportunity to accurately use the term Kafka-esque
So true!! That was what I was thinking also as it hung up on me.
I attended a recent webinar hosted by VES, with representatives from Autodesk, Foundry and SideFX. It was a presentation of each developer's tools for GenAI.
Each essentially argued that AI would be used to provide tools that support and promote getting work done faster.
However, at a certain point in the live session, the question of ethics came up. Basically, each of them responded about the privacy of the training data. But my opinion is that it is clear they will increasingly implement AI in their products, whether users want it or not.
So my point would be: taking that "if" seriously and projecting it to "when", how will we prepare for this? Would it be a case of opening up knowledge more, right here in this forum, to share effective tools? Strictly for use in Flame?
I don't know if I've gone beyond the topic of the discussion, but it's something related that's been making me think.
Sorry if my English is confusing.
This technological epoch is like those QR-coded coffee pods.
Nobody wants to find land in an agreeable climate, cultivate plants, wait for the earth to orbit the sun (sorry flat earthers!), harvest fruit, process the yield, establish logistics, filter water, pipe it to a kitchen, figure out fuel for energy transfer, heat the water, fashion some kind of receptacle, cut down trees to create some kind of stirring device…
They just want to push a button and get that hot brown liquid in a cup.
With granulated sugar crystals…
And liquid squeezed from nuts…
And while the masses gorge themselves on this push button magnificence, there is also a negligibly small band of sophisticates that want to drink coffee made from beans that have emerged from another animal's bum.
So never fear, our peculiar specialties will always be in demand…
Looks very interesting. If it weren't that far, I'd definitely go.
I think the whole thing is virtual