So you think AI isn't going to take your job?

I’ll be the dumb guy in public: can you explain this? Because I’m not quite getting it. I web-searched Maxwell’s demon and understand it to be a thought experiment about how one could theoretically lower entropy in a closed system, but I can’t follow that through to the conclusion you’ve got, and I would very much like to understand it.

Which is all to say I’m genuinely curious and not trying to argue or any of that crap, but I’m so internet-pilled I don’t even know if I’m coming off like an asshole in asking all this. Thank you.

I would, as they say in Starship Troopers, like to know more.

2 Likes

Let me take a shot at putting it into perspective. @cnoellert will have to confirm.

The first law of thermodynamics can be considered a more traditional scientific law. In simplified form, it holds that the sum of energy in a closed system remains constant, though it can be converted between different forms of energy. As a practical sidebar: this is why your air conditioner works the way it does.

That first law is constrained by the second law, which restricts certain transfers of energy (heat never flows spontaneously from cold to hot). However, it is not a traditional law derived from theory, but rather an empirical observation that has held true and has been accepted as an axiom for more than 200 years.
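For reference, the textbook statements of both laws (standard physics, nothing specific to this thread):

```latex
% First law: the internal energy of a closed system changes only through
% heat flowing in and work being done; energy is conserved overall.
\Delta U = Q - W

% Second law (Clausius form): the entropy of an isolated system never
% decreases, i.e. heat does not flow spontaneously from cold to hot.
\Delta S \ge 0
```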

Maxwell’s demon is a thought experiment that appears to blow a hole in the second law, despite its endurance across many years and many bright minds.

So the comparison @cnoellert drew can be seen this way: the AI devotees believe that with just more data, more compute power, more something we haven’t yet formulated, mankind can outsmart nature, as reflected in the ability to build a general AI solution that will equal or possibly surpass nature’s creation.

That is a belief, an empirical observation based on the progress made in neural networks and other AI disciplines since the ’70s. Think of it as the equivalent of the second law of thermodynamics: not a known, hard-and-fast principle, but an empirical belief.

So Maxwell’s demon would then be the thought exercise suggesting that all those AI diehards may be wrong but just don’t want to listen. And that they are willing to burn down the planet and displace most workers in pursuit of upholding their empirical beliefs.

You simply ask the question: but what if you’re wrong? What cost are you willing to accept before you submit to the fact that nature may remain superior to mankind?

Which is itself a recursive question: could a system ever produce something superior enough to replace itself?

3 Likes

Answer To The Ultimate Question

4 Likes

haha absolute classic. RIP Douglas Adams.

3 Likes

For sure, @andy_dill, although @allklier did a good job of explaining both the rudimentary thermodynamics and some of what I was considering when I scribbled my original post down.

One might distill it further down, though, to one very simple idea: everything in the universe has a cost. Nothing comes for free. Even if there were an intelligence one day that could take heat, the highest-entropy form of energy, and lower the entropy of a system by using a clever trapdoor to separate the faster-moving molecules from the slower ones, the energy that intelligence would need to expend discarding its records of which molecules are fast and which are slow would far exceed the benefit gained by the sorting itself. In the effort to outsmart nature, you’ve wasted as much energy as you’ve saved. At best the intelligence’s victory is Pyrrhic (if you believe the second law, as I do).
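To put a rough number on that bookkeeping cost: Landauer’s principle (the standard resolution of the demon paradox) says erasing one bit of information at temperature T dissipates at least kT ln 2 of heat. A back-of-envelope sketch in Python; the mole of gas is just an illustrative assumption:

```python
import math

# Landauer's principle: erasing one bit of information at temperature T
# dissipates at least k*T*ln(2) joules of heat.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

energy_per_bit = k_B * T * math.log(2)
print(f"Minimum cost to erase one bit: {energy_per_bit:.3e} J")  # ~2.87e-21 J

# The demon must record (and eventually erase) at least one bit per
# molecule it sorts, so the bookkeeping cost scales with the gas itself.
molecules = 6.022e23  # one mole, an illustrative assumption
print(f"Erasing one bit per molecule: {energy_per_bit * molecules:.0f} J")
# ~1729 J, the same order as the thermal energy of the gas being sorted
# (3/2 RT is about 3.7 kJ for a mole at 300 K)
```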

AI is ultimately a good analog to Maxwell’s demon because it makes similar promises based on similar kinds of theories, all while trying to break similarly steadfast beliefs and practices: that with a small expenditure of energy, a digital trapdoor running on an Nvidia GPU can sort the high-quality information from the low-quality information and overcome the second law, reversing the direction of entropy (which, incidentally, is just as steadfast in information theory). But information’s entropy depends on shared mutual context between the sender and the receiver. The encoder needs to send bits the decoder understands and can decompress into the original intention.

For example, a 16-bit EXR frame of noise contains a massive amount of information but also has high (information) entropy. Think of it as the heat of information: it costs a lot and tells us little in and of itself. It’s chaotic and random and has no structure. But take a 16-bit EXR of an ARRI test chart: it contains the same number of bits, yet those bits are arranged in such a fashion that the frame has significantly less entropy, provided the viewer understands what they are looking at. That act of understanding is a cost. It’s the sum of all the energy it took for the viewer to build that level of understanding. Think about what you had to do to learn the difference between noise and a test chart. Think about what it took to know how to use them and how to interpret them. Think of how much energy was expended finding uses for noise, and of the energy expended by the person who taught you how to use it.
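You can see the same idea numerically. A small Python sketch (using 8-bit values instead of a real EXR, just to keep it self-contained): uniform noise maxes out the Shannon entropy of its histogram, while a chart-like image occupying exactly the same number of bits comes in far lower.

```python
import numpy as np

def shannon_entropy_bits(img: np.ndarray) -> float:
    """Shannon entropy of an image's value histogram, in bits per pixel."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)

# Pure noise: every 8-bit value equally likely -> entropy near 8 bits/pixel.
noise = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)

# A crude "test chart": eight flat grey patches -> far lower entropy,
# even though it occupies exactly the same number of bits on disk.
chart = np.repeat(np.linspace(0, 255, 8, dtype=np.uint8), 512 * 512 // 8)
chart = chart.reshape(512, 512)

print(f"noise: {shannon_entropy_bits(noise):.2f} bits/pixel")  # ~8.00
print(f"chart: {shannon_entropy_bits(chart):.2f} bits/pixel")  # 3.00
```

And even the chart only reads as low-entropy to a decoder that shares context with the encoder, which is exactly the cost of understanding described above.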

AI is trying to achieve the same level of mutual understanding that humanity has built from the dawn of the species up to this point in time. Think of what the totality of that energy expenditure has been, the sum total cost for humanity to reach this level of mutual, shared understanding and experience. It’s an unfathomable amount of energy, and as I write this there is little doubt that the cost for AI to reach that marker is too high and its mode unsustainable at the current level of technology. It violates the second law and goes back for seconds and thirds: the system continually expends more energy/information than it saves, no matter how you twist and turn and try to spin it. Most importantly, that cost isn’t theoretical. That expenditure exists in reality, and it is a cost on every facet of human life.

But the idea of AI, just like the demon, is so seductive. As @allklier surmised, the people who don’t believe in the second law (or choose to ignore it), the people who see AI as the demon that can finally separate hot from cold and create something better than nature, believe it so firmly, with every fibre of their being, that they will literally spend every dollar, burn every bit of coal, exhaust every natural resource, and put every human out of a job just to prove it. Like religious zealots, they believe the technological rapture is nigh: that just around the corner some incredible advancement, created by humans with the help of fledgling AI, will rocket through the ceiling of the second law and allow AI to grow by orders of magnitude so it can solve the exact problems its growth is creating.

Sigh.

Blind faith in the name of science and technological advancement (or any other “ism”) is as nearsighted and backwards as the inquisitions of old. Pro-AI tech folks shouldn’t be taking bets on a Turing problem like they’re going all-in on black at the Caesars Palace roulette wheel.

10 Likes

…and thanks for asking, @andy_dill. I hope my diatribe wasn’t too boring a read.

1 Like

Yo… dig. In order for something to go from mutation to adaptation, that adaptation needs to be able to pay for itself. It’d be rad to see more of the electromagnetic spectrum; trippy stuff. Unfortunately, it doesn’t help in finding wild strawberries.

3 Likes

Another way of looking at the question: What’s the balance between pursuing useful improvements and chasing infinite perfection?

You can move along the curve and improve things. Sometimes small steps, sometimes big steps. But if you are moving along a logarithmic curve, you’ll reach infinite cost before you reach perfection. Assuming there’s a cost to the effort, you will have to exert ever greater resources for ever smaller gains.
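A toy illustration of that curve (my own sketch with made-up numbers, treating quality as the log of cumulative effort):

```python
import math

# Toy model of diminishing returns: each 10x in cumulative effort buys
# the same fixed increment of "quality", so the marginal gain per unit
# of effort keeps shrinking and perfection is never reached.
for effort in [1, 10, 100, 1_000, 10_000, 100_000]:
    quality = math.log10(effort + 1)
    print(f"effort {effort:>7} -> quality {quality:.2f}")
# effort       1 -> quality 0.30
# effort      10 -> quality 1.04
# effort     100 -> quality 2.00
# ...every additional unit of quality costs ten times more than the last
```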

Two other popular ultimate chases come to mind, pursued by some of the same mindsets:

Colonizing Mars and cracking ultimate longevity.

Traveling to the moon came at a relatively moderate cost and yielded significant insights: the first time on a solid surface beyond the Earth, a view of Earth from afar. Totally worth it. Traveling to Mars (beyond an unmanned probe) is a bigger effort (nine months one way vs. a few days) at diminishing returns. Colonizing Mars is a whole step-function jump in logistics and problem solving. Only maximalist thinking would ever consider it at scale, and even then it would benefit only a privileged few at the expense of a lot of problems that could be solved here. It becomes a solution for the 0.1 percenters, the current class of billionaires.

Figuring out how to live forever (a fascination of many movie storylines) is less costly as an effort. However, it’s more costly to the social environment. If it’s reserved for the same 0.1 percenters, it’s an exercise in elitism. And I can’t say the current billionaire class has aged well. Musk, Bezos, and Zuckerberg all did some amazing good in their younger years and have largely become isolated and toxic in their older ones. Imagine how they would be at 250 or 500. It doesn’t seem like an aspiration for the rest of us. As of now, we have the assurance that even the worst people have a defined expiration date.

If you open the idea to everyone, letting everyone live for however long they like, that would be more democratic, but it would stop evolution in its tracks, or replace it with a type of evolution that may not be favorable. Again, that notion that we’re superior to nature. It would upset so many aspects of our life structures that exist for good reasons.

In short: those goals and aspirations are interesting thought experiments. But they had better remain just that, thought experiments that help us better understand why the world is the way it is and how to improve it, rather than escape it or pollute it with some over-processed ego.

Always Be Curious, Always Be Humble. And, if you will, trust science.

2 Likes

believe science…

Reminded me of this video here

1 Like

Believe that science is systematically learning and pushing forward. Question the motivations of those backing science, and the directions in which they are pushing.

2 Likes

I just think it’s a funny way to string together two words that cause such a commotion:

faith / evidence:
believe / science.

And it was 05:00; I had been kept awake by other creatures…
:rofl:

English is such an absurd language, full of diametrically opposite meanings attached to single words. Sanction / sanction, for example.

2 Likes

I am rather good at finding wild strawberries. I love those tiny little things.

Thank you @cnoellert and @allklier for breaking it down. I enjoyed reading it all.

2 Likes

It was a cheap linguistic trick I pulled… my bad

1 Like

Often when agreeing with someone, I’ll say, “Yeah, no,” haha.

2 Likes

Hilariously, “yeah, no” means disagreement here, e.g. “Yeah, no. We’re not gonna be able to make it to the concert in time.”

“No, yeah,” otoh…

hehe…

Reactions were decidedly mixed online. Most of them not favorable.

From a post on Medium

Looked up some of the ads:

Someone on this forum knows which shop tracked all the logos in on those Coke spots. I’m sure of it.

2 Likes

Apparently AI has not been trained on snow lanes. There are five clear lanes in the snow (as opposed to four), and the trucks are in none of them :slight_smile:

And in a world first, it’s snowing straight down underneath trucks on a highway…

1 Like

On first view… I definitely thought it looked like Runway ML, but that some poor soul got to paint out any gobbledegook text and track in the real Coca-Cola text. And from my experience… they definitely got result shots with trucks going backwards that they had to reverse.

2 Likes