I think AI/ML/LLM tools are a world-changing technology. But not in a Matrix/Terminator/Gutenberg type way (yet, at least). I just came across a Reddit post where someone asked how ChatGPT had dramatically improved people's lives, and this was the first answer:
“It’s let me see the world as a totally blind person. I use it constantly for image recognition for both daily living and enjoyment. I was able to enjoy Christmas cards because AI was able to read the handwriting on them.”
Regardless of whether the businesses of AI are a bubble, in just a year or so of existence this technology has profoundly changed this person’s life. To dismiss this technology’s impact on the world because one sees a lot of billboards for it in The Valley seems like what those in academia call “a hot take.”
(BTW, I can attest to ChatGPT reading Xmas card handwriting: we got one with writing we could barely decipher and ChatGPT figured it out for us.)
Having worked inside Amazon, watched what is going on, and occasionally had conversations with other ex-Amazonians who have similar observations, I can say this is definitely true, particularly at a company that didn’t use to be that way, and that was in some ways the poster child of not being that way. Not a Master’s thesis, but definitely good empirical first-hand data.
You can observe the same on a smaller scale with apps we use every day (apps including Matchbox shaders and other tools). They often start with a great idea, built by practitioners solving real problems and sharing those solutions with like-minded users. Then after a while the audience grows, and as it does, the dilemma for the well-meaning practitioner is whether to switch careers and become a software app business, remain in the original job as a practitioner and let the app slowly go to shit, or hand it off to someone who loves being in the software app business but no longer has an emotional connection to the original problem or user base, other than making a profit. I can give you a very long list of apps that fall into that bucket.
To be fair, economists in essence are all opinion authors. It’s the science of educated guessing, and a whole lot of the time all that education hasn’t resulted in very good guessing.
The final question he poses, though, is not a correct analogy in my opinion. ChatGPT and other LLMs can provide a slightly blurred yet summarizing image of sharp but vastly distributed image fragments from all over the internet.
Case in point: it can provide a code snippet that you can start with and slightly modify to create a Matchbox, whereas you would need to read, watch, and understand a lot of material to start from scratch.
I used to work with someone who was not very technical but was a brilliant designer, and he would constantly try to write code for websites (for more advanced designs) by Googling and randomly copying snippets from Stack Overflow. But half of them were pure JavaScript and the other half were jQuery-style, and it was this terrible jumble of code. He threw mud at a proverbial wall long enough until something stuck and kinda worked momentarily, but it would never scale.
More than once I had to go in and clean it all up and put it back to basics with proper technique and make it work for all the use cases.
Is it nice that someone can kludge together something from ChatGPT code snippets? On some level, yes. On the other hand, it devalues the mastery of that knowledge, and that has unintended consequences. I think there is actually value in “you need to read, watch and understand a lot of material.” You don’t just get a result; you can repeatably make that result happen and also improve it.
We live in this instant-gratification, use-once-and-throw-away culture that doesn’t value things that take time and effort. I appreciate the short-term gratification; I’m not sure I’m a fan of the bigger picture.
Makes me think of just picking up a batch setup from someone. My wife, who has never used Flame in her life, could render something if I set it up. Hell, she might even be able to change the color of a color source. But at a certain point she wouldn’t even know where to look to figure out why something was or wasn’t working, or what she could do to fix it. She could ask ChatGPT or whatever what’s wrong, but she likely wouldn’t know the right language to frame the question so the AI would surface what she really needed to know. At the end of the day, she’d spend quite a bit of time on trial and error, whereas someone who knew the software could just fix it immediately.
Liked the Ted Chiang article a bunch.
He’s got a great take on tech in general.
This book was one of my faves I ran across last year: Exhalation: Stories.
With regard to AI/ML/datasets, I personally believe that what will make these tools more than a novelty is when they help us curate our own libraries.