
What I think the post-bubble AI future looks like

The AI industry is a massive bubble (yes, it is), but what will it look like once that bubble pops? I have a theory.
By Parallax Abstraction
"They said I'd be running everything by now." Photo by Jezael Melgoza / Unsplash

AI in its current form is a massive bubble, and if you don't believe that by now, you're either a tech bro who should be shot into the sun or just deliberately burying your head in the sand. The US economy and stock market are being propped up almost entirely by it and by the handful of companies now stuck in an absurd financial circle jerk, trying to keep the bubble from popping just a bit longer while gutting their proven divisions to the bone in the process.

It is a business in search of a model, its costs and losses are skyrocketing along with its environmental impact, and only one company is profiting from it while all the others burn hundreds of billions of dollars, driven by the FOMO of idiot venture capitalists and big tech executives who have run out of new ideas and have quickly become some of the most dangerous people in society. Hell, the very term Artificial Intelligence that the industry is built on is a blatant lie, because none of this stuff thinks and, despite what dead-eyed sociopaths like Sam Altman claim, it never will. AGI (Artificial General Intelligence) is a complete myth to anyone with even a rudimentary understanding of computer science, and even if it did somehow happen, it wouldn't be within the lifetime of anyone reading this. It's not a question of if this bubble will pop, but when, and with the quantity of money tied up in it, the economic broadside could very well be the worst we've ever seen, and it will only get worse the longer this is allowed to continue.

But this post isn't meant to explain what the bubble is. Others have already done that far better than I could, and I recommend these two videos for good explanations if you don't already know.

Bubble bubble...

...toil and trouble.

The first thing you always see when you posit this position is a straw man claiming that since I think the current AI industry is in a bubble, I must really think AI will go away entirely. It's a desperate position taken by those who can't face the truth of their own failures.

AI has its uses and indeed, ones I take advantage of myself. Despite it never being right the first time and always needing a lot of cleanup and reprompting, I use it to help me write scripts to automate large tasks in my IT job. It still saves me a ton of time, even with the headaches. I've even generated the occasional banner image for this blog with it.

The Internet didn't go away when the dot-com bubble popped and indeed, it's what enables AI today. Crypto didn't go away after that bubble popped (though if only wishing made it so) and is now one of the many ways Trump is violating his oath of office to profit off the presidency. Video games didn't go away after the crash of the early 80s and are now the highest-grossing entertainment medium by far. Similarly, AI is not going to go away, but I think that post-bubble, how and where it's used is going to be very different from what it is now.

What's driving this bubble is Large Language Models (LLMs). All the big AI players are pitching LLMs as the "One AI to Rule Them All", claiming that they can or eventually will do everything for us, from research, to copywriting, to generating entertainment, to curing cancer and solving climate change and, of course, to running the autonomous robots that are coming any day now, that will serve all our needs and that we'll somehow all be able to afford despite the near totality of jobs they will eliminate. They are basing the entire future of the industry (and of humanity, if you listen to some of them) on winning the arms race to make the first "Everything AI".

The thing is, LLMs generally suck. They're incredibly inefficient, make constant mistakes, are trained almost entirely on stolen works and serve few practical purposes. Most companies that have tried to replace workers with them have had to reverse those plans, and 95% of corporate AI initiatives have generated zero return on investment. There is no rational reason to believe this is going to change any time soon. They're a gimmick and a largely bad one.

Don't get me wrong, there are some fields that LLMs are disrupting severely. The translation industry is undergoing a reckoning at the moment, entirely because of LLMs, as are some administrative tasks like corporate communications and summarization, even if the output all has the same awful tone and structure that you can tell is AI from a mile away. To claim that LLMs are all we will ever need is simply ludicrous, but that's where most of the investment is going because it's the sexiest and simplest concept to understand.

So if I don't think this is the future of AI, what's going to happen after the bubble pops? As I said, I don't think AI is going away, nor should it. It has useful purposes that I think can be of great benefit to many, and those uses don't require multi-gigawatt data centres or hundreds of billions in capex every year with no plan for profitability. As was often the end result of past bubbles, I think the future of AI is much more focused and much less generalized.

LLMs won't go away, though I do think that segment's advancement will slow significantly, and we could see the likes of Sam Altman, Satya Nadella, Elon Musk and Dario Amodei either removed from the industry or at least relegated to positions of far less significance, though I never rule out the ability of these slippery rodents to fail their way upward. I think LLMs and the "everything AI" idea will no longer be the driving force of the industry, but they will be the foundation on which a more sustainable one is built.

I think the ultimate future of this technology is not a generalized one, but one in which a number of focused, specialized, hyper-accurate models, and products built on them, drive it forward. These will be focused on specific industries like medicine, law, construction, design, finance, subsets of coding, governance, academia and more. Rather than trying to get good information from a giant model trained on the totality of the Internet, for better and worse, you'll get it from one that's tailor-made to provide the highest quality, most accurate answers possible for your particular industry and job. Your, say, medical AI tool won't rely on ChatGPT, but will use that foundational technology to develop and train its own model that's only about medicine, but is unmatched at that. All that training compute will be devoted to a much smaller and hyper-focused set of data and parameters, acquired legally and known to be a reliable source of truth. A rough sketch of what that could look like is below.
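To make that a little more concrete, here's a minimal, purely illustrative sketch of the "narrow model on curated data" idea using the Hugging Face transformers library. The base model name, corpus path and hyperparameters are placeholders I've invented for the example; this isn't how any particular product is actually built.

```python
# Purely illustrative: fine-tune a small base model on a curated, domain-only
# corpus (the "only about medicine, but unmatched at that" idea).
# BASE_MODEL, CORPUS and the hyperparameters are made-up placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "distilgpt2"  # stand-in for whatever foundation model you license
CORPUS = "licensed_medical_corpus.jsonl"  # legally acquired, vetted domain data

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-style models ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# The entire training budget goes into this narrow slice of data rather than
# a scrape of the whole Internet.
dataset = load_dataset("json", data_files=CORPUS, split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="medical-specialist-model",
        num_train_epochs=3,
        per_device_train_batch_size=8,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("medical-specialist-model")
```

The specific library doesn't matter; the point is that all of the compute and tuning effort is spent on a small, legally sourced, domain-only corpus instead of on trying to be everything to everyone.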

These products won't be free and will be more expensive than the heavily subsidized paid tiers all the current AI players offer. The days of free and cheap access to LLMs were always going to be finite and those will cost more too, but these specialized tools will cost even more. Businesses and institutions will pay that price to ensure that the answers are reliable and something they can attach their own reputations to. This higher asking price, combined with more focused training requirements, will also mean that these AI startups can actually turn a profit, because instead of trying to do everything half-assed at best, they'll be doing one thing whole-assed. It will be the rebirth of the industry into something realistic, achievable and, more importantly, sustainable.

Like any disruption of an industry, this will still mean that some jobs will be lost and that some career paths may be rendered obsolete or have their overall staffing needs reduced. But it's not, and never was, going to be a situation where most of humanity finds itself out of work and Universal Basic Income becomes the only way for most to survive. For most jobs, AI will ultimately become a valuable way to augment the existing productivity and capabilities of humans, not something that replaces most of them. In some fields, it's already doing that, and I think this will increase as specialized tools and models become the norm. In most cases, it won't replace your job, but it will make you faster and more competitive at it if you use it well.

This is a much better outcome because it means the technology can be viable, profitable and worth continued investment. I've never hated the concept of AI; I just think it needs to be developed and used responsibly, not in the ludicrous, billionaire-driven feats of hubris we're seeing now. What comes out of this bubble popping could show the true promise and benefit of the technology. I just hope it pops soon, as even in its current state, the fallout could be more than the economy can bear.

My ultimate hope is that big tech will finally learn something, stop making these insane bets and figure out how to grow and thrive in a sane and sustainable way. I think there's little chance of that and they'll be chasing the next shiny thing soon enough, but that's a bubble for another time.

What do you think the future of AI is? Leave a comment and let me know!
