What could have been
>There isn’t a single day where I don’t have to deal with software that’s broken but no one cares to fix
Since when does this have anything to do with AI? Commercial/enterprise software has always been this way. If it's not going to cost the company in some measurable way, issues can get ignored for years. This kind of stuff was occurring before the internet existed. It boomed with the massive growth of personal computers. It continues today.
GenAI has almost nothing to do with it.
> What could have been if instead of spending so much energy and resources on developing “AI features” we focused on making our existing technology better?
This is a bit like the question "what if we spent our time developing technology to help people rather than developing weapons for war?"
The answer is that the only reason you were able to get so many people working on the same thing at once was the pressing need at hand (that "need" could be real or merely perceived). Without that, everyone would have their own various ideas about what projects are the best use of their time, and would be progressing in much smaller steps in a bunch of different directions.
To put it another way - instead of building the Great Pyramids, those thousands of workers (likely slaves) could have all individually spent that time building homes for their families. But, those homes wouldn't still be around and remembered millennia later.
I wonder about the world where, instead of investing in AI, everyone invested in API.
Like, surfacing APIs, fostering interoperability... I don't want an AI agent, but I might be interested in an agent operating with fixed rules, and with a limited set of capabilities.
Instead we're trying to train systems to move a mouse in a browser and praying it doesn't accidentally send 60 pairs of shoes to a random address in Topeka.
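For what it's worth, here's a minimal sketch (Python, with made-up action names and a made-up rule format - purely illustrative) of the kind of agent I mean: the rules are plain data and the capabilities are a hard whitelist, so the worst case is "no rule matched", not 60 pairs of shoes on their way to Topeka.

    # Illustrative sketch of a "fixed rules, limited capabilities" agent.
    # Every action must come from an explicit whitelist, and the rules are
    # plain data, not model output. All names here are hypothetical.

    ALLOWED_ACTIONS = {"get_order_status", "send_status_email"}

    RULES = [
        # condition -> action; evaluated in order, first match wins
        {"when": {"event": "order_delayed"},
         "do": {"action": "send_status_email", "template": "delay_notice"}},
    ]

    def run_agent(event, actions):
        """Dispatch an event against the fixed rule table.

        `actions` maps action names to callables; anything not in
        ALLOWED_ACTIONS is refused outright, so the agent can never
        improvise a capability it wasn't given.
        """
        for rule in RULES:
            if all(event.get(k) == v for k, v in rule["when"].items()):
                action = rule["do"]["action"]
                if action not in ALLOWED_ACTIONS:
                    raise PermissionError(f"action {action!r} is not whitelisted")
                return actions[action](event, rule["do"])
        return None  # no rule matched: do nothing rather than guess

The "intelligence" lives in whoever writes the rule table, and the blast radius is bounded by the whitelist.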
While I'm somewhat sympathetic to this view, there's another angle here too. The largesse of investment on a vague idea means that lots of other ideas get funding, incidentally.
Every VC pitch is about some ground-breaking tech or unassailable moat that will be built around a massive SAM; in reality early traction is all about solving that annoying and stupid problem your customers hate doing but that you can do for them. The disconnect between the extraordinary pitch and the mundane shipped solution is the core of so much business.
That same disconnect also means that a lot of real and good problems will be solved with money that was meant for AGI but ends up developing other, good technology.
My biggest fear is that we are not investing in the basic, atoms-based tech that we need in the US to not be left behind in the cheap energy future: batteries, solar, and wind are being gutted right now due to chaotic government behavior, the actions of madmen who are incapable of understanding the economy today, much less where tech will take it in 5-10 years. We are also underinvesting in basics like housing and construction tech. Hopefully some of the AI money goes to fixing those gaping holes in the country's capital allocation.
More or less, you could say similar things about most of the crypto space too. I think maybe it's because we're at the point where tech is more than capable of doing a lot of things, but they're just not easy to do out of a dorm room and without a lot of domain knowledge.
>What could have been if instead of spending so much energy and resources on developing “AI features” we focused on making our existing technology better?
I think we'd still be talking about Web 3.0 DeFi.
I've been watching this my whole life. UML, SOA, Mongo, cloud, blockchain, now LLMs, probably 10 others in between. When tools are new there's a collective mania between VCs, execs, and engineers that this tool unlike literally every other one doesn't have trade offs that make it only an appropriate choice in some situations. Sometimes the trade offs aren't discoverable in the nascent stage, a lot of it is monkey-see-monkey-do which is the case even today with React and cloud as default IMHO. LLMs are great but they're just a tool.
The author doesn't seem to appreciate that investors aren't incompetent, but malicious.
Investing in 100 years of open-source Blender does not give them any fraction of a monopoly.
Even if scientists present hundreds of proposals for computation (optical, semiconductor, ...), they will specifically invest in technologies that are hard to decentralize: growing monocrystalline ingots, processes reliant on dangerous chemicals, and so on. There is no money in easily decentralizable processor manufacture, because it could easily be duplicated, so proposals to pursue it would basically be equivalent to begging investors to become philanthropists. Quite a naive position.
It's in the interest of the group to have quality software, manufacturing technologies, ... so the onus is on representatives of the group of taxpayers to invest in areas investors would prefer to see no investment in (even if someone else invests it). Perhaps those "representatives" are inept or malicious or both.
There is real value being created by creating interactive summaries of the human corpus. While it is taking time to unlock the value, it will definitely come.
This raises an interesting question.
The amount of money that's been spent on AI related investments over the past 2-5 years really has been astonishing - like single digit percentage points of GDP astonishing.
I think it's clear that there are productivity boosts to be had from applying this technology to fields like programming. If you completely disagree with that statement, I have a hunch that nothing could convince you otherwise at this point.
But at what point could those productivity boosts offset the overall spend? (If we assume we don't get to some weird AGI that upturns all forms of economics.)
Two points of comparison. Open source has been credibly estimated to have provided over 8 trillion dollars of value to the global economy over the past few decades: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4693148 - could AI-assisted programming provide a similar boost over the next decade or so?
The buildout of railways in the 1800s took resources comparable to the AI buildout of the past few years and lost a lot of investors a lot of money, but are regarded as a huge economic boost despite those losses.
Humans are fundamentally irrational. Not devoid of rationality, but not limited by it. Many social phenomena are downstream from that fact.
Humans have fashions. If something is considered cool, many people start doing that thing, because it automatically gives them a bit of appreciation from most other people. It is often rational to follow a fashion and reap the social benefits it brings.
People are bad at estimating probabilities. They heavily discount the future, and want everything now, hence FOMO. At the same time, they tend to believe in glowing future prospects uncritically, because it helps build social cohesion and power structures.
This is why fads periodically flush all over our industry, and our society, and the whole civilization. And again, it becomes rational to follow the trend and ride the wave. Say the magic word (OOP, XML, Agile, Social, Mobile, Cloud, SaaS, ML, more to come), and it becomes easier to get a job, press coverage, conference invites, investments.
Then the bubble deflates, the useful parts remain (often quite a bit), the fascination, hype, attention, and FOMO find a new worthy object.
So companies add "AI features" partly because it's cool (news coverage, promotions), partly because of the FOMO (uncertainty is high, but what if we'd be missing a billion-dollar opportunity?), partly because of social cohesion (following fashion is natural, being a contrarian may be respectable, but looking ignorant is unacceptable). It's not about carefully calculated material returns on a carefully measured investment. It may look inane, but it's not always stupidity, much like sacrificing some far-future prospects in exchange for stock growing this quarter is not about stupidity.
I've been using an app recently that added a bunch of AI features, but the basic search is still slow and often doesn't work. Every time I open it, I kind of brace myself, but it still disappoints me.
It feels like more and more products are focused on looking impressive, when all I really want is for the everyday features to just work well.
More than anything, I want to get back the era when the users were the customers, not the product.
ZIRP->AI/enshittification is the one-two punch combo that I think is going to devastate our economy for 50 years or more. We have an entire generation of executives, financiers and government that have only ever operated in an era of free money.
They've never had to generate a real return, create a product of real value, etc. This wave-of/gamble-on AI slop just shows that they don't even know what value looks like. We've operated for ~40 years on a promise of...something.
A lot of how I use AI is to assist me in building the software manually. I focus on one function and ask it to fix or implement it. That's a good way to use AI. But if you mean using AI to improve existing systems, I also think that's being done a lot. For instance, you know Krita, the KDE drawing program? They naturally added a way to prompt image generation based on your initial doodles, which makes a lot of sense.
> organizations such as Blender, Godot, or Ladybird and...
So you want an open source project to really succeed? It's not money, but real passion for the work.
Write better documentation (with realistic examples!) and fix the critical bugs users have been screaming about for over a decade.
Sure, fine, pay a few people real wages to work on it full time, but that level of funding has to deliver something more than barely documented functionality.
You could have a society where there's one single spreadsheet package made by a team of 20 people, a few operating systems, a new set of 50 video games every year (with graphics that are good enough but nothing groundbreaking so they'll run on old hardware) created according to quota by state-run enterprises, Soviet style.
This would be very efficient in avoiding duplication; the entire industry would probably only need a few thousand developers. It would also save material resources and energy. But I think that even if the software these companies produced was entirely reliable and bug-free, it would still be massively outcompeted by the flashy trend-chasing free-market companies which produce a ton of duplicated outputs (Monday.com, Trello, Notion, Asana, Basecamp - all these do basically the same thing).
It's the same with AI, or any other trend like tablets, the internet, smartphones - people wanted these and companies put their money into jumping aboard. If ChatGPT really was entirely useless and had <10,000 users then it would be business as usual - but execs can see how massive the demand is. Of course plenty are going to mess it up and probably go broke, but sometimes jumping on trends is the right move if you want a sustainable business in the future. Sears and Blockbuster could've perfected their traditional business models and customer experience without getting on the internet, and they would have still gone broke as customers moved there.
What could have been if, instead of crypto, trillions were invested in something actually useful? What about the housing bubble, from which we learned nothing, as we are falling into it again?
There is a lot of stinky garbage in AI, but at least you can rescue some value from it; in fact, the valuable work could be most of the activity out there, but you only notice what stinks.
Greenfield development is infinitely easier than brownfield.
It's damn hard work to dig in, uncover what's wrong, and fix something broken - especially if someone's workflow depends on the breakage.
Flashy AI features get attention and even if they piss you off, they make you believe the thing is fresh. Sorry but you're human.
Maybe the epitome of shoving AI into everything is Gemini showing up in my Gmail to help me write (which I do not need) while their spam filters still allow obvious phishing emails through.
I understand organizationally how this happens, and the incentives that build such a monstrosity but it’s still objectively a shame.
TFA kind of assumes that the companies involved would have improved their software in a world in which those resources weren't spent on AI. Since much software contained long-unfixed bugs well before the GenAI boom, I'm not convinced.
The recent Stack Overflow survey said that only 25% of developers are actually happy at work https://survey.stackoverflow.co/2025/work#job-satisfaction-j... . Gallup said only 33% of employees are engaged in the economy in general. Not everyone gets to go to conferences and network for the Godot game engine like the author; most are doing super repetitive jobs. I definitely want an AI to automate as many of those as possible, ASAP.
I disagree. The hype is wearing people down and making them think it's a waste of time, but LLMs just came out a couple years back, and even the trendline from the past decade (pre-LLMs) is up up up.
The amount of interest to explore this opportunity is worth it. The bubble is worth it. I don't think it's lost years, and even if it is, the technology is compelling enough to make the gamble worth it.
The fatigue of reading the same shit over and over again makes people forget that it's only been a couple years. It also makes people forget how groundbreaking and paradigm-shifting this technology was. People are complaining about how stupid LLMs are when, possibly just 5 years back, no one could even predict that such levels of intelligence in machines were possible.
Hopefully users can start banding together and paying for the features they want directly instead of having all the income be funneled from ads.
>"while delivering absolutely nothing of value"
Well maybe for you and not the millions of people that use this technology daily.
Is more maintenance work on old software really the highest aspiration of the tech industry?
Let's see.
1. Unwanted but coercive upgrades.
2. Delivering genuinely upgraded complexity and incoherence, under the cloak of new feature theatre.
3. In a context where customers just wish they could keep using what they have already paid for. Without paying again.
4. But they have to, because of artificially introduced cross-user new-version incompatibilities and strategically scheduled bit rot.
5. All so a company can keep extracting money from people it is no longer actually serving.
This is very similar to enshittification. Customers invest (time, money, data), in something they love. Their common investment has been transmogrified into a dairy, er, prison. And now they are all milked by all the dark patterns and milk pumps their supplier/master can insert between the users and water, users and grass, users and the midden, er, I mean into every remaining useful part of the service. While they shamelessly and unconvincingly claim to still be their hope, light and beloved benefactor.
Encrapification? No...
Fucked-up-grades? Choose whichever silla-bic em-fassis you prefer. (Sorry for the saucy stout boldness, to you of youth and gentle heart. But the point of crass terms is to not let targets off the verbial hook with a weak or cute euphemism.)
Quickbooks, anyone? The list is long.
This is another genre of AI article that annoys me: The one where the author starts by agreeing with you that AI is a definitely a bubble, and we’re gonna just “know” that for the rest of the article, no argument necessary.
I don’t feel like this article is trying to start a conversation, it wants to end the conversation so we can have dessert (aka, catastrophizing about the outcome of the thing “we know” is bad).
Yet another article that can be losslessly translated to a single sentence.
> What could have been if instead of spending so much energy and resources on developing “AI features” we focused on making our existing technology better?
The implied answer to this question really just misunderstands the tradeoffs of the world. We had plenty of money and effort going into our technology before AI, and we got... B2B SaaS, mostly.
I don't disagree that the world would be better off if all of the money going into so many things (SaaS, crypto, social media, AI, etc.) was better allocated to things that made the world better, but in order for that to happen, we would have to be in a very different system of resource allocation than capitalism. The issue there is that capitalism has been absolutely core to the many, many advances in technology that have been hugely beneficial to society, and if you want to allocate resources differently than the way capitalism does, you lose all of those benefits and probably end up worse off as a result (see the many failures of communism).
> So I ask: Why is adding AI the priority here? What could have been if the investment went into making these apps better?
> I’m not naive. What motivates people to include AI everywhere is the promise of profit. What motivates most AI startups or initiatives is just that. A promise.
I would honestly call this more arrogant than naive. Doesn't sound like OP has worked at any of the companies that make these apps, but he feels comfortable coming in here and presuming to know why they haven't spent their resources working on the things he thinks are most important.
He's saying that they're not fixing issues with core functionality but instead implementing AI because they want to make profit, but generally the sorts of very severe issues with core functionality that he's describing are pretty damaging to the revenue prospects of a company. I don't know if those issues are much less severe than he's describing or if there's something else going on with prioritization. I don't know if the whole AI implementation was competitive with fixing those - maybe it was just an intern given a project, and that's why it sucks.
I have no idea why they've prioritized the things they have, and neither does the author. But just deciding that they're not fixing the right things because they implemented an AI feature that he doesn't like is not a particularly valid leap of logic.
> Tech executives are robbing every investor blind.
They are not. Again, guy with a blog here is deciding that he knows more than the investors about the things they're investing in. Come on. The investors want AI! Whether that's right or wrong, it's ridiculous to suggest they're being robbed blind.
> Unfortunately, people making decisions (if there are any) only chase ghosts and short term profits. They don’t think that they are crippling their companies and dooming their long term profitability.
If there are any? Again, come on. And chasing short term profits? That is obviously and demonstrably incorrect - in the short term, Meta, Anthropic, OpenAI and everybody else is losing money on AI. In the long term, I'm going to trust that Mark Zuckerberg and Sam Altman, whether you like them or hate them, have a whole lot better idea of whether or not they're going to be profitable in the long term than the author.
This reads like somebody who's mad that the things he wants to be funded aren't being funded and is blaming it on the big technology of the day then trying to back into a justification for that blame.
It’s not just the AI bubble. Think of all the public services, rights, and scientific and medical research being destroyed by rightwing extremists and their billionaire enablers. It will take years, decades perhaps, to undo the damage they’ve already done, in just over 6 months in power.