Calling GenAI ‘ChatGPT’ is like looking at a forest floor teeming with diverse fungi and saying: “Look at all those Portobellos!”
Yes, ChatGPT might be the most famous, digestible, and commercially available version of Generative AI. But it’s not the only tool and it’s definitely not the ‘best’ in all situations – much like a Portobello can taste like soggy cardboard compared to the refined, aromatic umami of a Porcini in a risotto.
As of March 2026, we’re also seeing a shift. ChatGPT’s user base is declining as people wake up to the broader ecosystem, and the exodus – building since summer 2025 – is accelerating on the back of a growing boycott over ethics.
The floodgates started to creak open with the OpenAI president’s $25 million donation to Donald Trump via the USA’s sinister Super PACs, those shadowy conduits for unlimited and often anonymous donations to influence elections. But what sent the gates crashing open was the sneaky, under-the-radar removal of bans on military use and a lucrative dive into defense contracts with the Pentagon. The Machiavellian move followed competitor Anthropic’s refusal to agree to the same terms, and it looked more than a tad opportunistic: a trumping of ambition and finance over ethics and responsibility.
Aghast, over 4 million users have quit the service, according to the latest figures on the #QuitGPT website. Yes, that’s still a tiny fraction of ChatGPT’s roughly 900 million weekly active users – not even 1% – but if you work at OpenAI, that trajectory must look foreboding. Switching to another provider is relatively painless, investors are watching closely, and the company has to stem the bleeding: yearly losses are running at around $15 billion.
Which brings us back to the figurative forest floor. In the past, with such early dominance, it made sense to identify everything as ChatGPT: the environment was uniform, with a single species in sight. Today, with far more diversity, that shorthand makes no sense at all.
There are highly specialized, enterprise-grade AI models buried deep within secure environments, hard to find but offering immense value for niche tasks. These are the truffles.
Inevitably, as AI becomes weaponized and politically compromised, we are seeing models that look appealing on the surface but carry a systemic toxicity – a bit like the vomit-inducing Yellow Stainer, or even the Death Caps whose potent amatoxins kill unfortunate foragers each year. On the flip side, there are ethically minded, safety-focused models – like those Anthropic is trying to cultivate – with a mission to enhance human cognition without poisoning the well. A bit like Lion’s Mane or Reishi.
New models are emerging all the time, and beneath the surface the open-source AI community acts as the vital mycelium, sharing code, data, and nutrients to keep the ecosystem alive, decentralized, and thriving.
“But wait… all Generative AI hallucinates, while only some mushrooms cause hallucinations.”
True. In the AI world, “hallucinating” means confidently making up facts – a frustrating quirk every single model shares. But the metaphor still holds. Perhaps the biggest hallucination of all was the collective illusion we’ve been under: the naive belief that one single company, driven by boundless ambition and questionable ethics, should dictate the future of this technology. We are finally snapping out of that trip.
- Let’s not call all fungi a Portobello.
- Let’s not call all fish a Great White Shark.
- Let’s not call all birds an Eagle.
Most pertinently: Let’s stop calling all Generative AI “ChatGPT.” The ecosystem is far deeper, more diverse, and more ethically complex than that brand name.