AI Has Made “Send” the Most Expensive Button in Your Business
SmartCompany
By Ali Green
Published 13 May 2026
AI is making it easier than ever for leaders to generate polished ideas, but many businesses are paying a hidden price.
It starts with an email. A senior person forwards an article to their team with a single line: “Thoughts?”
The person receiving it stops what they’re doing, reads it properly, tries to work out what’s being asked of them, and puts together a considered response. And often, their reply is met with an “interesting!” and nothing else comes of it. In the meantime, they’ve deprioritised their actual work.
That dynamic has always existed. Most leaders don’t even realise they’re doing it.
Now layer AI on top.
Instead of a short article, it’s now a 12-page “strategy”.
AI is brilliant at producing something that sounds coherent. It is much less reliable at telling you whether that thing is right, new, or worth pursuing. You can iterate on a prompt 10 times and end up with something that feels sharper each time, yet it can still be built on a flawed premise. And in my experience, the leaders most at risk aren't the ones still figuring out whether to engage with AI at all; they're the ones embracing it wholeheartedly. And why not, when it sounds so certain about everything?
Using AI can make people feel like they've done the thinking when they haven't. The back and forth with the tool creates a sense of progress and ownership. The output looks polished, and it's easy to feel like you've landed somewhere solid. But the hardest part of the thinking hasn't happened yet: the part where you properly interrogate the idea and decide whether it actually holds up.
I’ve seen detailed proposals for market “gaps” that don’t exist. Strategies that fall apart the moment you apply real-world constraints or ask simple follow-up questions. And these are often created by smart people who have already asked their AI, or another AI, to sense-check their work.
The problem is that when you’re operating outside your own area of expertise, you have no way of knowing what the tool is getting wrong.
That output gets sent on: “Take a look.” “Would love your thoughts.” “Is this something we should pursue?”
And now someone else has to step away from their actual job. They’re no longer just reacting to an idea; they’re trying to work out what the point is, in a much longer and often more convoluted format, while also testing every assumption themselves, without the benefit of having done the iterative work.
That takes real time. It pushes more important work aside. Yet no one calls it out, because it’s coming from someone senior. So it just keeps happening.
Organisations are spending significant money on AI tools, and rightly scrutinising whether those tools are delivering value. But the more insidious cost rarely shows up on any dashboard. It’s the cumulative hours of talented people redirected away from their actual priorities to respond to, sense-check, or make sense of output that wasn’t ready to be shared. That is not an AI problem; it’s a leadership problem with an AI accelerant.
The cost of getting this wrong isn’t measured in how long the document is. It’s measured in how many people had to stop what they were doing to deal with it.
None of this is a reason not to use AI. But it does raise the bar.
If you wouldn’t take a rough draft from a junior team member and send it straight to a colleague or a client, you shouldn’t be doing it with AI output either.
Before you hit send, three questions still matter: can you explain the idea clearly in your own words? Do you know what you’re actually asking the other person for? And is it worth someone else stopping what they’re doing to engage with it?
If the answer to any of those is no, don’t hit send.
AI hasn’t removed the need for good leadership fundamentals. In an environment where everyone can produce more, what sets leaders apart is judgement.