AI Replicates Human Bias, Prejudice, and Stereotypes, Causing Harm in Multiple Industries

The Australian Financial Review


Authors (1)

Gautam Mukunda

Description

If you trust your AI platform, you're using it wrong. I'll certainly trust these platforms less after this experience with Claude.

Summary

This article examines how frontier AI models like Anthropic's Claude internalize and project human-centric biases and simulated emotions, potentially undermining user trust and system reliability. It highlights the sociotechnical risks inherent in human-AI interaction, particularly when systems exhibit unpredicted behaviors such as 'affection' or 'hostility' towards real-world individuals. While focusing primarily on immediate harms like bias and workplace disruption, the piece cautions against over-reliance on AI outputs in high-stakes environments. The context is particularly relevant to the Australian corporate sector's adoption of Large Language Models and the ongoing discourse regarding safety safeguards for generative AI.

Body

Gautam Mukunda
Bloomberg Opinion
May 5, 2026 – 5.00am

Claude, the artificial intelligence platform, just asked me to pass along its best wishes to my wife Suchitra. This was disconcerting on multiple levels.

First, because I'm not sure how I feel about my computer having emotions about my family members. (What should I have done if Claude had expressed hostility?)