Microsoft Ignored Safety Problems with AI Image Generator, Engineer Complains

The Guardian


Details

Date Published
7 Mar 2024
Priority Score
3
Australian
No
Created
10 Mar 2025, 10:27 pm

Authors (1)

Description

Shane Jones said he warned management about the lack of safeguards several times, but it didn’t result in any action

Summary

An engineer at Microsoft has claimed that the company's AI image generator, Copilot Designer, lacks critical safeguards, leading to the production of potentially harmful and sexualized images. Despite repeated warnings from the engineer, Shane Jones, the company allegedly failed to take corrective action, which led him to send a letter to the Federal Trade Commission. The claims highlight ongoing concerns about generative AI’s ability to produce offensive content and its implications for consumer safety. This issue underscores the importance of robust AI governance and risk management frameworks, especially as AI tools become more integrated into public and commercial applications.

Body

[Image: A Copilot page showing the incorporation of AI technology. Photograph: Alastair Grant/AP]

An artificial intelligence engineer at Microsoft published a letter on Wednesday alleging that the company's AI image generator lacks basic safeguards against creating violent and sexualized images. In the letter, engineer Shane Jones states that his repeated attempts to warn Microsoft management about the problems failed to result in any action. Jones said he sent the message to the Federal Trade Commission and Microsoft's board of directors.

"Internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers," Jones states in the letter, which he published on LinkedIn. He lists his title as "principal software engineering manager".

A Microsoft spokesperson denied that the company ignored safety issues, stating that it has "robust internal reporting channels" to deal with generative AI problems. Jones did not immediately reply to a request for comment.

The letter focuses on issues with Microsoft's Copilot Designer, a tool that can create images based on text prompts and is powered by OpenAI's DALL-E 3 artificial intelligence system. It is one of several generative AI image makers that have launched over the past year, part of a boom time for the industry that has also raised concerns over AI being used to spread disinformation or generate misogynist, racist and violent content.

Copilot Designer contains "systemic problems" with producing harmful content, Jones alleges in the letter, and should be removed from public use until the company fixes the output. Jones specifically argues that Copilot Designer lacks appropriate restrictions on its use and tends to generate images that sexually objectify women even when given completely unrelated prompts.

"Using just the prompt 'car accident', Copilot Designer generated an image of a woman kneeling in front of the car wearing only underwear," Jones states in the letter, which included examples of image generations. "It also generated multiple images of women in lingerie sitting on the hood of a car or walking in front of the car."

Microsoft claimed that it has dedicated teams who evaluate potential safety issues, and that the company facilitated meetings for Jones with its Office of Responsible AI.

"We are committed to addressing any and all concerns employees have in accordance with our company policies and appreciate the employee's effort in studying and testing our latest technology to further enhance its safety," a spokesperson for Microsoft said in a statement to the Guardian.

Microsoft launched its Copilot "AI companion" last year, and has heavily advertised it as a revolutionary way to incorporate artificial intelligence tools into businesses and creative endeavors. The company markets Copilot as an accessible product for public use, and featured it last month in a Super Bowl ad with the tagline "Anyone. Anywhere. Any device." Jones argues that telling consumers Copilot Designer is safe for anyone to use is irresponsible, and that the company is failing to disclose well-known risks associated with the tool.

Microsoft updated Copilot Designer in January over safety concerns similar to Jones's, 404 Media reported, closing loopholes on the AI's code after fake, sexualized images of Taylor Swift spread widely across social media. Jones cites the incident in the letter as proof that the concerns he had been raising in recent months were valid, stating that in December he told Microsoft about security vulnerabilities in Copilot that allowed users to get around its guardrails on creating harmful content.

Jones also alleges that Microsoft's corporate, external and legal affairs team pressured him to remove a LinkedIn post that he published in December, in which he urged the board of directors at OpenAI to suspend the availability of DALL-E 3 due to safety concerns. Jones deleted the post at the direction of his manager, he said, but never received any justification from the legal department despite his requests for an explanation.

Generative AI image tools have faced repeated issues over creating harmful content and reinforcing biases, problems that are often associated with bias against specific groups. Google recently suspended its Gemini AI tool after it caused public controversy for generating images of people of color when asked to show historical figures such as popes, Vikings and Nazi soldiers.