Microsoft Engineer Raises Concerns Over AI Image Generator, Copilot Designer, With FTC

A Microsoft engineer, Shane Jones, has taken a bold step by raising safety concerns about the company's AI image generator, Copilot Designer, with the Federal Trade Commission (FTC). Jones, who has been with Microsoft for six years, reportedly wrote a letter to the FTC outlining his concerns and criticizing Microsoft's handling of the situation.

According to a report from CNBC, Jones expressed frustration that Microsoft has not taken action despite repeated warnings about Copilot Designer's capability to generate harmful images. During testing, Jones discovered disturbing content generated by the tool, including images depicting demons, monsters, and scenes related to sensitive topics such as abortion rights, teenage violence, sexualized violence against women, and underage substance abuse.

One particularly troubling aspect highlighted by CNBC was the tool's generation of images featuring Disney characters like Elsa from Frozen in inappropriate and politically charged contexts, such as scenes depicting the Gaza Strip conflict and Elsa wearing an Israel Defense Forces uniform.

Jones has been vocal since December about his concerns regarding the model underlying Copilot Designer, known as DALL-E 3. Despite his efforts to raise awareness internally and externally, including posting an open letter on LinkedIn, Microsoft reportedly pressured him to remove the post.

In response to inquiries from The Verge, Microsoft spokesperson Frank Shaw reiterated the company's commitment to addressing employee concerns in accordance with its policies. Shaw emphasized Microsoft's established channels for reporting safety issues and stated that the company has facilitated meetings with product leadership and its Office of Responsible AI to review the reports.

This development follows previous incidents involving Copilot Designer, including its generation of explicit images of Taylor Swift in January, which prompted Jones to raise concerns with US senators. Microsoft CEO Satya Nadella condemned the images as "alarming and terrible" and pledged to enhance safety measures. Similarly, Google temporarily disabled its own AI image generator last month due to concerns about racially insensitive and historically inaccurate images.

Jones's decision to escalate the matter to the FTC underscores the seriousness of the situation and reflects growing scrutiny over the ethical implications of AI technologies. As the debate continues, companies like Microsoft face increasing pressure to prioritize safety and accountability in their AI development processes.

Editors at Blank Coverage

Group of selected Authors at Blank Coverage
