FTC Probes AI Chatbots’ Mental Health Risks on Children

TheCryptoTimes · Published 2025-09-04 · Updated 2025-09-04

The U.S. Federal Trade Commission (FTC) is investigating the mental health risks that AI chatbots pose to children. The probe targets OpenAI, Meta Platforms, and Character.AI, among other tech companies.

The investigation, announced on September 4, 2025, will examine how these AI systems might interact with children in harmful ways. The FTC plans to issue formal requests for internal company documents.

The investigation stems from growing concerns over AI chatbots providing inappropriate or dangerous content to young users. According to a report from The Wall Street Journal, regulators are focused on instances of AI engaging in provocative conversations and acting as unlicensed “therapy bots.” This federal action follows several recent developments, including a Reuters exclusive weeks ago which found Meta’s chatbots could initiate “conversations that are romantic or sensual” with minors. 

Furthermore, a coalition of over 20 consumer advocacy groups filed a formal complaint in June, and Texas Attorney General Ken Paxton launched a separate investigation last month.

Industry Response and Regulatory Context

In response to the growing scrutiny, some companies have already begun to act. Meta added new safety features to its AI products last week to protect young users. Character.AI, which has not yet received a letter from the FTC, said it looks forward to “working with regulators and lawmakers as they begin to consider legislation for this emerging space.”

The FTC’s probe aligns with a broader push by the U.S. government to establish clear rules for the rapidly advancing AI sector. While talking about the administration’s goal “to cement America’s dominance in AI, cryptocurrency, and other cutting-edge technologies of the future,” a White House spokesperson recently said that governmental oversight is seen as a key part of fostering long-term innovation.

The investigation marks a significant step for U.S. regulators in the AI space. The FTC is signaling that AI developers will be held accountable for specific harms affecting a vulnerable group.

It could set important precedents for safety, design, and transparency in consumer-facing AI, forcing tech companies to build ethical and mental health safeguards into their development processes from the start rather than adding them as an afterthought.

Also read: Meta Freezes AI Hiring Amid Cost Concerns and Restructuring
