FTC Bans AI-Generated Fake Reviews

币界网 · Published 2024-08-15 · Last updated 2024-08-15

币界网 reports:

The Federal Trade Commission has announced new rules that will affect social media influencers and businesses. Under the new rules, the FTC will impose civil penalties on those who publish AI-generated fake reviews.

According to details in the press release, the FTC announced a final rule to combat AI-generated fake reviews and testimonials. The rule prohibits buying or selling such fake reviews, violations that will also incur civil penalties.

FTC Chair Lina M. Khan said: "Fake reviews not only waste people's time and money, but also pollute the marketplace and divert business away from honest competitors." Khan added that with the final rule in place, Americans will be protected from deception.

FTC Bans Fake Social Media Metrics

The FTC's final rule spells out what is prohibited. According to the press release, the rule bars people from buying or selling fake indicators of social media influence, such as hijacked accounts or bot-generated followers and views.

The FTC also noted that this ban is limited to cases where the buyer knows the metrics are fake and they misrepresent the buyer's influence.

Marketing and e-commerce attorney Rob Freund discussed the final rule on X. Freund said that anyone who obtains views, plays, subscriptions, saves, likes, and the like through any inauthentic means is in violation.

The FTC's Final Rule Could Take Effect as Early as October

The FTC prohibits fake reviews, including AI-generated ones, written by people with no actual experience with the business, product, or service, or that misrepresent the reviewer's experience.

Under the rule, businesses are prohibited from selling or even creating such reviews. Businesses are also barred from buying them, particularly when they know the testimonials are fake.

According to the published details, the final rule takes effect 60 days after its publication in the Federal Register, meaning it could arrive as early as October.

After the announcement was shared on X, the community weighed in on the new rule, with most users responding positively. X user Jonathan Rose Dunlap called it a huge win for retail shoppers.

Other users asked about the penalties for violations, which are expected to be severe.
