# Related Articles on Misinformation

The HTX news center offers the latest articles and in-depth analysis on "Misinformation", covering market trends, project news, technological developments, and regulatory policy in the crypto industry.

Wikipedia Implements New Editing Rules: Vote Passes, Strictly Prohibits Using AI to Generate or Rewrite Article Content

On March 26, Wikipedia officially passed a new policy through a community vote that explicitly prohibits users from directly using AI to generate or rewrite article content. This decision reinforces the platform's commitment to content accuracy and human editorial control. The updated policy strengthens previous guidelines by moving from a recommendation against generating articles from scratch to a strict ban on using large language models (LLMs) for content creation or rewriting. The policy was approved overwhelmingly by volunteer editors, with a vote of 40 to 2, reflecting deep concerns within the community about AI-generated misinformation and inaccuracies.

While AI tools are still permitted for suggesting basic edits, they must not introduce any unverified content, and all AI-assisted contributions must undergo human review to prevent factual errors or hallucinations. This move highlights Wikipedia's effort to balance technological efficiency with content integrity amid the growing use of generative AI across digital platforms. By clearly distinguishing between AI-assisted editing and AI-generated content, Wikipedia aims to preserve human-driven knowledge curation and prevent trust issues caused by automated content production. The decision sets a significant precedent for ethical knowledge management in the age of artificial intelligence.

marsbit 03/27 01:08


An AI-Generated 'Whistleblower Post': How Did It Make Two CEOs Write Self-Defense Essays at Midnight?

An anonymous post on Reddit, allegedly written by a drunken backend engineer from a major food delivery platform, went viral with 87,000 upvotes and 36 million views on X. The post accused the company of using algorithms to exploit drivers: assigning "desperation scores" to prioritize orders for more financially vulnerable drivers, delaying regular orders despite promised priority delivery, and misusing driver welfare funds for lobbying against unions. The viral allegations prompted immediate public denials from the CEOs of DoorDash and Uber, who issued statements and social media posts in the middle of the night to refute the claims, and DoorDash published a detailed rebuttal on its website.

The post was later exposed as an AI-generated hoax by a Platformer reporter. The "whistleblower" provided a fake 18-page technical document and an AI-generated employee ID, which was detected using Google's SynthID watermarking tool; the account was deleted when further verification was requested. The incident highlights how AI can cheaply and convincingly fabricate content that aligns with public skepticism toward tech platforms. Past real controversies, such as DoorDash's tip policy and Uber's Greyball tool, made the false narrative feel plausible. The case underscores growing public anxiety over the difficulty of distinguishing real from AI-generated content and the power of emotionally resonant misinformation, even when debunked, to shape perception.

BitPush (比推) 01/07 13:36

