# Related Articles on Verification

The HTX news center offers the latest articles and in-depth analysis on "Verification", covering market trends, project news, technological developments, and regulatory policy in the crypto industry.

## Why Are We So Persistent in That 'Laborious and Unrewarding' Data Cleaning?

In the article, the RootData team reflects on their second bounty event, which focused on enhancing data transparency in Web3. The event, involving over 140 participants, resulted in 1,220 submissions, of which 564 valid data points were approved, a 46.2% acceptance rate. Key improvements included identifying key team members from projects like MOMO.FUN and Subhub (often not publicly listed), correcting inaccuracies in token unlock details and TGE timelines, and updating outdated information such as misattributed founders and deprecated social accounts.

The author emphasizes that ensuring data transparency, though challenging, is critical for protecting investors' "right to know." In Web3, where misinformation is common (e.g., inconsistent token unlock data across platforms), RootData aims to serve as a reliable source of validated information. The team notes that core-team changes around TGE events often signal project risk, yet such details are frequently overlooked.

To uphold transparency, RootData publishes monthly reports on false fundraising claims, conducts in-depth analyses (e.g., exchange listing reports), and rigorously cross-verifies data, even declining unverified submissions. They also engage with industry leaders like Binance to align on data accuracy goals. The long-term vision is to transform isolated data points into structured, actionable transparency reports that support informed investment decisions. The article concludes by advocating collective effort to advance Web3 data integrity.

marsbit · 01/24 09:09


## Why Do I Feel Less Valuable the More I Use AI?

The article discusses the "Zhang Wenhong Paradox," named after a prominent Chinese doctor who refuses to integrate AI into hospital medical records. He argues that while he can leverage AI to review cases and spot its errors thanks to decades of experience, young doctors who rely on AI from the start risk never developing the independent clinical judgment needed to verify AI's output. This highlights a broader anxiety among skilled professionals (programmers, lawyers, analysts): as AI handles 80% of execution work, they fear the remaining 20% of their contribution may no longer justify their professional worth.

The core argument is that AI acts as a multiplier: it amplifies existing skills (10x) but cannot compensate for a fundamental lack of understanding (0 × 10 = 0). True skill in the AI era is redefined as judgment: the ability to define problems, think structurally, and verify AI outputs. The author warns against outsourcing thinking to AI; clear, structured input is crucial to avoid "garbage in, garbage out." Moreover, AI tends to output average, consensus-based answers, so deep, first-principles understanding is needed to challenge its suggestions and avoid mediocrity.

Historically, tools like computers transformed professions (lawyers, for example, shifted from finding cases to crafting strategies). Similarly, AI is shifting human roles from "doers" to "validators" and "commanders" who integrate macro-strategy with micro-verification. The conclusion: this is the best era for independent thinkers who can leverage AI as a powerful tool, but it requires a solid foundation of expertise to avoid becoming a mere operator of the technology. The key is to "compete with AI in setting questions, not answering them."

marsbit · 01/19 12:08

