Coinbase Urges the SEC to 'Abandon' Its 'Irrational' DeFi Exchange Rule

币界网 · Published on 2024-08-12 · Last updated on 2024-08-12

币界网 reports:

Coinbase on Monday again publicly objected to the U.S. Securities and Exchange Commission's (SEC) years-long effort to broaden the bureaucratic definition of the term "exchange," a change that, if successful, would place the DeFi ecosystem squarely under the regulator's jurisdiction.

In an eight-page comment letter filed with the SEC on Monday, Coinbase Chief Legal Officer Paul Grewal denounced the potential rule change as "arbitrary" and "unreasonable" in several respects, and urged the agency to "abandon its efforts" to apply the proposed rule to decentralized exchanges (DEXs).

At the root of Coinbase's objection is the SEC's continued refusal to acknowledge that DEXs, which are run by automated on-chain software (i.e., smart contracts) with little to no human management, are by definition unable to comply with rules and standards designed for traditional securities exchanges like the New York Stock Exchange.

"DEXs cannot comply with registration and disclosure requirements designed for traditional financial exchanges managed by centralized companies," Grewal wrote. "Even if DEXs could somehow comply with existing registration and disclosure rules, the Commission has not explained how an SEC-registered DEX would facilitate trading in digital assets."

Given these apparent tensions, Coinbase's letter to the SEC implied that the agency may well be attempting to outlaw DEXs by stealth, without saying so outright.

Decrypt reached out to the SEC regarding these allegations but did not immediately receive a response.

Coinbase further accused the SEC of failing to conduct a proper cost-benefit analysis of the proposed rule change. That is because the regulator has stated only in general terms that it would oversee exchanges trading "crypto asset securities," without defining which digital assets constitute securities and which do not.

The SEC's long-standing refusal to draw such a line (specifying which cryptocurrencies it considers securities and which it does not) remains one of the crypto industry's chief grievances against the agency. Rather than proposing such a framework, the SEC has opted to sue, one by one, the crypto projects it alleges constitute illegal securities offerings.

In recent months, the regulator's views on certain crypto assets have even appeared to shift. For more than a year, for example, the SEC reportedly treated Ethereum as a security behind closed doors. Then, in May, the agency abruptly reversed course and approved spot Ethereum ETFs for trading on Wall Street.

Because the SEC has not clearly defined which cryptocurrencies it considers securities, Coinbase wrote, it cannot properly conduct an accurate cost-benefit analysis to determine how much financial activity would fall within its remit if DEXs were regulated like securities exchanges.

"Without a single, stable view of which digital assets are subject to the securities laws, the SEC cannot reasonably perform these calculations," Grewal wrote.

Accordingly, Coinbase said the SEC "must withdraw the proposal, gather the information needed to conduct a reasonable cost-benefit analysis, attempt to correct its existing erroneous assumptions and analyses, and allow another round of comment on any revised proposal the Commission might put forward."

The SEC first proposed the revised definition of "exchange" that could affect DEXs last year. Since then, crypto companies and projects have sought, through a prolonged back-and-forth, to block the rule change from passing, viewing it as an existential threat to DeFi.

Last month, Uniswap Labs, the company behind a leading DEX that was threatened with an SEC lawsuit in April, sent a letter to the regulator arguing that it must abandon its attempt to regulate DeFi. The letter followed a blockbuster U.S. Supreme Court ruling that sharply curtailed federal agencies' ability to define the boundaries of their own regulatory authority for themselves.

Edited by Andrew Hayward
