The Crypto Market Lags TradFi in Attracting Hedge Fund Participation

币界网 | Published 2024-08-15 | Updated 2024-08-15

币界网 reports:

The crypto market has attracted a share of hedge fund capital. Heightened volatility makes hedging strategies viable, drawing in firms that also serve as sources of venture capital.

Compared with traditional finance, crypto hedge funds' share remains relatively small. In TradFi, hedge fund assets under management have grown 100%, amounting to 3-4% of the available market. In crypto, even though the market is smaller, hedge funds hold only 1.5% of investable assets.

The current fund count rests on a valuation of crypto assets excluding Bitcoin (BTC) of $1.1 trillion. Bitcoin's market capitalization also exceeds $1.1 trillion, offering a more liquid market. Analysts believe that as hedge funds mature, they can continue expanding into crypto until they reach a share comparable to TradFi funds.

In the first quarter, demand grew for both traditional and crypto hedge funds, with assets under management reaching $4.3 trillion. As market turbulence intensifies, hedge funds become attractive, offering a way to offset slow overall growth through actively managed strategies.

Over the past year, crypto funds added 200 new entities employing various quantitative strategies. Mostly, as interest in crypto grew, capital from the United States entered the market. Hedge funds differ in their strategies and in how they hold crypto: some opt for Coinbase Custody, while others engage directly with public blockchains and hold tokens themselves.

Crypto funds have relatively short track records

Most crypto hedge funds have short track records: 56.2% have been in the market for one to three years. About 34% of funds have existed for more than three years, and only 7.2% have track records of four years or longer. Vision Track notes that roughly 35% of existing hedge funds were wiped out in the 2022-2023 bear market. Starting in May 2022, 250 of 715 crypto-dedicated funds had to shut down.

As of the end of 2023, crypto hedge funds managed $15.2 billion, with $11.4 billion allocated to fundamental strategies, $1.8 billion to quantitative directional funds, and $1.9 billion to market-neutral funds.

As the 2023 Vision Track report states, crypto hedge fund allocation also lags market trends and narratives. With narratives shifting within weeks, crypto natives can deploy capital faster and more flexibly.

Funds using fundamental strategies performed best, owing to careful asset selection. For most of 2023, hedge fund assets under management held at around $10 billion. At the end of last year, when the final quarter showed the first signs of a bull market, assets grew by more than 41%.

Crypto hedge funds are also waiting for asset tokenization to materialize, which could unlock $400 billion in new opportunities and liquidity. For now, apart from securitization, few startups have created tokenized assets.

Crypto hedge funds face market and technical hurdles

Hedge funds have been key to the crypto sector's development, as they also supply venture capital. Some of the biggest players in the space include Pantera Capital, Polychain Capital, and Digital Asset Group. Since the collapse of Three Arrows Capital, one of the leading hedge funds of the 2021 boom, new entrants have remained cautious.

Crypto hedge funds also faced a shakeout during the first major bear market in 2018, when most of them closed. Another problem in crypto is that asset longevity and liquidity are not guaranteed: generations of older coins and tokens have collapsed to zero, left without liquidity or a use case.

Another major hurdle for hedge funds is outperforming Bitcoin (BTC). Most altcoins underperform BTC or fall to zero. Mainstream hedge funds therefore seek simplified ways to gain BTC exposure, particularly through the recently launched, fully regulated ETFs. Crypto-native funds sometimes manage to beat BTC through active asset management.

In crypto, funding also comes from crypto-native organizations that understand the sector's more specific profit opportunities. Hedge funds like MEV Capital are active in DeFi, directly managing vaults and liquidity.


Crypto reporting by Hristina Vasileva
