Sam Bankman-Fried's Parents Ask Court to Dismiss FTX's Lawsuit Seeking to Recover Funds

CoinDesk · Policy · Published Jan. 16, 2024 · Updated Jan. 17, 2024

Summary

Bankman and Fried, both professors at Stanford Law School, argued that Bankman did not have a fiduciary relationship with FTX.

Joseph Bankman and Barbara Fried, the parents of Sam Bankman-Fried, have asked a court to dismiss a lawsuit by the bankrupt cryptocurrency exchange FTX seeking to recover funds it alleges were fraudulently transferred.

FTX sought to "recover millions of dollars" from Bankman and Fried in Sept. 2023. Less than two months later, their son, Bankman-Fried, was found guilty on all seven charges of defrauding customers and the United States. His sentencing is expected in March.

Bankman and Fried, both professors at Stanford Law School, argued that Bankman did not have a fiduciary relationship with FTX and did not serve "as a director, officer, or manager," and that even if a fiduciary relationship existed, FTX failed to plausibly allege a breach, according to a Jan. 15 court filing.


Significantly, the court filing argued that it is not enough for FTX to plead that the parents “knew or should have known.” Instead, the filing argued that FTX should have produced specific facts showing “actual knowledge” that the parents “knew certain actions would result in a breach of fiduciary duty.”

In the Sept. 2023 lawsuit, FTX did not state the total amount Bankman and Fried may have misappropriated, but it did provide certain line items: Bankman received an annual salary of $200,000 for his role as a senior adviser to the FTX Foundation, more than $18 million for a property in the Bahamas and $5.5 million in FTX Group donations to Stanford University, which the university has said will be returned.

Edited by Parikshit Mishra.

