Robinhood Hires Former Cruise and Lyft Executive Jeff Pinner as Chief Technology Officer

币界网 · Published on 2024-08-12 · Last updated on 2024-08-12

币界网 reports:

Robinhood has appointed Jeff Pinner as its Chief Technology Officer. Announced today (Monday), the hire brings Pinner's background at Cruise and Lyft to bear as he spearheads efforts to accelerate Robinhood's product development, optimize its infrastructure, and improve the customer experience.

Jeff Pinner's New Role

Pinner's role at Robinhood will focus on strengthening the company's engineering capabilities. The company expects his appointment to have a significant impact on its ability to deliver financial services.

Commenting on the appointment, Robinhood CEO and co-founder Vlad Tenev said: "Engineering excellence is critical to our ability to build cutting-edge financial products at Robinhood, which is why we're excited to welcome Jeff as Chief Technology Officer." Tenev emphasized that Pinner's experience in artificial intelligence and infrastructure aligns closely with Robinhood's goals.

Before joining Robinhood, Pinner was a Distinguished Engineer at Cruise, where he reportedly played a role in advancing autonomous vehicle technology. His earlier tenure at Lyft included major achievements such as scaling the company's engineering infrastructure and leading its marketplace organization.

Pinner also served as Lyft's Chief Technology Officer, where he helped drive substantial growth and technological progress. Robinhood is counting on his deep technical knowledge and experience to advance its cutting-edge financial products and services.

Other Executive Moves at Robinhood

Last week, in another significant executive move, Robinhood appointed David Schwed as Chief Information Security Officer of its brokerage division. Schwed previously served as Chief Operating Officer of the cybersecurity firm Halborn and later stayed on as an advisor.

Schwed is a technologist who has worked at notable companies including DFNS, Lava Network, Utila, and Hexagate. The Empire State University alumnus has also worked for Citi, Galaxy Digital, and Bank of New York.

Meanwhile, Robinhood recently reported record second-quarter results, with total net revenue jumping to $682 million. The performance was driven by growth in transaction revenue and a boost from its premium subscription service.

The company posted net income of $188 million, or $0.21 per diluted share, a significant increase from $25 million ($0.03 per share) in the same quarter a year earlier. Net income grew 652% year over year.
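The reported year-over-year growth figure can be checked directly from the quoted net income numbers:

```python
# Verify the reported YoY net income growth from the figures in the article.
prior_q2 = 25_000_000      # net income in the same quarter last year (USD)
current_q2 = 188_000_000   # net income this quarter (USD)

growth_pct = (current_q2 - prior_q2) / prior_q2 * 100
print(f"YoY net income growth: {growth_pct:.0f}%")  # → 652%
```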

Related Reads

Where Is the AI Infrastructure Industry Chain Stuck?

The AI infrastructure (AI Infra) industry chain is facing unprecedented systemic bottlenecks, despite the rapid emergence of applications like DeepSeek and Seedance 2.0. The surge in global computing demand has exposed critical constraints across multiple layers of the supply chain, from core manufacturing equipment and data center cabling to specialty materials and cleanroom facilities. Key challenges include four major "walls":

- **Memory Wall**: High-bandwidth memory (HBM) and DRAM face structural shortages as AI inference demand outpaces training, with new capacity not expected until 2027.
- **Bandwidth Wall**: Data transfer speeds lag behind computing power, causing multi-level bottlenecks in-chip, between chips, and across data centers.
- **Compute Wall**: Advanced chip manufacturing, reliant on EUV lithography and monopolized by ASML, remains the fundamental constraint, with supply chain fragility affecting production.
- **Power Wall**: While energy demand from data centers is rising, power supply is a solvable near-term challenge through diversified energy infrastructure.

Expansion is further hindered by shortages in testing equipment, IC substrates (critical for GPUs and seeing price hikes over 30%), specialty materials like low-CTE glass fiber, and high-end cleanroom facilities. Connection technologies are evolving, with copper cables resurging for short-range links due to cost and latency advantages, while optical solutions dominate long-range scenarios. Innovations like hollow-core fiber and advanced PCB technologies (e.g., glass substrates, mSAP) are emerging to meet bandwidth needs.

In summary, AI Infra bottlenecks are multidimensional, spanning compute, memory, bandwidth, power, and supply chain logistics. Advanced chip manufacturing remains the core constraint, while substrate, material, and equipment shortages present immediate challenges. The industry is moving toward hybrid copper-optical solutions and accelerated domestic supply chain development.

marsbit · 23m ago


Autonomy or Compatibility: The Choice Facing China's AI Ecosystem Behind the Delay of DeepSeek V4

DeepSeek V4's repeated delay in early 2026 has sparked global discussions on "de-CUDA-ization" in AI. The highly anticipated trillion-parameter open-source model is undergoing deep adaptation to Huawei's Ascend chips using the CANN framework, representing China's first systematic attempt to run a core AI model outside the CUDA ecosystem.

This shift, however, comes with significant engineering challenges. While the model uses a MoE architecture to reduce computational load, it places extreme demands on memory bandwidth, chip interconnects, and system scheduling, areas where NVIDIA's mature CUDA ecosystem currently excels. Migrating to Ascend introduces complexities in hardware topology, communication latency, and software optimization due to CANN's relative immaturity compared to CUDA.

The move highlights a broader strategic dilemma: short-term compatibility with CUDA offers practical benefits and faster adoption, as seen in CANN's efforts to emulate CUDA interfaces. Yet long-term over-reliance on compatibility risks inheriting CUDA's limitations and stifling native innovation. If global AI shifts away from transformer-based architectures, strict compatibility could lead to technological obsolescence.

Despite these challenges, DeepSeek V4's eventual release could demonstrate the viability of a full domestic AI stack and accelerate CANN's ecosystem growth. However, true technological independence will require building an original software-hardware paradigm beyond compatibility, a critical task for China's AI ambitions in the next 3-5 years.

marsbit · 41m ago


How Blockchain Fills the Identity, Payment, and Trust Gaps for AI Agents?

AI Agents are rapidly evolving into autonomous economic participants, but they face critical gaps in identity, payment, and trust infrastructure. They currently lack standardized ways to prove who they are, what they are authorized to do, and how they should be compensated across different environments.

Blockchain technology is emerging as a solution to these challenges by providing a neutral coordination layer. Public ledgers offer auditable credentials, wallets enable portable identities, and stablecoins serve as a programmable settlement layer.

A key bottleneck is the absence of a universal identity standard for non-human entities, akin to "Know Your Agent" (KYA), which would allow Agents to operate with verifiable, cryptographically signed credentials. Without this, Agents remain fragmented and face barriers to interoperability. Additionally, as AI systems take on governance roles, there is a risk that centralized control over models could undermine decentralized governance in practice. Cryptographic guarantees on training data, prompts, and behavior logs are essential to ensure Agents act in users' interests.

Stablecoins and crypto-native payment rails are becoming the default for Agent-to-Agent commerce, enabling seamless, low-cost transactions for AI-native services. These systems support permissionless, programmable payments without traditional merchant onboarding.

Finally, as AI scales, human oversight becomes impractical. Trust must be built into system architecture through verifiable provenance, on-chain attestations, and decentralized identity systems. The future of Agent economies depends on cryptographically enforced accountability, allowing users to delegate tasks with clearly defined constraints and transparent operation logs.

marsbit · 1h ago

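The "verifiable, cryptographically signed credentials" idea behind KYA can be sketched minimally. The field names, scope strings, and symmetric-HMAC scheme below are illustrative assumptions only; a real system would use asymmetric signatures and on-chain attestations rather than a shared key:

```python
import hashlib
import hmac
import json

# Hypothetical KYA-style credential: an issuer signs an agent's identity and
# its authorized scopes; a verifier recomputes the signature to check them.
ISSUER_KEY = b"issuer-demo-key"  # placeholder shared secret for the sketch

def issue_credential(agent_id: str, scopes: list) -> dict:
    """Sign a canonical JSON encoding of the agent's claims."""
    claims = {"agent_id": agent_id, "scopes": scopes}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Recompute the signature over the claims and compare in constant time."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

cred = issue_credential("agent-42", ["pay:stablecoin", "read:prices"])
print(verify_credential(cred))  # → True; any tampering with claims fails
```

Tampering with any claim (say, swapping the `agent_id`) changes the canonical payload, so verification fails, which is the accountability property the summary describes.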
