Kraken snaps up $60B token platform Magna – IPO next?

ambcrypto · Published on 2026-02-19 · Last updated on 2026-02-19

Abstract

Kraken's parent company, Payward, has acquired tokenization platform Magna for an undisclosed sum as it prepares for a potential IPO. Magna, which recorded a peak TVL of $60 billion in 2025, will operate as a standalone platform supported by Kraken's liquidity and resources. The deal enhances Kraken's capabilities in token issuance, vesting, staking, custody, and escrow services. This move aligns with Payward's confidential IPO filing with the SEC in November, following its reported $2.2 billion in adjusted revenue for 2025. Despite a challenging crypto market where Bitcoin has declined and newly listed firms trade below debut prices, Kraken aims to proceed with its public listing alongside other industry players like Ledger, Copper, Securitize, and Consensys.

Kraken’s parent company, Payward, has acquired tokenization platform Magna as it prepares for a potential IPO. The paperwork was confidentially filed with the SEC.

Kraken expands, IPO plans underway

The exchange has stated that Magna will operate as a standalone platform, supported by Kraken's liquidity, resources, and expertise.

Payward and Kraken co-CEO Arjun Sethi outlined the intent behind the deal, saying the companies want to "help projects move from idea to execution" without "locking them into one stack."

The deal will expand Kraken's ability to handle token issuance, vesting, staking, custody, and escrow services. These are tools increasingly in demand as more projects move on-chain.

Magna CEO Bruno Faviero said joining Kraken will give the platform more resources and global reach. Welcoming the move, he stated,

“I couldn’t be more excited about our shared vision to support token ecosystems and the builders behind them across formation, launch, and growth.”

Magna currently serves more than 160 clients and recorded a peak TVL of $60 billion in 2025. The acquisition follows Payward's confidential IPO filing with the SEC in November; the company reported $2.2 billion in adjusted revenue for 2025.

Payward isn't the only crypto firm eyeing a public listing.

Hardware wallet maker Ledger and digital asset custodian Copper have both explored US listings.

Tokenization firm Securitize recently reported revenue growth of more than 840% ahead of its own IPO plans. Consensys, the parent company of MetaMask, is also reportedly preparing for a debut.

Optimism ran high in 2025, as Bitcoin [BTC] surged from under $94,000 at the end of 2024 to around $126,000 in October. Several crypto firms went public during that rally, many posting strong first-day gains.

But things went sour quickly.

Since October, BTC has fallen below $63,000, and newly listed crypto stocks have struggled. Bullish, eToro and Gemini are all trading well below their debut prices, with some down more than half.

Even Circle, which has held up better than its peers, remains below its opening level.


Final Summary

  • Kraken's Magna acquisition comes as it pushes ahead with IPO plans in a less-than-ideal crypto market.
  • The real test will be whether Kraken can go public as rivals struggle post-listing.

Related Questions

Q: What is the parent company of Kraken and what significant acquisition did it make?

A: Kraken's parent company is Payward, and it has acquired the tokenization platform Magna.

Q: How much was Magna's peak TVL in 2025 and how many clients does it serve?

A: Magna recorded a peak TVL of $60 billion in 2025 and currently serves more than 160 clients.

Q: What are some of the services that Kraken's acquisition of Magna will enhance?

A: The acquisition will increase Kraken's ability to handle token issuance, vesting, staking, custody, and escrow services.

Q: What was Payward's adjusted revenue for 2025 as mentioned in the article?

A: Payward reported $2.2 billion in adjusted revenue for 2025.

Q: According to the article, how has the performance of newly listed crypto stocks been since Bitcoin's price decline from its October high?

A: Since Bitcoin fell from its October high, newly listed crypto stocks have struggled, with Bullish, eToro, and Gemini all trading well below their debut prices, some down more than half.

