BitMine joins NYSE ‘Big Board’ with expanded $4B buyback plan

ambcrypto · Published 2026-04-10 · Last updated 2026-04-10

Abstract

BitMine Immersion Technologies achieved a major milestone by uplisting from NYSE American to the prestigious New York Stock Exchange (NYSE), enhancing its visibility and trading potential. The company also announced a significant expansion of its share buyback program, increasing it from $1 billion to $4 billion, positioning it among the top 10 corporate buyback initiatives. This strategy aims to repurchase shares if they trade below intrinsic value relative to its ETH holdings. Additionally, BitMine reported holding 4.8 million ETH, nearing its 6 million ETH target, and plans to stake its entire ETH stash on its MAVAN platform, potentially generating $300 million annually in staking rewards. Despite these developments, its stock (BMNR) declined 2% to $21.29, with a year-to-date loss of 22%.

BitMine Immersion Technologies hit a major milestone on Wall Street today, marking a big start to Q2.

On Thursday, the firm was uplisted to the New York Stock Exchange (NYSE) from the ‘smaller’ NYSE American. In a statement, Tom Lee, Chairman of the world’s largest Ethereum treasury, said that being uplisted to the “big board” is a “major milestone.” He added,

“The NYSE is the most prestigious venerable stock exchange with a storied history.”

This could increase the firm’s visibility and, by extension, its trading volume.

Additionally, the firm’s board approved an expansion of its share buyback program, from the $1 billion authorized in 2025 to $4 billion. Commenting on the enlarged buyback, Lee claimed,

Bitmine’s expanded $4 billion buyback reflects our commitment to shareholders. There may be a time in the future when Bitmine shares are trading below intrinsic value, and the Company wants to be in a position to accretively retire common shares.

This means the firm could begin actively buying back shares if its mNAV (market-to-Net Asset Value ratio) falls below 1. In other words, when the stock trades at a discount to the value of its ETH holdings, the buyback would be initiated.
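The mNAV trigger described above can be sketched in a few lines. All figures below are illustrative placeholders, not BitMine's actual market data:

```python
# Hypothetical sketch of an mNAV-based buyback trigger. Market cap and
# ETH price here are made-up example numbers, not reported figures.

def mnav(market_cap_usd: float, eth_holdings: float, eth_price_usd: float) -> float:
    """Market capitalization divided by the net asset value of the ETH treasury."""
    nav = eth_holdings * eth_price_usd
    return market_cap_usd / nav

def buyback_active(market_cap_usd: float, eth_holdings: float, eth_price_usd: float) -> bool:
    # A buyback is accretive when shares trade below intrinsic value, i.e. mNAV < 1.
    return mnav(market_cap_usd, eth_holdings, eth_price_usd) < 1.0

# Example: 4.8M ETH at a hypothetical $2,000/ETH gives a $9.6B NAV;
# an $8.64B market cap would put mNAV at 0.9, so the condition fires.
print(mnav(8.64e9, 4.8e6, 2000.0))           # 0.9
print(buyback_active(8.64e9, 4.8e6, 2000.0)) # True
```

The key design point is that the trigger compares the stock's market value only against the treasury's ETH, ignoring other assets, which is the simplification implied by "trading at a discount to its ETH holdings."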

The $4B program ranked BitMine among the top 10 firms with the largest corporate buybacks.

BitMine’s holdings hit 4.8M ETH

Separately, the firm reported that it now holds 4.8 million ETH, putting it 79% of the way to its target of 6 million ETH, dubbed the ‘5% Alchemy.’ In the past week alone, it bought 40K ETH.

Unlike Strategy, which currently holds BTC for pure price appreciation and to extend the same volatility to MSTR, BitMine eyes a steady annual revenue.

The firm recently launched MAVAN, its staking platform, and plans to stake its entire ETH stash on it. At current staking rewards, BitMine could earn $300 million annually. The MAVAN platform won’t stop at ETH either, with planned expansion to other Proof-of-Stake (PoS) chains such as Solana.
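The $300 million figure can be sanity-checked as a back-of-envelope calculation. The staking APR and ETH price below are illustrative assumptions, not numbers reported in the article:

```python
# Rough check of the $300M annual staking revenue claim.
# The ~3% APR and the ETH price are assumed values for illustration only.

def annual_staking_revenue_usd(eth_staked: float, apr: float, eth_price_usd: float) -> float:
    """USD value of one year of staking rewards on a given ETH position."""
    return eth_staked * apr * eth_price_usd

# 4.8M ETH at a ~3% staking APR yields 144,000 ETH in rewards per year;
# at a hypothetical $2,083/ETH that is roughly $300M.
revenue = annual_staking_revenue_usd(4.8e6, 0.03, 2083.0)
print(f"${revenue / 1e9:.2f}B")  # ≈ $0.30B
```

Under these assumptions the claim is plausible; the actual figure would move with both the network's staking yield and the ETH price.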

Meanwhile, BitMine’s stock, BMNR, slipped 2% to close Thursday’s session at $21.29, despite the bullish update. On a year-to-date (YTD) basis, BMNR and ETH have posted similar losses of 22% and 26%, respectively.

Source: Google Finance

Final Summary

  • BitMine was uplisted to the NYSE, an upgrade from the prior NYSE American, giving it extra visibility among Wall Street investors seeking indirect exposure to ETH.
  • The treasury firm now holds 4.8 million ETH and is about 20% away from its goal of holding 6 million ETH.
