Epstein Files Reveal Alleged Early Investment in Coinbase

TheNewsCrypto · Published 2026-02-03 · Last updated 2026-02-03

Summary

Epstein Files reveal that Jeffrey Epstein allegedly invested $3 million in Coinbase through Brock Pierce's Blockchain Capital in 2014. Documents suggest the deal may have secured him a meeting with co-founder Fred Ehrsam. A 2018 email indicates Epstein later sold half his stake back for around $11 million. Separately, Blockstream CEO Adam Back denied any financial ties to Epstein, though a document shows a co-founder discussed the firm's seed round with him.

The latest attention centers on the Epstein Files, a vast collection of documents associated with the case of American financier Jeffrey Epstein. The files reveal that he allegedly made a $3 million investment in the crypto exchange Coinbase around a decade ago.

According to the documents released by the U.S. Department of Justice, Epstein invested in Coinbase via Brock Pierce's Blockchain Capital in 2014. Bitcoin researcher Kyle Torpey noted that it is not clear whether the deal actually went through, though the files contain many discussions about investing in Coinbase.

The investment reportedly secured Epstein a face-to-face meeting with Coinbase co-founder Fred Ehrsam. A leaked email screenshot mentions both Jeff and Ehrsam, signalling that Ehrsam may have been aware of Epstein's involvement in Coinbase.

The Revealed Screenshot

According to the screenshot, Ehrsam wrote: "I have a gap between noon and 3pm today, but again, it's not critical for me, but it would be nice to meet him if suitable. Is it important for him?" Four years later, in 2018, another email surfaced confirming that Epstein received his Coinbase allocation. He then appears to have sold 50% of the stake back to Blockchain Capital for about $11 million.

At the same time, Blockstream CEO Adam Back pushed back against allegations from the Epstein Files of an ongoing connection with the convicted financier. Back posted on X that Blockstream has no direct or indirect financial connection with Jeffrey Epstein.

However, a document released by the U.S. DOJ, dated July 2014, shows that Blockstream co-founder Austin Hill discussed the firm's seed round with Epstein and Joi Ito, then director of the MIT Media Lab.

Adam Back also noted in his post that Blockstream met with Jeffrey Epstein, who was described at the time as a limited partner in Ito's fund.

Highlighted Crypto News Today:

Trump Says He Was Not Involved in $500M Abu Dhabi WLFI Deal

Tags: CEO, Coinbase, DOJ

Related Reads

OpenAI Post-Training Engineer Weng Jiayi Proposes a New Paradigm Hypothesis for Agentic AI

OpenAI engineer Weng Jiayi's "Heuristic Learning" experiments propose a new paradigm for Agentic AI, suggesting that intelligent agents can improve not just by training neural networks, but also by autonomously writing and refining code based on environmental feedback. In the experiment, a coding agent (powered by Codex) was tasked with developing and maintaining a programmatic strategy for the Atari game Breakout. Starting from a basic prompt, the agent iteratively wrote code, ran the game, analyzed logs and video replays to identify failures, and then modified the code. Through this engineering loop of "code-run-debug-update," it evolved a pure Python heuristic strategy that achieved a perfect score of 864 in Breakout and performed competitively with deep reinforcement learning (RL) algorithms in MuJoCo control tasks like Ant and HalfCheetah.

This approach, termed Heuristic Learning (HL), contrasts with Deep RL. In HL, experience is captured in readable, modifiable code, tests, logs, and configurations—a software system—rather than being encoded solely into opaque neural network weights. This offers potential advantages in explainability, auditability for safety-critical applications, easier integration of regression tests to combat catastrophic forgetting, and more efficient sample use in early learning stages, as demonstrated in broader tests on 57 Atari games.

However, the blog acknowledges clear limitations. Programmatic strategies struggle with tasks requiring long-horizon planning or complex perception (e.g., Montezuma's Revenge), areas where neural networks excel. The future vision is a hybrid architecture: specialized neural networks for fast perception (System 1), HL systems for rules, safety, and local recovery (also System 1), and LLM agents providing high-level feedback and learning from the HL system's data (System 2). The core proposition is that in the era of capable coding agents, a significant portion of an AI's learned experience could be maintained as an auditable, evolving software system.
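The "code-run-debug-update" loop described above can be sketched in a few lines. This is a minimal, heavily stubbed illustration of the idea, not Weng Jiayi's actual system: every name (`run_episode`, `revise_policy`, `heuristic_learning`) is hypothetical, and the environment and coding agent are replaced by trivial stand-ins so the loop is runnable.

```python
def run_episode(policy_source: str) -> dict:
    """Execute the current heuristic policy and return a log.
    A real system would run the policy in Breakout and parse
    logs/replays; here the 'score' is a stand-in metric."""
    return {"score": len(policy_source), "failures": []}

def revise_policy(policy_source: str, log: dict) -> str:
    """Ask a coding agent to patch the policy based on the log.
    Stubbed as a trivial append so the sketch executes."""
    return policy_source + "\n# patched after score %d" % log["score"]

def heuristic_learning(initial_policy: str, iterations: int = 3):
    """The outer HL loop: code -> run -> debug -> update,
    tracking the best score seen so far."""
    policy, best = initial_policy, -1
    for _ in range(iterations):
        log = run_episode(policy)
        best = max(best, log["score"])
        policy = revise_policy(policy, log)
    return policy, best
```

The point of the sketch is structural: the agent's "experience" accumulates as inspectable source text rather than as weight updates, which is what makes the resulting system auditable.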

Marsbit · 47m ago


Your Claude Will Dream Tonight, Don't Disturb It

This article explores the recent phenomenon of AI companies increasingly using anthropomorphic language—like "thinking," "memory," "hallucination," and now "dreaming"—to describe machine learning processes. Focusing on Anthropic's newly announced "Dreaming" feature for its Claude Agent platform, the piece explains that this function is essentially an automated, offline batch processing of an agent's operational logs. It analyzes past task sessions to identify patterns, optimize future actions, and consolidate learnings into a persistent memory system, akin to a form of reinforcement learning and self-correction. The article draws parallels to similar features in other AI agent systems like Hermes Agent and OpenClaw, which also implement mechanisms for reviewing historical data, extracting reusable "skills," and strengthening long-term memory. It notes a key difference from human dreaming: these AI "dreams" still consume computational resources and user tokens. Further context is provided by discussing the technical challenges of managing AI "memory" or context, highlighting the computational expense of large context windows and innovations like Subquadratic's new model claiming drastically longer contexts.

The core critique argues that this strategic use of human-centric vocabulary does more than market products; it subtly reshapes user perception. By framing algorithms with terms associated with consciousness, companies blur the line between tool and autonomous entity. This linguistic shift can influence user expectations, tolerance for errors, and even perceptions of responsibility when systems fail, potentially diverting scrutiny from the companies and engineers behind the technology. The article concludes by speculating that terms like "daydreaming" for predictive task simulation might be next, continuing this trend of embedding the idea of an "inner life" into computational processes.
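Stripped of the anthropomorphic framing, the "dreaming" process described above is offline log consolidation. The sketch below illustrates that mechanical core under stated assumptions: `dream` and `consolidate` are invented names, the log format is hypothetical, and this is not Anthropic's implementation.

```python
from collections import Counter

def dream(session_logs: list, min_count: int = 2) -> dict:
    """Offline pass over logged action sequences from past sessions:
    keep any action that recurred at least `min_count` times,
    treating recurrence as a crude signal of a reusable pattern."""
    counts = Counter(a for session in session_logs for a in session)
    return {a: n for a, n in counts.items() if n >= min_count}

def consolidate(memory: dict, patterns: dict) -> dict:
    """Merge newly extracted patterns into persistent memory,
    strengthening entries that were seen again."""
    for action, n in patterns.items():
        memory[action] = memory.get(action, 0) + n
    return memory
```

Nothing in the loop requires consciousness-flavored vocabulary: it is batch analytics over logs, which is part of the article's critique.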

Marsbit · 49m ago

