U.S. courts deliver sentencing in SafeMoon case as SBF pushes for new trial

ambcrypto · Published 2026-02-10 · Last updated 2026-02-10

Introduction

Two major U.S. crypto fraud cases advanced differently on February 10. John Karony, former CEO of SafeMoon, was sentenced to 100 months in prison after victims testified about financial losses caused by his fraudulent assurances. The judge rejected defense arguments and described the scheme as "a massive fraud." Separately, Sam Bankman-Fried filed a pro se motion for a new trial, arguing new witness testimony could weaken the case against him. He was previously convicted and sentenced to 25 years for misusing FTX customer funds. These developments show how crypto cases are diverging—some reaching final sentencing, while others continue through prolonged appeals.

Two of the most prominent crypto fraud cases in U.S. courts moved in different directions on February 10.

In one case, the former chief executive of SafeMoon received a prison sentence following conviction. In another, Sam Bankman-Fried, the former head of collapsed exchange FTX, filed a fresh bid seeking to reopen his case.

SafeMoon CEO sentenced after victim testimony

A federal judge in New York sentenced John Karony, the former CEO of SafeMoon, to 100 months in prison, according to courtroom reporting by Inner City Press.

During the sentencing hearing, multiple victims described how they invested in SafeMoon after being reassured by Karony’s public statements and personal engagement with the community.

Several said the losses reshaped their financial futures, preventing home purchases and affecting education plans.

U.S. prosecutors sought a 12-year sentence, arguing Karony deliberately misled investors and showed no remorse. The defense cited his age and background in seeking a more lenient sentence.

The judge rejected those arguments, describing the scheme as “a massive fraud” and stating it was “more like theft than fraud,” emphasizing that investors had been explicitly assured there would be no rug pull.

The sentence marks a final chapter in one of the most widely followed cases to reach U.S. courts.

SBF files long-shot motion for new trial

In a separate development, Bankman-Fried filed a pro se motion seeking a new trial on his FTX fraud conviction, according to Bloomberg.

The filing, dated 5 February and docketed Tuesday in Manhattan federal court, argues that new witness testimony could undermine the government’s case.

The request is separate from Bankman-Fried’s formal appeal. It comes after a federal appeals court rejected his attempt to secure release while that appeal is pending.

The Second Circuit ruled in December that he had not demonstrated a substantial likelihood of success.

Bankman-Fried was convicted in November 2023 on seven counts of fraud and conspiracy and sentenced in March 2024 to 25 years in prison.

Prosecutors said he misappropriated billions of dollars in FTX customer funds to support risky trading at Alameda Research, political donations, and luxury real estate purchases.

Cases enter different phases

Together, the two developments highlight how high-profile crypto prosecutions are diverging in 2026.

While the SafeMoon case has reached sentencing, delivering closure for victims, the FTX case continues to generate procedural filings as its former executive pursues post-conviction relief.


Final Thoughts

  • The SafeMoon sentencing reflects courts moving toward final judgments in retail-focused crypto fraud cases.
  • Bankman-Fried’s filing underscores how larger cases can remain active for years through appeals and post-conviction motions.

Related Questions

Q: What was the prison sentence given to the former CEO of SafeMoon, John Karony?

A: John Karony, the former CEO of SafeMoon, was sentenced to 100 months in prison.

Q: What did Sam Bankman-Fried file in relation to his FTX fraud conviction?

A: Sam Bankman-Fried filed a pro se motion seeking a new trial on his FTX fraud conviction.

Q: What was the original sentence that U.S. prosecutors sought for John Karony?

A: U.S. prosecutors sought a 12-year sentence for John Karony.

Q: How many years in prison was Sam Bankman-Fried sentenced to in March 2024?

A: Sam Bankman-Fried was sentenced to 25 years in prison in March 2024.

Q: According to the article, what did the judge describe the SafeMoon scheme as?

A: The judge described the SafeMoon scheme as “a massive fraud” and stated it was “more like theft than fraud.”
