Tracking 118 Coins Launched in 2025: 85% Are Trading Below Their Launch Valuation

Deep Tide TechFlow · Published on 2025-12-22 · Last updated on 2025-12-22

Abstract

An analysis of 118 Token Generation Events (TGEs) that occurred in 2025 reveals a significant downturn in post-launch performance. The study compared each token's current fully diluted valuation (FDV) to its valuation at issuance. Key findings show that 84.7% (100 out of 118) of these assets are currently valued below their initial TGE price. This indicates that approximately 4 out of every 5 new tokens are trading at a lower valuation than at launch. The median token experienced a 71% decline in FDV (and a 67% drop in market cap) since its release. Only 15% of the tokens analyzed have maintained a valuation higher than at their TGE. The report concludes that participating in a TGE can no longer be considered a form of "early investment" under these market conditions.

Author: Ash

Compiled by: Deep Tide TechFlow

We tracked 118 Token Generation Events (TGEs) launched in 2025 and compared their current fully diluted valuation (FDV) with their valuation at issuance. The results are as follows:

  • 84.7% (100 out of 118) currently have a valuation lower than at TGE;

  • This means approximately 4 out of every 5 newly issued tokens are valued below their issuance level;

  • The median token's fully diluted valuation (FDV) has fallen by 71% since issuance (market cap down 67%);

  • Only 15% of tokens remained "green" (i.e., above issuance valuation) after TGE.
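The headline figures above can be reproduced from a per-token table of launch-time and current valuations. A minimal sketch with made-up sample data (the field names and numbers are assumptions for illustration, not the article's actual dataset):

```python
# Sketch: recomputing the headline statistics from a hypothetical
# per-token dataset. Each record holds FDV and market cap at TGE and today.
from statistics import median

tokens = [
    {"fdv_tge": 500e6, "fdv_now": 120e6, "mcap_tge": 80e6, "mcap_now": 30e6},
    {"fdv_tge": 200e6, "fdv_now": 260e6, "mcap_tge": 40e6, "mcap_now": 55e6},
    {"fdv_tge": 1_000e6, "fdv_now": 250e6, "mcap_tge": 150e6, "mcap_now": 45e6},
]

# Share of tokens whose current FDV sits below their FDV at issuance
below = sum(t["fdv_now"] < t["fdv_tge"] for t in tokens)
share_below = below / len(tokens)

# Median percentage change since issuance, for FDV and for market cap
fdv_change = median(t["fdv_now"] / t["fdv_tge"] - 1 for t in tokens)
mcap_change = median(t["mcap_now"] / t["mcap_tge"] - 1 for t in tokens)

print(f"{share_below:.1%} below TGE valuation")
print(f"median FDV change: {fdv_change:.0%}, median market-cap change: {mcap_change:.0%}")
```

With the article's full 118-token dataset, the same computation would yield the 84.7% share, the -71% median FDV change, and the -67% median market-cap change reported above.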

Nowadays, participating in a TGE can hardly be called an "early investment" anymore. A lamentable state of affairs.

View the complete data via the link

Related Questions

Q: What percentage of the 118 tokens tracked from 2025 TGEs are currently valued below their initial valuation?

A: 84.7% (100 out of 118 tokens) are currently valued below their initial TGE valuation.

Q: How does the median token's fully diluted valuation (FDV) compare with its issuance valuation?

A: The median token's FDV has declined by 71% since its issuance.

Q: What proportion of the tracked tokens maintained a valuation higher than their initial issuance?

A: Only 15% of the tokens remained "green" (valued above their initial issuance valuation).

Q: According to the article, what is the current perception of participating in a Token Generation Event (TGE)?

A: Participating in a TGE is no longer considered "early investment."

Q: What metric, besides FDV, is mentioned to have declined for the median token since issuance?

A: The market capitalization of the median token has declined by 67% since issuance.

Related Reads

OpenAI Post-Training Engineer Weng Jiayi Proposes a New Paradigm Hypothesis for Agentic AI

OpenAI engineer Weng Jiayi's "Heuristic Learning" experiments propose a new paradigm for Agentic AI, suggesting that intelligent agents can improve not just by training neural networks, but also by autonomously writing and refining code based on environmental feedback. In the experiment, a coding agent (powered by Codex) was tasked with developing and maintaining a programmatic strategy for the Atari game Breakout. Starting from a basic prompt, the agent iteratively wrote code, ran the game, analyzed logs and video replays to identify failures, and then modified the code. Through this engineering loop of "code-run-debug-update," it evolved a pure Python heuristic strategy that achieved a perfect score of 864 in Breakout and performed competitively with deep reinforcement learning (RL) algorithms in MuJoCo control tasks like Ant and HalfCheetah.

This approach, termed Heuristic Learning (HL), contrasts with deep RL. In HL, experience is captured in readable, modifiable code, tests, logs, and configurations—a software system—rather than being encoded solely into opaque neural network weights. This offers potential advantages in explainability, auditability for safety-critical applications, easier integration of regression tests to combat catastrophic forgetting, and more efficient sample use in early learning stages, as demonstrated in broader tests on 57 Atari games.

However, the blog acknowledges clear limitations. Programmatic strategies struggle with tasks requiring long-horizon planning or complex perception (e.g., Montezuma's Revenge), areas where neural networks excel. The future vision is a hybrid architecture: specialized neural networks for fast perception (System 1), HL systems for rules, safety, and local recovery (also System 1), and LLM agents providing high-level feedback and learning from the HL system's data (System 2).

The core proposition is that in the era of capable coding agents, a significant portion of an AI's learned experience could be maintained as an auditable, evolving software system.
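The "code-run-debug-update" loop described in the summary can be sketched as a simple improvement cycle. Everything below (the function names, the toy scoring, the feedback format) is a hypothetical stand-in for illustration, not Weng Jiayi's actual setup:

```python
# Hypothetical sketch of a heuristic-learning loop: a coding agent maintains
# a plain-text policy script and revises it based on feedback from each run.

def run_episode(policy_src: str) -> dict:
    """Stand-in for executing the policy in an environment (e.g. Breakout)
    and collecting a score plus logs. Here it trivially rewards longer code."""
    return {"score": len(policy_src), "log": "ok"}

def revise(policy_src: str, feedback: dict) -> str:
    """Stand-in for the LLM coding agent editing the policy from feedback."""
    return policy_src + "\n# tweak informed by: " + feedback["log"]

policy = "# v0: naive paddle-tracking heuristic"
best = run_episode(policy)
for _ in range(5):                       # the code-run-debug-update loop
    candidate = revise(policy, best)     # "debug-update": rewrite the code
    result = run_episode(candidate)      # "run": evaluate the new version
    if result["score"] > best["score"]:  # keep only improvements
        policy, best = candidate, result
print(best["score"])
```

The accept-only-if-better check plays the role the summary attributes to regression tests: revisions that degrade performance are discarded, so learned experience accumulates as an auditable, monotonically improving script rather than as opaque weights.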

marsbit · 27m ago

Your Claude Will Dream Tonight, Don't Disturb It

This article explores the recent phenomenon of AI companies increasingly using anthropomorphic language—like "thinking," "memory," "hallucination," and now "dreaming"—to describe machine learning processes. Focusing on Anthropic's newly announced "Dreaming" feature for its Claude Agent platform, the piece explains that this function is essentially an automated, offline batch processing of an agent's operational logs. It analyzes past task sessions to identify patterns, optimize future actions, and consolidate learnings into a persistent memory system, akin to a form of reinforcement learning and self-correction. The article draws parallels to similar features in other AI agent systems like Hermes Agent and OpenClaw, which also implement mechanisms for reviewing historical data, extracting reusable "skills," and strengthening long-term memory. It notes a key difference from human dreaming: these AI "dreams" still consume computational resources and user tokens.

Further context is provided by discussing the technical challenges of managing AI "memory" or context, highlighting the computational expense of large context windows and innovations like Subquadratic's new model claiming drastically longer contexts.

The core critique argues that this strategic use of human-centric vocabulary does more than market products; it subtly reshapes user perception. By framing algorithms with terms associated with consciousness, companies blur the line between tool and autonomous entity. This linguistic shift can influence user expectations, tolerance for errors, and even perceptions of responsibility when systems fail, potentially diverting scrutiny from the companies and engineers behind the technology. The article concludes by speculating that terms like "daydreaming" for predictive task simulation might be next, continuing this trend of embedding the idea of an "inner life" into computational processes.
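Stripped of the anthropomorphic framing, the mechanism the summary describes is an offline batch pass over session logs that distills recurring patterns into a persistent memory store. A minimal hypothetical sketch (the data shapes and note format are assumptions, not Anthropic's actual implementation):

```python
# Hypothetical sketch of an offline "dreaming" pass: batch-scan past session
# logs, tally recurring non-success events, and persist the most frequent
# ones as reusable notes for a long-term memory store.
from collections import Counter

sessions = [
    {"task": "deploy", "events": ["timeout", "retry", "ok"]},
    {"task": "deploy", "events": ["timeout", "ok"]},
    {"task": "scrape", "events": ["rate_limit", "ok"]},
]

# Count every event that was not a clean success across all past sessions
patterns = Counter(e for s in sessions for e in s["events"] if e != "ok")

# Consolidate the most common patterns into persistent memory entries
memory = [f"seen '{p}' {n}x; prefer mitigation next time"
          for p, n in patterns.most_common(2)]
print(memory)
```

Because this pass runs over the full log history rather than live interactions, it is naturally scheduled as an offline batch job, which is also why, as the article notes, these "dreams" still consume compute and tokens.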

marsbit · 29m ago
