Lighter DEX Introduces Native LIT Token as Part of Ecosystem Expansion

TheNewsCrypto — Published 2025-12-30 · Last updated 2025-12-30

Introduction

Lighter, a decentralized perpetual exchange, has launched its native Light Infrastructure Token (LIT) as part of its ecosystem expansion. The token has a total supply of 1 billion, with 50% allocated to the ecosystem—including an airdrop for points program participants—and the other 50% reserved for the team and investors. LIT will be used for governance, staking to access premium trading and data services, and paying platform fees. Shortly after the announcement, Lighter activated the LIT/USDC trading pair, making the token available for trading.

Lighter, a decentralized perpetual exchange and DeFi infrastructure platform, unveiled its native token, the Light Infrastructure Token (LIT), this morning in a thread posted on its X account describing the token's allocation, structure, and utilities. Within a few hours of the announcement, Lighter activated the LIT/USDC trading pair, and the token went live on the platform.

Lighter is built as an Ethereum Layer 2 using zero-knowledge rollup technology. In the same thread, the team noted that earnings generated by its main DEX product, as well as by future products and services, can be tracked openly by anyone in real time on-chain, and will be split between growth spending and buybacks depending on market circumstances.

Token Allocation and LIT Use Cases

In terms of allocation, the total supply of LIT is 1 billion tokens. Lighter clarified that half of the supply is reserved for the ecosystem, while the other 50% goes to the team and investors: 26% vests to the team and 24% is allocated to investors.

Of the ecosystem portion, 25% of the total supply will be distributed immediately through an airdrop to participants in the 2025 points season, which produced 12.5 million points. The remaining 25% is reserved for future reward schemes, with a small portion going toward partnerships and ecosystem growth.
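The allocation described above can be sanity-checked with a short sketch. The percentages come from the article as reported; the category labels and the even airdrop/rewards split of the ecosystem half are taken from the text, while vesting schedules and any per-user airdrop math are not public here and are left out.

```python
# Sketch of the LIT token allocation as described in the announcement.
# Percentages are from the article; labels are illustrative.
TOTAL_SUPPLY = 1_000_000_000  # 1 billion LIT

allocation_pct = {
    "airdrop (2025 points season)": 25,          # distributed immediately
    "future ecosystem rewards & partnerships": 25,
    "team (vested)": 26,
    "investors": 24,
}

# The shares must cover the whole supply exactly.
assert sum(allocation_pct.values()) == 100

# Token counts per bucket (integer percentages, so division is exact).
tokens = {name: TOTAL_SUPPLY * pct // 100 for name, pct in allocation_pct.items()}

for name, amount in tokens.items():
    print(f"{name}: {amount:,} LIT")
```

Note that the 12.5 million points from the 2025 season are a separate unit from the 250 million LIT (25% of supply) being airdropped; the conversion rate between points and tokens was not specified.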

The team added that LIT is not only used for governance but also has other use cases. The Lighter platform offers premium trading and data verification services arranged in tiers, which require staking LIT to unlock in full. LIT is also used to pay fees for all platform activities.

Lighter Introduces New Trading Pair

Shortly after the token announcement, Lighter announced the LIT/USDC trading pair this morning through its X account and Discord channel. Now that LIT is live and available for trading, it is drawing attention across crypto circles.

Related Questions

Q: What is the name of the native token introduced by Lighter DEX, and what does LIT stand for?

A: The native token is LIT, which stands for Light Infrastructure Token.

Q: On which blockchain and Layer 2 technology is Lighter built?

A: Lighter is built as an Ethereum Layer 2 using zero-knowledge rollup technology.

Q: What is the total supply of LIT tokens, and how is it allocated between the ecosystem, team, and investors?

A: The total supply is 1 billion LIT. 50% is allocated to the ecosystem, 26% to the team, and 24% to investors.

Q: What are some of the use cases for the LIT token on the Lighter platform?

A: LIT is used for governance, for staking to access tiered trading and data verification services, and for paying fees on all platform activities.

Q: How did Lighter distribute a portion of the tokens to users, and what was the program called?

A: Lighter is distributing 25% of the total supply through an airdrop to participants in its 2025 points season, which produced 12.5 million points.

