P2P team admits to betting on its own raise days after Polymarket tightened insider trading rules

ambcrypto · Published 2026-03-27 · Updated 2026-03-27

Summary

P2P, a crypto project, has admitted that its team placed bets on Polymarket on the outcome of its own $6 million fundraising campaign. The bets were placed roughly 10 days before the raise concluded, using funds from the project's treasury, and generated around $23,000 in profit and loss. The activity occurred just days after Polymarket updated its rules to explicitly prohibit insider trading, including trading by individuals who can influence an event's outcome. While P2P stated the bets were not based on guaranteed information and that it plans to return all proceeds, the case highlights the enforcement challenges decentralized prediction markets face in preventing manipulation and maintaining trust, particularly where involved actors participate. The incident serves as a real-world test of how newly tightened market integrity rules are applied in practice.

A crypto project has disclosed that it placed bets on its own fundraising outcome on Polymarket, drawing attention to how newly tightened market integrity rules may apply in practice.

In a public statement, P2P.me confirmed that an account labeled “P2P Team” on-chain was controlled by its team. The account was used to bet on whether the project would reach a $6 million fundraising target.

The bets were placed roughly 10 days before the raise concluded, when the outcome had not yet been finalized.

The project stated that the capital used came from its foundation’s treasury and that all proceeds would be returned. It added that it plans to liquidate the positions and introduce internal policies governing prediction market activity.

Case emerges days after Polymarket tightened insider trading rules

The disclosure comes just days after Polymarket updated its rules on 23 March, introducing stricter definitions around insider trading and manipulation.

Among the changes, the platform explicitly prohibited trading by individuals who hold positions of influence over an outcome. That category includes participants directly involved in events tied to prediction markets.

While P2P said the bets were placed before the raise was completed and not based on guaranteed allocations, the timing of the disclosure places the case within a broader shift toward tighter oversight on prediction platforms.

On-chain activity shows active trading and profits

Data from the “P2P Team” account indicates the activity was not purely symbolic.

The account recorded roughly $149,000 in trading volume and around $23,000 in profit and loss. Individual positions generated gains of over $11,000. The figures suggest the trades were executed as active positions rather than passive signaling.

Source: Polymarket

P2P acknowledged that failing to disclose the activity at the time was a mistake. The team noted that trading on outcomes it can influence may erode trust, even if the result is not predetermined.

Incident highlights challenges in prediction market enforcement

The case underscores a broader challenge facing decentralized prediction markets: how to manage participation by individuals who may influence event outcomes.

Polymarket’s model relies on open participation and transparent on-chain activity. However, the presence of informed or involved actors can complicate enforcement, particularly when trades occur before outcomes are finalized.

As platforms move to formalize rules around insider activity, real-world cases like this may shape how those standards are interpreted and applied.


Final Summary

  • P2P disclosed betting on its own fundraise outcome, raising questions about insider participation in prediction markets.
  • The incident comes as platforms like Polymarket tighten rules, highlighting ongoing challenges in enforcing market integrity.

Related Questions

Q: What did the P2P team admit to doing on Polymarket?

A: The P2P team admitted to placing bets on its own fundraising outcome, specifically on whether the project would reach its $6 million target.

Q: When did Polymarket update its rules regarding insider trading and manipulation?

A: Polymarket updated its rules on 23 March, introducing stricter definitions around insider trading and manipulation.

Q: What was the financial result of the "P2P Team" account's trading activity?

A: The "P2P Team" account recorded approximately $149,000 in trading volume and around $23,000 in profit and loss, with individual positions generating gains of over $11,000.

Q: According to the article, what is a key challenge for decentralized prediction markets highlighted by this incident?

A: A key challenge is managing participation by individuals who may influence event outcomes, as the presence of informed or involved actors complicates enforcement, especially when trades occur before outcomes are finalized.

Q: What action did P2P say it would take following this disclosure?

A: P2P stated it would liquidate the positions, return all proceeds to its foundation's treasury, and introduce internal policies governing prediction market activity.
