Interactive Tutorial | Perle Labs, Which Raised $17.5 Million, Launches Season 1 Points Campaign

Odaily Planet Daily (Odaily星球日报) | Published 2026-01-22 | Last updated 2026-01-22

Abstract

Perle Labs, an AI training data platform backed by $17.5 million in funding, has launched its Season 1 points campaign. The platform, which uses human experts to perform verifiable, on-chain data labeling on Solana, allows users to earn points by completing tasks. The interactive tasks, available on their app, include: a legal contract classification quest (1,000 points), social media account bindings (500 points), a product tagging quest from complaint emails (1,000 points), a meeting caption quest requiring audio recordings (1,000 points, pending manual review), and a medical specialty classification quest (1,000 points). Each task requires connecting a Solana wallet with some SOL for transaction fees. Users can also earn a 10% referral bonus from invited friends. Points and badges can be tracked in the "Earnings" section.

Original | Odaily Planet Daily (@OdailyChina)

Author | Asher (@Asher_0210)

On January 19, Perle Labs, an AI training data platform driven by human experts, announced on X that its beta ended on January 6 and that the Season 1 campaign is now officially live. Users can earn points by completing various tasks.

Below, Odaily Planet Daily will give you a quick overview of Perle Labs and a step-by-step guide to participating in the Season 1 campaign to earn points and potentially qualify for a future token airdrop.

Perle Labs: An AI Training Data Platform Driven by Human Experts

Project Introduction

Perle Labs is an AI training data platform driven by human experts, focused on providing high-quality datasets to support decentralized AI development. According to ROOTDATA, Perle Labs has raised a total of $17.5 million in funding to date.

On the Perle Labs platform, experts (such as doctors, financial analysts) perform data annotation. Each step is recorded on the Solana blockchain, making it tamper-proof and fully traceable. Clients (such as pharmaceutical companies, banks) can directly access this "certified" data via API.

Step-by-Step Interaction Tutorial

Interaction Link: https://app.perle.xyz/

STEP 1. Click "Connect Wallet" to connect a wallet supporting the Solana network (you will get a personal referral link; inviting friends earns you 10% of the points they earn). Ensure the connected wallet has a small amount of SOL, as each task requires on-chain confirmation.
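Before starting, it's worth sanity-checking that the wallet holds enough SOL for the on-chain confirmations. Solana's base fee is 5,000 lamports per signature (1 SOL = 10^9 lamports); the helper below is a minimal sketch of that check, assuming each task answer costs roughly one base-fee transaction (actual fees can be higher under network load, hence the safety margin).

```python
LAMPORTS_PER_SOL = 1_000_000_000  # 1 SOL = 10^9 lamports
BASE_FEE_LAMPORTS = 5_000         # Solana base fee per signature

def covers_fees(balance_sol: float, num_txs: int, margin: float = 2.0) -> bool:
    """Return True if `balance_sol` covers `num_txs` base-fee transactions,
    with a safety `margin` since real fees can exceed the base fee."""
    needed_lamports = num_txs * BASE_FEE_LAMPORTS * margin
    return balance_sol * LAMPORTS_PER_SOL >= needed_lamports

# The five quests involve roughly 45 on-chain confirmations in total,
# so even 0.01 SOL comfortably covers the base fees.
print(covers_fees(0.01, 45))
```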

STEP 2. Click "Marketplace". There will be 5 tasks available to participate in.

Except for the social binding task, each task requires watching the full instructions before it unlocks.

Task 1. Join The Perle Labs Legal Classification Quest: Read excerpts from legal contracts and select the corresponding clause (the questions are highly specialized; using AI tools for assistance is recommended). 10 questions, 100 points each, 1000 points total.

Task 2. Welcome to Perle Labs: Bind your personal X, Discord, Telegram, etc. 5 tasks total, 100 points each, 500 points total.

Task 3. Join The Perle Labs Product Tagging Quest: Find the mentioned product in complaint email messages. 10 questions, 100 points each, 1000 points total.

Task 4. Join The Perle Labs Meeting Caption Quest: Read sentences aloud (in English). 10 questions, 100 points each, 1000 points total. The audio portion requires manual review and will not show as completed immediately; simply submit the recording on-chain and wait for review.

Task 5. Join The Perle Labs Medical Specialty Quest: Read the content of doctor's notes to determine the medical specialty of the case (the questions are highly specialized; using AI tools for assistance is recommended). 10 questions, 100 points each, 1000 points total.
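The points on offer across Season 1 add up as follows; a short sketch of the arithmetic (the 10% referral rate is from the article, everything else is the per-quest totals listed above):

```python
# Points per quest, as listed in the Season 1 marketplace
quests = {
    "Legal Classification": 1000,
    "Social Account Binding": 500,
    "Product Tagging": 1000,
    "Meeting Caption": 1000,
    "Medical Specialty": 1000,
}

total = sum(quests.values())
print(total)  # 4500 points for completing all five quests

def referral_bonus(invitee_points: list[int], rate: float = 0.10) -> int:
    """Referral bonus: 10% of the points earned by invited friends."""
    return int(sum(invitee_points) * rate)

print(referral_bonus([4500, 4500]))  # 900 — two invitees completing everything
```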

STEP 3. Click "Earnings" to view your points and badge acquisition status.

Related Questions

Q: What is Perle Labs and what is its primary focus?

A: Perle Labs is a human expert-driven AI training data platform that focuses on providing high-quality datasets to support decentralized AI development.

Q: How much total funding has Perle Labs raised according to the article?

A: Perle Labs has raised a total of $17.5 million in funding.

Q: What is the purpose of the Season 1 campaign launched by Perle Labs?

A: The purpose of the Season 1 campaign is for users to complete various tasks to earn points, potentially for future token airdrops.

Q: Which blockchain does Perle Labs use to record data annotation steps?

A: Perle Labs records every step of expert data annotation on the Solana blockchain.

Q: What are the five main quests available in the Marketplace for users to earn points?

A: The five main quests are: 1. Legal Classification Quest, 2. Social Account Binding, 3. Product Tagging Quest, 4. Meeting Caption Quest, and 5. Medical Specialty Quest.

