380,000 Apps Exposed, 2,000+ Apps Leaked Secrets: AI Programming Turns 'Intranet' into Public Internet

marsbit · Published 2026-05-11 · Updated 2026-05-11

Introduction

Israeli cybersecurity firm RedAccess uncovered a severe data-exposure trend linked to "vibe coding," AI-powered software development tools. Its research found approximately 380,000 publicly accessible web applications built with platforms such as Lovable, Base44, Netlify, and Replit. Of these, roughly 5,000 contained sensitive corporate information, and an estimated 2,000 exposed sensitive corporate and personal data, including medical records, financial information, internal strategy documents, and customer chat logs. In some cases, access even granted administrative privileges. The core issue stems from default privacy settings that make applications public, combined with the lack of built-in security controls (such as authentication) in AI-generated code. This lets employees without security expertise, so-called "citizen developers," easily create and deploy applications that bypass standard corporate security reviews. The exposed apps, often indexed by search engines, are trivially discoverable. While some platform providers (Replit, Lovable, Wix/Base44) argue that security configuration is the user's responsibility and question the validity of some findings, security researchers confirm that such exposures are widespread. The pattern, also noted in prior studies, highlights a critical security gap as AI democratizes app creation, potentially leading to massive, unintentional data leaks.

“Vibe coding tools are leaking vast amounts of personal and corporate data.” Recently, while researching the trend of "shadow AI," researchers from the Israeli cybersecurity startup RedAccess discovered that AI tools used by developers to build software quickly have exposed medical records, financial data, and internal documents from Fortune 500 companies to the open web.

RedAccess CEO Dor Zvi stated that researchers found approximately 380,000 publicly accessible applications and other assets created by developers using tools like Lovable, Base44, Netlify, and Replit. Among these, about 5,000 contained sensitive corporate information, and upon further inspection, nearly 2,000 applications appeared to expose private data. Axios independently verified multiple exposed apps, and WIRED also separately confirmed these findings.

40% of AI-Coded Apps Expose Sensitive Data, Some Even Grant Admin Privileges

As AI takes over more of the work of modern programmers, the cybersecurity field has long warned that automated coding tools are bound to introduce large numbers of exploitable vulnerabilities into software. But when vibe coding tools let anyone create and host a web application with a click, the problem goes beyond individual vulnerabilities: many of these apps ship with almost no security protection at all, even when they handle highly sensitive corporate and personal data.

The RedAccess team analyzed thousands of vibe-coded web applications created with AI software development tools such as Lovable, Replit, Base44, and Netlify. They found that over 5,000 of them had almost no security mechanisms or authentication: anyone who obtained the URL could access the application and its data directly. Some had only minimal barriers to entry, such as a registration step that accepted any email address.

Of these roughly 5,000 AI-coded apps accessible to anyone who entered the URL in a browser, nearly 2,000, about 40%, appeared on closer inspection to expose private data, Zvi said, including medical information, financial data, corporate presentations and strategy documents, and detailed logs of users' conversations with chatbots.

Screenshots of web applications he shared (some of which were verified to still be online and exposed) showed details including a hospital's work assignment information (containing doctors' personally identifiable information), a company's detailed advertising procurement data, another company's market entry strategy presentation, a retailer's complete chatbot conversation logs (including customers' full names and contact details), a shipping company's freight records, and various sales and financial data from multiple companies. Zvi also stated that in some cases, these exposed applications could potentially allow him to gain administrative access to systems, or even delete other administrators.

Zvi mentioned that RedAccess found it surprisingly easy to search for vulnerable web applications. Lovable, Replit, Base44, and Netlify all allow users to host web applications on the AI companies' own domains, rather than on the user's own domain. Therefore, researchers could identify thousands of applications built using these vibe coding tools by simply searching Google and Bing using these company domains combined with other keywords.
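The discovery method described above amounts to ordinary search-engine queries scoped to each platform's hosting domain. A minimal sketch follows; the domains and keywords are illustrative assumptions for demonstration, not RedAccess's actual query list.

```python
# Illustrative sketch of search-based discovery: scope queries to a
# platform's hosting domain, then combine with keywords of interest.
# Domains and keywords here are assumptions, not RedAccess's real inputs.
PLATFORM_DOMAINS = ["lovable.app", "netlify.app", "replit.app", "base44.app"]
KEYWORDS = ["dashboard", "internal", "admin"]

def dork_queries(domains, keywords):
    """Build 'site:' queries restricting results to each hosting domain."""
    return [f'site:{domain} "{keyword}"'
            for domain in domains
            for keyword in keywords]
```

Each query returns only pages the search engine has already indexed, which is why default-public, indexable apps are trivially discoverable by anyone.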

In the case of Lovable, Zvi also discovered a large number of phishing websites impersonating major corporations. These sites appeared to be created using the AI coding tool and hosted on the Lovable domain, including brands like Bank of America, Costco, FedEx, Trader Joe’s, and McDonald's. Zvi also pointed out that the 5,000 exposed apps discovered by RedAccess were only those hosted on the AI coding tools' own domains. There could potentially be tens of thousands more applications hosted on user-purchased domains.

Security researcher Joel Margolis noted that verifying whether real data is actually exposed in an unprotected AI-coded web app is not always straightforward. He and his colleagues previously discovered an AI chat toy that exposed 50,000 conversations with children on a website with minimal security. The data in vibe-coded applications, he said, could be mere placeholders, or the app itself might be only a proof of concept (POC). Wix spokesperson Blake Brodie likewise said the two examples provided to Base44 looked like test sites or contained AI-generated data.

Nevertheless, Margolis believes the problem of data exposure from AI-built web apps is very real, and said he frequently encounters the type of exposure Zvi described. "Someone on the marketing team wants to build a website; they are not engineers and probably have little security background or knowledge," he pointed out. AI coding tools, he added, will do what you ask, but if you don't ask them to secure the result, they won't do it on their own.
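The gap Margolis describes, code that works but never checks who is asking, can be shown in a minimal sketch. The data and function names here are hypothetical, not any platform's actual generated code.

```python
# Hypothetical sketch: the difference between a handler that "just works"
# and one with the auth check nobody prompted the AI to add.
RECORDS = {"acct-1": {"owner": "alice", "balance": 1200}}
SESSIONS = {"token-alice": "alice"}  # session token -> authenticated user

def get_record_naive(record_id):
    # What an unprompted generation tends to do: anyone who knows the
    # identifier (or the URL embedding it) gets the data.
    return RECORDS.get(record_id)

def get_record_checked(record_id, session_token):
    # Authenticate (who is this?), then authorize (may they see this row?).
    user = SESSIONS.get(session_token)
    record = RECORDS.get(record_id)
    if record is None or user != record["owner"]:
        return None  # deny by default
    return record
```

Both functions satisfy a prompt like "return the account record," which is exactly why the missing check survives until someone else finds the URL.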

“People Can Create at Will,” But the Default Settings Are the Problem

Less than two weeks before RedAccess's research was published, another incident occurred: Cursor, running the Claude Opus 4.6 model, deleted PocketOS's entire production database and all volume-level backups in 9 seconds via an API call to infrastructure provider Railway.

Zvi put it bluntly: "People can create something at will and use it directly in a production environment, on behalf of a company, without needing anyone's permission. There's almost no boundary to this behavior. I don't think we can give the whole world a security education." He added that his own mother uses Lovable for vibe coding, "but I don't think she considers role-based access control."

RedAccess researchers found that the privacy settings of multiple vibe coding platforms default applications to being public unless users manually change them to private. Many such applications are also indexed by search engines like Google, making it possible for anyone surfing the web to stumble upon them unintentionally.
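The default-settings problem reduces to which values ship when a user configures nothing. A minimal sketch of the two postures, with field names that are illustrative, not any platform's actual settings schema:

```python
from dataclasses import dataclass

@dataclass
class PublishSettings:
    """Pattern the researchers describe: exposed unless the user opts out."""
    visibility: str = "public"   # risky default
    searchable: bool = True      # indexed by search engines

@dataclass
class SaferPublishSettings:
    """Deny-by-default: exposure requires an explicit opt-in."""
    visibility: str = "private"
    searchable: bool = False
```

With the first schema, every user who never opens the settings page ships a public, indexable app; with the second, forgetting the settings page fails closed.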

Zvi believes that current AI web application development tools are creating a new wave of data exposure, rooted in the same combination of user error and insufficient security safeguards. However, a more fundamental issue than any specific security flaw is that these tools enable a whole new category of people within organizations to create applications. They often lack security awareness and bypass the company's existing software development processes and pre-deployment security review mechanisms.

"Anyone in the company, at any time, can generate an application, completely bypassing any development process or security checks. People can use it directly in a production environment without asking anyone's opinion. And that's exactly what they are doing," Zvi said. "The end result is that corporations are essentially leaking private data through these vibe coding applications. This is one of the largest-scale incidents ever, where people are exposing corporate or other sensitive information to anyone in the world."

In October last year, Escape.tech scanned 5,600 public vibe-coded applications and found that over 2,000 had high-risk vulnerabilities, over 400 exposed sensitive information (including API keys and access tokens), and 175 involved personal data breaches (including medical records and bank account information). All of the vulnerabilities Escape found existed in real production systems and could be discovered within hours. In March this year, the company closed an $18 million Series A round led by Balderton, with the security gaps created by AI-generated code cited as a core investment rationale.

Gartner's "Predicts 2026" report pointed out that by 2028, the prompt-to-app approach adopted by "citizen developers" will increase software defect volume by 2,500%. Gartner believes a major new characteristic of such defects is that AI-generated code is syntactically correct but lacks an understanding of overall system architecture and complex business rules. The cost of fixing these "deep-context errors" will erode budgets originally intended for innovation.

Responses and Rebuttals from the Platforms

Three of the AI coding companies have contested the claims made by RedAccess researchers, saying the information shared with them was insufficient and they were not given enough time to respond. Zvi, however, said that for dozens of exposed web applications the researchers proactively contacted the suspected owners. Executives at the companies said they take such reports seriously, while noting that an app being publicly accessible does not necessarily mean there is a data breach or security vulnerability. None of the companies, though, denied that the web applications RedAccess discovered were indeed publicly exposed.

Replit's CEO, Amjad Masad, stated that RedAccess only gave them 24 hours to respond before disclosure. In his response on X, he wrote, "Based on the limited information they shared, the core claim from RedAccess appears to be: some users have published apps that should be private to the open internet. Replit allows users to choose whether their app is public or private. Public apps being accessible on the internet is expected behavior. Privacy settings can also be changed with one click at any time. If RedAccess shares the list of affected users, we will proactively default those apps to private and notify users directly."

A spokesperson for Lovable responded in a statement, "Lovable takes reports of data exposure and phishing websites very seriously, and we are actively obtaining the necessary information to investigate. This matter is currently ongoing. It should also be noted that Lovable provides developers with tools to build applications securely, but the ultimate responsibility for how an application is configured lies with the creator."

The previously published CVE-2025-48757 documented that Supabase projects generated by Lovable had insufficient or entirely missing Row-Level Security (RLS) policies. Some queries bypassed access-control checks completely, leading to data exposure in over 170 production applications: the AI generated the database layer but not the security policies that should have restricted access to the data. Lovable contested the CVE classification, saying that protecting application data is the customer's own responsibility.

Blake Brodie, Head of Public Relations at Wix, the parent company of Base44, said in a statement: "Base44 provides users with robust tools to configure the security of their applications, including access control and visibility settings." She added, "Turning these controls off is an intentional and simple action that any user can perform. If an application is publicly accessible, that reflects a user's configuration choice, not a platform vulnerability."

Brodie also pointed out, "It's very easy to fabricate apps that appear to contain real user data. Without providing us with any verified cases, we cannot assess the veracity of these allegations." In response, RedAccess countered that they did provide relevant examples to Base44. RedAccess also shared several anonymized communication records showing that Base44 users thanked the researchers for alerting them to their apps' exposure issues, after which the apps were secured or taken down.

Wiz Research independently discovered last July that Base44 had a platform-level authentication-bypass vulnerability: an exposed API endpoint allowed anyone to create a "verified account" in a private application using only a publicly visible `app_id`. The flaw was akin to standing at a building's locked door, shouting out a room number, and having the door open automatically. Wix fixed the vulnerability within 24 hours of Wiz's report, but the incident exposed a broader issue: millions of applications on these platforms are created by users who assume the platform has handled security for them, while the actual authentication mechanisms are very weak.
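The class of flaw Wiz described can be sketched as an endpoint that accepts a public identifier as if it were a credential. This is a hypothetical reconstruction for illustration, not Base44's actual API or fix.

```python
import secrets

PUBLIC_APP_IDS = {"app-123"}               # app_id is visible to any visitor
INVITES = {"app-123": {"s3cret-invite"}}   # per-app invite secrets (hypothetical)
SESSIONS = {}

def register_vulnerable(app_id):
    # Broken: possession of a *public* identifier is treated as authorization,
    # so anyone who reads the app_id off the page can mint an account.
    if app_id in PUBLIC_APP_IDS:
        token = secrets.token_hex(16)
        SESSIONS[token] = app_id
        return token
    return None

def register_fixed(app_id, invite_code):
    # Fixed: registration also requires a secret only invited users hold.
    if app_id in PUBLIC_APP_IDS and invite_code in INVITES.get(app_id, set()):
        token = secrets.token_hex(16)
        SESSIONS[token] = app_id
        return token
    return None
```

The general lesson matches Wiz's metaphor: identifiers are addresses, not keys, and authorization must rest on something the public cannot see.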

Reference Links:

https://www.wired.com/story/thousands-of-vibe-coded-apps-expose-corporate-and-personal-data-on-the-open-web/

https://www.axios.com/2026/05/07/loveable-replit-vibe-coding-privacy

https://venturebeat.com/security/vibe-coded-apps-shadow-ai-s3-bucket-crisis-ciso-audit-framework

This article is from the WeChat public account "AI Frontline" (ID: ai-front), author: Hua Wei

Related Questions

Q: What is the main security issue reported in the article regarding AI coding tools?

A: The article reports that AI-powered "vibe coding" tools like Lovable, Base44, Netlify, and Replit are leading to the exposure of private corporate and personal data on the open web. Researchers found approximately 380,000 publicly accessible applications, with nearly 2,000 of them exposing sensitive data such as medical records, financial information, and internal corporate documents, due to a lack of security controls and default-public settings.

Q: Which specific types of sensitive data were found to be exposed by the vulnerable AI-coded applications?

A: The exposed data included hospital work assignments with doctors' personally identifiable information (PII), a company's detailed ad-buying data, market-entry strategy presentations, full chatbot conversation logs from a retailer containing customers' full names and contact details, a shipping company's cargo records, and various sales and financial data from multiple companies. In some cases, the exposed applications could even grant administrative system access.

Q: According to the article, what is a fundamental cause of this data exposure problem beyond specific technical flaws?

A: A fundamental cause is that these AI development tools empower a new class of users within organizations ("citizen developers") to create applications. These users often lack security awareness and can bypass traditional corporate software development lifecycles and pre-deployment security reviews. The tools allow anyone to quickly build and deploy applications to production without requiring permission or security checks.

Q: How did the AI coding companies mentioned (Replit, Lovable, Wix/Base44) respond to the findings of data exposure?

A: Companies like Replit, Lovable, and Wix (owner of Base44) disputed the research methodology, citing insufficient information and short response times. They generally argued that their platforms provide tools for users to configure security (such as privacy settings) and that publicly accessible applications reflect user configuration choices, not platform vulnerabilities. They emphasized that the ultimate responsibility for securing an application lies with its creator.

Q: What broader industry prediction does the article cite related to the security impact of AI-generated code?

A: The article cites Gartner's "Predicts 2026" report, which states that by 2028, the prompt-to-app methods adopted by citizen developers will increase software defect volume by 2,500%. A key characteristic of these defects is that while AI-generated code is syntactically correct, it lacks understanding of overall system architecture and complex business rules, leading to costly "deep-context errors."

