The Image Generation Model That's Hotter Than Nano Banana Has Leaked, Screenshots Are No Longer Evidence | Includes Prompts

marsbit · Published 2026-04-19 · Updated 2026-04-19

Introduction

A new AI image generation model, widely referred to as "GPT Image 2," has been leaked and is demonstrating significant advancements over predecessors like DALL-E 3 and even Google's Nano Banana Pro. It excels in four key areas: text rendering, prompt adherence, photorealism, and world knowledge. The model can generate highly accurate text in multiple languages, including complex Chinese characters, making it capable of producing convincing fake documents, UI screenshots, and product labels. This capability also raises concerns about the reliability of using screenshots as evidence. The model is currently in A/B testing, with a full release expected around May 2026 when DALL-E services are officially retired. It is accessible for testing on the LM Arena platform. The article includes several prompt templates optimized for the model, such as generating realistic app screenshots, product photos with detailed labels, and street scenes with accurate signage. This advancement is reshaping creative workflows but also accelerating the displacement of some traditional design roles.

Is your impression of text-to-image still stuck on Nano Banana?

Times have changed again.

@johnAGI168 https://x.com/johnAGI168/status/2044781168151724067

@0115hippo https://x.com/0115hippo/status/2044722124611539160

In early April, three anonymous image models, codenamed maskingtape-alpha, packingtape-alpha, and gaffertape-alpha, appeared on the LM Arena evaluation platform. They disappeared a few hours later.

OpenAI has not officially announced this model yet, but based on the metadata returned by the API and user-side testing records, it has already gained a widely accepted name: GPT Image 2.

Screenshots Can No Longer Be Used as Evidence

Over the past few years, one of the most obvious weaknesses of AI image generation models has been text within images. In the DALL-E 3 era, if you asked it to write "Hello" in an image, it might output "Hellp" or even "Hl10", with letters tilting drunkenly. GPT Image 1 improved a lot, handling simple English labels. By GPT Image 1.5, its accuracy in rendering English text was close to 95%, but it still had significant flaws with non-Latin scripts like Chinese, Japanese, and Korean.

But the leaked sample images from GPT Image 2 have changed this impression.

@MrLarus https://x.com/MrLarus/status/2044824800909054181

@akokoi1 https://x.com/akokoi1/status/2044789531615056175

The text in the images is exactly what it should be. Chinese characters are clear, with accurate glyphs and complete strokes. Someone tested generating an ID card-style image, where the name, address, and ID number were all rendered correctly, with neat formatting, looking at first glance like a photo of a real document.

This is good news. The improvement in text rendering means generating infographics, posters, product packaging, and complex charts becomes more reliable.

But there's always another side to the coin. A model that can generate photo-realistic ID-style images and precisely render UI screenshots naturally makes "screenshots can be used as evidence" increasingly questionable.

By comparison, this is also a core difference between the GPT Image series and other models. Midjourney has made little progress in text rendering, and the Stable Diffusion series shares the same long-standing weakness. According to the leaked Arena test results, GPT Image 2 surpassed Midjourney in four dimensions: text rendering, instruction following, photorealism, and world knowledge. Midjourney's advantages remain mainly in artistic style and aesthetic control.

Does It Really Know What the World Looks Like?

A tester asked the model to generate a hypothetical GPT-8 product pricing page. The resulting image had a layout that was indeed in the style of the OpenAI website, with button placement and font choices resembling those from a real interface, and the hierarchical logic of the price table was correct.

GPT Image 2 can generate images extremely similar to real software interfaces, including browser windows, mobile app interfaces, and data visualization charts, with a level of fidelity unmatched by the previous generation.

@johnAGI168 https://x.com/johnAGI168/status/2044781168151724067

@levelsio https://x.com/levelsio/status/2040333489476681758

This opens up some very practical uses. When creating product prototypes, designers don't need to open Figma and draw a pile of wireframes first; they can describe the desired interface in text and get back a reference image good enough for team discussion. When preparing investor decks, they can show a "product screenshot" without waiting for an engineer to write code. When writing documentation, example interface images can be generated directly, instead of hunting for screenshots to fill a blank page.

@marmaduke091 https://x.com/marmaduke091/status/2040338311873515597

Image Generation Is No Longer Just "Image Generation"

OpenAI has already announced that DALL-E 2 and DALL-E 3 will officially cease service on May 12, 2026. Azure OpenAI's DALL-E 3 was retired early in February.

DALL-E was the first place many people encountered AI image generation, from those blurry early works to today, in just a few short years.

Meanwhile, Google, which had just established its industry position with Nano Banana Pro in early 2026, might feel the pressure. Early test reports indicate that GPT Image 2 simultaneously surpasses Nano Banana Pro in three dimensions: realism, text rendering, and world knowledge. This kind of triple win is not common.

For creators, the feeling is complex. Illustrators, graphic designers, and photographers are not facing this topic for the first time. Since the release of GPT Image 1, the number of freelance graphic design positions has decreased by about 18%. AI has indeed replaced the decision to "hire someone to do this" in certain scenarios, but it is also creating new ways of working, allowing one person to do more.

The evolution speed of image generation models no longer leaves much time for adaptation. It was only a few months from GPT Image 1's launch to version 1.5. And from 1.5 to 2, it's only been about half a year. Each generation solves the core shortcomings of the previous one while opening up new possibilities.

GPT Image 2 is currently still in the A/B testing phase, with some ChatGPT users randomly gaining access. The official release window is widely predicted to be around May, coinciding with the retirement of DALL-E. If you want to experience it early, you can currently try your luck on the LM Arena evaluation platform.

Test Address: https://arena.ai

Based on community feedback and the known strengths of this model, the following prompt templates can maximize your chances of success:

UI/Screenshot Prompt: A photorealistic screenshot of a mobile banking app, clearly showing transaction history with dates, amounts, and merchant names legible. iPhone 16 screen, natural hand holding the phone, coffee shop background.

Product Label Prompt: A photographic product photo of a craft beer bottle, with clear label details showing the brewery name "Oakridge Brewing Co.", alcohol content 6.8%, a mountain logo, and an ingredient list. Studio lighting, white background.

Signage Prompt: A street scene photo of a Tokyo alley at night, showing multiple neon signs in both Japanese and English, including a ramen shop sign reading "Ichiban Ramen — Est. 1987", a karaoke bar sign, and various glowing advertisements. Wet, reflective pavement with light reflections.

Interface/World Knowledge Prompt: A photorealistic YouTube video screenshot showing a video titled "How to Assemble a Computer in 2026" with 2.3 million views, featuring realistic comments, sidebar video recommendations, and channel info. Desktop browser view.

Widescreen Trigger Prompt: A cinematic widescreen photo of an IKEA store exterior at dusk, showing the glowing IKEA sign, a parking lot with realistic cars, and shoppers entering and leaving. Golden hour lighting, 16:9 format.
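The five templates above all follow the same structure: a subject, the exact text the model must render (quoted so it is treated literally), a setting, and lighting or format cues. A minimal sketch of that pattern as a reusable helper is below; the function name and fields are illustrative only and are not part of any official API.

```python
def build_image_prompt(subject, render_text=None, setting=None, style=None):
    """Assemble a prompt in the pattern shared by the templates above.

    subject     -- what the image depicts (required)
    render_text -- exact text the model should draw, quoted literally
    setting     -- scene or background details
    style       -- lighting, format, or aspect-ratio cues
    """
    parts = [subject]
    if render_text:
        # Quoting the string signals that it is literal text to render,
        # which is where the article says GPT Image 2 excels.
        parts.append(f'with clearly legible text reading "{render_text}"')
    if setting:
        parts.append(setting)
    if style:
        parts.append(style)
    return ", ".join(parts) + "."


# Reassembling the product-label template from the article:
prompt = build_image_prompt(
    subject="A photographic product photo of a craft beer bottle",
    render_text="Oakridge Brewing Co.",
    setting="studio lighting, white background",
)
print(prompt)
```

The same helper reproduces the signage and UI templates by swapping in their subjects and text strings; the key habit it encodes is always spelling out the exact characters you want rendered rather than describing them vaguely.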

Unattributed image sources and references: https://miraflow.ai/blog/how-to-use-duct-tape-ai-model-arena-gpt-image-2-guide

This article is from the WeChat public account "APPSO", author: Discovering Tomorrow's Products

Related Questions

Q: What is the name of the leaked image generation model mentioned in the article, and what is its significance?

A: The leaked model is referred to as GPT Image 2. Its significance lies in its dramatic improvement in text rendering accuracy, especially for non-Latin scripts like Chinese, and its ability to generate highly realistic images, including convincing UI screenshots and document-style images, which challenges the reliability of screenshots as evidence.

Q: How does GPT Image 2's performance compare to other models like Midjourney and Google's Nano Banana Pro?

A: According to the article, GPT Image 2 outperforms Midjourney in text rendering, prompt following, photorealism, and world knowledge, with Midjourney retaining an advantage mainly in artistic style and aesthetic control. It also reportedly surpasses Google's Nano Banana Pro in realism, text rendering, and world knowledge.

Q: What are some of the potential practical applications of GPT Image 2's capabilities?

A: Potential applications include generating product prototypes and UI mockups for designers, creating realistic "screenshots" for investor decks without coding, producing example interface images for documentation, and generating accurate product labels, packaging, and infographics.

Q: What major change is OpenAI making to its image generation services in relation to this new model?

A: OpenAI has announced that DALL-E 2 and DALL-E 3 will officially stop service on May 12, 2026, with Azure's DALL-E 3 having already been retired in February. This suggests a transition to the new GPT Image model series.

Q: Where can users currently try to access or test the GPT Image 2 model, and what is a recommended strategy for getting good results?

A: The model is currently in A/B testing, with some ChatGPT users randomly gaining access. Users can also try their luck on the LM Arena evaluation platform (arena.ai). The article recommends using specific, detailed prompt templates focused on UI/screenshots, product labels, signage, interface/world knowledge, and widescreen formats to maximize success.
