Google Is Really Anxious, Launches Deep Research Agent Late at Night with MCP Support and Native Charts

marsbit · Published 2026-04-22 · Updated 2026-04-22

Summary

Google has launched two new AI research agents, Deep Research and Deep Research Max, built on the Gemini 3.1 Pro model. These agents are designed for enterprise and developer use, accessible via API, and support integration of open web data with private enterprise information through a single API call. They also feature native chart and infographic generation within reports and support the Model Context Protocol (MCP) to connect with third-party data sources securely. Deep Research is optimized for speed and lower latency, while Deep Research Max prioritizes depth and extended reasoning, making it suitable for asynchronous workflows like overnight analysis tasks. The agents are not available to general Gemini app users, including Pro subscribers, and are currently offered as a paid preview through the Interactions API. The update aims to position Google competitively against rivals like OpenAI and Anthropic in the AI research and analysis tool space, particularly targeting high-value sectors such as finance and consulting. Enhanced capabilities include multi-modal input support, collaborative planning features, and improved performance on benchmarks, though comparative data with competitors should be interpreted cautiously due to potential methodological differences.

By Alphabet AI

Google is really anxious.

Just earlier, news broke that Google co-founder Sergey Brin has restarted "founder mode," personally overseeing the battle and forming an elite "strike team" to fully enhance Gemini's key capabilities in AI programming and autonomous agents to catch up with competitors like Anthropic.

Then, late at night, Google announced a major update, launching a new generation of autonomous research agents built on the Gemini 3.1 Pro model: Deep Research and Deep Research Max.

Not only has the reasoning capability of the underlying model been strengthened; Google is also pushing hard to evolve its autonomous research agents toward an enterprise-grade developer platform. Through an open API, support for private data, background asynchronous tasks, and other means, Google is attempting to seize the initiative in the high-value scenario of "AI research/analysis tools," responding to competition from rivals like OpenAI (Hermes) and Perplexity.

These two agents, for the first time, allow developers to fuse open web data with enterprise proprietary information through a single API call, natively generate charts and infographics in research reports, and also connect to any third-party data source via the Model Context Protocol (MCP).

The two agents are now available as a public preview through paid plans of the Gemini API and can be accessed via the Interactions API first launched by Google in December 2025.

That's right, these new agents are currently only available via the API; regular users cannot access them in the Gemini App, even with a paid subscription. Some users who saw the announcement but found they couldn't use it lamented: "Google, for some reason, continues to punish us Pro subscribers of the Gemini App..."

Google CEO Sundar Pichai also personally took to X to promote it: "When you need speed and efficiency, use Deep Research; when you pursue the highest quality context collection and synthesis, use the Max version—it achieves scores of 93.3% on DeepSearchQA and 54.6% on HLE through extended test-time computation."

Eighteen months ago, the goal of Google Deep Research was to help graduate students avoid being overwhelmed by a sea of browser tabs. Today, Google hopes it can replace the basic research work of investment banking junior analysts.

The gap between these two goals—and whether this technology can truly bridge it—will determine whether autonomous research agents become a transformative product in the enterprise software field or just another AI demo that looks impressive in benchmarks but disappoints in meetings.

Two Versions, Adapted to Different Workloads

The standard Deep Research has lower latency and lower cost, suitable for scenarios prioritizing speed.

Deep Research Max prioritizes depth over speed. This agent uses extended test-time computation for deep reasoning, searching, and iteration, ultimately generating a report.

Google points out that asynchronous background workflows are its ideal use case, such as running via a cron job at night and delivering a complete due diligence report to the analyst team the next morning.
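The overnight workflow Google describes is essentially a submit-then-poll pattern. Below is a minimal sketch of that pattern in Python; the `StubResearchClient` is a stand-in with a hypothetical interface, since the article does not show the actual Interactions API surface.

```python
import time

class StubResearchClient:
    """Stand-in for a Deep Research API client (hypothetical interface)."""

    def __init__(self):
        self._jobs = {}
        self._counter = 0

    def submit(self, prompt):
        # Kick off a long-running research task; return a job id immediately.
        self._counter += 1
        job_id = f"job-{self._counter}"
        self._jobs[job_id] = {"status": "running", "prompt": prompt, "polls": 0}
        return job_id

    def poll(self, job_id):
        # Each poll, pretend the job makes progress; finish after 3 polls.
        job = self._jobs[job_id]
        job["polls"] += 1
        if job["polls"] >= 3:
            job["status"] = "done"
            job["report"] = f"Due diligence report for: {job['prompt']}"
        return job["status"]

    def result(self, job_id):
        return self._jobs[job_id]["report"]

def run_overnight(client, prompt, interval=0.01):
    """Submit a research task and wait for completion (cron-style batch job)."""
    job_id = client.submit(prompt)
    while client.poll(job_id) != "done":
        time.sleep(interval)  # in production this might be minutes, not milliseconds
    return client.result(job_id)

report = run_overnight(StubResearchClient(), "Acme Corp acquisition targets")
print(report)
```

A real deployment would replace the stub with the vendor client and schedule `run_overnight` from cron or a task queue, delivering the finished report in the morning.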

In Google's own benchmarks, Deep Research Max showed significant improvements in retrieval and reasoning tasks. The agent can gather information from more sources than previous versions and capture nuances that older models easily missed.

Google also provided a comparison with competitors.

However, comparing with OpenAI's GPT-5.4 and Anthropic's Opus 4.6 is not entirely fair. GPT-5.4 excels in autonomous web search but is not specifically optimized for deep research. For that, OpenAI provides its own Deep Research agent, which switched to GPT-5.2, not GPT-5.4, after the February update. OpenAI's strongest search model is actually GPT-5.4 Pro, but Google clearly did not include it in the comparison.

According to OpenAI's data, GPT-5.4 Pro can score up to 89.3% on the agent search benchmark BrowseComp, while GPT-5.4 scores 82.7%.

Based on Anthropic's own reports, Opus 4.6 scores higher on BrowseComp than the value Google showed, specifically 84%. That score was achieved with reasoning turned off, and the model performed better there than under the high-intensity reasoning settings Google used in its API benchmark tests.

These discrepancies likely stem from differences in testing methods—whether the models were evaluated via the raw API or encapsulated within each lab's own toolchain. Google's data is not necessarily wrong, but it deserves cautious interpretation. In any case, the presentation lacks sufficient transparency.

MCP Support

Perhaps the most impactful feature of this release is the newly added support for the Model Context Protocol (MCP). This feature transforms Deep Research from a powerful web research tool into something closer to a "universal data analyst."

MCP is an emerging open standard for connecting AI models to external data sources. It allows Deep Research to securely query private databases, internal document repositories, and specialized third-party data services—all without sensitive information ever leaving its original environment.
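Under the hood, MCP is a JSON-RPC 2.0 protocol: the model-side client discovers a server's tools with `tools/list` and invokes them with `tools/call`. The sketch below shows the shape of one tool-invocation exchange; the tool name `query_trade_flows` and its arguments are hypothetical, invented for illustration.

```python
import json

# Request a client would send to an MCP server exposing a (hypothetical)
# internal database query tool. MCP messages are JSON-RPC 2.0.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_trade_flows",          # hypothetical tool name
        "arguments": {"ticker": "ACME", "days": 30},
    },
}

# Shape of the server's reply: tool output comes back as content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "3 matching trades found"}],
        "isError": False,
    },
}

wire = json.dumps(request)  # what actually goes over the transport
assert json.loads(wire)["method"] == "tools/call"
print(response["result"]["content"][0]["text"])
```

Because the tool runs inside the data owner's environment and only the result crosses the wire, the underlying database never has to be exposed to the model provider.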

In practice, this means a hedge fund could simultaneously point Deep Research to its internal trading flow database and a financial data terminal, then ask the agent to combine both with public information from the web to synthesize insights.

Google revealed that it is actively working with companies like FactSet, S&P, and PitchBook to design their MCP servers, clearly indicating that Google is seeking deep integration with data providers that Wall Street and the broader financial services industry rely on daily.

According to a blog post written by Google DeepMind product managers Lukas Haas and Srinivas Tadepalli, the goal is to "enable joint customers to integrate financial data products into workflows powered by Deep Research, and by leveraging its massive data universe, gather context at lightning speed, thereby leaping forward in productivity."

This feature directly addresses one of the most stubborn pain points in enterprise AI adoption: the huge gap between the information models can find on the open internet and the information organizations actually need for decision-making. Previously, bridging this gap required significant custom engineering work.

MCP support, combined with Deep Research's autonomous browsing and reasoning capabilities, simplifies most of this complexity to a one-time configuration. Developers can now have Deep Research use Google Search, remote MCP servers, URL Context, code execution, and file search simultaneously—or completely turn off web access and search only on custom data.
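The "web on or off" switch described above amounts to deciding which tools appear in the request configuration. Here is a hypothetical sketch of that toggle; the field names and tool-type strings are illustrative, not the documented Gemini API.

```python
def build_tool_config(use_web=True, mcp_servers=(), use_code_execution=False):
    """Assemble a hypothetical tool list for a Deep Research request.

    With use_web=False, only custom data sources (MCP servers, file search)
    remain, mirroring the "search only on custom data" mode described above.
    """
    tools = []
    if use_web:
        tools.append({"type": "google_search"})
        tools.append({"type": "url_context"})
    for url in mcp_servers:
        tools.append({"type": "mcp_server", "server_url": url})
    if use_code_execution:
        tools.append({"type": "code_execution"})
    tools.append({"type": "file_search"})
    return tools

# Private-data-only configuration: no web access at all.
private_only = build_tool_config(
    use_web=False,
    mcp_servers=["https://mcp.example.internal"],  # placeholder server URL
)
print([t["type"] for t in private_only])
```

The point of the one-time configuration is that everything downstream (browsing, querying, synthesis) stays inside the agent once the tool list is fixed.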

The system also supports multimodal input, including PDFs, CSVs, images, audio, and video, for use as grounding context.

Native Charts

The second major feature is native chart and infographic generation.

Previous versions of Deep Research could only generate plain text reports. If users needed visualizations, they had to export the data and create charts themselves. This shortcoming significantly weakened the "end-to-end automation" positioning.

Now, the new generation of agents can natively embed high-quality charts and infographics within reports, dynamically rendering complex datasets in HTML or Google's Nano Banana format, making them a direct part of the analytical narrative.

For enterprise users—especially those in the financial and consulting industries who need to produce deliverables ready for stakeholders—this feature transforms Deep Research from a tool that "accelerates the research phase" into one that can generate something close to a final analytical product.

Furthermore, combined with the new collaborative planning feature (which allows users to review, guide, and optimize the agent's research plan before execution) and real-time streaming of intermediate reasoning steps, the new system gives developers fine-grained control over the investigation scope while maintaining the high level of transparency required in regulated industries.

Deep Research Is Becoming Part of the "Infrastructure" Google Provides to Enterprises

Google's official blog post clearly states that when developers build using the Deep Research agent, they are calling "the same autonomous research infrastructure that powers multiple popular Google products (such as the Gemini App, NotebookLM, Google Search, and Google Finance)." This indicates that the agents provided via the API are not simplified versions of Google's internal build, but the same system, offered as a service at platform scale.

This evolution has progressed extremely rapidly.

Google first launched Deep Research in the Gemini App in December 2024 as a consumer-facing feature, then powered by Gemini 1.5 Pro. Google described it as a personal AI research assistant that could synthesize web information in minutes, saving users hours of work.

In March 2025, Google upgraded Deep Research using Gemini 2.0 Flash Thinking Experimental and opened trials to everyone. It was then upgraded to Gemini 2.5 Pro Experimental, with Google reporting that evaluators preferred its reports over competitors' by a 2-to-1 ratio.

December 2025 was a key turning point, as Google launched the Interactions API, providing Deep Research programmatically for the first time, powered by Gemini 3 Pro, and simultaneously released the open-source DeepSearchQA benchmark.

The underlying model driving these improvements is Gemini 3.1 Pro, released on February 19, 2026. It achieved a major leap in core reasoning capability: on the ARC-AGI-2 benchmark, which evaluates a model's ability to solve novel logic patterns, 3.1 Pro scored 77.1%, more than double that of Gemini 3 Pro.

Related Questions

Q: What are the two new autonomous research agents announced by Google, and what are their key differences?

A: Google announced two new autonomous research agents: Deep Research and Deep Research Max. The standard Deep Research has lower latency and lower cost, making it suitable for speed-critical scenarios. Deep Research Max prioritizes depth over speed, using extended test-time compute for in-depth reasoning, search, and iteration to generate reports, and is ideal for asynchronous background workflows.

Q: What is the Model Context Protocol (MCP) support in the new Deep Research agents, and why is it significant?

A: The Model Context Protocol (MCP) is an emerging open standard for connecting AI models to external data sources. Its support allows Deep Research to securely query private databases, internal document repositories, and third-party data services without sensitive information leaving its original environment. This transforms the agent from a powerful web research tool into a "universal data analyst" and addresses a key enterprise adoption pain point by bridging the gap between public web information and proprietary organizational data.

Q: What new capability do the Deep Research agents have regarding data visualization, and why is it important for enterprise users?

A: The new Deep Research agents can natively generate high-quality charts and infographics within their reports, dynamically rendering complex datasets in HTML or Google's Nano Banana format. This is important for enterprise users, particularly in finance and consulting, as it transforms the tool from one that only accelerates the research phase into one that can produce analysis products that are nearly ready for delivery to stakeholders.

Q: How are these new Deep Research agents made available to users, and is there any limitation for regular Gemini app subscribers?

A: The new Deep Research agents are available as a public preview exclusively through the paid tiers of the Gemini API, accessible via the Interactions API first introduced in December 2025. They are not available to regular users within the Gemini app, even for those who have a paid Pro subscription.

Q: What underlying model powers the latest Deep Research agents, and what major improvement does it offer?

A: The latest Deep Research agents are powered by the Gemini 3.1 Pro model. This model represents a major leap in core reasoning capabilities, achieving a score of 77.1% on the ARC-AGI-2 benchmark, which is more than double the score of its predecessor, Gemini 3 Pro. This benchmark evaluates a model's ability to solve novel logic patterns.
