Recently, Apple has obtained extensive access to Google's Gemini model, aiming to accelerate the development of its lightweight on-device artificial intelligence through model distillation.
According to related reports, Apple currently has full access to the Gemini model within its data centers. The core of this strategic move is to use the high-quality answers and logical reasoning chain records generated by Gemini as training data to "feed" Apple's self-developed small models. This "model distillation" approach, where large models guide the training of small models, enables the lightweight versions to maintain efficient computation while possessing logical processing capabilities similar to those of top-tier large models.
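The distillation approach described above can be sketched in a few lines. The core idea is that the student is trained to match the teacher's full output distribution (softened by a temperature) rather than only hard labels. This is a generic, minimal illustration of the technique, not Apple's or Google's actual implementation; all function names and values here are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature yields a softer
    # distribution, exposing more of the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's: minimizing this pushes the small model's outputs toward
    # the large model's, which is the essence of model distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]   # hypothetical teacher logits for one example
aligned = [2.8, 1.1, 0.3]   # a student that roughly agrees
reversed_ = [0.2, 1.0, 3.0] # a student that disagrees
# A student matching the teacher incurs a lower distillation loss.
print(distillation_loss(teacher, aligned) < distillation_loss(teacher, reversed_))
```

In practice this soft-label loss is usually mixed with an ordinary hard-label loss, but the sketch shows why teacher-generated answers and reasoning traces serve directly as training signal for the smaller model.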
Although Gemini was initially designed for chatbots and enterprise-level applications, differing from Apple's deep system-level planning for Siri in terms of product logic, this collaboration significantly fills the gap in Apple's access to high-quality synthetic data. At the same time, Apple has not abandoned its self-development path; its Apple Foundation Models team is simultaneously advancing the in-house development of underlying models. It is expected that these new-generation AI features, incorporating distillation technology, will be showcased at the upcoming Apple Worldwide Developers Conference (WWDC) in June.
This collaboration marks a shift in the AI industry from pure computing power competition to competition over more efficient training strategies. Apple's choice to "pay for data," absorbing the capabilities of top-tier models to strengthen its edge computing advantage, not only reflects the interplay and balance between tech giants in general-purpose large models and private on-device AI, but also signals that future on-device hardware will possess stronger local reasoning and complex task processing capabilities, further advancing the democratization of AI.