AGI-26 Conference Reveals Star-Studded AI Speaker Lineup

TheNewsCrypto | Published on 2026-05-07 | Last updated on 2026-05-07

AGI-26, the 19th annual conference of the AGI Society, devoted solely to Artificial General Intelligence, will take place in San Francisco from July 27 to 30. The AGI Society has confirmed the keynote speakers and the full speaker schedule for the event.

The roster includes researchers from organizations at the forefront of machine intelligence research, such as Google DeepMind, MIT, Tufts University, and SingularityNET, bringing together scholars whose approaches to AGI diverge as much as they overlap.

Confirmed for AGI-26 are Karl Friston, whose neuroscience-inspired models have reshaped how researchers think about perception and inference; Gary Marcus, one of the most prominent voices for symbolic and hybrid approaches; Michael Levin and Hananel Hazan, whose work on biological intelligence at Tufts is opening new questions about the substrate of cognition; Neil Gershenfeld (MIT); and Ben Goertzel, whose pursuit of general intelligence through SingularityNET and the ASI Alliance has defined much of the field’s ambition.

Also speaking are Alexander Lerchner (Google DeepMind); Alison Gopnik (University of California, Berkeley); Alexander Ororbia (Rochester Institute of Technology); Faezeh Habibi (SingularityNET); Josef Urban (Czech Technical University in Prague); and Greg Meredith (F1R3FLY.io). Neuroscience. Symbols. Agents. Hybrids. Every major school of thought, in one room.

The AGI Society has built the conference series on three themes: advancing the theoretical foundations of artificial general intelligence (AGI), developing practical pathways from today’s narrow AI systems toward robust general intelligence, and addressing the societal and ethical implications of what comes next. The four-day program includes peer-reviewed paper presentations, software and hardware demonstrations, tutorials, and workshops.

This year’s edition places particular emphasis on a question the field can no longer ignore: how rapid advances in reasoning, adaptation, and generalization translate into systems that remain accountable and beneficial over the long term, even as they leverage their general intelligence to radically overhaul their own foundations. The conference will also feature thematic sessions on Neural-Symbolic and Hybrid Methods, Predictive Coding, Practical Proto-AGI Systems, and Active Inference for General Intelligence.

As conference series co-founder Ben Goertzel says:

“There has never been such an exciting time to be working toward AGI. I have said this each year at the AGI conference for the last few years, and it keeps on being true. The rate of intellectual and practical progress toward AGI we are seeing is truly remarkable, if at times a bit dizzying. While serious AGI researchers understand that scaling LLMs will not get us to full human-level AGI, and research advances are still required, there is nonetheless a feeling across the field that accelerating progress will continue and major additional breakthroughs may be soon at hand. The AGI conference series remains the only venue gathering together AGI researchers and practitioners from across the spectrum of scientific and engineering approaches, from academia and industry and the open source community and from all around the globe.”

Alongside the technical program, a separate Investor Day will take place on July 30, devoted to the current state of the investment environment and the wider implications of general intelligence.

Outstanding papers submitted to AGI-26 will be considered for several awards: the Kurzweil Prize for Best AGI Idea ($1,250), the AGI Society Prize for Progress Toward AGI ($1,000), the Springer Prize for Best AGI Paper ($1,000), and the Hyperon Prize for Best Student Paper ($250).

Since it was established in 2008, the AGI Conference Series has drawn more than a thousand academics, practitioners, and thought leaders from academia, business, and government. Past speakers include Jürgen Schmidhuber, Yoshua Bengio, Peter Norvig, Richard Sutton, Christof Koch, and François Chollet. For the most part, the series has served more as a pressure test than a showcase: a site where fundamental assumptions have been called into question and where the field’s direction has been shaped.

Tickets: https://luma.com/AGI-26

The Artificial General Intelligence Society (AGI Society) is a non-profit organization working to further the research and development of artificial general intelligence systems. Through the facilitation of global collaboration and communication, the society promotes the dissemination of information and a variety of perspectives about the future of intelligence.


