The MLnotes Newsletter
Thinking, Fast and Slow: Why Building Agentic AI Applications Is the Next Frontier

Mehdi Allahyari and Angelina Yang · Oct 14, 2024


The Evolution of AI: From Fast Thinking to Slow Reasoning

Two years into the Generative AI revolution, we're witnessing a significant shift in the field. AI is evolving from "thinking fast" - rapid-fire pre-trained responses - to "thinking slow" - reasoning at inference time. This transition is unlocking a new cohort of agentic applications that promise to reshape industries and redefine the boundaries of what AI can achieve.

System 1 vs System 2: The Dual Process of AI Thinking

To understand this evolution, it's helpful to consider the concept of System 1 and System 2 thinking, popularized by Daniel Kahneman. System 1 represents fast, instinctive, and emotional thinking, while System 2 is slower, more deliberative, and logical.

In the context of AI, pre-trained models like large language models (LLMs) primarily operate in the realm of System 1 thinking. They excel at pattern recognition and quick responses based on vast amounts of training data. However, they often struggle with complex reasoning tasks that require deliberate thought.

The next frontier in AI research is focused on developing System 2 capabilities - the ability to pause, evaluate, and reason through decisions in real-time. This is where models like OpenAI's o1 (formerly known as Q* or Strawberry) come into play.

Figure: "Generative AI's Act o1: The Reasoning Era Begins" (Source: Sequoia)

The Strawberry Fields of AI Reasoning

OpenAI's o1 model represents a significant leap forward in AI reasoning capabilities. Unlike traditional LLMs that rely solely on pre-trained responses, o1 incorporates inference-time compute - essentially giving the AI time to "stop and think" before responding.
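To make the idea concrete, here is a minimal sketch of what "stop and think" at inference time can look like from the outside: the model first drafts an explicit reasoning trace, then answers conditioned on it. The `generate()` helper is a hypothetical stand-in for any LLM completion call; this is an illustration of the pattern, not how o1 is implemented internally.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM completion API."""
    raise NotImplementedError

def answer_with_deliberation(question: str) -> str:
    # Step 1: spend extra inference-time compute on an explicit reasoning trace.
    thoughts = generate(
        f"Question: {question}\n"
        "Think through the problem step by step before answering."
    )
    # Step 2: produce the final answer conditioned on that reasoning.
    return generate(
        f"Question: {question}\n"
        f"Reasoning: {thoughts}\n"
        "Now give only the final answer."
    )
```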

This approach is reminiscent of DeepMind's AlphaGo, which famously defeated world champion Lee Sedol in the game of Go. AlphaGo's success wasn't just due to pattern recognition; it could simulate and evaluate potential future moves, demonstrating a form of reasoning that went beyond simple pattern matching.

The challenge in replicating this for general-purpose AI lies in constructing appropriate value functions for open-ended tasks. How do you score the quality of an essay or a travel itinerary? This is where o1's innovations in reinforcement learning and chain-of-thought reasoning come into play.
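One common way to approximate a value function for open-ended outputs is rubric-based grading, for example with an LLM acting as the judge. The sketch below assumes a hypothetical `judge()` completion call and a hand-written rubric; o1's actual reward modeling has not been disclosed, so treat this purely as an illustration of the scoring problem.

```python
RUBRIC = (
    "Score the following travel itinerary from 1 (poor) to 10 (excellent) for "
    "feasibility, coverage of the traveler's goals, and pacing. "
    "Reply with only the number."
)

def judge(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call acting as a grader."""
    raise NotImplementedError

def value(candidate: str) -> float:
    """Approximate scalar 'value' of an open-ended output via rubric grading."""
    reply = judge(f"{RUBRIC}\n\nItinerary:\n{candidate}")
    try:
        return float(reply.strip())
    except ValueError:
        return 0.0  # treat unparseable judgments as the lowest score
```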

New Scaling Law: More Time to Think

One of the most exciting insights from the o1 paper is the emergence of a new scaling law: the more inference-time compute you give the model, the better it reasons. This opens up a new dimension for AI improvement, beyond just increasing the size of pre-trained models.
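A simple way to picture this new axis is to hold the model fixed and vary only the inference-time budget. The sketch below uses self-consistency (sample several reasoning chains, majority-vote the final answers); `sample_chain()` is a hypothetical stand-in for a stochastic LLM call, and the number of samples is the "more time to think" knob.

```python
from collections import Counter

def sample_chain(question: str) -> str:
    """Hypothetical stand-in for one stochastic (temperature > 0) reasoning
    chain that ends in a short final answer."""
    raise NotImplementedError

def answer_with_budget(question: str, n_samples: int) -> str:
    # The only knob here is n_samples: more samples = more inference-time compute.
    answers = [sample_chain(question) for _ in range(n_samples)]
    # Self-consistency: majority vote over the sampled final answers.
    return Counter(answers).most_common(1)[0][0]
```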

As we scale up inference-time compute, we may see AI tackling increasingly complex problems.

Could we see AI contributing to mathematical proofs or scientific breakthroughs? The potential is tantalizing.

But perhaps, instead of tackling only the 5-second problems, AI will soon be able to take a stab at the 5-hour ones.

Source: Inbound Talk

Custom Cognitive Architectures for the Real World

While general-purpose reasoning models are advancing rapidly, the messy reality of real-world applications requires more specialized approaches. This is where custom cognitive architectures come into play.

Companies like Factory are developing AI "droids" with tailored cognitive architectures that mimic human thought processes for specific tasks. For instance, a Factory droid designed for code review and migration might break down the task into discrete steps, propose changes, add tests, and involve human review - just as a human developer would.
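Factory has not published its droid internals, but the general shape of such a cognitive architecture is a fixed, auditable pipeline rather than one free-form generation. The sketch below is an illustrative decomposition of a code-review droid, with a hypothetical `llm` callable and a human approval gate at the end.

```python
from dataclasses import dataclass

@dataclass
class ReviewState:
    diff: str
    plan: str = ""
    proposed_changes: str = ""
    tests: str = ""
    approved: bool = False

def run_code_review_droid(diff: str, llm) -> ReviewState:
    """llm is any callable(prompt: str) -> str; each step is explicit and auditable."""
    state = ReviewState(diff=diff)
    # 1. Break the task down: summarize the change and plan the review.
    state.plan = llm(f"Summarize this diff and plan a step-by-step review:\n{state.diff}")
    # 2. Propose concrete changes based on the plan.
    state.proposed_changes = llm(f"Given this plan, propose concrete code changes:\n{state.plan}")
    # 3. Add tests covering the proposed changes.
    state.tests = llm(f"Write tests that cover these changes:\n{state.proposed_changes}")
    # 4. Keep a human in the loop before anything lands.
    state.approved = input("Approve the proposed changes? [y/N] ").strip().lower() == "y"
    return state
```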

The Rise of Agentic Applications

This evolution in AI capabilities is giving rise to a new generation of agentic applications across various industries. Some notable examples include:
