Intelligence as Institutionalized Error Correction
A Unified Framework Linking Evolution, Bayesian Brain Theory, Artificial Intelligence, Democracy, and Psychotherapy
Abstract
Recent advances in reasoning-based artificial intelligence systems, such as DeepSeek R1, suggest that general reasoning ability can emerge from training procedures that optimize chains of thought in domains with clear verification criteria, such as mathematics and programming. These developments raise fundamental questions about the nature of intelligence itself. This paper proposes a theoretical framework in which intelligence is understood as institutionalized error correction. We argue that intelligent systems are not primarily characterized by the possession of correct knowledge but by structured procedures that enable the detection and revision of error. This framework integrates several influential traditions: Darwinian evolution, Popper’s philosophy of science, the Bayesian brain hypothesis, Friston’s free-energy principle, recent developments in artificial intelligence, democratic political institutions, and psychotherapeutic processes. Across these domains, similar structures can be observed: hypothesis generation, error detection, and iterative model revision. From this perspective, intelligence is best understood as a protocol for systematic belief updating under uncertainty, rather than a static repository of knowledge. The framework offers a conceptual bridge linking cognitive science, artificial intelligence, political theory, and psychiatry.
1. Introduction
What is intelligence?
Traditionally, intelligence has been associated with abilities such as problem solving, learning capacity, abstract reasoning, and the acquisition of knowledge. In psychology and cognitive science, intelligence has often been operationalized through performance on tasks that require these abilities.
However, recent developments in artificial intelligence invite a reconsideration of these assumptions.
In particular, the emergence of reasoning-oriented large language models—such as DeepSeek R1 and related systems—has revealed an important phenomenon: general reasoning ability can emerge from training procedures that optimize chains of thought in domains where correctness is easily verifiable, such as mathematics or programming.
What is striking about this result is that the models are not merely learning more facts. Rather, they are learning procedures for reasoning.
These procedures include:
- generating hypotheses
- decomposing problems into steps
- detecting contradictions
- revising intermediate conclusions
Such capabilities are not domain-specific pieces of knowledge. Instead, they constitute general protocols for error detection and correction.
This observation suggests a shift in perspective: intelligence may not primarily consist in possessing correct knowledge, but rather in maintaining systems that allow knowledge to be corrected.
This paper develops this idea into a general theoretical framework. The central claim is that:
Intelligence is best understood as institutionalized error correction.
Across multiple domains—including biological evolution, scientific inquiry, brain function, artificial intelligence, democratic governance, and psychotherapy—we observe similar structures of hypothesis generation, evaluation, and revision.
The aim of this paper is to show that these structures represent different implementations of the same underlying principle.
2. The Bayesian Brain and Predictive Processing
A key theoretical foundation for understanding intelligence as error correction is the Bayesian brain hypothesis.
According to this hypothesis, the brain maintains probabilistic internal models of the external world. Perception and cognition arise from the continuous updating of these models in response to incoming sensory evidence.
Formally, Bayesian inference describes how beliefs should be updated when new information becomes available:
\[
\text{Posterior} \;\propto\; \text{Prior} \times \text{Likelihood}
\]
In this framework:
- Prior beliefs represent existing internal models
- Sensory evidence provides new information
- Posterior beliefs represent updated models
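This update rule can be illustrated with a minimal sketch. The hypothesis space and likelihood values below are invented purely for illustration; any discrete set of hypotheses would work the same way.

```python
# Minimal sketch of a single Bayesian belief update over a discrete
# hypothesis space (the hypotheses and numbers are illustrative only).

def bayes_update(prior, likelihood):
    """Return the posterior over hypotheses given prior and likelihood."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    evidence = sum(unnormalized.values())  # normalizing constant P(data)
    return {h: p / evidence for h, p in unnormalized.items()}

# Prior beliefs: will tomorrow be rainy or sunny?
prior = {"rain": 0.5, "sun": 0.5}
# Likelihood of observing "dark clouds" under each hypothesis
likelihood = {"rain": 0.8, "sun": 0.2}

posterior = bayes_update(prior, likelihood)
# The observation shifts belief toward "rain": posterior["rain"] == 0.8
```

One pass through `bayes_update` is a single error-correcting step; perception, on this view, is the continual repetition of such steps as new evidence arrives.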
Perception, therefore, is not a passive reception of sensory input but an active process of probabilistic inference.
Central to this process is prediction error—the difference between expected sensory input and actual sensory input.
Prediction errors signal that the current internal model is inadequate and must be updated.
Thus, cognitive systems operate by continuously minimizing prediction error through iterative belief updating.
3. The Free Energy Principle
Karl Friston’s Free Energy Principle generalizes this Bayesian framework to biological systems.
According to this theory, living organisms maintain internal models of their environment and act in ways that minimize the discrepancy between predicted and observed sensory states.
This discrepancy can be expressed as prediction error, while the quantity minimized by biological systems is known as variational free energy, which provides an upper bound on surprise (the negative log-probability of sensory observations under the organism’s model).
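The bound can be made explicit with the standard variational identity (a textbook decomposition; the notation—generative model \(p(o, s)\) over observations \(o\) and hidden states \(s\), approximate posterior \(q(s)\)—is introduced here for illustration):

\[
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
\;=\; D_{\mathrm{KL}}\!\left[q(s)\,\Vert\,p(s \mid o)\right] \;-\; \ln p(o)
\;\ge\; -\ln p(o)
\]

Since the Kullback–Leibler divergence is non-negative, minimizing free energy simultaneously improves the internal model (shrinking the divergence term) and drives the system toward unsurprising sensory states.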
Within this framework, cognition and action both serve the same purpose: reducing uncertainty about the environment.
Organisms accomplish this by:
- updating internal beliefs (perceptual inference)
- acting to change sensory inputs (active inference)
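The complementarity of these two routes can be sketched in a toy simulation. This is a deliberate caricature (scalar states, noiseless observations, hand-chosen step sizes); real active-inference models minimize variational free energy over full generative models.

```python
# Toy contrast between perceptual and active inference: both routes
# shrink the same prediction error, from opposite directions.
# (Illustrative sketch only; function names and step sizes are invented.)

def perceptual_step(belief, observation, lr=0.5):
    """Perceptual inference: revise the belief toward the data."""
    error = observation - belief
    return belief + lr * error

def active_step(belief, world_state, gain=0.5):
    """Active inference: act on the world to fulfil the prediction."""
    error = world_state - belief
    return world_state - gain * error  # action nudges the sensed state

belief, world = 0.0, 10.0
for _ in range(20):
    observation = world                       # noiseless sensing, for simplicity
    belief = perceptual_step(belief, observation)  # change the model
    world = active_step(belief, world)             # change the inputs
# After the loop, |world - belief| is negligible: the error is corrected.
```

Each pass through the loop multiplies the residual error by a constant factor below one, so the mismatch between model and world decays geometrically regardless of which route does more of the work.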
From this perspective, the brain is fundamentally an error-correcting inference machine.
4. Evolution as Error Correction
The same logic can be observed in biological evolution.
Darwinian evolution consists of three fundamental processes:
- variation
- selection
- retention
These processes can be interpreted as a form of large-scale error correction.
Variation generates hypotheses about how organisms might function.
Natural selection tests these hypotheses against environmental constraints.
Successful variants are retained, while unsuccessful ones are eliminated.
In this sense, evolution operates as a distributed search process that reduces mismatch between organisms and their environments.
Evolution therefore represents a long-timescale implementation of error correction.
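The variation–selection–retention cycle can be sketched as a stochastic search. The toy below evolves a scalar trait toward an environmental target; all parameters are invented for illustration, and real evolution of course has no explicit fitness function.

```python
import random

# Variation, selection, and retention as iterated error correction:
# a toy population search toward an environmental "target" trait.
# (Schematic illustration only; parameters are arbitrary.)

def evolve(target, pop_size=50, generations=200, noise=0.5, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # variation: each retained variant produces a mutated offspring
        offspring = [x + rng.gauss(0, noise) for x in population]
        # selection: fitness is negative mismatch with the environment
        pool = sorted(population + offspring, key=lambda x: abs(x - target))
        # retention: keep the best-matching variants
        population = pool[:pop_size]
    return population

best = evolve(target=3.0)[0]
# best lies close to 3.0: the population's "error" has been corrected
```

No individual variant ever computes the target; the error correction is a property of the population-level procedure, which is precisely the sense in which it is distributed.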
5. Science as Institutionalized Error Correction
Karl Popper famously argued that science progresses not through the accumulation of truths but through the elimination of errors.
According to Popper, scientific progress follows a cycle:
- conjecture
- refutation
- revision
Scientific theories are proposed as hypotheses about the world. Experiments and observations test these hypotheses. When contradictions appear, theories are revised or replaced.
Importantly, science does not rely solely on the intelligence of individual scientists. Instead, it depends on institutions that enable error correction, including:
- peer review
- replication
- open criticism
- methodological transparency
These institutions make science a structured system for detecting and correcting mistakes.
Thus, science can be understood as institutionalized epistemic error correction.
6. Artificial Intelligence and Reasoning Protocols
Recent developments in artificial intelligence provide another example of this principle.
In reasoning-oriented AI models, performance improvements have been achieved not merely by increasing model size or training data, but by optimizing reasoning procedures themselves.
In systems such as DeepSeek R1, training focuses on generating chains of reasoning that can be verified step by step.
The training loop typically involves:
- generating a reasoning sequence
- verifying intermediate results
- revising incorrect steps
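The loop above can be caricatured in a few lines. The sketch below is emphatically not how DeepSeek R1 or any specific system is trained; it only shows the structural point that a verifier plus a revision rule can remove a systematic error from the generating procedure itself.

```python
import random

# Schematic generate–verify–revise loop in a verifiable domain.
# The "policy" is a single integer bias standing in for a systematic
# reasoning error; verification exposes it, revision removes it.
# (Toy illustration; names and mechanics are invented.)

def verify(problem: str, answer: int) -> bool:
    """Exact checker, as available in mathematics or unit-tested code."""
    return eval(problem) == answer

def train(problems, steps=500, seed=0):
    rng = random.Random(seed)
    bias = 7  # initial systematic error: answers are reliably too large
    for _ in range(steps):
        problem = rng.choice(problems)
        proposal = eval(problem) + bias + rng.choice([-1, 0, 1])  # generate
        if not verify(problem, proposal):                          # verify
            bias -= 1 if proposal > eval(problem) else -1          # revise
    return bias

final_bias = train(["2+3", "7*6", "10-4"])
# final_bias settles near 0: the procedure, not the answers, was corrected
```

What survives training is not a memorized answer key but a less-biased generating procedure, mirroring the claim that the learned object is an error-correction protocol rather than domain knowledge.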
Mathematics and programming are particularly effective training environments because correctness can be clearly defined.
The key insight is that such training improves not just mathematical ability, but general reasoning capacity.
This suggests that what is being learned is a general error-correction protocol, rather than domain-specific knowledge.
7. Democracy as a Social Error-Correction System
The same structural principle can be observed in democratic political systems.
Democracy is often evaluated in terms of whether it produces correct policies. However, its deeper strength lies elsewhere.
Democratic systems allow:
- competing policy proposals
- public criticism
- electoral accountability
- policy revision
These mechanisms enable societies to detect and correct political mistakes over time.
Thus, democracy can be understood as a collective error-correction system.
The value of democratic institutions lies not in guaranteeing correct decisions, but in maintaining the capacity for revision.
8. Psychotherapy and Model Revision
Psychotherapy also reflects the structure of error correction.
In cognitive-behavioral therapy, patients examine automatic thoughts and test them against evidence. Dysfunctional beliefs are gradually revised through this process.
Similarly, in psychodynamic traditions, therapeutic work involves bringing implicit assumptions about the self and others into awareness, allowing them to be reconsidered and modified.
Concepts such as Bion’s containment and Fonagy’s mentalization can be interpreted within this framework as processes that facilitate the revision of internal models.
In this sense, psychotherapy aims not merely to provide correct interpretations, but to restore a patient’s capacity for self-correction.
9. Psychopathology as Disrupted Error Correction
Within this framework, psychiatric disorders may be understood as disturbances in error-correction mechanisms.
In schizophrenia, abnormal processing of prediction error may lead to inappropriate assignment of significance to irrelevant stimuli.
In depression, reduced reward sensitivity may impair the updating of beliefs about the future.
Psychotherapeutic interventions may therefore function by restoring the patient’s ability to revise internal models in response to new information.
10. Conclusion
This paper has proposed a general theoretical framework in which intelligence is understood as institutionalized error correction.
Across multiple domains—including evolution, science, brain function, artificial intelligence, democracy, and psychotherapy—we observe similar structures:
- hypothesis generation
- error detection
- model revision
These structures suggest that intelligence is not fundamentally about possessing correct knowledge.
Rather, intelligence is best understood as the capacity to systematically detect and correct errors under conditions of uncertainty.
From this perspective, intelligent systems are those that maintain protocols enabling continuous model revision.
Thus, intelligence is not simply a property of individuals or machines. It is a property of systems that organize the correction of error.
