Intelligence as Institutionalized Error Correction
A Unified Framework Linking Evolution, Bayesian Brain Theory, Artificial Intelligence, Democracy, and Psychotherapy
March 2026
- Abstract
- 1. Introduction: Rethinking Intelligence
- 2. Intelligence as Error Correction: A General Architecture
- 3. Evolution as Error Correction Across Generations
- 4. Cybernetics: Feedback as the Core Mechanism
- 5. The Bayesian Brain and Predictive Processing
- 6. Gregory Bateson: Learning About Learning
- 7. Science as Institutionalized Error Correction
- 8. Artificial Intelligence: Explicit Error Minimization
- 9. Democracy as Collective Error Correction
- 10. Psychotherapy: Repairing Error-Correction Capacity
- 11. Karl Jaspers and Epistemic Humility
- 12. A Three-Layer Architecture of Intelligence
- 13. Formal Perspective: Hierarchical Bayesian Model Selection
- 14. Conclusion
Abstract
Recent advances in reasoning-oriented artificial intelligence systems—such as DeepSeek’s R1 model—have revealed that general reasoning ability can emerge not primarily from the accumulation of facts, but from training procedures that optimize structured reasoning under conditions where correctness can be verified. These developments challenge traditional conceptions of intelligence as a stock of correct knowledge or fixed cognitive capacity.
This paper proposes a unifying theoretical framework in which intelligence is understood as institutionalized error correction. Across biological, cognitive, technological, social, and clinical domains, intelligent systems share a common architecture: the generation of models or hypotheses, the detection of error through feedback, and the systematic revision of those models over time. This framework integrates insights from Darwinian evolution, Bayesian and predictive-processing theories of the brain, cybernetics, philosophy of science, contemporary artificial intelligence research, democratic theory, and psychotherapy.
From this perspective, intelligence is not what a system knows, but how reliably and flexibly it can correct what it gets wrong. Intelligence emerges wherever error-correction processes become stabilized, accelerated, and institutionally supported.
1. Introduction: Rethinking Intelligence
1.1 Traditional Conceptions
Intelligence has traditionally been defined in terms of problem-solving ability, learning speed, abstract reasoning, and knowledge acquisition. In psychology, it has often been operationalized through performance on standardized tasks. In artificial intelligence, intelligence has historically been associated with rule-based expertise or large-scale knowledge representation.
These approaches implicitly assume that intelligence consists primarily in having correct answers.
1.2 A Shift Prompted by Artificial Intelligence
Recent developments in large language models and reasoning-oriented AI systems have exposed a limitation of this assumption. Systems trained explicitly to generate and verify chains of reasoning—particularly in mathematics and programming—exhibit improvements that generalize beyond those domains.
What is being learned in such systems is not merely information, but procedures:
- generating candidate hypotheses
- decomposing problems into intermediate steps
- detecting inconsistencies
- revising conclusions when errors are identified
These are error-correction protocols, not static knowledge.
1.3 Central Thesis
This observation motivates the central claim of this paper:
Intelligence is best understood as the existence of structured mechanisms for systematic error correction under conditions of uncertainty.
The remainder of the paper demonstrates that this same structure appears—independently but convergently—across evolution, brain function, scientific inquiry, artificial intelligence, democratic governance, and psychotherapy.
2. Intelligence as Error Correction: A General Architecture
Across domains, intelligent systems exhibit a recurring triadic structure:
- Model generation – producing candidate representations, beliefs, or policies
- Error detection – comparing predictions with outcomes
- Model revision – updating or replacing inadequate models
This architecture defines intelligence not by correctness at any moment, but by corrigibility over time.
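The triad above can be sketched as a minimal loop. This is an illustrative toy, not a claim from the paper: the function names and the numerical task are assumptions chosen only to make the three steps explicit.

```python
def correct_until(model, evaluate, revise, tolerance, max_rounds=100):
    """Generic error-correction triad: detect error, revise, repeat."""
    for _ in range(max_rounds):
        error = evaluate(model)                      # 2. error detection
        if error <= tolerance:
            return model                             # acceptable, for now
        model = revise(model, error)                 # 3. model revision
    return model

# Toy task: estimate a hidden quantity by stepping against the error.
target = 7.0
estimate = correct_until(
    model=0.0,                                       # 1. model generation
    evaluate=lambda m: abs(m - target),
    revise=lambda m, e: m + 0.5 * (target - m),      # halve the mismatch
    tolerance=1e-3,
)
```

Note that nothing in the loop requires the model to ever be exactly right; what matters is that the revision step is driven by detected error, i.e., corrigibility over time.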
3. Evolution as Error Correction Across Generations
3.1 Darwinian Adaptation
Charles Darwin’s theory of evolution by natural selection introduced the first non-teleological account of adaptive intelligence.
Evolution operates through:
- variation (generation of candidate forms)
- selection (environmental testing)
- retention (inheritance of successful variants)
Although blind and non-representational, this process systematically eliminates maladaptive designs.
3.2 Evolution as Distributed Error Correction
From the present perspective, natural selection functions as a slow, population-level error-correction system, reducing mismatch between organisms and environments over generational time.
Evolution is therefore the first layer of intelligence: error correction without foresight.
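The variation-selection-retention cycle can be sketched as a minimal genetic algorithm. The fixed "environmental" target string, the mutation rate, and truncation selection are all simplifying assumptions made for illustration only.

```python
import random
random.seed(0)

TARGET = "CORRECT"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(genome):
    # Environmental testing: count positions matching the environment's demand.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.2):
    # Variation: blind copying errors during inheritance.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

# Initial population of random candidate forms.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]    # retention + variation
    if fitness(population[0]) == len(TARGET):
        break
```

No individual step has foresight; mismatch with the target is reduced only because maladaptive variants are systematically eliminated at the population level.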
4. Cybernetics: Feedback as the Core Mechanism
4.1 Wiener and Feedback Control
Norbert Wiener introduced cybernetics as the study of control and communication in animals and machines. The key innovation was feedback.
A cybernetic system compares:
- expected state
- observed state
- deviation (error)
- corrective action
This formalized error correction as a general principle.
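The four-step comparison can be illustrated with a thermostat-style proportional controller; the gain, setpoint, and step count below are arbitrary assumptions for the sketch.

```python
def regulate(observed, setpoint, gain=0.3, steps=50):
    """A minimal cybernetic loop: compare, compute deviation, act."""
    trace = []
    state = observed
    for _ in range(steps):
        error = setpoint - state    # deviation between expected and observed state
        action = gain * error       # corrective action proportional to the error
        state += action             # the action changes the observed state
        trace.append(state)
    return trace

trace = regulate(observed=15.0, setpoint=20.0)
```

Because each corrective action shrinks the remaining deviation by a fixed fraction, the system converges on the expected state without ever representing why it was wrong.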
4.2 Ashby and Requisite Variety
W. Ross Ashby extended cybernetics by demonstrating that effective regulation requires sufficient internal variety. His Law of Requisite Variety states:
Only variety can destroy variety.
Error correction is impossible if a system lacks enough alternative models to revise toward.
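A toy illustration of the law, under assumed mechanics: a regulator must hold an outcome at a target state against three possible disturbances, and succeeds only if its repertoire of responses is rich enough to match them.

```python
def outcome(d, r):
    # Assumed system dynamics: disturbance d and response r combine modulo 3.
    return (d + r) % 3

disturbances = [0, 1, 2]

def can_regulate(responses):
    # Regulation succeeds if, for every disturbance, some response
    # drives the outcome back to the target state 0.
    return all(any(outcome(d, r) == 0 for r in responses)
               for d in disturbances)

print(can_regulate([0, 1, 2]))  # enough internal variety
print(can_regulate([0, 1]))     # too little variety to absorb every disturbance
```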
5. The Bayesian Brain and Predictive Processing
5.1 Bayesian Inference as Normative Model Updating
The Bayesian brain hypothesis proposes that brains maintain probabilistic models of the world and update them according to Bayes’ rule:
\[
P(\text{Model} \mid \text{Data}) \propto P(\text{Data} \mid \text{Model}) \times P(\text{Model})
\]
Prediction error—discrepancy between expected and actual input—drives belief revision.
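A minimal numerical sketch of this updating rule, assuming two hypothetical models of a coin (the priors and likelihoods are illustrative):

```python
priors = {"fair": 0.5, "biased": 0.5}
likelihood = {"fair": 0.5, "biased": 0.8}   # P(heads | model)

def update(beliefs, observed_heads):
    # Bayes' rule: posterior is proportional to likelihood x prior.
    unnorm = {m: (likelihood[m] if observed_heads else 1 - likelihood[m]) * p
              for m, p in beliefs.items()}
    total = sum(unnorm.values())
    return {m: v / total for m, v in unnorm.items()}

beliefs = priors
for flip in [True, True, True, False, True]:   # mostly heads
    beliefs = update(beliefs, flip)
```

Each observation that deviates from a model's prediction shifts probability mass away from that model; belief revision is driven entirely by the mismatch between expected and actual input.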
5.2 Friston and the Free Energy Principle
Karl Friston generalized this into the Free Energy Principle, arguing that living systems maintain their integrity by minimizing prediction error through perception and action.
Brains thus function as hierarchical error-correcting inference machines.
6. Gregory Bateson: Learning About Learning
Gregory Bateson extended cybernetics into psychology and psychiatry.
His hierarchy of learning distinguished:
- Learning I: correcting errors within fixed rules
- Learning II: revising the rules themselves
- Learning III: transforming the entire system of assumptions
This anticipates modern ideas of meta-learning and deep model revision.
7. Science as Institutionalized Error Correction
7.1 Popper’s Epistemology
Karl Popper argued that science progresses not by accumulating truths but by eliminating errors through conjecture and refutation.
Crucially, science relies on institutions:
- peer review
- replication
- open criticism
These stabilize error correction beyond individual cognition.
8. Artificial Intelligence: Explicit Error Minimization
Modern machine learning systems explicitly implement error correction by minimizing loss functions.
Reasoning-oriented systems improve general intelligence by learning procedures for detecting and correcting intermediate errors, not by memorizing answers.
This mirrors both Bayesian inference and Popperian epistemology.
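The explicit error-minimization loop can be sketched with gradient descent on a squared-error loss for a one-parameter linear model; the data, learning rate, and iteration count are illustrative assumptions.

```python
# Toy dataset, roughly y = 2x, for the model y = w * x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

w = 0.0        # initial (wrong) model
lr = 0.05      # learning rate
for _ in range(200):
    # Error detection: gradient of the mean squared loss with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Model revision: step against the direction of increasing error.
    w -= lr * grad
```

The system never stores the "correct answer"; it stores a procedure that converts measured error into a revision of its parameters.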
9. Democracy as Collective Error Correction
Democracy’s epistemic value lies not in correctness, but in revisability.
Democratic systems enable:
- competing proposals
- public criticism
- electoral feedback
- policy revision
From this view, democracy is a social error-correction mechanism operating under uncertainty.
10. Psychotherapy: Repairing Error-Correction Capacity
10.1 Therapy as Model Revision
In cognitive therapy, dysfunctional beliefs are tested against evidence. In psychodynamic therapy, implicit assumptions are made explicit and revised.
Concepts such as mentalization and containment facilitate safe error correction of personal models.
10.2 Psychopathology as Disrupted Updating
Psychiatric disorders can be understood as disturbances in error processing:
- schizophrenia: aberrant prediction error signaling
- depression: rigid negative priors resistant to updating
Therapy restores adaptive corrigibility, not “correct beliefs.”
11. Karl Jaspers and Epistemic Humility
Karl Jaspers emphasized that psychiatric knowledge must remain provisional and self-critical. His insistence on revisability aligns directly with the present framework.
12. A Three-Layer Architecture of Intelligence
- Evolutionary selection (slow, blind correction)
- Feedback-driven learning (organism-level inference)
- Institutionalized correction (science, AI, democracy, therapy)
Each layer accelerates and stabilizes error correction.
13. Formal Perspective: Hierarchical Bayesian Model Selection
Across scales, intelligence emerges where systems perform Bayesian model selection:
| Domain | Models | Feedback | Revision |
|---|---|---|---|
| Evolution | genotypes | fitness | selection |
| Brain | predictions | error | updating |
| AI | parameters | loss | optimization |
| Science | theories | experiments | refutation |
| Democracy | policies | outcomes | elections |
| Therapy | beliefs | experience | reflection |
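A minimal instance of Bayesian model selection, assuming two hypothetical models of a coin: a point hypothesis (fair) and a model with a uniform prior over the bias. The data are chosen to be skewed so the comparison is informative.

```python
from math import comb

def evidence_fair(k, n):
    # Marginal likelihood of k heads in n flips under a fixed fair coin.
    return comb(n, k) * 0.5 ** n

def evidence_unknown_bias(k, n):
    # Uniform prior over the bias integrates to the Beta-Binomial evidence:
    # integral of C(n,k) * t^k * (1-t)^(n-k) dt over [0,1] = 1 / (n + 1).
    return 1.0 / (n + 1)

k, n = 9, 10   # skewed data: a large prediction error for the "fair" model
bayes_factor = evidence_unknown_bias(k, n) / evidence_fair(k, n)
```

The Bayes factor quantifies how strongly the feedback (data) favors revising toward one model over another, which is the common currency of the table's rows.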
14. Conclusion
Intelligence is not the possession of correct knowledge.
It is the institutionalized capacity to detect and correct error.
From evolution to brains, from machines to democracies, from science to psychotherapy, intelligence emerges wherever systems stabilize feedback-driven model revision under uncertainty.
Intelligence is not what a system knows, but how it corrects what it gets wrong.
