
Bostrom’s Superintelligence: Definitions and Core Argument
by Sebastian Benthall, edited by this site’s author.
1 Core Definitions
I wanted to take the opportunity to spell out what I see as the core definitions and argument of Bostrom’s Superintelligence as a point of departure for future work. First, some definitions:
- Superintelligence. “We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” (p.22)
- Speed superintelligence. “A system that can do all that a human intellect can do, but much faster.” (p.53)
- Collective superintelligence. “A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.” (p.54)
- Quality superintelligence. “A system that is at least as fast as a human mind and vastly qualitatively smarter.” (p.56)
- Takeoff. The event of the emergence of a superintelligence. The takeoff might be slow, moderate, or fast, depending on the conditions under which it occurs.
- Optimization power and Recalcitrance. Bostrom proposes that we model the speed of a superintelligence takeoff as: rate of change in intelligence = optimization power / recalcitrance. Optimization power refers to the effort put into improving the intelligence of the system; recalcitrance refers to the resistance of the system to being optimized. (p.65, pp.75-77) A minimal formalization of this model follows the list.
- Decisive strategic advantage. The level of technological and other advantages sufficient to enable complete world domination. (p.78)
- Singleton. A world order in which there is at the global level one decision-making agency. (p.78)
- The wise-singleton sustainability threshold. “A capability set exceeds the wise-singleton threshold if and only if a patient and existential risk-savvy system with that capability set would, if it faced no intelligent opposition or competition, be able to colonize and re-engineer a large part of the accessible universe.” (p.100)
- The orthogonality thesis. “Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.” (p.107)
- The instrumental convergence thesis. “Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.” (p.109)
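Bostrom states the takeoff model only informally. Here is a minimal formalization, where I is intelligence, O is optimization power, R is recalcitrance, and the reinvestment fraction c is my notation rather than Bostrom’s:

```latex
\frac{dI}{dt} = \frac{O(t)}{R(t)},
\qquad O(t) = c\,I(t),\quad R(t) = R
\;\Longrightarrow\;
I(t) = I(0)\,e^{(c/R)\,t}
```

Exponential growth falls directly out of the reinvestment assumption; the shape of a takeoff is then entirely a question of how R behaves as I grows.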
2 Core Argument
Bostrom’s core argument in the first eight chapters of the book, as I read it, is this:
- Intelligent systems are already being built and expanded on.
- If some constant proportion of a system’s intelligence is turned into optimization power, and the recalcitrance of the system stays constant or falls, then the intelligence of the system will increase at an exponential rate. This would be a fast takeoff (a numerical sketch of this step follows the list).
- Recalcitrance is likely to be lower for machine intelligence than for human intelligence because of the physical properties of artificial computing systems.
- An intelligent system is likely to invest in its own intelligence because of the instrumental convergence thesis. Improving intelligence is an instrumental goal given a broad spectrum of other goals.
- In the event of a fast takeoff, the superintelligence is likely to gain a decisive strategic advantage, because of a first-mover advantage.
- Because of the instrumental convergence thesis, we should expect a superintelligence with a decisive strategic advantage to become a singleton.
- Machine superintelligences, which are more likely to take off fast and become singletons, are not likely to create nice outcomes for humanity by default.
- A superintelligent singleton is likely to be above the wise-singleton threshold. Hence the fate of the universe and the potential of humanity are at stake.
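To make the fast-versus-slow takeoff contrast in the argument concrete, here is a minimal numerical sketch of the optimization power / recalcitrance model. The reinvestment fraction, the recalcitrance functions, and all parameter values are illustrative assumptions, not figures from the book:

```python
import numpy as np

def simulate_takeoff(recalcitrance, c=0.05, i0=1.0, dt=0.1, steps=1000):
    """Euler-integrate dI/dt = O/R, where O = c * I models a constant
    fraction c of the system's intelligence being reinvested as
    optimization power (illustrative assumption, not Bostrom's numbers).

    recalcitrance: function mapping intelligence I to resistance R(I).
    """
    intelligence = i0
    trajectory = [intelligence]
    for _ in range(steps):
        rate = c * intelligence / recalcitrance(intelligence)
        intelligence += rate * dt
        trajectory.append(intelligence)
    return np.array(trajectory)

# Constant recalcitrance: intelligence grows exponentially (fast takeoff).
fast = simulate_takeoff(lambda i: 1.0)

# Recalcitrance rising with intelligence: growth damps to roughly linear
# (slow takeoff).
slow = simulate_takeoff(lambda i: 1.0 + i)

print(f"fast takeoff, final intelligence: {fast[-1]:.1f}")
print(f"slow takeoff, final intelligence: {slow[-1]:.1f}")
```

The only modeling choice here is the recalcitrance function: hold it constant and the reinvestment loop compounds exponentially; let it rise with intelligence and the same loop yields slow, roughly linear growth. Bostrom’s claim, the third item in the argument above, is that machine intelligence is closer to the first regime.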
Having made this argument, Bostrom goes on to discuss ways we might anticipate and control the superintelligence as it becomes a singleton, thereby securing humanity.
3 Hail Mary, Value Porosity, and Utility Diversification
This Bostrom paper introduces some new ideas related to the challenge of endowing a hypothetical future superintelligent AI with values that would cause it to act in ways that are beneficial.