Properties That Drive the Rise of Functional Information
This article explores 12 essential properties that enable systems to accumulate functional information, revealing how purpose-driven complexity emerges across nature and technology.
The Law of Increasing Functional Information proposes that across natural, biological, and technological systems, there is a directional tendency for functional complexity to accumulate over time. Unlike entropy, which describes a drift toward disorder, this law focuses on the emergence of useful order—configurations that are not merely statistically rare, but specifically suited to perform meaningful functions within a system or environment. The key insight is that when conditions allow for variation, selection, memory, and reinforcement, systems can evolve toward increasingly adaptive forms.
This evolutionary process does not require life as we know it. It is a substrate-independent principle. Whether atoms combine to form molecules, cells construct biochemical networks, or humans develop languages and institutions, functional information accumulates when systems are structured to explore, retain, and improve purpose-driven patterns. These systems are not driven purely by randomness or deterministic rules—they evolve through feedback, selection, and structural memory, all of which allow functional gains to be preserved and amplified.
For this law to operate, however, specific properties must be present. These properties are not optional—they are preconditions for functional accumulation. Without them, systems stagnate, regress, or become trapped in maladaptive patterns. With them, systems become capable of open-ended adaptation and increasing sophistication. These properties describe how a system must be structured, constrained, and enabled to support the discovery, preservation, and refinement of functions across time and scale.
The twelve properties fall into several categories. Some refer to structural conditions, such as diversity of components and mechanisms for variation. Others refer to informational capacities, such as error correction, memory, and recursion. Still others involve dynamical processes, such as selection pressure, feedback, and reinforcement. Taken together, these properties define the architecture of systems that can evolve—not just biologically, but cognitively, socially, technologically, or cosmologically.
Crucially, these properties are deeply connected to fundamental physical principles. Selection is a physical process—one that filters configurations based on how well they persist, replicate, or impact their environment. Memory is a thermodynamic investment, where energy is used to preserve structure. Feedback and recursion are informational dynamics that reshape causality, making future states dependent on interpretations of prior ones. These are not soft metaphors, but foundational processes that tie together physics, biology, and information theory into a single explanatory framework.
This article explores each of these twelve properties in detail. Together, they reveal what it takes for any system—living or nonliving—to evolve complexity that is not just statistically rare, but functionally meaningful. By understanding these principles, we gain a powerful lens for explaining the emergence of life, intelligence, and civilization—not as anomalies, but as natural consequences of deeper laws that govern how information interacts with the fabric of reality.
Summary
Diversity of Components
A wide variety of building blocks expands the system’s capacity to explore complex combinations and functions. Diversity is the substrate for innovation.
Mechanism for Novel Variation
The system must continuously generate new configurations through mutation, recombination, or experimentation. This is the engine of novelty.
Functional Selection Pressure
Selective filters evaluate which configurations perform well under real-world constraints. Only functions that enhance survival, efficiency, or purpose persist.
Feedback Loops Across Scales
Internal and cross-level feedback enables systems to learn from their outputs, self-correct, and refine behavior dynamically across time and space.
Capacity to Store Function
Memory structures—whether genetic, neural, symbolic, or social—allow systems to preserve useful configurations, enabling cumulative evolution.
Mechanism for Error Correction
To preserve and scale function, systems must detect and repair noise, drift, or mutation. Fidelity is essential to long-term adaptability.
Recursion and Self-Simulation
Systems capable of internally modeling themselves or the environment can plan, generalize, and learn at a higher level of abstraction.
Memory and Persistence
Beyond momentary memory, the system must maintain identity and continuity, supporting long-term integration of layered, functional adaptations.
Inter-domain Communication
Functional information can flow between domains—e.g., molecular, neural, symbolic—allowing higher integration and coordination of processes.
Tool Use and Externalization
Embedding functions into tools or the environment amplifies capacity, stabilizes memory, and extends the system’s influence beyond internal limits.
Strategic Exploration of State Space
Rather than random trial-and-error, intelligent systems prioritize promising regions of functional possibility, increasing innovation efficiency.
Reinforcement of Successful Functions
Effective functions are not only preserved but scaled—through replication, investment, or expansion—creating positive feedback and systemic growth.
The Properties
1. Diversity of Components
Analytical Role:
Diversity expands the system’s exploration space—the set of possible configurations it can generate. This is not aesthetic variety but structural heterogeneity with combinatorial consequences: with A distinct component types, the number of possible assemblies of length n grows as A^n, so the space of potential functions explodes as either diversity or assembly size increases. If each component type offers a unique behavior, constraint, or interaction, the system can recombine them into a vast array of higher-order structures, each of which may prove functional.
Why It Matters:
Without diversity, functional evolution is limited by the narrowness of the raw material. A homogeneous system may cycle endlessly within a limited range of configurations, unable to generate novelty. Diversity ensures raw creative capacity, allowing the system to evolve not just along one trajectory, but into entirely new regimes of functionality.
Examples:
In biology: diverse amino acids form the basis for proteins with radically different folds and functions.
In computation: diverse primitive operations and data types give programs a broad palette of building blocks to combine.
In society: cognitive diversity enables innovation and resilience.
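To make the combinatorics concrete, here is a minimal sketch in Python. It assumes configurations are simple sequences drawn from a fixed alphabet of component types; the alphabet sizes (bits, DNA bases, amino acids) are illustrative choices, not claims about any particular system.

```python
# Distinct sequences of length n drawn from A component types number A**n,
# so configuration space explodes as either diversity or length grows.
for alphabet_size, label in ((2, "bits"), (4, "DNA bases"), (20, "amino acids")):
    for length in (10, 100):
        print(f"{alphabet_size} types ({label}), length {length}: "
              f"{alphabet_size ** length:.2e} configurations")
```

Even at length 100, moving from 2 to 20 component types multiplies the space by a factor of 10^100, which is why diversity is treated here as the substrate for innovation.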
2. Mechanism for Novel Variation
Analytical Role:
A system must not only have diverse parts—it must be capable of generating new combinations, mutations, or structures from them. This introduces dynamism into the architecture. Novel variation can be random (e.g., mutations), deterministic (e.g., algorithmic recombination), or guided (e.g., via learning or planning). The essential condition is that the system regularly explores new regions of its configuration space, rather than remaining static or repeating known forms.
Why It Matters:
Functional information grows when better solutions are discovered. If no variation is introduced, no new functions can emerge; without novelty, selection has nothing to act on, and evolution halts. Moreover, without fresh variation, functions degraded by error or drift can never be rediscovered or replaced.
Examples:
In evolution: meiosis and mutation introduce genetic diversity.
In cognition: brainstorming introduces ideational variation.
In AI: gradient-based exploration or reinforcement learning generates policy diversity.
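The two classic variation operators are easy to sketch. The snippet below assumes string "genomes" over a toy alphabet; the mutation rate and alphabet are illustrative, not drawn from any real system.

```python
import random

ALPHABET = "ACGT"

def mutate(genome: str, rate: float = 0.05) -> str:
    """Resample each position with probability `rate` (point mutation)."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else base
                   for base in genome)

def recombine(a: str, b: str) -> str:
    """Single-point crossover between two equal-length genomes."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

print(mutate("AAAAAAAA"), recombine("AAAAAAAA", "TTTTTTTT"))
```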
3. Functional Selection Pressure
Analytical Role:
Not all variations are useful. Selection pressure is the evaluative mechanism that filters configurations based on whether they improve the system’s ability to persist, adapt, replicate, or achieve a goal. Importantly, selection must be functional, meaning it is context-sensitive and based on performance criteria. This transforms the raw space of possibilities into an adaptive landscape—configurations near its peaks are retained and reinforced, while the rest are discarded.
Why It Matters:
Without selection, systems drift or decay. Even with high diversity and novelty, the absence of a reliable feedback mechanism leads to noise accumulation, not progress. Selection introduces directionality—what Hazen et al. call “selection for function”—which explains why evolution can produce sustained complexity without intelligent design or foresight.
Examples:
In physics: crystal growth selects for stable lattice configurations.
In ecosystems: predators select for camouflaged or evasive traits.
In design: market feedback selects for usable, efficient products.
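Putting variation and selection together shows where the directionality comes from. The sketch below is a deliberately simple toy: it scores string genomes against a fixed target, and the target-matching fitness function stands in for any context-sensitive performance criterion.

```python
import random

ALPHABET = "ACGT"
TARGET = "GATTACAGATTACA"

def fitness(genome: str) -> int:
    """Performance criterion: number of positions matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome: str, rate: float = 0.1) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else g
                   for g in genome)

population = ["".join(random.choices(ALPHABET, k=len(TARGET))) for _ in range(60)]
for _ in range(80):
    population.sort(key=fitness, reverse=True)
    survivors = population[:12]                     # the selective filter
    population = [mutate(random.choice(survivors)) for _ in range(60)]

print(max(population, key=fitness))                 # converges toward TARGET
```

Remove either step and the run goes nowhere: variation without selection accumulates noise, and selection without variation stalls.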
4. Feedback Loops Across Scales
Analytical Role:
Feedback transforms a passive system into a responsive, learning system. Internal feedback allows subsystems to adjust based on performance; cross-scale feedback links small-scale behavior (like a gene) to large-scale outcomes (like an organism’s survival). Feedback can be negative (stabilizing) or positive (amplifying), but in both cases it allows the system to evaluate and refine its function in real time.
Why It Matters:
Feedback is essential for stability, adaptation, and learning. Without it, systems cannot detect when functions fail or need improvement. Feedback enables recursive self-optimization, where the outputs of one cycle become inputs for the next. It allows systems to be open to environmental conditions, embedding external structure into internal form.
Examples:
In biology: hormone regulation provides homeostatic feedback.
In machines: control systems adjust based on sensor readings.
In social systems: democratic institutions embed public feedback into governance.
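A negative feedback loop can be stated in a few lines of code. This minimal sketch is a proportional controller with illustrative gain and setpoint; it shows the defining move, in which the system's own output error is fed back as the next correction.

```python
setpoint, state, gain = 20.0, 5.0, 0.3   # illustrative values

for _ in range(25):
    error = setpoint - state             # compare output to target
    state += gain * error                # feed the error back as correction

print(round(state, 3))                   # has converged close to 20.0
```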
5. Capacity to Store Function
Analytical Role:
The accumulation of functional information depends on the ability to retain configurations that work. This requires a system to have memory structures—mechanisms that preserve specific patterns, interactions, or instructions over time. Storage can be physical (e.g., DNA, neural networks), symbolic (e.g., language, software code), or institutional (e.g., norms, routines). The function must not merely occur—it must be preserved and reproducible.
Why It Matters:
Without storage, all functional adaptation is ephemeral. The system would need to rediscover useful configurations repeatedly, wasting energy and time. Memory makes cumulative evolution possible. It also allows for modular reuse, where successful functions can be repurposed in new contexts—one of the most efficient forms of innovation.
Examples:
In biology: genetic material stores templates for protein synthesis.
In technology: databases and codebases persist functional operations.
In culture: books, institutions, and traditions preserve strategies for survival or success.
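One way to see storage paying off is memoization, where a computed result is preserved and reused instead of rediscovered. A minimal sketch using Python's standard library cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # memory structure: results persist once found
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))   # instant with storage; infeasible if every call recomputed
```

Without the cache, the same subproblems would be rediscovered exponentially many times; with it, each useful result is found once and reused, which is cumulative evolution in miniature.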
6. Mechanism for Error Correction
Analytical Role:
As systems become more complex, they are also more fragile—small errors can disrupt function. Therefore, systems must include error detection and correction mechanisms to preserve functional configurations against decay, mutation, or noise. These mechanisms act as informational repair protocols that ensure fidelity across replication, processing, and interpretation.
Why It Matters:
Accumulating functional information requires long-term stability and reliability. Without error correction, systems will regress due to entropy or accumulated noise. Robustness is necessary to protect progress and enable further refinement. It also allows systems to operate near the edge of chaos, where innovation is possible but not self-destructive.
Examples:
In cells: DNA repair enzymes fix mutations.
In computing: checksums and validation loops detect and correct faulty data.
In human systems: peer review, auditing, or legal systems correct deviant behavior or error.
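Here is a minimal sketch of the idea, using a triple-repetition code with majority-vote readout. Real systems (DNA repair, ECC memory, network checksums) use far more efficient codes, so this is illustrative only.

```python
from collections import Counter

def encode(message: str) -> str:
    """Store each symbol three times."""
    return "".join(ch * 3 for ch in message)

def decode(stored: str) -> str:
    """Majority vote within each triple repairs isolated corruption."""
    triples = (stored[i:i + 3] for i in range(0, len(stored), 3))
    return "".join(Counter(t).most_common(1)[0][0] for t in triples)

stored = encode("FUNCTION")
corrupted = stored[:4] + "x" + stored[5:]   # noise flips one symbol
print(decode(corrupted))                    # -> FUNCTION
```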
7. Recursion and Self-Simulation
Analytical Role:
Recursive systems can reference, simulate, or apply functions to themselves. They generate higher-order structures where output becomes input in nested cycles. This enables a powerful capability: internally simulating the outcomes of potential actions before committing to them, improving efficiency and foresight. Self-reference also enables meta-learning, or learning how to learn.
Why It Matters:
Recursion accelerates the exploration of functional possibilities by allowing internal experimentation before external execution. It is crucial for abstract thought, strategy, and generalization. Without it, systems rely solely on trial and error in the environment, which is slower and riskier. Recursive architectures are often found at the heart of intelligence.
Examples:
In programming: recursive functions solve problems by referencing themselves.
In language: grammar and syntax exhibit recursive patterns.
In cognition: mental models simulate future actions or consequences.
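The planning benefit of recursion can be shown in a few lines. In this toy sketch, an agent must reach or pass a target number using +1 or *2 moves; rather than acting in the world, it recursively simulates both options and returns the cheapest simulated path. The state space is an illustrative invention.

```python
def best_cost(state: int, target: int, depth: int = 0) -> int:
    """Fewest moves to reach `target`, found purely by internal simulation."""
    if state >= target:
        return depth
    # each simulated outcome becomes the input of a deeper simulation
    return min(best_cost(state + 1, target, depth + 1),
               best_cost(state * 2, target, depth + 1))

print(best_cost(1, 37))   # -> 6, discovered without a single real-world trial
```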
8. Memory and Persistence
Analytical Role:
Beyond immediate storage, systems must possess long-term durability—a stable identity that persists across disruption, reproduction, or change. Persistence is the temporal context in which functional information can unfold. It enables layered complexity, where functions are not only executed but maintained and integrated over time.
Why It Matters:
Systems that degrade too quickly cannot build up complex functionality. Persistence allows for iterative refinement, where feedback from one phase can inform the next. It also enables integration across timescales, a hallmark of sophisticated adaptive systems. Importantly, persistence doesn't mean rigidity—it can include mechanisms for flexible continuity, like developmental plasticity or institutional evolution.
Examples:
In biology: multicellular organisms maintain identity over time despite turnover of cells.
In software: long-running systems accumulate logs, updates, and refinements.
In society: traditions and constitutions persist through generations and upheaval.
9. Inter-domain Communication
Analytical Role:
Inter-domain communication refers to the ability of a system to translate or transfer information between distinct subsystems or representational formats. These domains might differ in scale (molecular ↔ organism), medium (neural ↔ linguistic), or modality (chemical ↔ symbolic). The key is that functionality generated in one part of the system becomes input, context, or structure in another. This allows emergent functions to cascade, co-evolve, and synergize across the system.
Why It Matters:
Functional information grows not just through local optimization, but through cross-pollination of function between domains. It enables complex coordination, emergent intelligence, and the integration of diverse types of constraints and goals. Communication also multiplies the reuse of functional components, creating a more efficient architecture for scaling complexity.
Examples:
In biology: gene expression leads to protein folding, which influences behavior.
In human society: speech turns into written law, which affects social behavior.
In AI systems: perception modules feed into planning modules, which influence motor outputs.
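The AI example can be rendered as a minimal pipeline sketch; module names, thresholds, and commands are all illustrative. Each stage consumes the previous stage's representation and re-encodes it in its own format, which is the essence of inter-domain translation.

```python
def perceive(sensor_reading: float) -> str:          # numeric -> symbolic
    return "obstacle" if sensor_reading < 0.5 else "clear"

def plan(percept: str) -> str:                       # symbolic -> decision
    return "turn_left" if percept == "obstacle" else "forward"

def act(command: str) -> tuple[float, float]:        # decision -> motor output
    return {"forward": (1.0, 1.0), "turn_left": (-0.5, 0.5)}[command]

print(act(plan(perceive(0.3))))                      # -> (-0.5, 0.5)
```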
10. Tool Use and Externalization
Analytical Role:
Tool use allows systems to externalize function, embedding intelligence or control into the environment. This expands both memory and capability without increasing internal complexity. Externalization acts as a multiplier of functionality, turning the outside world into a substrate for storing, amplifying, or distributing functional information. Tools may be physical (hammers), conceptual (formulas), or institutional (contracts).
Why It Matters:
Internal limits constrain the growth of functional complexity. By projecting structure into the external world, systems can bootstrap themselves into new functional regimes. Externalization also allows functional memory to outlast the system itself—e.g., in books, architecture, or genetic legacies—enabling long-term accumulation and ecosystem-level coordination.
Examples:
In primates: use of sticks or stones to manipulate the environment.
In humans: writing, agriculture, and software all externalize function.
In organisms: niche construction modifies environmental constraints to support survival.
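Externalization has a direct software analogue: state written to the environment outlives the process that produced it. A minimal sketch, assuming a local JSON file as the "environment"; the filename and payload are illustrative.

```python
import json
from pathlib import Path

store = Path("external_memory.json")

# One "organism" embeds a discovered function into the environment...
knowledge = {"route_home": ["north", "east", "east"]}
store.write_text(json.dumps(knowledge))

# ...and a later process, with no internal memory of it, recovers and reuses it.
recovered = json.loads(store.read_text())
print(recovered["route_home"])
```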
11. Strategic Exploration of State Space
Analytical Role:
Rather than exploring randomly, advanced systems bias exploration toward functionally promising areas of configuration space. This can be achieved through heuristics, learning algorithms, curiosity-driven behavior, or planning. The result is accelerated discovery of viable and higher-quality functional configurations. Strategic exploration increases the ratio of useful to useless variation.
Why It Matters:
Without strategy, most exploration is wasteful, especially in large or sparse spaces. Strategic exploration allows for targeted innovation, where prior functional knowledge guides the generation of novelty. This is a critical accelerator in systems that need to adapt in complex or adversarial environments, and a core trait of intelligent behavior.
Examples:
In evolution: behavioral plasticity allows organisms to test new strategies in response to context.
In AI: Monte Carlo Tree Search and exploration bonuses in reinforcement learning steer search toward promising regions.
In science: hypothesis-driven inquiry strategically probes causal space.
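Since the UCB1 rule is also the selection step at the heart of Monte Carlo Tree Search, a multi-armed bandit makes a compact illustration of strategic exploration: trials concentrate where observed payoff plus remaining uncertainty is highest, rather than being spread uniformly. The arm payoffs below are illustrative.

```python
import math
import random

payoffs = [0.2, 0.5, 0.8]              # hidden quality of each option
counts, totals = [0, 0, 0], [0.0, 0.0, 0.0]

for t in range(1, 1001):
    if 0 in counts:                    # sample every option at least once
        arm = counts.index(0)
    else:                              # then bias trials toward promising regions
        arm = max(range(3), key=lambda a: totals[a] / counts[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))
    reward = 1.0 if random.random() < payoffs[arm] else 0.0
    counts[arm] += 1
    totals[arm] += reward

print(counts)   # the bulk of the 1000 trials land on the best arm
```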
12. Reinforcement of Successful Functions
Analytical Role:
Once a useful configuration is discovered, systems need mechanisms to amplify, replicate, or invest in it. This reinforcement acts as a selection memory, increasing the probability that similar configurations will be encountered or preserved in the future. It also guides energy or resources toward expanding that functionality, creating positive feedback loops that consolidate success.
Why It Matters:
Discovery alone is insufficient. For functional information to accumulate, systems must not only retain useful structures but also build upon them preferentially. Reinforcement creates a pathway from isolated success to structured hierarchy—e.g., from a single innovation to a standardized module to an integral part of system architecture.
Examples:
In evolution: reproductive success amplifies functional traits.
In neural learning: Hebbian reinforcement strengthens successful synaptic pathways.
In society: profitable innovations attract investment and replication.
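The Hebbian example reduces to a one-line update rule: the weight between two units grows in proportion to their co-activation. This sketch uses illustrative activity patterns and learning rate.

```python
pre  = [1, 0, 1, 1]        # presynaptic activity pattern
post = [1, 1, 0, 1]        # postsynaptic activity pattern
weights = [0.0] * 4
rate = 0.1

for _ in range(50):        # repeated co-activation consolidates the pathway
    weights = [w + rate * p * q for w, p, q in zip(weights, pre, post)]

print([round(w, 3) for w in weights])   # -> [5.0, 0.0, 0.0, 5.0]
```

Connections where only one side fires stay at zero, which is exactly the positive feedback the text describes: success (co-firing) attracts further investment (weight).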