The Formative Mind: Theories of Consciousness as Practice

Community Article · Published September 22, 2025

Short version: Instead of treating consciousness as a passive byproduct of a powerful unconscious engine, think of it as the engine itself: a process that builds rich representations (self-organizing), predicts and models its own processing (metarepresentation), and thereby brings an agent and its world into being (individuation). Below is a synthesis of three complementary research programs: Self-Organizing Consciousness (SOC), metarepresentational theories of control (Attention Schema Theory and hierarchical forward models), and Open-Ended Intelligence (OEI).


1 — Problem statement: the old story and why it misleads

Traditional cognitive science assumes a big, smart “cognitive unconscious” that does the heavy computation; consciousness is a secondary, slow, capacity-limited display that just watches results. That picture creates two problems:

  1. It multiplies explanatory mechanisms (hidden computations + consciousness) unnecessarily.
  2. It treats consciousness as epiphenomenal — helpful for description but not for control or learning.

The alternative proposed here collapses those two layers: phenomenal experience is the primary representational medium the system uses to learn and control itself. The rest of this article explains how that can work and why it matters for ML.


2 — Self-Organizing Consciousness (SOC): learning from experience, not from hidden rules

Core claim: The brain builds the world inside experience itself. Learning operates on moment-to-moment conscious contents; there isn’t a separate hidden “store” doing the heavy lifting.

How it works (mechanism):

  • Chunking by attention: Attention samples small portions of input (speech, visual scenes, action streams).
  • Associative binding: Repeated co-occurrences in attention get linked (Hebbian-style).
  • Forgetting as selection: Rare/noisy chunks decay; repeated, useful chunks survive and become stable representations (words, objects, skills).

Illustration — language: Instead of assuming an innate grammar, SOC shows how infants could extract words from unbroken speech: random chunking → reinforcement of repeated chunks → surviving units form lexical and higher-order structures.
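
To make the mechanism concrete, here is a minimal Python sketch of the chunk-reinforce-decay loop running on an artificial syllable stream. The toy lexicon, window sizes, and gain/decay constants are illustrative assumptions, not parameters from the SOC literature:

```python
import random
from collections import defaultdict

# Hidden "words" used to generate the unbroken stream (illustrative).
WORDS = ["babu", "dutaba", "pigo", "tudaro"]

def make_stream(n_words=2000, seed=0):
    """Concatenate random words into one unsegmented syllable stream."""
    rng = random.Random(seed)
    syllables = []
    for _ in range(n_words):
        word = rng.choice(WORDS)
        # Split into two-letter syllables: "dutaba" -> ["du", "ta", "ba"].
        syllables.extend(word[i:i + 2] for i in range(0, len(word), 2))
    return syllables

def learn(stream, gain=1.0, decay=0.02, seed=0):
    """Chunk by attention, reinforce repeats, let rare chunks decay."""
    rng = random.Random(seed)
    lexicon = defaultdict(float)                  # chunk -> strength
    i = 0
    while i < len(stream):
        k = rng.randint(1, 3)                     # attention grabs 1-3 units
        lexicon["".join(stream[i:i + k])] += gain # associative binding
        for chunk in list(lexicon):               # forgetting as selection
            lexicon[chunk] -= decay
            if lexicon[chunk] <= 0:
                del lexicon[chunk]
        i += k
    return dict(lexicon)

survivors = learn(make_stream())
for chunk in sorted(survivors, key=survivors.get, reverse=True)[:8]:
    print(chunk, round(survivors[chunk], 1))
# Words and their frequent sub-chunks dominate; chunks spanning word
# boundaries recur too rarely to outpace decay and are forgotten.
```

Nothing in the loop knows what a "word" is; the stable units fall out of reinforcement and forgetting alone, which is exactly the SOC claim.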

Takeaway: Rich, usable representations can emerge from simple local rules operating on conscious content; intuition is the fast readout of those well-organized representations.


3 — Metarepresentational control: awareness as a tool for self-management

SOC explains what the system builds; metarepresentational accounts explain how it steers and regulates itself.

Attention Schema Theory (AST)

  • Idea: Awareness = a compact internal model of attention.
  • Function: A simplified, predictive description of attention gives the system better top-down control and makes it possible to infer others’ focus (social cognition); a minimal sketch follows this list.
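
As a toy version of AST’s claim, the sketch below compresses a 20-dimensional attention state into two numbers (a focus and a sharpness) kept current by prediction error. The Gaussian-bump parameterization and the update rule are illustrative assumptions, not part of AST’s specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                   # attention over 20 locations

def attention(salience, gain=4.0):
    """The 'real' attention state: a softmax over bottom-up salience."""
    z = np.exp(gain * (salience - salience.max()))
    return z / z.sum()

class AttentionSchema:
    """A deliberately compact model of attention: just focus and sharpness."""
    def __init__(self, n, lr=0.1):
        self.n, self.lr, self.mu, self.width = n, lr, n / 2, 1.0

    def predict(self):
        """Reconstruct a full n-dim attention map from two parameters."""
        x = np.arange(self.n)
        p = np.exp(-0.5 * ((x - self.mu) / self.width) ** 2)
        return p / p.sum()

    def update(self, attn):
        """Move the schema toward the observed attention state."""
        x = np.arange(self.n)
        mu_obs = float(x @ attn)                     # attention centroid
        w_obs = float(np.sqrt(x ** 2 @ attn - mu_obs ** 2)) + 1e-6
        self.mu += self.lr * (mu_obs - self.mu)      # error-driven nudges
        self.width += self.lr * (w_obs - self.width)

schema = AttentionSchema(n)
for _ in range(200):
    salience = rng.normal(size=n)
    salience[12] += 5.0                  # one reliably salient location
    schema.update(attention(salience))

print(f"schema focus ~ {schema.mu:.1f} (true hotspot at 12)")
print("reconstructed map peak:", int(schema.predict().argmax()))
# Two parameters now track a 20-dim state well enough for top-down use,
# e.g. re-aiming attention toward mu or reporting "where I am attending".
```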

Hierarchical Forward Models (HFM)

  • Idea: Awareness comes from stacked predictive models that predict the outputs of lower models (a model of a model).
  • Function: A first internal model (IM1) predicts a pathway’s output; a second (IM2) predicts IM1’s state. Prediction errors at each level update the models and create actionable self-knowledge (see the sketch below).
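
Here is a minimal numerical sketch of the two-level scheme. It keeps the IM1/IM2 naming from above but assumes, purely for illustration, simple linear models trained with a delta rule:

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed "pathway": input -> output, unknown to the models.
W_path = rng.normal(size=(3, 5))
pathway = lambda x: np.tanh(W_path @ x)

W1 = np.zeros((3, 5))   # IM1: predicts the pathway's output from the input
W2 = np.zeros((3, 5))   # IM2: predicts IM1's state (its current prediction)

lr = 0.05
for step in range(3000):
    x = rng.normal(size=5)
    y = pathway(x)

    y1 = W1 @ x                 # first-order prediction of the world
    e1 = y - y1                 # first-order prediction error
    W1 += lr * np.outer(e1, x)  # delta-rule update driven by e1

    y2 = W2 @ x                 # second-order prediction: what will IM1 say?
    e2 = y1 - y2                # error about one's own model, not the world
    W2 += lr * np.outer(e2, x)

print(f"|e1| = {np.linalg.norm(e1):.3f}, |e2| = {np.linalg.norm(e2):.3f}")
# e2 is actionable self-knowledge: a large |e2| flags that IM1 is still
# changing (still learning) even before any external feedback arrives.
```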

Takeaway: Metarepresentation turns experience into controllable dynamics: the system learns models of its own models, enabling regulation, planning, and social inference.


4 — Open-Ended Intelligence (OEI): consciousness as world-building

Core claim: Intelligence isn’t just solving pre-specified problems; it’s the process of individuation—forming an agent, goals, and a world together.

Key dynamics enabling individuation:

  • Edge of chaos: Systems operate between rigidity and randomness; this regime supports both stability and novelty (see the numerical sketch after this list).
  • Metastability: Transient, repeatable neural states allow sequence and coherence without lock-in.
  • Topological stability: Durable thoughts correspond to stable network structures (non-contractible loops); intuition reconnects these structures to generate novelty.
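
The edge-of-chaos regime is easy to demonstrate numerically. The sketch below perturbs a random recurrent network and varies a single gain parameter; the network and constants are illustrative, not drawn from the OEI literature:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
W = rng.normal(scale=1 / np.sqrt(N), size=(N, N))  # random recurrent weights

def divergence(gain, steps=50, eps=1e-6):
    """How fast two nearly identical states separate under the dynamics."""
    h1 = rng.normal(size=N)
    h2 = h1 + eps * rng.normal(size=N)
    for _ in range(steps):
        h1 = np.tanh(gain * W @ h1)
        h2 = np.tanh(gain * W @ h2)
    return np.linalg.norm(h1 - h2)

for gain in (0.5, 1.0, 1.5):
    print(f"gain={gain}: separation after 50 steps = {divergence(gain):.2e}")
# gain < 1: perturbations die out (rigid, ordered regime)
# gain > 1: perturbations explode (chaotic regime)
# gain ~ 1: the transition region, where perturbations neither vanish
# quickly nor explode: the regime OEI associates with stability plus novelty.
```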

Takeaway: Conscious processes, when coupled with metarepresentation, don’t just represent the world — they actively create meaningful structure within it.


5 — Unified picture: the formative mind

A compact workflow:

  1. SOC sculpts rich representations from raw experience.
  2. AST/HFM build self-models that predict and control attention and processing.
  3. OEI situates these mechanisms as a process that forms agents and problems, not merely solves them.

In short: experience → representation → self-model → guided individuation.
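
Purely as a schematic, the loop can be written out in code; every function here is a hypothetical placeholder for the mechanisms sketched in earlier sections:

```python
# Schematic control flow only: each function stands in for a mechanism
# from sections 2-4, reduced to a few lines.

def sample_experience(stream, t, focus):
    # Attention samples the input; top-down focus biases what is sampled.
    return stream[(t + focus) % len(stream)]

def update_representations(lexicon, item):
    # SOC-style: reinforce what was attended, decay everything else.
    lexicon = {k: 0.95 * v for k, v in lexicon.items()}
    lexicon[item] = lexicon.get(item, 0.0) + 1.0
    return lexicon

def update_self_model(lexicon):
    # AST/HFM-style: a compact summary of the system's own processing.
    return max(lexicon, key=lexicon.get)

def choose_focus(self_model):
    # OEI-style: the self-model feeds back into what gets sampled next,
    # closing the loop that forms agent and world together.
    return ord(self_model) % 3

stream, lexicon, focus = list("abcabdabcabc"), {}, 0
for t in range(60):
    item = sample_experience(stream, t, focus)
    lexicon = update_representations(lexicon, item)
    self_model = update_self_model(lexicon)
    focus = choose_focus(self_model)

print("self-model:", self_model)
```

The point of the skeleton is the dependency order: representation feeds the self-model, and the self-model loops back to shape the next experience.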


6 — Conclusion: from spectators to builders

The formative view reframes consciousness from a spectator to a maker: a process that constructs the representations a system uses to think, control, and co-create its world. For ML, the lesson is clear: build systems that represent richly, predict themselves, and pursue open-ended formation. That’s a practical path toward agents that don’t just solve our problems — they learn what problems matter.
