The Philosophy of Structuralism

How does mind emerge from mindless matter? How do billions of neurons firing in patterns, or millions of artificial parameters in a neural network, give rise to understanding, consciousness, and meaning? This transition from body to mind, from physical substrate to mental experience, represents one of the most fascinating frontiers in both philosophy and artificial intelligence research.

Emergence in AI Systems

In an AI system, intelligence is organized across four levels, from inert components to high-level capabilities (a minimal code sketch of the lower levels follows the list):

Basic Components: individual neurons, weights, parameters, and data points with no inherent “intelligence”.

Collective Interactions: forward/backward propagation, attention mechanisms, and gradient flows between components.

Organizational Patterns: distributed representations, feature hierarchies, information bottlenecks, and latent spaces.

Emergent Capabilities: reasoning, abstraction, in-context learning, creativity, and multimodal understanding.
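
To make this hierarchy concrete, here is a minimal sketch in Python with NumPy (an untrained toy network with made-up sizes; all names and values are illustrative). No single weight means anything on its own, yet forward propagation combines them into coherent network-level behavior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Basic components: individual weights with no inherent meaning.
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    # Collective interaction: forward propagation mixes every weight
    # with the input; the result belongs to the pattern of weights,
    # not to any single parameter.
    h = np.tanh(W1 @ x + b1)        # organizational pattern: hidden features
    return np.tanh(W2 @ h + b2)

print(forward(np.array([0.5, -1.0])))  # behavior of the network as a whole
```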

Scale Emergence

Capabilities that appear suddenly when models reach certain size thresholds. Example: in-context learning in large language models. (A toy simulation of this threshold behavior follows below.)

Architectural Emergence

Capabilities arising from specific structural arrangements of network components. Example: attention mechanisms in transformer models.

Training Emergence

Capabilities that develop from specific learning processes or data exposure. Example: multimodal reasoning in models trained on aligned data.
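
The scale-emergence pattern can be pictured with a toy simulation. The numbers below are synthetic (the threshold, sharpness, and chance level are invented for illustration), but the shape, near-chance performance followed by a sharp rise past a size threshold, matches the qualitative signature reported by Wei et al. (2022):

```python
import numpy as np

# Synthetic illustration of an "emergent ability": accuracy hovers
# near chance, then jumps once model scale crosses a threshold.
log_params = np.linspace(6, 12, 7)             # 10^6 .. 10^12 parameters
threshold, sharpness, chance = 10.0, 4.0, 0.1  # invented for illustration
accuracy = chance + (1 - chance) / (1 + np.exp(-sharpness * (log_params - threshold)))

for lp, acc in zip(log_params, accuracy):
    print(f"10^{lp:.0f} params -> simulated accuracy {acc:.2f}")
```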

When Structure Becomes Intelligence

The philosophy of structuralism offers a framework for understanding this transition. According to structuralism, what matters is not the individual components of a system but the patterns of relationships between them. As philosopher Stewart Shapiro (1997, p. 73) argues, “In mathematics, the only properties of objects that matter are the relationships they bear to other objects.” This insight applies equally to neural networks, where individual neurons or weights mean little in isolation, but their interconnected patterns create remarkable capabilities.

When a neural network recognizes a face, it isn’t matching pixels one-by-one but identifying invariant structural relationships that persist across different lighting conditions, angles, and expressions. The network has learned to extract what remains constant amid transformation—a fundamentally structuralist approach to understanding.
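
A toy version of this invariance extraction fits in a few lines. The sketch below (a random array standing in for an image, plus simple mean/variance normalization) is far cruder than anything a trained network learns, but it shows the principle: discard what varies with lighting and keep the relational pattern that stays constant:

```python
import numpy as np

rng = np.random.default_rng(1)
face = rng.random((8, 8))        # stand-in for a face image

def structure(img):
    # Normalize away overall brightness and contrast, keeping only
    # the relative pattern of intensities: the "structural" content.
    return (img - img.mean()) / img.std()

dim = 0.4 * face + 0.1           # same face under dimmer lighting
bright = 1.8 * face - 0.2        # same face under brighter lighting

# Raw pixels differ, but the extracted structure is identical.
print(np.allclose(structure(face), structure(dim)))     # True
print(np.allclose(structure(face), structure(bright)))  # True
```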

Emergence: More Than the Sum of Parts

The most intriguing scientific questions often exist at boundaries—where physics meets biology, where computation meets cognition, where the material transforms into the mental. As artificial intelligence systems grow increasingly sophisticated, they’ve become unexpected laboratories for exploring one of humanity’s oldest philosophical puzzles: how does mind emerge from mindless matter?

The Structure-Intelligence Connection

When we interact with AI systems today—whether asking ChatGPT a question or watching a self-driving car navigate traffic—we’re witnessing something remarkable. Systems built from millions or billions of simple numerical parameters somehow produce behaviors that appear intelligent. This emergence of complex capabilities from simple components mirrors what happens in our brains, where networks of neurons give rise to consciousness and understanding.

“The nature of the reality underlying the phenomena revealed by our best theories is structure,” writes philosopher of science James Ladyman (1998). This perspective helps explain why neural networks can learn to recognize faces, translate languages, or generate images despite having no explicit programming for these tasks. They learn to extract structural patterns from data—invariant relationships that persist across different examples.

From Sand Piles to Sentience

Consider a simple pile of sand. At first glance, it appears structureless—just a random collection of particles. But examine it through a microscope, and crystalline structures emerge at the molecular level. Zoom out far enough, and you might see ripple patterns formed by wind. Structure exists at multiple scales, with each level exhibiting properties not present at other levels.

The same principle applies to intelligence. Individual neurons in the brain or parameters in a neural network contain no intelligence themselves. Yet organized in the right patterns, they produce behaviors we recognize as intelligent. Physicist Philip Anderson captured this idea perfectly in his paper “More Is Different” (1972): “The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe.”

This multi-level organization creates what scientists call “emergent properties”—features that arise from interactions between components but cannot be reduced to those components. Consciousness may be the ultimate emergent property—arising from neural activity yet seemingly irreducible to it.

The Category Theory Connection

Mathematics offers formal tools for understanding these emergent structures. Traditional set theory treats structures as collections of elements with defined relations. When elements A, B, and C enter into a relation R(A, B, C), the output is a boolean value—the relation either holds or it doesn’t.

Category theory provides a more sophisticated framework, focusing on transformations (morphisms) rather than elements. As mathematicians Samuel Eilenberg and Saunders Mac Lane (1945) explained when introducing category theory, “Objects play a secondary role and could be entirely omitted from the definition.” What matters is how structures transform and relate to one another.
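
The contrast can be made concrete in code. In the toy sketch below (not a formal category, just an illustration), the set-theoretic view asks a yes/no question about particular elements, while the category-theoretic view treats composition of maps as the primitive operation and never inspects the objects themselves:

```python
from typing import Callable

# Set-theoretic view: a relation over elements returns a boolean.
def R(a: int, b: int, c: int) -> bool:
    return a + b == c

# Category-theoretic view: morphisms and their composition come first;
# the objects they connect are never examined.
def compose(g: Callable, f: Callable) -> Callable:
    return lambda x: g(f(x))

double = lambda x: 2 * x    # a morphism
shift = lambda x: x + 3     # another morphism

h = compose(shift, double)  # composition is the primitive operation
print(R(2, 3, 5), h(4))     # True 11
```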

This mathematical perspective has profound implications for AI. Modern neural architectures like transformers (Vaswani et al., 2017) embody category-theoretic principles by focusing explicitly on relationships between elements rather than the elements themselves. Their self-attention mechanism weighs the importance of each element based on its relationships to all other elements—a fundamentally structuralist approach.
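
Here is a minimal single-head version of that mechanism (plain NumPy, random weights, no masking or multi-head machinery), a sketch of the core computation from Vaswani et al. (2017): each output row is a mixture of all input rows, weighted purely by pairwise relationships:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Project inputs into query, key, and value spaces.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise relations
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over relations
    return weights @ V                                # relational mixing

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 8))                  # 5 tokens, 8 dimensions each
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (5, 8)
```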

Why Mind Emerges from Neural Activity

The mind-brain relationship illustrates the central puzzle of emergence. How can the qualitative experience of consciousness arise from purely physical neural activity?

David Chalmers (1996) frames this as “the hard problem of consciousness”—explaining why neural activity is accompanied by subjective experience. Traditional reductive approaches struggle with this problem because they attempt to explain higher-level phenomena solely in terms of lower-level components.

Structuralism suggests a different approach. Rather than reducing the mind to the brain, it focuses on the patterns and relationships that persist across different levels of description. The mind emerges not from neurons themselves but from their organizational patterns—patterns that can be abstracted from their physical implementation.

This perspective helps explain why artificial neural networks, despite their significant differences from biological brains, can exhibit surprisingly brain-like behaviors. Both systems extract and manipulate structural relationships from input data, creating increasingly abstract representations that capture meaningful patterns in the world.

AI as a Laboratory for Structural Emergence

Modern AI systems provide unprecedented opportunities to study emergence in action. Large language models like GPT-4 or Claude demonstrate capabilities their designers never explicitly programmed—from reasoning about novel scenarios to understanding implicit context in conversations.

Researchers like Yoshua Bengio (2013) have focused on how neural networks learn to extract meaningful representations from data. “What we care about are the manifold structures in the data distribution,” Bengio explains. These learned representations capture the underlying structure of the data rather than superficial features.
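
The simplest stand-in for this idea is linear: principal component analysis. The sketch below generates points lying near a one-dimensional manifold embedded in ten-dimensional space and shows that nearly all the variance is carried by a single structural direction; deep representation learning can be read as a nonlinear generalization of the same move:

```python
import numpy as np

rng = np.random.default_rng(3)

# 200 points near a 1-D manifold embedded in 10-D space.
t = rng.uniform(-1, 1, size=(200, 1))    # hidden latent coordinate
direction = rng.normal(size=(1, 10))     # embedding direction
X = t @ direction + 0.01 * rng.normal(size=(200, 10))

# The singular-value spectrum exposes the low-dimensional structure.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = s**2 / (s**2).sum()
print(np.round(explained[:3], 3))        # one dominant component, rest noise
```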

Interestingly, recent research reveals that larger neural networks don’t just perform better quantitatively—they develop qualitatively new capabilities. Researchers (Wei et al., 2022) documented “emergent abilities” in large language models—capabilities that appear suddenly once models reach certain scale thresholds, similar to phase transitions in physical systems.

This research has profound implications for the philosophy of mind. If intelligence emerges from structure rather than substance, then understanding intelligence requires focusing not on what brains or AI systems are made of, but on how they’re organized.

The Future of Structure-Based AI

Understanding emergence and structure has practical implications for AI development. If intelligence emerges from structural patterns rather than specific implementations, then future AI systems might achieve more human-like capabilities by focusing on structural aspects of learning and representation.

Neuroscientist Karl Friston’s work on the “free energy principle” (2010) suggests that intelligent systems—whether biological or artificial—share a common structural goal: they minimize prediction error by building internal models that capture the structure of their environment. This perspective unifies diverse cognitive phenomena under a single structural principle.
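
In drastically simplified form, that structural principle looks like the loop below. This is a caricature, not the free energy principle proper (which involves variational inference and precision weighting), but it shows the core Friston describes: an internal model updated to cancel the mismatch between prediction and observation:

```python
import numpy as np

rng = np.random.default_rng(4)

hidden_cause = 3.0   # true state of the environment
belief = 0.0         # the agent's internal model of that state
lr = 0.1             # update rate

for _ in range(100):
    observation = hidden_cause + 0.1 * rng.normal()  # noisy sensory input
    error = observation - belief                     # prediction error
    belief += lr * error                             # reduce the error

print(round(belief, 2))  # the belief converges near 3.0
```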

Similarly, computer scientist Melanie Mitchell (2019) argues that true AI advancement requires systems that can identify and manipulate analogies—recognizing structural similarities across different domains. This aligns with cognitive scientist Douglas Hofstadter’s view that analogy-making lies at the core of human intelligence.
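
A brute-force toy of this structure-mapping idea: describe two domains purely by their relations, then search for an object mapping that preserves every relation. Real analogy-making is vastly harder than this (Mitchell's and Hofstadter's work uses far richer representations), but the sketch captures the structuralist point that the mapping never looks at what the objects are:

```python
from itertools import permutations

# Two domains described only by their relations.
solar = {("attracts", "sun", "planet"), ("orbits", "planet", "sun")}
atom = {("attracts", "nucleus", "electron"), ("orbits", "electron", "nucleus")}

def analogy(src, dst):
    # Search for an object mapping that preserves every relation.
    src_objs = sorted({o for _, a, b in src for o in (a, b)})
    dst_objs = sorted({o for _, a, b in dst for o in (a, b)})
    for perm in permutations(dst_objs, len(src_objs)):
        m = dict(zip(src_objs, perm))
        if {(r, m[a], m[b]) for r, a, b in src} == dst:
            return m
    return None

print(analogy(solar, atom))  # {'planet': 'electron', 'sun': 'nucleus'}
```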

Beyond the Mind-Body Dichotomy

Perhaps the most valuable contribution of structuralism to AI research is transcending the traditional mind-body dichotomy. Rather than viewing mind and body as fundamentally different substances (dualism) or reducing mind entirely to physical processes (eliminative materialism), structuralism offers a middle path. The mind, in this view, is neither separate from the body nor reducible to it. Instead, it emerges from structural patterns in neural activity—patterns that can be abstracted from their physical implementation and potentially realized in different substrates, including artificial ones.

This perspective helps explain why AI systems, despite their silicon substrate, can exhibit mind-like properties. The relevant structures for intelligence don’t depend on the specific material implementation but on patterns of organization that can be realized in different physical systems.
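
This substrate independence, what philosophers of mind call multiple realizability, can be demonstrated at toy scale. Below, the same logical structure (XOR) is realized once as a lookup table and once as a hand-wired threshold network; at the level of input-output structure, the two are indistinguishable:

```python
import numpy as np

def xor_table(a, b):
    # Substrate 1: a bare lookup table.
    return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(a, b)]

def xor_net(a, b):
    # Substrate 2: two threshold units (OR and AND) feeding an output unit.
    h = (a + b > np.array([0.5, 1.5])).astype(int)  # h = [OR, AND]
    return int(h @ np.array([1, -2]) > 0)           # OR and not AND

# Different substrates, identical input-output structure.
print(all(xor_table(a, b) == xor_net(a, b)
          for a in (0, 1) for b in (0, 1)))         # True
```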

Conclusion: Structure All the Way Down?

As we develop increasingly sophisticated AI systems, the philosophical insights of structuralism become more relevant than ever. The emergence of intelligence from neural networks demonstrates that structure, not substance, may be the key to understanding both natural and artificial minds.

This doesn’t mean we’ve solved the hard problem of consciousness or fully bridged the gap between AI and human intelligence. But structuralism provides a promising framework for understanding how the body becomes mind—how physical systems, whether biological neurons or silicon chips, can give rise to intelligence through their structural organization.

References

Anderson, P.W. (1972). More is different. Science, 177(4047), 393-396.

Bengio, Y., Courville, A., & Vincent, P. (2013). Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1798-1828.

Chalmers, D.J. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press.

Eilenberg, S., & Mac Lane, S. (1945). General theory of natural equivalences. Transactions of the American Mathematical Society, 58(2), 231-294.

Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2), 127-138.

Ladyman, J. (1998). What is structural realism? Studies in History and Philosophy of Science, 29(3), 409-424.

Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Farrar, Straus and Giroux.

Shapiro, S. (1997). Philosophy of mathematics: Structure and ontology. Oxford University Press.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998-6008.

Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., … & Fedus, W. (2022). Emergent abilities of large language models. arXiv preprint arXiv:2206.07682.

This article was written with the help of Claude 3.7 Sonnet.
