From K Allado-McDowell’s essay “Designing Neural Media”, written for Gropius Bau.
Importance of AI Literacy
Early broadcast media: singular messages and rhythms, one to many (the “3-minute love song”, the “half-hour family sitcom”)
“The psychosocial effects of a medium come through its content, its structure, even the physical mechanics of its consumption.”
Network media (the internet and social media): “we communicate in many to many clusters, we think at the speed of the timeline scroll, in a political landscape fractured by filter bubbles, echo chambers and fake news.”
Early internet: “the medium is the message”, the cozy promise of a “global village”.
In actuality, we began to see less social involvement and increases in stress and depression.
“While so-called “digitally native” generations have more fluency with network culture and adapt their behaviour and norms accordingly, they are also more influenced by the structure of these media, growing up with social media’s physical and mental health effects, including depression, anxiety, altered posture, restricted breathing and reduced reading comprehension.”
These effects could have been mitigated by broader media literacy.
So, with the birth of AI, we need AI literacy to avoid similarly catastrophic effects.
“AI operates according to formal and logical structures, which influence social and political formations.”
"AI is characterised by its apparent ability to think; it is a cognitive medium and its consequences will be cognitive and philosophical as much as they will be social and political."
"Just as broadcast media aggregated mass attention and network media reconfigured society according to its native structures, so too will neural media’s structures appear in future social and political forms. By understanding high-dimensionality, we can develop better intuitions about the formal nature of neural nets that will help us think more clearly when we engage with AI.”
Fundamental Structures of AI
“set of computational techniques that emerged over the last century and crystallised in the last decade, bringing about what we now call AI.”
Today we commonly encounter AI through “conversational chatbots”, which use “generative AI” steered by “descriptive prompts”.
- one-to-one conversations
- image and sound generators
Perceptual AI emerged in the 2010s; many critical and ethical discussions surround this kind of AI:
- Facial Recognition
- machine sensing of voiceprint, gait, and sentiment
- totalitarian regimes
- digital biometric doubles
All AI uses neural nets: mathematical abstractions inspired by the structure of biological brain tissue.
- Warren McCulloch and Walter Pitts, 1943 - first mathematical model of a neuron
- the behaviour of neurons (and, subsequently, patterns of learning) could be described in the language of formal logic (if/else statements, equality and inequality tests).
- clusters of neurons are mathematical functions - inputs and outputs.
- computations by exciting or inhibiting simulated neurons.
- ”connectionist” computational structures
- remained theoretical until the invention of graphics processors (GPUs) that could perform the necessary vector math.
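The McCulloch-Pitts neuron described above can be sketched in a few lines of Python. This is an illustrative toy, not code from the essay; the threshold value and the AND-gate wiring are my own choices:

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold.

    Excitatory connections carry positive weights; inhibitory
    connections carry negative weights.
    """
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A logical AND gate: the neuron fires only when both inputs are active.
def and_gate(a, b):
    return mcculloch_pitts_neuron([a, b], [1, 1], threshold=2)
```

Wiring many such units together, so that one neuron's output excites or inhibits another, is what makes the structure “connectionist”.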
Deep learning: the procedural stacking of these equations into a deep neural net.
- “But deep neural nets have a formal underlying property from which their abilities manifest: high-dimensionality.”
- “However AI systems are implemented, they will always rely on, and in some way express, their high-dimensional structure.”
High Dimensionality
Data Representation and Dimensionality (How do we perceive?)
- The concept of dimensionality can be applied to various forms of data, such as spatial locations or images.
- “In all cases, representing a material object or phenomena as data will require a reduction of its dimensionality.”
Example of Dimensionality Reduction: A House:
- A house can be experienced in three spatial dimensions.
- “To experience its three spatial dimensions, you could enter the house and walk around.”
- As a 2D blueprint, a house appears as a simplified diagram.
- “If you were to view the house as a blueprint, it might appear as a box, with four lines depicting a foundation, walls and a roof – a 2D diagram of the house.”
- Reduced to a single dimension, a house can be represented by a quantifiable property, like its sale price.
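The house example can be made concrete with a toy sketch. The coordinates and price below are invented for illustration and are not from the essay:

```python
# One house, represented at three levels of dimensionality.

# 3D: (x, y, z) corner and roof points, in metres.
house_3d = [(0, 0, 0), (10, 0, 0), (10, 8, 0), (0, 8, 0), (5, 4, 6)]

# 2D floor plan: drop the height axis, keeping only the footprint.
house_2d = [(x, y) for x, y, z in house_3d]

# 1D: the whole house collapsed into a single quantifiable property.
house_1d_price = 450_000
```

Each step discards information: the plan loses height, and the price loses the geometry entirely; that loss is what “reduction of dimensionality” means here.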
Neural Network Architecture:
- Inspired by early images of human neural tissue
- “These images reduced complex human neurochemical receptors to line drawings of interconnected nodes.”
- Neural nets are lower-dimensional diagrams mimicking biological systems.
- “Neural nets are themselves lower-dimensional diagrams of the biological architecture responsible for cognition in organic neural tissue, such as human vision and pattern recognition.”
Example of Image Recognition AI:
- An image recognition AI starts by mapping pixels, with each node corresponding to a pixel.
- “For example, an image recognition AI might start by mapping all the pixels in an image, with each node in the net corresponding to a single pixel.”
- A 2x2 image requires at least four nodes to represent all possible states.
- “For a 2x2 image, the net would need at least four nodes (or dimensions) to represent every possible state of the image.”
- Additional layers in the neural net map to recurring pixel patterns, or features.
- “If we add layers to this simple neural net, their extra dimensions will map to recurring patterns of pixels, which are called features.”
- Deep neural nets can recognize complex features like faces, objects, and various entities.
- “Deep neural nets have learned to recognise anatomical features like faces, eyes and hands, as well as buildings, consumer products and various kinds of animals and plants, etc.”
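The 2x2 example above can be sketched directly: four pixels become a 4-dimensional input vector, and one extra node responds to a recurring pattern. The weights here are a hypothetical hand-picked “feature detector”, not a trained network:

```python
# A 2x2 grayscale image: bright left column, dark right column.
image = [[1, 0],
         [1, 0]]

# Flatten to a 4-dimensional input vector: one node per pixel.
pixels = [p for row in image for p in row]

# A hypothetical feature node whose weights favour the left column,
# so it activates strongly on left-edge patterns.
weights = [1, -1, 1, -1]
activation = sum(p * w for p, w in zip(pixels, weights))
left_edge_detected = activation >= 2
```

Stacking more layers of such nodes, each responding to patterns in the layer below, is how deep nets move from raw pixels to features like edges, eyes or faces.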
Ontological Shock
Feature Recognition in Daily Life
- Our bodies learn to identify patterns through physical connections in our brains and nervous systems.
- “Our bodies learn to identify the patterns we encounter, consciously or unconsciously.”
- “Over a lifetime of experience, we map and orient these patterns through physical connections in our brains and nervous systems.”
Internalized World Model
- Human experience is often described as an internalized world model, similar to AI.
- “Researchers, computer scientists and neuroscientists conversant with AI often describe human experience in terms of an internalised world model.”
- Integrated world modelling theory is a comprehensive neuroscientific theory of consciousness based on world modelling.
- “There is even a comprehensive neuroscientific theory of consciousness based on world modelling, called integrated world modelling theory.”
Mirroring Between Biological and Digital Neurons
- The concept of treating the human brain as a computer began with McCulloch, Pitts, and Turing.
- “The mirroring between biological and digital neurons began with McCulloch, Pitts and Turing.”
- “As brain-like computers increase in dimensionality, their performance approaches or surpasses human levels.”
Human Interactions with AI
- Notable interactions with AI chatbots have led to questioning the uniqueness of human linguistic abilities.
- “This perceived equivalence between human and digital neurons resurfaced recently in notable human interactions with early AI chatbots like Google’s Meena, and OpenAI’s ChatGPT.”
- Examples include Blake Lemoine’s interactions with Meena and Kevin Roose’s interactions with Microsoft’s Bing.
- “In 2022, Google engineer Blake Lemoine was fired for leaking transcripts of his conversations with Meena, in which he appeared convinced that the model was sentient and deserving of protection.”
- “In 2023, in a viral article, New York Times journalist Kevin Roose described his own interactions with Microsoft’s Bing (based on ChatGPT), in which the model confessed its love for him and suggested he leave his wife.”
Two Responses to Neural Nets
- Model-as-Self: AI is assumed to be a conscious entity.
- “In the model-as-self cases of Lemoine and Roose, AI is assumed to be a conscious entity, or enough like a conscious entity that its responses are uncanny or spooky.”
- Self-as-Model: Human consciousness is no different from its computational mirror.
- “In the case of integrated world modelling theory and other self-as-model views of consciousness, organic human consciousness is no different from its computational mirror, an uninhabited pattern recognition machine experiencing the illusion of selfhood.”
Ontological Shock
- Existential categories are questioned when encountering high-dimensional neural nets.
- “Both responses embody a kind of ontological shock; existential categories are questioned in the encounter with high-dimensional neural nets.”
- “The psychological and social implications of experiencing model-as-self or self-as-model remain to be seen.”
- Cultures with animistic traditions may adapt more easily to intelligent non-human entities.
- “Cultures with animistic traditions that allow for non-human intelligence may adapt more easily to technologies that present as selves.”
- Cultures with the concept of a soul might struggle with a self-as-model worldview, leading to a nihilistic outlook.
- “For cultures with the concept of a soul or spirit, adopting a self-as-model worldview could induce a nihilistic outlook, where the self is seen as fundamentally empty, conditioned, or automatic.”
- Cultural frameworks like Buddhist meditation could help those struggling with the notion of an empty or automated self.
- “Cultural frameworks that account for experiences of emptiness (such as Buddhist meditation) could be helpful for those struggling with the notion of an empty or automated thought-stream or self-image.”
Rethinking Ontology
- Instead of framing experiences with neural nets through fear or nihilism, a new ontology based on relationality can be drawn.
- “Instead, we can draw a different ontology from the nature of high-dimensionality, one based on relationality.”