Tuesday, March 1, 2016

Head Games: Modernism and Post-Modernism

Zeng Fanzhi: "Van Gogh III" (2017)

An oblique style so muddles most Post-Modernist discourse that its opponents rarely know precisely what they are opposing, other than stylistics, and even writers who fancy themselves Post-Modernists rarely evidence much understanding of the underlying issues. Essentially, Post-Modernism represents a conscious abandonment of Modernism, which in the context of European philosophy refers to the search for a self-sufficient and perfectly coherent system of human knowledge. Modernism traces its roots to René Descartes, who proposed developing a complete system of knowledge based on the methodical inspection and correlation of mental and physical experience. This approach effectively divorced metaphysics from theology and thus secularized the question ‘How do we know what we know?’ for subsequent philosophers. Two basic approaches to this question developed in the following centuries. Empiricists, following John Locke, insisted that we derive mental constructs of reality from sensory experience; Idealists, following Immanuel Kant, insisted that we interpret sensory experience based on innate mental constructs of reality.
These contrasting approaches paralleled similarly contrasting approaches to the same question as posed by philosophers since the time of Plato and Aristotle. Plato had held that concepts are innate in all conscious beings but they have a reality independent of any consciousness. According to this view, which became known as Realism in reference to the presumed independent reality of concepts, cognition is a matter of ordering perception according to innate mental templates. This was the view championed by Augustine of Hippo, who adapted it to Christian theology by positing our creation in the Image of God as the source of innate concepts. In opposition to Plato, Aristotle had argued that we build up mental constructs of reality based on experience of the physical world. This view, which dominated discussions of metaphysics in the Middle Ages, became known as Thomism after Thomas Aquinas, who adapted the views of Aristotle to Christian theology just as Augustine had adapted the views of Plato. William of Ockham later elaborated an extreme interpretation of Aristotle, which became known as Nominalism in reference to the provisional nature of the terms (i.e. categories) by which we perceive reality. Ockham argued that concepts are not innate—or, if innate, then unreliable—and that only the intervention of a transcendent intelligence can rescue people from perceptual error. Unlike the churchmen, who viewed reason as subservient to faith, Descartes asserted the primacy of reason as the basis of philosophy as well as faith. He thereby sidelined the whole of Medieval European philosophy as a misguided detour from the speculative traditions of Antiquity.
Later discussions of these issues conflated elements of the Platonic and Aristotelian positions in ways that defy easy correlation with the traditional positions. Phenomenologists (Georg Wilhelm Friedrich Hegel, Martin Heidegger, Jean-Paul Sartre) argued that mental constructs of reality are intuited from experience (physical, emotional, and social) and validated by an act of will, whether personal (as in Existentialism) or collective (as in Marxism). Analytic Philosophers (Bertrand Russell, Richard Rorty) argued that mental constructs of reality inferred from physical experience are validated by universal principles such as mathematics that are innate in human consciousness. Generally speaking, Phenomenologists claimed a moral intuition of reality whereas Analytic Philosophers claimed logic-based objectivity. With regard to ethics, both Phenomenologists and Analytic Philosophers promoted a personal commitment to the welfare of others as the only transcendental truth, personal immortality and the meaning of life being construed as functions of collective continuity. The critical weakness common to both approaches was the difficulty of establishing universal truth given the inherently individuated nature of validation. Phenomenologists approached this problem by insisting, quasi-mystically, that intuition is trans-subjective. Analytic Philosophers approached the problem by insisting (mistakenly, as it turns out) that rigorously logical processes with a fixed set of axioms cannot yield contradictory results. The dark corollary to both traditions was the Nihilism of Friedrich Nietzsche, who proclaimed that systems of meaning grounded in the self ultimately require the imposition of a super-self as axiomatic referent.
The great champion of Analytic Philosophy was Bertrand Russell, who together with Alfred North Whitehead authored the Principia Mathematica, a logical deduction of the entire framework of arithmetic and algebra from a few propositions. This was to be the first step in the reduction of all philosophical questions to a consolidated system of impregnable logic that dismissed any remaining questions (e.g. ‘Why are we here?’) as irrelevant. Ironically, the program unraveled following the efforts of Kurt Gödel to resolve an issue peripheral to the Principia Mathematica. Gödel employed a system of meta-mathematical mapping to reduce the paradox ‘Is the statement [This statement is not demonstrable] demonstrable?’ to a precise arithmetical proposition—one that is true yet cannot be proved within the system. What Gödel thereby discovered was that no finite set of basic propositions can account for all the demonstrable possibilities of a sufficiently complex system; moreover, no system can justify its foundational propositions from within the system—the validity of any system is always exterior to the system.
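Gödel's two results can be stated compactly (a standard modern paraphrase, not the notation of the Principia itself):

```latex
\textbf{First incompleteness theorem.}\quad
\text{If } T \text{ is consistent, effectively axiomatized, and interprets arithmetic,}\\
\text{then there is a sentence } G_T \text{ such that } T \nvdash G_T
\text{ and } T \nvdash \neg G_T.\\[4pt]
\textbf{Second incompleteness theorem.}\quad
\text{Under the same hypotheses, } T \nvdash \mathrm{Con}(T),\\
\text{where } \mathrm{Con}(T) \text{ is the formal statement that } T \text{ is consistent.}
```

The second theorem is the precise form of the claim that a system cannot justify its foundational propositions from within: the system's own consistency is exactly such a proposition.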
Following this impasse, philosophers approached the task of developing a comprehensive system of philosophy by concentrating on some pervasive aspect—e.g. presence, power, language. Language became an especially attractive object of inquiry because it provides the means of elaborating, communicating, and preserving thoughts. Ferdinand de Saussure inaugurated the current line of inquiry into this field by developing a theory of linguistics based on difference, an approach which derives from Hegel’s thesis that identity is relational. Though impressive, this system suffered from a naïve objectivity that ignored the importance of personal and cultural subjectivity. Ironically, Charles Sanders Peirce had earlier proposed a theory of language that appreciated the subjectivity of language, but this very appreciation (based on Ockham’s tripartite distinction between object, symbol, and subject) stifled further development of a comprehensive theory of language. The most original contributor to language theory was Ludwig Wittgenstein, who observed that utterances have ‘accepted’ but not ‘true’ meanings because language is primarily a function of use, a kind of game with rules and strategies that change with time and distribution as the influence of individual participants grows and fades. To take a simple example, the word ‘nice’ (from Latin nescius) originally meant ‘ignorant’ but it has come, over the course of many uses, to mean ‘pleasant.’ What should it mean? Why? Any answer to these questions, whether dogmatic or libertarian, raises the issue of manipulation of meaning. The consequences of Wittgenstein’s thesis are profound for any general theory of knowledge at a time when population growth, widespread higher education, improved communications technology, and the ripple effect of theoretical advances have led to an explosion of information that must be vetted, categorized, correlated, archived, and distributed—all on the basis of unwittingly biased understandings.
Among the Phenomenologists who turned their attention to language, Claude Lévi-Strauss stands out as the last great Modernist. By adapting Saussure’s insight into the nature of signs to the anthropology of pre-literate societies, Lévi-Strauss convincingly demonstrated the underlying symbolic reasoning linking apparently random associations. He observed, for example, that the fearful association of storms with rabbits, harelips, and twins in the Andes is rooted in a fear of the unnatural splitting of entities. In this case, twins are considered imbued with the principle of splitting and therefore capable of provoking storms, which are viewed as cloud-splitting; harelips are quasi-twins, as are the rabbits for which they are named, and therefore also a threat. This approach, which became known as Structuralism, imploded following the attempt by Algirdas Greimas to develop a formal theory of symbolic communication. The problem was the recognition by Greimas that the opposite of any given symbol is a function of context. For example, the opposite of marriage can be construed as (among other possibilities) rape, homosexuality, celibacy, or widowhood depending upon the aspect under consideration.
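Greimas's difficulty can be made concrete in code (a toy illustration; the aspect labels below are invented for the sketch): opposition is a function of two arguments, symbol and aspect, not of the symbol alone, so no single-argument lookup table of "opposites" can be built.

```python
# Toy model of context-dependent opposition. The aspect names are
# hypothetical labels for the semantic axes mentioned in the text.
OPPOSITES = {
    "marriage": {
        "consent": "rape",
        "heterosexual union": "homosexuality",
        "sexual activity": "celibacy",
        "living spouse": "widowhood",
    },
}

def opposite(symbol: str, aspect: str) -> str:
    """Return the contextual opposite of a symbol along a given aspect."""
    return OPPOSITES[symbol][aspect]

print(opposite("marriage", "consent"))        # rape
print(opposite("marriage", "living spouse"))  # widowhood
```

A formal theory would need to enumerate every relevant aspect in advance, which is exactly what context makes impossible.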
The question of why some oppositions seem more obvious than others led to Post-structuralism, namely the work of Michel Foucault, who contended that the conventions we use to describe reality effectively limit our ability to discern reality. Foucault further argued that these conventions are the artifacts of social power, which he tended to view as something exercised from above—a thesis that effectively challenged the validity of all social and historical relations. The implications of Foucault’s challenge, which recalls that of Wittgenstein, can hardly be overstated at a time when information has become such big business that consumers rightly wonder whether it is carefully tailored to the advantage of the provider at the expense of the receiver. Jean-François Lyotard labels this the ‘Post-Modern Condition,’ and he glowingly observes that, under the circumstances described above, localized appeals to legitimacy based on dialog easily outflank generalized appeals based on established principles because dialog is intrinsically participatory, and this democratic nature disarms the distrust that immediately greets dogmatic statements. The very idea of Truth thus becomes the nemesis of the Post-Modernist, for whom there exist only inherently oppressive conventions.
The next step was Deconstruction, a critique of metaphysics itself as an oppressive discourse based on the suppression of otherness. This project is most closely associated with Jacques Derrida, who won a large following among literary critics for his nimble rhetoric and his use of porous and multivalent language, but the roots of this project can be traced to Heidegger’s challenge to deconstruct European metaphysics by examining the suppression of tensions and inconsistencies that betray its artificiality. The critical approach was elaborated by Emmanuel Levinas, who developed a theory of alterity (‘otherness’) that adapted Saussure’s theory of meaning-as-difference to identify a trace of the Other within every construction of identity, including the Self. This approach achieved truly disruptive force in Jacques Lacan’s reinterpretation of the Self as a function of violence that manifests itself in the simultaneous separation from and cannibalization of external authority. Derrida insinuated these critiques of meaning into every imaginable field of philosophical discourse, playfully inverting the teleological impulse of Hegelian Phenomenology by reveling in the myriad threads of meaning interwoven into every word.

Monday, February 1, 2016

Head Games: Knowledge and Ambiguity



Zeng Fanzhi: "Self Portrait 09-8-1" (2009)
Because questions of meaning raise the issue of how we come to know anything at all, some theory of human cognition is fundamental to discussions of meaning in general. Recent writing on the subject defines cognition as the manipulation of relationships between data inferred from sensory experience. The basis for cognition is therefore perception, which results from the regulated interaction of sensory experience and memory over time.
The most basic unit of perception is the one-time neurophysical stimulation of a particular receptor with unique parameters of sensitivity. The brain records the event and establishes a rudimentary sense of the parameters involved. As the brain records similar events from the same receptor, it arranges these in relation to one another in time. As the brain records simultaneous events from similar receptors, it arranges them in relation to one another in space. The combined input from all similar receptors is mapped as a field of perceived phenomena which fluctuates over time.
The brain is evidently predisposed to further map any field of perceived phenomena as a collection of relatively stable regions whose boundaries and location fluctuate over time. Each such region constitutes a quality. By similarly mapping the input from receptors with different parameters, the brain builds up layers of mapped regions which it correlates to identify relatively stable qualities across all parameters. The brain interprets these complex bounded regions as objects. The brain further correlates the experience of qualities and objects stored in memory to establish conceptual templates that facilitate future mapping of the total, multi-layered field of perception. The combination of immediate experience and this collection of conceptual templates constitute our perception of the world.
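The region-mapping step described above can be caricatured in code (a deliberately crude sketch; the tolerance value and readings are invented): a one-dimensional field of receptor readings is segmented into contiguous regions of similar activation, which play the role of qualities.

```python
def segment_regions(readings, tolerance=0.2):
    """Group a 1-D field of receptor readings into contiguous regions
    whose values stay within `tolerance` of the region's running mean.
    Returns (start, end, mean) triples, one per stable region."""
    regions = []
    start, total = 0, readings[0]
    for i in range(1, len(readings)):
        mean = total / (i - start)
        if abs(readings[i] - mean) > tolerance:
            regions.append((start, i, mean))   # close the current region
            start, total = i, 0.0
        total += readings[i]
    regions.append((start, len(readings), total / (len(readings) - start)))
    return regions

# Three 'qualities' emerge from a fluctuating field: dim, bright, dim.
field = [0.1, 0.12, 0.11, 0.9, 0.88, 0.91, 0.2, 0.19]
print(segment_regions(field))
```

Correlating such regions across many receptor layers, and matching them against stored templates, would be the next steps the text describes; this sketch covers only the first.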
The importance of memory to the process can hardly be overstated since without it perception cannot advance beyond primal consciousness. More importantly, the accumulation in memory of multiple contexts associated with each quality or object permits the establishment of complex chains of correlation based on similar or contrasting associations. This is the source of both the richness and the ambiguity of language as discussed below.
Interaction with the world introduces dynamics even more complex than those involving simple perception because it creates the context for self-awareness. Self-awareness is evidently a function of innate, sub-rational processes that establish, as axiomatic reference, a persistent object (the body) whose interactions generate more information than the interactions of other objects. This persistent object has a number of defining peculiarities, including: (1) the immediate proximity of the persistent object to the self, (2) the self’s ability to reliably influence the behavior of the persistent object, and (3) the self’s experience of pain corresponding to the location of the persistent object in time and space. As the self explores its relationship with the world, it notes the relative appeal of specific interactions, and the ability to predict or influence these interactions becomes the criterion for reaffirming or reevaluating the reliability of established conceptual templates. Self-aware interaction with the world therefore introduces the need for cognition, i.e. the manipulation of relationships between data inferred from sensory experience.
As the self explores the world, it also discovers relatively information-rich objects in the world, other than the subject, whose behavior indicates that they too have self-awareness and practice cognition. A predisposition to expect the existence of others like ourselves is apparently innate, as is the desire to interact with them.
This brings us to social language, which arises from the impulse of one individual to manipulate another’s chains of correlations by reproducing or alluding to some significant aspect of a shared field of perception. The interplay of such communication will predictably lead to the establishment of a common vocabulary and rules of syntax that serve as placeholders for previous exchanges. Given the interactive origin of language, this model predicts that language will favor the more influential participant(s) in any ongoing exchange, with languages evolving and fragmenting as the general influence of privileged participants wanes in the face of new participants or contingencies.
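The prediction that language converges toward the usage of its more influential participants can be illustrated with a toy "naming game" (all words, probabilities, and parameters here are invented for the sketch): two speakers start with different words for the same object, and after each exchange the hearer adopts the speaker's word with probability equal to the speaker's influence.

```python
import random

def simulate_naming(rounds=2000, influence=None, seed=0):
    """Toy naming game: two speakers repeatedly pick a word for the
    same object; the hearer adopts the speaker's word with probability
    equal to the speaker's influence. Stops once vocabulary is shared."""
    rng = random.Random(seed)
    words = {"A": "glim", "B": "brol"}          # hypothetical vocabulary
    influence = influence or {"A": 0.9, "B": 0.1}
    for _ in range(rounds):
        speaker, hearer = rng.sample(["A", "B"], 2)
        if rng.random() < influence[speaker]:
            words[hearer] = words[speaker]      # adoption = influence at work
        if words["A"] == words["B"]:
            break                               # common vocabulary established
    return words

print(simulate_naming())  # both speakers end up with the same word
```

Raising or lowering the influence values shifts whose word survives, which is the model's version of languages drifting as privileged participants wane.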
This inherent drift renders language a Rosetta Stone of uncertain authority, and discourses that rely on language to regulate conformity cannot therefore guarantee their own integrity. The unsettling nature of this realization constitutes the so-called “post-modern condition.”

Saturday, January 2, 2016

Head Games: Mind and Brain

Functional MRI (fMRI) technology has recently enabled researchers to identify the pathways of two interrelated decision-making processes involving a number of structures in the brain. The main pathway is complex and nuanced but comparatively slow. An alternative pathway, primarily involved in emergency decisions, trades subtlety for a nearly instant response. The two pathways and their interrelationship are outlined as follows:
The Thalamus screens incoming sensory data, continuously forwarding segregated data streams to specialized regions of the Sensory Cortex for processing while tagging any sudden aberrations in a data stream for immediate processing by the Amygdala.
The Sensory Cortex processes individual data streams received from the Thalamus to identify patterns, which are then reported to the Associative Cortex. Specialized regions of the Sensory Cortex process information on spatial awareness, touch, taste, smell, hearing, and vision; sub-regions filter data streams for words, shapes, faces, etc.
The Unimodal Associative Cortex processes patterns as sensory experience. The Polymodal Associative Cortex processes patterns as concepts. The Associative Cortex creates a library of previously recognized patterns by sensitizing or suppressing neural relays. These stored patterns speed identification of previously encountered phenomena, but they can also impede the identification of unusual phenomena by tagging awkward information for deletion from Sensory Memory. The Associative Cortex sometimes also creates hallucinations and dreams by editing or transposing associations.
Sensory Memory—which has a capacity of up to 2 seconds—allows us to perceive discrete bits of information as a fluid stream by holding recent information in consciousness while receiving updates. Information deleted from Sensory Memory never reaches consciousness or becomes part of Long-term Memory.
The Hippocampus receives multiple data streams from the Associative Cortex, which it processes as clusters of patterns. These clusters of patterns are indexes used to identify or reconstruct previously encountered contexts or events and their associated emotional content. The Hippocampus can reconstitute previously established associations as memory (complete with sensory experience) based on a single recognized stimulus. The Hippocampus sometimes also creates false memories by editing or transposing associations.
The Prefrontal Cortex reconciles input from the Hippocampus and dictates a response. The initial response is invariably to focus attention on something significant or unusual for further evaluation. The Prefrontal Cortex then elicits additional related information from the Hippocampus and Associative Cortex while suppressing information unrelated to the object of focus. The Prefrontal Cortex subsequently engages in a kind of modeling (related to dreaming) that compares the possible outcomes of various scenarios suggested by experience—true dreams evidently have a ‘brainstorming’ function, generating novel scenarios that may prove useful later. Recent evidence suggests that the Prefrontal Cortex uses built-in algorithms such as the Nash equilibrium to select a course of action that reconciles the often conflicting emotional charges associated with a given situation. The Prefrontal Cortex ultimately orchestrates a response by stimulating various organs of the Brain and, through these, the Body.
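The Nash-equilibrium idea can at least be made concrete: in game theory, a pure-strategy Nash equilibrium is an outcome from which no player gains by deviating alone. A minimal sketch (illustrative payoffs, not a model of any neural mechanism):

```python
def pure_nash_equilibria(payoff_a, payoff_b):
    """Return all pure-strategy Nash equilibria (i, j) of a two-player
    game, where payoff_a[i][j] is player A's payoff and payoff_b[i][j]
    is player B's. A cell is an equilibrium when neither player can do
    better by unilaterally switching strategies."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            best_for_a = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
            best_for_b = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(cols))
            if best_for_a and best_for_b:
                equilibria.append((i, j))
    return equilibria

# Stag hunt: strategy 0 = hunt stag (risky, high reward), 1 = hunt hare (safe).
a = [[4, 0], [3, 3]]
b = [[4, 3], [0, 3]]
print(pure_nash_equilibria(a, b))  # [(0, 0), (1, 1)]
```

The two equilibria of the stag hunt (cooperate on the stag, or each settle for a hare) show how such an algorithm could reconcile conflicting pulls toward reward and safety.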
Working Memory—which has a capacity of up to 30 seconds or approximately 5-9 ‘chunks’ of information—enables the Prefrontal Cortex to ‘juggle’ incoming information long enough to determine its relevance before either ignoring it or initiating the creation of Long-term Memory.
Long-term Memory—which takes up to three years to completely solidify—begins as a heightened sensitivity of neural synapses following stimulation. The neurons will begin to grow together if repeatedly stimulated as a result of recurring experiences, whether real or as elements of dreams. Sufficient stimulation causes the neurons to eventually fuse, and memory becomes hard-wired. Further repeated stimulation results in the suppression, or even elimination, of any neural relay that interferes with the efficiency of the fused synapse. As a result, highly intelligent people have fewer—but more highly integrated—neural networks. Moreover, continued stimulation prompts these neural networks to grow additional relays, thereby increasing their capacity over time.
Repeated stimulation causes neurons to fuse together.
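The synaptic account above is essentially Hebbian learning ("cells that fire together wire together"); a minimal sketch with invented constants:

```python
def hebbian_update(weight, pre, post, rate=0.1, ceiling=1.0):
    """One Hebbian step: strengthen the synapse in proportion to the
    coincidence of pre- and post-synaptic activity, capped at a ceiling
    (all constants are illustrative, not physiological values)."""
    return min(ceiling, weight + rate * pre * post)

w = 0.05                  # weak initial synapse
for _ in range(30):       # recurring experience: repeated co-activation
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))        # weight saturates: the memory is 'hard-wired'
```

The cap stands in for the fused, hard-wired state; suppression of competing relays would correspond to a matching rule that decays weights for uncorrelated activity, which this sketch omits.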

An alternative decision-making process (involving only the Thalamus, the Amygdala, and the Hippocampus) trades subtlety for a nearly instant response. The alternative pathway leads directly from the Thalamus to the Amygdala, which monitors risk indicators stored in the nearby Hippocampus. If aberrant data from the Thalamus contains one of these risk indicators, the Amygdala prompts the Adrenal Gland to flood the body with adrenaline, initiating a ‘fight or flight’ response. Adrenaline has the additional effect of heightening attention to (and memory of) specific bits of information associated with risk.
A feedback loop is generated following any response (by either decision-making process) as the Prefrontal Cortex evaluates the outcome of the previous cycle. Positive evaluations result in the release of dopamine from the Medial Forebrain Bundle, and negative evaluations result in the release of acetylcholine from the Periventricular System. Feelings of pleasure or anxiety result from the respective activation of these two pathways, and the Hippocampus establishes an association of this emotional content with a given context for future reference.
In addition to the functions outlined above, the Anterior Cingulate and the Nucleus Accumbens respectively monitor successive cycles for repeating or alternating patterns. A single repetition is sufficient to trigger a predictive impulse, and the adverse reaction to any broken pattern is directly proportional to the perceived stability of that pattern—just as a friend’s betrayal is more disturbing than the actions of a stranger.