A person known as K.C. has contributed significantly to understanding narrative and sense of presence of another. In 1981, at age 30, K.C. received a major head injury in a motorcycle accident. Despite his injury, K.C. retained normal human adult language skills. He also retained common knowledge about the world and knowledge about causal relations between actions and events. K.C.’s well-functioning memory of objective facts and procedural skills allowed him to continue, post-injury, with “effortless functioning in his everyday environment” in a way that is “comparable to most of his age mates.”
K.C., however, lost the ability to remember and construct personalized narratives. Persons who knew K.C. observed that he no longer remembered his personal interactions with them. K.C. has no first-person, emotional memory of his own experiences:
K.C.’s younger brother from whom he was once inseparable met accidental death a few years prior to his own head injury. K.C. remembers nothing of the circumstances in which he had learned of this shocking news, including where he was at the time, who told him of the event, and how he reacted emotionally. Likewise, the events of a potentially lethal chemical spill from a train derailment that forced him and his family to evacuate their home for over a week have been reduced to a dry fact of the world.
For K.C., “details of personal occurrences continue to exist only in the present, vanishing from K.C.’s reality the moment his thoughts are directed elsewhere.” While K.C. understands objective causal reasoning, he cannot imagine himself in the future. In the language of biological science, K.C. lost the functioning of his episodic memory. In terms better understood within the humanities and within the study of communication, K.C. lost the ability to remember and construct personalized narratives.
K.C. shows that the world really isn’t just constructed from narratives. In an influential 1983 law review article entitled “Nomos and Narrative,” a prominent legal scholar declared:
No set of legal institutions or prescriptions exists apart from the narratives that locate it and give it meaning. … Once understood in the context of the narratives that give it meaning, law becomes not merely a system of rules to be observed, but a world in which we live.
K.C. understands much about the world that others recognize. Persons who interact with him would not conclude that he is living in a different world from them. Narrative might be understood as causal reasoning, as recognizing that this leads to that, as understanding that this law implies that behavior or else that punishment. Personalized narrative is not necessary for understanding such a system of rules or for living in a common world. One also might suspect that eloquently telling an attractive tale, or endlessly repeating silly ones, doesn’t make true, bountiful reality.
More significantly, at least for those not confined within non-scientific disciplines, K.C. shows that making sense of presence of others doesn’t require remembering and constructing personalized narratives. In laboratory tests, K.C. is not distinguishable from ordinary persons in ability to infer another person’s mental state (known as Theory of Mind tests). With respect to lower-level, more tightly synchronized processes for making sense of presence of another, K.C. also appears to be similar to ordinary persons. K.C. retains the ability to attune sensitively to others in real-time interaction. He’s characterized as “always agreeable, courteous, and attentive,” with an appreciation for sarcasm and humor. Although he has no personal emotional memory, his real-time experience of emotions is appropriate for someone with his memory: “Each time he is told of September 11, he expresses the same horror and disbelief as someone hearing of the news for the very first time.”
K.C.’s interactions with others suggest the importance of sub-conscious attunement to others. A psychologist who did research with K.C. observed that K.C.:
guesses that he has never met one of the authors (R.S.R.) who has, in fact, visited him at his home approximately eight times a year for the past 5 years, though there is a certain level of familiarity and comfort that he demonstrates, particularly in a greater willingness to initiate conversation and to ask questions.
Familiarity and comfort suggest ease in diffuse, sub-conscious patterns of interaction. Just as teammates on a sports team acquire skills of tacit knowing by playing together, so too do persons in communication.
 Rosenbaum et al. (2005) p. 994. This source provided the quoted phrases and facts included in the paragraph.
 Id. pp. 993-4.
 Id. p. 994.
 Cover (1983) p. 4. This work is at the sophisticated end of discourse about discourse, narrative analysis of narrative, the social construction of reality, pre-post-post-modernism, and (para)-en/thesis. Hamlin, Wynn, and Bloom (2007) suggests that 6-month-old, preverbal infants engage in normative “social evaluation,” evaluating other purposive, animate agents as appealing or aversive. “Social evaluation” suggests evaluation of peers. However, 12-month-old infants show relatively little interest in other infants. Infants, for good instrumental reasons, are mainly interested in adults. An alternate interpretation of Hamlin, Wynn, and Bloom (2007) is that infants as young as 6 months old engage in rudimentary, positive instrumental reasoning.
 Rosenbaum et al. (2007). This article also documents that another test subject who lost episodic memory from an injury could not be statistically distinguished from control subjects on Theory of Mind tests. Descriptions of the Theory of Mind tests are available in online supporting material.
 Rosenbaum et al. (2005) pp. 993, 994.
 Id. p. 993.
Cover, Robert M. (1983), “Nomos and Narrative,” Harvard Law Review v. 97, pp. 4-68.
Hamlin, J. Kiley, Karen Wynn, and Paul Bloom (2007), “Social evaluation by preverbal infants,” Nature v. 450 (22 November 2007).
Rosenbaum, R. Shayna, Stefan Köhler, Daniel L. Schacter, Morris Moscovitch, Robyn Westmacott, Sandra E. Black, Fuqiang Gao, and Endel Tulving (2005), “The case of K.C.: contributions of a memory-impaired person to memory theory,” Neuropsychologia v. 43, n. 7, pp. 989-1021.
Rosenbaum, R. Shayna, Donald T. Stuss, Brian Levine, Endel Tulving (2007), “Theory of Mind is Independent of Episodic Memory,” Science v. 318 (23 November 2007) p. 1257.
Theory of mind has organized much research into social behavior:
The ability to interpret others’ mental states and intentions, called “Theory of Mind,” has been a key area of interest for those studying the evolution of primate and human social behavior. Often, people have imagined that Theory of Mind emerges as a correlate of self-awareness — the ability to reflect on one’s own mental states. As the model goes, a focal individual interprets another’s mental state by imagining herself “in the shoes of” the other individual.
Theory of mind pushes into the background agents’ social and ecological circumstances and their embodied nature. Hence theory of mind does not provide a propitious conceptual framework for understanding social behavior.
Theory of mind doesn’t provide much insight into how persons can appreciate and sympathize with the pain they each feel. How do I know the feel of pain that you feel when your finger touches a hot stove or when someone breaks your heart? How do you know that I know how you feel? It seems to me that mutual recognition of common human nature, including social nature, is crucial for understanding how these feelings exist. Theory of mind is both too specific (mind, rather than fully embodied person) and too abstract (questions not closely related to problems of ordinary behavior) to provide much insight into others’ pain.
Theory of mind doesn’t provide a good description of an agent’s understanding. Consider an ordinary human interpretation of the behavior of a robot. The robot is programmed to store the location of a ball as either one specific location A or another specific location B. Starting with a random ball location state variable, the robot enters the room and goes to that location. The robot’s sensors then detect the presence or absence of the ball. If the ball is present, the robot bounces the ball (plays with it), sets its location state variable to this position, and then leaves the room. If the ball is absent, the robot goes to the other location, and behaves likewise.
Suppose an unprimed human observer watches the robot enter the room and play with the ball many times. Occasionally the observer, while the robot is out of the room, shifts the ball between locations. In those cases the robot first looks in the wrong location, then goes to the right location. If the ball isn’t moved, then the robot goes directly to the location containing the ball. Humans readily anthropomorphize technology, even while they almost surely would not confuse it with a real human being. The observer would typically describe the robot’s behavior as looking for the ball. In instances where the observer shifted the ball, the observer would typically describe the robot as not knowing the true location of the ball. Thus, a researcher might describe the observer as having a theory of mind (for the robot).
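The robot’s program is simple enough to state exactly. Here is a minimal sketch in Python (the class and method names are illustrative, not drawn from any real robotics system): the robot keeps one state variable, its stored ball location, and updates it after each visit.

```python
import random

class Room:
    """The world: the ball sits at location "A" or "B"."""
    def __init__(self, ball_at):
        self.ball_at = ball_at

class Robot:
    def __init__(self):
        # The robot's stored "belief" about where the ball is,
        # initialized randomly as described above.
        self.believed_location = random.choice(["A", "B"])

    def fetch(self, room):
        """Enter the room, find the ball, play with it, update state.
        Returns the sequence of locations visited."""
        visited = [self.believed_location]
        if room.ball_at != self.believed_location:
            # Ball absent at the stored location: go to the other one.
            other = "B" if self.believed_location == "A" else "A"
            visited.append(other)
        # Ball found: "play" with it and store where it was found.
        self.believed_location = room.ball_at
        return visited
```

When the observer moves the ball while the robot is away, the stored state diverges from the world and the robot visits the wrong location first, which is exactly the behavior an observer naturally glosses as the robot “not knowing” where the ball is.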
Theory of mind associated with mutual recognition of common human nature requires many orders of magnitude greater processing power. A simple digital machine could readily learn and predict the state and behavior of the ball-seeking robot. Describing and interpreting a person’s thoughts and emotions by looking at a representation of that person’s eyes or at her patterns of movement is a much more complex problem. More importantly, interpreting another’s mental states and intentions in historical ecologies has been predominantly a highly interactive task. How one person responds physically to another both reveals emotions and intentions and changes them. A theory of mind apart from human being seems to me to be like a theory of time addressing what happened before the beginning of time.
Most person-to-person communication occurs between persons who know each other well (family and close friends). A recent study of a large number of mobile phone voice calls found that in an 18-week period, about two-thirds of mobile phone users engaged in mutual calling with only two other persons. The mean number of partners for mutual calling in that period was three. Mutual calling partners with more mutual calling partners in common spent on average more time in calls with each other. Mutual calling appears to have increasing returns in personal familiarity.
Recent neuroscience research points to neural functioning that supports this macro-behavioral pattern. The suppression of a certain brain wave pattern (mu activity) is associated with sub-conscious processing of the present activity of another person. Premotor neurons that perform such sub-conscious processing have been called mirror neurons. Mu activity, and by implication mirror neuron activity, depends on familiarity with the other person:
mu activity was suppressed most when subjects watched videos of themselves, indicating the greatest mirror neuron activity. For both groups [autistic and non-autistic children], the measurements showed a slightly lower level of suppression when subjects watched familiar people in the video and the least when watching strangers.
Recognition of another is typically considered to be a high-level neural function. Familiarity with another, however, appears to be associated with (downloaded) resources for sub-conscious processing of another’s actions.
Persons highly value in communication making sense of presence. The relation of mu activity to personal familiarity is consistent with personal familiarity being a resource for making sense of presence. Presence as a value, and familiarity as a resource, provide a structure for increasing returns in mutual calling.
 See Onnela, Jukka-Pekka, et al. (2007), “Analysis of a large-scale weighted network of one-to-one human communication,” New Journal of Physics v. 9, 179, doi:10.1088/1367-2630/9/6/179, Fig. 4 and Table 1.
 See Onnela, J.-P., J. Saramäki, J. Hyvönen, G. Szabó, D. Lazer, K. Kaski, J. Kertész, and A.-L. Barabási (2007), “Structure and tie strengths in mobile communication networks,” PNAS v. 104, pp. 7332-7336, preprint pp. 4-5.
 From “Mirror, Mirror In The Brain: Decoding Patterns Reflecting Understanding Of Self, Others May Further Autism Therapies,” Society for Neuroscience News Release, 11/04/07, summarizing L. M. Oberman, V. S. Ramachandran, J. A. Pineda, “Mirror Neuron Activity Modulated by Actor Familiarity in Children with Autism Spectrum Disorders: an EEG Study,” 2007 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience, 2007, abstract on EBDblog. The sentence following the above quoted excerpt states, “This indicates that normal mirror neuron activity was evoked when children with autism watched family members, but not strangers.” The abstract states, “Both neurotypical participants and those with ASD [autism spectrum disorder] showed greater suppression to familiar individuals compared to the stranger.” These and the above descriptions appear to be inconsistent, but that is not relevant to the point here.
Brains operate using perception-action cycles. Good programmers know that a key to fast code is a good data structure. Sensory inputs are like a data structure for the brain. Using your body and physical implements, you can shape the sensory data structure that your brain uses. Moving around to get a good look is thinking on your feet. Tinkering is thinking with your hands. If you think only with your head, you’re handicapping yourself.
Tetris game play provides a good case study of how the brain and hands work together. Careful study of play indicates that players rotate the falling blocks more frequently than is plausible for a linear cognitive program of play. Study also indicates that using keystrokes, a player can rotate a falling block about ten times as fast as a person can mentally rotate such a block. In figuring out where to direct a block, players use keystrokes to rotate blocks at least in part because doing so is more cost effective than performing such actions on representations within one’s head.
Trade-offs between external sense and cognitive effort also occur in choices between different sensory forms of media. The rapid shift of “soap operas” from radio to television suggests that adding a visual stream to the programming lowered the cost of making sense of it. More generally, the cost of making sense of presence falls with richer sensory inputs.
Playing Tetris and making sense of presence both involve perception-action cycles. In Tetris, the player rotates blocks to create new sensory inputs so that higher level cognitive processes can more efficiently plan trajectories for placing blocks. In sense of presence, attunement to another occurs as a good created through the social evolution of human nature. This attunement can occur at different cognitive levels, from awareness of another’s twittering to a face-to-face, heart-to-heart talk with a best friend. At each level, attunement is associated with characteristic patterns of action such as textual response or eye-tracking.
Perception-action cycles are built into human biology from the lowest to the highest levels of cognitive complexity. As a cognitive scientist explained:
At all levels of the central nervous system, the processing of sensory-guided sequential actions flows from posterior (sensory) to anterior (motor) structures, with feedback at every level. Thus, at cortical levels, information flows in a circular fashion through a series of hierarchically organized areas and connections that constitute the perception-action cycle. Automatic and well-rehearsed actions in response to simple stimuli are integrated at low levels of the cycle, in sensory areas of the posterior (perceptual) hierarchy and in motor areas of the frontal (executive) hierarchy. More complex behavior, guided by more complex and temporally remote stimuli, requires integration at higher cortical levels of both perceptual and executive hierarchies, namely areas of higher sensory association and prefrontal cortex.
Interactivity, which new Internet services emphasize, is deeply embedded in human biology.
 David Kirsh and Paul Maglio (1994), “On distinguishing epistemic from pragmatic actions,” Cognitive Science 18, pp. 15-20.
 Id. p. 24.
 Joaquín M. Fuster (2004), “Upper processing stages of the perception-action cycle,” TRENDS in Cognitive Sciences, vol. 8, no. 4 (April) p. 144. Note that id. Figure 2 describes the general structure of Kirsh and Maglio (1994), p. 39, Figure 16. For additional description of the perception-action cycle, see Paul Baxter’s Memoirs of a Postgrad.
Faces, particularly eyes, naturally attract human attention. One aspect of the biological machine of faciality is eye structure. Compared to other primates, humans have more salient eyes:
the human eye lacks certain pigments found in primate eyes, so the outer fibrous covering, or “sclera,” of our eyeball is white. In contrast, most primates have uniformly brown or dark-hued sclera, making it more difficult to determine the direction they’re looking from their eyes alone. … Humans are also the only primates for whom the outline of the eye and the position of the iris are clearly visible. In addition, our eyes are more horizontally elongated and disproportionately large for our body size compared to most apes.
Experimental evidence indicates that a chimpanzee’s gaze direction responds primarily to a person’s head movements, while human gaze tracks another’s gaze direction. But if the objective is to indicate direction, the advantage of using eye movement rather than head movement isn’t obvious.
Eye contact, however, has more subtle value in communication. Seeing someone’s head move doesn’t mean that she knows that you were looking at her in a situation in which you would track her change in gaze direction. Eye contact generates common knowledge of gaze direction (you both know that you’re looking at each other, you both know that you both know you’re looking at each other, etc.) and common sense of whether a change in gaze direction would be tracked (nervous distancing or needed shift in attention?). Just looking at each other’s head doesn’t work this way, because head orientation doesn’t imply eye orientation in humans and other primates.
Making sense of presence is probably more valuable to humans than to other primates. Across species, a larger neocortex, both in absolute size and relative to total brain volume, is correlated with greater social complexity. Relatively salient human eyes, like the relatively large human neocortex (particularly prefrontal cortex), support sense of presence. Direct gaze is a powerful way to produce sense of presence. I relay to you fourth-hand a plausible reported fact: “human infants look at the face and eyes of their caregiver twice as long on average compared with other apes.”
Given the importance of gaze to humans, a video viewer’s ability to discern the whites of the eyes of persons on a video might be a useful measure of video quality with real human relevance. My video of the JDRF Spin to Win has little interest other than the eyes, faces, and expression of the participants. Viewing the video on YouTube, the faces are distorted and whites of the eyes are barely discernible. But viewing the video on Blip.tv, you can see whites of the participants’ eyes much more clearly. The point is not simply that Blip.tv offers better quality video than YouTube. High-density, huge-screen television offers much better video quality than either. People want to see the whites of others’ eyes. That is a human-relevant measure of video quality.
A lot of interesting thinking and experiments are now going on concerning presence in communication. Mike Gotta’s post entitled “Presence: Complex, Pervasive And Evasive” highlights the business case for presence. Which industry structure do you think is better for private investment, competition among many firms, and innovation: an industry in which firms compete to supply a commodity service like per-minute voice communication, or an industry in which firms compete to provide a “complex, pervasive, and evasive” good? My economics training suggests the latter!
Person-state definitions, attention management, and impression management are aspects of presence that shouldn’t be over-emphasized and that are probably better hidden in the design of services than presented as tasks that users must manage. In person, overly active impression management goes by the name of being a phony. That would be a horrible insult to be associated with a Telco 2.0 service.
Moreover, as Craig Roth insightfully notes, if Captain Picard doesn’t have effective interruption management technology, businesses today probably should be cautious about the prospects for developing it.
A service designed for persons to broadcast a text message answering one simple question, “What are you doing?” produced this message:
oooooh la la! Biz is looking like a well-dressed handsome man! ^_^ Ready sweep Livvy off of her feet…again! [Twitter]
That’s not literally state information, but it does make for a strong sense of presence.
A more propitious direction for presence is better communicating persons acting in the world, expressing themselves where they are. Georgia O’Keeffe beautifully conveys this idea:
I have picked flowers where I found them.
Have picked up sea shells and rocks and pieces of
wood where there were seashells and rocks and pieces of
wood that I liked.
When I found the beautiful white bones
on the desert I picked them up and took them home too.
I have used these things to say what is to me the
wideness and wonder of the world as I live in it.
[from exhibition catalogue, 1944]
Primate neural systems process gaze relatively well. Infant chimpanzees aged 10-32 weeks prefer photographs of human faces with eyes open compared to photographs with eyes shut, and with direct gaze compared to averted gaze. By four months of age, human infants can discriminate between faces with direct and averted gaze. In adult humans, direct gaze enhances the memorability of faces and the speed of person categorization. Moreover, direct gaze seems to be the best explanation for the sensational reception of Byzantine icons in an artistically rich sixteenth-century Indo-Muslim culture.
Gaze has considerable value in making sense of presence. According to a recent study, mother-infant chimpanzee pairs gaze into each other’s eyes on average about 17 times per hour. Mutual gazing covaried similarly in chimps and humans:
maternal cradling was found to be inversely related to mutual gazing in chimpanzees, such that when mother and young infant are in constant physical contact, there is little mutual gaze. Reduced face-to-face interactions, including reduced amounts of mutual gaze, are found in human cultures that have increased physical contact with infants compared with Western norms. … We propose that mutual engagement in primates is supported via an interchangeability of tactile and visual modalities [Bard et al., 2005, pp. 621, 623].
The value of the visual mode, however, depends on its circumstances. If the features of a face are scrambled, infant chimpanzees are indifferent between eyes with direct and averted gaze. Direct gaze from a painting or photograph of a face may create value of the same type as mutual gaze and physical contact, but perhaps not as efficiently.
The evocatively named “mirror neurons” have recently been attracting much discussion in the blogosphere. Mirror neurons seem to be associated with heightened affective states and hyper-speculation in humans.
But they are not the only neurons with these properties. As early as 1993, two scientists found that a particular neuron in a cat’s brain responded to a wide range of auditory stimuli, but not when the cat’s eyes were closed or in the dark. After their work had been “interrupted by the inescapable late-night giddiness suffered (enjoyed?) by those who do electrophysiological experiments,” the scientists reached these conclusions:
we finally concluded that cats must be deaf at night. This, of course, began a string of other ridiculous conclusions: blind cats are probably deaf too; and on and on. [Stein and Meredith (1993) p. 108]
These are truly astonishing hypotheses!
Key images that mirror neurons evoke are probably biologically misleading. Mirrors produce representations of objects that have little relation to the physical form of the mirror. Mirrors do not adaptively tune to subjects of interest. Mirrors are typically part of a “one brain” circuit. Making sense of another like oneself is rather different from looking in a mirror.
The human brain evolved and develops in social circumstances – circumstances of living bodies communicating intensively with others like themselves. In game theory, the rules of the game are assumed to be common knowledge among the participants. In communication among conspecifics, the common structures of conspecifics’ bodies are rules of the game. The flesh-and-bone relations of whole living bodies are central to making sense of another like oneself.
Sensory tuning is an important feature of living bodies. One neuroscientist described this process thus:
every percept has two components intertwined, the sensory-induced re-cognition of a category of cognitive information in memory and the categorization of new sensory impressions in the light of that retrieved memory. Perception can thus be viewed as the interpretation of new experiences based on assumptions from prior experience — in other words, the continuous testing by the senses of educated hypotheses about the world around us. [Fuster (2003) pp. 84-5]
“Perceptual prediction” effects, such as representational momentum and the flash lag effect, suggest that the “sensory-induced re-cognition of a category of cognitive information in memory” can be highly decentralized and not dependent on traditionally defined cognitive and memory circuits.
Recently two scholars put forward a provocative proposal for motor involvement in perceiving conspecifics:
The various brain areas involved in translating perceived human movement into corresponding motor programs collectively act as an emulator, internally simulating the ongoing perceived movement. This emulator bypasses the delay of sensory transmission to provide immediate information about the ongoing course of the observed action as well as its probable immediate future. Such internal modeling allows the perceiver to rapidly interpret the perceptual signal, to react quickly, to disambiguate in situations of uncertainty, and to perceptually complete movements that are not perceived in their entirety. … Thus, what originally appeared to be a neurological extravagance – the activation of motor resources when no motor movement is intended – may instead be an elegant solution to a perceptual problem. [Wilson and Knoblich (2005) p. 468]
This proposal, while speculative, at least shifts attention from representations, meaning, and linguistic expression to presence, the real-time experience of making sense of another like oneself. The latter seems to me to connect more insightfully to developing biological knowledge about mirror neurons.
Fuster, Joaquin M. (2003), Cortex and mind: unifying cognition (Oxford: Oxford University Press).
Stein, Barry E. and M. Alex Meredith (1993), The Merging of the Senses (Cambridge: MIT Press).
Wilson, Margaret and Günther Knoblich (2005), “The Case for Motor Involvement in Perceiving Conspecifics,” Psychological Bulletin, v. 131, n. 3 pp. 460-73.
Brain effects are communicative goods. A recent study found common neural effects of reading about and of seeing actions:
Participants observed actions and read phrases relating to foot, hand, or mouth actions. In the premotor cortex of the left hemisphere, a clear congruence was found between effector-specific activations of visually presented actions and of actions described by literal phrases.
For example, reading the phrase “biting the peach” and seeing a video of a person bite a peach activate a common set of premotor neurons called “mirror neurons.” These neurons also trigger muscular actions such as actually biting a peach.
Consider the economics of activating these neurons. Making sense of text is relatively expensive. Actually executing actions involves the caloric cost of moving bodily mass. Observing actions is probably the cheapest means to activate the common neurons associated with these different sensory circumstances. Perhaps this helps to explain why so many persons spend so much time on couches, watching sports on television.
Past physical experience affects the extent of neural activation. A scholar who has studied this relation noted:
“When we watch a sport, our brain performs an internal simulation of the actions, as if it were sending the same movement instructions to our own body. But for those sports commentators who are ex-athletes, the mirror system is likely to be even more active because their brains may re-enact the moves they once made. This might explain why they get so excited while watching the game!” [supporting scholarly paper (pdf)]
Sense of presence involves attunement to another like oneself. Common experience of physical action heightens sense of presence. Current demand for televised sports probably depends strongly on explicit marketing investment. An interesting challenge might be to try to calculate the implicit marketing value of sports participation.