Friday, July 25, 2014

Audio hallucinations and loving machines

Two articles got me thinking about human nature and its future this week.

Stanford researcher: Hallucinatory 'voices' shaped by local culture:


One is a Stanford News piece covering a recent study's findings about the influence of culture on the expression of schizophrenia: namely, an anthropologist's take on how hallucinatory voices are perceived by schizophrenics in differing societal contexts. Professor Tanya Luhrmann's interpretation of her data is that in individualist cultures the voices are more likely to be experienced as hostile or as deficits, while those with the illness in more collectivist societies have a better chance of interpreting the voices as friendly and as an aspect of the world.

First, the normal disclaimer. As a rule of thumb, it's best to wait for three studies done by three teams at separate facilities to come to similar conclusions before declaring any set of findings "fact". Second, the sample populations in this study appear to have been very small. Also, the discussed definition of what constitutes individualist and collectivist societies is...broad. Calling Western societies individualist throws a wide blanket over a group of cultures strung out along that spectrum rather than clustered on it.

To me, this study looks like an interesting starting point: an inroad into the contextual presentations of schizophrenia that deserves a lot more follow-up.

That said, I've always wondered if it was easier to be schizophrenic, or at least more accepted, when the voices were often seen as aural manifestations of angels, demons, or saints. I've also suspected that it'd be even easier to deal with the audio hallucinations in animist cultures, where the dominant paradigms frequently imbue everything with the potential for voice and consciousness.

It's also amazing how period and technology specific the voices can be. While living in Scandinavia ten years ago, I spent quite a bit of time hanging around doctors in general and psychiatrists specifically. One of their more interesting observations was the hallucinatory universality of spy satellites, in particular CIA spy satellites, used to steal or read thoughts on both sides of the Atlantic.

Rather than individualist versus collectivist, I wonder if paradigm is the stronger influence: an individual's broad worldview in the sense of technology base, type of religion, general acceptance of the existence of clinical disorders of the mind, and more in that vein.

Assuming that the differences hold up with larger sample populations.

I also wonder if people might choose to cultivate useful forms of schizophrenia in the future, with implants and gene augmentation, to give literal voices to the various analytic functions of the brain.

How? One of the more intriguing hypotheses I've come across about the disease is that an individual's inner narrative voice is a composite of several analytical and simulation functions. Schizophrenia, in this model, is in part a timing error in which the inputs that make up the singular voice of the mind fall out of sync with one another. That lack of cohesion is something we might eventually take deliberate advantage of, constructing a new style of processing our awareness of the world through parallel internal narratives.

That said, it's been several years since I've encountered the voice-timing hypothesis in a medical journal. I have no idea how, or if, it's held up in light of recent discoveries about networks in the brain and our ongoing refinement of neuroanatomical function maps.

Failing the Third Machine Age: When Robots Come for Grandma — The Message — Medium:


Robots as emotional caregivers and sources of solace?

That's actually something I've given a fair amount of thought to, though for reasons other than those articulated by University of North Carolina assistant professor Zeynep Tufekci.

Where Tufekci is understandably concerned about the destruction of much-needed medical and caregiver jobs, and all the violent turmoil associated with past technological revolutions, my greater worry on this issue is that humans will quit, or greatly reduce, socializing with one another.

Especially if the robots or software agents are kinder than people: nicer, and either capable of genuine emotion or of a facsimile convincing enough for people to buy into.

Why? Because I've seen it happen with monkeys. Not monkeys and robots, but monkeys and humans.

Usually with juvenile and young adult rhesus macaques who had become accustomed to socializing with people. Hanging out with the lab techs was all about constant grooming opportunities, treats, and all those encouraging friendly noises (words) that humans tend to vocalize around monkeys they like. Life in the troop, on the other hand, was endless macaque infighting and politics: sharp-edged conflicts over status, jockeying for position, and slapping down or humiliating those underneath to keep them in their place in the rhesus hierarchy.

That's not to say life in all troops was brutish. Leadership styles of the elites and combativeness within each culture varied. Still, it never looked fun at the bottom, and in some groups not even tolerable in the middle or at the top.

Monkeys acclimatized to human levels of friendly interaction, and to the absence of routine social violence, tended to do poorly when reintroduced into a communal housing unit.

Not that human society is anything close to perfect. There are those who would almost certainly be healthier and more emotionally stable with loving machines rather than their squabbling fellow human primates. Still, I'd rather see technology help make us better social animals than seclude us from one another. The future of human augmentation, in my eyes, is as much about improved emotions as it is about enhanced reasoning or augmented talents.
