Human downgrading by social media

Below is an excerpt from my forthcoming book…

© Mahabodhi Burton

 


Check out this fascinating excerpt from the chapter ‘Transhumanism and Alienation,’ which draws on the 1995 book War of the Worlds: Cyberspace and the High Tech Assault on Reality, in which Mark Slouka lays out the foundations of what we see today, the downgrading and commodification of human experience, and its remedy, as suggested by Tristan Harris of the Center for Humane Technology.

Are you concerned about the future of technology and its impact on society? This excerpt is a must-read.

 

 

 

The Transhumanist culture of unmoral alienation 

In 1995 Mark Slouka published a chilling book called War of the Worlds: Cyberspace and the High Tech Assault on Reality.[1] In the various chapters he catalogues the fronts on which that assault is taking place:

  • ‘Reality is death’: The Spirit of Cyberspace
  • ‘Springtime for Schizophrenia’: The Assault on Identity
  • Virtual World: The Assault on Place
  • Highway to Hive: The Assault on Community
  • A Republic of Illusion: The Assault on Reality (My emphasis)

In 1993 the Internet carried only 1% of the information flowing through two-way telecommunications networks; by 2007 it was more than 97%.[2] Slouka’s concerns, then, were largely about a vocal minority of transhumanist enthusiasts:

‘My quarrel is with a relatively small, disproportionately influential group of self-described ‘Net Religionists’ and ‘wannabe gods’ who believe that the physical world can (and should) be ‘downloaded’ into a computer, who believe that the future of mankind is not in RL (real life) but in some form of VR (virtual reality); who are working very hard (and spending enormous amounts of federal and private money) to engineer their very own version of the apocalypse. As intelligent as they are single-minded, these people have been ignored by the majority of humanists for too long; it’s time we started listening.’[3]

In a poignant passage Slouka laments the lack of caution:

‘Given the enormous effect the digital revolution may come to have on our lives … there is something downright eerie about the lack of debate, the conspicuous absence of dissenting voices, the silence of the critics. Congress seems uninterested; watchdog groups sleep. Like shined deer, we seem to be wandering en masse onto the digital highway, and the only concern heard in the land, by and large, is that some of us may be left behind.’[4]

For some, the novelist and vocal techno-evangelist Robert Coover explains, humanity ‘has to do with souls and “depth” and the search for meaning and purpose; with tradition, ritual, mystery and individualism.’ For others, like himself, it has more to do with the spiritualism of the hive[5]—where we are all voices subsumed into one grand organism / machine immeasurably greater than the sum of its parts:

‘I regret’, he says, ‘having to give up the comforting fairy tales of the past: I, too, want to be unique, significant, connected to a “deeper truth,” canonized. I want to have an “I.” Too bad.’[6]

It is revealing that Coover claims not to have an ‘I’, thereby releasing himself from moral agency.

Back in 1973, when I attended Manchester University, I was enrolled in the Computer Science degree, largely because of advice from the school careers officer. On arriving I couldn’t face the prospect and transferred to Physics; however, at periodic lulls in my vision the temptation to get involved in the computer industry would reappear, prompted by the unusual financial and employment security within it. All of that finally abated when, aged 34, I got involved with Buddhism. For the last fifty years, involvement in the computer/IT industry has meant joining an ‘over-class’ with unique privileges, secure employment and increasing financial reward, to the point where, in the most extreme example, in 2021 Mark Zuckerberg personally controlled 28% of a $1 trillion company.

It is worth remembering the vacuous beginnings of Facebook. In the year before founding Facebook in 2004, Zuckerberg created FaceMash, a website where Harvard students were invited to vote on which of two randomly selected women was more attractive; who was ‘buff’ and who was ‘rough.’ The seeds of everything that has happened since—including the phenomenon of teenage girls on Instagram feeling prone to suicide because of their ‘body image’—are there in that origin. As Zuckerberg famously said:

‘The question isn’t, “What do we want to know about people?” It’s, “What do people want to tell about themselves?”’ (My emphasis)

 

The geek

Apparently, the word ‘geek’[7] used to mean ‘a circus performer who bites the heads off live chickens and snakes’, i.e., a person you gawked at; a fool; an uncultivated person; a dupe. Later it came to mean ‘any person considered to be different from others in a negative or bizarre way’; for instance, in 1957 Jack Kerouac used it to mean ‘an overly diligent, unsociable student; any unsociable person obsessively devoted to a particular pursuit.’ Between 1990 and 2000, the use of ‘geek’ exploded along with the personal computer market. (Bill Gates was often called the ‘ultimate geek’, meant as a compliment.)

Finally, it has come to have a much more positive connotation; ‘a person regarded as being especially enthusiastic, knowledgeable, and skilful, especially in technical matters’, although it still has a connotation of social ineptitude; in the American Heritage Dictionary a geek is now:

  • ‘A person regarded as foolish, inept, or clumsy’
  • ‘A person who is single-minded or accomplished in scientific or technical pursuits but is felt to be socially inept.’

When I think of twenty-somethings working in Silicon Valley contemplating a new product or increased user involvement with an intense religious light in their eyes, the word that comes to mind is not ‘immoral’—which refers to ‘a conscientious rejection of typical moral standards and has a connotation of evil or wrongdoing’; nor is it ‘nonmoral’—which ‘describes actions that are not usually subject to moral concerns, such as which shirt to wear’; nor is it ‘amoral’—which ‘implies an awareness of moral standards, but a lack of concern for them while acting’: the word seems to be ‘unmoral’, which:

‘Refers to those having no moral perception. It is best used for animals or inanimate objects incapable of considering moral concerns but can also be used for humans lacking the same.’[8]

This is more of a tendency than something absolute among ‘enthusiastic technical types’, who are often very fluid when talking about technical ideas; but when asked a more moral question, off a technical topic, they appear visibly to go blank: ‘Does not compute!’ Maybe geeks are distanced from their emotions to some degree; maybe they can be a bit alienated at times; and maybe they are not all that interested in moral questions: it is not really part of the job.

And maybe—on some deeper or unconscious level—they subscribe to the worldview of Scientism, which has four key claims:

  1. The only kind of knowledge we can have is scientific knowledge
  2. The only things that exist are the ones Science has access to
  3. Science alone can answer our moral questions and explain as well as replace traditional ethics
  4. Science alone can answer our existential questions and explain as well as replace traditional religion[9]

It is difficult to understand how claims 3 and 4 can be true when—as we saw in Chapter 1—Science only has access to third-person evidence, whereas morality is the domain of first-person evidence; but then again, Coover accepts no such thing as an ‘I.’ The only conclusion I can draw is that Scientism is ‘the religion of the alienated’: those alienated from their experience.

 

Silicon Valley and Ethics

The prediction in Mark Slouka’s book that the digital revolution would become a $3.5 trillion industry is unerringly accurate. Technologically, it has been a stratospheric success, enabling not just instant worldwide access to information; increased computing power has also led to unforeseen advances in biomedicine. Socially and morally, however, its effects have been disastrous: as if the geeks, pandemic safetyists and gender theorists have exported their brand of unmorality and alienation to the rest of society, increasing numbers of people seem to be adopting an unmoral outlook.

In a talk titled ‘How Technology is “Downgrading Humans”’,[10] Tristan Harris—of the Center for Humane Technology—talks of a ‘global climate change of culture that’s caused by technology.’ Harris used to work for Google; he tells his story in an interview with Jack Kornfield.[11]

 

[Image courtesy of Shaughn and John; see article in The Press Democrat.]

 

 

According to Harvard professor—and father of Sociobiology—E. O. Wilson, the real problem of humanity is that we have Paleolithic emotions, medieval institutions and god-like technology: we are ‘chimpanzees with nukes.’ But as Harris says:

‘If you have unprecedented capacity to enact consequences you have to have unprecedented wisdom to guide that power.’

The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth will become radically faster and uncontrollable, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, I.J. Good’s intelligence explosion model, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence, and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[12] Harris says:

‘While we were all looking out for the moment here in Silicon Valley when technology crosses the line of human strengths: that’s when we think the singularity happens—that’s when it takes our jobs—we miss this much earlier point, when technology, like a magician, doesn’t have to be smarter than you, it just has to know how to overwhelm your weaknesses. This is the best diagnosis for describing why a number of the most important problems have gone wrong.’

Specifically, he says technology overwhelms our weaknesses by overwhelming our:

  1. Cognitive limits, which leads to information overload
  2. Dopamine system, which leads to addictive use
  3. Weakness for social validation, which leads to mass narcissism culture (everyone has to be an influencer)
  4. Confirmation bias: that it feels good in the nervous system to get information that confirms your worldview, and it feels bad to get information that doesn’t, which leads to fake news
  5. If you hack into outrage, you get polarisation
  6. And if you hack into trust (the limits of what we know; the basis of whether or not to trust something), you get deep fakes and bots

‘Checkmate on your nervous system’, he says. According to Frances Haugen, the Facebook whistleblower, speaking on CBS News’ 60 Minutes:[13]

‘The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimize for its own interests, like making more money.[14] … When we live in an information environment that is full of angry, hateful, polarizing content it erodes our civic trust, it erodes our faith in each other, it erodes our ability to want to care for each other; the version of Facebook that exists today is tearing our societies apart and causing ethnic violence around the world.’

Given that there are so many so-called Buddhists in Silicon Valley, it is surprising that those designing social media platforms haven’t picked up on the basic Buddhist principle that we hear a negative message four times louder than we hear a positive one—no doubt a ‘negativity bias’ helpful to our survival. But maybe their ‘Buddhist side’ wasn’t listening very hard while their other interests predominated.

Out-of-control technology deepens a sense of powerlessness; when governments react to this, democracy is undermined. Haugen relates how the Facebook ‘interaction dynamic’—where people enjoy engaging with things that elicit an emotional reaction, and the more anger that they get exposed to, the more they interact and the more (ads) they consume—led to a complaint to Facebook by major political parties across Europe. A 2019 internal Facebook report obtained by Haugen says that the parties,

‘…feel strongly that the change to the algorithm has forced them to skew negative in their communications on Facebook… leading them into more extreme policy positions.’

The European political parties were essentially saying to Facebook: ‘The way you’ve written your algorithm is changing the way we lead our countries. You are forcing us to take positions that we don’t like, that we know are bad for society. But we know that if we don’t take these positions, we won’t win in the marketplace of social media.’ (My emphasis)
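To make the dynamic concrete, here is a minimal sketch in Python of the kind of engagement-optimized ranking Haugen describes. To be clear, this is not Facebook’s actual code: every name, post and weight below is hypothetical, invented for illustration. It simply ranks posts by predicted engagement, with emotionally ‘hot’ content assumed to provoke more reaction, an illustrative nod to the ‘four times louder’ negativity bias mentioned above.

```python
# A toy model of engagement-optimized feed ranking. Hypothetical throughout:
# not Facebook's algorithm; names, posts and weights are invented.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_react: float   # baseline probability that a viewer reacts at all
    outrage: float   # emotional "heat": 0 = neutral, 1 = maximal outrage

def predicted_engagement(post: Post) -> float:
    """Score a post by expected reaction. Outrage earns a multiplier of up
    to ~4x, echoing the 'negativity bias' noted above (illustrative only)."""
    negativity_boost = 1 + 3 * post.outrage
    return post.p_react * negativity_boost

feed = [
    Post("Local charity hits fundraising goal", p_react=0.30, outrage=0.0),
    Post("Measured policy explainer",           p_react=0.25, outrage=0.1),
    Post("THEY are destroying everything!!!",   p_react=0.20, outrage=0.9),
]

# Ranking by engagement alone puts the outrage post first (score 0.74),
# despite its having the lowest baseline interest: publishers who want
# reach therefore learn to 'skew negative'.
for post in sorted(feed, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(post):.2f}  {post.text}")
```

Down-weighting the outrage multiplier in a scorer like this would make the feed safer, but it would also lower total engagement, and with it time-on-site and ad revenue: exactly the trade-off that, on Haugen’s account, Facebook repeatedly resolved in its own favour.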

‘In theory, the principle invoking the so-called “marketplace of ideas” is a bedrock of free speech laws; it presumes that “truth” would prevail in a level playing field. In the marketplace of platform-based ideas, however, the theory fails—it seems social media is neither equal nor fair. A platform’s design to maximize financial gains through data monetisation techniques can overwhelm “truth” with inbuilt susceptibility to sensationalist, viral, curated campaigns.’[15] (My emphasis)

Zuckerberg claims that Facebook is giving people more power:

‘When you give everyone a voice and give people power, the system usually ends up in a really good place. So, what we view our role as, is giving people that power.’

But it is more and more common for those in the centre ground to feel they have no power at all, because sensationalism and extremism get more views, or because debate on certain topics is effectively censored. This is very dangerous for democracy because, as Alice Walker says, ‘The most common way people give up their power is by thinking they don’t have any’, and if those people give up and keep quiet:

‘Nothing strengthens authority so much as silence’—Leonardo da Vinci

If you look even deeper, the above issues, respectively, lead to 1) shortening attention spans, 2) social isolation, 3) teen depression and suicide, 4) breakdown of sense making, 5) conspiracies and extremism, and 6) a post-truth world (where you don’t know what to believe anymore: you are overwhelmed).

These six areas are an interconnected system of harm, which Harris calls Human Downgrading:

‘These are all connected problems, and we have to see them as a system that we call human downgrading. This is the climate change of culture, and it is game over if we don’t change it, because it’s degrading our sense making and our choice making, at a moment when we need that the most.’

Harris says that to fix the problem we need a systemic solution: ‘the real answer is that we have to end attention and surveillance capitalism.’

‘The pairing of a business model that says that the better I am at manipulating your nervous system in real time, by knowing something about you that you don’t know about yourself, and being in a tight individual loop with you—that is the core problem.’

He says: ‘Our ultimate goal is to change how technology is built. The conditions for that to happen are External Pressure (policymakers, government, the media, the public, parents, teachers, all of that); also Internal Pressure (the Facebook employees’ letter saying you could do a better job with advertising); and also Aspirational Pressure (we need somewhere else we can go other than downgrading human experience). To do this we need to ask transformative questions, because the questions that we are asking, about how we limit notifications, are good, but we’re not asking the transformative questions.’

‘The technology exists: we obviously can’t put the genie back in the bottle, but we need to investigate the paradigm that got us here.’[16]

Harris doesn’t believe that anyone in Silicon Valley wanted this to happen, but he suggests that what needs to change is the mindset or paradigm from which all the assumptions, all the beliefs, all the design choices and all the understanding of human nature come.

The basic mindset or paradigm is:

  1. Give users what they want
  2. Disrupt everything
  3. Technology is neutral
  4. Who are we to choose what’s good for people?
  5. Growth at all costs
  6. Design to convert users (get those users clicking)
  7. Obsess over the metrics (quantity of clicks, engagement)
  8. Capture attention

Of these eight, numbers 1-5 can be grouped under the heading Minimize Responsibility; numbers 6-8 under the heading Commodify Human Experience (human experience is a commodity, a resource; we can treat it as fungible; and we can get people to want it).

Langdon Winner suggests that:

‘Technology is never a neutral force: it orders our behaviour, redefines our values, reconstitutes our lives in ways that we can’t always predict. Like a political constitution or a legislative act … technology establishes the rules by which people live.’[17]

These categories in the mind of the technologist create the interconnected system of harm that is Human Downgrading.

Harris then asks, ‘What is the alternative new way of thinking about these things?’ His answer:

  1. See in terms of human vulnerabilities (like a magician, understand what people’s vulnerabilities are; as in The Matrix, see the code that’s running under people that causes them to be vulnerable and do what they do: what causes them to tick item one or two on the menu)
  2. Find and strengthen human brilliance (e.g., people are good at building trust face-to-face)
  3. We are constructing the social world (we are the urban planners; we are organizing the wiring diagram of the flow of human attention)
  4. Choosing is inevitable: there is no such thing as not choosing
  5. Bind growth with responsibility (Growth and scale are always bound to responsibility and accountability)
  6. Design to enable wise choices (Enhance greater sense making, more considerate choice making)
  7. Obsess over things that really matter in people’s lives
  8. Nurture awareness (Regenerate attention and nurture their own capacity and awareness)

The first five now Embody Responsibility; the last three Enhance Human Wisdom.

 

The chapter goes on to explore Buddhists in Silicon Valley.

 

 

 

 

[1] Mark Slouka (1995) War of the Worlds: Cyberspace and the High Tech Assault on Reality. Abacus.

[2] ‘History of the Internet’ Wikipedia.

https://en.wikipedia.org/wiki/History_of_the_Internet

[3] War of the Worlds: Cyberspace and the High Tech Assault on Reality, p. 8.

[4] Ibid.

[5] Ibid., p. 10.

[6] Quote is from a letter by Robert Coover to Harper’s Magazine 289 (August 1994), p. 4.

[7] Merrill Perlman. ‘The transformation of the word geek.’ Columbia Journalism Review. January 14, 2019.

https://www.cjr.org/language_corner/geek.php

[8] ‘A Lesson on Unmoral, Immoral, Nonmoral, and Amoral.’ Merriam-Webster.

https://www.merriam-webster.com/words-at-play/using-unmoral-immoral-nonmoral-amoral

[9] Mikael Stenmark (2020) Scientism: Science, Ethics and Religion. Routledge. p. x.

[10] ‘How Technology is “Downgrading Humans”.’ (Tristan Harris X Capgemini). Center for Humane Technology. YouTube. 3 February 2022.

https://www.youtube.com/watch?v=LZ0PnUzRh8U&t=586s

[11] ‘A New Paradigm for Technology | Tristan Harris, Jack Kornfield.’ Wisdom 2.0. YouTube. 3 March 2019.

https://youtu.be/5VS_xoRXdBQ

[12] Technological singularity. Wikipedia.

https://en.wikipedia.org/wiki/Technological_singularity

[13] See Scott Pelley. ‘Whistleblower: Facebook is misleading the public on progress against hate speech, violence, misinformation.’ 60 Minutes, CBS News. 4 October 2021.

https://www.cbsnews.com/news/facebook-whistleblower-frances-haugen-misinformation-public-60-minutes-2021-10-03/

[14] Haugen told CBS that the root of Facebook’s problem is a change it made in 2018 to its algorithms—the programming that decides what you see on your Facebook news feed. The algorithm picks content based on the kind of content you’ve engaged with the most in the past; one consequence is that it optimizes for content that gets engagement, or reaction. Facebook’s own research shows that hateful, divisive, polarizing content does this best: it is easier to inspire people to anger than to other emotions. Facebook also realized that if it changed the algorithm to be safer, people would spend less time on the site, click on fewer ads, and make it less money. Haugen says Facebook understood the danger to the 2020 election, so it turned on safety systems to reduce misinformation—but many of those changes, she says, were temporary: as soon as the election was over they were turned off, or the settings were changed back to what they were before, prioritizing growth over safety. To her, that feels like a betrayal of democracy.

[15] Archit Lohani, ‘Countering Disinformation and Hate Speech Online: Regulation and User Behavioural Change,’ ORF Occasional Paper No. 296, January 2021, Observer Research Foundation. See

https://www.orfonline.org/research/countering-disinformation-and-hate-speech-online/

[16] Harris draws on the work on ‘Leverage Points’ by Donella Meadows.

https://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/

[17] Langdon Winner, ‘Do Artifacts Have Politics?’ in Technology and Politics, ed. Michael E. Kraft and Norman J. Vig (Durham, N.C.: Duke University Press, 1988), p. 43.
