Silicon Valley's
Blue Error Screen
Tech companies exploit behavioral psychology for profit. 
But they’re clueless about empathy––that might be a good thing. 
by Christian Thorsberg // Journalism 383: Health and Science Writing
Venice Beach, Coors Light. 
Newborn baby, college suitemate, high school sweetheart, 
Adobe Suite.
LeBron James, Talulah Riley,
Qdoba Cholula Hot and Sweet Chicken Burrito.

     Joe Toscano brandishes his phone as if it were some sort of weapon, taps his passcode as if the screen were red-hot and scalding his Apple ID fingertips. He presses close to his computer’s Zoom camera and gives me a look that warns: I’m about to let you in on a terrible secret. 
     With a moment’s hesitation he opens the Instagram app, and I watch as his upper half begins moving in staccato sync––the long front curls of his brown mohawk jiggling, his hunched shoulders shuddering, his jawbone flexing––as if his thumb, now swiping purposefully through his feed, were the fulcrum on which his entire body’s motor control depended. As the familiar monotony of Instagram stories and posts flashes across the call, Toscano recites what sounds like a droning 21st-century list poem: “Vacation photo, beer. Baby pic, college buddy, old crush, Photoshop...”
     His phone is set to dark mode and the conveyor belt of data pops colors that blind me. At last he abstracts his ballad to more directly convey his point: “One, two, three, there’s an ad. One, two, three, and an ad.”
     One, two, three, and an ad. When Joe Toscano scrolls through his Instagram, he sees exactly three posts from the accounts he follows, and then, inevitably, an advertisement for a company he neither follows nor engages with online. The same is true for my Instagram, and the same is true for yours––post, post, post, ad; story, story, story, ad––whether you notice or not. “Because everything's so well targeted, the ads feel like content I might have liked,” Toscano says.
     It’s a social media motif whose universal infrastructure is buoyed by the research of positive psychology. That branch of study, which examines our human drive for meaningful experiences, positive institutions, and individual happiness, is harnessed, according to Toscano, to support the idea that programming an ad every third post on Instagram (and every fourth on Facebook) is the most efficient way to serve as many ads as possible––ads that, in 2021, cost advertisers between 70 cents and $1 per click––without the user feeling overwhelmed. And the pattern repeats indefinitely, or at least until you put your phone down. Tomato, tomato.
1-2-ad; 1-2-ad; 1-2-3-ad; @kojiwan_kenobi
1-2-3-ad; 1-2-3-ad; @_stephen_guerrero
1-2-ad; 1-2-3-ad; 1-2-3-ad; @mayaschnake
1-2-3-ad; 1-2-3-ad; @abejeschnake
1-2-3-ad; 1-2-3-ad; @jirbykulian
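     Reduced to a rule, the cadence above is simple to express. Purely as a reader's illustration, a minimal sketch in Python rather than any platform's actual code, with the function name, feed contents, and numbers invented for the example, it amounts to slotting one targeted ad into the feed after every few organic posts (three on Instagram, four on Facebook, by Toscano's count):

    # A minimal sketch of the cadence Toscano describes: one targeted ad
    # after every N organic posts. Names and values are illustrative only.
    def interleave_ads(posts, ads, every_n=3):
        """Yield the feed in order, inserting an ad after every `every_n` posts."""
        ad_index = 0
        for i, post in enumerate(posts, start=1):
            yield ("post", post)
            # Every third post on Instagram, every fourth on Facebook.
            if i % every_n == 0 and ad_index < len(ads):
                yield ("ad", ads[ad_index])
                ad_index += 1

    feed = list(interleave_ads(
        posts=["vacation photo", "baby pic", "college buddy",
               "old crush", "high school sweetheart", "LeBron James"],
        ads=["beer ad", "burrito ad"],
    ))
    # -> post, post, post, ad, post, post, post, ad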
     Toscano refers to our friends’ vacation photos as “positive rewards” and to beer ads as “negative punishments.” It’s a lingo he picked up while working as a user experience researcher at Google in the 2010s, helping to design the interfaces and web pages we interact with on a daily basis. The rare data scientist among a group of MFA graduates and traditional artists, he was frequently asked to fiddle with designs and operations that nudged his ethical bounds a whisper wider––different button placements that would encourage users to stay online, color palettes that toyed with emotions and encouraged longer screen time, personalized product recommendations based on swaths of data collection and website tracking. 
      “The industry is a gigantic psychological experiment,” he tells me. “And it's mostly run by people who are not traditionally qualified to run research experiments. They’ve never gone through review boards, they haven't been through ethics training.”
     This creeping sense of power misuse led Toscano to leave Google in 2017. He now urges companies to “respectfully earn users’ consent” with data transparency, and he backs proposals for federal data taxes. Most recently, he joined a group of Silicon Valley defectors and media experts as an interviewee in the 2020 documentary “The Social Dilemma.”
     The film, which debuted on Netflix last fall and became perhaps the most mainstream tell-all of social media’s dangers, is a cautionary tale that extends far beyond human pattern-seeking and the positive reinforcement psychology of “one, two, three, ad.” It exposes the algorithms that insulate our online profiles with content we are most likely to engage with, criticizing a Silicon Valley business model that profits at the expense of our attention and personal data. It waxes dystopian––indeed, argues we are currently participating in a dystopia––and draws from a laundry list of peer-reviewed studies that suggest social media’s unchecked infrastructure facilitates the erosion of democracy, journalism, and mental health: 64 percent of extremist group joins come via social media recommendation tools; fake news spreads six times faster than true news; fake news and rhetoric on Facebook aggravated the Myanmar genocide in 2017, as only four Burmese speakers were available to monitor the country’s 7.3 million users; children under the age of 14 spend twice as much time on their phones as they do with their families; the more we treat artificial intelligence as if it were human, the more we mistreat and dehumanize real human beings. Near the end of the film, Tim Kendall, a former executive at Facebook and ex-president of Pinterest, responds to the question of what his biggest fear might be: “I think, in the shortest time horizon, civil war.”
     While at Google, Toscano thought the same thing. “We were going to drive people into their own echo chambers and create just incredibly visceral emotions, and it all really comes down to the data science, the fundamentals, the ethics of it,” he says. 
     Basic animal behavior is incredibly well understood. The stuff of dopamine triggers and patterns that turned Pavlov’s dogs all slobbery is exactly the principle at work in our docket of Instagram story, story, story, ad. And the concept of positive and negative reinforcement that leads a mouse to choose cheese over traps is the basis for why a Democrat’s Facebook feed is agreeably right-hating, and why a Republican’s Facebook feed is agreeably left-hating.
     Jaron Lanier, the futurist father of virtual reality and author of “Ten Arguments for Deleting Your Social Media Accounts Right Now,” lists “Social media is destroying your capacity for empathy” as argument number six. “Trump supporters seem nuts to me, and they say liberals seem nuts to them,” he writes. “But it’s wrong to say we’ve grown apart and can’t understand each other. What’s really going on is that we see less than ever before of what others are seeing, so we have less opportunity to understand each other.” 
     Are these algorithms obscuring nuance? It’s as if we all have the same chess board, but are playing vastly different games with dissimilar rules and pieces. We share a framework, but none of the same content. The question of empathy thus arises––if Silicon Valley knows all, then what of empathy and its psychology? How does empathy function in an online, algorithmic landscape? And how does it translate offline? Is civil war truly a reality if we cannot empathize? 
     Toscano puts his head in his hands and rubs his sleepless eyes, exhales a frustrated sigh. While in this tired position he recalls the Better Ethics and Consumer Outcomes Network (BEACON), the company he founded post-Google that is rethinking the technological relationship between companies and people. It is no doubt sobering to think that BEACON has four employees compared to Google’s 135,000, and perhaps this is the thought that runs through Toscano’s mind as a silence expands within the Zoom call. At last he returns his phone to his pocket and looks up. “I don't know the answer to how Silicon Valley defines empathy,” Toscano says. “Because I don't think they do.”
      In the early 2010s, when Facebook was establishing itself as the preeminent social network, the study of online empathy took an abstract, often indirect shape. The small field of techno-humanism examined the characteristics of online behavior––the psychology of assimilating ourselves into online profiles, interacting with machines for the first time, anthropomorphizing avatars and identities. Early research made two things clear: human interaction is quantitatively and qualitatively richer in real-life settings, and as such we must be wary of, or at least intensely regulate, a certain “leadership by dashboard” that places more value on data than on natural human behaviors and instincts. From these findings, the seed of an idea, a philosophy called humane technology, began to slowly make the rounds. It asked the question: what good can we achieve if we structure these sites around the key peer-reviewed data point––that social media, when used in moderation, does indeed improve quality of life?
     But things took a turn and data became more valuable than oil, and less than a decade later Toscano has a poetic synopsis for how the data-driven infrastructure of 2021 social media is affecting the human experience: “We're trying to turn subjective experiences into objective binary responses,” he says. “In doing so, you're reducing life to a set of specific inputs. And those inputs don't always necessarily mean what you think it does. Effectively, you start to shape the world based on laws of averages, rather than idiosyncrasies that is life itself. You start to make a bunch of gray areas, instead of colorful different spots.”
    The psychology of empathy is at once well documented and completely mysterious. Ask enough questions, and you find yourself in the realm of the existential––why are we here? Why do we do what we do? But some truths about how we understand one another are nonetheless within our grasp.
     From a psychological perspective, there are two kinds of empathy: affective empathy and cognitive empathy. Affective empathy describes our ability to understand the emotions of others––someone is sad, frustrated, thrilled––while cognitive empathy is our ability to understand what someone is thinking. “And there's clearly overlap, as those two are both about representing other people,” says Dr. Sid Horton, a psychology professor and researcher at Northwestern University. “But there clearly seems to be some separation between thinking about other people's thoughts, versus thinking about other people's feelings.”
     Cognitive empathy is often discussed under the umbrella term “theory of mind,” which describes how humans develop a capacity for empathetic understanding, and thinking outside themselves, over their lifetimes. It explains and explores the natural progression from the egocentric tendencies of young children to the more socially-aware behavior exhibited by adults. “It's the capacity to understand mental states, which can include emotions, thoughts, beliefs, and knowledge that are different from your own,” Horton says. 
     Research suggests that the development of theory of mind––sometimes called “mentalizing”––is what facilitates social interaction. And there is evidence that having older siblings, or reading fictional stories, helps children develop theory of mind more rapidly. Paul Zak, the founding director of the Center for Neuroeconomics Studies and a professor at Claremont Graduate University, takes this a step further; he argues that how social information is presented shapes one’s empathetic response. His research indicates that narratives following the traditional storytelling arc––exposition, rising action, climax, falling action, denouement––elicit stronger emotional and chemical responses than stories that are fragmented or stand-alone. This suggests that telling complete stories, and listening to them, increases one’s empathy.
     Horton prefers to think of theory of mind as if it were a muscle. Its exercise calls for reps of perspective taking (actively viewing the world from another’s point of view), a practice he admits is increasingly difficult in our online silos, which allow us to be “a little bit lazy,” he says. In his main area of research, memory and language, Horton probes the psychological notion of “common ground.” It refers to the assumptions we make when interacting with others, the parts of our identity or knowledge we do and don’t share. Via our perceptions of common ground, we construct and carry mental models of the people we interact with, and make decisions based on these models––what kind of language to use, which references to make, what must be explained and what is a given. But on Facebook, Instagram, and Twitter, Horton says, because of their architecture, we already come with a ready-made model of who our audience and followers are. Common ground, perspective taking, theory of mind––these muscles again go unflexed.
     “There’s a difference between having a theory of mind and using a theory of mind,” Horton says. “To what extent do we spontaneously, without being prompted or having to work at it, understand what other people are thinking or feeling? Maybe that spontaneity is what really gets affected online. If you aren't exposed to other perspectives of other things, maybe you're less likely to spontaneously consider what they might think.”
     On January 6, 2021, when rioters and domestic terrorists stormed the United States Capitol, and when, in the aftermath, it was revealed that social networking had played no small role in the attack’s planning, its escalation, and the indictments that followed, it appeared that Kendall’s and Toscano’s civil war fears were more relevant than ever.
     The question of empathy and echo chambers quickly rose to the forefront: surely, in all of Silicon Valley’s psychological tinkering, someone has devised a way to quantify online cause and offline effect?
     “There's definitely a lot of people paying attention to the psychology of users,” says Renée DiResta, the technical research manager at the Stanford Internet Observatory and another “The Social Dilemma” interviewee. “It's just that they're paying attention to the psychology of users from the standpoint of maximizing profit. We still don't know what piece of content actually changes someone's mind, or changes their heart, or makes them feel motivated to take an action offline.”
     How to quantify engagement, empathy, and theory of mind in the online space is what DiResta and her team work every day to better understand. The digital landscape has come a long way, she says, from the rudimentary internet epoch when “ads looked like ads,” so visually obvious and clearly untargeted. Today’s inorganic world of immersive marketing and filter bubbles, DiResta says, not only changes users’ relationship with the idea of content, but also gives social media companies the ability to play content creator. This “frustration architecture,” as she calls it, is the main topic of her Congressional advising on data policy and of her current writing. But, she says, we are no closer to understanding the quantifiable psychological effects that an echo chamber app like Parler has on someone than we were during the propaganda age of World War II. We can’t be certain exactly how our thought processes were affected when, in 2016, 150 million Americans were targeted by Russian bots on social media, just as we can’t know the tangible psychological effects of Vietnam War-era films and commercials.
     “We internalize the idea of the enemy,” DiResta says. “But I think it's frustrating to a lot of people that we still can't quite gauge, you know, what is going to lead to an actual manifestation of violence versus what is going to remain in the realm of rhetoric.” 
     Online empathy and what comes of it, DiResta says, is one psychological query that won’t make Silicon Valley any more money, whether they know the answer or not.
      The question must be asked: should we even want Silicon Valley to be concerned with empathy? Should we press for an additional experiment, further psychological tweaking, from the same people who bloated an infrastructure agreed to be good, when used in moderation, into a universal marketing scheme?
     Dr. Desmond Patton has had perhaps one of the most intimate experiences with online empathy. The director of Columbia University’s Safe Lab, a research initiative that studies how youth of color navigate violence both on and offline, Patton spent seven years studying one 17-year-old girl named Gakirah Barnes and what he calls “internet banging”––the online behavior of gang-affiliated individuals. What he learned changed not only his perception of himself and Barnes, but how we view and study empathy cross-culturally.
     “I think that oftentimes, the problem with social media is that there are whole swaths of people and communities that are not given automatic empathy,” Patton says. “I think social media is just a reproduction of life. And that is particularly complicated when we’re talking about folks that are marginalized.”
     The average Silicon Valley worker is a young, college-educated white male. The effect of this, Patton says, is in the proverbial pudding: the design of Facebook, Twitter, Instagram. These tools, he says, infused with their algorithms and binaries and patterns, leave little room for the racially and culturally diverse expression of language, movement, and complex emotions, let alone for empathy to be applied well to any of these interactions. “Many Black and Latinx youth are using these tools to process their grief,” Patton says. “But the tools were not created for them. So many times, in my work, we see that [expressiveness] just sitting there. We don’t leverage it for information, we don’t leverage it in the courts, we don’t leverage it in a way to help young people.”
     With this in mind, Patton places emphasis on the qualitative data he acquires from analyzing social media posts. His team uses language and context to piece together the narratives, grief, and trauma of young Black social media users, oftentimes annotating specific posts and reconvening to discuss the macro-level framework of their work. Without knowing who these people are in real life, it is as empathetic an approach as Patton’s team can execute, one so far removed from the generalizing algorithms that he can’t help but wonder what a totally inclusive, truly empathetic social media platform would look like. It's highly unlikely a user would take the painstaking time to analyze the language, context, and trends of some unknown profile online. Is there an infrastructure to be built that makes applying theory of mind a little bit likelier?
      “What would a community-driven social media platform look like?” Patton asks. “And for me, that’s where it starts and stops. Because I don’t want to decide what should be there. Because what happens is we don’t anticipate different ways in which our lived experiences occur on these platforms. I don’t understand how someone who is blind interacts on Twitter. I don’t understand how someone who is non-binary interacts on Instagram. And because, probably, someone who is non-binary, or blind, did not develop or did not partially develop any of these platforms, that’s where we should start.”
     The English word “empathy” is only 100 years old; the current landscape of social media will hardly reach any significant age before it iterates again into something new, just as it has for the past two decades. The publication of conclusive research that reliably quantifies our affective and cognitive empathy online might be some time away. But the behaviors and emotions that exist now have existed forever––our instinct to write someone off as backward-minded or uneducated, to distance ourselves from someone disagreeable.
     For the first time, because of social media, everyone has a voice. “And everyone's voice has to be considered, right?” Toscano runs his fingers through his curly mohawk and shrugs, again rubbing his tired eyes and sighing. “And now we’re in a world where we don't know what voices to follow or not.”
     “In the meantime,” he says, “the best way to increase empathy within our society is to get off of the screen, go talk to your neighbor, and be a good steward in your community.”