I had the opportunity recently to chat with Katy Byron, of the Poynter Institute’s MediaWise, and Erik Palmer, who consults with schools about, among other things, operating in a digital world. They offered very helpful strategies to help students (and teachers) become savvy digital-media consumers, including how to determine the integrity and validity of the information they receive and share. Social media certainly inundates us with content, much of it designed to influence rather than objectively inform us. How can we distinguish truth from fiction, and biased sources from nonpartisan ones? Fortunately, resources are available to help.
Still, I’m worried.
Knowing how to do lateral reading, for instance, is great, but you still need the motivation to apply that strategy. Since social media generally feeds us information we already agree with, why would any of us take the time to verify it? I have a dream (fantasy?) of a world where people see confirming news and think, “I wonder if there’s a conflicting view.” Can we get people to actually seek out information that contradicts their existing beliefs, to work to understand, rather than convince, people with other viewpoints? To achieve such a world, I think it helps to understand what drives our information-seeking and evaluating behavior, and here’s where it gets both interesting and complicated.
You likely have heard about confirmation bias, our tendency to seek and embrace information that confirms what we already believe. It makes us feel good. However, recent research is revealing that our relationship with truth and contradictory information is full of nuance. A recent piece in The Economist titled “This Article is Full of Lies” does a nice, concise job of making much of this research accessible. One fundamental question is whether truth even matters. As the article notes, according to polls, some politicians are more liked than they are trusted.
Do we simply accept that politicians lie and embrace them based on other characteristics?
A brain study at Emory University, for instance, found that when both Democrats and Republicans were given threatening information about their candidate before the 2004 election, the parts of the brain related to emotion and emotion regulation lit up. The parts typically related to reasoning were quiet. Does emotion rule and rational thinking drool, at least when it comes to partisan issues?
Research by Dan Kahan at Yale Law School suggests that something called Identity-Protective Cognition may be at play. It’s not just that we seek out confirming information, even if it may be misinformation; we defend ourselves against information, including research and data, that threatens the beliefs related to our cultural identities. We all belong to groups that give us friendship and support and help define who we are. Those groups matter to us, and we want to maintain those affiliations. When those groups share core beliefs, we risk alienation by adopting ideas that contradict those beliefs. So, if you belong to a group that questions the validity of human-induced climate change, you are likely going to weigh evidence in a way that favors the beliefs of the group. In a time of hyper-partisan affiliations, we are increasingly likely to see Identity-Protective Cognition at work.
Sadly, more education doesn’t seem to improve the chances of open-mindedness. Kahan’s results mirrored what my Harvard colleagues David Perkins and Shari Tishman reported a number of years ago. Across a series of studies looking at responses to contentious political questions, here’s what they found: “People with higher IQs were no more likely to attend to the other side of the case than people with lower IQs, although people with higher IQs did tend to offer more elaborate justifications of their preferred side of the case.” More knowledge led to more sophisticated justifications for maintaining existing beliefs. Sigh . . .
Are you beginning to understand why I’m still worried about our ability to engage in truth-finding and civil discourse when our lives are filled with inflammatory social media bubbles?
Looking across the research landscape, though, I find reasons to be hopeful, and I have some suggestions to guide experiments that might lead toward the contradiction-seeking world I imagine.
Timothy Levine, a professor at the University of Alabama at Birmingham, suggests that humans are evolutionarily predisposed to believe what others tell them. He calls it Truth-Default Theory. Levine claims that most of the time, people do tell the truth. I find that encouraging. And it’s much more efficient to assume honesty than to always expect deception. Fortunately, we are not doomed to be duped (which happens to be the title of Levine’s recently released book). We can be primed for when to expect misinformation and when to be suspicious. Erik Palmer shared some suggestions about how to make students more discerning in the podcast I hosted with him and Katy Byron.
Now, I want to think about how we might leverage some of our underlying drives more productively. A study reported in Science in 2018 found that lies on social media “diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.” That’s not the hopeful part. The news we might be able to use from this study comes from the analysis of why. Two characteristics of the quickly spreading false news stood out: novelty and emotion. The “false stories inspired fear, disgust, and surprise.” Who wouldn’t want to share those? We could teach students, though, to use these traits as triggers for deception detection. It’s exactly when we reflexively want to click the retweet button that we need to pause.
Another study about moral grandstanding suggests a way we might encourage and reward identifying falsehoods and even seeking balanced viewpoints. According to the researchers, moral grandstanding can take many forms.
. . . someone may trump up moral charges, pile on in cases of public shaming, announce that anyone who disagrees with them is obviously wrong, or exaggerate emotional displays in taking ideological positions. They may also engage in a kind of moral one-upmanship, or “ramping up,” making increasingly strong claims to “outdo” other discussants as they try to show that they are more morally sensitive, or care more about justice.
Whatever the form, though, moral grandstanding is defined by what motivates it: the desire for status. As social beings, humans value how others perceive them. The desire to be accepted and stand out is part of what drives Identity-Protective Cognition. Here, it pushes extreme discourse. The way to stand out in your group is to be more of what that group stands for.
I wonder: can we offer belonging and status for the pursuit of truth and civil discussion? In our podcast, Katy Byron talked about some of the work at MediaWise that moves in that direction. The Teen Fact-Checking Network might provide a way for young people to gain recognition for flagging falsehoods and promoting objective information. The MediaWise Ambassadors tap media personalities and influencers to make being media savvy a cool thing to be a part of. What can you do locally to shift what students value? Where can you provide status that matters for the kind of media usage and civil discourse we desire?
While we experiment with the external social levers of belonging and status to encourage fact-checking and sharing, I think we should also consider internal drivers. Our brains release feel-good chemicals in the face of novelty, surprise, and resolution. Sensational news draws our attention. Add in a bit of the unknown (Major city facing catastrophe!) and we’re really drawn in. (I wrote a longish piece about the motivating power of “finding out,” but it’s in Spanish.)
Encouragingly, some of Kahan’s research on Identity-Protective Cognition revealed that people who are “scientifically curious,” who enjoy (those feel-good brain chemicals at work) surprising and unexpected discoveries, actually seek out information that contradicts their existing beliefs, regardless of their political affiliation. Curiosity, it seems, trumps knowledge when it comes to being open to multiple perspectives. So, how can we encourage this kind of productive curiosity in the world of social media?
Again, we need to experiment. One idea is to emphasize the excitement of “finding out” if information is genuine. Individuals may not need likes and retweets to get enjoyment out of uncovering a deepfake video or confirming that an image is, in fact, the real thing. We need to encourage healthy skepticism, being careful not to fall into cynicism, and, of course, we need to provide the tools for information verification. I don’t have the answers . . . yet, but I want to push us all to explore the possibilities.
I’m still worried, but I’m also hopeful. Knowledge and strategies are essential, but research suggests they may be insufficient. We also need to tap our inner, human drives. If we do it well, we may just be able to nudge ever more people, kids and adults, into being a bit more curious. Instead of just confirming what we already think, maybe we can generate more “Hmm, I didn’t know that” thoughts. I sure hope so.
The views expressed in this article are those of the author and do not necessarily represent those of HMH.
Dock hosts the Shaping the Future™ podcast series. Listen to the media literacy episode with co-guests Katy Byron and Erik Palmer: Shaping the Future: Future Skills for Fact-Checking Online Fakes.
Subscribe to HMH Learning Moments wherever you listen to podcasts to be the first to hear new episodes:
- Apple Podcasts
- Google Podcasts
- The Shaped blog, for podcast updates and new blog posts.
Please consider rating, reviewing, and sharing HMH Learning Moments with your network. We value our listeners' support and feedback. Email us at Shaped@hmhco.com.
SHAPING THE FUTURE is a trademark of Houghton Mifflin Harcourt Publishing Company.