
Code Red: Episode 3


Photo credit: Izzy Greig


In the final episode of Code Red, Podcasts contributors Zoë Bordes and Alicia Ying sit down with UCLA professors Sarah Roberts and Stuart Soroka to get their perspective on the causes and consequences of online extremism. Correction: This podcast and the original version of its description incorrectly referred to Stuart Soroka as Robert Soroka.

Zoë Bordes: Welcome to the third episode of Code Red. In episode one, we explored the different definitions of online extremism, the reasons people turn to extremism and a few real-life examples. Then in episode two, we discussed how algorithms and misinformation contribute to online extremism. Today, we’ll be talking with experts to get their opinions on these topics.

Alicia Ying: In this episode, we have two amazing speakers lined up. First, we’ll hear from Dr. Sarah Roberts – an assistant professor at UCLA who’s a leading scholar on social media policy, commercial content moderation and the role of the internet in perpetuating global inequities. She’s also the faculty director of the UCLA Center for Critical Internet Inquiry, co-director of the Minderoo Initiative on Technology and Power, and a research associate of the Oxford Internet Institute. Dr. Roberts brings a unique perspective informed by feminist science and technology studies, and we’re excited to have her on the show.

ZB: Our second speaker is Dr. Stuart Soroka, a professor in the departments of communication and political science at UCLA. He is also the series editor for Cambridge Elements in Politics and Communication and an associate member of the Centre for the Study of Democratic Citizenship. Dr. Soroka specializes in political communication and political psychology, as well as the relationships between public policy, public opinion, and mass media. He is mainly interested in negativity and positivity within news coverage and the role of mass media in shaping representative democracy.

So, we’re going to begin by hearing Dr. Roberts’ take on the ethics of Big Tech.

AY: We live in a world where technology is seamlessly integrated into our lives, leading us to pay little attention to the hidden tools and algorithms designed to hook us into these online platforms. To understand how Big Tech companies thrive off their users, we went to Dr. Roberts.

She says that users’ endless clicks, scrolls, swipes and content interactions themselves aren’t the endgame for companies.

Sarah Roberts: The companies are all about taking our activity, our preferences, our behavior, our networks of friends and others, and monetizing it. And they monetize it because, it’s true, their true clientele is other companies. So these are really what you would call business-to-business corporations. They are trying to sell ad space – not an unfamiliar model to, you know, people who work in news media – but they’re trying to sell ad space, they’re trying to sell ads aligned with particular users and particular behaviors or particular preferences. And the thing with social media is, because we do so much and all of that is tracked and analyzed to an absurd degree, the way in which those ads can be targeted and sold, the value that’s placed on them, is predicated on that specificity. We are being commoditized and sold. So that is what I mean when I say that we’re not really users, we’re being used.

AY: Basically, content is evaluated mainly in terms of its monetary value in this ad marketplace.

ZB: Understanding this concept is crucial to grasping how economic models and user engagement expectations drive platforms’ decisions – decisions that impact the visibility of content. Whatever social media platform we decide to open, it’s natural to anticipate fresh, engaging content that’s tailored to our past interactions. Dr. Roberts emphasized that, at the end of the day, users are part of a production chain that shapes content. Content moderation, in essence, is an editorial practice driven by the platform’s economic interests. This perspective reveals that decisions on content moderation are driven not just by safety concerns but also by the need to maintain user engagement and satisfy advertisers – an alignment that can lead to inconsistencies and challenges in moderating content effectively.

AY: With all of this in mind, we wonder at what point these companies will feel some sort of responsibility for all of the misinformed, extremist, violent and hateful content on their platforms.

SR: I think the answer is in, you know, the history of other industries that caused harm. They are disinclined to change a business model that is basically turning on a faucet from which, you know, Benjamins pour out, right? It’s just, “Hey guys, can you maybe make less money?”

AY: In our interview, Dr. Roberts listed off some previous examples of companies acting only when they were pressured. For example, the tobacco industry knew that smoking caused cancer but said nothing. Another example: the automobile industry didn’t always include safety features such as seatbelts and airbags in the event of accidents. People needed to educate themselves and push elected officials in Congress to add such regulations.

With all of this in mind, our crucial question is: At what point will social media companies acknowledge their responsibility for the spread of misinformed and extremist content on their platforms? The history of other industries suggests that companies are often reluctant to alter profitable business models, even in the face of harmful outcomes. Just as the tobacco industry long denied the dangers of smoking and the automobile industry resisted adding essential safety features, social media companies may need significant external pressure – whether from the public, regulators, or both – to implement meaningful changes.

As we’ve seen throughout this series, the ease with which users can be drawn into extremist content underscores the urgent need for a proactive approach. Social media platforms must consider their ethical responsibility to monitor and manage their content effectively not only to improve user safety but to foster a healthier digital environment for all of us. Understanding the complex dynamics of content moderation, which is driven by economic incentives, is essential in this endeavor. Only through informed advocacy and demanding accountability can we hope to influence these powerful entities to prioritize the well-being of their users over mere profit.

And with that, we turn to Dr. Soroka.

ZB: So, Dr. Soroka has done a lot of work on the negativity bias. This is how he defines it:

Stuart Soroka: So mammals have evolved with brains that prioritize negative information over positive information. And we attach valence to information, we identify the valence of information very, very quickly, within milliseconds. And that identification of valence then structures how that information finds its way through our brains and how we think about things, whether we pay attention to them, whether we believe them or don’t believe them, and all kinds of other things. So we, along with other mammals, exhibit negativity biases. And that means that when we go to read news, like any other situation in which we’re receiving information, we’re going to be more attentive, more responsive to that negative information. So, I mean, we basically set media up to do this for us, right? The whole notion of media as a fourth estate, monitoring error and identifying error and letting us know – we kind of set media up so that media processes information in the same way that our brains do, right? We all – we and the media that we read – are prioritizing negative information. And that might make sense in an information environment in which we have to make decisions about what to pay attention to and what not to pay attention to. Right, it might make sense because the consequences of negative information are bigger than the consequences of positive information. But it might also make sense because in a very complex information environment, we can’t pay attention to everything all the time. We have to decide what to pay attention to. We have to have some kind of quick way of deciding – not deciding by reading all of it, but some kind of quick, within-milliseconds decision like, “This is the thing I’m going to be attentive to, and this is the thing I’m not going to be attentive to.” Because we just don’t have enough attention for all of it. So for all of those reasons, what you get is media consumption that prioritizes negative information and media production that prioritizes negative information.

ZB: I think that all of this is really, really interesting. Think about the influx of content we are getting on a daily basis. The posts that get the most views and interactions generally tend to be the most attention grabbing and most extreme. Think about why clickbait works.

Our natural tendency to gravitate towards negative content in the media might reinforce extremist ideologies by amplifying their visibility and impact.

When I was listening to Dr. Soroka speak about this, one thing that came to mind was the action verbs that are used in YouTube titles or in certain news headlines. I’m a political science major, and so I will sometimes watch political commentary videos. The titles of many of these videos are things like, “Republican gets destroyed by Democrat on live TV!” or “Republican absolutely torches Democrat!”, things like that. And so I’m wondering – and by the way, I just want to make it clear that Dr. Soroka did not make this claim; his work is not directly related to extremism, and this is just me trying to connect the dots – but I’m wondering if these crazy headlines might take advantage of our negativity biases in order to be more attention-grabbing. And since they are more attention-grabbing, maybe there could be a link between content that plays toward our negativity biases and our likelihood of developing more extremist preferences.

So, since our negativity bias works as a quick way of both processing and filtering out information, I asked Dr. Soroka if the negativity bias is responsible for a lot of people just taking headlines and running with them, rather than actually going through and reading the article or watching the full video. I asked him if this was the case, as well as whether he thought that the negativity bias was a good or bad thing.

SS: So we need some way of making a quick decision. And we probably also need it because if you want to learn about politics, for instance, you probably don’t want to learn about politics for eight hours of your day, you probably want to learn about politics for five, maybe 10 minutes of your day. Actually, if you wanted to learn about politics for 10 minutes of your day, that might make you more interested in politics than most other people. So even people who really care about politics do only learn about it for a limited amount of time. So you have to have some way of filtering out information. And often, it’s the negative content that requires that you rethink your preferences, whether you have some change in behavior or a change in support for a leader. So there are lots of positive aspects to a system which helps identify the information that is most important to you. And we can view negativity biases that way. But they come with negative consequences as well. If what you have is a media that is prioritizing negative information and media consumers that are prioritizing negative information, then you might get this kind of snowballing of negative information on negative information. You’re not sampling from all the information; you’re sampling from an already biased body of information. And so we might end up with a kind of snowballing of negativity, more negativity than would be efficient as a way of managing the world.

ZB: Dr. Soroka also said later that the literature mostly finds that we are more inclined to believe something is true when it is negative. This probably just has to do with our experience with news content – we are more used to reading negative news. So negative content to a certain extent just seems more “newsy” to us.

AY: On that note, the next piece of the puzzle we wanted to focus on is misinformation. Our co-contributors Izzy and Ciara already spoke a little bit about misinformation and how dangerous it can be in the context of online extremism in episode two, so make sure to listen to that if you haven’t already.

We asked Dr. Soroka about how dangerous the threat of misinformation really is, and how much of it is really out there. If misinformation is really one of the primary ways people can develop extremist ideas, then it’s important to understand the level of risk associated with it.

SS: I think the consensus in the discipline actually is that misinformation is more limited than we originally thought. When we first started studying misinformation five, not quite 10 years ago, there was the sense that Facebook, particularly at the time, was going to be a source of all kinds of misinformation, and that other social media, we thought, were going to increasingly spread misinformation over time because we’ve taken away the gatekeeping of news editors, for instance, and allowed anyone to circulate anything they want.

And then you add to that deepfakes, and before deepfakes, just simple royal family-style photo editing. And I mean, technology provides all these opportunities to create and then spread misinformation through channels that aren’t, like, vetted by the New York Times editorial staff. We thought that was going to be really, really bad. And actually, a lot of the information you get turns out to be edited by the New York Times editorial staff or the Washington Post editorial staff or, whatever, the Huffington Post. So I mean, actually, a lot of the information that you get on social media and otherwise is recirculated from official sources, like reputable official sources. And so the actual, like, infiltration of misinformation into our lives is much more limited than was originally thought. And, you know, it’s predominantly amongst senior citizens on Facebook. I mean, that is a finding of the literature. That’s not to say that other misinformation isn’t out there. It is. We all get some of it. We just don’t get a ton of it. We’re not surrounded by misinformation. If you decided right now to go online and read about the state of the Israel-Palestinian war, for instance, or read about the state of the U.S. economy, you could find misinformation. But if you just Google it or open up whatever news app you use to read about it – I mean, unless it’s a really horrible news app, I guess – you’re probably getting mostly true information about that content.

AY: It’s worth mentioning that Dr. Soroka made it clear that this isn’t to say misinformation isn’t something to worry about. We definitely should worry about it, because it can get worse. But in terms of the actual frequency of misinformation – it’s much more limited than we originally thought.

ZB: Additionally, there’s evidence that a lot of people’s beliefs are simply expressive and not actually genuine. This has been found through surveys. So someone could be preaching a conspiracy theory online – he used QAnon as an example – and they might not actually believe in it. They just kind of think that it’s fun. And so when researchers do things like pay people to tell the truth, all of a sudden, people who said they would never do something are actually willing to do it – like during COVID, when states provided rewards for people who got vaccinated.

AY: So, according to Dr. Soroka, we may have not only overstated the prevalence of misinformation – we may have overstated people’s belief in misinformation, too. And this is a good thing, because it means that maybe the likelihood of extremism is also reduced. But even so, it’s something we should be wary of because it could get worse, especially since this year is an election year in the U.S., and we know how foreign governments and political interest groups will go to great lengths to influence elections in any way possible.

Zoë, what did Dr. Soroka say about the effects of misinformation?

ZB: He qualified the effects a bit – there’s evidence that the prevalence of misinformation can push people in two different directions. The first is that it could lead to decreased trust in the media.

SS: One potential consequence of misinformation that is negative and doesn’t require that you believe any of the misinformation and for which there also is empirical evidence is that once you introduce into the world the possibility that something is fake, that becomes an active consideration. Then people start questioning the news or having less trust in the news than they used to have. So it may be that you don’t have to be exposed to misinformation for misinformation to matter. You just have to know that news outlets are faulty and prone, maybe, to misinformation. And once you start believing that, then it might be harder for you to get the information you need because you might either stop consuming information or start being wary of any information. And so you can’t update your preferences in an efficient way anymore because you start questioning the reality of all kinds of things.

ZB: The presence of misinformation alone could cause someone to start mistrusting everything, and that’s how they can get pulled into conspiracy. That’s when they start becoming active in things like QAnon, or they start believing that the Earth is flat. On the other hand, people can also get pushed in the opposite direction.

SS: But another possibility – and there’s good evidence for this; there’s a forthcoming book on this, actually – is that you stop using news aggregators, you stop reading news from just anywhere, and you start focusing on big sources because they’re the only ones you can trust. And actually, your trust in them increases because you’ve seen so much noise elsewhere. So one possibility is that fake news damages our trust in all news. Another possibility is that fake news actually moves us back to trusting the reputable legacy news sources, and that ABC News and the Washington Post and the New York Times are actually aided by the existence of fake news rather than damaged.

AY: It’s really interesting how simply knowing that misinformation exists can lead someone to losing or gaining trust in big news sources. Dr. Soroka spoke about the importance of legacy news sources because citizen journalism takes a lot of work. And another point he mentioned was this: A lot of the information circulating on social media probably comes, at its root, from reputable legacy sources such as the New York Times, the Washington Post, et cetera.

ZB: A big takeaway from Dr. Soroka is that misinformation is not as prevalent as we initially thought. Much of people’s belief in misinformation appears to be exaggerated, and a lot of the information that we consume daily actually does come from reputable sources. Obviously, as the years go on and we develop further into a digital landscape, we can expect to see more misinformation and more extremism. But for right now, you don’t have to be especially concerned that every single thing that you’re reading is fake.

Still, it’s important for us all to educate ourselves about how to combat these issues. Here is Dr. Roberts on how students can best educate themselves:

SR: So for one thing, freshmen have the opportunity now to take a data justice and society course. It’s basically over two quarters. The first quarter is devoted to, like, you know, again, situating this material, giving people context – historical context, other kinds of context – to help them understand this. And then in the second quarter, there’s a more specific component where, I believe, students can choose which part of the offering they would then follow up with.

There are also other introductory courses. So the Department of Information Studies, which is in the School of Education and Information Studies, has, I think, three undergraduate courses that it offers. IS 10 is one of them, and they do some of the same type of work, but from an information studies perspective, which is a little bit different.

I teach in the gender studies department, and I have taught a class called the “Internet as Social Object.” It’s GS 185 – 185 is for special topics and things like that. One of my colleagues, Dr. Stacy Wood, who is the director of research at the Center for Critical Internet Inquiry, has taught a similar course that she has developed multiple times for gender studies as well.

And it really – you know, I don’t want people to be scared away by the notion that, “Oh, I don’t know about gender studies” or, “I don’t want to just take a higher-level course in that department.” It’s really presuming no knowledge about any of this stuff. And there is a gender studies perspective in these courses, and what that means for how the course is taught is that we are interested in looking at systems of power: Who’s got access to what? Who doesn’t? Why is it like that? What are the historical antecedents of that? What are the, you know, policy regimes that facilitate those inequities? How did those play out in the context of tech and the internet?

I think over the next few years, we’re going to see a little shift, and maybe an increase in these kinds of opportunities for students. I think if you are a student who would find some of this stuff interesting, like, don’t hesitate to reach out. And also, you know, let people know, like in your department, that that’s something that is important to you.

It’s totally within everybody’s grasp to get this stuff from whichever dimension, and I think one of the tricks of the big tech industry, which you can see them do when they go in front of Congress, is to pretend like these things are so hard to grasp that mere mortals couldn’t do it. Well, I can tell you that UCLA students are more than capable. In fact, a lot of them are building the tech. And they’re certainly on the, you know, on the vanguard of using it, so I don’t buy that. And you guys shouldn’t either.

AY: Big thank you to Dr. Roberts and Dr. Soroka for taking the time to speak with us for this episode.

ZB: And thanks for tuning in to Code Red – we hope that you learned something, and that you’ll walk away from this with a bit of a deeper understanding of online extremism.

AY: Signing off, Alicia.

ZB: And Zoë! Have a great day.

Zoë Bordes
