In this podcast, we talk with Professor Alan Dennis about the fake news phenomenon and how people are responding to our changing technological environment.

Professor Dennis holds the John T. Chambers Chair of Internet Systems and is a professor of Information Systems at the Kelley School of Business at Indiana University.

Professor Dennis has developed several software systems and technology start-ups over the years. His current focus is on using big data and analytics to help parents select baby names at NameInsights.

Professor Alan Dennis


Show notes and links for this episode:

Studies on fake news by Professor Alan Dennis:

Behind the stars: The effects of news source ratings on fake news in social media

Says who?: How news presentation format influences perceived believability and the engagement level of social media users (pdf)


News items:

Can Mark Zuckerberg fix Facebook before it breaks Democracy?

Jordan Peele uses an AI to make Barack Obama deliver a PSA about fake news

Facebook’s bad idea: Crowdsourced ratings work for toasters, but not news

You see it, you buy it: Just being exposed to fake news makes you more likely to believe it

How your brain tricks you into believing fake news

Facebook’s latest attempt to deal with fake news on its site:

Taking down more coordinated inauthentic behavior

Facebook is rating the trustworthiness of its users on a scale from zero to 1


Thank you to bbc.co.uk (© 2018 BBC) for some of the additional sounds in this podcast.

Sandra: In 2017 the Collins dictionary declared fake news its word of the year. This headline:

Newsreader (file recording): Pope Francis shocks the world - endorses Donald Trump for president.

Sandra: Is just one of the more infamous news headlines posted during the 2016 US election campaign. Thousands of shocking headlines were read and shared by millions of prospective US voters throughout that campaign such as:

Newsreader (file recording): WikiLeaks confirms Hillary sold weapons to ISIS.

Sandra: And:

Newsreader (file recording): Hillary is disqualified from holding any federal office.

Sandra: And:

Newsreader (file recording): Hillary Clinton is running a child sex slave ring out of the basement of this suburban D.C. pizza parlor.

Sandra: The cumulative impact of these items being shared over and over began to distort public conversations about the candidates to such an extent that President Obama voiced his concerns.

President Barack Obama (file recording): As long as it's on Facebook and people see it, as long as it's on social media people start believing it. And, it creates this dust cloud of nonsense.

Sandra: However the eventual winner of that election, Donald J Trump, has his own take on the source of this nonsense problem.

President Donald Trump (file recording): We record when we deal with reporters, it's called fake news.

Sandra: And even what is fake news.

President Donald Trump (file recording): CNN is fake news. I don't take questions from CNN. John Roberts of Fox.

Sandra: Like many others, US talk show host Jimmy Kimmel has become so frustrated with Trump's distortions, he got a nine-year-old to kidsplain.

Child (file recording): Hi Mr President. Today I'm going to explain fake news. Because it's not what you think it is. Fake news is a news story that is intentionally fabricated and presented as if it were real news. Get it?

Sandra: The other feature of the fake news President Trump refuses to get, for all his raging against traditional media, is that most fake news is a creature of social media - Facebook, Twitter, YouTube, that kind of thing. And that’s because:

Alan: Fake news on social media is really enabled by the technology.

Sandra: That’s Professor Alan Dennis. He is an expert in information systems, based at the Kelley School of Business at Indiana University. Alan Dennis is interested in how people are responding to our changing technological environment.

Sandra: Recently he tested eighty three digitally savvy university undergraduates to see how competent they were at detecting fake news. So how good were they at telling fact from fiction?

Alan: They were lousy at it. Three quarters of them would be better off to flip a coin to decide whether this was true or false than use their own judgment.

Sandra: To better understand our engagement with social media Professor Dennis went deeper. He observed what happens inside the brain when people read news headlines.

Alan (file recording): We had an EEG headset on. What we wanted to figure out is what parts of the brain were active as they read headlines that supported their opinions, and headlines that didn't.

Sandra: Alan tested the very moment when the outside news feed interacted with people's inner beliefs.

Alan: Once you realised it wasn't what you wanted to hear your brain just turned off.

Audio file: voices: I'm on Facebook, Instagram.

Audio file: voices: I like Instagram, Facebook and Snapchat.

Audio file: voices: We're on Instagram, Snapchat - probably not so much Facebook.

Audio file: voices: I use Facebook, Snapchat.

Audio file: voices: Facebook, Instagram, Snapchat.

Audio file: voices: Facebook and Twitter. Because I can get a broader range of opinions on there.

Sandra: Stick around to find out why our brains are messing with the truth. And towards the end of the podcast, something completely different: what's in a name. Alan has run the data on the 500 most popular names in the US and the UK and he can tell you scientifically how people are likely to respond to you based on your name. And statistically, what's likely to happen to your namesakes.

Alan: Okay, Wilbur. Turns out if you look at probability of being a millionaire, Wilbur is right up there among the top of them.

SBI Intro: From the University of Sydney Business School, this is Sydney Business Insights. The podcast that explores the future of business.

Sandra: Alan joined us in the studio to talk about his highly topical studies into the weird, and frankly dangerous, new world of fake news. So the whole world knows fake news as a social problem, a political problem, but you're an information systems researcher. How come you decided to investigate fake news?

Alan: Well I guess two things. First a lot of the fake news on social media is really enabled by the technology. So as a technology researcher it just makes sense that I'm going to study a new and interesting technology. And the other thing is some of my colleagues in I.T. have accused me of being a psychology professor in disguise. So yeah it just fits who I am.

Sandra: Do you use social media?

Alan: Yeah. But I'm very cautious. So I have a LinkedIn account that I use mostly just to respond to things. I also have a Facebook account but I'm very careful in what I post there. Primarily it's for a close family and friends. I don't have a lot of friends on Facebook. The friends that I do have are real friends, people that I know.

Sandra: So let's start with the definition of fake news.

Alan: Fake news is a story that is deliberately crafted to be false but to create some value for the person creating it. There are two major categories.

Alan: During the US election in 2016, there was a bunch of teenagers in Macedonia who made more than a million dollars each by posting fake news on Facebook and other media and driving people to their web sites. They then sold advertising on those fake web sites, so they were using fake news simply to draw our eyes to paid advertising. So that's one category, where people profit from what is commonly called click bait.

Alan: The other category of fake news is fake news that is designed to influence opinion. And again during the US election in 2016, we saw a lot of that allegedly by Russian intelligence. There are other cases where people have posted fake information on Facebook to try and influence public opinion like there's been a series of things from the anti vax campaign about the problems with vaccines that have no scientific evidence to back up their claims.

Sandra: So is fake news really on the increase?

Alan: Yes. Fake news is on the increase, fueled partly by the fact that people can make money from it, and partly because people have seen that this is a very powerful way to influence opinion. It is so precise that I can target people in specific cities, even almost down to the street - certainly neighborhoods within cities. I can identify people by demographics, age. I can identify likely voters. I can place a paid advertisement on your news feed, and it doesn't cost me very much to do that.

Sandra: So you're saying that people go to Facebook to read news.

Alan: People don't go there to read news, but when an interesting story appears that catches our eye, many of us click on it. And the problem is Facebook learns what we're interested in. So the more I click on it, the more I get.

Audio file: voices: There's a lot of like, you know, click bait and all that that pops up. Especially my mom actually, she gets a lot. She even gets it via messages. She gets like a fake like advertisement or something like 'click here' and 'get this deal' or whatever.

Audio file: voices: I think I can distinguish between the truth and not the truth. For example, if my favourite singer. Well if there's a news that my favorite singer, he is married. Oh, it's fake. I think it's just what I think. I think I know a lot about him.

Audio file: voices: Well, I think that when you initially look at a news story you can't tell whether it's fake or not and I think that that especially in today's media landscape the pressure is on the individual who wants to stay informed to not just judge a piece by its headline, but to instead read other pieces and do their own research to determine whether or not it's true.

Audio file: voices: I don't think I have the ability to tell if it is true. I'm sorry.

Sandra: So are we good at detecting which stories are true and which stories are not?

Alan: Usually we're very bad at judging whether a story is true or false. If you think about it, for many years we've asked internet users to rate content. So you buy something on eBay or on Amazon, we ask you to rate it. Well, that's a normal thing to do, because you've bought the product or the service and you've actually used it. You can rate that. But think of a news story. Think of any news story you want - political news for example. I live in the United States now, so we could say President Trump did this or President Trump did that. Would you believe it? You have no way of knowing, because unless you're a friend of President Trump you don't know what he actually did. Everything that we rely on in the real world to assess whether something is true or false, we can't use on social media. Because you have not interacted with the people in the story, you cannot verify it. So because you can't verify whether something is true or false, how do you make that judgment? Well, you rely on what you knew previously. So if you support President Trump you're more likely to believe a news story that says he did something good. If you oppose President Trump you're more likely to believe a story that says he did something bad. So whether he did or he didn't becomes less relevant than what your hidden beliefs are, or what your publicly espoused beliefs are.

Sandra: So what was the phenomenon you were observing here?

Alan: So one of the first questions that I wondered about was how good are people being able to detect fake news. Can you tell that the story is false and this story is true? So we set up a study to do exactly that. We picked news stories that were circulating in the United States. Both stories that were true as well as fake stories that were being circulated on social media.

Alan: We brought in a bunch of young social media users, because they have a lot of experience with social media but they may not have as much experience with news. Most of these young adults are not news junkies, so we figured they were the right target population. We gave them a bunch of headlines: twenty of them were true and twenty of them were false. We asked them to go through and rate the extent to which they believed each of the headlines. In short, they were lousy at it. About three quarters of them were worse than chance. In other words, three quarters of them would be better off to flip a coin to decide whether a story was true or false than use their own judgment.

Sandra: So why are people so bad?

Alan: One of the reasons why people are so bad at this is a phenomenon that's called confirmation bias. Confirmation bias means that whenever you get new information, for example a story on social media, you compare it to what you already believe, and you are more likely to believe the new information if it aligns with your past beliefs. So in other words, if you are a strong Republican in the United States and the headline says Trump did something great, you're more likely to believe it. But if you're a Democrat you're less likely to believe it. We believe what we want to believe, even when it makes no sense at all.

Sandra: Confirmation bias: our tendency to look for, interpret, and favor information that confirms our preexisting beliefs. What this means is that our brains are already hardwired to a set of beliefs, and any new information is processed according to those ideas. To understand how confirmation bias operates in the realm of social media and fake news, Alan and his fellow researchers showed test participants news headlines while they were rigged up to an EEG headset that tracked the electrical activity in the brain.

Alan: What we wanted to figure out is what parts of the brain were active as they read headlines that supported their opinions and headlines that didn't. What we found was that when you saw a headline that supported your beliefs, two areas of the brain lit up. One area was the right parietal, which is used to focus attention on something that you're looking at. So in other words, you saw this, you recognised that it supported your view, and you focused your attention on it. The other part is the left and right frontal cortices. Those are commonly what we call the part that shows you're thinking: working memory, putting things in and out of working memory and engaging in cognition. So what we found is the moment you realised that a headline supported your opinion, both of those areas lit up. Once you realised that the headline opposed your opinion, there was no cognition whatsoever. Once you realised that wasn't what you wanted to hear, your brain just turned off.

Sandra: Why does that happen?

Alan: It's confirmation bias because again if you think of why you go to Facebook in the first place most people go to Facebook for entertainment. They want to do something that's relaxing and interesting. They don't want to have to think about news. So the moment you see a headline that you don't like "Hey, I don't want to pay attention to it". Whereas if it really is something that's exciting and you want to be true, yes you engage with it because it's entertaining.

Sandra: Entertaining and unlike traditional media highly shareable. The power of social media lies in its ease of access combined with its ability to amplify our voice as we comment, like and share our way across our social media feeds.

Alan: We wanted to find out how strong the effect of confirmation bias was on their actions. And what we found was that for many of the actions, belief, whether I believed the story to be true or not, had about the same impact as confirmation bias. But for some actions confirmation bias was stronger. So in other words, you were more likely to like a story if the story aligned with your prior beliefs, and it didn't really matter whether you thought the story was true or false. The effect of confirmation bias was twice as strong as the effect of whether you believed it or not. So regardless of whether it's true or not, I'm going to share it, I'm going to like it, just because it spreads what I think should be true.

Sandra: So how can we overcome this?

Alan: Facebook started by using a fake news flag. Now they didn't want to perturb people, so they didn't use the word fake. Instead they just said disputed. So whenever they found a story that fact checkers showed was false, they flagged it with this disputed flag. As part of the study we also wondered how people reacted to this fake news flag. And it turns out they ignored it. Well, that's really too simple. What happened was whenever they saw the fake news flag on a story they wanted to believe, they stopped and they thought about it, but they didn't change their mind. It just took them longer to come to that same conclusion: "I believe the story even though it's marked as fake". And in fact when we looked at the EEG signatures, the only thing that we saw going on in the brain was a signature that's associated with anger. So whenever I see the fake news flag on a story I want to believe, it makes me angry, but I don't change my mind.

Sandra: So it wasn't helpful at all.

Alan: Interestingly, we know some people at Facebook, so we and a number of other researchers shared some of our research with Facebook. We noticed several months after our study came out, Facebook stopped using the flag, because they realised it had no effect. So instead they came out with a new idea, and asked users to rate news stories the same way we ask users to rate sellers on eBay. So if you think about it, this is the grand solution that we use on the Internet: we crowdsource things, because Internet users are just good sources of knowledge.

Sandra: But if I buy a book on Amazon, I can rate it based on my knowledge of said book.

Alan: Or toaster.

Sandra: Or toaster. But I'm still not a friend of Trump. So if it's a Trump story, how would I know?

Alan: And that's the problem. Whenever you buy a product on Amazon or eBay you've used the product. You can rate it logically. You have experience with it. But a story on social media, you have no personal experience. You can't rate the story the same way that you can rate a toaster or wet wipes or whatever you buy on Amazon or eBay. So when you stop to think about it, it just doesn't work.

Sandra: This seems to raise some real concerns about the strength of democracy and what it would take to sustain a well-informed society.

Alan: One of the studies we did, we asked ourselves: if users aren't good at rating this, are experts good at rating this? In the United States right now there's a culture emerging where people don't trust experts. So you begin to wonder, even if we did expert ratings, would people believe the ratings of these experts? Or would they just think these experts are biased? So we did another study where we told users different things. We gave them the same ratings; we told some users those came from users, and we told other users that the ratings came from experts. And it turns out the ratings from experts were the ones that had greater influence. Users intuitively knew that experts were more appropriate to rate news items than were other users. So there may be a little hope there, in that if we can provide some expert rating of news articles and news sources, maybe people will listen to that.

Sandra: Does this mean that social media companies should change the way that they present news, and who rates that news? Have you talked to Facebook about that?

Alan: We have. We've shared some of these results with Facebook. Facebook has started introducing a little icon with the letter I on some news stories. If you click on that icon it will take you to the Wikipedia page of that particular news source provider, so that you can read the Wikipedia page and come to your own conclusion as to whether this is a good provider of news or not. Is this a source to be trusted? So that might be useful, but of course one of the things that we've learned is that social media users are lazy. I'm lazy. I think an icon that's right on the news story works: people read that at the same time as they read the news story. Making them click through to a different page to get an evaluation, I'm sceptical.

Audio file: voices: I'm more inclined to click on something that I agree with. Even though I'm aware of it and I try not to be, I fall victim to it.

Audio file: voices: Yeah I don't really believe in the, I don't know what to believe on Facebook really.

Audio file: voices: People sharing stuff on Facebook, it's like a lazy way to agree with something.

Sandra: How can people get better at detecting fake news? How can we help them be more discerning, given all the constraints that come with social media?

Alan: So there is one thing about social media that sets it apart from how we behave in real life, and in fact from most of the other media that we consume on the Internet. If you think about it, if somebody is going to tell you a story in real life, what happens? They walk up to your office door or your cubicle, and before they open their mouths you know who they are. You already have an assessment of who's telling you something. You know some of the people in your office are very credible, they're trustworthy, you believe them. Maybe there are other people in your office that you don't trust, because they have a history of sharing gossip or inflating the truth. So before they speak, you already have in your own mind: "Should I believe this person or not?". The same thing is true on the rest of the Internet. If you think about it, if you're interested in news, what do you do? You go to a news site. You pick your favorite news site: maybe CNN, maybe Fox News, maybe ABC. You pick your news site and then you go there and read the content. So before you get to the content, you've already decided whether you're going to trust it. Facebook doesn't work that way. Stories from reliable news sources - pick your favorite reliable one, maybe the BBC, whatever you want to believe - are intermixed with paid content from people who are paying to persuade you of something.

Alan: So in Facebook you don't know who put it there before you read it. A typical Facebook story has a really big headline and an eye catching graphic, and only below that, in very small print, is the source. So maybe if we did the same thing on social media that we do in the real world, and made the source of the story clear for users, we'd get better results. And in fact that's what one of the studies I've done shows. If you simply move the source of the story above the headline, people get more skeptical, because they start thinking "who's the source again?" before they start thinking about content. That's the way it works in the real world, and just that small little interface shift might help people be a little bit more skeptical of fake news.

Sandra: So it seems that there is something social media companies could do to help us out.

Alan: The simple one is don't take paid advertising and go back to what Facebook was, which is a place for friends and family. And in fact Mark Zuckerberg made a statement a few months ago where Facebook was going to try to move back to its roots, and become more of a place for us to share with friends and family, less of a place where paid advertising intruded. Whether that's a long-term strategy or a short-term reaction, only time will tell.

Sandra: So what happens to us when we just can't square what we're reading with what we believe.

Alan: One of the things we're really interested in is what happens when I show you a story that aligns with your previous opinions - you're going to believe it - but I mark it as fake. What happens when you see the fake flag on something you want to be true? That's a situation called cognitive dissonance. It's where you see two pieces of information presented that both cannot be true: a fake flag on a story I want to believe. So how do you wrestle with that, and how do you come to a decision about which of these is true?

Alan: So when we studied this in the lab, we put stories up on the screen that we knew aligned with users' prior opinions. We then put the fake news flag on these to see how they would react. And all we found was it took them about one and a half seconds longer to think about the story before they came to a judgment. So the flag made them stop and think, but it didn't change their opinion. They still believed the story. In fact, when we looked at the EEG, what was going on in that time period while they saw the story they liked with the fake news flag, the only thing that we could see in the brain signature was that they got angry. So the fake news flag pissed them off, and that's all. Can I say that?

Sandra: Yes, you can. Is it possible that the way forward actually is regulation rather than trying to change what these companies are doing from within the organisation.

Alan: So the question for me in the United States is to what extent is fake news on social media the internet equivalent of shouting fire in a crowded theatre? Do we need to take regulatory action to change this? And I'll be quite honest, I don't have an answer, but I think now is the time for us to start talking about what limits we should place on the use of social media to deliberately spread fake news.

Sandra: Years ago, before fake news was even a twinkle in Mark Zuckerberg's eye, Alan found out what data can tell us about choosing baby names.

Alan: What we want to know is do people react differently to different names? Do people have stereotypes about names? And the short answer is, oh yes we do. So for example, let me give you a name. Wilbur. What do you think of Wilbur? What is your first reaction?

Sandra: Not the life of the party.

Alan: Most people perceive Wilburs to be quiet. Maybe a little shy, maybe socially not at the top of their game. Okay. Turns out if you look at the probability of being a millionaire, Wilbur's right up there among the top of them. So there's some interesting complexity that our stereotypes pick up. So nameinsights.com was created to provide information on baby names to help parents pick names. It has many of the things that you would expect, like the frequency of a name, but we go one step further. We have data from surveys where we've asked people: what do you think of this name? Rate this name on how positive, how negative, and a whole bunch of other scales, inferring personality characteristics. That's one source of data. Another source of data is we've pulled data off the web, from places that let us do this, where we can connect postings that people place on public websites. We have content-analyzed their words to capture emotion and personality out of the words they write. So we have these surveys, "what do you think of people with these names?", and we have actual behavior of people with these names. And as well, of course, we have all kinds of records, like the millionaires list I mentioned. We have marriage records, death records, divorce records, so I can tell you what the most common name is to get divorced. I'll tell you who you're more likely to divorce, which of course means you have to marry them in the first place. We have all kinds of information like that. We can also tell you what type of music people with certain names are likely to like. And of course that's just correlation. But some of the things that we've discovered, there is an amazing correlation here that is beyond chance. It's certainly not predictive. The statistic we're getting, called Cronbach's alpha, is point four, which on a scale of 0 to 1 means there's something here that's not zero, but it's not perfect. A perfect correlation would be one.
So at point four you can look at it and say okay, this is more than just chance, but it's not predictive. So this will show you some tendencies. And we think that's likely because people have stereotypes, and whenever they hear a name they act on their stereotype. So people with that name get greeted with the same stereotypes. They adjust their interaction with other people to match the stereotypes. So this is not predictive, but it gives you some insight into what names are good and what names may not be quite as good.
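For readers curious about the "point four" figure, Cronbach's alpha is a 0-to-1 statistic of internal consistency across a set of items. As a rough illustration only (the ratings below are made up, not data from NameInsights), it can be computed like this:

```python
# Illustrative sketch: Cronbach's alpha for a small set of survey ratings.
# The ratings below are invented; they are not from Alan Dennis's study.

def cronbach_alpha(items):
    """items: one list per survey question, each the same length
    (one rating per respondent)."""
    k = len(items)          # number of questions
    n = len(items[0])       # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(col) for col in items)
    # Total score per respondent, summed across all questions.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Five respondents rating one name on three hypothetical scales
# (say warmth, success, trustworthiness).
ratings = [
    [4, 2, 5, 3, 4],
    [5, 1, 4, 3, 5],
    [4, 2, 4, 2, 4],
]
print(round(cronbach_alpha(ratings), 2))  # → 0.92
```

A value near 0.4, as Alan reports, sits well below a result like this toy example's: real enough to show a tendency, far too weak to predict anything about an individual.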

Sandra: So does that mean that parents actually can help kids along in life or pretty much set them up for - I want to say failure.

Alan: Is a name predictor of your future? Absolutely not. But can it nudge you in one direction or another? I'd say the jury's out, but if I was having a kid in this day and age yes I'd look at this to help me avoid bad names.

Sandra: So if you're to have another child what would you call them.

Alan: I don't know I haven't looked that up yet. And the other thing that I've learned is always ask the mother.

Sandra: Does it matter more or less in the online world, given that we have social media profiles?

Alan: Good question. The first thing in the social media world I see is your name, so I would imagine it would, because we anchor on that first, whereas in the real world the first thing I see is your face, so that's what I anchor on. And we know from cognitive psychology that everyone forms a basic judgment, approach or avoid, in about half a second, and that is very predictive of what you're going to do next. So the moment I see your face, half a second later my brain has already told me I'm going to approach or run away. Same thing with names on social media.

Sandra: Thank you so much for talking to us today.

Alan: My pleasure. I've enjoyed this.

Sandra: This podcast was made possible by Jacquelyn Hole and Megan Wedge who made this story feel good and sound awesome. Wilbur was our silent contributor.  

SBI Outro: You've been listening to Sydney Business Insights, the University of Sydney Business School podcast about the future of business. You can subscribe to our podcast on iTunes, Stitcher, Libsyn, Spotify, SoundCloud or wherever you get your podcasts. And you can visit us at sbi.sydney.edu.au and hear our entire podcast archive, read articles, and watch video content that explore the future of business.