Leading behavioural economist Professor Dan Ariely shares his insights into US politics, how we think about inequality, his desire to become a waiter – plus his advice on how to split the bill.

Dan Ariely is a professor of psychology and behavioural economics at Duke University and a founding member of the Center for Advanced Hindsight. He is the author of three New York Times bestsellers. Through his research and his (often amusing and unorthodox) experiments, he questions the forces that influence human behaviour and the irrational ways in which we all so often behave.

Dr Sandra Peter, Professor Dan Ariely and Professor Ellen Garbarino

Dan Ariely, Professor of Psychology and Behavioral Economics, Duke University

Ellen Garbarino, Professor of Marketing, University of Sydney

Dan Ariely website

Dan’s research

Pocket Ariely App

Dan’s Wall Street Journal column

Predictably Irrational

The Upside of Irrationality

The Honest Truth About Dishonesty

Center for Advanced Hindsight

Dan Ariely’s TED talks


You can subscribe to this podcast on iTunes, Spotify, SoundCloud, Stitcher, Libsyn, YouTube or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or on sbi.sydney.edu.au.

Sandra: Our guest co-founded the Center for Advanced Hindsight at Duke University in the US, which is possibly the most awesome name for a research institute anywhere in the world.

Dan: Hi my name is Dan Ariely and I'm the James B. Duke Professor of Psychology and Behavioral Economics at Duke University.

Sandra: Just to flesh out Professor Ariely's CV, his list of achievements over the last 10 years includes three New York Times bestselling books in the field of economics, five TED talks, each of them watched by millions of people, and being named one of the world's most influential living psychologists. Ariely also writes an advice column for The Wall Street Journal, and his insights can be sought via his app, the Pocket Ariely. Professor Ariely is one of the world's foremost behavioural economists. So orthodox versus behavioural economists: what's the difference? Ariely says that while he asks the same sort of questions as orthodox economists, rather than assuming people will behave in the financially rational way, he sets out to discover how we actually behave. Ariely then incorporates this real-world data into his economic analysis. He advises governments and corporations and lectures around the world. In his book Predictably Irrational, Ariely shows via ingenious and often amusing experiments how we frequently make decisions that are not in our best financial interests. In fact, so consistent are our irrational decisions that they are, Ariely argues, highly predictable. His curiosity to discover what motivates us has helped shape government policies in healthcare and transport, informed corporate pay structures, and at an individual level opened our eyes to the hidden motivations behind our choices.

Intro: From the University of Sydney Business School, this is Sydney Business Insights, the podcast that explores the future of business.

Sandra: During a recent visit to Australia Ariely took time out to catch up with an old friend and colleague from Duke University, Professor Ellen Garbarino. Ellen is now Professor of Marketing at the University of Sydney Business School.

Ellen: I thought we might start out by asking your take on how the growing understanding of human decision-making affects some of the important topics of the time that we're hearing about in the news. And then maybe move on to discussing how behavioural economics and the understanding and study of human influence are evolving into the future. So we'll start small, go big.

Dan: Okay. So you're not trying to solve the world's problems, like hunger?

Ellen: At the end of the day I expect perfect behavioural economic solutions to world hunger, yes.

Dan: Okay.

Ellen: Although that is a big problem for the world, because one of the big issues of the modern age is the increasing tribalisation and polarisation of the world. We're certainly seeing this in the clash of political ideologies and the rise of nationalism in many countries, including the US and Australia. How can we use what we're learning about human decision making in behavioural economics to help bring us more together?

Dan: I've been doing this research project with Mike Norton about inequality. Imagine you sorted all the Americans in a line from the poorest to the richest. And then you divided them into five buckets: the poorest 20 percent, the next 20 percent, the 20 percent in the middle, the next one, and the richest 20 percent. And you first ask the question of how much of the 100 percent of wealth is distributed among those five buckets. So let me ask you a question, is this okay?

Ellen: Okay.

Dan: Okay. So the bottom 40 percent of Americans, how much of the pie, how much of the 100 percent of wealth do you think they own?

Ellen: Bottom 40 percent... 5 percent.

Dan: Okay. The average American answers nine point three percent. But the actual figure is zero point three percent.

Ellen: Wow.

Dan: So the bottom half have very, very little. People pay a lot of attention to the wealthy and how much the top 1 percent has. But how little the bottom 40 to 50 percent have, people don't pay attention to. So that's step one: people don't understand the extremeness of what is called the Gini coefficient, the rate of inequality. But then we moved to a different question and we said, imagine a society that you could enter and would be randomly placed in. You could be among the richest, you could be among the poorest. You don't know. It's a society with a certain wealth distribution. And tell us what kind of wealth distribution a society would have to have so that you're willing to enter it in a random place.

Ellen: Uhuh.

Dan: This is, by the way, based on the philosopher John Rawls, who had this definition for a just society. And he called this the veil of ignorance. And we did this in the US, in England, in Australia and in Israel. And in all cases we found very similar results. We found that when people describe the society that they would be willing to enter in a random place, people want a society that is more equal than Sweden. Including Americans, by the way. And we find that the difference between Republicans and Democrats, rich and poor, right wing, left wing, is actually very small. So here's the thing. When you listen to Republicans and Democrats, they sound so very different and you can say, "what do they agree on?". But the reality is that when you talk to people about their base ideology, they seem to agree.
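For readers who want a feel for the Gini coefficient Dan mentions, here is a rough Python sketch of how it can be estimated from quintile wealth shares like the ones in his survey question. The function is a standard Lorenz-curve approximation, and the example quintile shares are illustrative placeholders, not figures from the study.

```python
# A minimal sketch: estimating a Gini coefficient from quintile wealth shares.
# The example shares below are illustrative placeholders, not study figures.

def gini_from_quintiles(shares):
    """Approximate the Gini coefficient from the wealth share owned by each
    population quintile (poorest to richest), via the area under the Lorenz curve."""
    assert abs(sum(shares) - 1.0) < 1e-9, "shares must sum to 1"
    lorenz = [0.0]                      # cumulative wealth at each population cut-off
    for s in shares:
        lorenz.append(lorenz[-1] + s)
    # Each quintile covers 20% of the population; trapezoidal area under the curve
    area = sum(0.2 * (lorenz[i] + lorenz[i + 1]) / 2 for i in range(len(shares)))
    return 1 - 2 * area                 # 0 = perfect equality, 1 = one person owns everything

# Hypothetical quintile shares, poorest to richest (illustration only)
print(round(gini_from_quintiles([0.04, 0.06, 0.12, 0.20, 0.58]), 2))      # ~0.49
print(round(gini_from_quintiles([0.001, 0.002, 0.04, 0.11, 0.847]), 2))   # ~0.72
```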

Sandra: Ariely then offered another example of how human intolerance for inequality can be used positively to drive fairer policies in healthcare.

Dan: We said look, at some point babies die at birth, right.

Ellen: Yeah.

Dan: And the question is how much of this baby mortality are you willing to tolerate for the poorest 20 percent, the next 20 percent, the next 20 and so on. And here nobody is willing to tolerate any difference.

Ellen: You.

Dan: Now what that means, even for Americans, is socialised health care, because you know mortality of babies is just a function of wealth and the quality of socialised medicine. So when we ask the question in the US about Obamacare, people say I'm against Obamacare. But when we ask them the question about what society do you really want to build, people seem to be much more similar and much more equal in their approach. So the good side of all of this is that I think that we do have a lot of similarity in terms of ideology. We just somehow let the political system obfuscate it.

Ellen: Okay so you would argue in favour of us all trying to start with an agreement on what the ends are, and then follow up with what the means would be?

Dan: Exactly. Now I'm not sure it's the end, because it's about our underlying preferences, right, like what's a just society, for example. How do we solve inequality? One solution is taxes. Another solution is education. Right, taxes would solve inequality much quicker. Education would solve it in a much slower way. You know, we probably need a mix of both of those. It's not one or the other. What kind of mix do we want? That should be handed to professionals, not to everybody. But people should basically be able to vote or to express what country we want to have in the next, you know, 10, 20, 40, 100 years, and I think politicians should then take these inputs and try to implement the system to get there.

Ellen: See, the one thing it triggers for me is the goal-setting literature. Right. We've done a lot in decision making about the role of goal setting and how to set goals effectively. It seems like the question is how do we get to the ends that we want, the goals that we're after, most effectively.

Dan: Yeah. And we often don't know exactly the solution. And that's another kind of big lesson from social science. If we think about one of the big differences between behavioural economics and standard economics, in standard economics there's an answer. And in behavioural economics there are questions, and there's a methodology called experiments. And we basically try things to see what's working for all kinds of things. Right. Let's say something like reducing global warming. There are many, many ways to go. We can all become vegetarians. We can change the refrigerant in air conditioning, in refrigerators. We can change cars. There's all kinds of things we can do. Which one of those methods is the most efficient? Which one of those methods can we actually implement? All of those things we just don't know, and we need to try different methods and see where we can have the biggest impact.

Ellen: You're hearing a lot in the news about honesty or the lack thereof. And I know that this is something you've spent a lot of time in your career looking at. Particularly dishonesty. So I want to ask whether you think people are becoming more accepting of dishonesty in recent years.

Dan: So the answer is yes. And I'm thinking that you're asking mostly about the political system.

Ellen: Well I'm actually interested in a broader range of dishonesty but certainly the way you're hearing about it in the world is about the political dishonesty.

Dan: Yeah. So I think in the arena of political dishonesty the answer is absolutely yes. So we looked into it, and what we found was that in the last political election Americans wanted their politicians to lie. It's not as if they wanted politicians to lie for lying's sake. They wanted politicians to lie because they wanted their policies to be enacted. And here's the thing about dishonesty. There are very few people who enjoy dishonesty for dishonesty's sake. It's mostly a trade-off between human values. Right. There are many human values, honesty is one of them, but honesty doesn't always win. Right. The classic question of "Honey, how do I look in that dress?". In the case of politeness, we often think that honesty is not a top priority. So the question with dishonesty is what is the hierarchy of values, and when do we give honesty up for other values, for example caring about other people, or for financial gains. And what happened in the last elections in the US is that people were very, very motivated by ideology. So the people on the right wanted Obamacare to be abolished. The people on the left wanted stronger regulation on the environment, and in both cases people were willing to accept some dishonesty in the service of their policies. So people on the right said, we want a politician who would be like a back-street fighter, who would lie to get our cause enacted. And indeed they got what they wanted, right. So if you think about what Trump has done to the American Supreme Court, what he has done to the Environmental Protection Agency, what he's done with tax reform, he did all kinds of things to allow the politicians to get their ideology enacted in ways that I think are destructive for this society. But they wanted somebody who would act without any moral constraints. And I have to also say that as a left-wing person, incredibly worried about global warming for example, if you ask me what I prefer - a perfectly honest leader who would not be able to enact any measures to try to protect the planet, or somebody who is willing to cut corners here and there but is going to enact measures that would help us slow down and reverse global warming - I would be in a big dilemma.

Sandra: Ticking off the big things that most concerned him, Ariely says conflicts of interest are contaminating political and business life. Yet even moral challenges can have a silver lining.

Dan: Conflicts of interest are incredibly corrosive, and the problem is we don't see them coming. We don't see them influencing us. And I should first say something good about conflicts of interest. Conflict of interest is what allows us to make friends very easily. We can buy somebody a beer and a sandwich and in 10 minutes they like us more and they see life from our perspective. That's the beautiful thing about it. The sad thing about it is that once conflicts of interest are part of the economic system - investment, government, health and so on - it's incredibly difficult. So over the last 20 years we've eliminated regulation and increased conflicts of interest. And sadly the 2007-2008 financial crisis created many more regulations, but none of them to eliminate conflicts of interest. And if it was up to me, that's the next thing I would look at. It wouldn't be to look at people and say, oh these are just bad apples. It would be to actually look at conflicts of interest. And I know in Australia now you have the Royal Commission examining superannuation. Conflicts of interest are incredibly important to look at, and to think about how we can reduce or eliminate them.

Sandra: Ariely is not just a theoretical economist tinkering away in his Advanced Hindsight lab at Duke University. As a behavioural economist, he's willing to throw himself into the messy reality of business life. So, a few years ago, Ariely set out to prove conflicts of interest can be eliminated in the commercial world by starting an insurance company where honesty would be hard-wired into the business relationship.

Dan: Now in terms of creating trust, I'll tell you a story. So a few years ago I decided to join a couple of guys and start an insurance company with no conflicts of interest. Right, so studying conflicts of interest, I saw how corrosive they are, and if you think about insurance, it is a terrible industry. Imagine insurance: you have a consumer, you have the insurance company. The consumer pays, pays, pays, pays. At some point something bad happens and the person wants the insurance company to pay. And of course the insurance company is better off not paying. Right. Somebody is going to get a better bonus. They have our money, so to speak. That's a terrible problem, and it doesn't end there. Because we as consumers know that the insurance company doesn't want to give us our money back. So what do we do? We exaggerate, you know. The TV becomes a little bigger. The jewellery becomes more expensive and so on. And the insurance company knows that we exaggerate. So they make it difficult and complex, with lots of paperwork and so on. So it's a system based on conflicts of interest and mistrust.

So we said let's try and change that. If it's a two-player system, insurance company and individuals, it's a fixed pie. You have to split the pie between two parties. There are always going to be conflicts of interest. So we said let's make it a three-party game. We have the consumers, the insurance company, and a charity. And let's say the charity is the World Wildlife Fund. Say a thousand people join this insurance company under the World Wildlife Fund, and the insurance company takes money every month or twice a year; they keep 10 percent and they pay claims. And if there's money left over at the end of the year, it goes to the charity. So think about this three-party game. What happens if you, the consumer, are cheating? Who are you cheating? You're cheating your favourite charity. Right, the insurance company takes a fixed amount. They don't take any more. And we started this insurance company in New York almost two years ago, and two weeks after we started, we got an email. And this guy said, you insured my apartment. I thought I lost my laptop. You paid me for my claim. But now I'm realising that I actually found it. It wasn't stolen. I just lost it. And he said, how do I pay you back? Right? And when this happened, I asked my friends from other insurance companies what they do in those circumstances. And of course they have no idea, because it has never happened to them. Right, and that's for me the good news. If you trust people, good things can come. Trust is not just about, you know, saying to people "trust me". It comes from creating a structure that has trust. And in this case, because we understand conflicts of interest, we said look, don't trust us. We don't trust ourselves. If we put ourselves in a situation like a regular insurance company with conflicts of interest, there's a good chance that we will not be honest, because like everybody else, people are tempted. So let's create a structure that would basically not allow us to be dishonest. And that was the important thing for me.
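To make the incentives concrete, here is a small, hypothetical sketch of the three-party arithmetic Dan describes. The premium figures and the handling of the 10 percent fee are assumptions for illustration, not the company's actual accounting.

```python
# Hypothetical sketch of the three-party insurance structure: the insurer keeps a
# fixed 10 percent fee, claims are paid from the pool, and any surplus goes to charity.

def settle_year(premiums, claims, fee_rate=0.10):
    """Return (insurer_fee, claims_paid, charity_donation) for one year."""
    insurer_fee = fee_rate * premiums          # the insurer's take is fixed up front
    pool = premiums - insurer_fee              # everything else is there to pay claims
    claims_paid = min(claims, pool)
    charity_donation = pool - claims_paid      # whatever is left over goes to the charity
    return insurer_fee, claims_paid, charity_donation

# Illustrative numbers: 1,000 members paying $500 a year, $350,000 in claims
fee, paid, donated = settle_year(1_000 * 500, 350_000)
print(fee, paid, donated)   # 50000.0 350000.0 100000.0
```

In this structure, exaggerating a claim only shrinks the charity's donation, never changes the insurer's fee, which is exactly the incentive Dan is pointing to.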

Ellen: One of the things that's happened in your life is that you may have started out as a standard academic, but you've definitely moved past that in recent decades and become quite the public intellectual, with a lot of points of influence, primarily talking to outside publics. And so I was interested, you know, your TED talks are seen by millions of people. What have you learned from doing this? What are your tools for communicating complex ideas to broader audiences? How should we all be doing this better?

Dan: What I've done in the last 10 years, more and more, is to do more large-scale field studies and far fewer lab studies. And I find that, first off, I'm learning a lot from that, because there are lots of nuances in the field that are important, and we don't reflect them correctly when we do our studies. But I'm becoming less and less of a scientist, I would say, and more and more of an engineer. I think that one thing that I do is I hope I'm able to communicate the fact that there's no true answer, there's just kind of making progress. And that when I write or talk, it's kind of a journey to try and make improvements. So I say here is what we know so far. Here are some findings. What would you do with this? Here is one approach of what we could do. The other tool I use is experiments, right. So I basically build my arguments like we all do in papers. We say, here's an experiment, here's an experiment, here's an experiment, here's something we understand about human nature. Here's something we understand. And now let's think about how we put those things together. Another thing I try to do, I take an example from the identifiable victim effect. The identifiable victim effect is this idea that we sympathise, empathise with one tragedy, and we don't pay much attention to large tragedies. So when I describe experiments, I try very much to either get people to intuit the results with me, to walk them through the process, not to just say here is the result, but to say what would you do? How would you feel? And so on. Or I try to describe it from the perspective of one person, even though in the experiments we have many of them. And I try to say here is one person, here they go through this experiment. What would you do? Here's what they did. So I try to focus on an individual case to give people the intuition for the behaviour.

Ellen: So let's switch gears a bit and talk about some of the things that you see as the big ideas, the dangerous ideas, that we're confronting in the world right now, that we can say something about with your techniques.

Dan: So I generally think about human waste. I think about where we are, and where we could have been. And for me the big areas of human waste are how we waste our time, health, money, the environment, hate and motivation. Those for me are the big six. And in each of them there's a question of how do we recreate the world in a way that is more compatible with our skills. So let's take the example of physical reality. Over the last two, three hundred years we've done a lot to improve our physical reality. We built cars, shelters and air conditioning, and light, and speakers and planes and so on. And by doing that we kind of augmented our physical abilities. We used to be very different from supermen and we're a little closer now. We can fly great distances, we can survive hot and cold weather, we can do all kinds of things. And we didn't do it by training ourselves to resist cold. We just wrapped ourselves in clothes or created heating. So in the physical world we understand our limitations and we build around those. In the process we live much longer. Our cognitive demands are much, much higher to make good decisions. And now I think it's time to create cognitive tools. So what are the chairs and lights and aeroplanes and cars for the mind? What are the tools that would get us to behave in a better way? And here too I don't think it's about training people to think long term; it's about understanding our shortcomings and building around them. But sadly, in the mental world we're not there yet.

Sandra: It took decades after the introduction of the car for governments to impose driving and manufacturing rules to deal with safety issues provoked by the switch from horses to automobiles. Issues such as social media and overconsumption threaten our present and future safety in ways that Ariely believes require equally strong government intervention.

Dan: So think about something like Facebook, or the doughnut. Those are things that don't help us think long term. They are things that take advantage of us in the short term. Right. They are things that actually decrease our ability to live a healthy life in the long term. So I think the big question in all of those is how do we build an environment that is supportive of better thinking, and not an environment that tempts the worst in us to express itself. And it is a real challenge, because you go to a supermarket, you have a plan of what you want to buy. The supermarket also has a plan. It's just not the same plan as yours. And they decide the environment, so they can influence dramatically what you're going to buy. And they're not going to put, you know, cucumbers and tomatoes at the end-of-aisle display. They're going to put the things that tap our emotions and can get us to act at that moment. So what kind of environment are we willing to build? And also the moral challenge that we have is how paternalistic we are willing to be. Because if we're not going to be willing to have any paternalism, I think we're going to get into deep trouble. So take driving as an analogy. Driving is about our physical environment. We have lots of rules about driving, right. You can only drive after a certain age, and you have to take lessons, and you can't go over the speed limit, and you have to stop and read signs, and you can't park here and can't park there. All kinds of rules and so on. And it's amazing that we have so many rules for driving and we don't have that many rules for eating healthily, for example. And you can ask what's the difference. Now, one difference is that with driving we see the downside. We see cars on the side of the road. We see accidents, and people can kill other people, not just themselves. But the same thing is true for diabetes. We see the consequences of diabetes. And when people are diabetic, they influence not just themselves, but they influence society around them too. It's just not as clear as an accident. So we need to figure out if we're willing, for example, to take the same level of paternalism we have in driving and apply it to things like eating, or how we spend our time on social networks.

Ellen: So could behavioural economics, in its desire for paternalism as a way to improve human behaviour, work together with AI to invent a more long-term oriented system that's less about feeding our short-term preference pattern?

Dan: I think so. I think we could try to inject novelty.

Ellen: Uhuh.

Dan: It's kind of the opposite of what we've done, right. What we've done with AI is to say let's look at what behaviour is being more reinforced, what has been more standard, and so on. But instead, what if we say look, even if you've done something a hundred times, it doesn't mean that that's the right thing for you. Let's create a model for what novelty is. And let's figure out how many times we should show you something novel. Even if you don't like it, we will still get you to try it. I think that will be really interesting.
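The idea of deliberately injecting novelty can be sketched very simply. The snippet below is an illustration of the general approach, not a method described in the interview: with some probability, the system recommends something the person has never tried instead of their most reinforced past choice.

```python
import random

def recommend(history, catalogue, novelty_rate=0.2):
    """With probability novelty_rate, suggest something the user has never tried;
    otherwise suggest their most-repeated past choice."""
    untried = [item for item in catalogue if item not in history]
    if untried and random.random() < novelty_rate:
        return random.choice(untried)                # deliberate novelty
    return max(set(history), key=history.count)      # habitual favourite

history = ["doughnut", "doughnut", "coffee", "doughnut"]
catalogue = ["doughnut", "coffee", "fruit salad", "green tea"]
print([recommend(history, catalogue) for _ in range(5)])
```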

Ellen: Yeah that would be really cool.

Dan: I'll give you an example for this.

Ellen: Okay.

Dan: When I was in grad school, I tried to be a waiter. I couldn't get a job as a waiter. So two weeks ago I met somebody who owns a restaurant. I told him I had this dream, so he allowed me to be a waiter for one night, and a bartender for one night. It was great fun. And as a waiter I tried to convince people not to split the bills. So there are basically people who split the bill into even amounts: four people, two hundred dollars, everybody pays 50. There are people who calculate how much each person ate, and each person pays their own. And there are people where one person invites everybody, and they take turns. And I think that's the right approach, because of what we know about the pain of paying. Right. So imagine you have a table, four people, 200 dollars, each person is fifty dollars. If each person pays 50 dollars, they each experience unpleasantness of, let's say, 50 points for parting with their 50 dollars. So there's a total of 200 points of unhappiness around the table toward the end of the meal, which is a terrible time to experience a downside. Now if one person pays 200 dollars for everybody, that person doesn't experience four times the negativity, right, because of diminishing returns. So maybe they paid two hundred dollars, but they experience only 150 points of unhappiness. So it's less. Plus they get to feel that they're treating the other people, so they feel a bit better about that. Plus the other people feel that they're being treated, so they're excited about it. So in general I think that one person paying for everybody and switching over time is the right approach. Now as a waiter I tried to convince everybody that that was the right approach. Sometimes I would ask them to pick who it is. Sometimes I offered them credit card roulette. I said, oh, if you put your credit cards in, I'll pick one and that person will pay for everybody. And I had some success, and everybody liked that approach. Not everybody did it. Some of them said, oh, you know, it's too big a group, or some people said we don't eat together enough, but everybody liked the approach. But here's an example of something we came up with that is based on the pain of paying, and on the fact that getting something for free is also extra special. All of those elements basically build the solution of one person buying and them just switching over time. That's an example of something that a system looking only at people's past behaviour would never recommend. So that's where I think we need to be paternalistic from time to time, and you know, I wouldn't force people to do it, but I would certainly make that the easy approach. I would make this the default approach. Maybe I would even give people a discount for doing it, at least to get them going with this approach.
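As a back-of-the-envelope illustration of the bill-splitting argument, here is a toy model of the pain of paying with diminishing returns. The concave pain function and its exponent are arbitrary choices made for illustration, not Dan's actual numbers.

```python
# Toy model: if the pain of parting with money grows less than proportionally with
# the amount, one person paying the whole bill hurts less in total than an even split.

def pain(amount, exponent=0.79):
    """Concave 'pain of paying': doubling the payment less than doubles the pain.
    The exponent is an arbitrary illustrative choice."""
    return amount ** exponent

bill, diners = 200, 4

even_split = diners * pain(bill / diners)   # four people each paying $50: about 88 pain points
one_payer = pain(bill)                      # one person paying $200: about 66 pain points

print(round(even_split, 1), round(one_payer, 1))
```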

Ellen: It's a cool idea. All right. We've mostly been talking about the positives of behavioural influence. Are there any ideas that you or others have used or promoted in the past that you think we should stop using?

Dan: I think we need to not settle for nudges anymore. So I think nudges are very nice and have all kinds of potential. But at the end of the day they are quite mild in their potential application. So for example, think about something like texting and driving, or think about global warming. I don't think nudges are sufficient for those problems. I think for those problems we need to move to regulation. And this basically takes things out of the domain of social science. It says, for some things we are just going to give up on influencing people at the level that is needed, right. Like, you know, what's the number of deaths from texting and driving that is acceptable? If we agree that it's zero...

Ellen: Yeah.

Dan: Then it's very hard to imagine a situation in which the cost-benefit analysis of "let's text right now because it's so important" comes out ahead. If we agree that this is zero, then it's not in the realm of social science. Social science can tell us, you know, what kind of regulation is likely to work and how to get people there, and help people create the habit and so on. But it's not in the social science domain to do the behavioural change. Or if you think about global warming, at the end of the day I think we need to figure out where we need regulation. So we have some massive powers working against us in terms of living up to our human potential. And I think that we need to kind of figure out what are the cases in which we just need to turn to regulation, and use social science to figure out the right regulation, but not give up on the regulation itself.

Ellen: Thank you so much for helping us out with that. It was really interesting.

Dan: With pleasure.

Ellen: Take care.

Dan: Thank you. Bye

Sandra: And thank you Ellen. This podcast was made possible by Jacquelyn Hole and Megan Wedge who made this story feel good and sound awesome.

Outro: You've been listening to Sydney Business Insights, the University of Sydney Business School podcast about the future of business. You can subscribe to our podcast on iTunes, Stitcher, Libsyn, Spotify, SoundCloud or wherever you get your podcasts. And you can visit us at sbi.sydney.edu.au and hear our entire podcast archive, read articles and watch video content that explore the future of business.

 
