This week: happiness, big data analysis, and inclement weather. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

01:38 – Mapping happiness via media reports

08:54 – Sentiment analysis breakdown

16:59 – Sydney’s apocalyptic weather update

The stories this week

Mapping world happiness with sentiment analysis

And why that’s the only story this week – Sydney’s apocalyptic weather

Google’s BigQuery visual document extraction tool

Quid’s platform for interrogating the world’s collective intelligence

Nobel laureate Daniel Kahneman’s TED talk on how we perceive happiness differently

Our previous analysis of AI conversations in the media

The problem with Sentiment Analysis


You can subscribe to this podcast on iTunes, Spotify, Soundcloud, Stitcher, Libsyn, YouTube or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Our theme music was composed and played by Linsey Pollak.

Send us your news ideas to sbi@sydney.edu.au.

Disclaimer: We'd like to advise that the following program contains real news, occasional philosophy and ideas that may offend some listeners.

Intro: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter. And I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful, and things that change the world. Okay let's start. Let's start!

Sandra: Today on The Future, This Week: happiness, big data analysis, and inclement weather. I'm Sandra Peter, I'm the director of Sydney Business Insights.

Kai: I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group. So Sandra, before you ask me what has happened in the future this week let's put a couple of questions to our audience. Now, as we are winding down Season Four of The Future, This Week, we would like to hear from you two things. What are the big things that we might have missed this season and you would like to hear, so we can incorporate it into one of our last episodes? And what are the best episodes that we have done this year, so that we can put together for you a bit of a Christmas special?

Sandra: You can e-mail us at sbi@sydney.edu.au or reach out to us on Twitter.

Kai: But do it quickly, in the next few days.

Sandra: As we're wrapping up Season Four, tell us what the big stories of this year have been for you, or if we've missed anything big that you'd like us to talk about. And we'll try to squeeze it in before the end of the year, or, if not, make sure that we include it in Season Five, starting early next year.

Kai: Now you can ask me.

Sandra: So Kai, what's happened in the future this week?

Kai: Ah, our first story comes from Forbes magazine, and it's titled "Mapping of world happiness 2015 - 2018 through eight hundred fifty million news articles".

Sandra: And this sounds so fantastic when you stumble upon this, it sounds like the perfect thing. Eight hundred and fifty million news articles in sixty-five languages from around the world that give you a day by day view of how happy the world is.

Kai: The author presents a map which offers a time lapse animation of how happy each part of the world was, visualised through little green and red dots that are sprinkled all across the world.

Sandra: So what we're getting here is the average tone of the worldwide news coverage. They are looking at news articles, remember, and they are correlating these to their geolocation, and they range from positive ones, the green ones, to negative ones, the red ones.

Kai: Now you might ask how did they do this? And this is the crux of the matter here. So what the author has done is he utilized freely available news articles that stem from the various regions of the world and ran this through a sentiment analysis. A type of textual analysis that associates words with either a positive or negative tone. So the word hate is negative, the word love is positive. Good, bad, and anything in between. So usually sentiment analysis is based on standard vocabularies which basically have mapped those words and allocated them to either one of the two sentiments. Good or bad.
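
For listeners who want to see the mechanics, here is a minimal sketch of the lexicon approach Kai describes, in Python. The word lists, the scoring rule and the example sentences are illustrative assumptions only, not the vocabularies or the method used in the Forbes analysis.

```python
# Toy lexicon-based sentiment scorer. The word lists below are made up
# for illustration; real sentiment lexicons contain thousands of entries.
POSITIVE = {"love", "great", "happy", "good", "brilliant", "awesome"}
NEGATIVE = {"hate", "bad", "terrible", "flood", "disaster", "sick"}

def sentiment_score(text: str) -> float:
    """Return a crude score between -1 (all negative) and +1 (all positive)."""
    words = [w.strip(".,!?'\"").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.0  # no sentiment-bearing words found at all
    return (pos - neg) / (pos + neg)

print(sentiment_score("A flood and a disaster hit the city"))  # -1.0
print(sentiment_score("A great day, people are happy"))        #  1.0
```

Everything the map then shows is an aggregation of scores like these over articles tied to a location, which is why the result says more about the words journalists choose than about how anyone on the ground feels.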

Sandra: And the claim, and I quote from the article, is that this can allow us to peer inside the soul of a global society.

Kai: Anyone who is into big data analysis will find this really interesting simply because of the sheer amount of data that was utilized in this analysis, and the way in which it was visualized. The author claims that there were 400 gigabytes of geographic data: eight hundred and fifty million tone measures out of three point two trillion total emotional assessments in this pool of news articles. It's an enormous amount of data and certainly an impressive analysis. But we do want to pick this apart, and we do want to call bullshit on the claims that the article is making. And we want to have a quick look at sentiment analysis as such.

Sandra: So first let's look at what is termed happiness in this article, which is simply how the media chooses to portray certain events, whether they be economic events or natural disasters.

Kai: The article talks about how in the event of a flood or an earthquake, media coverage turns red so to speak, because as the media works through the incident the words are decidedly negative. But then it might very quickly switch over to green again as the conversation moves on. And the author talks at length about how this might show the resilience of people who, like a good news story, do not get hung up on the negativity of the incident. And so, the claim goes, this data can reflect how people in a certain location feel at a certain point in time.

Sandra: Unfortunately this has very little to do with what we would actually term happiness, if we look at any research on happiness. For instance, looking at the work of Danny Kahneman on whether happiness is measured as life satisfaction, as you reflect on your entire life, or as what people experience in their day-to-day lives. That kind of happiness is not reflected at all in media coverage, where if it bleeds, it leads.

Kai: And we have a quick clip for you. Here's Danny Kahneman.

Danny Kahneman (file audio): There are several cognitive traps that sort of make it almost impossible to think straight about happiness. And my talk today will be mostly about these cognitive traps. This applies to lay people thinking about their own happiness and it applies to scholars thinking about happiness, because it turns out we are just as messed up as anybody else is. The first of these traps is a reluctance to admit complexity. Turns out that the word happiness is just not a useful word anymore, because we apply it to too many different things. I think there is one particular meaning to which we might restrict it, but by and large this is something that we will have to give up and we'll have to adopt the more complicated view of what well-being is.

Kai: And this is the point here. What this article does, and unfortunately what happens often in these kinds of studies, is a gross overgeneralization: overclaiming what the data can actually show. My point is, when you are measuring tone of voice in the media, you get a map of tone of voice in the media. But the extent to which that actually measures the happiness of people on the ground in these locations is more than questionable. Who of us would say that the way in which the news media reports on any given day is really a reflection of how happy the country is? That's just not something that the data would support.

Sandra: So let's look at today. If we look at the events in Sydney, we've had seemingly more rain in one day than we should be having in two months. Obviously, a lot of the media coverage is quite negative. People have been impacted trying to get to work. There have been floods, there have been accidents. It's been quite a distressing day for people in Sydney, but overall most of us are still quite happy with our lives here and it hasn't really impacted our level of happiness. It might impact our daily mood, but it definitely has nothing to do with our overall happiness.

Kai: Interestingly, the article gives us a glimpse that the author actually understands that there's something wrong, maybe, with the way in which the data is claimed to support this conclusion when he discusses why India seems to be mostly red. So according to this claim, Indians are really unhappy people. Also, when you look at the map as such, China seems to be mostly green. So they are very happy people there. Or could it just be that the Indian press uses sensationalist language and often, as Sandra said, if it bleeds, it leads? It foregrounds the really negative side of things. Whereas the mostly state-controlled media in China tends to portray a very happy picture of the country and therefore uses more positive language. So we're not saying that this type of analysis is completely useless, but we would question whether this is actually a measure of happiness or just the way in which the media might use language in a particular location. Which isn't uninteresting, but just not what the author claims it is. But we also want to look into sentiment analysis and the kind of claims that such a reasonably crude analysis would support.

Sandra: So let's say Kai and I write an article today about the ability of such analysis to give insights into happiness. And we would say, oh yeah, this was absolutely great. The best analysis ever. It was such a brilliant piece of work.

Kai: Oh, sentiment analysis is the best.

Sandra: It's absolutely marvellous.

Kai: It's awesome.

Sandra: You can use it for anything, any time. It will give you the best results.

Kai: And as you can hear, this is actually quite an Australian thing to do. Sentiment analysis has huge problems picking up sarcasm, for example. And I've actually had a conversation with a colleague once about a study on Twitter sentiment in the aftermath of corporate crises, like the grounding of the QANTAS fleet at the time. Which in Australia really doesn't work, because Australians have this innate ability to use language in this sarcastic kind of way. "Well done QANTAS, great job once again." And they will happily do this on Twitter, and sentiment analysis would think, oh, everything is swell. Australians are unfazed by what happened at QANTAS.

Sandra: And we've seen the same, message after message, on Twitter during the Banking Royal Commission. Where people were applauding and cheering on...

Kai: Executives who make a fool of themselves.

Sandra: But that's not the only problem with sentiment analysis, because all these stories appear in a certain context.

Kai: It's a very blunt instrument. It just goes by which words are associated with good or bad. When people use them in the opposite way, as in sarcasm, it doesn't work. But it also doesn't know context. So there was an article in Fast Company a few years ago which looked at sentiment analysis during the Obama/Romney election in 2012, where a sentiment analysis would post a daily sentiment score for each of the candidates. It made the point that the algorithm is completely blind to the fact that there are actually two opposing groups messaging on any given day. So rather than picking up on what the national sentiment would be, the only thing it really picks up is which of those two groups is louder and posts more messages.

Sandra: Which is not to say that this is not something you'd be interested in at certain times, but it does not give you the overall...

Kai: No, it does not give you the overall sentiment. Because national sentiment for a candidate would suggest that opinions changed, that you would actually have a certain swing in which candidate is favoured. All you get is which of the two groups, which might be stable, with no opinions being changed, shouts louder on any given day. Or, in another example, who says something can matter a lot, not just what is being said. Sentiment analysis treats every message the same.

Sandra: That's sick bro.

Kai: Exactly. So the use of the words "killer" or "sick" has a different meaning when a young person says them than when an older user says them. So the context matters, who says something matters, and sentiment analysis is pretty much blind to all of this. On top of this, everything is classified as either good or bad. There's really no middle ground. Nothing is ever really neutral in sentiment analysis.
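
Using the toy scorer sketched earlier, the blind spots Kai lists are easy to reproduce; the tweets below are made-up examples, not real data.

```python
# Sarcasm: the lexicon only sees the positive words, so the sarcastic
# tweet about the QANTAS grounding comes out as thoroughly happy.
print(sentiment_score("Well done QANTAS, great job once again"))  # 1.0

# Speaker and context are invisible: "sick" sits in the negative list,
# so slang praise from a younger speaker reads as a complaint.
print(sentiment_score("That's sick bro"))  # -1.0

# And there is no real neutral ground: a single sentiment-bearing word
# pushes the whole message to one extreme or the other.
print(sentiment_score("The report was good, apart from everything else"))  # 1.0
```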

Sandra: Yet we're not trying to say that the vast amounts of data that are now being created, and the ability to analyse millions of articles, for instance, do not present opportunities of their own.

Kai: And while sentiment analysis is a rather blunt instrument, which might have some applications in certain bounded contexts which we understand already, text analysis per se - semantic analysis - can be really powerful, and there are some really interesting ways to think about this and what to do with it.

Sandra: And, for example, the two of us have actually done some of this work about a year ago, when we were trying to understand what the public conversation was around artificial intelligence. And we were - similar to this article - trying to look at all the articles that had been written around artificial intelligence to try to get a sense of where the conversation was at and what people were thinking about it.

Kai: At the time we used a text analysis tool called Quid, which pulls in lots of news articles, blogs and other textual information that is available on the Internet, and makes it available for analysis according to certain keywords.

Sandra: We had a year of news stories from around the world on artificial intelligence. And what we found back then was that the conversation broke down into three big themes. One had to do with partnerships and initiatives around artificial intelligence. All the big companies were doing something in the space. There was another cluster of articles that was looking at the potential effects of artificial intelligence. Remember, this is the time of "they're taking our jobs". And there was another cluster that had concerns about the present applications of AI. We looked at the conversations that were getting traction in the media, and it was clear that people were enamoured with the potential effects of these technologies rather than their actual manifestations or implementations. And a closer look at the debate revealed that, on the one hand, people were concerned about the apocalyptic warnings about killer robots. Think Elon Musk and the Terminator. And on the other hand, the fear over job losses and the adverse impact on the economy. So, "they are coming to take our jobs". But overall the conversation just seemed to oscillate between the overly optimistic reports about the efficacy and the potential of AI as a game-changing technology, and the really pessimistic reports about the envisioned impacts on jobs and society more generally.
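
As an aside for readers curious how a year of articles gets broken into themes like these: the sketch below uses TF-IDF vectors and k-means from scikit-learn, a common baseline for this kind of clustering. It is not how Quid works internally, and the article snippets and the number of clusters are purely illustrative.

```python
# Illustrative theme clustering over a (tiny) corpus of news articles.
# Not Quid's method; just a standard TF-IDF + k-means baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

articles = [
    "Tech giants announce a new partnership on artificial intelligence research",
    "Report warns AI could displace millions of jobs over the next decade",
    "Concerns raised over bias in facial recognition systems used by police",
    # ...in practice, thousands of articles collected over a year
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(articles)

# The number of clusters is chosen here purely for illustration;
# in practice you would tune it or let the tool suggest a structure.
kmeans = KMeans(n_clusters=3, random_state=0, n_init=10).fit(X)

# Print the most characteristic words for each cluster as a rough theme label.
terms = vectorizer.get_feature_names_out()
for i, centroid in enumerate(kmeans.cluster_centers_):
    top = centroid.argsort()[::-1][:5]
    print(f"Theme {i}: {', '.join(terms[j] for j in top)}")
```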

Kai: So while this ping-pong between utopian and dystopian stories in the media was quite prevalent, listeners of the podcast will remember that about a year ago we discussed the real problems with AI. So based on what we found in this analysis, we looked at what was missing from the conversation. And we picked up on the fact that there were some real ethical issues, some real problems with, for example, bias in the data that goes into training deep learning algorithms, a conversation that was just emerging at the fringes of the media. There were a few people who were calling these problems out. And so we pointed this out in the article and were able to paint a picture whereby we wouldn't focus on what was already prominent in the media, but on things that were just emerging and not really being picked up.

Sandra: And we were making an argument for bringing such conversations to the forefront of business and society if we were to successfully embrace artificial intelligence into our businesses and society more broadly. And a number of initiatives since then have started to consider how to address these issues. At the end of last year, the Institute of Electrical and Electronics Engineers released its Ethically Aligned Design report. Earlier this year, Alan Finkel called for Australia to lead the conversation around the ethics of AI. The Human Rights Commission is currently consulting, among others, with the University of Sydney on new guidelines around technology and ethics, with a strong focus on artificial intelligence.

Kai: And really, the pendulum has swung, and we would expect that if we were to do an analysis like this today, we would find that one of the biggest concerns now is indeed bias and the problem with algorithmic decision making that might go unchecked. And that comes off the back of a discussion around the power of Google and Facebook, and their algorithms and how they shape our daily lives.

Sandra: So where we want to leave this story is that massive amounts of data do allow us to do some incredibly interesting things.

Kai: But we do need to bring an understanding of the context, the conversation. And so if the measurement is indeed too broad or too general, we're in danger of overclaiming what the data can reasonably support.

Sandra: But unfortunately that's all we have time for this week.

Kai: We will put the reason for this in the show notes: an article about the inclement weather and the storm in Sydney, which has derailed our travel plans today and cut a little short the time we have for this podcast. But we will see you on the podcast next week. Don't forget, send us your ideas for the best stories of the year or the things we might have missed.

Sandra: Thanks for listening.

Kai: See you soon. Thanks for listening.

Outro: This was The Future, This Week made possible by the Sydney Business Insights team and members of the Digital Disruption Research Group. And every week right here with us our sound editor Megan Wedge who makes us sound good and keeps us honest. Our theme music is composed and played live from a set of garden hoses by Linsey Pollak.

You can subscribe to this podcast on iTunes, Stitcher, Spotify, YouTube, SoundCloud or wherever you get your podcasts. You can follow us online on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news that you want us to discuss, please send them to sbi@sydney.edu.au.
