This week: the algorithm is innocent, Australians in space, and the licence to watch. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

The algorithm is innocent

Australia gets its own space agency

Federal government makes push for states to hand over drivers’ licences

Outline hires tech news prodigy William Turton

Gawker reporter is a high school senior

Working as intended (or is it?)

Facebook’s response to fake Russian ads

Politicized fake news about Las Vegas shooter

Who Will Take Responsibility for Facebook?

The Australian space agency

Not the Australian Research and Space Exploration agency

The space agency could play a vital role in inspiring students

Brisbane Girls Grammar School’s new observatory

All hail Elon Musk

Elon Musk now plans to send people to Mars in seven years

Moscow turns on facial recognition

Facial recognition at China beer festival

Face recognition is reshaping China’s tech scene

Privacy has taken a back seat amid the Opal debate

The Future, This Week 17 March 2017 featuring German space agency tomatoes

The Future, This Week 15 September 2017 featuring facial recognition

The Future, This Week 8 September 2017 featuring payment by face recognition

Linsey Pollak, learn how to make carrot instruments

Robot of the week

Qoobo, the weird wagging cat tail robot


You can subscribe to this podcast on iTunes, Spotify, Soundcloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Send us your news ideas to sbi@sydney.edu.au

For more episodes of The Future, This Week see our playlists

Introduction: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter. And I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful and things that change the world. OK let's roll.

Kai: Today on The Future, This Week: the algorithm is innocent, Australians in space, and the licence to watch.

Sandra: I'm Sandra Peter, I'm the Director of Sydney Business Insights.

Kai: I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group.

Sandra: So Kai what happened in The Future, This Week?

Kai: Well you were on television but this week you're back with the mere mortals doing podcasts. So welcome back Sandra.

Sandra: Thank you. And if you've missed it - ABC's Q&A, a special edition on the future.

Kai: It was one of my hardest hours. I was there in the audience sitting there and I'm so used to finishing your sentences but I wasn't allowed to say a word so that's a relief that we're back on The Future, This Week now.

Sandra: Well you can start them this week. So what happened in the future this week?

Kai: Our first story is from The Outline and it's called "The algorithm is innocent". The article makes the point that Google and Facebook, who have been in the news a lot recently for all kinds of instances of presenting inappropriate content, are deflecting responsibility onto their algorithms. They basically say the computer did it: it was none of our making, the computer malfunctioned, the algorithm presented inaccurate content. So Sandra, what is that about?

Sandra: On Monday for instance, in the wake of the worst mass shooting in U.S. history, if you were to Google Geary Danley - the name mistakenly identified as the shooter who killed a lot of people in Las Vegas on Sunday night - Google would present quite a number of threads filled with bizarre conspiracy theories about this man's political views.

Kai: The story was sourced from the website 4chan, which is basically an unregulated discussion forum known for presenting all kinds of conspiracy theories and not necessarily real news. And the point was that Google presented these links in its Top Stories box, which sits right at the top of the Google search page.

Sandra: Google then went on to say that "unfortunately we were briefly serving an inaccurate website in our search results for a small number of queries".

Kai: And we rectified this almost immediately once we learned about this mistake.

Sandra: In an email sent to the author of The Outline article, Google also explained the algorithm's logic: the algorithm had weighed freshness too heavily over how authoritative the story was, and it had lowered its standards for its Top Stories because there just weren't enough relevant stories for it to go on...

Kai:...so the news was too new essentially for the algorithm to find other relevant things that it could present or so the story goes.

Sandra: So it was the algorithm's fault.

Kai: Absolutely.

Sandra: And really this wasn't the first time we blamed the algorithm. Back in April, the article mentions, FaceApp had released a filter that would make people more attractive by giving them lighter skin and rounder eyes. And it was called an unfortunate side effect of the algorithm - not intended behaviour.

Kai: So it was an inbuilt bias that attractiveness was essentially associated with whiteness.

Sandra: And of course there are the big stories of the past couple of weeks where Facebook had allowed advertisers to target people who hated Jews, in what was again blamed on a faulty algorithm.

Kai: And we also have discussed this on the podcast previously - stories around YouTube presenting inappropriate ads on videos. And let's not forget the whole story around Facebook and the US election, where Facebook is frequently being blamed for taking an active role in presenting biased news stories and fake news to potential voters, which may have played a role in the election outcome, and also that...

Sandra: Facebook had said that this idea was crazy - that fake news on Facebook had influenced the outcome of the election. But they have come back recently saying that they are looking into foreign actors, Russian groups and groups from other former Soviet states, as well as other organisations, to try to understand how their tools are being used or taken advantage of to obtain these results.

Kai: So Facebook, Google and others working with machine learning and algorithmic presentation of content are frequently blaming their algorithms for these problems. They're saying it wasn't us, it was a faulty algorithm.

Sandra: So let's examine that idea of a faulty algorithm. So what would a truly faulty algorithm be?

Kai: In order to determine this, let's remember what we're talking about. Traditional algorithms are definite sequences of steps that the computer runs through to achieve a result - to bring the software from one state to another. It's a definite series of steps, which we would call an algorithm, and we can determine when it malfunctions because we don't end up in the state that we intended to be in. But machine learning works differently. Machine learning is a probabilistic pattern-matching algorithm, which in this instance did exactly what it was supposed to do: present certain results that are relevant to the topic on some criteria - semantic nearness or some keywords that it extracts - and so the 4chan article was relevant because it was talking about the same topic.
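To make that contrast concrete, here is a minimal sketch in Python - our own illustration, not Google's actual code. The deterministic function has a checkable end state, so a malfunction is provable; the hypothetical keyword ranker just scores candidate stories for topical overlap, with no notion of truth or appropriateness, so a 4chan thread about the right topic counts as "relevant".

```python
# A minimal illustrative sketch - not Google's actual code. It contrasts a
# deterministic algorithm (verifiable end state) with probabilistic-style
# relevance ranking (a score, with no concept of truth or appropriateness).

def binary_search(items, target):
    """Deterministic: a definite sequence of steps with a checkable result.
    If it returns an index, items[index] == target - a fault is provable."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

def rank_stories(stories, query_terms):
    """Relevance ranking: scores stories by keyword overlap with the query.
    A conspiracy thread on the same topic scores as 'relevant' - the ranker
    cannot tell appropriate sources from inappropriate ones."""
    def score(story):
        return len(set(story.lower().split()) & query_terms)
    return sorted(stories, key=score, reverse=True)

stories = [
    "Recipe blog: ten ways with tomatoes",
    "4chan thread: geary danley political views conspiracy",
]
# Both functions 'work as intended'; only the ranker's output can be
# topically accurate yet entirely inappropriate.
print(rank_stories(stories, {"geary", "danley"}))
```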

Sandra: These algorithms are not designed to either exclude faulty information or deliberate misinformation. Nor are they built to account for bias.

Kai: No, and in fact they don't actually understand what they're presenting. They just present stuff that is relevant to the audience, as measured by whether someone will click on it. So relevance is measured after the fact. I am being presented with a set of links, and when I click on those links the algorithm will learn from this: next time, present something to Kai that is similar to what Kai just clicked on. And so over time the algorithm is improving what it presents to me to elicit more and more clicks, so that I like stuff, that I share stuff. In the case of Facebook it also presents me with potential friends, and if it presents the right people I might create more connections. So really what the algorithm does is optimise engagement with the platform - links, shares, likes, clicking on ads - and therefore revenue for the company.
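The feedback loop Kai describes can be sketched in a few lines of Python - a toy model of engagement optimisation, not any platform's actual system. Every click raises the weight of the clicked item's topics, so the ranking drifts towards whatever the user already engages with:

```python
# Toy model of a click-feedback loop - not any platform's actual system.
# Each click boosts the weights of the clicked item's topics, so future
# rankings favour similar content: engagement goes up, diversity goes down.
from collections import defaultdict

weights = defaultdict(lambda: 1.0)   # learned preference per topic

def rank(items):
    """Order items by the sum of their topics' learned weights."""
    return sorted(items, key=lambda it: sum(weights[t] for t in it[1]),
                  reverse=True)

def record_click(item, lr=0.5):
    """Relevance is measured after the fact: a click reinforces its topics."""
    for topic in item[1]:
        weights[topic] += lr

feed = [("Reef is dying, scientists warn", ("science", "reef")),
        ("Reef crisis is a great conspiracy", ("conspiracy", "reef"))]

for _ in range(3):        # the user keeps clicking the conspiracy story
    record_click(feed[1])

print(rank(feed)[0][0])   # echo chamber: the conspiracy story now ranks first
```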

Sandra: So first off, the algorithms are not per se faulty. They are doing what they're designed to do. We just happen not to agree with the results that they are presenting, but they are working pretty much as they were built to work.

Kai: Yes the problem is not so much that the results are inaccurate. It's more that they are inappropriate and the algorithm has no appreciation for what is appropriate or inappropriate because it doesn't understand our world, it doesn't live in our world, it doesn't know anything about culture, about norms, about what is right or wrong. In other words as someone said on television it doesn't give a damn.

Sandra: So the question is how do we fix this? How does Google go about fixing things? So first of all can you fix this?

Kai: So you can't fix the algorithm. The algorithm does exactly what it's supposed to do. It does pattern matching and it presents results that are relevant, but it's also essentially a black box. We discussed this before: you don't actually know how the weighting in the algorithm works and what will come out at the end. The only thing you know is that it will present something that is probably relevant to what you were looking for.

Sandra: So the reason it would be really hard to fix this is because you don't exactly know what type of information you should change and also the data that you model it on is biased to begin with. So how do you go about changing that?

Kai: And we're not talking about algorithms that were trained with a definite set of training data that you could change to eradicate or minimise bias. Those algorithms learn on the fly; they learn constantly from what people are clicking on. So people who are clicking on links associated with a political leaning will then be presented more of those things that they are potentially clicking on, which also leads to the echo chamber effect, where people are presented things that just reaffirm their beliefs - and we talked about this previously. So the whole idea is not for those algorithms to be unbiased; it's precisely to exploit bias, to present things that are relevant to people, to have them click on more things.

Sandra: So Facebook's solution to this - and there is a good article in Business Insider looking at this, and as always we will link to all the articles in our show notes so you can explore all the resources that we had a look at - Facebook's answer is to throw bodies at the problem. On Monday Facebook announced that it would hire another thousand people over the following months to monitor ads - like the Russian ads linked to fake accounts that we saw during the US elections - and to remove the ads that don't meet its guidelines. If this sounds a little bit familiar, it's because Facebook's done this before. If we remember the sad incidents of livestreamed suicides and livestreamed murders that we've seen on Facebook, this is when Facebook said it would hire about 3000 new people to monitor some of the content, on top of the four and a half thousand people it already had. So we're now at over eight thousand people that are monitoring - are these the jobs of the new economy?

Kai: Sadly, yes. So what we're talking about now is a system where a vast array of algorithms is in charge of discerning who gets to see what on Facebook, what search results are being presented on Google, the kind of ads that are presented alongside YouTube videos. And because those algorithms are not really very intelligent - they are very good at matching relevant content to search results and to people's known preferences, but they have no appreciation for appropriateness, for things that might be out of line, things that might be illegal, things that might be offensive - you have to add this human layer of judgement: often low-paid, by-the-hour jobs in charge of weeding out the most blatant, obvious mistakes that those algorithms are making.

Sandra: And intuitively this idea of hiring more and more people to throw at the problem seems a good solution - a reasonable, commonsense solution - but if you take a closer look, and the Business Insider article also takes a closer look at this, there are quite a few things that we would need to figure out. Things like: who are these people that we're hiring? Are these contractors? Where are they from? Are they in the same places? Do they understand the context that they're supposed to regulate?

Kai: On what basis do they make their judgement?

Sandra: Exactly. Is there training? Are they taught to look for specific kinds of things?

Kai: Where does reasonable filtering end and inappropriate censorship start?

Sandra: How does this then inform how Facebook's algorithm and machine learning processes work? When does it start flagging things that it wasn't flagging up until now? Are any of these organisations then working with government authorities or with other people to figure out what are some of the standards? How do we develop the standards by which these will happen? So there are a whole bunch of questions that remain unanswered and yes this is a step forward but probably not an ultimate solution to the problem.

Kai: And the bigger question is: do we have a good understanding of what the problem is? Because eradicating so-called bias, or diversity in search results, is not the ultimate solution for every search that we do on the Internet.

Sandra: Absolutely not. There are a couple of other really good articles by William Turton, who also wrote The Outline article. He gives a couple of really good examples. For instance, if you do a Google search for "flat earth", it should give you a wide variety of stories: that the earth is not flat, but also that there are unfortunately still a lot of people out there who believe the earth is flat.

Kai: Yeah and you might want to actually look up the flat earth movement and what ideas the people are into.

Sandra: However, when the same author did a search for the Great Barrier Reef, the top stories presented by Google were some from the Sydney Morning Herald around the coral crisis and from Wired magazine talking about the crisis of the Great Barrier Reef - but another story was from Breitbart News saying that the coral reef is still not dying, that nothing is happening, and that this is all a great conspiracy. So the idea of what is a point of view vs. what is probably complete nonsense...

Kai: Because it just goes against all the science that we have on the topic.

Sandra: Is it irresponsible for Google to attach some kind of implicit credibility to a story that is pushing these things around the coral reef?

Kai: Which goes back to the old problem that the algorithm does not really understand the intention that goes with searching for a particular topic and also that it cannot really distinguish between real news, fake news, between scientifically sound facts and just opinion or propaganda.

Sandra: So where does this leave us? First, there is a huge problem associated with bias in algorithms, and it has a number of consequences - some of which we spoke about on Q&A - that have to do with how we hire people or how we grant people parole. But there is this whole other range of consequences of bias in algorithms. Second is the language that we use to talk about this. We talk about faulty algorithms doing the wrong thing.

Kai: So we anthropomorphise these algorithms as if they had agency, as if they were actors that would make those decisions and therefore would make mistakes or apply the wrong judgement. And incidentally, that allows us to absolve ourselves - to just point to the algorithm as the actor who made the mistake.

Sandra: But it is our job, or indeed some of these companies' jobs, to get the thing right.

Kai: Yes, but here I want to interject. What does it mean for these companies to get things right? What are they trying to do? What are they optimising? If we look at what Facebook does, essentially they're in the business of connecting everyone, of creating engagement on the platform. They're not really in the business of providing balanced news. What they are optimising is clicks, ad revenue, connecting more people - because that leads to more clicks, shares and ad revenue. The problems of fake news or bias imbalances are basically a sideshow for them - an unfortunate side effect of the thing that they're trying to do, of creating more connections and engagement. It is something that they have to deal with, but it's not their purpose to actually be a balanced news outlet. And neither is Google actually doing this. For them it's much the same: it's actually about advertising, and you drive advertising by exploiting people's world views and preferences and, yes, biases. The problems that we're discussing are emergent side effects that they have to deal with, and they do this by layering filters of people and other algorithms that try to weed out the most obvious problems.

Sandra: So are you saying that because it's not these companies' job, it absolves them of any responsibility?

Kai: Absolutely not. That's not at all what I'm saying. What I'm trying to say is that we need to understand what they're trying to do, to then realise how these problems come about, and maybe ask whether they are actually optimising the right thing - whether we actually want those platforms, which have become the Internet for some people who spend most of their online time on platforms like Facebook. Whether we need some form of awareness in the first instance, or regulations or standards that will provide incentives for these companies to deal with the problem not as something that happens after the fact, but by removing the systemic issues that create the problem in the first place.

Sandra: So at the very least we need to talk about these issues, have a public conversation about them and be aware that they are happening.

Kai: And I'm sure we will have to revisit this issue because those problems are not going away.

Sandra: So let's move on to a happier story. This one comes from Science magazine: after a very lengthy campaign, Australia finally gets its own space agency.

Kai: So 50 years after launching its first satellite, Australia is finally rebuilding its space agency to reduce the dependency we have as a country on foreign nations for launching satellites, for using their communications satellite infrastructure, or really for doing research into anything to do with astronomy or space material design. And we're talking about a 330 billion US dollar global space economy, of which Australia commands a tiny 0.8 percent. So is this really just about getting a bigger slice of the pie?

Sandra: No Kai, this is because even New Zealand's got a space agency, so we need to get one. I mean, out of the OECD countries it's only us and Iceland who don't have a space agency. But more seriously, why go to space in the first place? The traditional answer to this question is that it's about the potential to find resources. We might be able to mine an asteroid that will solve our fossil fuel problems for the next million years. Or we might find a new habitat - we are not doing great with the one we've got now, we might need a new home fairly soon.

Kai: And Elon Musk thinks that the robots are coming for us so we really have to make an effort to escape earth and colonise Mars.

Sandra: Another traditional reason for going to space is pure curiosity. We might find cool shit out there.

Kai: So it's also about inspiration and the imagination of what we could do by transcending the limitations of Earth going to the moon again, building new space stations.

Sandra: And let's not forget Australia did cool stuff in the 60s. In 1967 we were one of the first countries to launch a satellite, and a couple of years later a NASA tracking station in Australia received and broadcast the first TV images of Neil Armstrong taking the first steps on the moon. So these sorts of projects can play a vital role in inspiring students to take up science, tech, engineering, maths, psychology...

Kai: And you can see that there's a craving for engaging with astronomy and space research when schools are building their own observatories.

Sandra: Brisbane Girls Grammar School has launched the Dorothy Hill Observatory, a remotely operated observatory with a range of telescopes, fostering kids' interest in what space exploration can give us.

Kai: So the point that we're making is that having a space agency in Australia will allow Australian companies and researchers to connect better into the worldwide economy and research that is emerging around space exploration today, but also to inspire school children and students to engage with this topic - and maybe follow in the footsteps of Elon Musk.

Sandra: Oh yes. All hail Elon Musk, First of his Name, King of the Martians and the First Molemen, Protector of Tubes, Breaker of Industries and Father of Dragons. This is from a Wired magazine article that talks about his rocket travel plans.

Kai: Yes. So this week not only did Australia announce its space agency, at the same conference...

Sandra:...The International Astronautical Congress in Adelaide SpaceX and Tesla CEO Elon Musk announced it's time for...

Kai:...a fleet of reusable rockets that could transport people across the earth to any place in under an hour. And also to send a first uncrewed cargo ship to Mars by 2022, and then by 2024 to send humans to Mars, so that mankind could become a multi-planetary species.

Sandra: There were some issues with his plan to finance his BFR - and we're not gonna say it on the podcast, but if you played Doom back in the day you would know what BFR stands for. It is a big rocket.

Kai: His outline for how he will fund his program was decidedly open-ended. It goes like this: steal underpants, launch satellites, send cargo and astronauts to ISS, Kickstarter, profit. And obviously he made fun of this himself on Twitter later. So this is really about inspiring us to think beyond the currently possible. And as a true visionary and futurist he says: let's imagine it, we'll figure out the details later.

Sandra: And figuring out things gets us back to the idea of why go to space to begin with. We might not get to Mars, but in the process we will get better at other things. Space exploration has brought us very cheap satellites, which we now all benefit from across a range of industries and consumer services. And it has allowed us to reimagine how we grow food - the German space agency, for instance...

Kai:...which we've covered on the podcast previously. It had to do with growing tomatoes in an unusual way - you can look it up.

Sandra: We'll include it in our show notes, along with the clips from The Martian - and that's being done now in the Atacama Desert as well. So thinking about how to grow staple crops, given that we have a growing population and growing urban centres, space exploration is actually helping us reimagine how we could feed people.

Kai: And next time you ask what NASA has ever done for us, get in the car - it's right there: satellite navigation, you're going to find your destination.

Sandra: Sunscreen, researching human psychology, all of these things that have come out of our desire for space exploration. So very excited to keep an eye on this one.

Kai: Oh, and it's also given us Megan's favourite joke. Megan Wedge is our editor - she's sitting right there. So it goes like this: what do you do when you see a spaceman?

Sandra: I don't know.

Kai: Park your car, man.

Sandra: It's good to laugh a bit because our last story is quite serious.

Kai: It's a story from the Sydney Morning Herald titled "Turnbull government to push states to hand over all drivers' licences". This comes on the back of an announcement to build and roll out across Australia a national face recognition database - basically a Big Brother-style surveillance mechanism that will supposedly make Australia safer by being able to identify criminals and potential terrorists by way of face recognition, rolled out across the CCTV cameras that you find in public spaces and shopping malls across Australia, and based on a database of people's photographs. If you have a passport and have travelled internationally, your photo will already be in the database. But since that only covers about 50 percent of all Australians, the government has now taken to convincing the states to hand over their state-based databases of people's photographs from their drivers' licences. And Victoria and New South Wales have already signalled that they are happy to participate in this undertaking, in the name of keeping Australians safe.

Sandra: So again, a story that seems to be common sense. We all want to be safer, and we would all welcome criminals being caught a bit faster. So let's look at the concerns.

Kai: So firstly, the article quotes Adam Molnar, a lecturer in criminology at Deakin University, who asks: would this even comply with international law? He says it is mass, undifferentiated surveillance that can be used regardless of innocence or participation in any criminal activity. So he is concerned that it is a massive invasion of people's privacy, without any suspicion of wrongdoing. And the question is being asked: once we build a database like this, what if it is hacked, or what if the system is used for other purposes? Can we be sure that a system able to track and identify every person in this country in a public space cannot be used for purposes other than apprehending criminals or preventing terrorism? So let's have a look at what other jurisdictions are doing.

Sandra: In Moscow, for instance, officials have turned on facial recognition in a citywide camera network. Moscow has over 160,000 cameras in its CCTV network, and the authorities now have the ability to turn on...

Kai:...between two and four thousand of those cameras and make them live for face recognition. And they've already shared stories about how this has helped apprehend people on their most wanted list. So there are some success stories.

Sandra: Interestingly, they also mention the fact that this could be used for other purposes. For instance, an article in TechCrunch says that this could add a layer of accountability for services like the police or, say, garbage collectors, who you could keep an eye on: if they say "oh, we did clean this" or "we did patrol this area", you could actually use this system to monitor whether these individuals have done their jobs, at the pace or speed that was required and at the time they said they had done it. You could pretty much track what just about anyone is doing.

Kai: And we've seen in the U.K. that CCTV cameras have been used to publicly shame people who litter in the streets, with loudspeakers actually talking back at them - and face recognition takes that to the next level, as we can see in China.

Sandra: Yes, in China we see these giant billboards that are connected to face recognition systems: if you jaywalk, the face recognition system picks it up, links it to your government ID, and displays your face and your name on a giant billboard, asking why you are crossing the street when you're not supposed to.

Kai: In some Chinese train stations face recognition is now used to check if you have a valid train ticket. You can use your face to pay for chicken wings at KFC.

Sandra: So we've discussed in a previous podcast, which we'll link to, how a train station in western Beijing matches passengers' tickets to their government-issued IDs by scanning their faces: if the face matches the ID photo, the system says the ticket's valid and the gates will open.
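As a rough idea of how such a gate might work - an assumed design, not the actual Beijing system - face verification typically reduces to comparing two face embeddings against a similarity threshold. In this Python sketch, the toy vectors and the threshold value are stand-ins for what a real neural network and a tuned production system would provide:

```python
# Hypothetical sketch of a face-verification ticket gate - an assumed design,
# not the actual Beijing system. Real systems embed faces with a neural
# network; the toy vectors and threshold below are stand-ins.
import math

THRESHOLD = 0.8   # assumed similarity required to declare a face match

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, between -1 and 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def gate_opens(live_face, id_photo, ticket_valid):
    """Open only if the live face matches the ID photo AND the ticket is valid."""
    return ticket_valid and cosine_similarity(live_face, id_photo) >= THRESHOLD

# Toy embeddings standing in for neural-network output
live = [0.90, 0.10, 0.40]
id_db = [0.85, 0.15, 0.38]
print(gate_opens(live, id_db, ticket_valid=True))   # True: the gates open
```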

Kai: And a recent application we have seen is at a China beer festival, where the face recognition database was used to apprehend known fugitives, but also to deny access to people with criminal records, known drug abuse or other black marks on their records. So are we now at risk of putting in place a system that allows us to really enforce a set of norms, sanitise people's behaviour in public spaces and create a giant panopticon? And let's remember that the idea of the panopticon is not that you're necessarily watched all the time, but that you change your behaviour because you could be watched. Just the knowledge of having face recognition everywhere - that you could be seen engaging in wrongdoing and therefore be penalised - will have an effect on people's behaviour. So is this something straight out of 1984?

Sandra: And let's not forget this sits at the intersection of a number of sectors. We're seeing a push from government and official authorities to use these technologies to make us safer. We're also seeing a push from corporations like Apple and Facebook trying to give us better services - the new iPhone X with its facial recognition technology, or the ability to have more fun by animating emojis. And we're seeing a push from organisations trying to make people's lives easier: you can just walk onto the train, or companies allow you to enter their premises using face recognition technology.

Kai: So this is again a story about technological progress and the ability to do new things, but also about the necessity to have a public discourse about what responsible, ethical uses are that really benefit all of us, and to create systems that we're comfortable with. And let's not forget that having the data in place means it can be used for other purposes. Take the Opal card data, which I have written about previously: there have been instances in Australia where people have been contacted by the authorities because, according to their Opal records, they were in the vicinity of a crime being committed, and so they were invited to be on the witness list even though they hadn't come forward. So the point that we're making is that once you build a system, there will always be the concern that it will be used for other purposes. And anyone who now argues the government will not do this with this system should remember that they are already reusing photographs originally collected for drivers' licences for a different purpose.

Sandra: So what we want to highlight is a) the need for a public debate, and b) the speed with which these things are being implemented and experimented with. And this is not necessarily a negative story. There are many good uses for these technologies and many instances in which we want to have them, but we need to be a bit more critical in our understanding of what the implications of these technologies are.

Kai: And the TechCrunch article ends with the sentence: "having a system like this requires one have a significant amount of trust in the government to operate it effectively and responsibly". And even if we might have trust in our government today, we should remember that once these systems are in place, they're available to anyone who will be in government in the future.

Sandra: And even though we're tempted to have this blind trust in technology - that eventually it will get good enough and these systems will be secure - let's remember that even with large team efforts spanning many, many years, we still cannot prove that a thousand-line program cannot be breached by external hackers. So we certainly won't be able to prove that these sorts of systems cannot be breached.

Kai: And now. (Robot of the Week audio)

Sandra: Because we haven't done a robot of the week in a very long time and because I spoke about cats and robots and AI on Q&A.

Kai: Here is Qoobo, the robot cat. It's not really a cat - it's just a pillow with a tail that will wag and purr when you stroke it. The article says: "It's soft, responsive and will never scratch, dismiss or leave you. It's the perfect cat. It's also a robot."

Sandra: Qoobo - robot of the week.

Kai: Such as it is and that is all we have time for today.

Sandra: Thanks for listening.

Kai: Thanks for listening.

Outro: This was The Future, This Week made awesome by the Sydney Business Insights' team and members of the Digital Disruption Research Group. And every week right here with us our sound editor Megan Wedge who makes us sound good and keeps us honest. Our theme music was composed and played live on a set of garden hoses by Linsey Pollak. You can subscribe to this podcast on iTunes, Soundcloud, Stitcher or wherever you get your podcasts. You can follow us online on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news that you want us to discuss please send them to sbi@sydney.edu.au.
