This week: pay with a smile, fake reviews and regulating AI. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

You can now pay for fried chicken by just scanning your face

AI has learned to write totally believable product reviews

How to regulate Artificial Intelligence

Face recognition expected in the new iPhone 8

To pay just smile

The smartphone’s future is all about the camera

AI makes it easy to create fake videos

The Australian Competition and Consumer Commission on fake reviews

Tales of a “fake reviewer”

How to spot fake reviews

Google extracts more money from ads using AI

Fake Russian Facebook accounts bought $100K in political ads

Automated crowdturfing attacks and defences in online review systems

10 simple ways to spot a fake Amazon review

Should AI be regulated?

NAB’s ‘virtual banker’ chatbots to save millions

Faster analysis for astrophysicists thanks to AI

Our robot of the week

Meet the Laundroid


You can subscribe to this podcast on iTunes, Spotify, Soundcloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Send us your news ideas to sbi@sydney.edu.au

For more episodes of The Future, This Week see our playlists

Introduction: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter and I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful, and things that change the world. OK let's roll.

Kai: Today in The Future, This Week: pay with a smile, fake reviews and regulating AI.

Sandra: I'm Sandra Peter. I'm the Director of Sydney Business Insights.

Kai: I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group.

Sandra: So Kai what happened in the future this week?

Kai: So apparently in China I can now pay for my bucket of fried chicken with a smile. The first story is from Mashable and it's titled "You can now pay for fried chicken by just scanning your face".

Sandra: And this comes courtesy of Alibaba, which has been experimenting with new forms of payment for a couple of years now, and KFC is now using facial recognition: you walk into a store, they scan your face and you can pay for your meal.

Kai: So you know these screens in McDonald's restaurants where you can configure your meal or configure your burger - same thing really, but with a camera at the top that scans your face. And this apparently is a new kind of facial recognition technology that works with a 3-D camera. It's the same kind of technology that is apparently being released in the new iPhone 8, which because of the larger screen does away with the fingerprint scanner and now uses facial recognition to unlock the phone. This new technology is called depth sensing, also called structured light, and it works by... now get this... spraying thousands of tiny infra-red dots across a person's face to pay for your chicken. That sounds finger-lickin' good. But the way it works is that it has a sense of depth of field, so it can't be fooled by holding up a photograph to the camera: it actually recognises whether there's a real face in front of it and then uses AI to recognise the person's face.

Sandra: So let me stop you there for a moment. You said AI, artificial intelligence?

Kai: I did.

Sandra: A lot of stories this week had artificial intelligence in them. There is KFC's new payment technology, there is the new iPhone, there's NAB's virtual chat bots that are going to save them millions.

Kai: There's Google and predictive analytics to increase the accuracy of ad targeting.

Sandra: There are astrophysicists that now can analyse images 10 million times faster to look for black holes and other things in the universe.

Kai: So it seems to be a real week for AI.

Sandra: Which brings up the question: what is AI? We seem to be using the term for everything - AI replacing jobs, helping us do things better, it's smarter than us, it's making us dumber.

Kai: It's coming for us.

Sandra: So I think we should stop there for a moment and just try to clarify a little bit what we mean by AI. There seems to be quite a bit of confusion.

Kai: So let's go back some 50 years or so and look at how AI started. What we now call good old fashioned AI started out as a gigantic program of encoding the world into machines. So that first wave of AI was really about collecting all the facts about the world and then using rule-based reasoning to build intelligence into a machine. And that went on for a good 20 to 30 years and it failed quite spectacularly when AI encountered what is now called the common sense problem: as humans we seem to know a lot more than we can ever express in words and rules. And so AI fell off a cliff for a while, but it has been rediscovered big time. But we're doing it differently now.

Sandra: The huge shift came along when we decided that we're not going to go through the painstaking process of codifying everything and trying to teach machines that way, but take a whole new approach altogether. So rather than explicitly programming for a particular outcome, we decided that machines should learn from a large array of examples, in a very similar way to how we do it, without telling them explicitly how to do things.

Kai: So we utilise a technology called neural networks, which basically simulates neurones in the brain that adjust their configurations based on the patterns of input data that we as humans take in via perception. That's the idea basically. And so what we call machine learning or deep learning or data learning is essentially these layers of neurones that are able to adjust so that they can quite reliably react to certain patterns of data that we feed in and then create a certain outcome - recognising cats in pictures, for example.
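
To make that concrete, here is a minimal sketch in Python of the idea Kai describes: a tiny two-layer network whose simulated neurones adjust their weights to patterns in the data. The four-pixel "patterns", layer sizes and learning rate are all invented for illustration - this is not from the episode or the linked stories, just a toy example of how such layers learn.

```python
import numpy as np

# Toy "pattern recognition": invented 4-pixel patterns with invented labels.
rng = np.random.default_rng(0)
X = np.array([[1, 1, 0, 0],   # "stripes" patterns -> label 1
              [0, 0, 1, 1],
              [1, 0, 1, 0],   # "checker" patterns -> label 0
              [0, 1, 0, 1]], dtype=float)
y = np.array([[1], [1], [0], [0]], dtype=float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One hidden layer of simulated neurones; weights start random and get adjusted.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

for step in range(5000):
    hidden = sigmoid(X @ W1 + b1)        # the hidden layer reacts to the input patterns
    output = sigmoid(hidden @ W2 + b2)   # the network's current guess for each pattern
    error = output - y                   # the learning signal
    # Backpropagation: nudge every weight a little to reduce the error.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0)

print(np.round(output, 2).ravel())  # should end up close to [1, 1, 0, 0]
```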

Sandra: So the idea is that these deep learning algorithms can actually take advantage of the huge datasets that we have now. So indeed teaching a machine to recognise a cat is a very very complex problem...

Kai: But we do have enough pictures available on the Internet.

Sandra: We do. And if we can do away with having to fully codify and transfer the knowledge of what a cat is from a programmer to the machine via explicit rules, and instead feed it millions and millions of pictures of cats, the machine can keep improving its performance in a similar way.

Kai: Yeah. There's two ways of doing it: in one, you feed a large amount of data into the algorithm and those patterns form in the neural network; in the other, it learns as it goes. But in either case you need a human to actually tell the algorithm what it is looking at and whether it's making progress in its learning.
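
A minimal sketch of those two ways, using scikit-learn. The tiny labelled dataset and its "rule" are invented; the point is only the contrast between fitting on all the data at once and updating the model one labelled example at a time - and that in both cases a human has supplied the labels.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Toy labelled data: a human has already said what each example is (the 1s and 0s).
# The hidden rule is simply "label = first feature" - invented for illustration.
X = np.array([[1, 0], [1, 1], [0, 0], [0, 1]], dtype=float)
y = np.array([1, 1, 0, 0])

# Way 1: feed a large amount of data in at once and let the patterns form.
batch_model = SGDClassifier(random_state=0).fit(X, y)

# Way 2: "learn as you go" - update the model one labelled example at a time.
online_model = SGDClassifier(random_state=0)
for _ in range(20):                                   # a few passes over incoming examples
    for xi, yi in zip(X, y):
        online_model.partial_fit(xi.reshape(1, -1), [yi], classes=[0, 1])

print(batch_model.predict(X))    # both should recover the simple rule: [1 1 0 0]
print(online_model.predict(X))
```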

Sandra: So Artificial Intelligence in this case and in most of the stories that we looked at this week really just means this particular type of machine learning. But is that really intelligence?

Kai: Well, according to Professor Luis Perez-Breva, the faculty director of MIT's innovation teams program, who was recently interviewed on the Future of Work podcast run by Jacob Morgan (we'll put a link in the shownotes), artificial intelligence is at best an aspiration. He wants to make the distinction between intelligence as we see it in humans and the kinds of technologies that we have, such as automation, machine learning, data learning and robotics. And what he's saying is that yes, we have the ability to build machines that solve certain tasks commonly associated with human intelligence, but that doesn't mean we're actually building intelligent things. What we are doing is essentially solving problems in very bounded, very narrow areas. We can teach algorithms to be conversational agents or bots and have a conversation in a very bounded area, such as an advisory context as in NAB, and we can do image recognition, but each of those algorithms can do that one thing well and cannot do other things. There is no general intelligence there. There is no thinking, there is no reflection, there's no awareness, there's no living a life. So we're very far away from actually creating artificial intelligence, but we're getting really good at solving certain problems in new and effective ways.

Sandra: So in that broader sense artificial intelligence really describes a general field of study in which we are trying to achieve this - but with the methods we currently have, we are not there. In the media and in the press, machine learning and automation are quite often used interchangeably with artificial intelligence in a general intelligence sense, and this can be quite dangerous: whilst all of the developments being discussed in the press have to do with machine learning, the images conjured up by using the term artificial intelligence are quite often those of Arnold Schwarzenegger in The Terminator, or things like Skynet, which do actually exhibit those general intelligence traits.

Kai: So what happens is that because machine learning algorithms can solve tasks that are commonly associated with human intelligence, we tend to anthropomorphise these machines and attribute agency to robots or AI algorithms, thereby creating promises or dystopian fantasies that are quite misguided. So according to Professor Perez-Breva we are very far off creating actual intelligence in machines. And so we shouldn't confuse those utopian or doomsday scenarios with the real problems that we're solving and the actual applications of these technologies that are happening today.

Sandra: So speaking of applications of these technologies that we have today - once we understand that AI is about machine learning, it's interesting to look at what AI can actually do today, what it is good for today, which brings us to our second story.

Kai: Fake reviews. So this story is from Business Insider and it's called AI has learned to write totally believable product reviews and the implications are staggering.

Sandra: "I love this place. I went with my brother and we had the vegetarian pasta and it was delicious. The beer was good and the service was amazing. I would definitely recommend this place to anyone looking for a great place to go for a great breakfast. A small spot with a great deal."

Kai: This is a review written from scratch by an algorithm, in a study by researchers from the University of Chicago, in a research paper that will be presented at a conference in October (we'll put the link in the shownotes). The researchers basically demonstrate that machine learning algorithms can create totally believable fake review texts from scratch. Not only can these texts not be detected by plagiarism software, because they do not work by reassembling existing reviews - the algorithm has learned the patterns that underlie convincing user reviews by being fed a large number of Yelp restaurant reviews, and can then write entirely new reviews of its own.
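
The researchers trained a character-level recurrent neural network on real Yelp reviews; the sketch below swaps in a much simpler character n-gram model with a few invented training sentences, just to show the underlying idea - learn which character tends to follow the previous few, then sample brand-new text from those learned patterns.

```python
import random
from collections import defaultdict

# Invented training text standing in for a large corpus of Yelp reviews.
training_reviews = (
    "i love this place. the pasta was delicious and the service was amazing. "
    "great spot for breakfast, the beer was good and the staff were friendly. "
    "the food was delicious and the service was great. i would definitely recommend it. "
)

ORDER = 4
next_chars = defaultdict(list)
for i in range(len(training_reviews) - ORDER):
    context = training_reviews[i:i + ORDER]
    next_chars[context].append(training_reviews[i + ORDER])  # context -> observed next character

def generate(length=140, seed="the "):
    """Sample new review-like text, one character at a time, from the learned patterns."""
    random.seed(7)
    text = seed
    while len(text) < length:
        candidates = next_chars.get(text[-ORDER:])
        if not candidates:          # dead end: restart from the seed
            text += " " + seed
            continue
        text += random.choice(candidates)
    return text

print(generate())
```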

Sandra: So what happened with these reviews is that individuals were asked to rate whether they thought the reviews were real or not, and the reviews passed the test: they were believed to be real. The really interesting thing, however, is that the researchers claim they are effectively indistinguishable from the real deal because the humans also rated these reviews as useful.

Kai: Which means that the deception intended by the researchers was entirely successful. So these algorithms can now write reviews that are completely indistinguishable from human-written reviews.

Sandra: Yep, and that are perceived as being just as useful as the ones that humans write.

Kai: And fake reviews are already a problem. There's a whole industry, a grey area.

Sandra: Human generated fake reviews?

Kai: Yes. So there's an article (we'll put it in the show notes) by a woman reporting what it takes to write believable fake reviews and to build a profile that is trustworthy and believable. So there's a whole industry of people getting paid to write these reviews for product owners and restaurants to pimp their reviews. And it happens on Amazon, on Yelp.

Sandra: So are these algorithms actually coming for their jobs?

Kai: Are we seeing the automation of click farms in the fake review industry?

Sandra: It's a scary thought.

Kai: Yeah, it's a scary thought, because so far, with human-written fake reviews, there are limits to how many reviews you can produce. But once you can do this with an algorithm you can really scale this up and turn it into a massive problem.

Sandra: Let's recognise that there are still problems to be solved - these reviews have to come from accounts that are believable as well. However, the really big issue has been solved, which is: are these perceived as real, do users trust these reviews, are they perceived as useful?

Kai: And there are articles on the web, which you can look up, on how to spot fake online reviews. Many of them do talk about which accounts the reviews come from, but that is a problem that can be tackled with technical means as well. Most of them, though, centre on how the reviews read: are they overly positive, what are the traits of a fake review? But the research has shown that algorithms can now write reviews that are indistinguishable from real reviews to the naked eye. They can however still be uncovered by machine learning, because apparently the way in which the algorithm distributes certain characters across the text is different from how humans would normally do it.
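
The defence described in the paper rests on exactly this kind of statistical fingerprint. Here is a minimal sketch of the idea - comparing the character distribution of a suspect review against a reference distribution - with invented texts; the paper's actual detector and thresholds are more sophisticated.

```python
import math
from collections import Counter

def char_distribution(text):
    """Relative frequency of each letter in the text."""
    counts = Counter(c for c in text.lower() if c.isalpha())
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def divergence(suspect, reference, eps=1e-6):
    # KL-style score: how surprising the suspect's character mix looks against
    # the reference distribution. A higher score means a more unusual review.
    return sum(suspect.get(c, eps) * math.log(suspect.get(c, eps) / reference.get(c, eps))
               for c in set(suspect) | set(reference))

# Both texts are invented; in practice the reference would come from a large corpus
# of reviews known to be written by humans, and the flagging threshold would be learned.
reference = char_distribution(
    "We had a lovely dinner, the staff were attentive and the wine list was excellent. "
    "Portions were generous and the dessert menu is worth the visit on its own."
)
suspect = char_distribution(
    "I love this place. The beer was good and the service was amazing. "
    "A small spot with a great deal."
)
print(round(divergence(suspect, reference), 3))
```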

Sandra: And once we figure out the cost of this, it will lead to an arms race: you try to generate fake reviews, I try to figure out which reviews are fake, you get better at writing them and I try to get better at catching them. Soon enough...

Kai: Sounds a bit like the giant robot duel that is about to happen. But seriously, are we now on the verge of a machine learning arms race between creating fake information on the Internet and detecting that same information? Which leads us to a bigger problem...

Sandra: Fake everything. Fake news. Fake tweets. Fake...

Kai: Journalism, propaganda, all kinds of fake shit. So while fake reviews might threaten the business models of Amazon, Yelp, TripAdvisor and all of these sites, the idea of fake everything has some deeper implications. Think about fake tweets: an account like @DeepDrumpf, the bot that humorously impersonates Donald Trump, might be a fun thing to follow and an easy thing to spot, versus the pro-Trump chat bots that are indistinguishable from real people tweeting pro-Trump messages. So this might hijack things like elections, and we've seen that happen before. We also have algorithms that write articles - a lot of market updates in financial newspapers are basically written by algorithms.

Kai: And just today there's an article in The New York Times reporting that fake Russian Facebook accounts bought about 100,000 dollars worth of political ads during the campaign.

Sandra: And so far this has happened with a lot of written language - tweets, reviews, articles.

Kai: And there are also applications, for example, in journalism. A lot of articles, like say a match report in sports, follow certain patterns and could easily be written by an algorithm instead of a journalist. Or take your spam emails, some of which are of abysmal quality - so we would welcome some algorithms there. I'm sure algorithms will find their way into spamming to create ever more believable spam emails, creating a real problem. But it's not confined to just text.

Sandra: We've seen researchers at the University of Washington recently release a fake video of President Obama in which he very convincingly appears to be saying things he never actually said - but we can now realistically create that.

Kai: So he might have said them in a different context, but what they have done is overlay his mouth movements onto real video footage, and the artificial mouth blends in quite convincingly, so that it's really hard to discern whether this is actually real or fake. You can now also create a fake voice from just a few minutes of recorded audio.

Sandra: And we can also make them say things that users will perceive as useful, that inform their political views in a meaningful way, or that might inform the choices they make about a restaurant or a place to visit.

Kai: So we're now on the verge of having a president say things to camera that they have never said, which creates scary scenarios - Orson Welles-like scenarios of deceiving the general public about terrorism threats or alien invasions. So fake news, fake information on the Internet, takes the next leap to fake but believable artificial humans on video.

Sandra: So the fact that these things not only exist and are believable, but are also perceived as useful, does two things. The first is that it tends to undermine the trust we have in these things: we might think, well, this could have been written by an algorithm, who knows whether this is real or not - and that is one way in which it can become quite dangerous. The second is the ability to actually hijack trust. And just to mention here that Rachel Botsman has a book coming out in October - I was on a panel with her yesterday - called "Who Can You Trust?", which looks at our perception of trust and how it has been transformed in things like banking, media, politics and even dating. So the second thing that can happen here is the hijacking of trust. We might not be aware that these things are not real: say I am a bank and I have a chat bot that looks and sounds exactly like a real person trying to convince you to buy something, and I take advantage of everything I know about you to give you exactly the right cues to make you buy it - that is hijacking the very notion of trust.

Kai: So we're talking deception here, but we're also talking about the bigger problem we're now on the verge of - and we talk a lot about the post-truth society - with algorithms being able to pose as humans, not just in text conversations but soon in believable video or artificial avatar technology (and a shout out here to Digital Mike, which will be presented at Disrupt Sydney soon, and the research into digital humans and digital faces). This really puts us on a trajectory where at some point we have to ask: what can you trust on the Internet? And does it put the original utopian idea of the Internet - a democratic space with information at your fingertips and the democratisation of information - at peril? Because if you cannot believe anything on the Internet anymore, and it's harder and harder to know whether what you're looking at is actually real and produced by a real human rather than by an algorithm for malicious purposes, the very nature of the Internet might change dramatically.

Sandra: So we don't want to paint a doomsday scenario here. Most of these things have applications that can also make our lives much better. But as these things come to the forefront, there are questions we need to be asking now and solutions we need to start thinking about while these things are developing, not when they are already widespread.

Kai: But we learned something else important from the whole story about fake reviews and fake news: a lot of the time AI is used to give the appearance of human intelligence. We don't have actual human intelligence, so I propose not to talk about artificial intelligence, which gives the illusion that we have created intelligence by artificial means. What we really have here is fake intelligence. We have machines pretending to be humans, algorithms that can do things as if they were human - but they're not actually; there's no there there, oftentimes no human behind this. So I see two areas: one is using these algorithms to actually solve problems that help humans do what they're doing in their jobs; the other is algorithms used to pose as humans, to give the illusion of a human interacting with another human, which is essentially deception. And that brings us to our last story, about regulating AI.

Sandra: So our last story for today is from The New York Times and it's called "How to regulate artificial intelligence". It's written by Oren Etzioni, who is the chief executive of the Allen Institute for Artificial Intelligence. And he lays out a number of rules to try to start a conversation around the impact these technologies will have on humanity and how we can develop a set of rules, a set of regulations, that might help mitigate some of the negative effects of this technology.

Kai: Oren takes as his inspiration the three laws of robotics formulated by Isaac Asimov in 1942 and formulates three rules for reining in some of the negative effects of AI as we know it.

Sandra: Isaac Asimov wrote the three laws of robotics back in 1942: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings, except where such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.

Kai: So Oren's three rules are slightly different, but they're nonetheless significant. The first one says an AI system must be subject to the full gamut of laws that apply to its human operator. So the idea is that we shouldn't attribute agency to an AI and then be able to absolve ourselves and say it wasn't me, it was the AI. He says no no no, we can't do it like that. These are algorithms, they are operated by someone, and whoever operates the AI must be responsible for the effects it creates.

Sandra: At first sight this seems like a really sensible thing to do and something that is very desirable. But if we take into account the conversation we had earlier about what artificial intelligence is, this is not as easy as it seems. And it might actually be something we won't necessarily be able to achieve, even if we agree that it is something we must do.

Kai: But on the other hand, say you have a bank employing AI to make decisions about mortgages or investments for customers and something goes wrong - wouldn't it be sensible to hold the bank accountable and tell them that they can't just say, you know, it's the computer, it's the algorithm that did it and we don't know how it works? That points to what we've discussed previously: a lot of the time these machine learning algorithms are black boxes. We don't quite know how they come to a certain decision or a certain output. And so when those outputs have negative consequences and we hold whoever operates the algorithm accountable, wouldn't that mean that they have to take this risk seriously, and maybe decide that in certain areas of their operations they shouldn't use algorithms they cannot fully understand and whose behaviour they cannot fully predict?

Sandra: So probably the only solution is not employing the algorithm in the first place, because the problem you're highlighting is that a lot of these deep neural networks result in systems with what's called low interpretability, meaning that we have a lot of difficulty figuring out how they have reached the decisions they have. And this creates a number of risks. The first is that these machines may have hidden biases that come not from any intent, let's say, of the bank to deceive its customers, but rather from the data that was used to train these systems and that has made its way into the resulting algorithm. That means we might have racial or gender or ethnic or all sorts of other biases - and we've discussed many of them previously - but they do not reveal themselves explicitly. They are embedded in the subtle interactions that happen inside the deep neural network.
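
To illustrate the point about biases sneaking in through the training data rather than through anyone's intent, here is a small sketch with entirely invented numbers: a hypothetical lending model that never sees the group label, yet reproduces a historical bias through a correlated proxy feature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented scenario: historical loan decisions were biased against group B.
# The model never sees the group label, only a postcode that correlates with it,
# yet it reproduces the bias through that proxy.
rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                        # 0 = group A, 1 = group B (never a feature)
postcode = (0.7 * group + rng.random(n)).round()     # proxy loosely correlated with group
income = rng.normal(55 - 5 * group, 10, n)           # modest real difference in income
biased_rejection = (group == 1) & (rng.random(n) < 0.4)
approved = ((income > 48) & ~biased_rejection).astype(int)   # the biased historical decisions

X = np.column_stack([income, postcode])              # only income and postcode are used
model = LogisticRegression().fit(X, approved)
predicted = model.predict(X)

print("predicted approval rate, group A:", round(predicted[group == 0].mean(), 2))
print("predicted approval rate, group B:", round(predicted[group == 1].mean(), 2))
```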

Kai: So I agree. While we might hold someone responsible for negative effects that those algorithms might create, those effects are not always obvious, right?

Sandra: There is also a lack of verifiability of where the algorithm will ultimately land, because if it is not built on explicit logic or explicit rules, as the first generation of AI was, it is quite difficult if not impossible to prove with any certainty where its decisions will land.

Kai: And that's why many people have warned against using machine learning and deep learning algorithms in mission-critical areas where 100 percent accuracy is important. So if we go back to the article this week about Google increasing the accuracy of ad targeting - using it there is fine, because that is by its very nature a hit-and-miss business: a lot of the time we're just putting ads in front of people who won't click them. But if we increase the accuracy by a small margin and do this millions of times, it adds up to real money. It's fine there, but when we make judgements about who to incarcerate or who to grant bail to, as we discussed previously, we don't want hit and miss, right? We want good judgement, which is what these algorithms cannot provide.
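
A quick back-of-the-envelope illustration of that "small margin times millions of impressions" point, with invented numbers:

```python
# Invented numbers, purely to illustrate the scale effect.
impressions = 500_000_000        # ads shown
baseline_ctr = 0.010             # 1.0% of viewers click
improved_ctr = 0.011             # 1.1% after slightly better targeting
revenue_per_click = 0.50         # dollars earned per click

extra_clicks = impressions * (improved_ctr - baseline_ctr)
extra_revenue = extra_clicks * revenue_per_click
print(f"extra clicks: {extra_clicks:,.0f}")      # 500,000
print(f"extra revenue: ${extra_revenue:,.0f}")   # $250,000
```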

Sandra: And unfortunately, even if we should discover that we granted parole to the wrong person, or did not give a credit line to someone who could have taken their small business very far - if these systems make errors, diagnosing them is quite difficult, and correcting them can be extremely difficult (if not impossible), because we need to work with the underlying structure that led to that outcome, and that is unimaginably complex. Once we change the conditions under which we train the system, its predictions again become very hard to understand and analyse.

Kai: And so we end up with a mixture of good old fashioned AI and machine learning, once we start modelling all kinds of rules on top of machine learning about things that shouldn't go wrong - which just leads back to the problems we weren't able to solve in the first round of AI.

Sandra: So the first rule, whilst generous in its aspirations, might be very difficult: you can't say "my AI did it", but unfortunately people will say "I don't know how my AI did it".

Kai: So the second rule harks back directly to our fake review problem: an AI system must clearly disclose that it is not human. In other words, AI must not be used to deceive humans into believing that they are dealing with another human.

Sandra: So all of the examples we brought up previously, about fake Twitter bots or fake news, would fall under this rule - including the famous case of this happening in academia.

Kai: Yes, so in 2016 a group of academics at Georgia Tech used IBM's Watson to create an artificial tutor named Jill Watson, who successfully advised a large number of students before it was disclosed to them that they were actually dealing with a bot and not a real human tutor. That created some controversy, but also some insight into how believable conversations with a conversational bot can be in a strictly bounded area, such as the topic area of a unit of study or a course.

Sandra: Which brings us to our third rule: that an AI system cannot retain or disclose confidential information without explicit approval from the source of that information.

Kai: So at first sight I thought, OK, why this third rule? We could just subsume it under the first rule that everything has to comply with the law. But if you think about it, a lot of what we talk about in terms of privacy and confidential information might fall outside of the law. And it also shows that this is a problem that is quite significant and therefore worth formulating in its own right.

Sandra: So if we think of things like Amazon's smart speakers, those are things that can not only observe what is happening around them and record that information, but also actually elicit information from you by asking you questions, by providing feedback, or by inadvertently stumbling upon information, such as in the case of the vacuum cleaners we discussed.

Kai: Yeah the little Roomba that maps the layout of your house while it cleans your floor...

Sandra: ...but inadvertently learns information about how you use your house, where you spend most time, which rooms you heat and so on and so forth. These sound like sensible rules that, whilst they might be aspirational, are things we could subscribe to. But what about the calls to basically not regulate this? This is an incipient technology, we're just figuring out how to make the most of it, we're on the exponential path of its development - should we not regulate it, just let it go wild and see where innovation and discovery take us?

Kai: Yes, so this is not without controversy. There are calls to let it run. There's an article in The Huffington Post which sets out the pros and cons of regulating AI. There's a guy named Xavier Amatriain who basically makes the point that, as a fundamental technology that might underpin so many different services, AI should just be unregulated - everyone should be able to experiment and do just about anything with it. And he mentions other technologies such as biotech and nanotech and nuclear energy. I think he misses the point though, because some of those technologies, such as nuclear or biotechnology, are quite heavily regulated. But he also makes the point that if Australia or the US were to regulate this, other countries might not. And that might just create an uneven field of competition where some companies are able to use the technology and others are not.

Sandra: So even if we were to take steps to slow down the progress of AI, or to think more clearly through its implications, other nations might regulate it differently - the US is making huge strides in AI, but so is China at this point. And this of course can be used not only in consumer products and on platforms that change the way we interact with the banking system or the way we run our lives, but it can also be weaponised, and there are a whole host of nefarious applications of this technology.

Kai: Yes, but my bigger point would be: in the current hysteria around "the robots are coming for us", and the whole anthropomorphism and hype around the successes of AI, I can't see anyone actually taking up the mandate to regulate. So at this point in time I think formulating certain rules as a set of ethics for the use of machine learning is an entirely sensible thing to do. Creating awareness and educating people about the differences between these technologies, what they can and can't do, and what the implications are of applications of fake intelligence - creating fake reviews and things like that - I think is very important. But I also think we haven't fully explored all the good things these technologies can do going forward, in image recognition, or in medicine to improve the accuracy of a cancer diagnosis, for example. So there are heaps of things where this can actually take us further. But I think we should be aware of the downsides and have a conversation about it. I think this is the most important thing - a conversation that goes beyond misguided alarmism.

Sandra: So we offer this as a starting point for those kinds of discussions, and they don't have to be in the abstract. They can be around things like the fake reviews we've just seen, or other really tangible impacts of machine learning and artificial intelligence. We can talk about the safety of autonomous vehicles - things that are impacting us as we speak.

Kai: And robotics...robots.

Sandra: Robot of the week time?

Kai: Yes.

Audio: Robot of the week.

Sandra: The Laundroid. Panasonic's Laundroid. This is really one that should be up for the Juicero awards - the most overhyped, overpriced technology that does something you could easily do with existing things.

Kai: So imagine this: there's a wall in your bedroom with a drawer where you drop your dirty laundry. Behind it is a washing machine that picks its own detergent. There's drying technology. Everything happens behind this wall. There are robotic arms that then pick up the freshly cleaned and dried laundry, fold it and put it into its right place in the same wall, which also doubles as your wardrobe.

Sandra: Did this receive 118 million dollars just like Juicero which is also closing down this week?

Kai: I don't think Panasonic needs that kind of funding, but it will put you out of pocket 25,000 British pounds for this installation in your bedroom - and supposedly you also need the space to put this thing somewhere - so I can see this being a best seller.

Sandra: Definitely one for the Juicero Awards.

Kai: We'll keep an eye on that. And that's all we have time for.

Sandra: Thanks for listening.

Kai: Thanks for listening.

Outro: This was The Future, This Week made awesome by the Sydney Business Insights team and members of the Digital Disruption Research Group. And every week right here with us our sound editor Megan Wedge who makes us sound good and keeps us honest. You can subscribe to this podcast on iTunes, Soundcloud, Stitcher, or wherever you get your podcasts. You can follow us online, on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news you want us to discuss please send them to sbi@sydney.edu.au.
