This week: why it’s too early to certify AI, and phantom traffic jams and fungi sandals in other news. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

The Turing Certificate and overcoming our mistrust of robots in our homes and workplaces

O’Neil Risk Consulting and Algorithmic Auditing (ORCAA) certification

Limits and challenges of Deep Learning

Paper by Gary Marcus

Some background on Cathy O’Neil’s ethical matrix

Future bites / short stories

Estonia will roll out free public transit nationwide

One single automated and connected car can make driving better for everyone

Mushroom and the future of fashion


You can subscribe to this podcast on iTunes, Spotify, SoundCloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Our theme music was composed and played by Linsey Pollak.

Send us your news ideas to sbi@sydney.edu.au.

Disclaimer: We'd like to advise that the following program may contain real news, occasional philosophy and ideas that may offend some listeners.

Intro: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter and I'm Kai Riemer. And every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful, and things that change the world. Okay let's start. Let's start.

Sandra: Today in The Future, This Week: why it's too early to certify AI, and phantom traffic jams, and fungi sandals in other news. I'm Sandra Peter, I'm the Director of Sydney Business Insights.

Kai: I'm Kai Riemer, Professor at the business school and leader of the Digital Disruption Research Group. So Sandra what happened in the future this week?

Sandra: Our main story today is about overcoming our mistrust of robots in our homes and workplaces. It's actually to do with a recent keynote that Kai and I attended at a CEDA lunch event on artificial intelligence - CEDA being the Committee for Economic Development of Australia. This particular event was one of their series on artificial intelligence, and it featured Dr Alan Finkel, Australia's Chief Scientist, who tried to tackle the problem of how we ensure we can trust the algorithms in our lives. You can actually read most of his speech in an article that appeared in The Conversation entitled "Finkel overcoming our mistrust of robots in our homes and workplaces". So we thought today we'd have a bit of a discussion about how we think about ethics in algorithms, and go through Dr Finkel's proposal for how we could practically go about this issue.

Kai: So he began his speech by outlining that our world is fundamentally governed by trust. We trust in the competence and benevolence of other people, we trust in the reliability of technical systems - cars, elevators - we trust a lot in our daily lives, and he provided a few examples. What is missing, he says, is that people currently do not trust artificial intelligence and robots, because it's new, it's unfamiliar, and there has been a lot of news recently around Facebook and Cambridge Analytica and election tampering. So the proposal he makes is to come up with a certification of trust. The point is that AI systems are too complex for any lay person or consumer to understand, and so the best way forward is to have an expert certification that assures the customer that these systems behave the way you would expect them to behave. So let's hear from Dr Finkel himself.

Dr Finkel audio: What we need is an agreed standard and a clear signal so that we individual consumers don't need the expert knowledge to make ethical choices and so that the companies know from the outset how they are expected to behave. So my proposal is a trustmark, its working title is the Turing Certificate of course named in honor of Alan Turing. Companies would be able to apply for a Turing certification and if they meet the standards and comply with the auditing requirements they would then be allowed to display the Turing stamp and then consumers and governments could use their purchasing power to reward and encourage ethical AI.

Sandra: And before we go into this conversation let's just say it's fantastic to see someone like Alan Turing being honoured for this. Turing was one of the most influential people in the development of theoretical computer science and in how we think about algorithms and computation. And let's remember the Turing Machine. Let's also remember that the Turing Test was meant to establish whether a computer managed to deceive us into believing that it's actually a human being.

Kai: So in that respect naming it the Turing Certificate is kind of ironic.

Sandra: Yes indeed. So let's see what we are actually awarding this for, and let's remember that these are only the latest calls for establishing whether algorithms are ethical or not. A couple of years ago the Obama White House was already calling on companies to directly audit their algorithms. So let's unpack this current round.

Kai: So Dr Finkel proposes that providers that are using AI systems could submit to a voluntary auditing process, whereby the quality of their products would be audited as well as the ethical standards of the organisation itself. And we also want to mention that he is not alone with this proposal. Cathy O'Neil, the author of the book "Weapons of Math Destruction" - we have featured her on the podcast previously, and Sandra and I have actually recorded an interview with Cathy that will come out as a special podcast a bit later in the year - has recently launched her own certificate, which she calls ORCAA, standing for O'Neil Risk Consulting and Algorithmic Auditing. She basically implements a version of what Dr Finkel is calling for: she proposes to audit AI systems for accuracy, bias and fairness, and would then award a seal that would go on a website or an app.

Sandra: Okay so let's unpack this. Let's first look at whether establishing trust through a certificate actually works.

Kai: Well, we've been in this place before, when e-commerce, the Internet and online shopping were new. People didn't trust online shops with their credit cards, and a whole initiative came up to certify websites and put so-called trust seals on them, and it has been shown in research that these trust seals work.

Sandra: But don't they work a little bit too well?

Kai: Yes, so it has been shown that people do indeed trust these seals, and when you have a trust seal people are more likely to use a website and submit their credit card information. The problem though was that it didn't really matter whether the seal was real or fake: you could make up a seal that looks like a real certification and people would still trust it. So, by and large, providing certification is largely ineffective when anyone can come up with their own seal and put it on their website or app, so there's a question mark around that.

Sandra: And this happens of course even when we think about organic foods, or fair trade coffee, or things that are healthy for you. Any sort of star rating or certificate of provenance actually signals that trust, whether or not the organisation behind it is a legitimate one and whether or not that certification is recognised.

Kai: And it certainly makes a difference whether a seal is really widely used, like the Australian energy or water rating, or whether we talk about rather new and obscure seals for AI or, at the time, online shopping. I think there's a difference there, but that's not the topic we really want to unpack.

Sandra: So what we really want to look at today is whether or not you could actually certify that an algorithm or a product, or indeed a company, builds ethical AI.

Kai: So let's hear from Dr. Finkel how he thinks about his certification proposal.

Dr Finkel audio: True quality is achieved by design not by test and reject. It's an important principle.

Sandra: And on the face of it this actually sounds quite reasonable. We think about designing in ethical ways in a range of other industries and fields, for example in how we design drugs or treatments for patients. Why not in AI?

Kai: Yeah, so it makes complete sense to argue that good design should lead to reliability, to accuracy, to fairness, to all the kinds of things we want in these systems. And it is important that we adhere to proper standards, for example in selecting our training data - we've highlighted problems with that, and how bias can enter into deep learning algorithms that way.

Sandra: Or in how we think about what the algorithm optimises for - Facebook's algorithm, for instance, might optimise for engagement rather than for civil discourse.

Kai: But what we want to stress here is that the deep learning algorithms at the heart of this current wave of artificial intelligence do not adhere to the same quality principles as traditional computing did. The argument is that we're actually dealing with a fundamental shift in computing paradigm, and that's what we want to unpack here, because we want to show that deep learning in principle cannot adhere to the same quality standards as traditional algorithms. And that's very important to understand when it comes to the call for assuring quality through design.

Sandra: So let's unpack a little how we think about this in the traditional computing paradigm and how we would think about it in this new paradigm. In traditional computing it is quite easy to think of an algorithm as a finite set of steps designed to achieve a goal. We can embed this in a specific set of rules that the algorithm abides by, and thereby audit the algorithm for how well it complies with those rules. So for instance, should we use one of these algorithms to decide whether or not somebody should be given a bank loan, and should they fail to obtain that loan, we can audit the algorithm and say exactly which rules or criteria led to that person not getting the loan.

Kai: So an algorithm is literally defined in mathematics and computer science as an unambiguous specification for how to solve a class of problems. And that's important, because in traditional computing we build deterministic sets of rules so that we can audit and debug: when an error occurs we can go back, understand why this error occurred, and then fix it - and we have reliable methods for doing this.
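To make that contrast concrete, here is a minimal sketch (invented for these show notes, not from the episode; the criteria and thresholds are hypothetical) of the kind of rule-based loan decision being described. Every branch is an explicit rule, so an auditor can trace exactly why an applicant was declined.

# Hypothetical rule-based loan decision: every criterion is an explicit,
# inspectable rule, so a declined applicant can be told exactly why.
def decide_loan(income, existing_debt, years_employed, credit_defaults):
    reasons = []
    if credit_defaults > 0:
        reasons.append("past credit default on record")
    if income < 50_000:
        reasons.append("income below $50,000 threshold")
    if existing_debt > 0.4 * income:
        reasons.append("debt exceeds 40% of income")
    if years_employed < 2:
        reasons.append("less than 2 years in current employment")
    approved = len(reasons) == 0
    return approved, reasons  # the list of reasons is the audit trail

approved, reasons = decide_loan(income=45_000, existing_debt=20_000,
                                years_employed=5, credit_defaults=0)
print(approved, reasons)
# False ['income below $50,000 threshold', 'debt exceeds 40% of income']

Change a rule and the decision changes in a way anyone can inspect - that is what quality by design can mean for this style of computing.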

Sandra: So a hundred percent of the ethical behaviour of that algorithm could be determined by design.

Kai: We could then debate whether we implemented the right rules, but we could certainly have a conversation about what went into building this algorithm, and we can fully understand how the algorithm makes the decision. We want to stress here that we're discussing the principles of this way of computing - of course in large-scale systems these things become a matter of complexity, but in principle this is possible. But this changes when we go to deep learning.

Sandra: When we talk about deep learning, rather than having an explicit set of rules encoded in the algorithm that decides whether you get the loan or not, the idea is to try to derive certain patterns from data that we already have. So what the algorithm needs is a large set of labelled data with all the characteristics of past customers, together with the outcomes: which people received a loan in the past and which did not, and of the people who received a loan, which of them defaulted.

Kai: So the way this works is for the neural network that forms this deep learning algorithm to associate certain inputs - the customer data - with certain outputs - no loan, good loan, bad loan - which can then later be used to classify the data of a new customer who comes along. The point though is that the way in which the deep learning neural net makes that association between input and output is a black box. There is no way to fully understand how these associations are being made. The important thing is that we can use this to classify the data of a new customer and determine whether this customer should be given a loan, and on average this works really, really well: when a new customer comes along with their data set, the algorithm will in most cases determine with good accuracy whether this customer should or should not be given a loan.
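As a rough sketch of the deep learning version of the same decision (again invented for these show notes - the data is random and the feature names are assumptions, not a real credit model), a small neural network is fitted to labelled past outcomes and then asked to classify a new applicant. The decision logic now lives in learned weight matrices rather than in explicit rules.

# Minimal sketch: a neural network learns the loan decision from labelled
# examples instead of explicit rules (features and data are invented).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_past = rng.normal(size=(500, 4))   # e.g. income, debt, tenure, age (scaled)
y_past = (X_past[:, 0] - X_past[:, 1] > 0).astype(int)   # 1 = repaid, 0 = defaulted

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_past, y_past)            # the "rules" are now learned weights

new_applicant = rng.normal(size=(1, 4))
print(model.predict(new_applicant))  # a yes/no comes out...
print(model.coefs_[0].shape)         # ...but the logic lives in weight matrices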

And we are using these kinds of algorithms in many different domains - speech recognition, image recognition - and results have shown that deep learning can recognise objects in pictures, often with greater accuracy than humans. But there are problems.

Sandra: The problem is that, given the nature of these algorithms, you'll never know exactly how a new customer will be classified. And if that customer falls outside of what you have trained the algorithm on, you will not know whether or not your algorithm has performed well. That customer will be classified regardless - they will either receive or not receive a loan - and you won't be able to audit how well the algorithm has performed in this additional instance.

Kai: At a fundamental level deep learning is non-deterministic. Where traditional algorithms are deterministic - a set of rules will lead to the same, predictable outcome every time - deep learning is of a very different nature. And we want to highlight here a paper recently published by Gary Marcus, the former head of AI at Uber and a professor at New York University, who has outlined this in much detail - we're going to put a link in the show notes. He says that AI can fail spectacularly, and in most instances we don't even know that it happened. But we have seen some instances where we can easily see the nature of these types of failures.

Kai: We've previously talked about a deep learning algorithm used in English language tests where Irish accents couldn't be understood, image recognition systems failing spectacularly by classifying black people as gorillas, or recent research which has shown how easily deep learning can be fooled.

Sandra: Mistaking a cat for broccoli when a couple of pixels have been changed in the image.

Kai: Or a turtle for a rifle for example.

Sandra: And it's quite easy to see how that algorithm has failed, and in some instances even how that algorithm could be improved, for instance by changing the dataset on which it is trained. However in more complex instances in which these algorithms are used in our day to day lives - say to grant people parole or to grant them a loan - it is increasingly difficult to understand in what ways they might be failing us, even though they might work quite well in most instances.

Kai: And let's stress this again: where a traditional algorithm, when it's being fed input data that it can't recognise, will come up with an error message or an exception, a deep learning system will always give an output. It has no way of telling you that the new data record fell outside of its training space and that it can't recognise it. It will always give you a solution, and therein lies the problem. When an error occurs we don't know that this error occurred, unless we can inspect the image and clearly see that this is a cat and not broccoli. So in most instances this presents a problem. What we're really dealing with is a form of fundamentally non-deterministic or non-reliable computing which, make no mistake, works really, really well in most instances, when the data falls within the training space. But when something new comes up - a new situation arises, or something that is slightly different from what it was trained with - we have no way of knowing that the algorithm made a mistake.
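Continuing the hypothetical classifier from the sketch above, the silent-failure point looks like this: a traditional routine can refuse input it was never written for, but the trained model happily returns a label - and a confident-looking probability - for data far outside anything it has seen.

# A traditional routine can reject input it doesn't recognise...
def decide_loan_strict(record):
    if any(v is None or abs(v) > 10 for v in record):
        raise ValueError("input outside the range this rule set was written for")
    # ... explicit rules would go here ...

# ...but the trained model from the sketch above never says "I don't know".
weird_applicant = np.array([[250.0, -300.0, 99.0, 0.0]])  # nothing like the training data
print(model.predict(weird_applicant))        # still outputs 0 or 1
print(model.predict_proba(weird_applicant))  # and may look very confident doing it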

Sandra: Which brings us back to Dr Finkel's conversation, and he asks the question of whether this Turing stamp should be granted to products or whether it should instead be granted to organisations. We've seen that it is actually quite difficult to establish whether a specific algorithm is reliable or not.

Kai: Yes, and I would go as far as to say that algorithm is really the wrong word here, because it gives the wrong association. Surely there is an algorithm involved in executing the deep learning net, but what we end up with as a product - the deep learning system - does not behave like an algorithm at all. So using the word algorithm might already send the wrong message of reliability, as does calling them artificial intelligence. Because let's remember that there's no thinking or abstraction going on, so learning can only ever happen in a brute-force way from a training dataset, not by reflecting on a mistake we have made, as we would do as humans. So there's neither intelligence at work nor a deterministic algorithm at work. Something else is at work, and we need a new way of dealing with this and new quality criteria. So it's unhelpful to suggest that we can achieve the same reliability in principle. We need to work towards making these systems more reliable, but they work on a different set of principles, and this is why both Cathy O'Neil and Dr Finkel propose to look not just at the products but more broadly at the organisations that provide those products.

Sandra: So Cathy O'Neil indeed has a very practical example of how you could start thinking about these issues. Her company has created an ethical matrix that would allow you to audit a company. The ethical matrix would cover things like the profit orientation of that company and the quality of the data that the company is using to train its algorithms.

Kai: Is there a mechanism to learn from past mistakes and feed that back into retraining the algorithm?

Sandra: So to help a company think through these consequences, Cathy's matrix is made up, on one axis, of about half a dozen traits - things like accuracy, consistency, bias, transparency, fairness and timeliness - and on the other axis she looks at stakeholders: who are the people who care whether this algorithm works or not? Who cares if this algorithm fails? Who would get hurt if something goes wrong? These questions actually allow her to reveal problematic areas, say discrimination against a certain class of people. So what she ends up with is a matrix of green cells, meaning things seem to be going well; yellow, meaning there could be a problem here; or red, meaning that harm is actually being done in some capacity to a certain group of people. What would then ensue is that someone like Cathy O'Neil would work with the organisation to try to remove as many red boxes as possible from the matrix and to improve on the yellow ones. And we recognise this as probably one of the few ways in which you could go about doing this in a very practical way.
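As a very rough sketch of what such a matrix might look like as data (the stakeholders, traits and ratings below are our own invention for illustration, not Cathy O'Neil's actual methodology), the idea is simply traits on one axis, stakeholders on the other, and a traffic-light rating in each cell that an auditor then works to improve.

# Hypothetical ethical-matrix sketch: stakeholders x traits, traffic-light rated.
TRAITS = ["accuracy", "consistency", "bias", "transparency", "fairness", "timeliness"]
STAKEHOLDERS = ["applicants", "lender", "regulator", "general public"]

matrix = {(s, t): "green" for s in STAKEHOLDERS for t in TRAITS}
matrix[("applicants", "bias")] = "red"                 # e.g. a protected group is disadvantaged
matrix[("general public", "transparency")] = "yellow"

def audit_priorities(matrix):
    """Cells an auditor would work on first: red before yellow."""
    severity = {"red": 0, "yellow": 1}
    flagged = [cell for cell, colour in matrix.items() if colour != "green"]
    return sorted(flagged, key=lambda cell: severity[matrix[cell]])

print(audit_priorities(matrix))
# [('applicants', 'bias'), ('general public', 'transparency')]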

We also want to highlight the real difficulty of doing this. One of her recent clients took about four months to audit - that is one company taking more than four months to audit. Think through the resourcing: how would you find enough people and enough time to audit the sheer number of companies using these products and services, and then the diversity of products and services within just one company like Google or Facebook - what would that entail in terms of effort? Another consideration is who would set the standard of what is ethically acceptable: would an Australian ethical certificate be equivalent to an American one?

Kai: Or a European one for that matter, given that privacy, for example, takes on a very different role in the European context, as we see with recent regulation. To sum up, the point that we're making is that we're standing at the beginning of a new computing paradigm - best described not as artificial intelligence but probably as something like probabilistic or non-deterministic computing - that does not submit to the same quality-by-design criteria as previous algorithmic or rule-based computing did. It needs new quality standards, a new way of assuring that whatever we do in the world actually gives us the intended outcome, and also a discussion about whether, given the non-deterministic nature of deep learning, there are areas where we would not want to use these technologies. So we think that rather than applying traditional criteria to this, we have to actually rethink it. And of course there are people like Gary Marcus, who fundamentally understands deep learning and has worked with these algorithms for years, who point out that this is largely unresolved. But that's the conversation that has to happen at this point in time if the field is not to plunge into what he calls another AI winter - suffering from overblown expectations and then being taken down by problems that arise because we have unclear expectations of what these things can do.

Sandra: And since Dr Finkel's address and Cathy O'Neil's new endeavours, many have pointed out the potential pitfalls: the fact that however well we might do this, we might not be asking the right set of questions or looking at the right set of stakeholders, and that many of these undertakings are fundamentally subjective. We want to point out that having this conversation is the really important first step to take.

Kai: Well that was a long story. Given how long that was, we want to end with our future bites, our short stories, and we actually have three good news stories this week, I believe. So Sandra, what is one thing that you learned this week?

Sandra: So my first insight is from CityLab this week: “Estonia will roll out free public transport nationwide.”

Kai: Say that again.

Sandra: The country of Estonia will have free public transport in the entire country.

Kai: Shit that's an unusual move.

Sandra: That is a very unusual move. A couple of weeks ago there was talk about Paris looking into the possibility of getting rid of tickets on its metros and buses, and that would have made Paris the world's largest area for free transit. But it seems there is actually a bigger scheme afoot: Estonia, from the 1st of July, will have 24/7 free public transport, making it the largest free transit area because it stretches across the entire territory of the country.

Kai: How would they fund this? Do they not lose out on a lot of money that way?

Sandra: So that's what you'd think. But actually what they are doing is extending this from one city, Tallinn, to all other Estonian cities and to travel between Estonian cities.

Sandra: And it turns out that only about 20 percent of the bus network's costs were recouped from fares. So it's really easy to see how, if you got rid of all the ticketing machines and everything that goes into checking those tickets and enforcing that system, it actually seems like a burden worth losing.

Kai: So what you're saying is that the money that they recover from ticket sales is basically eaten up by the infrastructure of maintaining a ticketing system in the first place. And otherwise the system is already subsidised.

Sandra: Yep! So for Estonia this actually means that making the whole system fareless will cost only about 15 million dollars. That's not a lot.

Kai: So I think this is really interesting because it goes to show that Estonia understands its public transport as an infrastructure. We have seen far too many times, in this country Australia as well, that trains and buses are treated more like a product or service that has to recoup its costs, driving up prices and therefore compromising their utility for passengers. So it's a radical move that Estonia is making, but it is actually creating an infrastructure that will have positive flow-on effects by increasing people's mobility, by getting cars off the roads, and hopefully also by having a positive impact on the environment.

Sandra: Add to that the fact that this might actually cut down on delays and travel times for most people in that country. Add to that the fact that, at its root, for Estonia this is actually a form of fiscal redistribution. It will allow for a massive democratisation of mobility for everyone living in Estonia, and it might actually prompt other governments to consider this.

Kai: So the reason why we bring up stories like this is because they're interesting experiments, so we'll keep an eye on this and see what flow-on effects this might bring to the country - tourism, jobs, mobility - and whether it would be a blueprint to apply to countries with a different geography and a different set of problems, such as Australia. Could we imagine introducing free public transport in Sydney, for example?

Sandra: I can definitely imagine it.

Kai: Sandra we're talking free public transport, not free Ubers right?

Sandra: And speaking of free Ubers, I know one of your stories concerned traffic jams.

Kai: So this is an interesting one from Wired magazine, which reports on the latest research coming out of the University of Michigan, and it concerns so-called phantom traffic jams. These are the kind of traffic jams that appear out of nowhere, and they are related to human behaviour. They come about something like this: someone hits the brakes for whatever reason - there was something on the road - basically, they brake. The person behind them brakes, and brakes even harder, because they noticed the car in front of them a little too late. And so you end up with a cascade of braking, and someone at the end of that cascade actually has to come to a full stop, and so does everyone behind them, creating a traffic jam that appeared for no apparent reason. What this research found - and they did this through simulations - is that if you insert into that cascade a single automated or connected car, one with an automated braking system for example, you can smooth out the entire effect and avoid the phantom traffic jam. The point being that automated braking systems can prevent the cascade from magnifying.
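Here is a toy model of that braking cascade (our own sketch, not the simulation from the study; the reaction factors are assumptions). Each human driver over-reacts to the speed drop of the car in front, so a small disturbance grows down the line, while one car with a smoother automated braking response knocks the disturbance back down instead of amplifying it.

# Toy braking cascade: each car reacts to the speed drop of the car ahead.
# Human drivers over-brake (gain > 1); one automated car brakes smoothly (gain < 1).
def propagate_brake(initial_drop_kmh, gains):
    """Return the speed drop experienced by each successive car."""
    drops = [initial_drop_kmh]
    for gain in gains:
        drops.append(drops[-1] * gain)
    return drops

HUMAN, AUTOMATED = 1.2, 0.5   # assumed over- and under-reaction factors

all_human = propagate_brake(10, [HUMAN] * 10)
one_automated = propagate_brake(10, [HUMAN] * 5 + [AUTOMATED] + [HUMAN] * 4)

print([round(d) for d in all_human])      # the drop keeps compounding down the line
print([round(d) for d in one_automated])  # the automated car resets the cascade

In the all-human line the drop compounds until somebody is forced to a near stop; insert one smoother-braking car partway down and the disturbance is knocked back, so the wave grows far less.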

Sandra: So what you're really saying here is that if just one guy bought a Tesla then we would all benefit from the technology that's in that car and get rid of some of the traffic jams on our streets.

Kai: So yes, indeed, a University of Illinois study showed that if only one in twenty cars was at least partially automated with that kind of technology, we could largely do away with these kinds of phantom traffic jams in stop-and-go traffic. So, you know, that rich guy in the big car with the latest technology might actually benefit everyone - in that way the article calls it a different form of trickle-down economics. There is, by the way, an easier way to get rid of it: if every one of those human drivers adhered to the prescribed safety distance, the phantom traffic jam would also not occur. But you know how likely that is to happen. So as more of these semi-autonomous technologies make their way into our cars, be it Volvo, Tesla or otherwise, we might all benefit. That's the point of the article.

Sandra: And now to mushrooms. My last short for today actually concerns mushroom sandals.

Kai: Shoes for "fun guys".

Sandra: Really?

Kai: Yeah okay.

Sandra: So this is in the trend of sustainable fashion and we're seeing more and more people come up with sustainable clothes, sustainable shoes, sustainable bags.

Kai: And this comes on the back of the realisation that, as the article says, current levels of consumer waste are frankly unsustainable. A recent study in Australia showed that consumers wear new clothes on average only four to seven times, and many consumers wear fashion items only once or twice and then discard them, basically throwing them in the waste.

Sandra: Yep, and we've spoken about waste in the fashion industry - we've actually had a whole episode on this and we'll link to it in the show notes - which is proving to be a huge problem as those clothes are no longer recycled. Whereas it turns out these shoes and this kind of clothing are actually compostable.

Kai: Or when you don't like your shoes anymore you can just eat them.

Sandra: So I have to mention, before turning to the baked mushroom sandals, that there have been a number of designers who have used these kinds of products. We've had Salvatore Ferragamo last year using a citrus byproduct material that was like silk, which they used for shirts and dresses and pants. There was a Philippines-based designer who created leather out of pineapple leaves, and a Dutch designer who created a mycelium dress that looks like any satin cocktail dress - mycelium being the interlocking root system that sprouts those forests of mushrooms in your backyard after it rains. And this fungi fashion has really been a trend, not only in the sandals in this article: we've seen a Microsoft Artist-in-Residence who grew her own wedding dress out of this material, we've seen biodegradable light fixtures made out of the same mycelium, and we've seen leather made out of it. But this time it's a prototype shoe that combines mushroom, agricultural waste and fabric scraps, and Gillian Silverman is actually a graduate of the University of Delaware. And what we've got is an amazing sandal that is non-toxic, biodegradable and, of course, not alive.

Kai: So what we're saying is, when you are travelling through Estonia on one of your free bus rides and you're sitting in one of those phantom traffic jams and you're getting hungry, you can just eat your shoes. And that's all we have time for today. Thank you for listening.

Sandra: Thanks for listening.

Outro: This was The Future, This Week. Made awesome by the Sydney Business Insights Team and members of the Digital Disruption Research Group. And every week right here with us our sound editor Megan Wedge who makes us sound good and keeps us honest. Our theme music is composed and played live from a set of garden hoses by Linsey Pollak. You can subscribe to this podcast on iTunes, Stitcher, Spotify, SoundCloud or wherever you get your podcasts. You can follow us online, on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news that you want to discuss please send them to sbi@sydney.edu.au.
