This week: why the hard questions go unanswered, the road for self-driving cars seems rockier than we thought, and robolawyers. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

At this year’s SXSW, the hard questions have all gone unanswered

Rocky self-driving car progress

Robolawyers

Self-driving cars’ spinning-laser problem

The 2,578 problems with self-driving cars

Using ritual magic to trap self-driving cars

March of the robolawyers

People slamming into law-abiding autonomous cars


You can subscribe to this podcast on iTunes, Soundcloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Send us your news ideas to sbi@sydney.edu.au.

For more episodes of The Future, This Week see our playlists.

Introduction: The Future, This Week. Sydney Business Insights. "Do we introduce ourselves?" "I'm Sandra Peter." "I'm Kai Riemer." "Once a week we're going to get together and talk about the business news of the week." "There's a whole lot I can talk about." "OK, let's do this."

Sandra: Today in The Future, This Week: why the hard questions go unanswered, the road for self-driving cars seems rockier than we thought, and robolawyers.

Sandra: I'm Sandra Peter. I'm the Director of Sydney Business Insights.

Kai: I'm Kai Riemer. I'm Professor here at the business school. I'm also the leader of the Digital Disruption Research Group. So Sandra what happened in The Future, This Week?

Sandra: First, South by Southwest Interactive happened. South by Southwest is a huge annual conglomeration of festivals, conferences and other events around film, around interactive media, around music. But a very large part of it, South by Southwest Interactive, is focused on emerging technology, which has earned the festival a reputation as a breeding ground for new ideas and creative technologies. This is where Twitter first appeared in 2007, where Foursquare appeared in 2009, and where we had Meerkat a couple of years ago. And this year South by Southwest came with a lot of sessions on workforce automation, the surveillance state and the future of the Internet.

Kai: And robots.

Sandra: And robots. An article in The Verge reported on a series of talks this week that reflected the festival's really conflict-averse tone this year: there was a huge focus on creativity and on the personal lives and backgrounds of the people attending, but very little critical examination of how society will grapple with the effects of widespread automation, or discussion of the ethical dilemmas involved in ever more powerful AI taking on big roles in transportation or in medicine.

Kai: Another example mentioned was that while on one panel the early democratising force of the Internet came up, no mention was made of the fact that today white supremacist conspiracy theories are rampant on the Internet, or of the discussion about fake news and echo chambers and the way in which the Internet population seems to be ever more split into subcultures that rarely ever talk to each other.

Sandra: Or of large organisations dominating the distribution of media, with, I think, 40 to 50 percent of Americans getting their news on Facebook.

Kai: So Sandra, why do you think those conferences are unable to ask those hard questions? Why are those things not discussed?

Sandra: Well, first, I think we are increasingly seeing problems that are ever more complex. Technology is changing very, very rapidly, and any conversation around the implications of technology, whether they be ethical or around changes in the workforce, is quite a complex debate. The second reason would be the very speed of the technology: these changes are so rapid that very few people manage to keep up with them.

Kai: So you're saying it's inherent in the technology topic that a panel at a conference can only ever touch the surface of what lies beneath?

Sandra: No, we're saying that it's actually difficult to get at this, not that we shouldn't be doing it.

Kai: Right. I find that those conferences, because they're commercial conferences, rely on high-profile speakers to come to the conference, and often they rely on the sponsorship and goodwill of corporations to be part of the conference, and that this might actually impede the questions that can be discussed and asked. I've been part of a few conferences that were rather disappointing, in my perception for exactly this reason: you cannot, you know, bite the hand that feeds you. Is that a problem, do you think?

Sandra: Well, it is a problem, because it gets to the question of where these conversations should be taking place. We have increasingly complex conversations, whether that's around society grappling with the effects of technology, or the negative side effects of some of the technologies we're employing, or the ethical implications of technologies we're developing. The question is who is responsible for having or driving these conversations: is this the domain of academics, is it the domain of businesses to start these conversations, or of larger society, or even of governments?

Kai: Yes, and to what extent can we expect journalists, for example, to delve deeply into how AI works, how robots work, and therefore what they can and can't do, to critically question some of the often very far-reaching implications that are being reported, like "AI will replace humans in all parts of life" or "robots will come and take away all our jobs".

Sandra: Algorithms will remove biases.

Kai: That's right. These are all pretty stark claims which, you know, cannot be discussed in a few hundred words, and maybe not in a couple of questions at a conference panel. So are we running the risk that, the way the media and those conferences work, we cannot actually have those tough discussions?

Sandra: The answer to that question might increasingly be yes. These questions are too complex to answer in half-hour sessions. I think there is a huge role to play for universities, for instance here at the University of Sydney Business School...

Kai:...Of course we would say that...

Sandra:...Of course you would say that, we discuss these matters at some length with our students. So developing the next generation of leaders or empowering them to have these conversations I think is quite important.

Kai: Or are we too positive about technology topics? Is there an issue with techno-optimism, whereby we like to look at all the positive outcomes, the feel-good outcomes, the problems that we can solve with technology, yet forget about the downside, the dystopian view as a balancing-out of the utopian claims that are often made? Is this something that you wouldn't want to have at a conference like this, which should feel good, should be looking forward, should be techno-optimist, and should really offer a vision for what we can do in the future, where a more critical viewpoint just gets in the way?

Sandra: Well, as a techno-optimist, I strongly believe in the power of dreaming big and imagining the future, to empower entrepreneurs or innovators to make these bold claims, or indeed to innovate or invent in that space. I think there is definitely a role for that. But no critical examination of the future, I think, is extremely dangerous.

Kai: But certainly there is a difference between dreaming big and making bullshit claims, right? Which raises the question: who then should be part of that conversation? Who else? It can't just be left to theorists, academics and self-proclaimed futurists.

Sandra: I think technology is one of the most influential megatrends that will shape the way we live, the way we work and the way we function as societies in the future. So this is a conversation that everybody needs to be part of. It won't be solved by tech conferences or by Silicon Valley or indeed by academia; rather, there is a need for all of us to push and ask the hard questions in public forums and create that collective understanding.

Kai: So let's take a look at a couple of topics that were discussed at the conference. The first one is self-driving cars. There is a perception that self-driving cars will be a normal part of our daily lives in the very, very near future. Some people say in five to ten years, even.

Sandra: Indeed, and we were looking at an article in TechCrunch that points out that even Uber's fleet is demonstrating some fairly wild swings on measures of safety and reliability, and that there isn't steady progress in self-driving cars but rather a jerky sort of stumbling towards the goal of self-driving reliability. And this is complicated by things such as Uber's court battle with Google over autonomous car technology, which has just started; we haven't seen the end of that yet. So that raises the question of how we think about self-driving fleets in the near future.

Kai: Yes, indeed. The documents that were made available showed that Uber's self-driving cars have done about twenty thousand miles, but that on average about every mile someone had to intervene because something went wrong. Not necessarily always big things that would have led to accidents, but veering off the street, or situations where a driver had to disengage the computer before the computer could take over again. So there are a lot of small things to be ironed out. The companies say that as the algorithms learn and become more proficient, those interventions will become less and less frequent. But are we making steady progress? What's the technology like? There were a few other articles that point to things not being all that ready yet.

Sandra: Whilst the Uber conversation was around miles per intervention, and whether there were critical incidents or just, you know, a bad experience rather than a smooth ride, there are a couple of other stories, including one raised by MIT, that look at the practical progress towards autonomous vehicles and the improvements still needed in the technology, in things like the sensors that guide vehicles through the environment. Here we want to talk a little bit about LIDAR sensors, and the fact that companies such as Alphabet, or its spin-out company Waymo, Uber and Toyota, all of these, with the notable exception of Tesla, which is using other technologies, rely on LIDAR sensors to locate themselves on the map, to get around, or to identify things like people or dogs or cats running in front of the car.

Kai: So these LIDARs are essentially devices that sit on top of the car. They look reasonably ugly at the moment, a bit like a coffee machine sitting on top of the roof. But what they essentially do is shoot lasers into the environment and then read a 3D image of the environment off the reflections that bounce back, and they can create a picture of what the environment looks like in 3D that is accurate to a few centimetres at a 100-metre distance. Now, the technology is very expensive at the moment. It's bulky. It's not 100 percent reliable. And it is one of those things that really stand in the way of making progress in bringing self-driving cars to the masses, isn't it?
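To make the geometry concrete, here is a minimal sketch in Python of the conversion a spinning LIDAR implies: each return is a beam direction plus a measured range, and together the returns become a 3D point cloud. The data format and the numbers are invented for illustration; real units ship with their own, far more elaborate, processing pipelines.

```python
import math

def lidar_returns_to_points(returns):
    """Convert raw spinning-LIDAR returns into 3D points.

    Each return is (azimuth_deg, elevation_deg, range_m): the direction
    the laser was fired and the distance at which it bounced back.
    This tuple format is hypothetical, for illustration only.
    """
    points = []
    for azimuth_deg, elevation_deg, range_m in returns:
        az = math.radians(azimuth_deg)
        el = math.radians(elevation_deg)
        # Spherical-to-Cartesian: project the measured range along the beam.
        x = range_m * math.cos(el) * math.cos(az)
        y = range_m * math.cos(el) * math.sin(az)
        z = range_m * math.sin(el)
        points.append((x, y, z))
    return points

# Three fake returns; a real sweep produces hundreds of thousands per second.
cloud = lidar_returns_to_points([(0.0, -2.0, 35.2), (0.5, -2.0, 35.1), (1.0, -2.0, 4.8)])
print(cloud)  # the sudden 4.8 m return hints at an object close in front of the car
```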

Sandra: Indeed. You've highlighted the fact that it is quite expensive: it costs thousands of dollars, or even tens of thousands of dollars, apiece.

Kai: It's got moving parts in it, which makes it complicated.

Sandra: It's got moving parts, spinning mirrors that direct the laser beams at the moment. Indeed, many vehicles these days have more than one of these things on board, and despite the relatively small number of autonomous vehicles that we have at the moment, demand has become a huge problem. By some reports, some companies wait six months to get one of these things. Now, there is a light at the end of the tunnel apparently, something called solid-state LIDAR technology, which would be much cheaper, much smaller and much more robust. But that hasn't...

Kai: ...hasn't eventuated yet. We haven't seen any working devices, not produced at scale anyway. Now, while those problems with LIDARs might be solved, the whole story points to how hard it is to make cars see. Right? And one thing I want to point to here is that as we're trying to build cars that can see like humans do, I think we're missing a point about how humans drive cars. It's not just that we take in sensor information with our eyes, the brain processes it, and we drive the car with our hands and feet. We're driving cars with our bodies, right? We are moving in traffic. We know where we are, and we can react quite intuitively by feeling: feeling the car, feeling the road. All of this information we take in. So it's very hard to replicate in a piece of technology, in a computer and an algorithm, what it's like to drive a car and the multiple ways that we as humans are able to sense when we drive a car in traffic. So it will be interesting to see how companies will solve this problem going forward by combining multiple sensors.

Sandra: The issue of humans driving cars is also one of the reasons we are looking at autonomous vehicles in the first place. We have actually skipped a stage in the development of autonomous vehicles. Most companies, whether that's Google or Uber or traditional car manufacturers like Ford or Mercedes, have skipped the stage of what's called Level 3 autonomous capability, where you would have a human in the car who takes over in case of emergency, and are instead developing fully autonomous vehicles, because we would actually need sensors inside the car to be able to tell if that human is still looking at the road, or whether they have strapped, you know, VR goggles to their head and are playing games in the car, or are doing something else. So they're going straight to full autonomy. Now, not having a person who interacts with the car in any way creates the other difficulty: it's not only the technology that is stalling the development of these cars, but the fact that most of these autonomous vehicles have to interact with other cars driven by humans.

Kai: Yes, that's a really interesting topic, because presumably if you're building a self-driving car and you're programming your algorithms, training your algorithms, you want those algorithms to adhere to the rules. Right? It turns out, though, that humans in traffic don't. Humans do not always follow the rules. They speed up, they might break the rules at times, and sometimes for good reasons, because humans apply judgement. Humans can work with the rules; they do not have to slavishly adhere to them. And so the problem that has been observed is that self-driving cars get into trouble when drivers around them are a little bit lenient with the rules, which is what creates a traffic flow that is largely organised by human drivers. If you now enter cars into the mix that are very slavish with the rules, you're really messing with the system, and you're creating dangerous situations where human drivers might not expect how a self-driving car reacts. So you're creating unexpected side effects in a system where humans that apply judgement and self-driving cars that strictly adhere to the rules have to interact.

Sandra: And indeed, so far autonomous vehicles have refused to break the law. We haven't built in any mechanism for them to break the law, even though the safest thing may be to break the law, for instance to avoid an accident. And they also can't read social cues. We often rely on eye contact, or signalling, or moving the car a little bit forward, to signal to the other driver that we might take the initiative and join the traffic at an intersection. And so far autonomous vehicles have struggled to interact with what is the majority of cars on the road.

Kai: Absolutely. And we know from experience that when we drive in traffic, the rules cannot cover 100 percent of the situations that might arise. So as humans we have to interact, we have to apply judgement, we have to commit to a certain course of action knowing that other people will anticipate and know how we react, because we've done this for years and years, organising and negotiating the way in which we do traffic among humans. You enter those very mechanistic self-driving cars into the mix, and things would just break down inevitably. So this is what people are concerned about when we talk about a traffic system that will gradually move towards more and more self-driving cars, because we cannot just switch from a fully non-autonomous to a fully autonomous system.

Sandra: And this goes back to our conversation about the big questions. The big questions might be, you know: are self-driving cars going to be here in five years? Quite a few people are saying maybe not. But also, what might that technology look like? What is the infrastructure that we need to build to accommodate it, even for technologies that we are not sure what they will look like? What are the ethical implications of having these autonomous vehicles on the roads? Who gets to decide, and when?

Kai: I'm pretty certain that in the next five to ten years we will see cars being sold that have some form of assistance systems, where you can have, you know, a certain autonomy in certain situations...

Sandra:...You could get a Tesla this week.

Kai: Yes, absolutely. We might see one or two companies launching fully autonomous taxi services. But will we have a traffic system in which the majority of cars are self-driving, or a situation where most new cars being sold are self-driving? I cannot see that happening any time soon.

Sandra: If we look at where autonomous vehicles might show up first, leaving aside the conversation around industrial autonomous vehicles, whether in mining or in ports or in public transport, we will probably see autonomous vehicles appearing first in areas that have been very extensively mapped, probably as a transportation service in discrete areas.

Kai: Oh, that brings me to another story which showed up just recently. There's an artist by the name of James Bridle; he does photography on Flickr, and he has this photo project where he is trapping self-driving cars. We will put up the pictures for you to see. What he's essentially done is draw a circle as a solid line with a dotted line around it. The idea is that a rule-abiding self-driving car would know that it can drive into the circle, but it would find no way out, because you cannot cross the solid line. Now, whether or not this is realistic, whether it's just a prank or an art project, it points to a deeper problem, which is that self-driving cars will read off the built environment certain cues as to what to do. They rely on certain visual cues in the built environment, and if those cues are not there, they get into trouble. But it also means that once we learn how they read those visual cues, people might start playing pranks on them; we might see a whole new YouTube genre of people playing pranks on self-driving cars by trapping them in cul-de-sacs or by having them veer off roads. And it points to a serious problem: that you could hack into, or in other ways derail, those sensors to maliciously bring about accidents, for example. So we're not really...

Sandra: ...Since some of their sensors are quite good, could we build billboards where the human eye wouldn't see it, but which would have embedded pixels that give certain directions to the car?

Kai: This is actually what is being discussed, right?

Sandra: Yes. This is indeed one of the ways you could hack self-driving cars: if the sensors are reading the environment, you could actually embed code in large billboards that the cars would be able to read off. That could be used for good, you know, driving you to the next very fancy restaurant for a free meal, but it could also be used for other purposes. And again, we haven't exhausted the discussion around autonomous vehicles, or even the problems with the technology. We haven't even discussed things like weather, you know, the bad rains we've had in Sydney; what does that do to sensor technology and to LIDAR? Or snow, or sleet, or low light, or glare in the case of the cameras and radars and all the things that Tesla relies on?

Kai: Exactly. All of this points to the fact that once released into the wild, out of controlled lab and experimental conditions, all kinds of things might happen where, as humans, we can employ judgement and might make the right call, but algorithms that have to rely on training data and rule-following might not be in a position to react appropriately. Which points to the last story I want to bring up in this context. There's an article called "When machines go rogue", and it points to the fact that with self-learning technology, deep learning, machine learning, we're now entering an age where we have algorithms that are quite different from the ones we have employed in technology so far. If you think of planes and the way in which planes are steered by automatic technology, those algorithms are of the traditional IF-THEN nature, which means that you can actually test the code rigorously; you can put the plane and its software through a very detailed, rigorous testing and certification scheme to be almost certain that nothing will happen under all the kinds of conditions you can imagine.
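To illustrate what that "traditional IF-THEN nature" means for testability, here is a toy, entirely invented rule-based controller, nothing like real avionics code: because every behaviour is an explicit branch, the whole input range can be enumerated and verified.

```python
def pitch_command(airspeed_kts: float, target_kts: float) -> str:
    """A toy IF-THEN controller: every behaviour is an explicit branch."""
    if airspeed_kts < target_kts - 10:
        return "pitch_down"   # too slow: trade altitude for speed
    elif airspeed_kts > target_kts + 10:
        return "pitch_up"     # too fast: bleed off speed
    else:
        return "hold"

# Because the logic is explicit, we can exhaustively check an input range
# and certify the outcome for every case we can imagine.
for speed in range(0, 401):
    assert pitch_command(speed, 250) in {"pitch_down", "pitch_up", "hold"}
assert pitch_command(200, 250) == "pitch_down"
assert pitch_command(300, 250) == "pitch_up"
assert pitch_command(250, 250) == "hold"
print("all branches verified")
```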

Sandra: But this gets complicated with machine learning, doesn't it?

Kai: Absolutely. Even with traditional algorithms you can never be 100 percent certain, but, you know, self-flying planes, and they are largely self-flying these days, tell us that it works to a large extent. But self-learning technology is radically different. It's based on neurons self-organising by learning from training data and then producing similar outcomes. So when these algorithms read off sensor data in a real-life situation, they will react to that data in the way they were trained, and presumably in a way that will be okay. But you can never be 100 percent certain, because the technology, the algorithm, is largely a black box. And so it will, from time to time, throw up unpredictable behaviour, where even the developers do not fully understand how it came about. So all you can do is train more, train better, train in more detail, without ever being certain that nothing will happen.
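A toy illustration of that black-box point, under the assumption that a tiny network trained on four examples can stand in for a real system: the weights below are learned, not written, so no branch of code anywhere promises any particular answer for inputs unlike the training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: learn XOR from four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny two-layer network: weights start random and self-organise.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):  # plain gradient descent on squared error
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    grad_out = (out - y) * out * (1 - out)
    grad_h = grad_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out; b2 -= 0.5 * grad_out.sum(0)
    W1 -= 0.5 * X.T @ grad_h;  b1 -= 0.5 * grad_h.sum(0)

print(out.round(2).ravel())  # should be close to [0, 1, 1, 0]: it learned the rule

# But what does it say about an input it never saw? Whatever fell out of
# training: nothing we tested above guarantees this result.
probe = np.array([[0.5, 3.0]])
print(sigmoid(sigmoid(probe @ W1 + b1) @ W2 + b2))
```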

Sandra: So indeed, the black box creates huge problems, not only around foreseeing the potential effects of employing that technology, but also as to how we think about ethics, or morality, or right and wrong, in that space.

Kai: Which leads us to our third topic. Robolawyers.

Sandra: So this is a story about a more fundamental shift in professional services. "The rise of the robolawyers" in The Atlantic talks about advances in artificial intelligence and how they might diminish the role of lawyers in the legal system, or in some cases replace lawyers altogether. And this is part of a wider conversation about replacing doctors and lawyers and a whole bunch of other professionals. So this conversation is also about how technology changes business models entirely: whereas we used to have a one-on-one relationship with our lawyer or our doctor or another professional, these services will now become embedded in systems that are then made available to people.

Kai: Well, first of all, I think we need to distinguish, because there are two types of technologies being folded into the same conversation. The first one we refer to as artificial intelligence as a kind of shorthand; what we're really talking about there is pattern matching. Sophisticated pattern-matching technology is used to do away with a lot of the entry-level, lower-skilled jobs in professions such as law, but also accounting, where the work is all about collating vast amounts of information, going through past court cases and coming up with the kinds of patterns that might actually help with the case at hand. Artificial intelligence, or better, pattern-matching machine learning, can do this more reliably, faster and more efficiently than paralegals or junior lawyers would be able to.
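As a rough sketch of what "pattern matching" can mean here, and emphatically not how any real legal research product works, a few lines of TF-IDF similarity search can rank made-up past-case summaries against a new matter:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical, invented past-case summaries standing in for a real corpus.
past_cases = [
    "tenant withheld rent over unrepaired water damage, landlord sued",
    "driver disputed liability after rear-end collision at traffic lights",
    "employee dismissed without notice claimed unfair termination",
]

new_matter = "client dismissed abruptly and wants to contest the termination"

# TF-IDF turns each document into a weighted word-frequency vector;
# cosine similarity then scores how closely the new matter matches each case.
vec = TfidfVectorizer()
matrix = vec.fit_transform(past_cases + [new_matter])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for score, case in sorted(zip(scores, past_cases), reverse=True):
    print(f"{score:.2f}  {case}")
```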

Sandra: We need to make sure that we are not claiming that all of these professions are purely creative, highly innovative professions. All of these professions can be broken down into smaller parts, and many of the tasks in those smaller parts can be better performed by machines.

Kai: And indeed, it is that breaking down into low-level tasks, as opposed to the more senior, expertise-based jobs that rely on judgement, that now enables companies to automate certain of those low-level tasks. And so there is a real threat that those entry-level jobs in those professions are being done away with, under the mantra of cost savings and efficiency, which obviously raises certain problems: how are lawyers supposed to gain the skills that they need? How do they learn the trade when those entry-level jobs are no longer available? You know, they're not coming into the profession and going into the more expertise-based jobs straight away. So that's a problem, I think, that is not being discussed at the moment.

Sandra: Or it might indeed be about training them differently. Increasingly, lawyers will have to rely on these systems and know and understand them, so maybe the entry-level training for these lawyers will be quite different: it will be about how you learn to make the most out of the brute force that you get from machines analysing big data, or from having these remarkable algorithms at hand.

Kai: Which points to the more likely outcome, which is that we will have to re-learn how we do those jobs. Rather than having junior lawyers or junior accountants do all of these menial tasks, we will learn the trade quite differently, by employing computers and machine-learning algorithms to do that work for us, and therefore develop into the profession in a very different way, where algorithms just become part of the trade. They become a tool to be used by lawyers and by accountants, which I think will change the narrative from a fear-based "the robots are coming for our jobs" to a discussion about how we can actually improve legal services and make them accessible to a wider population by doing away with the bottleneck of menial work that we can employ computers to do.

Sandra: And indeed, I think the article in The Atlantic has embedded in it a very good observation, which is that this is not just a conversation about replacing jobs, or about getting those algorithms to deliver on the affordability or efficiency of legal systems, but rather a story about changing business models in these industries and rethinking how we do law, how we would do medical services, and so on.

Kai: Yes, indeed. And that points to the second technology I want to mention, which is more traditional algorithms, which people have developed to cope with the complex, often bureaucratic nature of government or legal processes, where the process itself is actually fairly deterministic and mechanistic. It needs a lot of work, though, because a lot of information has to be collated, there are a lot of forms to be filled in, and different instances have to make decisions, but the outcome is often largely predictable once you know what has to go into the process, say in disputing a parking ticket. And someone has built an app for that. This app does not need machine learning; it needs an algorithm that has embedded in it all the steps it takes to collect the information and then submit the claim. A process that is rather complex, time-consuming and off-putting to people in their everyday lives can be solved with computers in a fairly straightforward way. And a lot of tasks are like that in accounting, in law, and in many other dealings with governments, so computerising those is, I think, a logical step in coping with the often artificial complexity that is put up by the bureaucracy around those processes.
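A minimal sketch of such a deterministic, rule-based claim builder, with hypothetical grounds and wording rather than any real app's logic, might look like this:

```python
from dataclasses import dataclass

@dataclass
class TicketFacts:
    """Answers collected from the user, one scripted question at a time."""
    ticket_id: str
    signage_visible: bool
    meter_working: bool
    parked_legally: bool

def build_appeal(facts: TicketFacts) -> str:
    """Deterministic, rule-based claim assembly: no machine learning involved.

    The grounds and wording are invented for illustration; a real app would
    encode the actual statutes and the authority's submission format.
    """
    grounds = []
    if not facts.signage_visible:
        grounds.append("restriction signage was missing or not visible")
    if not facts.meter_working:
        grounds.append("the parking meter was out of order")
    if facts.parked_legally:
        grounds.append("the vehicle was parked in compliance with posted rules")
    if not grounds:
        return "No supported grounds found; appeal not recommended."
    return f"Appeal for ticket {facts.ticket_id}: I contest this fine because " \
           f"{'; '.join(grounds)}."

print(build_appeal(TicketFacts("PT-1024", signage_visible=False,
                               meter_working=True, parked_legally=False)))
```

Because every step is an explicit rule, the app's output is as predictable as the bureaucratic process it automates, which is exactly why this class of task is such an easy target for computerisation.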

Sandra: And this also speaks to a larger question, since we spoke about larger questions today: how technology is making the boundaries of traditional industries a lot more permeable. These are indeed processes and tasks that can be performed outside of the traditional law firm, or outside the legal industry. And we've already seen a very quiet creep of technology breaking down these boundaries. For instance, resolving disputes used to be a matter largely for the court system, but now there are about 60 million eBay disagreements being resolved online every year that never go through the court system, and there are a lot more of those than the ones that do go through the courts. And that has made the service available to millions of people.

Kai: Yes. And that's all we have time for today. More questions to be asked next week.

Sandra: See you next week.

Kai: See you next week.

Outro: This was The Future, This Week, brought to you by Sydney Business Insights and the Digital Disruption Research Group. You can subscribe to this podcast on Soundcloud, iTunes or wherever you get your podcasts. You can follow us online, on Twitter and on Flipboard. If you have any news you want us to discuss, please send it to sbi@sydney.edu.au.
