Sandra Peter and Kai Riemer
The Future, This Week 22 September 2017
This week: when intuition saves the world, the data science of winemaking, and robolawyers…again. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week
Other stories we bring up
"The Man Who Saved the World", the docudrama featuring Kevin Costner
Gary Klein's book "Seeing What Others Don't: The Remarkable Ways We Gain Insights"
MIT's Frank Levy and Dana Remus's forthcoming paper "Can Robots Be Lawyers?"
You can subscribe to this podcast on iTunes, Spotify, Soundcloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.
Send us your news ideas to sbi@sydney.edu.au
For more episodes of The Future, This Week see our playlists
Dr Sandra Peter is the Director of Sydney Executive Plus at the University of Sydney Business School. Her research and practice focuses on engaging with the future in productive ways, and the impact of emerging technologies on business and society.
Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.
We believe in open and honest access to knowledge. We use a Creative Commons Attribution NoDerivatives licence for our articles and podcasts, so you can republish them for free, online or in print.
Transcript
Introduction: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter and I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful, things that change the world. OK let's roll.
Kai: Today on The Future, This Week: when intuition saves the world, the data science of winemaking, and robolawyers....again.
Sandra: I'm Sandra Peter, I'm the Director of Sydney Business Insights.
Kai: I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group.
Sandra: So Kai what happened in the future this week?
Kai: So our first story is an unusual one. It comes from NPR, National Public Radio dot org. And it concerns Stanislav Petrov, the man who saved the world, who unfortunately died aged 77. Now this is a story that happened a while back, 34 years ago to be precise. In the middle of the Cold War, Stanislav Petrov was a lieutenant colonel in the Soviet Air Defence Forces, and his job was to monitor his country's satellite systems, in particular for incoming nuclear weapon launches from the United States. It was in the early morning hours of September 26, 1983 when this happened.
Audio: "It was completely unexpected, as such things usually are. The sirens sounded very loudly and it just sat there for a few seconds staring at the screen with the words "launch" displayed in bold red letters. A minute later the siren went off again. The second missile was launched and then the third and the fourth and the fifth. The computers changed their alerts from launch to missile strike. There were no rules about how long we were allowed to think before we would strike. But we knew that every seconds of delay took away valuable time that the Soviet Union's military and political leadership needed. And then I made my decision. I would not trust the computer. I picked up the telephone handset, spoke to my superiors and reported that the alarm was false. But I myself was not sure until the very last moment.
Sandra: And let's remember, this was already an extremely tense moment in the Cold War. Earlier that month the Soviet Union had shot down a Korean Air Lines plane that had drifted into Soviet airspace, and a lot of people had died. So it was already a fraught time between the US and the Soviet Union, with a lot of threats and a lot of animated discussion.
Kai: The US military had engaged in quite a few provocations that same year, so there was a tense atmosphere when Stanislav had to make this decision.
Sandra: And at this point he had all the data suggesting there was an incoming, ongoing missile attack. If he had sent his report up the chain of command, there could have been a nuclear disaster, the great war of 1983.
Kai: And in fact the protocol clearly demanded that this information be given up the chain of command and therefore a retaliation be launched. Now he acted on what he describes as a gut feel.
Sandra: So why are we talking about this story?
Kai: Because it's a timely story. We've been talking a lot recently about the tension between human decision making, human judgment, human biases and all the inadequacies that we as humans supposedly have, versus the more rational and therefore better decisions that algorithms are supposed to make.
Sandra: And let's remember, the tensions we are experiencing now are in a way similar to what was going on at that point in time. There was a system where the data said there was an ongoing attack: these missiles are coming in, impact is imminent. And the protocol clearly said that decisions should be based on the computer readouts, and the computer readouts said this is happening, you should retaliate now.
Kai: But Stanislav knew that something was off. Something didn't feel right.
Sandra: So he had a gut feeling.
Kai: He had a hunch, no it was more than a hunch, he had an intuition. But the point here that we want to talk about is that this is not a layperson's intuition. This is not you or me having a hunch about something. This is an expert making an expert judgment where expertise is at work.
Sandra: So this is important because this very human skill of intuition seems out of place in a world where data can help make all these decisions. So what is the place of this skill? When experts have an intuition, a gut feeling about what is going on, when can we rely on it? Let's talk about this a little.
Kai: And it is here that we have to go to a true expert on studying human intuition, human expertise, and what people do in tense situations. The name of that person is Gary Klein. Gary Klein has been researching expert decision making, especially in situations under stress, where firefighters, army personnel or nurses have to make critical decisions. They are able to execute their skill, in the form of, yes, intuition or expertise, in making those judgments, often subconsciously, and are often only able to explain what actually happened after the fact. But crucially, they are able to make those judgments in the moment, in the situation, with a high level of predictive accuracy.
Sandra: So Gary Klein is a famous psychologist, a famous researcher who pioneered the field of naturalistic decision making, and he's done a lot of this work, as you've mentioned, looking at expert firefighters, people under high stress conditions, and how they make their decisions. A lot of his insights around intuition concern experts who are in the moment, in the field, and we need to contrast this with quite a few other studies, because he's not the only person who's been studying intuition.
Kai: No. Daniel Kahneman has been very influential in critically questioning intuition. Kahneman has shown that under a lot of conditions humans are actually not very good at using intuition to make predictions about the future. And Kahneman and Klein have had a productive discourse, this back and forth, you could say an academic dispute, but they basically came to better understand the different natures of intuition, which we want to talk about here.
Sandra: So Daniel Kahneman is the Israeli-American psychologist who is also a Nobel Laureate in economics. His research is around human rationality, looking at it in the context of modern economic theory, and a lot of his work, which we've discussed before on this podcast, is around System 1 thinking, sort of the intuition that goes on behind a lot of the things we are not conscious of, versus more conscious decision making. A lot of Kahneman's work actually points out that if you're looking at simple combinations of variables, statistical modelling is much better at predicting, and more accurate, than people are. So for simple combinations of variables, in very simple situations, intuition does not work.
Kai: And he's shown this in a lot of experiments.
Sandra: He and other psychologists have also run a lot of experiments around medium and long term predictions, where people are supposed to forecast, let's say, prices for stocks far into the future, and again statistical modelling is better in those cases than human intuition. This is the idea of the illusion of validity, where people think they know something, think they know what will happen. But again, over the medium and long term, this doesn't work. So where did Danny Kahneman and Gary Klein come to a common understanding around when intuition does work?
Kai: When people are in the moment using their expertise and intuition to make judgments about what is about to happen in that situation.
Sandra: So as in the case of Stanislav, who was in the moment: there were five incoming missiles, and he had an intuition. He had the feeling that there was something wrong about this, and he decided not to call, not to strike back.
Kai: Yes. So his intuition told him something was off, and so he held off on raising the alarm. It turns out he was right in the end, and he prevented probably one of the most significant disasters in human history. And in hindsight, on reflection, what was off about this was that there were five missiles coming in shortly one after the other, which didn't make sense, right? You would expect one missile launched by mistake, or a lot of missiles coming in, because it was always expected that a first strike would try to overwhelm the opposition by launching tens or hundreds of missiles.
Sandra: But again, as in the case of Gary Klein's firefighters, there was very little time, so this wasn't a conscious decision. This wasn't him sitting down and thinking about how many missiles there were and what the situation was, but rather an intuition that something felt wrong about this.
Kai: And he was right, in hindsight: it was later found that the problem was sunlight reflecting off cloud cover, which was then picked up by the satellite. The problem is he did not follow the protocol and his orders, and he was actually later reprimanded, over the minor issue that he hadn't taken notes. So he basically never received the recognition that he deserved.
Sandra: Also, if you want to see the whole story, there is a 2015 docudrama with Kevin Costner called "The Man Who Saved the World". We'll put a link in the show notes.
Kai: So the takeaway from this story is that had we put a machine, an algorithm, a computer in Stanislav's place we might not be here today because...
Sandra:...The Soviet military would have retaliated with a nuclear strike and we would have had a World War.
Kai: Yeah. Computers have no gut. They have no feel. They do not exercise judgement or human intuition. So the story here is, once again, and we've discussed this many times before: what is the best combination of human judgment and decision making with the use of data and technology? So why does human intuition have such a bad reputation?
Sandra: It gets a bad reputation for two reasons. A: intuition is often used indiscriminately. We spoke about the conditions where intuition does work, yet the word is often used to mean anything. You know, I see something on the bus and I have a hunch, or I see something on TV and I have a hunch. The conditions under which intuition works involve experts immersed in the moment and in their field of expertise. This is not about us going down the street and having an intuition about the weather.
Kai: The second reason is that we live in a world where we value rational thinking, scientific insight, data, algorithmic, formulaic decision making, where all of these things have become synonymous with going about it the right way. So human expertise has gotten a bad reputation because it seems kind of dubious against that backdrop. We can't quite explain how it's being done. Yes, Daniel Kahneman calls it System 1, but it's largely something that somehow the brain does. We cannot really understand the step by step process by which the mind does it, and that doesn't sit well with the way in which we want to formalise the world, make everything transparent, make everything obvious. So it eludes us, and therefore it is put in the realm of the slightly magical or dubious, and it gets a bad name. But therein lies the danger, because we tend to dismiss human expertise, we tend to formalise systems, we put up rules for how doctors need to make decisions. We create all of these bureaucratic systems that often hamper our expertise, and Gary Klein has actually done a lot of work to show that we often hamstring experts when we put them inside such formulaic systems.
Sandra: This doesn't always need to be a story of human versus technology. Let's look at our next story where human expertise and technology come together to make something quite good.
Kai: So let's talk about wine. Our next story is from Digital Trends. It's called "The winery of the future looks like something Bruce Wayne would run".
Sandra: This is Palmaz Vineyards in Napa Valley: a 610-acre parcel in downtown Napa, a vineyard run by the Palmaz family, originally from Argentina. It now houses the equivalent of a 22-storey building of technology-enhanced winemaking, of which about 18 storeys are underground.
Kai: So what looks from the outside like an innocent, chateau-style winery is on the inside something akin to the evil villain's lair from one of the James Bond movies.
Sandra: So is the romance of winemaking dead? This is a vineyard full of systems that handle everything from the temperature in the barrels, to last minute adjustments, to the rate of fermentation, to helping move the barrels around and informing the fermentation process, to the selection of grapes. Pretty much anything you can think of has data attached to it in this vineyard.
Kai: Yes, exactly. But I think what is important here is that technology has not been employed in the name of efficiency or productivity, to squeeze more wine out of the same amount of grapes, to increase yields, to just crank more produce through the machine. The process described in the article is very respectful of the art of winemaking, but it uses sophisticated data science to support the human winemaker: to free the winemaker from procedural decisions, such as what temperatures to select in the fermentation process, so they can really concentrate on the tasting and the art of blending wines. One indication of this is that the whole idea of the 22-storey set-up is that no pumps are used in the filtering of the wine. The filtration process is arranged so that the wine can move through the various filtering stages by flowing down the building, purely on the basis of gravity.
Sandra: So technology is used to augment what the winemaker does rather than just to bring efficiency to the process. It's not replacing what the expert does; it is there to support the vision the winemaker has for what they want to achieve.
Kai: It starts with the micromanagement of the vineyard itself, where twice a week a plane flies overhead and measures the chlorophyll in the plants by way of infrared photographs, which can then be used to target the watering of the plants at a micro level. There's a whole lot of technology that goes into this, technology where, by the way, Australian researchers are world leading: the CSIRO runs a project, and the University of Adelaide has created an app, VitiCanopy, which uses photographs to determine the growth of grapes and vines. They use drones and thermal sensors. There's a lot of science employed these days to get the best quality out of a vineyard. But it doesn't stop there.
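To make the aerial chlorophyll measurement a little more concrete: a common way to estimate plant vigour from infrared photography is a vegetation index such as NDVI, computed per pixel from the near-infrared and red bands of an image. The sketch below is a minimal illustration of that general technique in Python; the toy band arrays, the 0.3 threshold, and the rule of flagging low-vigour pixels for extra irrigation are our assumptions for illustration, not details of the Palmaz system or of VitiCanopy.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index, computed per pixel.

    Healthy, chlorophyll-rich vines reflect strongly in near-infrared
    and absorb red light, so NDVI = (NIR - red) / (NIR + red) is high
    for vigorous canopy and low for stressed or bare patches.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    # Avoid division by zero on pixels with no signal in either band.
    return np.where(denom == 0, 0.0, (nir - red) / denom)

# Toy example: two aligned 2x2 band images from an aerial photograph.
nir_band = np.array([[0.80, 0.75], [0.40, 0.82]])
red_band = np.array([[0.10, 0.12], [0.35, 0.08]])

index = ndvi(nir_band, red_band)
# Hypothetical rule: flag low-vigour pixels for targeted watering.
needs_water = index < 0.3
print(index.round(2))
print(needs_water)
```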
Sandra: And indeed we have people here at the business school involved in some of the science behind making better wines in Australia.
Kai: Absolutely. But where the winemaker really comes into play is where all this data about how the wine was grown is combined with real time data from the fermentation process and presented to the winemaker in quite an innovative way. The place where this all happens is a large, dome-shaped room where 24 different fermenters are set up in a circle. The winemaker can walk around and take samples and tests from the fermentation tanks, and has the data on the fermentation process projected onto the dome overhead. As she moves from one fermenter to the next, the information is updated and follows her across the room, and she can use that data to influence, on a micro level, the temperature in a fermenter when she thinks that something is particularly worth keeping, because the taste of that particular batch was really novel or great. We'll include the link in the show notes, which has some pictures, and this is straight out of a James Bond movie. But the point here is not that the winemaker is replaced by technology in any way. Her capabilities are augmented, actually amplified, through the technology, and the data is not there to make decisions for her. Rather, the data is used as an additional way to sense how the fermentation process is going, so she can really concentrate on bringing out the best in the wine in terms of taste.
Sandra: So a wonderful example of how technology supports human expertise. But I think it's also worth looking at why we have such an aversion to the idea of bringing science into the process of winemaking. We see winemaking as a craft that should be pure: the farmer out in the field, unencumbered by technology if possible, barefoot, sunburnt and in tune with nature. We see any mechanical or technological process added to this as corrupting the purity of the winemaking process, and we are seduced by this idea of the humble farmer with very little intervention. And I think this is because the art of winemaking has been around for a very long time. We have archaeological evidence going back seven thousand years before Christ showing that we have been drinking fermented grapes, and there were processes of fermentation in vats akin to what we now call winemaking four thousand years before Christ. So for a very long time we've had this idyllic picture of what winemaking looks like. And the first time any mechanical technology came into winemaking, it wasn't really at the high end supporting experts, but rather at the low end, helping produce very cheap, mass produced wine. So we came to associate technology with mass production, with something that takes away from the expert.
Kai: And I think basically the same dichotomy is at play that we saw earlier, where technology is seen as in contrast with the human element: technology is rational, it's efficient, and humans are irrational, we have these biases. So we make this dichotomy between technology and humans, often between technology and nature, and quite rightly we are concerned with the industrialisation of certain crafts. There are good examples of bad winemaking, where technology is used to abuse the grapes to just create lots of cheap product. But I think the case we want to make is that it doesn't have to be like that.
Sandra: The point I want to make is that we always put technology up against the human, and depending on the social narratives we tell about the nature of the process, we have one or the other win out. In the case of the previous story, it was about the military, so we come to trust that technology will save us, that human expertise is inferior, and that if we could just make the machine good enough it should take over. On the other hand, if it's a process on the artisanal side, arts and crafts, something like winemaking where craftsmanship is important, then we always tend to favour the human: the human will always be better than technology. And it really doesn't have to be that way, does it?
Kai: No, it doesn't, and the case here shows clearly that human expertise is valued. The winemaker is still the artisanal decision maker, the one who has the expertise and the judgement to create a good wine. And there are some great quotes in the article, where it says that all the machinery around automating the fermentation doesn't guarantee a great wine, just a great fermentation. What makes a wine great is when the winemaker happens to be there when something great happens. And also that data is great, but quite useless unless the winemaker is there and can actually interact with it at the time they are working with the wine. So what we see here is the use of technology to amplify human expertise. It's not an either/or.
Sandra: And we can't just take their word for it that science and technology have led to better wine, so we will need to test this.
Kai: That calls for an excursion.
Sandra: But before we do that, we've got one more story to finish this week's episode: a Silicon Valley start-up that wants to replace lawyers with robots.
Kai: So this one's from The Washington Post and it concerns a start-up called Atrium - a company incorporated as a law firm that wants to do things radically differently.
Sandra: This is the latest ambition of Justin Kan. He's a serial entrepreneur, well known for building the video game streaming platform Twitch, which was sold to Amazon for nearly a billion dollars a couple of years ago. His newest venture has raised about 10.5 million dollars so far.
Kai: And its ambition is to create a technology based, or technology augmented, law firm, where the execution of certain legal processes is automated and where lawyers working for the firm are, quote, "technology turbo-charged".
Sandra: And the first types of tasks that they have taken up are really routine legal tasks - things like fundraising from venture capitalists or issuing stock options to employees. So they've tackled pretty much the same sort of routine type jobs that other start-ups in Silicon Valley have gone after first in other industries.
Kai: What motivated Kan to create this latest venture in the first place was frustration with going over the same processes, dealing with lawyers who couldn't tell him upfront what it was going to cost, and engaging in what he considered an unacceptably opaque process. So they wanted to change this. But the story also shows that it is a lot of work: they are employing lawyers, they have to figure out how every process works, and it's a lot slower than you would expect from the predictions we've heard in the last couple of years.
Sandra: So the company is trying to build technology that will ultimately automate the work of the people it currently employs, the human beings who are currently doing the work. But the legal industry seems to have been much slower to automate than all the predictions of the past few years suggested.
Kai: Now this has in part to do with the overstated potential of machine learning and artificial intelligence, but it also has a good deal to do with the economics of the profession.
Sandra: On the one hand we've got people like Frank Levy from MIT, together with Dana Remus; they're quoted in the article. They have a forthcoming paper on whether robots can be lawyers, and they find that maybe only about 10 percent of legal work could actually be outsourced to software as things stand at the moment.
Kai: Which contrasts markedly with earlier predictions by the Oxford researchers, who said most of what lawyers do could be automated, because they based their study on a very different understanding of how lawyers work.
Sandra: So clearly AI and machine learning have made inroads where the activity revolves around pattern matching. In simple disputes, for instance, we've seen eBay resolving sixty million disagreements through online dispute resolution systems, rather than going through lawyers and judges and court time, and that's three times more than all the lawsuits filed in the US court system. We've also seen a huge increase in things like electronic tax returns, in Australia for instance. In these sorts of instances machine learning and algorithms have worked quite well because, let's remember, this is all about pattern matching. However, where it is about expertise and judgement, we've seen very few inroads into the law profession.
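As an illustration of what "pattern matching" means here: routine dispute triage can be framed as text classification, where a model learns from past cases which resolution a new complaint most resembles. Below is a minimal sketch using scikit-learn, with entirely made-up complaints and outcome labels; it does not reflect eBay's or Atrium's actual systems, only the general technique.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: past complaint texts and how they were resolved.
complaints = [
    "item never arrived after three weeks",
    "package lost in transit, no tracking updates",
    "product arrived broken and unusable",
    "received item damaged in shipping",
    "seller sent the wrong colour and size",
    "wrong item received, not what was ordered",
]
outcomes = ["refund", "refund", "replacement", "replacement", "return", "return"]

# TF-IDF turns each complaint into a weighted bag of words; logistic
# regression then learns which word patterns map to which outcome.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(complaints, outcomes)

# A new dispute is routed to the outcome whose past cases it most resembles.
print(model.predict(["my parcel never showed up"]))  # likely 'refund'
```

This is exactly pattern matching: the model can route a dispute that looks like past disputes, but it has no notion of the judgement or case-by-case reasoning the hosts describe next.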
Kai: And I think we can learn from the winemaking example that machine learning, working with data, and automation can really be useful to augment the way lawyers work. But they can never replace judgement, sense making, reflection, because that's just not what these algorithms can do. And coding this is also quite hard, as the start-up is realising. They are employing engineers to create algorithms that can handle quite mundane cases, and they will make some inroads. But the article quite rightly questions whether the economics of this actually work, because yes, you might not have to employ as many lawyers, but software engineers are not exactly cheap. And every time a new problem has to be solved, or the law or the legal process changes, you have to update those algorithms. They are not self-learning in the way humans are. So yes, we might make some inroads, but the economics are not such that this will scale indefinitely.
Sandra: And speaking of economics, there's another aspect hinted at in the article, which has to do with the economics of the law profession itself. The incentive structures in these law firms don't exactly reward replacing human work with machine work. Let's remember, lawyers are paid by the hour, so whilst technologies that make their life a bit easier have some appeal, they have little appeal in terms of the returns for their work if they result in fewer billable hours.
Kai: So there's no appetite, from that point of view, within the industry to make radical changes, because it would alter the business model that is based on billable hours. This is why disruption will come from outside.
Sandra: Exactly. There is also the partnership structure of law firms. The investments you would make in this technology would have to come from the same people who benefit directly from billing those hours. So again, an outsider would be one of the few ways the economics could make sense.
Kai: And where we see the most potential for this is indeed in the mundane cases with high rates of repetition, where investments in codification would actually pay off. A lot of the legal processes that corporations in particular have to go through add very little value and are often akin to red tape, so I think there's a certain appetite to automate some parts of legal processes, the high repetition cases. But I can't see these technologies making big inroads into areas where individual human judgement on a case by case basis is required.
Sandra: So it turns out one of the biggest innovations so far has been the business model Justin Kan has come up with: Atrium's big innovation is its pricing model. The firm doesn't charge by the hour, as traditional partnerships and most corporate law firms do; rather, it estimates the amount of work it expects to do and charges a single upfront monthly fee, regardless of how many hours are worked.
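To see why this flips the incentives around automation, here is a back-of-the-envelope comparison with entirely made-up figures (the rates, costs, and hours are illustrative assumptions, not numbers from the article): under hourly billing, automation shrinks the invoice; under a fixed fee, the same automation becomes margin.

```python
# Made-up figures purely to illustrate the incentive difference.
billable_rate = 500        # dollars the client pays per lawyer hour (hourly model)
internal_cost = 300        # dollars the firm spends per lawyer hour
flat_monthly_fee = 15_000  # fixed fee under an Atrium-style model

hours_before = 40          # lawyer hours on a matter without automation
hours_after = 25           # lawyer hours once routine work is automated

# Hourly model: cutting hours cuts revenue faster than it cuts cost.
for h in (hours_before, hours_after):
    print("hourly profit:", billable_rate * h - internal_cost * h)   # 8000 -> 5000

# Flat-fee model: revenue is fixed, so every saved hour becomes margin.
for h in (hours_before, hours_after):
    print("flat-fee profit:", flat_monthly_fee - internal_cost * h)  # 3000 -> 7500
```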
Kai: So innovation in this space is coming, but in a more holistic sense: the model by which corporate law is done is changing, and the way technology is used by lawyers is changing. But we're not seeing fully automated law firms where the robolawyers take over. We're far away from that type of dystopian vision.
Sandra: But one step closer to that glass of wine because that's all we have time for this week.
Kai: Thanks for listening.
Sandra: Thanks for listening.
Outro: This was The Future, This Week, made awesome by the Sydney Business Insights team and members of the Digital Disruption Research Group. And every week right here with us our sound editor Megan Wedge who makes us sound good and keeps us honest. You can subscribe to this podcast on iTunes, Soundcloud, Stitcher, or wherever you get your podcasts. You can follow us online, on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news you want us to discuss please send them to sbi@sydney.edu.au.