Sandra Peter and Kai Riemer
The Future, This Week 01 April 2017
This week’s April Fool’s special edition: how Elon Musk wants to save us from the AI apocalypse, the role of smartphones in planned parenthood, farmers hacking tractors, and Trump’s burning tweets. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week
Elon Musk launches Neuralink to connect brains with computers
American farmers hacking tractors
Smartphone attachment for home fertility tests
A robot burning Trump’s tweets
Other stories we bring up
Elon Musk launches Neuralink, a venture to merge the human brain with AI
American farmers hacking tractors again
A robot burning Trump’s tweets one more time
IBM’s Thomas Watson, 1943: “I think there is a world market for maybe five computers”
You can subscribe to this podcast on iTunes, Soundcloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.
Send us your news ideas to sbi@sydney.edu.au.
For more episodes of The Future, This Week see our playlists.
Dr Sandra Peter is the Director of Sydney Executive Plus and Associate Professor at the University of Sydney Business School. Her research and practice focus on engaging with the future in productive ways, and the impact of emerging technologies on business and society.
Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.
We believe in open and honest access to knowledge. We use a Creative Commons Attribution NoDerivatives licence for our articles and podcasts, so you can republish them for free, online or in print.
Transcript
Introduction: The Future, This Week. Sydney Business Insights. Do we introduce ourselves? I'm Sandra Peter, I'm Kai Riemer. Once a week we're going to get together and talk about the business news of the week. There's a whole lot I can talk about. OK let's do this.
Kai: The Future, This Week's April Fool's special edition. We discuss how Elon Musk wants to save us from the A.I. apocalypse, the role of smartphones in planned parenthood, farmers hacking tractors, and Trump's burning tweets.
Sandra: I'm Sandra Peter, I'm the director of Sydney Business Insights.
Kai: I'm Kai Riemer, I'm a professor here at the Business School. I'm also the leader of the Digital Disruption Research Group.
Sandra: So an April Fool's special edition.
Kai: Yeah well we thought long and hard about how to do this, right?
Sandra: And we thought about some stories that we could come up with that would be an April Fool's joke.
Kai: Yes. But in the end we decided we live in a world now where every day the news sounds like April Fool's. So until such time as we can claim back words like sanity and normality, what is the point in doing April Fool's, really? So today, while our news stories may all seem a little bit dubious and unreal, let us assure you: all of them are real.
Sandra: So what happened in The Future, This Week?
Kai: Well Elon Musk, personality that he is, wants to protect us from the looming A.I. apocalypse, or so he says. So what's his plan?
Sandra: Elon Musk launched a new venture called Neuralink, which wants to merge the human brain with A.I. What he's trying to do is merge the biological with the digital.
Kai: So his idea is that we are soon to have A.I. that is so intelligent that it might actually have the thought of getting rid of us because it doesn't need us anymore. And his plan is to merge us with A.I. because he thinks this is the only way to outsmart the super intelligence that we are inevitably about to create.
Sandra: Yes, the idea is based on actual technology, neural lace, developed a couple of years ago: a mesh that grows with your brain and essentially promises a wireless brain-computer interface. It could also prompt your brain to release certain chemicals so that it can communicate with computers. A couple of years ago the Smithsonian reported on mice implanted with this electronic mesh, which at that point were connected to a computer through a wire and were able to talk back to it.
Kai: What he's working off is some serious medical technology that has applications in helping disabled people, as I understand it.
Sandra: So before we get to the A.I. apocalypse, let's think about the actual applications of this. We've seen interfaces between computers and human brains help with diseases such as Parkinson's or epilepsy, and help people with Tourette's syndrome and other disorders, even depression and spinal cord injury. So this knowledge holds great promise for neurodegenerative diseases. But it's still in its infancy, very far from the neural lace: what we actually have are electrodes placed in brains, and it's still a very complex medical procedure that people undergo only in extreme circumstances, when they've exhausted all other options. So there is great promise for human health and longevity in this technology. But as to the A.I. apocalypse...
Kai: What is really outrageous about his claim is not so much that we can build a direct link from technology to the brain, because we're already doing this in some limited form. I think what is most outrageous is the assumption that we are anywhere close to creating a true general intelligence in machines. The distinction that is often made is between A.I. and A.G.I., artificial general intelligence, which would essentially be a human-like intelligence in a machine, but with the computing power of a machine and therefore vastly more intelligent than a human, or so the claim goes.
Sandra: So close to what science fiction tells us, with Ghost in the Shell and so on.
Kai: Absolutely, it's the Skynet scenario, straight out of Terminator. But let's take a look at what we've actually got so far and have a quick discussion on why this might not actually be a realistic proposition for developing anything like human intelligence in a machine. We don't have the time to really go deeply into this; that's a story for another day. But what we commonly call A.I. now is essentially pattern matching. These are algorithms that can sift through large amounts of unstructured data and find patterns. And they're very good at this, much better and much quicker than humans, and they can do it on vast amounts of data, with really exciting applications in medical diagnosis and in all kinds of professions where large amounts of data have to be organised and analysed.
Sandra: And indeed we've seen this in the news this week with ads on YouTube, where such an algorithm tries to match certain ads to certain videos to drive revenue.
Kai: Yes, and it works quite well: those ads follow a user around the Internet, you can target users, and you can place those ads in front of them. That's a really great application. There's just a catch, isn't there?
Sandra: Yes indeed. U.K. government ads and some AT&T ads have ended up preceding extremist, racist or otherwise abusive videos on YouTube. So those advertisers have pulled their ads from YouTube, and Pepsi and Wal-Mart have followed suit, and in Australia, Bunnings and Target.
Kai: Yes. And think about how this works: with programmatic advertising we're basically attaching these ads to users, not necessarily to the outlets where the ads are being placed. So depending on where the user ends up looking, the ad will follow them, which means that if I go to an extremist website I might be shown an ad that is targeted at me. A reputable brand might end up running next to offensive content and, through its advertising money, end up supporting that extremist, right-wing, anti-Semitic, or what have you, website provider. Quite obviously the brand owners are not too happy about this, and a large number of them have pulled their advertisements from Google, YouTube and associated sites.
Sandra: And we have a real problem trying to improve on these algorithms as well. A previous attempt by Google to make the algorithm even more careful about the videos it identifies as extremist or inappropriate led to ads being pulled from good content too. For instance, a project by a researcher putting out real stories of women who have suffered abuse was identified as offensive content and the ads were pulled. So the opacity surrounding the practices that YouTube has implemented is a real problem.
Kai: So it turns out that solving this problem is not that easy. You would think that if we had this great artificial intelligence, the algorithm could just watch those videos or read through these websites and learn which ones to avoid. But it turns out this is not all that easy, because pattern matching works well in a well-defined context but not across contexts. The pattern-matching algorithms that we know today, the ones that work, are essentially one-trick ponies, and they're very susceptible to context change. A set of words that is completely innocent in one context, say on a history website, could be completely offensive on a right-wing website. So this problem is largely unsolvable. At the moment advertisers have to nominate certain keywords that they want to avoid, which doesn't solve the problem at all. What we're saying is that we're not yet in a position to solve a problem which is easily solvable for a human being: reading through text or watching these videos, we know in an instant whether the content is offensive or appropriate, but algorithms can't do this because they can't deal with context or nuance. They're not human in any way.
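To make the keyword approach concrete, here is a minimal sketch of the kind of blocklist filter advertisers are left with. This is a hypothetical illustration, not YouTube's actual system: the same short word list blocks an innocent history page while an offensive page that avoids the listed words sails through, which is exactly the context problem Kai describes.

```python
# A minimal sketch of keyword-based "brand safety" filtering.
# Hypothetical illustration only, not YouTube's actual system.

BLOCKLIST = {"attack", "extremist", "violence"}

def is_brand_safe(text: str) -> bool:
    """Naive filter: reject any page containing a blocklisted word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)

# An innocent history lesson is blocked because of one listed word...
print(is_brand_safe("The 1944 attack on Normandy, explained for students."))  # False

# ...while an offensive page that avoids the listed words sails through.
print(is_brand_safe("You know exactly who is to blame. Act before it is too late."))  # True
```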
Sandra: So A.I. is still very far from the powerful A.I. that will take over the world. We can't even work out targeted advertising, or photo tagging, or indeed curating news feeds. And the solution of teaching this A.I. and imbuing it with the intelligence that we have has so far produced, let's say, dubious outcomes. Last year Microsoft's little A.I. chatbot, Tay, had to be pulled after, I think it was, 24 hours, because as it tried to learn through playful conversation it became racist and misogynistic and started saying anti-Semitic things.
Kai: Because the bot is not having a real conversation; it works on pattern matching. It doesn't actually understand what it's saying, it's just matching answers to questions in a conversation by way of pattern matching. So if you think about it, these algorithms are really, really good at certain things, much better than we are at pattern matching, say. But they're really, really bad, even worse than little children, at making sense of everyday situations. So why would anyone think that these algorithms, which are clearly of a different kind, so much better at some things and so much worse at others, could lead us on a path of development that will eventually arrive at artificial human or artificial general intelligence? I think this is a philosophical problem, and some reading up on existential philosophy would be needed here to really see that we're not anywhere near creating an artificial intelligence that is anything human-like. I don't think we have the time to discuss this in detail here, and we might come back to it, but I think the claim that we are going to see this super intelligence anytime soon, and that we should spend vast amounts of money that we could invest in serious brain-interface medical research to fight off the impending A.I. apocalypse, is just ludicrous.
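To see why such a bot can be gamed, here is a minimal sketch of a retrieval-style, pattern-matching chatbot. This is a hypothetical illustration, not Tay's actual architecture: it stores prompt-reply pairs it has seen and parrots back the reply whose prompt looks most similar, with no grasp of what the words mean.

```python
# Minimal sketch of a pattern-matching chatbot: it learns (prompt, reply)
# pairs from conversation and echoes the closest match, understanding nothing.
# Hypothetical illustration, not Tay's actual architecture.
from difflib import SequenceMatcher

memory = [("hello", "hi there!")]  # (prompt, reply) pairs learned so far

def learn(prompt: str, reply: str) -> None:
    memory.append((prompt.lower(), reply))  # it keeps whatever users teach it

def respond(prompt: str) -> str:
    # Return the stored reply whose prompt best matches the input.
    best = max(memory,
               key=lambda pair: SequenceMatcher(None, pair[0], prompt.lower()).ratio())
    return best[1]

learn("what do you think of mondays?", "mondays are the worst!")
print(respond("what do you think of tuesdays?"))  # parrots "mondays are the worst!" on cue
```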
Sandra: Whilst we might not be fighting Skynet soon or preventing The Matrix from taking over, I still think there is a role for stories like this, and for charismatic people in Silicon Valley. Now that Steve Jobs is gone, Elon Musk has emerged as the next sort of celebrity scientist, a bit like Tony Stark in Iron Man, for whom I think he actually was the model. Why do I think these stories are important? Elon Musk, who is also the CEO of SpaceX, tells us that flight to Mars is actually possible; he makes people work for dreams that might be very far in the future but for which we need actual steps now, and that imagination I think has a very important role to play. And we've seen what happens with a lack of imagination. I think it was Thomas Watson, the chairman of IBM, who in the 1940s said that there is a world market for maybe five computers. Or the claim, back in the 70s, that there is no reason why anybody would want a computer in their own home. That lack of imagination is actually quite dangerous, so we need people who dream big.
Kai: So maybe I do like the imagination it takes to see the A.I. apocalypse coming. But what you're really saying is that we need positive dreams, not nightmare stories, to take technology forward, right?
Sandra: Yes. Which brings us to our next story.
Kai: This one is published in IEEE Spectrum. It reports on Harvard researchers developing a cheap smartphone test for male fertility. So how does that work?
Sandra: Maybe you should take this one for the team.
Kai: Sure. It works exactly how you would expect it to work. There's an attachment for your iPhone which does an optical analysis of the, shall I say, specimen, to see what the fertility level is. And the researchers think there's a market for this kind of device, giving men the ability to test their fertility in one of two situations: wanting to father a child or, for those who have had the operation, making sure that they are no longer fertile, so as not to have any accidents. Now this raises a lot of questions, doesn't it?
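For a sense of the logic involved, here is a sketch of the kind of threshold check such a device would make on its optical measurements. The WHO lower reference limits used here (roughly 15 million sperm per millilitre and 40% motility) are the published guideline figures; the function itself is an illustrative assumption, not the Harvard team's code.

```python
# Illustrative sketch of the threshold check behind a home fertility test.
# WHO lower reference limits (5th ed.): ~15 million sperm/ml, ~40% motile.
# The function is a hypothetical stand-in, not the actual device's software.

WHO_MIN_CONCENTRATION = 15e6   # sperm per millilitre
WHO_MIN_MOTILE_FRACTION = 0.40

def classify_sample(concentration_per_ml: float, motile_fraction: float) -> str:
    if (concentration_per_ml < WHO_MIN_CONCENTRATION
            or motile_fraction < WHO_MIN_MOTILE_FRACTION):
        return "below WHO reference limits: see a clinician"
    return "within WHO reference limits"

print(classify_sample(9e6, 0.55))   # below WHO reference limits: see a clinician
print(classify_sample(60e6, 0.62))  # within WHO reference limits
```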
Sandra: Yes. Could this become as easy as home pregnancy tests for women?
Kai: Which puts this in a wider context. This is one of a series of developments underway in home-use medical devices. So are we seeing a trend towards self-diagnosis devices for all kinds of different conditions and diseases? We already have wrist-worn heart-rate trackers and fitness trackers. So there's a lot of data that we can collect on our bodies, on our health, fitness, and now fertility. The question this raises is what to do with all this data, and does it take away from the role of the medical profession by putting more power, but also more problems, in the hands of end users?
Sandra: Might this be a first disintermediation of the medical profession, where some of these initial diagnoses will not be the job of doctors but rather of software companies that develop algorithms to point to abnormalities?
Kai: And what if there's a mistake and the algorithm makes the wrong call on fertility?
Sandra: Indeed, will robots not only pay taxes, as Bill Gates suggests, but also pay child support?
Kai: I think this ties in with the whole discussion about A.I.-based diagnosis and the fact that for diagnosing certain diseases you might not need a doctor. So are we seeing a homemade triage system where people self-diagnose before they go to the doctor? Does this create more problems and certain expectations on the part of patients, and does it put the doctor into a very different position?
Sandra: Or does it raise the question of the smartphone possibly becoming a human right in the future, where if you did not have access to one you could not have access to any kind of health care?
Kai: Yes, the legal and ethical ramifications are really manifold. And it ties in with an article in The New Yorker which makes a lengthy argument about A.I. in diagnosis and what these algorithms can and can't do. The upshot of that article was: yes, they're really good at detecting that there might be something, a fertility level, or a melanoma on the skin, and you could have a smartphone app for this, but they don't actually explain anything. Where did the melanoma come from? They can't reason about causes. The argument made in that article was that the role of the diagnostician ties in with research: actually understanding where diseases come from and how to fight them. Putting more and more of the diagnosis into the hands of consumers or algorithms might be detrimental to our ability to do research, to understand diseases and fight them in the long term, and to find new cures. So it might cement the status quo of what we can diagnose at the moment, but hold back how the medical profession evolves and how we learn about preventing and fighting diseases. It's a double-edged sword: it might give us advances in diagnosis, but if not done in the right way it might actually be detrimental to research. I think that's a discussion that will pop up over and over again.
Sandra: And it's a similar discussion to the one we had in previous weeks around law practice, about how both the business models and the industries themselves are changing. We need to ensure that as these things change, they are empowering both for the industry and for consumers.
Kai: Yes, and this particular story ties in not only with a previous story but also with the next one, and I can build us the bridge here, because the article on this device speculates about applications in farming, testing the fertility of farm animals. Which brings us to farmers. Tractors. And a story reported in Motherboard. Believe it or not, American farmers have to turn to Ukrainian software downloaded over the Internet to hack their tractors in order to get them repaired. Now why?
Sandra: So the problem is the farmers' inability to repair or enhance their tractors unless they take them to an authorised dealer that can access the software the manufacturer has put on them.
Kai: Yes. So the tractor manufacturer does not allow just any repair shop to repair the tractor, only those it has granted access to this software. The deeper question this raises is: to what extent does the farmer actually own the tractor? There's a story I found from 2015 in Wired magazine which tackles that very question, because at the time the manufacturer, which shall remain nameless, made the point that the farmer does not actually own the tractor. They purchase the equipment, but they only license the software that is built into it and needed to operate it. So they sign a licence agreement, much like we would sign a licence agreement for a piece of software we buy, and they are not allowed to tamper with or in any way access the internals of the tractor other than through authorised dealers or repairers who have the required software and licence.
Sandra: This question has been asked not only around agricultural equipment but in the car industry as well. It's the big question of your right to repair your own equipment.
Kai: Yes. And what do we actually own? The bigger question is: as more and more of the everyday things that we buy, own and use contain software, and given the particularities of software licensing, where you do not actually buy anything but only acquire a licence to use the software, to what extent do we actually own what we purchase, and to what extent can that ownership be revoked? If you take a piece of equipment and do something silly on YouTube which the manufacturer or brand owner thinks reflects badly on their brand, can they then revoke ownership of something that you have paid for and which, for all intents and purposes, you think you own?
Sandra: And of course manufacturers would say that this raises all sorts of safety issues, because the software was developed to comply with all the safety regulations, as well as with everything from CO2 emissions to health and safety standards and so on. But then again, preventing people from accessing their own software also stifles innovation. We've seen quite a bit of innovation from people hacking their own tractors. A couple of years ago someone with the same name as yours, farmer Riemer from Manitoba, hacked his tractor to drive itself. So we've seen a lot of innovation, and a lot of open source software that helps farmers improve their practices on a day-to-day basis.
Kai: Yes, but I think the problem for the farmers mentioned in the article is a much more everyday one. They need the tractor now because bad weather is coming or the harvest is due, the tractor doesn't work, and the local repair shop cannot help them; they would have to wait for the manufacturer's repairer to come out. So in that situation farmers have taken their fate into their own hands and figured out that they can download software from the black market, hack their tractors, have the local repairer help them out, and go about their business, you know, taking the horse manure, cow manure, out to their...
Sandra: You mean shit, don't you?
Kai: Yeah...to their fields. And I also find the argument about safety a little bit disingenuous, because if something on your tractor is so cumbersome to repair, will you not just keep operating a tractor that might be unsafe because you don't repair it in the end? So I find that a little dubious. But the bigger issue is that I think this is a land grab, right? If you can tie farmers into your own repair service, quite obviously that's an income stream. So does that not stifle competition in the market for tractor repairs? This is one of the burning questions that the courts will soon have to answer.
Sandra: Speaking of burning, our favourite story for April Fool's this year...
Kai: Which also is real, lucky it's real.
Sandra: A robot is burning Donald Trump's tweets as they happen.
Kai: Whenever Donald Trump tweets...
Sandra: A fairy dies...
Kai: Yes but also a robot kicks into action, prints out Donald Trump's tweet on a piece of paper and burns it.
Sandra: We will be including this video in the show notes for this week.
Kai: Yes, because the bot not only prints and then burns the tweet, it also video-records the burning and tweets it back to Donald Trump.
Sandra: @realDonaldTrump. I burnt your tweet.
Kai: A useless yet ingenious piece of robotic technology.
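For the curious, the bot's loop is simple to outline. Below is a hypothetical sketch of the listen, print, burn, record, reply pipeline; every helper here is a simulated stand-in for the printer, lighter arm, camera and Twitter API, not the real bot's code.

```python
# Hypothetical sketch of the tweet-burning bot's pipeline. All helpers are
# simulated stand-ins for the hardware and the Twitter API.

def fetch_new_tweets(user):
    # Stand-in: a real bot would poll the Twitter API for new tweets.
    return ["(a freshly posted tweet)"]

def print_on_paper(text):
    print(f"[printer] printing: {text}")

def ignite_and_record():
    print("[burner]  lighter arm ignites the printout, camera films it")
    return "burn.mp4"  # path to the recorded clip

def reply_with_video(user, video_path):
    print(f"[twitter] replying to @{user} with {video_path}: 'I burned your tweet.'")

def run_once(user="realDonaldTrump"):
    # One pass of the loop; the real bot runs this whenever a new tweet appears.
    for tweet in fetch_new_tweets(user):
        print_on_paper(tweet)
        video = ignite_and_record()
        reply_with_video(user, video)

run_once()
```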
Sandra: Which is where we want to end this week's April Fool's. We always talk about technology in terms of how it advances humanity, how it's good for health or agriculture or industry. But we should remember that as human beings we also need little robots that do nothing else but burn tweets.
Kai: Yes. And let's not forget that in technology every day is April Fool's Day, because last year's April Fool's joke is today's future and tomorrow's reality. Regardless of how outrageous something seems, if you can think it, someone will build it.
Sandra: And sometimes we just need to build shit.
Kai: Happy April Fool's Day.
Sandra: See you next week.
Kai: See you next week.
Outro: This was The Future, This Week, brought to you by Sydney Business Insights and the Digital Disruption Research Group. You can subscribe to this podcast on SoundCloud, iTunes or wherever you get your podcasts. You can follow us online on Twitter and Flipboard. If you have any news you want us to discuss, please send it to sbi@sydney.edu.au.