This week: A Vivid Ideas special debate with Rachel Botsman and Mike Seymour: Can I marry my Avatar?

Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

Real time Mike

Meet Mike in our TFTW podcast

Event recap: Mummy, can I marry my Avatar?

Rachel Botsman’s The Trust Shift podcast

Vivid Sydney


You can subscribe to this podcast on iTunes, Spotify, SoundCloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Our theme music was composed and played by Linsey Pollak.

Send us your news ideas to sbi@sydney.edu.au.

Disclaimer: We'd like to advise that the following program may contain real news, occasional philosophy and ideas that may offend some listeners.

Intro: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter. And I'm Kai Riemer. Every week we get together to discuss the news of the week. We discuss technology, the future of business, the weird and the wonderful and things that change the world. Okay let's start. Let's start.

Sandra: Today on The Future, This Week: a Vivid Ideas special - can I marry my Avatar? I'm Sandra Peter, I'm the Director of Sydney Business Insights.

Kai: I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group.

Sandra: While still on semester break we bring you a panel discussion from our recent Vivid Sydney Ideas Festival event. For our listeners who might not know, Vivid Sydney is an annual festival of light, music and ideas, and as part of this festival we showcased state-of-the-art research from the University of Sydney Business School.

Kai: Sandra and I were joined by noted author and technology thinker Rachel Botsman on a panel at the Sydney Ideas Festival which centred around the ethical and societal implications of living with digital humans, digital avatars and digital agents, and it featured a special presentation of Digital Mike, which we have mentioned on the podcast before. Digital Mike is a photorealistic Avatar which was built here at the University of Sydney Business School in the Motus Lab, which is part of the Digital Disruption Research Group. It's a collaboration between the university, with Mike Seymour as the lead researcher, and companies such as Epic Games, 3Lateral, Tencent and Cubic Motion, all of which have come together to build what is really a 3D photorealistic representation of Mike Seymour's face, which was scanned in 3D at places such as the University of Southern California in LA and Disney Research Zurich. A lot of effort went into creating this digital representation.

Sandra: So before we get into our panel we had a live demo on stage at Vivid Ideas...

Kai:...which consisted of what we called a bit of theatre, where I came on stage before the show started. Mike's eyes initially, and then his face, were up on a big screen visible to the audience, and I interacted with Digital Mike as we played a little deception game on the audience...

Sandra:...to showcase really the range and the possibilities that are opened up by living alongside digital humans.

Kai: So while the face on screen is puppeteered in real time by Mike Seymour, whose voice the audience can hear, it gives the illusion that I am talking to a fully digital agent. So let's hear this little play unfold.

Kai at Vivid Sydney: We would like to start you off with a little demonstration of the kind of technologies that we're talking about. We're seeing some eyes here on the screen. These are the eyes of Digital Mike. Now Mike is kind of like a digital assistant - you all know Google Home or Siri on your phone - except Mike has a face. So think of him like an upgrade of Siri. Hey Mike.

Digital Mike: Hey Kai how may I help you?

Kai: Mike can you book me a restaurant table for after this event?

Digital Mike: Sure Kai. Can you book me a table at the Trone Bar Restaurant today please?

Siri: I'm sorry, I can't make restaurant reservations in Australia.

Digital Mike: Sorry Kai I can't do that.

Kai: Wait, was that Siri, Mike?

Digital Mike: Yes. Actually Siri is terrific, I love her.

Kai: Well I'm sure you do but you were not supposed to outsource that task, Mike.

Digital Mike: Well I don't think you want me doing it. I'm a very intelligent Avatar - I don't see why I should be wasting my time doing things like that.

Kai: Now let's show the audience a little bit more of what you can do. So what are the kinds of questions that you can answer?

Digital Mike: I'm an incredibly intelligent being, I can answer anything.

Kai: You've said that before. So what then is the meaning of anything?

Digital Mike: 42.

Kai: Ah okay, well, fair enough. Now it becomes very clear to me that I'm not the right person to ask these questions, so I would like to go to the audience and see if anyone wants to ask a question of Mike. You will have to give me the question because I have to address Mike directly, so anyone here want to ask something?

Audience member: What's your favorite car?

Kai: Oh very good question. Hey Mike.

Digital Mike: Yes Kai how may I help you?

Kai: So Mike the question is what's your favorite car?

Digital Mike: That would be a Tesla.

Kai: Any other questions? Yes sir.

Audience member: How old are you?

Kai: Oh now it's getting personal. Hey Mike.

Digital Mike: Hi Kai how may I help you?

Kai: Mike, how old are you?

Digital Mike: I'm almost 1 year old.

Kai: Okay so almost one year old. So that's fantastic. Okay.

Audience member: Why is Donald Trump's skin orange?

Kai: Hey Mike.

Digital Mike: Yes Kai, how may I help you?

Kai: Why is Donald Trump's skin orange?

Digital Mike: Kai, I'm an incredibly intelligent being. But not even I have any clue about Donald Trump.

Kai: That is a fair answer. You seem to be a very intelligent being but...

Digital Mike: I'm an incredibly impressive being. I have an incredible IQ, handsome as well.

Kai: Yeah, wait a minute. Okay, I've been on the record before telling the world that I do not believe in artificial intelligence, and indeed I don't, and this is the time to actually reveal what is behind this curious curtain here. Everyone say hello to real Mike sitting here.

So we should outline for our podcast audience what has happened at this moment: I have pulled back a black curtain behind which Mike Seymour was sitting, puppeteering the digital version of himself up on screen. Mike is wearing a curiously shaped helmet with cameras and cables hanging off him, now revealed to the audience, where we continue our dialogue.

Mike: Well I thought I was going to be offscreen and obviously I'm not.

Kai: Now quite obviously there is a superintelligent being behind all of this, but other than that this is a puppeteering show. So why don't you just walk us through a little bit of what you're doing and why you're wearing a curiously shaped head rig like that.

Mike: Sure, so what I've got up on the screen here, and what you see me wearing, is an elaborate rig we've built for research. What we're doing is trying to see how people might react and deal with artificial intelligence when it can drive something like this rig. So we decided to produce a rig that a person could puppeteer. What you're seeing is basically high-end film tech. This is exactly the technology that's being used for the upcoming Avatar films; it's also what was used on Avengers. Actually my background is from the film industry and visual effects. I have two cameras mounted, one at the front and one at the side, and they're interpreting my face. In other words, the cameras are working out what my human face is doing, and you'll note that I don't have any markers or any special paint on my face. So from the computer effectively guessing, using artificial intelligence, what my face is doing, it informs another computer, the second one to my left, which then interprets that into a 3D model, which you saw earlier - the wireframe. And then finally we produce a high-resolution version of me which comes up on the screens, running at about 60 frames a second, so that's considerably higher than feature film speed. Of course what we expect to be able to do is plug a giant AI engine into the back of this, or for example Siri, and that voice, that intelligence, would drive this face instead of me driving it.
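For listeners who want a mental model of that loop, here is a rough, hypothetical sketch in Python. Every name in it is illustrative - the real components (Cubic Motion's solver, 3Lateral's rig, Epic's UE4 renderer) are proprietary systems, not these stub functions - but the capture, solve, rig, render shape and the 60 frames-per-second budget are as Mike describes.

```python
# Hypothetical sketch of the real-time facial-capture loop described
# above. All functions are illustrative stubs, not real vendor APIs.

import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # about 16.7 ms per frame

def capture_frames():
    """Stub: grab synchronised frames from the front and side
    head-mounted cameras (no markers, no face paint)."""
    return "front_frame", "side_frame"

def solve_expression(front, side):
    """Stub: a machine-learned solver infers what the human face
    is doing and outputs a set of expression weights."""
    return {"jaw_open": 0.2, "smile_left": 0.7, "smile_right": 0.6}

def drive_rig(weights):
    """Stub: map expression weights onto the digital levers that
    deform the 3D face model (the wireframe stage)."""
    return {"mesh": "deformed_face", "weights": weights}

def render(mesh):
    """Stub: the game engine draws the high-resolution face."""
    pass

def run_pipeline(seconds=1.0):
    """Run the capture -> solve -> rig -> render loop at ~60 fps."""
    deadline = time.time() + seconds
    while time.time() < deadline:
        start = time.time()
        front, side = capture_frames()
        weights = solve_expression(front, side)
        mesh = drive_rig(weights)
        render(mesh)
        # Sleep off whatever is left of the 16.7 ms frame budget.
        time.sleep(max(0.0, FRAME_BUDGET - (time.time() - start)))

run_pipeline()
```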

Kai: So Mike, we need to talk about why we chose to have the face look like your face, you know, with the little caveat that this face actually doesn't age. So this face is a year old, and as time progresses it will be interesting to see.

Mike: Well as time progresses, I'll never age. It's terrific. So what we did is we decided to use me, because I'm at Sydney University and I'm incredibly cheap. And also, if I was flying over to the different partners that were working with us around the world, we only had to buy one airline seat. So we've partnered here, as you saw, with MOD Productions (Mikaela's going to wave at my left), and then also with the guys that are driving this head rig, which is Cubic Motion. Cubic Motion is based in Manchester and they've got the AI engine that's interpreting my face. In addition to that, they hand that information to that second computer, which is driven by a team in Serbia called 3Lateral, and 3Lateral has the digital levers that drive the digital face. And then what you're seeing on the screen is provided by Epic Games, who make the UE4 engine - a game engine. So this is basically a game engine running, and the good news is very soon you'll actually be able to get a copy of my digital head free on the Internet that you can play with yourself.

Kai: So I don't pretend that I understand everything that he just said. Let's just say it is an incredible technological feat to bring a real-life-looking face to life in real time.

Mike: All of this will be coming to things like mobile phones. In fact, as you know, the iPhone - this particular one - has a face reader. And so what we're doing now with this elaborate head rig we expect to be able to do off an iPhone in a short number of years.

Kai: So the point is that at this point in time it still takes an incredible amount of technology and effort to do these things. But the technological curve is so steep, so to speak - the speed is so fast - that soon enough these technologies will be in our pockets. And that is part of why we are doing this: at Sydney Uni, with this research, we want to be slightly ahead of the curve, we want to do research on these technologies, and we want to have these kinds of discussions that we're about to have before these technologies are rolled out by companies such as Facebook, Google and the like as consumer technology, given the implications that these technologies might have - and you might envision in your own head where these things might be going. But this is why we put on this panel. So what we're going to do is start the panel now. Mike will get changed, and we have to say goodbye to Digital Mike, who has just sunk into the abyss. So I would like to introduce our speakers now. Our first speaker is Rachel Botsman, who's a really well known speaker and writer, and also a lecturer at the University of Oxford. She's given many TED talks - you can look her up online. Her latest book is called "Who Can You Trust?". So welcome everyone, Rachel Botsman. [CLAPPING]. Thank you Rachel.

Our second speaker is Dr Sandra Peter, a colleague of mine at the University of Sydney Business School. Sandra is the Director of Sydney Business Insights, which is what we call the University of Sydney Business School's engagement platform - now that's a fancy word for a think tank and influential voice that translates much of what we do in research for the rest of the world. Sandra's work is again very future oriented: Sandra asks very difficult questions of the future, and her point is that predictions don't work. We have to have different ways of engaging with the future, through imagination, through research such as this. Sandra, it's my pleasure to invite you to the panel. [CLAPPING] So this is a collaboration between Sydney Business Insights, the Digital Disruption Research Group and the venture that we've founded together, called the Motus Lab, and the man behind Motus who drives this research from the technical end is Mike Seymour, who is really the expert in this topic. Mike will lead us through the panel, and I will shut up now and sit down and just follow Mike's lead. Thank you.

Mike: Can I also say thank you guys for coming - I've been sitting in the tent for an hour, so it's good to be out. So I guess in one sense I'm the technical boffin, but in another sense we don't want to just focus on the technology; we want to focus on some of the interesting ethical and moral implications of where this could be heading. So in that respect, you guys are the experts and I'm going to ask the questions. I guess I'm going to start with you, Sandra, and ask: do you think that this technology is going to be something that's going to be widely out there?

Sandra: Mike, I think we will see this in as soon as two years, three years. I'm not making predictions here, but we've already seen Apple come out with the new iPhone, and we've seen that the technology on there lets you animate emojis. So you know, you can animate little panda bears and little cats in a very similar way to what you've presented here - but the step from that to animating faces, to animating pictures that we already have, is small. And think about the fact that many of these companies already have a lot of that information. Think about the faces that Google has, that Apple has, that Facebook has. So I think that technology is actually fairly close, and let's not forget that it's not just these companies working on these technologies. There's a whole bunch of research coming out of China, out of companies like Tencent and Baidu and Alibaba, that is coming to complement this.

Mike: So Rachel, you've done a lot of work on trust. Is there any fundamental difference here in our relationship to technology when you start putting human faces on stuff? We're already anthropomorphising stuff. Where do we sit, do you think?

Rachel: Let me ask the audience a question. How many of you were disappointed or less impressed once you saw real Mike? Just raise your hand. Could someone say why that was? What was the experience when you saw him? You felt deceived. Did anyone just think it was Mike being filmed?

Audience member: Yeah.

Rachel: So sorry - my kids don't really care that that was super impressive; they're exactly the same as you. And it was really interesting, because I have... (I think they're slightly younger than you) they're six and four, and they were miserable with me this morning because I couldn't take them to karate and swimming, because it's all about them, right. So they said, well, where are you going, and I said, well, I'm going to debate whether I'm allowed to marry an Avatar, and my daughter was like, no, you cannot marry an Avatar. And I said why? It wasn't concern over her dad. She actually said you should ask if he is a lawyer, which I thought was very perceptive. But the point of this was, the reason why she didn't want me to marry an Avatar was that she was concerned that I was going to go to another world, and that I wouldn't live on the same physical plane as her, so that I wouldn't be able to ever see her again. And my son said, well, that would be cool. So in his mind the technology becomes cool not when it's present in the current physical reality but when it takes him to another place, which I thought was really interesting. So to answer your question, I think what is very exciting and rich with potential, but also very frightening, is: how will we know what to trust is true? How will we know what to trust is real? And will our children even care? Because the difference between these things will be so hard to discern, and they won't even be impressed by what these technologies are, because they'll be so normal to them.

Mike: Yes, but to a certain extent there are two aspects of trust here, right - do I trust that the Avatar is giving me correct information, do I trust what it's saying; but also, if that Avatar was you talking to your children, them having trust that it is actually you that's driving it. And if I was going to have an Avatar, it becomes quite significant who gets to control it. How does that happen in the world?

Rachel: I think, if the Avatar was me, my children would think that they would have the power and control to take over me - they would think this was a really good thing, that in some way they would have more control over virtual mummy than real mummy. And the reason why I say this is my son plays Minecraft - or as we call it, Minecrack; we've had to ration it: you get one hour, and there's a clock, and it goes off. But the reason why I'm saying this is, I said to him "Jack, I grew up in the world of Tetris. It's very clear rules, you never win, the system always beats you, they come down from the top, and I don't understand this game", and he said "my world, my rules". And this is the part that worries me: I think they think they will be able to control the technology and have influence over it, and they won't understand the line between what they can and cannot control.

Mike: But Kai, we've got other issues, because this same kind of technology can be used to produce... I could, theoretically, produce a digital Kai who I could then have saying things, and once that's a world leader, that becomes a whole different area of trust. It isn't that I can control it; it's that anyone could be producing videos or having something interact and impersonate someone.

Kai: For those of you who felt deceived, what I find quite remarkable is how ready we are to believe that we can create intelligence in an artificial digital Avatar. That to me is really interesting, especially given the fact that the field is nowhere near creating artificial intelligence - something that would actually qualify as intelligence, that could have a conversation with you. All we can do right now is simulate conversation by doing brute force analysis of what you said and matching that to some text fragment that in the past has been said in response to the words you just said. There's no intelligence, there's no coherence in that; it's pretty much a simulation. But that's interesting.
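As a rough illustration of the brute force matching Kai is describing - a hypothetical sketch, not how any particular assistant is actually built - a retrieval-based agent simply compares your words to a library of past exchanges and replays the closest stored response:

```python
# Hypothetical sketch of a retrieval-based "conversation" agent:
# it matches your words against a small library of past exchanges
# and replays the closest stored response. No understanding involved.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny corpus of (past utterance, stored response) pairs.
CORPUS = [
    ("how may I help you", "Hey, how may I help you?"),
    ("what is your favourite car", "That would be a Tesla."),
    ("how old are you", "I'm almost 1 year old."),
]

vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform(utterance for utterance, _ in CORPUS)

def respond(utterance: str) -> str:
    """Return the stored response whose prompt best matches the input."""
    scores = cosine_similarity(vectoriser.transform([utterance]), matrix)
    return CORPUS[scores.argmax()][1]

print(respond("Mike, how old are you?"))  # -> "I'm almost 1 year old."
```

Anything outside the stored corpus gets whatever response happens to score highest, which is Kai's point: there is matching, but no comprehension.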

Sandra: But coming back to that deception: would it have been less deceitful or more deceitful if I were to puppeteer his Avatar? That is coming - a moment where I could pretend to be Mike. Say Mike can't be with his friends this morning, so I show up in Mike's place and, you know, talk to them over a cup of coffee, or tell them what I think about the news, about the game last night. Would that be less deceitful?

Mike: So now imagine I have some horrific accident and I get a burn on my face. And I could use this technology to have a face that didn't have that imperfection. And so as I'm speaking on Skype to prospective people in a business context, I could look prettier, in the sense that I could look less marred by this thing that I'm embarrassed by. Is that deceit? And if it is, what about helping the person that feels bad about the fact that maybe they've lost an eye or whatever and feels sort of self-conscious?

Sandra: Or who is no longer, you know, 25 and wants to appear like they're 25. On the one hand you could think about the great things this could bring to people who normally would not go out, who might be ashamed of their appearance; but on the other hand, what if I want to look 25 in all of my videos? What if I work in business and I would find life much easier with your Avatar than with a female Avatar? What would that do to a conversation around gender?

Mike: Maybe we're going to condition society to pretty people. And the person that has the natural imperfection - maybe I've got a birthmark; I'm personally not ashamed of that, but in a world where everyone does this digital makeup thing, when I walk out in the real world people are kind of surprised.

Rachel: I think there's an ethical distinction between the virtual manipulation of yourself and the manipulation of someone else. So the example you were talking about, Kai, in the realm of politics is really interesting. What's interesting about that is the compressed timeframe, right - so our concern over the U.S. election and Cambridge Analytica, and now we're seeing Trump actually saying the video of him talking about women wasn't him because someone manipulated it. And I'm sure some of you have seen the Obama videos with Jordan Peele - it's this year that the conversation has changed, that he can now use this technology as an excuse. I think there's a whole set of ethical questions that we can tackle there, and then there's a whole set of ethical questions where it's you as an individual trying to use this technology to enhance who you are, what you look like, your physical being. I see that as an extension of what we're already doing, and maybe I'm being naive around this, but if it's me and I'm digitally enhancing myself, how is that different from cosmetic surgery or extreme makeup? Isn't that my right, to make that enhancement?

Mike: We're about a year away from me being able to Skype into a camera in English and have a Korean-speaking Mike at the other end - in other words, my lips are lip-synced to Korean even though I can't speak the language. I would say that is showing something I can't do. I understand the makeup thing, but it seems like that's getting close to... I mean, I'm presenting myself as somebody that speaks their language. Now if I get something wrong because of the translation software, are you holding digital Mike accountable, or are you holding me accountable, or are we one and the same thing?

Rachel: When it comes to digital enhancement or manipulation, I think a lot of it comes down to intent. What is your intent behind doing this? Are you intentionally trying to deceive someone, or are you doing this from a place where there's a good intention behind the deception in some kind of way?

Mike: So say you're Korean, I'm Australian, and I say something to you and the translator gets it wrong - aren't you ascribing the blame to me? It's my face talking; I think you would, without even thinking about it, assume Mike Seymour was being rude, not the technology.

Rachel: I think that's a different question. The question of who's accountable when things go wrong, or when the virtual version of ourselves says or does something wrong - is it the creator? Is it the virtual version? Is it the real version? There's a whole other set of questions there. But I think what we're talking about is how ethical it is to make the decision to represent ourselves as something completely different from who we really are, whether it's by colour or by language. Do we have that right as human beings, to represent ourselves in a very different way if the technology allows us to do so?

Kai: Yes, and I want to take this one step further. Sure, we could argue that everyone should be allowed to represent themselves in the best possible way. But the question is, what is that best possible way? Because that is not up to me, right - it's what we collectively define. We already have problems with body image, where it might feel like an individual decision to do something, but it's very much driven by cultural and social norms that come about by many individuals doing these things and these things becoming normalised. Now when we think about disability, for example, and the ability to present myself without that disability online - what does that say? It says that I should present myself differently to who I am, because that is somehow normal. Now, normal is not something that I get to define; that is defined for me. And that might make me more comfortable, and indeed I might be better integrated into the digital world, but the more we do this, the less we become accustomed to actually seeing people who are different in the real world. So now we have created a situation where...

Mike: Your example would reinforce the sexist behaviour.

Kai: Yeah. So now we have a situation where I'm more comfortable falling in line with the cultural norms online, but I'm no longer willing to leave the house - and what have we achieved then? So I think we need to think this through.

Sandra: Let me challenge one of the assumptions that I think everyone here has made, which is that this will be our decision in some way. The four of us on stage here all work at a university and are part of a university degree. Now what if my employer decides for me that my students are best taught by a different version of me? Or what if my employer actually hangs on to the digital me after I've left the university - let's say I decide to move to a different university?

Rachel: Do you own the rights to your digital self?

Sandra: I think those decisions are still to be made. We know companies like Daimler and a few of the banks are looking into this technology, and they will make some of these decisions for us. What if I decide to sell that self? Let's say I leave the university, or God forbid I die, and the university still owns that version of myself. Who will get to decide what the best self looks like?

Mike: So Kai, I have children - mine are older than yours; you have younger children. You work really hard - I mean, it's hard to believe, isn't it, that professors work hard, but they do work incredibly hard. Maybe some days it's hard, as a professor, to get home to read your kids a bedtime story. Is it ethically okay to have a digital Avatar of you read your kids a bedtime story, given that it'll read the same story over and over and over again without ever tiring? There's a Mr Happy t-shirt in the back row - so if the kids want Mr Happy again and again and again, your digital Avatar will happily do it.

Kai: Well, let me say no. Now look, this is a really difficult question, right, because first of all, obviously the problem might be that, hypothetically, I'm not home for the children - so that's the problem, right. Now I don't have to solve that problem anymore, because I can employ my digital self to pretend daddy is home, and that might work while they're two or three or four years old. But maybe that will just raise a whole cynical generation of people who'll happily accept that real human contact is something that is incredibly rare and not for everyone. And again it comes back to: what are the cultural norms, what is it that we come to accept as normal, as these technologies become consumer technologies?

Rachel: I think it's a really good point. It was really interesting with Aristotle... do you know Aristotle? It was Mattel's version of Alexa designed for children.

Kai: Oh not Greek philosophy...

Rachel: No, it's not. It was pulled from the market. They spent years developing this, and it was designed to be the digital assistant from when your child was born all the way through to when your child was 18 - it could grow with your child. And the reason why they pulled it was because they were surprised at how quickly parents were outsourcing parenting to the assistant, so the assistant would get smarter and learn how the baby wanted to be soothed. I thought this was really interesting, and I was very critical of Aristotle and Mattel, and then I thought to myself, it's really easy to take this ethical high ground. And then my mind went back to a situation when my son was really young and I had forgotten that I was meant to be on the radio, and the phone rang, and I'm thinking, oh my God, how do I go live on air with a screaming baby? So I put Jack in his cot and I phone his Nonna and I say, Wendy, can you go on Skype and can you read Jack a story and sing to him - just keep him quiet for five minutes till I get through this radio interview. So I put the iPad in the cot, Jack's sitting there interacting with Nonna, and I think, right, I can get through this. Five minutes later I hear this screaming, because he's thrown the iPad on the floor. Now, in that situation, I can sit here and say I would never hire a virtual assistant - but I would have pressed go in that instance. So I think we make these decisions without thinking of the social pressure, without thinking of the context, which would encourage us to actually outsource those parenting moments in time.

Kai: And the reason why we chose the title for this session is because it is important to talk about children, because children happily take anything to be normal in this world - we know this from children growing up in all kinds of ghastly circumstances and regimes around the world. What we come to accept as normal is what is being done around us. So the earlier we have these technologies in people's lives, the more we're shaping the society going forward and ingraining things as normal.

Mike: What if I had a maths tutor on my iPad that was a digital face that would talk to the person, and it would use the camera so it could see your expressions, and it would speed up or slow down and respond to you - if you were a kid, it would say, oh, you don't seem to be following, do you want me to repeat that? And what if that digital face on the screen, to the notional seven-year-old girl that I'm handing this to, is not just a maths tutor but a digital person? What if the person on the screen is a digital version of herself, maybe six months older, to make it aspirational? So now this girl has a version of herself that is already able to do the problems. She can see she can do the problems; it's facilitating her ability. As an educator, what do you think?

Sandra: I think it's a difficult question, because I have to ask what's behind it. Is there something akin to artificial intelligence behind it, or is it a 55-year-old teacher who is pretending to be the little girl?

Mike: No, like a Siri type. You know, it's like I've developed a maths tutor; it's got a narrow field - it's not going to be able to answer questions on religion, but it can answer questions on school-level maths.

Sandra: I think there is definitely potential. Very similar to the conversation we had before: we often go to the "oh no, keep it away from me", and I think that's the normal reaction for many of us who see the potential downsides of this technology - but there are definitely upsides. Just like in the conversation we've had here: we spoke about kids, but what about elderly people who might be very, very lonely, and at this point as a society we haven't figured out how to put more people in their lives? What if having digital Avatars is an option there? Same thing with children - but then, what are you priming them for? Is this the only way they'll learn, from a person who is their age, who has certain features? And also, many of these technologies actually are, as Kai said, quite dumb, so they might not be able to pick up if the child is in distress, if the child doesn't understand things, if they have reached the limits of what they can do and how to improve, so...

Mike: I'm not suggesting leaving them for a year with a computer. But I mean for an afternoon, for an hour.

Kai: I want to pick up on this, right, because my observation is that the progress we're currently making in digital representation - the faces, the digital faces - is running far ahead of the creation of intelligence in machines, which we haven't mastered. We can simulate, under very narrow circumstances, something that might sound like a comprehensible conversation, but the moment we step outside of this, things become entirely random and nonsensical. So the danger that I see is that we're creating something that, for all intents and purposes, looks like and expresses facial emotions like a real human, but has no capacity to actually be emotional, empathetic, understand or pick up on what's going on in front of the screen beyond some very narrow surface layer - oh, you're smiling, you must be happy; you're frowning, you must be angry. And what if those situations go horribly wrong? Someone confesses to the digital companion that they're about to commit suicide because they feel lonely. There's no capacity in the machine to pick up on this.
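To make that "surface layer" concrete, here is a deliberately naive, purely hypothetical sketch of what expression-to-emotion mapping amounts to. A lookup like this can label a face; nothing in it understands distress.

```python
# A deliberately naive, purely hypothetical sketch of surface-level
# "emotion reading": a fixed lookup from detected expression to label.
# It can tag a face, but nothing in it understands what the person
# in front of the screen is actually going through.

SURFACE_RULES = {
    "smile": "happy",
    "frown": "angry",
    "raised_brows": "surprised",
}

def read_emotion(detected_expression: str) -> str:
    # Anything outside the narrow rule set is simply unknown.
    return SURFACE_RULES.get(detected_expression, "unknown")

print(read_emotion("smile"))    # -> "happy"
print(read_emotion("despair"))  # -> "unknown": no capacity to pick
                                # up on someone in real distress
```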

Sandra: And we've seen this with Siri. We've seen this with people scheduling "remind me to kill myself", and it said, okay, what time should I remind you?

Mike: My favourite one was "Siri, call me an ambulance." "Alright, I'll call you An Ambulance. Hello, Ambulance." [LAUGHING]

Kai: We can make fun of this, but it raises so many questions. Who is responsible? How could anyone let this happen? And so we have to be clear that what we can do with faces, and what we're about to do, is very much ahead of what these things can actually respond to.

Sandra: And can I also point to the fact that we as humans are really bad at understanding that these things are not real. A good example: a couple of years ago there was an Instagram account created for L'il Miquela, and L'il Miquela was this hip woman who hung out in all the cool places and went to the good restaurants and wore the latest Prada shoes and gave advice about dating. And she wasn't real. She still has about a million followers who listen to her every word and, you know, buy the T-shirt that she buys because it's softer than the other ones. She doesn't wear T-shirts. But it was revealed a couple of months ago - and we talked about this on the podcast - it was revealed she is not real. She's digital - still images, same as Digital Mike but just still images - with a company behind it. It didn't make any difference to her followers. People still asked her how she was, people still asked what she prefers, and followed her exactly the same way. So we are very bad at understanding what these things are once we slap a human face on them.

Rachel: So virtual speakers are becoming a really big thing at conferences, particularly when the speakers are becoming too busy and famous to show up. So I was at one with Tony Robbins... No, it's just me...

Sandra: Just checking.

Rachel: No, I actually wish I could send my virtual self sometimes - not today - but it was a virtual Tony Robbins being controlled by the real Tony Robbins in LA, and this woman sitting next to me says...

Kai: Or was he?

Rachel: Maybe someone in his team, exactly. She says, "He's so handsome" - which is weird in itself, because Tony Robbins is not handsome to my mind. "He's not real." "No, he's real." "He's not real." She so wanted it to be the real Tony Robbins that she would overlook so many different things. And I love that ABBA is going on tour, because the real ABBA hate one another, but they can put virtual ABBA - ABBA from when they liked one another - on tour.

Sandra: They can also put up dead people - Tupac Shakur appeared at the Coachella Festival. But he's dead.

Rachel: Right, but I think this is a really interesting question as to where this desire comes from - we kind of know - to want to believe these things are real.

Mike: Well, let me swing it then and pick up on an earlier point you made about elderly care. So Rachel, it's completely feasible - and there have been studies already done on this - that the iPad I described earlier has an artificial face on it, and it's in an older person's home, because we know they'll physically do better at home than in a nursing home. So it's in the home. They know it's not real, but still, every morning it says hi and asks a couple of questions, like how do you feel today, what's going on?

If the person can't answer the Avatar's questions, it sends alerts to family, friends or relatives. It's not taking the place of that carer; it's just saying that we're going to check in with this person every morning. And then it might say, don't forget, today you take the red pills for your heart, and the person just says yes. And in the studies that they did in Europe, they discovered the two faces that worked the best on the screen were a) their pet, like a dog asking the questions, and b) their grandchild. Now, they knew it wasn't their grandchild, right, but it was so psychologically pleasing to them to see a digital version of their granddaughter talking to them that they would use the system. And not only that, but of course they'd show all their friends - they were proactive in using it.
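As a back-of-the-envelope sketch of that check-in flow - entirely hypothetical, not code from any of the European pilots Mike mentions - the logic is simply: ask a couple of questions, escalate on silence, then remind.

```python
# Hypothetical sketch of the daily check-in flow described above.
# All names here are illustrative, not from any real pilot system.

QUESTIONS = ["How do you feel today?", "What's going on?"]
REMINDERS = ["Don't forget: today you take the red pills for your heart."]

def morning_check_in(ask, alert):
    """ask(text) returns the person's reply, or None for no response;
    alert(text) notifies family, friends or relatives."""
    for question in QUESTIONS:
        if ask(question) is None:
            alert("No response to this morning's check-in")
            return False
    for reminder in REMINDERS:
        ask(reminder)  # the person just acknowledges with a yes
    return True

# Example wiring, with canned replies standing in for the avatar UI:
replies = iter(["I feel okay", None])  # second question goes unanswered
ok = morning_check_in(ask=lambda q: next(replies, None),
                      alert=lambda msg: print("ALERT:", msg))
print("check-in completed:", ok)  # -> ALERT ... / check-in completed: False
```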

So that seems to me like very doable technology, and it sidesteps Kai's problem about the faces getting too far ahead of the intelligence. Do we have any issues with that, or do we fall into the same problem of dehumanising the person because of the computer?

Rachel: So I think one of the biggest applications of this is in social isolation and loneliness, particularly in elderly care. I mean, there was a frightening study done in the UK recently that found that 25 percent of the population over the age of 70 didn't see anyone - friends or family - more than once a month, which is just horrific. But the issue I have with it is: who's deciding what the limits are? You can't decide how attached and dependent that person is going to be on that virtual version of their pet or their grandchild or whoever they choose it to be. And correct me if I'm wrong, because I don't know enough about the technology, but we're so far off from being able to embed empathy. So if they get to a point where they become too trusting of the virtual pet or the virtual grandchild, and then it doesn't give them what they need back, does that fix the loneliness and the isolation? Or does it make them feel more alone, because they fell in love with this thing?

Mike: Well, to extend the aged care metaphor, the other use that's happened in Australia is a pilot scheme set up for disabled people, where a virtual assistant appears on the screen and reads and listens and does all the facial stuff to follow along with what you're doing, but it doesn't require you to use a keyboard and mouse to do a lot of stuff. And so in a society where we're getting increasingly computerised and requiring people to go through a computer - if you're elderly, if you're disabled - isn't it a good thing to suddenly have a face you can talk to? You don't think it's human; it just facilitates you doing stuff without having to be dexterous.

Kai: I'm mindful of the time and that we want to go to the audience, but I want to say one more thing. As humans we're hard-wired for faces. It's the first thing that we can recognise as newborns - there's actually a region in the brain which can pick up faces and facial expressions, that we don't have to learn. So while that is incredibly important and useful for us as social beings, in creating the social fabric that makes society work, it also makes us vulnerable, because this can be exploited. This is exploited in advertising anyway: we have big-eyed children on TV, and it sort of speaks to us without actually having to speak to us. Now that we can create digital humans, we don't actually have to convince humans to do something shady; we can just create a digital version that will do anything we want. This can be incredibly powerful in the right situation, and incredibly dangerous, because we have a certain defencelessness to faces. Okay, so what we're going to do is take some questions now from you guys, and then we'll come back.

Audience member: This is a question for Rachel - you've said in the past that trust is a new currency. So do you think the advent of this sort of technology is going to increase or decrease the value of trust?

Rachel: I think it's going to accelerate how easily we give away our trust. I don't think it's going to undermine the role of trust in society, and in a weird, strange way - and maybe this is the optimistic side of me - I think it is going to help us place more value on human trust. My hope, and this might be a naive comment, is that through these virtual interactions we actually realise the limits, and we start to recognise and embrace what is fundamentally human and cannot be replaced by a machine. So in a weird way - and I think you're seeing this already, like the fake news stuff, where people are starting to say, okay, I place more value on professional journalism, and they're starting to ask what the truth is - it's the same kind of thing: people will start to say, am I giving my trust away too easily to virtual beings, and do I need to be responsible for that and more in control of that? So it's possible that these technologies actually might reinforce what it means to be human.

Audience member: So do you think at some point we could get an Avatar to just pretty much do everything for us? Like, you could tap something on your phone and I guess it would turn on your PlayStation, and you could just go upstairs and play on that. Or it could put on the stove?

Mike: So the examples you just gave are not very far away from what I can do with Siri now - I just need to have those devices connected. So my attitude is, we're going to give you the tech that you think is awesome and cool, that'll do all the things you described, but I'm pretty sure Kai will point out that it'll just appear clever; it won't actually be clever.

Kai: So the issue is that we don't really understand intelligence. We want to somehow recreate who we are, and this intelligence, in machines - and we're not doing any of this, because we don't understand it. What we do is clever tricks that appear intelligent, or we simulate something that intelligent people do, like playing chess or playing Go. These machines don't play Go. They have incredible computing power and they can win the game, but they don't play the game. This is important. Why do we play games? We play games for the enjoyment, we play games because we want to play games. We want to challenge ourselves. We want to be better than the other person. These machines have none of that, because they don't understand what a game is. They're just programmed to win. So it's very flat. There's no humanness in there. They don't actually play the game, nor do they want to play the game, nor can they go and play another game. So we can automate certain tasks in our lives. That's not new - we've done that a lot, we will keep doing it, and that can help in many parts of our lives. We can help doctors be better at finding cancer cells, but these things can't have a conversation with a patient or come up with a treatment plan that actually fits that person. So yes, we could automate a whole bunch of things in your house to free up time so that you can go and play on your PlayStation, but the question is, why would we want to do this? Cooking dinner - many people cook because they want to cook. It's enjoyable. We enjoy something that is hard and difficult. We enjoy having a meal. So many of the things that we do, we do them because that's what makes us human. We can reduce everything to a chore and say, oh, if we could automate that, then we would. But that doesn't mean that we're actually creating things that are like us. So I think that's important to keep in mind. Automation will do interesting things for us, but we're not creating anything that is likely to replace us any time soon. Many people find that is bad news, but I think it's good news.

Rachel: So I read Yuval Harari's book "Sapiens" a couple of years ago, and now I'm listening to it - and it's really interesting: when you read a book and then you listen to it, you hear the book very differently. And the point I missed, which is a fundamental thread throughout the book, is the expectation of automation throughout society - how the expectation has stayed the same: we'll free up time by automating or outsourcing or accelerating a task, and that time will be filled with something better. And we f*cked up better. I did not say that word.

But it's really important, because this feels like a deep human flaw - this isn't a new phenomenon; it has happened throughout our civilisation: we discover a technology, it can accelerate, automate, and we think we're on the path to something better. Why do we believe that?

Kai: It's almost a religious belief in the power of technology. I want to come back to your example. We can build something that is far better at playing PlayStation than you. But why would you automate that? You could just have a robot do the PlayStation playing for you - then you've got time to cook.

Audience member: How far would you allow laws to come into the creation and use of Avatars - would that stymie their creation, or their use, or the ability to use them better?

Sandra: I think fairly soon some of these technologies will be consumer technologies, and - knowing that there are kids in the room - I will point to the fact that right now we have a problem with deep fakes. These are videos where we use technology to put some famous person's face on an adult actress's body, or a male's. That technology is under no one's control, and it's consumer-level technology: I can go on the Internet and make these things with the face of someone famous. So as to whether we will be able to enforce this - I don't think we're quite there yet. There's a lot of work to do, but these sorts of things help.

Rachel: There are two things that come to mind. Who sets those laws? With all due respect - the Australian government? Regulators? So where the laws come from, I think, is a really interesting question, and is there a new role for philosophers in society? We need new frameworks that are more than legal frameworks - social contracts - and where that thinking comes from, I think, is a hybrid; I don't think it is just from academics and education. So that's a big question: in society there's a massive void and a massive problem there. And then, to your point Sandra, there's the implementation of these things, the effectiveness of these things. The other day I was on a panel with someone from Facebook, and they asked him a question about stealing data, and he said, look, the problem is, if someone went into your house and stole the toaster, you'd understand that, because it's a physical thing and it got removed. But there is no law around what data stealing even looks like. And I think this is that problem, but tenfold. So where the law comes from, and then how effective it is in the implementation, are huge societal challenges that we have coming up.

Kai: Just on that question: while that might sound like a good idea, I want to point out what it would take to enforce this. Basically, someone would have to be able to watch what I do in my home and on my computer, so it would require us trusting the government to monitor everything that we do on our computers. Otherwise we can't enforce the fact that I might puppeteer a different Avatar when I'm on the Internet. So I think sometimes these things are not only impractical, they're a no-no, because we might trust the person trying to enforce the law even less. We might not want to put in place the kind of systems that are necessary to enforce some of those laws. So I think it's really difficult.

Mike: Someone's been waiting here.

Audience member: We talked about how at the moment there is very narrow capability, in that we don't have AI yet. But with the learning algorithms, as we continue to develop them, it's coming together, and faking a real human is becoming easier and easier, as we've seen with Google and some other things that are happening. Thinking forward even a decade, do you see that as complicating or simplifying the ethical discussion? And as you pull that together for "Mummy, can I marry my Avatar?", how does that ethical dilemma play out in your mind?

Mike: Can I just say right away that the one thing the film The Terminator got right is that the Avatar of the future will be incredibly patient, will always be there, will never let you down, will always listen to you, will always have time and will never be bored by what you're saying. If you don't think that is going to cause people to fall in love with their Avatar, you don't know people. That's just my point of view.

Kai: They also won't understand zilch, because these entities - for want of a better word - don't live in our world. They don't grow up in our world, they don't get to experience our world, they don't have bodies, they don't have emotions. It is pretty clear neurologically that emotions actually underpin cognition; it's not like we start with thought and then somehow get emotion on top of that. There's no reason to believe that computers are anything like minds, right. There's any number of reasons why we will not be able to simulate human cognition, human minds, human emotion in machines in a genuine way that makes machines sentient. So I think that's fiction. The interesting question is why we are so ready to believe this and why we believe we can build this. I think there's enough evidence that we can't. And I think we will see soon - in the next two, three, four years - this whole AI thing experience a real reality check, where a lot of these things will prove problematic. But the point is that we can simulate a believable human entity and deceive people. That's the issue.

Mike: And we could definitely produce something that looks like it's clever even though it's not clever.

Rachel: So I was sitting on a plane recently next to someone who runs Google Labs, and at a very macro level we were talking about the ethical issues of our time, and she pointed something out to me that now seems so obvious. She said, for previous generations the debate was around sexuality - our right to be heterosexual or homosexual; that was a previous generation's debate. The debate today seems to be gender - our right to be whatever gender we want. And her prediction is that the next big ethical debate, where somehow government and law will try to get involved where they really don't have a place, is on virtual beings and marriage and cybersex and relationships. And when you put it like that - I don't know if you disagree - but will this debate in 30 years' time be the debate my parents were having around the right to marry someone of the same sex?

Kai: There's a qualitative difference that I want to point out, which is that the discussions around sexuality and gender are all discussions about us - what we discuss is us, and we get to experience it and we can empathise, because we all have sexuality and gender and whatever. The moment we bring machine entities in, these entities cannot participate in that discussion, because they don't have these experiences. So it becomes a different kind of discussion. It becomes weird - and we can pretend that they do, which makes it even weirder. So I don't have an answer for what that will be like, but I point out that this is a different discussion, and it reflects on us in the way we treat these entities.

Audience member: I'm an English teacher - a high school teacher - and I know when you take phones away from students they completely go crazy, they lose their shit. And I was wondering - the question is about dependence on technology and the Avatars - what implications is this going to have for their resilience, to survive when we take these technologies away, or when they run dead, or the battery runs flat?

Sandra: I think we quite often think that people are resilient because of something in them, and that if we hurt that thing in them, they will become less resilient. And I would put it to you that resilience actually resides within your broader network. Think, for instance, of the most resilient people in the world - people like refugees who risk everything, who risk their lives over high water and high fences; they're quite resilient to be doing that. If you take away their network - their family, their children, their friends - and lock them up, they suddenly become less resilient. They're the same person, but now they stop eating, they want to commit suicide, and so on. So the idea that students' resilience would rest just in them is one thing; I think it's underestimating the power of the school, the parents, the entire network around them.

Rachel: So I'm about to move back to the UK, and there's a massive movement in the UK - my kids are about to join a new school where phones are completely banned. You cannot bring a phone into school, and it's pretty new; the scheme has been in place for about six months to a year. And one of the reasons why they've done it is resilience. They're actually saying it's an addiction: they can see the detox, they can see some of the kids not being able to cope, to get through the day. That is so worrying to me, that they have to now ban it to actually teach the kids that they are resilient enough to go through the day without that phone, without that network - that they can be with their peer group and be okay at school.

Sandra: That's my point because it's within that peer group.

Kai: Many teenagers are so accustomed to doing a lot of their social contact through the device that when you take that device away, you're basically taking the support network away. So you actually have to find ways to cope with that and to make up for that. But I think for a school that's an opportunity - a chance to actually recover some other practices and create new practices that can build resilience, that are not dependent on the devices. I think that's important, not because we want to patronise people or educate them and say, oh, you mustn't use your device, but because we have learned in the last couple of years that the platforms we spend time on, that we use to connect with our peer group, are not built with our best interests in mind - and that's the reason why we actually have to be more watchful and a bit more sceptical of these technologies. The social network and communities - Facebook, to name one - can do great things for communities, but that has become sort of an afterthought to the business model, and that's...

Mike: Just to pick up on that, though, because obviously I'm the pro-tech guy on the panel. There are five big US companies, the biggest by market capitalisation - in no particular order they're Apple, Google, Amazon, Facebook and Microsoft, yeah. They've eclipsed the oil companies, they've eclipsed the conglomerates; they're in a race to be the first trillion-dollar company. Every one of those is spending billions - with a B - on this technology. So this isn't some guys in a lab, like we are. If you right now go to Sydney University and come out with an honours degree or PhD in machine learning, and you come up with some, you know, algorithm in this area, they buy the entire company to get you - they're talking astronomical, footballer salaries for people that are good at this.

Now it may be a bubble that may burst, but there is no doubt there's a huge amount of money and research going into this. So this is not without significance: they must be doing what you describe for some reason, and secondly, it's going to accelerate the process - and that's all effectively being driven by the desire to win at social media.

Rachel: Of course they're investing in this but it's very different from an oil company or even a tobacco company because the raw material that they're exploiting is our time and attention.

Mike: I'm not making an ethical judgement, I'm just saying that there is a lot of research in this area.

Rachel: Yes, and that's what worries me. And this is why I actually think you need pretty drastic actions, like banning phones and devices from schools, because you have to teach children that they are okay and they do have the resilience, whether it be within their physical environment or... We have to be teaching those skills alongside the introduction of these technologies, which is absolutely inevitable.

Sandra: I'd add one more skill to that, which is understanding what the technology can and cannot do and what's behind it. I think that's a sorely missing skill, not only in our politicians but in people in general. Having a sitting U.S. senator ask, well, if you don't charge people, how do you make your money, and someone from Facebook having to go, well, it's ads, Senator, we run ads. Not understanding how these technologies work is, I think, the first problem, because then we think they're smart, we think they're intelligent, we think they can be our companions and do all these things. So actually infusing that - whether it's in school, whether it's in public conversations like this, where you look behind the curtain and you see that deception - means that next time you look at the system you'll go, yeah, actually, it's not that cool; it can do some things but not other things. I think that's critically important for the public conversation around this.

Kai: I think it's important to understand that the user interface, the way in which Facebook is built, deliberately employs techniques that we have learned from building pokie machines. The interface is built essentially to grab your attention and to keep you engaged, because the more engaged we are - the less we put away our devices - the more money the company makes. So I think that's one aspect that is important to understand, and the other one is the education aspect.

I teach Business Information Systems, or Digital Technology, to undergraduate and postgraduate students, and at open day, for example, when we talk to parents, we're often confronted with observations like, oh, but the kids are all digital natives, they know all of this technology. That makes a fundamental mistake, because just the fact that I'm using something every day doesn't mean that I know a whole lot about it. How many in the room drive a car? How many of you can change a light bulb on that car? You must be driving a relatively old car, because on the latest ones you have to actually dismantle the whole thing to get to a light - and there are no light bulbs anyway, it's all LED. But the point being that I don't actually have to know much about a combustion engine or an electric engine to drive a car, nor do I have to understand how Facebook works in order to use it - quite the opposite: the more I lose myself in the technology, the less I actually think about how it works, and that presents a real danger. The number of students we see coming out of school who do not understand how Google works, and where the things that Google chucks up are coming from, is a little concerning. I'm not saying that you have to know anything about how the algorithm works, but just know how Google prioritises the stuff that you see on the first page - because, hey, who goes to the second page anyway, right? Very few people. I want to make a final statement on the title of this panel, "Mummy, can I marry my Avatar?". That's a yes or no question. But the point is that we don't have an answer to that. What we actually have to think about is who gets to make that decision. How do we collectively find a way to engage in a process to make those decisions, to have those conversations? That's what needs to happen.

Rachel: I was going to say something very similar. On the way here I was actually playing out that conversation with my daughter - would I let her marry an Avatar, especially if she said it is better than the real thing, it just understands me better and it's always there? And one of the things I was thinking about was when I wanted to get married in my early 20s and would have made really the wrong choice, and my parents, in quite a diplomatic way, tried to teach me long-term consequences. They could see ahead in ways that I couldn't see. And that's what I was thinking about: how could I help her understand the consequences of marrying someone virtual versus marrying someone physical? I think this conversation is coming, and I think it's one that I might have to have...

Sandra: I was thinking about what Avatar I would love to have - the Avatar of my grandmother - and I was thinking of what she said at my wedding, which was "excellent choice for a first husband". [LAUGHING]

Mike: So we are out of time. I do want to thank you guys for coming. We actually think it's really terrific to be having these conversations ahead of time. I apologise for the earlier deception, though as you saw, that was the point of the exercise. I am obviously very engaged in this conversation - I love it - but I just want to finish with the words of another professor that I am good friends with, who said: "Hey, this stuff's coming; we should at least make sure they like us." Thanks so much for being here, I appreciate it. Thank you.

Outro: This was The Future, This Week. Made awesome by the Sydney Business Insights Team and members of the Digital Disruption Research Group. And every week right here with us our sound editor Megan Wedge who makes us sound good and keeps us honest. Our theme music is composed and played live from a set of garden hoses by Linsey Pollak. You can subscribe to this podcast on iTunes, Stitcher, Spotify, SoundCloud or wherever you get your podcasts. You can follow us online on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news you want to discuss please send them to sbi@sydney.edu.au.
