Sandra Peter and Kai Riemer
The Future, This Week 19 Jul 19: #DeepFakes, #DigitalHumans, #WillSmith
This week: deep fakes, digital humans and a young Will Smith in our Vivid Ideas Special. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week
Udacity develop AI that can generate lecture videos from audio narration
Digital influencers in the New York Times
50-year-old Will Smith plays alongside his 23-year-old self in “Gemini Man”
Other stories we bring up
Byung-Hak Kim and Varun Ganapathi’s LumièreNet paper on arXiv.org
Udacity’s AI that can generate lecture videos from audio narration sample
Martin Scorsese’s The Irishman will feature young versions of Al Pacino and Robert De Niro
Our previous Vivid event in 2018
Our previous discussion of AI news anchors
Our previous discussion of Lil Miquela
Deepfakes have got the US Congress worried
Deepfakes as new election threat
Yumi, the digital human new brand ambassador for Japanese skincare brand SK-II
Adobe produces tools to make media alterations easier
Engadget’s recommended reading on fighting deepfakes
You can subscribe to this podcast on iTunes, Spotify, Soundcloud, Stitcher, Libsyn, YouTube or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or on sbi.sydney.edu.au.
Our theme music was composed and played by Linsey Pollak.
Send us your news ideas to sbi@sydney.edu.au.
Dr Sandra Peter is the Director of Sydney Executive Plus at the University of Sydney Business School. Her research and practice focuses on engaging with the future in productive ways, and the impact of emerging technologies on business and society.
Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.
Share
We believe in open and honest access to knowledge. We use a Creative Commons Attribution NoDerivatives licence for our articles and podcasts, so you can republish them for free, online or in print.
Transcript
Disclaimer We'd like to advise that the following program may contain real news, occasional philosophy and ideas that may offend some listeners.
Intro This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter, and I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful, and things that change the world. Okay, let's start. Let's start!
Kai Today on The Future, This Week: deep fakes, digital humans and a young Will Smith in our Vivid Ideas Special.
Sandra I'm Sandra Peter. I'm the Director of Sydney Business Insights.
Kai I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group. Hello Sandra.
Sandra Hi Kai.
Kai What are you doing here?
Sandra Well, I'm still on a break.
Kai Yeah, me too.
Sandra Yes, but we saw this article reporting on Udacity now having software that can create believable digital humans that can give video lectures, fake academics. So we thought we'd better have a look at this, and it was also a really good excuse to bring out our Vivid Special that we promised you before we left on the break.
Kai So this is why we're going to do a special today on digital humans, but with a different angle.
Sandra Unreal people.
Kai Unreal people, exactly. So VentureBeat reports on this research done by people at Udacity, who have since left, who are basically using the same kind of technology used in deep fakes. And we've discussed this previously, where so-called generative adversarial networks, basically an AI algorithm, are used to generate facial movements that can then have people say words that they have never said before, or create entirely fake people from scratch. And in this instance what happened is that they animate the face of a lecturer in a video, based on the audio that you give the algorithm. So imagine that someone had previously recorded a video lecture of themselves and had used this material to train this algorithm. You could then simply record new audio and the algorithm would add the video of the person.
Sandra And of course you could expand that to think that, well, you could generate the audio from text if you had samples of the audio; we've seen examples of software like that, including free software like Lyrebird on the internet. But just to recap: Udacity is one of these massive open online course platforms, and you've got other ones like Coursera and edX. And whilst these platforms contain hundreds of courses that are freely available to people online, it's very time consuming to update them. So think about putting everything that you've ever taught in class online, but then having to update some of that content every semester. So what happens is that once you've got these professionally recorded lectures or video clips, it then takes significant resources to get those academics back in a studio and re-edit those videos, just to make a few changes or a few updates. So a couple of researchers who previously worked at Udacity released a paper presenting work that can actually update these video lectures simply using new audio of the person in the videos.
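(Editor's note for curious listeners: the "generative adversarial" idea mentioned above can be sketched very loosely in code. This is a toy illustration with made-up dimensions and simple linear models, not the actual LumièreNet or deep fake architecture, which use large deep neural networks trained on real footage.)

```python
import numpy as np

# Toy sketch of the adversarial setup behind deep fakes (illustrative only).
# A generator maps random noise to a fake "sample"; a discriminator scores
# how real a sample looks. Training pits the two against each other.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Generator:
    """Maps a noise vector to a fake sample (here just a 4-dim vector)."""
    def __init__(self, noise_dim=2, out_dim=4):
        self.W = rng.normal(size=(noise_dim, out_dim))

    def forward(self, z):
        return np.tanh(z @ self.W)

class Discriminator:
    """Scores a sample: near 1.0 means 'looks real', near 0.0 'looks fake'."""
    def __init__(self, in_dim=4):
        self.w = rng.normal(size=in_dim)

    def forward(self, x):
        return sigmoid(x @ self.w)

G, D = Generator(), Discriminator()
z = rng.normal(size=2)    # random noise input
fake = G.forward(z)       # the generator "hallucinates" a sample
score = D.forward(fake)   # the discriminator judges how real it looks

# In training, D would be updated to push this score toward 0 (spot the
# fake), while G would be updated to push it toward 1 (fool D). Repeated
# over millions of examples, the generator's fakes become hard to detect.
```

In a real system the "sample" is an image or video frame rather than a four-number vector, and both networks are deep models, but the adversarial tug-of-war is the same.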
Kai And so this academic paper was released on arxiv.org, and it's called LumièreNet: Lecture Video Synthesis from Audio, and this AI system, LumièreNet, basically does exactly that.
Sandra And we must note here that since developing it, Udacity as the MOOC platform has stepped away from this software, and the two researchers involved, Kim and Ganapathi, have left to start their own company that now develops the software.
Kai This is only one in a number of recent developments around this technology, which was first popularised by the so-called deep fakes. And remember, this started off with people putting faces of celebrities onto porn footage; it has since been more widely applied. And we're now seeing the first applications, or the first glimpses of applications, in areas where this might actually have uses in business or education.
Sandra Well actually in the business of education, and that's actually our business and our education. So we've both recorded plenty of online video lectures over the years. So the idea would be that the university could now just use those video lectures and just update them every year with new stuff.
Kai So that obviously raises a whole bunch of questions which we will come back to. But we want to start off by asking what is new about this AI-based facial animation of people in videos? And we want to go to our Vivid event for this.
Sandra So our Vivid event took place about six weeks ago, and some of you listeners made it to our event, it was a sold out event at The Museum of Contemporary Art during the Vivid Ideas and Vivid Lights Sydney Festival.
Kai And the event was kicked off by our very own Mike Seymour, a researcher here at the Business School in the Motus Lab, and our international guest, Hao Li from the University of Southern California in LA, who demonstrated and discussed how this AI-based creation of fake videos, fake people, digital humans, differs from what was the state-of-the-art technology only a couple of years ago. So here's Mike Seymour explaining what has changed in just a short period of time. At last year's Vivid we presented Meet Mike, where Mike's face was elaborately constructed from scratch, as is done in the movie industry. Now we've moved to AI-based face creation, which is so much simpler.
Sandra And the developments have been tremendous. Let's have a listen.
Vivid Audio - Mike Seymour Now the reason I'm showing you that from last year is because that was a particular way of making 'people' in the lab. And we call it, this idea of... I'm going to use the example of architecture. I don't know if any of you had a house made recently. But if you do, you have these simple stages: you model it in 3D, you then texture it, you light it and you render it. It's a very well-understood process in architecture, in video games and in film. It's the same thing every time, right. And so when you're done, you get a pretty realistic looking 3D image. Okay. So that's normal computers. That was last year. Wow. Hasn't the world moved in a year? So now we have this:
Audio - Mike Okay. So here I am actually in the offices of Pinscreen getting my face scanned, or rather, actually just photographed on an iPhone. Which is great, don't get me wrong, I'm just used to bigger cameras with more tech. Not just one being held in someone's hand with a Mickey Mouse back on it.
Vivid Audio - Mike Okay, so one of those is me, and one of those is Hao's team making digital me. Yeah. And you might guess that the one with less wrinkles is the generous one that Hao's team made. Okay. That is machine learning. Yeah? It's deep learning. Now it's very simple how it works. It works like this. You simply take a bunch of stuff and you put it all together. Okay. Now here's the thing. I'm semi-joking with that. Because it's a black box, we kind of don't really understand how it works. At least, we understand what it's doing and why it's doing it. But we let it make a bunch of decisions based on learning from an enormous amount of examples. And so Hao's going to show you that. But if you don't want to get into the maths, and I personally love the maths, but if you don't get into the maths, think of it like an artist. An artist interprets stuff but they have to have seen a lot of it, right? Like if you want to paint this picture, you have to have seen a lot of things. You can't just paint something you've never seen. So if you've seen a lot of things, many many times from different angles, you could paint it from the front, you could paint it from the back, you could imagine what it looks like. Now you might have seen the videos at the start when you're coming in, and they are using this new tech, they are imagining, if you like, what people would be like from different angles, from different viewpoints, doing different things. But it's not the model, texture, light, render. This is what we call hallucinating, the computer hallucinates what it imagines would be going on, yeah? And so that's a completely different model. So last year it was in the lab with lots of computer graphics, and now we're on an iPhone, and we're doing it with deep learning and machine learning.
Sandra And here's Hao Li, who not only helped develop some of this technology but also commercialises it through his company Pinscreen. And there's also a free app you can download if you want to play around with the technology, which we actually used on the day at Vivid; the app's called Pinscreen.
Kai And we will put the link in the shownotes.
Sandra But first let's listen to how this is done on an iPhone, in this case one with a Mickey Mouse back.
Vivid Audio - Hao Li One of the topics today is to show how AI can change this, and how we can make all these technologies more accessible. So let me show you a couple of advances we've made in the past few years, in terms of making this accessible. How do we create a digital face? Right, so the traditional approach is to use a highly sophisticated capture device. This is a Light Stage that we have in our labs at USC ICT. It costs a million dollars, not everyone can afford this at home, you probably won't have all these LED lights. But it's sufficient to capture a high fidelity person. Right, so using this data we have been able to develop an algorithm where all you need is to take a picture using your iPhone. And from that single image you can build a three-dimensional face at high fidelity, using the information it has learned from the high-end capture device. Right, so this is to show you three-dimensional models that are being generated from that one single input image. Everyone has this capability on your phone, so all you need is an RGB camera that can actually record your face, and it's already possible nowadays to track high fidelity facial expressions of a person's face, or you can even capture the lighting conditions of the environment, in real-time. Right, so these are live demonstrations, so you can see the face is actually CG. These are some of the capabilities that we've developed at Pinscreen, but we want to go further. Because if you are animating CG characters, you have to use a traditional computer graphics model. But one of the things that we have developed recently is a deep neural network that is trained with hundreds of thousands of people's faces, and can generate the content directly. What does that mean? So if you look at this slide here, on the upper right you have the pictures of three different subjects. Their pictures are the only information that we have.
On the left we have Koki, who is our driver, and we're going to track his face. And based on his facial expression we're going to synthesize facial expressions of these subjects, based on that one single input picture. So let me show you what this looks like. Right, so we've never seen any other expressions of these people. It can generate an expression that is plausible for them, in real time, right. And it's not just facial expressions, it's new viewpoints, it's all the 3D information about the face. Now what's really scary about this, is that it all works in real time. And if you have a high-end machine, you can actually synthesize at high fidelity. So this actually gives an entirely new dimension to technologies like deep fakes, because you can now have a livestream of yourself being, you know, one of these leaders in a rogue country. So we took a picture of Kim Jong Un, it wasn't possible to get a 3D scan of him. And one of our engineers is trying to become him, and stream himself in real time. So let's have a look at how this looks. Right, so, and it works instantly. We can take any picture, and drive a person's face.
Kai And so what Hao is saying here is that anyone can now take another person's face, especially if they have a lot of video in the public domain, and use these algorithms to have them say just about anything, because the voice can be synthetically recreated, and the face can be synthetically animated. And so that raises real questions about fake news and puppeteering politicians for example, which has now made waves and has the US Congress up in arms, bringing in new legislation against deep fakes, because there are concerns that this might play a role in the upcoming US elections next year.
Sandra And again we'll include this in the shownotes. So, since we had the Vivid event, in the following weeks we've seen a number of articles pop up around the subject. There have been two articles in the MIT Technology Review around deep fakes having Congress panicking. There have been a couple of reports, including from the human rights non-profit Witness, documenting the current state of deep fake technology, and showing that while it still requires some specialised skills to produce them, forgeries that are not immediately obvious can often be produced very easily, and that the technology has advanced at a very rapid pace. We've also seen a couple of articles reporting on Samsung technology that can create an entire video out of a single photograph. And we've also seen university and industry researchers demoing tools that allow users to edit the words being spoken in a video, simply by typing in what they want the person in the video to be saying. We've also seen, as you've mentioned, the House of Representatives in the US holding hearings around deep fakes, and trying to think about what sort of legislation would need to be put in place. And recognising the fact that this doesn't just affect politicians, but is also an issue that is quite significant as it has been deployed against women, or against minorities or other groups of people, and quite often many of these groups do not have the resources needed to fight or remove those videos in time.
Kai And while there are now two different bills before Congress, critics have come out to say that while it's good that this is on Congress's radar, these bills will be largely ineffective, because proposals such as requiring deep fake videos to be clearly marked with a watermark, for example, do not hold any sway if someone really wants to deceive the public and put malicious videos out there. So while there are proposals to deal with this, it will again largely be up to platform providers such as Facebook, who spread news items, to weed out malicious videos, which will end up in an arms race where software that creates fake videos will battle with other software that tries to detect whether a video has been doctored or been created by an AI from scratch. And again there are articles about this which we will put in the shownotes, such as a recent article in Engadget titled 'Recommended Reading: Fighting deepfakes', which reports exactly on this type of technology.
Sandra To be fair, fake videos as a general category, or even fake pictures, are nothing new per se. We've always had fake videos, we've always had fake pictures, but this time around it's a little bit different, and back at Vivid you went into a little bit of detail about why 'unreal people', digital people and fake faces are a little bit different from other types of fake videos.
Vivid Audio - Kai But I just wanted to point out that we know from neuroscience that we are hard-wired to respond to faces and eye contact in very specific ways, because it's the first thing that we learn to recognise as babies. So it's almost like the one thing we don't have to learn is to lock onto a face, and follow a face. And I want to point it out because we're moving to a different kind of online interaction now when we talk about digital faces that can make eye contact, for example, and that can react in believable ways and that might not actually be puppeteered by a human, but by some simulated intelligence, which opens up all kinds of different questions, because, I'm not saying that we're defenceless, but at least it takes us off guard. We respond to faces in a much more natural way, which means that, you know, we have to be much more vigilant when we encounter these entities. And now you can come in, Mike...
Vivid Audio - Mike Seymour Well I was going to say, I think you're absolutely right, that it is almost defenceless. And my example, you know the Shoah project that Paul did, you know the...?
Vivid Audio - Hao Li Yeah. Yeah.
Vivid Audio - Mike Seymour Yeah. So this is the, they did a project where, so it's not a digital human in the traditional sense, but it's a captured conversation of a Holocaust survivor, and it's captured from every angle simultaneously, so it's that in-the-lab version. But you can just sit there and talk to it, right. So I'm going to use Tim here in the front row, so I would sit opposite. This character seems to make eye contact but it's based on video, very very real. But wherever you go in the room you would see around them. It's not like it's looking at a flat screen. And so I was in USC, where you guys are, and they said 'oh can you test it, because you've got a weird accent, you know. And see if he understands you, and you can ask him questions'. And I went, 'all right'. We're in testing phase, so I sit down. There's a chair, with a real Holocaust survivor that they'd had in the lab filming, and they said 'ask him some questions'. And I said 'I will'. And so I asked him a few questions, and then they said 'oh ask him something really hard'. And I was like, oh well you know, how do you feel about this or that, and they'd programmed it to answer most things. And they went, 'no no, we need you to ask the sort of questions that he might get asked in the field, by like kids who could ask really embarrassing questions'. And I said 'Well what?'. And they said 'oh well ask him like if he's seen people, you know, shot in the head or something'. And I went 'what?!'. And I like had this visceral response, I can't say, so rude! Like, I just literally felt offended, the idea of asking this digital person something kind of offended me, I don't know who it offended. And it just felt wrong. And yet of course, it was a completely valid piece of the testing to test how the computer would handle an awkward embarrassing question.
And they weren't being insensitive, these are some of the greatest researchers in terms of preserving that body of knowledge, they were in no way being nasty. But they just wanted it tested in a weird Australian accent. And I was defenceless to see past that, yeah. So I think there's no way that you guys are getting your wish, and this doesn't have emotion. I say it's totally going to have emotion, because emotion presses your buttons and people want to press your buttons, good or bad. There's no way that this isn't becoming the most emotionally rich subjectively, I don't know, manipulative experience you've ever had.
Vivid Audio - Sandra I think we fundamentally want to fall in love with these things, as we want to fall in love with most of our technologies. We love to hate our phone, right. We love to hate our iPad, and hang on to it, even onstage. But most of these technologies, if you look at Lil Miquela, which is that Instagram influencer that you've seen before, in the beginning people didn't know she wasn't real. And they followed her, millions of people follow her. And then there was a big kerfuffle and people found out that she's not real, and it didn't matter. It didn't matter that she was telling you that she's wearing these fancy shoes, and this nice shirt, and she was encouraging you to buy them. But she's not real, she's never worn anything. She would tell you where she went to eat, and people would ask her personal questions, because we want to make them real. And I think this idea of being able to push buttons and things, we will, I think, exploit it. Many of the technologies we've come up with over the last 10 years have been technologies developed for good, and for good applications, and we've managed to hijack every single one of them. See any news on Facebook over the past five years, or any of the sort of big tech companies. And to your point, we would use them and we would exploit them. Here at the University of Sydney Business School, we've got your face, because it's in the public domain. We would love to have you teach classes in 20 years from now, 30 years from now, where we say 'well, one of the first guys to do this, and here's hearing from him', and it would probably...
Vivid Audio - Kai And he hasn't aged a bit.
Vivid Audio - Sandra He hasn't aged a bit, and it would be someone puppeteering them. I'm sure that I could get Kai to teach my classes with a digital version of me. Nobody would know. Nah.
Vivid Audio - Kai Nah.
Kai So as you can hear, interestingly, our discussion at Vivid had already foreshadowed exactly the kind of technology that Udacity, or the researchers from Udacity, have now announced. The creation of virtual tutors would be the logical next step: after you can, you know, pretty much edit your own videos, the question becomes whether the university can not just keep my recordings, or my avatar, and have me teach forever. But we also want to go back to the discussion around these digital influencers like Lil Miquela, because the New York Times had a big article about this just recently.
Sandra This was a couple of weeks after our Vivid event; the New York Times had an article called "These Influencers Aren't Flesh and Blood, Yet Millions Follow Them". It reported, among other things, on the fashion label Balmain, which had commissioned the British artist Cameron-James Wilson to design a diverse mix of digital models. And as always, we'll include this in the shownotes, so you can have a look at the pictures as well. There's a black woman, an Asian woman, and a white woman who are all modelling Balmain clothes, and none of them are real, they're all digital models. And this has been followed by a range of other fashion companies, but also by companies like KFC, yes the chicken KFC, who've worked with Generic Versatility to develop a virtual version of Colonel Sanders, who in the New York Times article can be seen promoting Dr Pepper, but a much younger version of Colonel Sanders.
Kai Yeah you could argue the hipster version of Colonel Sanders, very much a product of these times, not to be mistaken for the older image that we might have of Colonel Sanders that is often depicted in logos.
Sandra Definitely not, we encourage you to have a look at the images in the article. And since we are talking about digital influencers, I want to also bring in another article, from Fast Company, which reported on the skincare line SK-II, which is a Japanese, really cult, skincare brand, and another one that has adopted a digital influencer. Her name is Yumi, and she is a collaboration with Soul Machines, a New Zealand startup that uses AI to create digital humans. We've spoken about Soul Machines before, but Yumi's sole purpose for existing is to advise people who buy SK-II's products, and to tell them how to take better care of their skin and so on. She looks like a 20-something-year-old Japanese woman. She is powered by artificial intelligence, not by an actual person, but she will be with SK-II forever. And Soul Machines, the company that created her, has also created similar digital humans for Mercedes-Benz, for the ABC Bank in Bahrain and for the Bank of Scotland, among many other companies they've worked with.
Kai The company was founded by Mark Sagar, who we've previously worked with here at the Motus Lab. He is also the creator of BabyX, which we covered very, very early on in the podcast, a couple of years ago. Now I want to go back to the New York Times article, which raises some interesting questions around these virtual influencers. First of all, the question is: if you're a company that wants models representing you, isn't it indeed much easier, rather than trying to find the right model, to just create a digital model or a series of digital models from scratch, which can have all the kinds of politically correct diversity in terms of gender and skin colour, and can then, you know, say and do whatever you want them to do?
Sandra And as we've seen from Calvin Klein also sexual orientation.
Kai Which refers to Lil Miquela obviously making out with the decidedly real Bella Hadid, creating much controversy after the release. And so the article in The New York Times goes into a discussion about truth in advertising and matters of trust and what can we believe when, you know, models in advertisements are no longer real people.
Sandra But to be fair advertising has never relied on it being real or being truthful, not even the Instagram influencers, we've spoken about this often on the podcast. So in a way you could say this is nothing new, it just allows us to be a bit more refined in how we portray fake lives, or fake skincare or any other fake shit.
Kai But the trust question becomes more pertinent when we go to artificially generated news anchors, for example, like we've seen with Xinhua News. Or indeed, when celebrities, people in the public space, have their videos faked and doctored, and we've seen recent examples like a video of Mark Zuckerberg telling, quote, 'the truth' about Facebook's business model, or indeed Kim Kardashian-West making a pronouncement about her role as an influencer in a totally believable video. Let's hear from Kim just briefly.
Audio - Kim Kardashian When there's so many haters, I really don't care because their data has made me rich beyond my wildest dreams. My decision to believe in Spectre literally gave me my ratings and my fanbase. I feel really blessed because I genuinely love the process of manipulating people online for money.
Sandra So such deep fakes also raise some interesting questions about the responsibility of the people who create these technologies in the first place. We've seen, for instance, companies like Adobe that has produced many more tools to create or alter media, to create fake videos, to create fake people, to manipulate images, than they have released tools to detect them. So the question remains to what extent are the people creating these technologies in the first place responsible for unleashing them in the wild, and actually we asked this question at the Vivid.
Vivid Audio - Kai I want to put Hao on the spot.
Vivid Audio - Hao Li Oh yeah.
Vivid Audio - Kai Yeah. So a recent survey by Times Higher Education found that people really do not trust AI researchers, or they basically disagree, or strongly disagree, that they behave ethically when they build applications, right. And you're only building the faces behind it, right? You're not building like the logic behind it, so I'll let you off the hook slightly. But here's the scenario, right?
Vivid Audio - Hao Li Yep.
Vivid Audio - Kai Facebook comes to you and says 'we give you X amount of dollars', you know, enough to make the team uncork the champagne. 'And what we want, is we want an algorithm that can harvest the photos of people's social network and turn them into these visual avatars, because we know that people respond to faces really well. And what if, you know, we can just use digital versions of people's friends to sell stuff, or recommend stuff, right?'. Where are those business models? Is there a line that you would draw? Or how are those conversations even evolving in your world?
Vivid Audio - Hao Li Let me first start to say a few words about, like, the trust between people and researchers. It's actually true, so a lot of people are scared. But I think this is also a PR problem, because I think there is a misunderstanding of what AI is. I also use a lot of this AI, I probably use it too much. AI isn't what people think it is. AI is not like an actual, you know, Cyberdyne Systems, Skynet, that has a mind of its own. Nowadays it's pretty much just pattern recognition, right. So here's an example in natural language processing. The way to think about this is that back in the 70s there's something called a Turing test, where you have a software, and you have a conversation with it, and you're trying to figure out if there's a real person behind it, or actual software. It has a bunch of rules, very similar to this true foundation example. And if the rules are good, you can fool the person that it's a real person. Nowadays this would be done using deep learning. The idea is basically you take all the movie scripts that exist in the world, so all the possibilities of conversations are there. So I can say something, and based on, you know, all the movies in the world, it can give you something that's plausible, right, it could add a little bit of memory in it, and you can fool a person that it is a natural conversation. But this is just hallucinating it. So the same thing goes with how we generate the avatars, how people are mining our conversation on Facebook. But then, you know, AI is just a tool that does that; it's not the AI itself that I would say you should blame, it's basically the entire social media system. So you're basically sharing everything there, you can spread information, you can spread fake information very quickly. So there are other dangers other than, you know, saying it's AI.
Vivid Audio - Kai But Hao, back to my question, because as Facebook, I don't want these things to think and it doesn't matter. I only want these things to simulate, such that people click on the ads or they buy more from my customers who I'm selling this service to. So is this something that will come?
Vivid Audio - Hao Li It's already there. Right. I mean, I took a random boxing class last week, and now I'm seeing all these video ads. So it's already, you know, harvesting all the information that you're inputting, anything you're chatting about, it's tracking where you are. It's even predicting what you're going to do next, right, and I would say in some cases it's pretty accurate. It's feeding you things that you find interesting. And its sole purpose is to get you more and more addicted, right. And it's not the AI that is doing it. It's the people who are programming the AI to do that specific task. So I think that's the main issue.
Vivid Audio - Mike Seymour Well, I don't think it's a problem with AI researchers and so on, I think it's a problem with just societal abuses of stuff.
Vivid Audio - Sandra It's your fault.
Vivid Audio - Mike Seymour No, no, but I mean seriously, like we had this example recently, and I don't think you, you might have been on the team that did this but, stop me if you want...
Vivid Audio - Sandra If it was your fault!
Vivid Audio - Mike Seymour So, this guy gets caught with six hookers jumping up and down on top of him. He's a politician in Brazil, was that you?
Vivid Audio - Hao Li Yeah. Oh, no.
Vivid Audio - Mike Seymour Not the politician. So Hao was there. I'll set the story up. So the guy comes out in the press when this is revealed in Brazil, right, and I've got the clippings. Because, try explaining to your wife why you're watching porn on the screen. 'No honey, it's for work, really'. But the thing is, this guy claimed it was all fake videos. He's like 'it's all fake. It's all digital humans. It wasn't me, right, it was completely faked'. And the public kind of bought it, until you guys came in, right?
Vivid Audio - Hao Li Yeah. So we were looking at the footage, it was me, and there was also another person, (inaudible). He's one of the top experts in visual forensics. And to us it looked real. You know, it's possible that it could have been faked, but it's unrealistic, because it would have taken so many more resources and so much effort to create something like that. I think it's more plausible that they hired someone else, a body double, to do it. But another thing that was kind of funny came out of this: he actually got elected. I met a Brazilian recently who told me. Some think that it's because of that. Some people thought he was a hero.
Vivid Audio - Mike Seymour They said they had heroic proportions.
Vivid Audio - Kai But we've also already seen Donald Trump in denial, and the idea that anyone can now fake audio or video online gives people who, you know, said something that they later can't remember a reason to say 'no, no, someone faked this'. Which, you know, is plausible, we can do this now. So the question then becomes, even more so than before, what can we and what can't we believe online? And where does the arms race lead with technologies to actually call out those cases? And does it actually matter if that comes out, like, three days later when the world has moved on?
Sandra And unfortunately, quite often the news we see around digital humans is of the arms race, deep fake, manipulation kind. But there is one thing we wanted to bring up before we end, because there actually have been some interesting, not good news stories, but interesting applications of the technology in an area where everything is make-believe, where this is part of the industry. And that is the movies.
Kai And while in the movies the creation of so-called CGI characters, computer generated characters, was built the way Mike explained earlier, from scratch with assets, the new AI-based technology has helped immensely with the proliferation of CGI characters, to the extent that we see a lot of movies coming out right now which feature either recreated characters of deceased actors, or indeed younger versions of actors that we all love and cherish.
Sandra So for instance, Peter Cushing, who died in 1994, reappeared in the Rogue One Star Wars movie in 2016. We've seen Kurt Russell looking like he did 20 years ago in Guardians of the Galaxy Vol. 2. We've seen Michael Douglas quite young in the recent Ant-Man, we've seen Robert Downey Jr. being young in Captain America: Civil War.
Kai Oooh, Princess Leia please.
Sandra That's another Star Wars one, we've seen Samuel L. Jackson as a young Nick Fury in Captain Marvel quite recently. And there are going to be a couple of movies entirely built around these technologies.
Kai So a new movie which features two Will Smiths is coming out, called Gemini Man, where the current...
Sandra 50-year-old...
Kai Will Smith is battling with a younger version of himself.
Sandra So the 50-year-old Will Smith is actually meeting the Fresh Prince of Bel-Air, a 23-year-old version of Will Smith, in what is the new Ang Lee action movie.
Kai You're a big fan of Ang Lee, I've been told.
Sandra I am a very big fan of Ang Lee, who manages to cross epochs as well as cultures, and with movies set in Victorian England, or in ancient China, or in the 70s in the US, manages to write some beautiful, beautiful love stories.
Kai You are obviously alluding here to Crouching Tiger, Hidden Dragon and Brokeback Mountain and...
Sandra Yes, as well as Sense and Sensibility.
Kai So, this better be a good movie then....
Sandra Probably not a love story between the two Will Smiths.
Kai No, because one is out to kill the other, which makes for an interesting one. But there's also the newly announced ILM project, and we don't have pictures yet, just a press release, which will apparently feature a young Al Pacino and a young Robert De Niro.
Sandra So Martin Scorsese's The Irishman will see Jimmy Hoffa and his killer, played by Al Pacino and Robert De Niro, de-aged for the purposes of re-enactment.
Kai De-aged, a good word.
Sandra So we're looking forward to that. So join us at the movies these holidays.
Kai So, as always with an emerging technology like this, there are more open questions than answers. But we are seeing positive applications in entertainment, in education, and in business advisory contexts. And then obviously there's the whole discussion around deep fakes. And I'm sure this will come back time and again.
Sandra As will we, but for now we're going back on our break.
Kai And we will see you in Season 6, in only a few weeks' time.
Sandra Thanks for listening.
Kai Thanks for listening.
Outro This was The Future, This Week, made possible by the Sydney Business Insights team, and members of the Digital Disruption Research Group. And every week right here with us, our sound editor Megan Wedge, who makes us sound good, and keeps us honest. Our theme music was composed and played live on a set of garden hoses by Linsey Pollak. You can subscribe to this podcast on iTunes, Stitcher, Spotify, YouTube, SoundCloud or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au. If you have any news that you want us to discuss, please send them to sbi@sydney.edu.au.