This week: I’m um, a bot; sexist spaces; and Japan making stuff in other news. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week
Other stories we bring up
Future bites / short stories
Our theme music was composed and played by Linsey Pollak.
Send us your news ideas to email@example.com.
Disclaimer: We would like to advise that the following program may contain real news, occasional philosophy and ideas that may offend some listeners.
Intro: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter. And I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful, and things that changed the world. OK let's start, let's start.
Sandra: Today on The Future, This Week: I'm um, a bot; sexist spaces; and Japan making stuff in other news. I'm Sandra Peter. I'm the Director of Sydney Business Insights.
Kai: I'm Kai Riemer, Professor at the Business School and leader of the Digital Disruption Research Group.
Sandra: So, Kai what happened in The Future, This Week?
Kai: Ahhh this happened.
Audio: Hi, how can I help you? Hi, I'm calling to book a woman's haircut for a client. I'm looking for something on May 3rd. Sure, give me one second. Mm-hmm. Sure, what time are you looking for around? At 12 p.m. We do not have 12 p.m. available, but the closest we have to that is a 1:15. Do you have anything between 10 a.m. and 12 p.m.? Depending on what service she would like. What service is she looking for? Just a woman's haircut for now. OK, we have a 10 o'clock? 10 a.m. is fine. OK, what's her first name? The first name is Lisa. Okay, perfect. So I see Lisa at 10 o'clock on May 3rd. Ok great, thanks. Great, have a great day. Bye.
Kai: So this was Google's AI-driven assistant presented at Google's 2018 I/O developer conference on Tuesday. And if you're wondering, the AI was the one placing the phone call to make a reservation for a haircut for its client. So this made news just about everywhere. We've picked an article from Engadget to start our conversation, titled "Pretty sure Google's new talking AI just beat the Turing test". Now as a reminder, the Turing test stipulates that an AI should pass for a human when in natural conversation, or thereabouts (that's the popular version), and we can probably say that on this occasion it certainly did. Now the phone call was prerecorded, but Google assures us that it is a true and real phone call placed to an unsuspecting woman who probably didn't know that she was interacting with a bot.
Sandra: So it's been an overall big week for technology companies, with Google's conference being just one of the forums where big things got announced. We also had the Facebook F8 conference, where there was talk of augmented reality, virtual reality and privacy. There was the Microsoft Build conference, again with announcements around cloud and artificial intelligence. But we're choosing to focus on this particular news about Google Duplex and go into a bit more detail, because this feels like next-level AI stuff. It feels like a really big step. And even though this is still very much under development and Google hasn't released it yet, the company will be conducting some early testing as early as this summer, so we think it's worth analysing in a bit more detail. And just to highlight how good the technology seems to be, here's another clip that Google's CEO Sundar Pichai showed off at the conference. This is again Google Assistant making a call to make a booking at a restaurant, and they mention that Google Duplex, the AI behind the assistant, could help users make these sorts of restaurant reservations, schedule hair appointments, check opening times or book holidays over the phone. So let's have another listen to Google Duplex.
Audio: Hi how may I help you? Hi, I'd like to reserve a table for Wednesday the 7th. For seven people? Um, it's for four people. Four people- when? date? Um Wednesday at 6.00 p.m. Oh actually we need reserve for like after like five people, for four people you know you can come. How long is the wait usually to be seated? For when tomorrow, or weekend or? For next Wednesday the 7th. Oh no it’s not too busy, you can come for four people, ok? Oh, I gotcha. Thanks. Bye bye.
Kai: So, in this instance it's the male voice that is computer generated placing the call, and it is quite spectacular, we think, in its mundane nature, because this thing engages in a believable conversation and it holds the conversation even though the person at the other end isn't quite clear: the audio is broken up, she speaks with an accent, and they also get mixed up in their conversation. But the bot recovers and is actually able to follow the course of the conversation, even though it didn't end in a reservation. Now, let's have a look at how this works. This is a combination of a number of technologies. First of all, Google is making this available as a service for users, called Google Assistant, whereby the user can specify certain tasks, such as getting an appointment at a restaurant, and gives a telephone number, personal details, preferred dates and times, so all the kinds of things that we would give to a human assistant as well. The bot then makes the phone call, collects the relevant information and gives that back to the user, or supposedly makes a calendar entry; Google was a little bit vague on that end. But what happens here is that this is an outgrowth of Google's synthetic speech program WaveNet, which, unlike previous synthetic speech systems we know from GPS devices for example, is not based on a human reading out words (basically reading out a dictionary) with the system just piecing those words together. This is a fully synthetically created voice, which means it has proper intonation, it can stop and start, and it can actually behave like a human would speak, so that's big progress in that respect. This is then paired with Google's Duplex AI system, which works by way of deep learning. So again, it has to be trained with certain training data, in this case hours and hours of recorded human conversations: humans placing phone calls to restaurants or hair studios.
Sandra: And let's be clear Google has specified that Duplex can only carry out these natural conversations after it's been really deeply trained in specific domains so you couldn't just ask the assistant to now switch to a completely different domain and have the same fluency or the same responsiveness as it had in the conversations that we've just heard.
Kai: Yeah so there's nothing magic behind it in the sense that we've all of a sudden created a truly generally intelligent system. This is still the same old deep learning technology at play which means you have to actually have access to training data, to real conversations placing hair appointment calls or making restaurant reservations, which can then be used to create these systems that are capable of carrying out those specific tasks, so for every task you want to add you have to train the algorithms, so it's basically a lot of piecemeal work that will only work once you have access to the data. Nonetheless what they have done is quite spectacular and a big leap from the kind of robo call synthetic voice systems that everyone is used to when you call a call center for example.
Sandra: So we want to try to unpack this in the context of two conversations that we have had previously on The Future, This Week. We talked about conversational interfaces a couple of weeks ago, and we'll include that link in the show notes, and we've also had repeated conversations about fake humans, fake reviews and fake voices, so we'll try to have a look at this new technology and where it comes into play. The obvious first application of this technology would be automated customer service centres: rather than you trying to converse in an unnatural way with a bot, this is what would actually meet you at the other end, so rather than making the bookings it would take the bookings. And in that respect Google is not alone in trying to figure out how to make this work, and it's obviously a huge cost-cutting measure for all organisations that manage to get it right. So there is a clear race in that space. Microsoft is working on similar things, Facebook is working on this, even companies like Kodak are working on it. But we think the more interesting conversation lies elsewhere.
Kai: So while at the moment Google is intent on selling Google Assistant as a service, quite obviously these kinds of technologies will rapidly become more widely available. So we have to ask questions about what this kind of synthetic speech can do. The Engadget article raises the prospect of being able to impersonate anyone, creating synthetic voices for celebrities or people in the public eye, which raises issues of being able to speak on someone's behalf.
Sandra: And here we want to remind our listeners that a few weeks ago we spoke about Baidu's AI. Baidu is the Chinese equivalent of Google, and they announced at their conference that they can now clone a voice from a sample of less than a minute and also change the accent or the gender of that voice. So they can really recreate any voice with a sample of less than a minute, which for most of us exists out there in the public domain, through podcasts such as this one, but also through recordings of conversations that you've had with call centres, or through videos that you might have uploaded to your Facebook or your Twitter.
Kai: Yeah, and so on a personal level, for example, I recently had to call the Australian Taxation Office, and the ATO now has a system whereby they offer voice authentication: they record a little bit of your speech, and then next time you call up they will use your voice to authenticate you and establish that it is you calling, and you can then have access to your private information. Now, with these technologies in the public domain...
Sandra: I can pretend to be you.
Kai: Absolutely, those kinds of security measures are rendered pretty much worthless, because anyone can now go and create a synthetic voice from publicly available speech. And so, you know, not surprisingly I didn't sign up for this service, but every time I call the ATO they urge me to sign up. So this has real security implications for these kinds of technologies, like voice recognition, which have only just become available on the back of AI. And this ties in with the fact that AI can recognise patterns, but you can always turn that around and create patterns. So the moment you have AI for speech recognition, you can also create speech.
Sandra: So we're starting to touch here upon the issue of deception and the fact that in both clips that we have listened to the assistant didn't identify itself as a chat bot or a robot. So there is an element of deception here we might want to consider. Do we want regulation around disclosing the fact that you are not actually a human but rather a machine?
Kai: So Google has already faced some questions from the audience and the media about whether or not Google's assistant, once available as a service, will actually identify itself as a bot or will just act as a natural person, and I think they said that yes, it will be made clear to anyone the assistant calls that it is a synthetic bot. But that doesn't necessarily apply to anyone else using that kind of technology once it becomes more widely available beyond Google.
Sandra: And let's remember that some of the advantages of having this technology in the first place is your ability to create that illusion. So some of the advantages of actually managing to develop this technology will go away the moment we actually disclose that this is not a real human but rather a bot.
Kai: So I could create a digital voice twin of myself and send that Kai out to make all kinds of phone calls on my behalf, which then raises further ethical questions. Sure, I can have my digital twin make annoying phone calls to call centre hotlines, or make restaurant bookings and hairdresser appointments...
Sandra: Or a call home to say you will be running late because your class is running late.
Kai: Exactly so what if I decided that certain calls are just a little uncomfortable and can I not outsource laying off staff or telling my kids they're not getting a dog via my digital assistant? So where does it stop?
Sandra: And would you lose your accent, for instance, if you had the ability? Would I change my gender to make the more difficult phone calls, just pretend that I have a much deeper voice?
Kai: Which ties in with the discussion we had about digital humans last week, when we raised the prospect of people with disability, or people who find themselves discriminated against in the public space, adopting a different identity. And while that might be beneficial on an individual level (a woman choosing to speak with a male voice to assert herself in a work conversation, for example), what are the long-term effects? Does that not mean that we are enshrining gender and racial differences and discrimination in the public domain?
Sandra: But this is not to take away from the fact that this is a truly remarkable achievement from Google, to manage to develop this technology.
Kai: Some really cool shit
Sandra: And it does have some very clear positive effects. Think about people who find it quite difficult to make a phone call: rather than trying to explain yourself to a chat bot, this is something that would understand the more nuanced language that we normally use. Think about applications in the healthcare industry or service centres, where this could be a truly remarkable step forward. Just think about never having to hear 'your call is important to us, you are caller number 57'.
Kai: So these issues of deception were discussed in a Wired article, which we're also going to put in the show notes, and there were a couple more articles that raised other issues that might come up subsequently, most notably an article in The Atlantic which touches on the automation prospect that comes with this kind of service. It's titled "Service workers forced to act like robots meet their match". The point the article makes is that yes, the technology is spectacular in the way in which it simulates a human caller, but much of telephone-based work in call centres and service centres is already very much scripted and robot-like. What the article says is that for the past 20 to 30 years, companies have worked on standardising and scripting the kinds of calls that are made to call centres, to the point where algorithms and workflow systems guide workers step by step, and workers merely lend their voice box, so to speak, to the computer that is actually in charge of walking the person through the phone call. So this, they say, is just the next logical step, one that will do away with the human element in those contexts altogether.
Sandra: I think the other thing we want to highlight here is that clearly there's a huge cost advantage to having this sort of technology around, but there's also a huge benefit in employing it, especially for a company like Google. Think about the types of data Google would have access to if people were to employ these as assistants, on the individual user side rather than the call centre side. So let's say I now make all my appointments through this. Google actually gets access to my data beyond what was previously available through my email, my calendar or the enquiries that I made to Google, and on a much more personal, much more immediate and much more granular level. And this actually comes on the same day that Google also announced Google Lens, which is Google's artificial intelligence augmented reality platform that will soon be able to interface between you and the world. So imagine holding up your phone, either in an app or, if you're an Android user, directly in the camera app, and pointing it, let's say, at the shoes Megan is wearing today, and the app would immediately show me similar shoes and where I can buy them on the Internet. Or pointing it at anything else in your environment, where the software would use Google's engine to try to understand what you're seeing and augment it by providing you additional information, whether it's about the buildings around you, the people around you or the objects around you. It even goes as far as to parse text in your environment. And we've seen this before with other applications, where you could point at a sign on the street or a book cover and translate it into a different language. So think about travelling overseas: it would recognise foods around you, you could even point it at a dog and it would tell you what breed it is, and it will use image recognition to try to provide you more information about the environment that you're in.
So again, Google is making huge strides here in the data that it actually collects about you. As much as these technologies provide us assistance, whether with appointments or with recognising a new dog breed, they also collect enormous amounts of information about our lives, by knowing every step of the way what we choose to pay attention to and how we choose to pay attention to those things in our environment.
Kai: And let's not forget that Google Assistant also has to be trained with conversations that involve many other people, who might or might not have signed up to have the calls they made to various services used to train the AI in the first place. So more potential privacy implications on that front as well. But I want to raise one more thing, a point that was made in an article on Inc.com: "This is the worst use of artificial intelligence you will read about all day. Thank you so much Google." Well, that's the title of the article. The point that the author makes is: what if this kind of technology becomes so widely available that robo-callers, marketing agencies, people spamming you can actually deploy it? Yes, we sometimes get robo-calls which just leave a message. Much of telemarketing, however, is based on the fact that someone tries to entice you to hand over your credit card. So while a robot might start off the call, a conversation with a real human usually has to take place, which limits the scalability of these shady systems. What if we could have an AI that is indistinguishable from a human, that you can train to be polite and convincing, that will not break down, that will not tire, and that you can actually use to place thousands and thousands of calls? What would the implications be for the usefulness of the telephone more broadly?
Sandra: I'd say that while there is a risk of this happening in the initial phases of this technology we have found ways to do the same thing with spam email. So you know on the internet no one knows you're a dog. No one knows you're AI. We have tons of emails that are sent from spammers or from bots that are indistinguishable from the real thing, and email hasn't broken down on that account. Granted this technology could pretend to be your mother or your child or your spouse or the prime minister. But again, we have seen these sorts of things emerge in the realm of email and again even though it is an arms race we have managed to deal with this.
Kai: Yeah but this is my point. We would have to actually find ways to filter out these calls before they reach us. The point with email is that we have found ways to distinguish spam mail from real mail and filter them out before I, as the recipient have to deal with them. Now what technological ways would we have to come up with to filter out the kind of robo calls before you take the call? Or maybe we find ways of you know asking certain questions that might distinguish robot from human in the first couple of seconds of a call, but it would still mean that I'm being interrupted, that I might get many more calls during the day that I will have to deal with, which potentially could render the phone itself a much less appealing device than it might be at the moment because telephone after all is a synchronous medium not an asynchronous one like email where I can filter before I actually interact with the message. So we're not at this point yet but I'm certain that many robo call providers are keen to get their hands on these algorithms.
Sandra: And to be honest I can't wait to get my hands on this as well because there's a number of calls that I would love to outsource to an assistant.
Kai: You just want to replace me with an algorithm again.
Kai: Okay, I think it's time to go to our second story of the day, which comes to us from Fast Co Design.
Sandra: "The subtle sexism of your open plan office" a remarkable new study that looks at the experience of women in open offices designed by men.
Kai: So the article reports on a research paper by Alison Hirst from Anglia Ruskin University in Cambridge and Christina Schwabenland of the University of Bedfordshire, which was published in Gender, Work & Organization. Now interestingly, this study did not start out as a gender study. It started out as a study of workers' experiences in coworking or open plan office spaces more generally.
Sandra: So what the researchers there was actually study a local government that was moving over a 1000 employees from traditional offices to a big open office. And they've done this over the course of three years. They've done this by interviewing, it was 27 women and 13 man for one or two hours. But this was done over the course of three years. One of the researchers participated in the work place went on to the meetings went onto the coffee breaks to have lunch with these people. Over the course of a few years and what emerged from this study was quite astonishing.
Kai: So there's been a number of studies over the years, and we'll come to some of them in a minute, that have pointed to all kinds of different problems with open plan workspaces. Interestingly, the aspect found in this study hadn't been mentioned in any of the others, potentially because many previous studies were done using questionnaires and surveys. And while a lack of privacy or a lack of well-being has often been reported as a result of working in open plan offices, what came out of this study is that it was women in particular who felt very uncomfortable in these spaces, an effect that didn't go away over time. There's often the prediction that once people get used to working in open plan and it becomes normalised, their initial unease of working in a public space might subside. But women reported that they often felt watched and judged by their male colleagues. And they noticed that they were starting to pay much more attention to their appearance: they started to dress up, they started to adjust their dress codes to make sure that people knew that they were not part of the group of assistants but were actually executives. So in other words they became very self-conscious, and this feeling of being self-conscious and being judged by male colleagues extended to the way in which they moved about in the space.
Sandra: So not only dressing differently, for instance not wearing cardigans but rather wearing suits, but also avoiding certain spaces where they would have to confront a larger number of people, and trying to fit in with the aesthetic of the space, so changing your clothing to better match these spaces that are often quite clean, with a lot of glass, and for instance not wearing jeans. Men experienced a lot less of this because they were wearing suits regardless of the environment they were in. There were also more subtle effects, for instance women hiding their emotions for fear of being judged for exhibiting anger or sadness.
Kai: The article makes it clear that the space was designed, and I quote, "to enchant rather than control overtly and to encourage movement rather than fixity". So in other words it was created with the intention to instil collaboration and openness and all the kinds of positive cultural influences that you want from a more egalitarian space that breaks down hierarchy and actually becomes more inclusive. But as we've just heard, while it certainly had the intended effect on many colleagues, predominantly the male ones, many female colleagues reported the opposite effect.
Sandra: So here we want to raise two types of more general questions. One is around open plan more generally and the other is around who and how these spaces are designed. Okay so let's take a step back and look at open plan offices more generally.
Kai: So often these spaces are promoted based on ideas of being more open, leading to more serendipitous encounters between people, and therefore more informal conversations and more collaboration. This is almost a truism these days, but we want to highlight that there has been some large-scale social science research in recent years that has largely debunked many of those claims. For example, Inc.com reports on a study published in the Journal of Environmental Psychology in which 40,000 workers across 300 U.S. companies were surveyed about their experiences and the outcomes of working in open plan offices. And they found that enclosed private offices clearly outperformed open plan layouts in most aspects of what they call indoor environmental quality: acoustics, privacy and what they call proxemics issues. The researchers claim the benefits of enhanced ease of interaction were smaller than the penalties of increased noise levels and decreased privacy, and note that it is much harder to do work that requires concentration in open plan offices. So they find a negative net benefit of this form of working.
Sandra: And we do want to remark here that of course this depends a lot on the type of work you are engaged in. If you are engaged in collaborative work where you are part of a larger team, these kinds of offices can provide you a benefit, but it is often the case that open plan is used regardless of the type of work that you are engaged in or the amount of time that you spend in the office. Think about professional services, where people might only be spending a couple of hours a day in the office and then actually do want to interact, want to be part of the group or the community that they belong to, versus people who do research, for instance, who actually need to engage in solitary work for long periods at a time.
Kai: And this is the outcome of a study by Gemma Irving from the University of Queensland, who reported in an article in The Conversation in January 2018 that, depending on the context, open plan can actually work, in particular for the kinds of software development teams that employ agile methodology, where people engage in joint process improvement initiatives, where they have to work together on a day-to-day basis and benefit from overhearing the conversations of their co-workers to be in the know and work on their shared projects. At the other end of the spectrum, the author finds that scientists, for example, who predominantly collaborate with people in other institutions, benefit much more from a private office set-up, because they need to engage in Skype conversations with these other people and do not work with their immediate colleagues on a day-to-day basis.
Sandra: Before we wrap up this story though, let's consider the question that the Fast Co Design article ends on, which is thinking about how these places were designed in the first place. They ask: would the design of this office space have been different if women had been part of the team? It was an all-male team that designed the open plan office for the thousand government workers who were part of this study.
Kai: And the author describes the design as distinctly masculine featuring clean lines, lots of glass, looking very corporate.
Sandra: Interestingly, quite often the experience of these places is shaped not only by the furniture around us or the amount of glass that we use, but by more subtle things, like for instance the temperature. There seems to be a productivity tax for women in terms of the temperature of their offices. We've had evidence for quite a while here: a number of peer-reviewed scientific studies have found that women in offices that are too cold have lower productivity. Office temperature seems to be an aspect that is also designed for men. Most companies set their thermostats to an international standard, Standard 55, which assumes that the average worker is something akin to a 40-year-old man dressed in a business suit, so really the office worker of the 1960s. An interesting article from Inc., which we'll include in the show notes, summarises a number of studies on this topic and highlights the fact that unfortunately we still seem to conform to this office temperature today, and that women, for a variety of reasons, get cold much faster than men, which means that they are actually less productive in these environments.
Kai: And we know for a fact that women generally prefer higher room temperatures than men, by approximately 3 degrees Celsius or 5.5 degrees Fahrenheit, and there is now research which actually backs this up in physiological terms.
Sandra: In terms of the effects of this on work, a 2004 study of women doing clerical work, simply because this is one area where it's a lot easier to measure productivity, found that when the temperature in an office dips to about 20 degrees Celsius, that's 68 degrees Fahrenheit, women's error rates increased to about 25 percent, compared to 10 percent if the temperature was slightly higher. Surprisingly, there is actually a very simple solution to this.
Kai: Just turn up the thermostat.
Sandra: The question that normally arises is: well, wouldn't that make men less productive, wouldn't men be too hot? Actually, no. Men are not correspondingly less productive; there is good research showing that raising the thermostat will make men more productive too. And all this goes to show you that open plan is not a one-size-fits-all solution, and that there is actually plenty of good research out there that could help us improve the open plan spaces we have at the moment and reconsider how we use them to best enhance our work.
Kai: And not fall for simple ideology, but actually think about where we want to employ certain spaces to fit the population of workers in that space. So let's end this here and move on to our future bites, our short stories about the news of the week. And let's start with you, Sandra: one thing you learned this week?
Sandra: My quick story for this week comes from the New York Times and it's titled "Japan seeks its economic mojo in the stuff that makes the stuff.".
Kai: Ok - What does that mean?
Sandra: That means that Japan may actually be rethinking its critical role in the global economy around making the stuff that makes the stuff for today's digital revolution. That means helping, for instance, makers of semiconductors or makers of LCD panels to keep their assembly lines working correctly. So let's remember, Japan is actually the world's third largest economy, behind the United States and China. And clearly sustaining this sort of competitive advantage will be quite difficult for Japan because of its constant competition with the low-cost manufacturing coming out of China. But more broadly, Japan is actually carving out a niche, with many of Japan's big corporations pushing in this direction. Companies like Panasonic, for instance, that back in the day were making televisions or even video recorders, are now drastically moving away from consumer businesses and shifting into this new space of industrial electronics.
Kai: So what we're saying is that Japan, rather than engaging in the gold rush, has decided to sell the shovels to the gold miners, which historically has always been a good strategy, I guess.
Sandra: Yes indeed. And what have you learned this week?
Kai: Well, I want to provide an update on the Uber crash, the self-driving car that unfortunately killed a pedestrian in the street, which we discussed at length when it happened a few weeks ago. The investigation has now revealed that none of the sensors actually failed and that the algorithm indeed detected the person in the street, yet chose to ignore the obstacle. Now why did this happen? Basically, what was at work here is an algorithm that balances the need for the car to stop and start when certain things are detected in the street, such as plastic bags and other things that would not present a threat to the car, against the serious kinds of obstacles that you don't want to hit. And the article in MIT Technology Review claims that the adjustment had been taken too far, which unfortunately led to the car not responding to the detected obstacle and killing Elaine Herzberg, the woman who was pushing her bike across the street, in the process. Now what this shows me again is the difficulty of designing these kinds of algorithms, which, as this example shows, perceive and act on the world nothing like a human driver would, but rather take a lot of piecemeal adjusting to find a way to create a car that can move in traffic safely.
Sandra: Another one of those stories that I am sure we will keep coming back to, as creating autonomous vehicles is still very much a work in progress.
Kai: And that's all we have time for today. Thanks for listening.
Sandra: Thanks for listening.
Outro: This was The Future, This Week, made possible by the Sydney Business Insights team and members of the Digital Disruption Research Group. Every week right here with us is our sound editor Megan Wedge, who makes us sound good and keeps us honest. Our theme music was composed and played live from a set of garden hoses by Linsey Pollak. You can subscribe to this podcast on iTunes, Stitcher, Spotify, SoundCloud or wherever you get your podcasts. You can follow us online, on Flipboard, Twitter or at sbi.sydney.edu.au. If you have any news that you want us to discuss, please send it to firstname.lastname@example.org.