This week: is the ivory tower asleep at the wheel, city experiments, and Uber…again. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

The Ivory Tower can’t keep ignoring tech – op-ed by Cathy O’Neil

Sidewalk Labs’ vision of a futuristic sci-fi-ready smart city

Uber orders self-driving Volvos

Awake but not at the wheel – a response to Cathy O’Neil

Another response to Cathy O’Neil

Discussion on Twitter about the matter

Researchers at Microsoft and Google found the AI Now Institute

“Living with Monsters” Call for Papers for the IFIP8.2 Working Conference

Google wants to run cities without being elected

Putting cities back into “smart cities”

Building Googletown – Sidewalk Labs’ Quayside development

Bill Gates wants to build his own “smart city”

Bill Gates and the vision for Arizona

Tech billionaires spent $170 million on a new kind of school

Uber ordered a lot of cars, ABC News

Uber ordered a lot of cars, SMH


You can subscribe to this podcast on iTunes, Spotify, SoundCloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Our theme music was composed and played by Linsey Pollak.

Send us your news ideas to sbi@sydney.edu.au.

For more episodes of The Future, This Week see our playlists.

Introduction: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter. And I'm Kai Riemer. And every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful and things that change the world. OK let's roll.

Sandra: Today in The Future, This Week: is the ivory tower asleep at the wheel, city experiments, and Uber again. I'm Sandra Peter, I'm the Director of Sydney Business Insights.

Kai: I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group. So Sandra what happened in the future this week?

Sandra: So our first story comes from the New York Times and it's called "The Ivory Tower Can't Keep Ignoring Tech", and it's by author Cathy O'Neil, who also wrote a book that we quite enjoyed called "Weapons of Math Destruction". And we want to address this article because it's received a lot of attention, both in academic circles and in the wider media, as it takes on a very important problem: the fact that we are surrounded by algorithms, machine learning and artificial intelligence that have come to have a huge influence on the way we work, on who gets promoted, on who gets a loan, on who gets a credit line at the bank.

Kai: So listeners of The Future, This Week will be very much familiar with this story because we've covered it a number of times from different angles, and Cathy provides a good assessment of the algorithmic issues at the beginning of her article.

Sandra: And then she makes another point which is actually equally important: people, and especially lawmakers, and I'm quoting from the article here, desperately need this explained to them in an unbiased way so that they can appropriately regulate, and tech companies need to be held accountable for their influence over all elements of our lives. Which again is a very important and very valid point: we need more clarity about some very complex issues at hand.

Kai: But then her argument takes a very strange turn, because she accuses academia and academics of not engaging with this problem. She says: "but academics have been asleep at the wheel, leaving the responsibility for this education to well paid lobbyists and employees who've abandoned the academy". She makes the point that many academics who are working on algorithms are selling out - that's what she says - and joining big tech companies to work on algorithms in a much better paid environment. She says that there is no single academic discipline that looks into the dark side of algorithms, that there's very little research being done in academia on these issues, and that there are no dedicated institutes in the world that actually look into these problems. And that assessment has drawn a lot of attention, and many people have come out and criticised it - it has led to response articles and a long discussion on Twitter.

Sandra: So first we want to acknowledge the very civilised discussion that has ensued since the article was published on the 14th.

Kai: Absolutely. Even though she offended many people who are working in this space, we want to point out that the responses on Twitter were entirely constructive and actually quite useful - it's a really good discussion that came on the back of this.

Sandra: So first we want to comment on some of the issues that were raised in the article and the pushback that has ensued, and try to analyse what the situation is. Second, we want to have a deeper look at some of the real issues around the problems that Cathy points out at the beginning of her article. So first, let's look at the pushback.

Kai: So she makes the point that there's no single discipline working on this, and she's right about that, but there are many disciplines working on this, and we think it's actually quite a good thing that this problem is being tackled from many different angles. And we want to name a few of those: science and technology studies, lots of work in information systems, socio-technical research, cultural studies, ethics, political studies, communication studies, human computer interaction. So there are many colleagues in many different fields who have worked on algorithms, on the sociopolitical nature of algorithms, their effects on social systems, the adversarial effects, bias, all of these kinds of issues. And for a long time - not just now as these things become apparent in the media, but for the last two decades, and some work is even older than that.

Sandra: Some work goes back to the 1960s and the rise of artificial intelligence, looking at various aspects - not just the development of the technology but the implications of having this technology in our lives.

Kai: And interestingly many colleagues came out and shared a long list of articles and works...

Sandra:...and conferences and institutes that have been established and so on. We'll include some of these in the show notes, but you can also look at the Twitter conversation around the article.

Kai: Absolutely. This has become quite a good resource to get into this line of research now.

Sandra: There are many many journals, conferences, research and PhD programs and centres that are tackling the impact of algorithms and AI head on.

Kai: For example, and the Twitter stream mentions some of those: there's the Berkman Klein Center for Internet and Society at Harvard University. There is the Data & Society Research Institute in New York. There's the Center for Media, Data and Society at Central European University in Budapest, the Alan Turing Institute, the Royal Society inquiry on machine learning. There are dedicated conferences, and I want to use the opportunity to plug our own conference, which will go down in San Francisco in 2018, titled "Living with monsters: social implications of algorithmic phenomena, hybrid agency, and the performativity of technology" - and if you're interested to submit a paper, the deadline is next May.

Sandra: So good work being done out of the University of Sydney Business School as well.

Kai: So what we're saying is: many colleagues in many different fields have been working on this for a long time, and people have pointed this out. So the assessment as such is incorrect and unfair. We have to give her credit though, because she basically initiated this discussion, which has been very useful for people working in this field, because it's become a great resource for exchanging ideas about who is working on these issues. And we're going to come back to this point, but first we want to examine one other assessment that she makes.

Sandra: Cathy is calling out a lot of the researchers who are actually involved in doing some of this work for not doing it at a university but rather from a corporate perspective. She argues that academics, once they have joined one of these organisations - whether that be Microsoft or Google or any of the companies developing the algorithms she talks about - are basically sellouts, and that now that they've joined these companies they will not speak up against the products these companies are developing.

Kai: Yeah. So she paints a very stark dichotomy between industry and academia. And while there are certainly questions being raised about research that is done within corporations, because it doesn't have to follow the same ethical standards that universities prescribe, we shouldn't suggest with a broad brush stroke that all of the colleagues who are joining companies such as Microsoft or Google are selling out and just throwing overboard their ethical standards and their research rigour to follow a strict corporate agenda. The fact that many colleagues at Microsoft Research or Google are actually publishing and are part of the academic discourse at conferences suggests that there is a lot of good research coming out of those places. And we also want to mention that Microsoft and Google researchers Kate Crawford and Meredith Whittaker just last week founded the AI Now Institute, which will also do critical research into algorithms.

Sandra: And there's also a long tradition of academics spending a few years in industry, especially in places like the US and Europe, where academics might go and work in industry for two or four years and then return to academia to continue their research. So we shouldn't draw a strict dichotomy between the two realms and just imagine that there is a profit motive on the one side and a purely intellectual endeavour on the other.

Kai: Yes, but at the same time, on the back of Cathy's article, some real issues have surfaced in the discussions online. The first one being - and many colleagues have pointed this out - that corporates are secretive. So even if academics are doing great work on algorithms, on bias in algorithms and on the kinds of unfair effects algorithms might create, it is almost impossible to actually subject the algorithms that companies such as Facebook or Google are using to rigorous academic enquiry, because these are commercial-in-confidence. These companies are not opening their doors to academic research, and so research always has to make inferences. So the point being made is that corporates are very reluctant to engage with academic research itself. The second, and I think absolutely crucial, point is that the claim this article makes at its very beginning - that there is a desperate need to explain some of these things in an unbiased way - still stands. And for me this is the critical point of the article: even though we have all of this research being done in a variety of places across the world, in different centres, different universities, companies, by academics, PhD students, institutes and so on, the public conversation around the impact of these algorithms and of artificial intelligence on our lives - the public conversation that we should be having with our lawmakers, with our institutions, with the people developing these things - is not there and is not being seen. So overall this is about communicating to the larger public what some of these findings are, and finding ways in which to communicate this.

Kai: And this is one of the main reasons why places like ours have initiatives such as Sydney Business Insights: to communicate and discuss and unpack some of those issues, which are arguably quite complex and do not lend themselves easily to one-line news headlines. But there's also the point being made that because the research is very fragmented, what should happen is that we all come together, speak with a unanimous voice and communicate louder. And interestingly, one of the people who is very active in this space, Zeynep Tufekci, who has also given a really great TED talk on this topic which we will put in the show notes, came out on Twitter and made a great point, saying climate scientists are loud and unanimous - and they are being ignored. So one of the reasons that this is not working the way it should be working is...

Sandra: ...it's not that research is not being done, but rather...

Kai: ...the politics and ignorance that often come with topics that are complex, that are not easily explained, that are often invisible in day-to-day life, and that operate on time frames longer than immediate political action - be it climate change or the systemic effects of creating a society that is fundamentally based on algorithms.

Sandra: So again this highlights not only the importance of doing research but also the importance of rethinking the role of universities in our current society. Universities have always been guides, places where knowledge is developed, and in today's day and age that means not only doing the fundamental research that is critical to understanding these phenomena but also having a public role in society, where we explain these things and participate in public conversations.

Kai: So we want to highlight that the role of universities is not just to train the future workforce or provide basic research and innovation but also to be a critical voice regarding the society that collectively we are building.

Sandra: Which brings us to our second story of today, which actually embodies quite a few of the issues that we raised in our first story. Our second story comes from NPR and it's titled "A Google-related plan brings futuristic vision and privacy concerns to Toronto". So NPR had a question to ask, and Alphabet's executive chairman explains the vision.

Audio: What would a neighbourhood look like if Google designed it. This question is something that executives at Google's parent company Alphabet have wondered about for a while. Here's Alphabet's Executive Chairman Eric Schmidt: "Google is an unusual place and we sit there and years ago we were sitting here thinking wouldn't it be nice if you could take technical things that we know and apply them to cities. And our founders got really excited about this and we started talking about all of these things that we could do if someone would just give us a city and put us in charge."

Kai: So Google's wishes are finally coming true, because the city of Toronto is putting Sidewalk Labs, which is owned by Google's parent company Alphabet, in charge of developing an entire waterfront precinct called Quayside.

Sandra: So the area is about 12 acres of land on Toronto's waterfront, and the plan is to equip it with everything technology: self-driving cars, smart streetlights, public wifi and so on.

Kai: Underground robotic delivery lanes and waste disposal systems and sensors and AI.

Sandra: And really: what would happen if Google were able to bring all of their technology and apply it in a city? So, one big experiment. So what are the issues with this? Why is it such a bad thing for a technology company to come in, bring their technology, and develop the city of the future? Let's unpack this.

Kai: So first of all, the article mentions citizens in Toronto who quite naturally raise privacy concerns about putting Google, a data company, in charge of running a city precinct. The worry is that because everything would be monitored, there would be lots of data and sensors, and people would basically be living in a transparent space where the company can watch and potentially manipulate everything they do. But this story is bigger than that, and a number of other articles have come out, one of them in the Guardian, which makes a more holistic point about city development and the role of corporations in planning, top down, a city suburb or entire cities.

Sandra: And this idea of corporations planning cities for us today we've covered in a couple of other In Conversation podcasts on Sydney Business Insights, which we'll include in the show notes. So why would we want to have a smart city in the first place? Why do we want to bring technology into the city?

Kai: So the company claims to improve people's lives - they say this is the one metric that they are going to optimise. But why should a company be in charge of optimising people's lives? That's the first question that I want to ask. How do you even measure that? And is that something that is uniform for everyone living there? Is improving people's lives something that you can actually measure at a distance, that you can optimise? So what becomes quite clear here is that the company approaches the development of cities, or a city precinct, in much the same way as it develops and delivers software: as something to be deployed to the users and then experimented on by making changes - so-called A/B testing. I create two versions of the software, I make a change, and then I measure whether this change has the desired outcome or not. Then I revert or I keep the change, and I can iterate my way forward on a set of metrics such as, you know, does it engage more users, do users click more - in terms of advertising we've discussed this before. But can we apply this to people's entire lives and a suburb? The Guardian article raises strong questions in terms of ethics.
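To make concrete what this deploy-and-measure loop looks like in software, here is a minimal sketch of A/B testing in Python. None of this comes from Sidewalk Labs; the traffic split, the click-through rates and the significance test are all hypothetical illustrations of the general technique Kai describes.

```python
import math
import random

def ab_test(conversions_a, n_a, conversions_b, n_b):
    """Two-proportion z-test: did variant B move the click metric?"""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical experiment: each user is randomly bucketed into A or B.
random.seed(42)
clicks = {"A": 0, "B": 0}
n = {"A": 0, "B": 0}
for _ in range(10_000):
    variant = random.choice("AB")
    n[variant] += 1
    # Assume B's change lifts click-through from 10% to 11% (made up).
    rate = 0.10 if variant == "A" else 0.11
    clicks[variant] += random.random() < rate

p_a, p_b, p_value = ab_test(clicks["A"], n["A"], clicks["B"], n["B"])
print(f"A: {p_a:.3f}  B: {p_b:.3f}  p-value: {p_value:.3f}")
# Keep the change if the lift is significant, otherwise revert.
```

On a software metric like click-through this loop is cheap and reversible; the hosts' point is that "improving people's lives" in a physical neighbourhood is neither a single measurable rate nor something you can simply revert.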

Sandra: So first of all, these companies need not submit an ethics application. If we were to do these studies as part of a university or a government enterprise, we would need an ethics application...

Kai:...to ensure that the research doesn't have any adverse effects on the people who we're experimenting on.

Sandra: And there are also a few bigger issues at play here. What happens when you open the door to corporations? Because let's not forget, we do need corporations to help out with some of the city problems that we have - there are technology solutions that actually do make lives better in the cities in which we live. But what happens when you give a company control over an area like this to implement technology solutions?

Kai: So the concern that is raised in the Guardian article is that no one actually elected Google to run a suburb, let alone an entire city - because let's not forget, this is experimenting at a small scale, but the vision is a much bigger one, as articulated in the audio clip that we just heard.

Sandra: So there is a question of equity. Are we developing this one neighbourhood to allow a certain kind of people to live there, people who can afford a certain lifestyle? Is this something that will only exist in, let's say, big cities that can afford to bring in corporations like Google to build these sorts of things?

Kai: Who gets to decide who's actually going to live in this suburb if the buildings, the suburb, are controlled by a corporation? Can anyone apply to live there, or does the corporation want certain people to be involved in the experiment because it wants to demonstrate that the technology actually works? Wouldn't that mean that this is more readily demonstrated with a certain demographic of people? Will this discriminate against certain parts of the population?

Sandra: And what type of competition will you be able to enable further on in these cities, if part of the city is colonised by algorithms that belong to a certain company? How do you then build that into your plans for further developments around that city, for integrating certain suburbs with the rest of the city, for new developments and so on?

Kai: And consider the kind of neighbourhood that is being created: one of Sidewalk's spokespeople said that they are going to build a city "from the internet up", merging the physical and digital realms. So it's a very technology-driven vision of what this neighbourhood will be like, which will be quite attractive to one kind of people - those who are, you know, in favour of living in a technological environment - but will repel other demographics. So given that vision and the kind of environment they create, will this actually scale? Will this be generalisable to other parts of the city? Or will these initiatives just lead to a ghettoisation where we're building very different suburbs?

Sandra: So this is also a question of profitability, and we've seen this previously with other Google initiatives such as Google Fiber. What if it's not profitable enough for Google to build these neighbourhoods? What if they run a few of these experiments and then say: actually, no, it's not profitable enough for us to build this in a smaller city, or in an area that has low density or a low socioeconomic background? And on this idea of building from the internet up: there are actually very few companies in the world at the moment that can build from the internet up, and increasingly fewer companies can do this. What if cities lose the ability altogether to control when and where these companies deploy their resources?

Kai: So the question is: what if the company loses interest? And we've seen this in other contexts, because this idea of applying technology to an area in a top-down way to try and fix it isn't new. We've seen it in schools, for example. There was a recent article about the so-called AltSchool in the US, an alternative kind of school where tech billionaires spent 170 million dollars on a new school project, which was again built from the internet up, with lots of technology and tablets and algorithms that control learning progress. And the article in Business Insider makes the point that these children have been treated as guinea pigs: now that the schools have run for a little while and the company has built its technology, it is losing interest, because it wants to pivot to just selling the technology. The schools are being abandoned and there are lots of problems - parents are taking their children out of these schools, with negative effects for the children who attend them. But also, the benefits of these schools are largely unproven, because - and we're coming back to the first story here - there is no rigorous research being done on these experiments, or on whether they are actually run with education outcomes in mind or merely for the optimisation of the technologies that these companies deploy to create new business models. So the question the Guardian article raises about cities is: is this merely an initiative where a tech company wants a real-life lab to optimise its products, rather than genuinely working to create a vibrant city? And is it even possible to create a city in a top-down way?

Sandra: So a good question to ask here is: how do we empower schools, or how do we enable and empower our city governments, to play a real role in how these strategies are being developed? Because the problem is not cooperating with these large organisations - of course we want to find the best solutions for the real urban problems that we have. The question is: how do we make these solutions stick? How do we get these solutions, once they come into a school or a city or a neighbourhood, to stick and become part of the fabric of that city? Part of this we also discussed with Dr Tooran Alizadeh on a podcast about smart cities, and the idea was that what makes each city special is its unique identity - the fact that Sydney is different to Toronto, which is different to Vancouver, which is different to New York. All of these companies that develop generic products then have to somehow become part of the fabric of a real city in order to be able to stay there.

Kai: So you're raising the issue of uniformity. These technologies are being developed and then rolled out at large scale - and let's not forget, Facebook and Google are global empires that organise our daily lives in much the same way regardless of where we live. So there's that aspect of global uniformity, and the fact that we deploy technologies developed in one part of the world in other jurisdictions. But there's also the issue that cities are not machines, cities are not software. Cities are living, breathing organisms that have to work organically. They're not the kind of thing that you can actually create in a top-down manner. And I want to quote from the Guardian article: "Cities are not machines that can be optimised nor are they labs for running experiments, cities are not platforms with users, nor are they businesses with shareholders, cities are real places with real people who have a right not to live with whatever smart solutions an engineer or executive decides to unleash". So it's a matter of choice. It's a matter of being in control and being aware of what is being done. And there is a real risk here that we are creating another layer of data and technology that sits outside the accountability of governments - which, after all, in a democratic society we can change.

Sandra: And also of the responsibility of government. Issues of equity - making sure that every part of the city has access, whether that's to high speed internet or to public transport or to bicycles for that matter - are not the responsibility of organisations like Google or Facebook. They are the responsibility of cities.

Kai: Absolutely. And companies like Google haven't exactly instilled confidence in their accountability, as we discussed previously, where they point to their algorithms and absolve themselves and say "oh, this problem, that was our algorithm doing that, you know, it's not our fault". So what are the situations we're going to end up with when things go wrong with these robotic, automated systems on the basis of which they are going to build these cities?

Sandra: But in the end we do need to come back to the actual people and buildings that are already in our cities, and to everything we know about them and about how to run and develop cities for them. Even though the CEO of Sidewalk Labs said there's an inverse relationship between the capacity to innovate and the existence of these actual people and buildings, it is people and buildings, after all, that are at the core of the urban challenges we are facing today - not technology.

Kai: And so it's no wonder that companies such as Google want to build entire neighbourhoods from scratch. There was also an article about Bill Gates investing in another city that is being built from the ground up in the middle of Arizona, where these experiments can be run in a greenfield situation and these companies do not have to engage with the real, existing problems of the cities that we have. And this is expressed in a statement by Grady Gammage, the spokesman of the Arizona-based investment group in charge of building this city, who said: "envisioning future infrastructure from scratch is far easier and more cost efficient than retrofitting an existing urban fabric".

Sandra: No shit Sherlock.

Kai: Yes. So this basically sums it up: the top-down way of solving problems works when you can actually build the whole thing from scratch. But that doesn't answer the question of whether any of the solutions we build in this way can be applied to the cities that are already living and breathing around the world today - cities that have lots of problems with infrastructure and traffic and social inequality and housing crises, and all the kinds of things that our cities are suffering from.

Sandra: Which brings us to our last story of the week. This one comes from TechCrunch and it's again about Uber: "Uber orders up to 24,000 Volvo XC90s for its driverless fleet". And we're filing this one under "not news of the week, not the future this week".

Kai: So first of all, Uber doesn't have a driverless fleet as such. It has some experimental vehicles, Ford Fusions, that it has been driving around the city of Pittsburgh and that have done some rides, but always with a human driver inside, because we haven't actually mastered the kind of level five self-driving capacity that these Volvo cars will supposedly have when they are delivered between 2019 and 2021.

Sandra: And the thing with the Ford Fusions is that the human drivers currently have to take the wheel at least once every mile. And this is cumbersome not only from the perspective of the drivers but also from the perspective of the city dwellers who are being tested on by these vehicles.

Kai: So first of all, this is an announcement of purchasing a technology that hasn't been invented yet. Let's not forget this, right: this is a big promise that in the next two years we can actually create fully self-driving cars. The point that Uber is making is that the largest cost in its operations is actually paying its drivers, and it has millions of drivers around the world. So now it wants to deploy 24,000 cars to get rid of this cost. But economically, that seems to be a very different ballgame.

Sandra: Owning a fleet of 24,000 cars first comes at the real cost of...

Kai: One point four billion, the article says.

Sandra: And then there is the cost of keeping a fleet of cars that will break down, that need servicing, that need insurance - it's a whole different ball game.

Kai: A fleet that runs on very new, as yet uninvented and certainly untested technology that might be fickle, that might break down. What happens when a ride breaks down and you have to send out a human driver to take over? Because if you put a human driver in the car, the economics don't work out - so you have to be sure that you don't need a human driver at all.

Sandra: You also have to park these cars somewhere, and you will have to replace them every once in a while. So the entire business model that this is predicated on is not really a solution to the problems that Uber has today. So why do we have this story in the first place? Why is this almost news?

Kai: So this was all over the media - it popped up in many different places, including the Sydney Morning Herald. But we're calling this out as not news. My opinion is that Uber needs to provide some narrative, some vision, for its investors, because it is still burning through a lot of cash every month. And so the announcement that it will spend one point four billion US dollars to purchase this fleet, to get to the next iteration of its business model, is in my view really a message for its potential investors and not for the general public as such.
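As a rough illustration of why the economics are such a different ballgame, here is a back-of-envelope sketch. Only the fleet size (24,000 cars) and the price tag (US$1.4 billion) come from the article; every other figure below is a hypothetical assumption, not Uber data.

```python
# Back-of-envelope fleet economics. Only FLEET_SIZE and FLEET_COST
# come from the TechCrunch article; everything else is assumed.
FLEET_SIZE = 24_000
FLEET_COST = 1.4e9          # USD

price_per_car = FLEET_COST / FLEET_SIZE
print(f"Purchase price per car: ${price_per_car:,.0f}")   # ~ $58,333

# Hypothetical annual ownership costs per car (assumed, not sourced):
maintenance = 3_000                  # servicing, repairs
insurance = 5_000                    # commercial cover
parking = 2_000                      # storage / depot
depreciation = price_per_car / 5     # assume a five-year vehicle life

ownership_per_year = maintenance + insurance + parking + depreciation
print(f"Assumed ownership cost per car per year: ${ownership_per_year:,.0f}")

# Versus the driver cost the car is meant to replace (again assumed):
# if a driver keeps, say, $15 of each ride hour and a car is utilised
# 2,000 hours a year, the replaced labour cost is:
driver_cost_per_year = 15 * 2_000
print(f"Assumed driver cost replaced per year: ${driver_cost_per_year:,.0f}")
```

On these made-up numbers the fleet looks cheaper than the drivers it replaces, but flip any single assumption - utilisation, vehicle life span, the cost of sending out humans when rides break down - and the comparison reverses, which is exactly the hosts' point about owning a fleet being a whole different ball game.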

Sandra: So Uber is still a long way from being a self-driving car company.

Kai: Because it would require Uber to come up with an entirely new business model and cost structure.

Sandra: And whilst we're not saying that this is not a possibility in some future, we're saying this is not the news that actually tells us that.

Kai: And this is all we have time for today. Thanks for listening.

Sandra: Thanks for listening.

Outro: This was The Future, This Week made awesome by the Sydney Business Insights team and members of the Digital Disruption Research Group. And every week right here with us our sound editor Megan Wedge who makes us sound good and keeps us honest. Our theme music was composed and played live from a set of garden hoses by Linsey Pollak. You can subscribe to this podcast on iTunes, SoundCloud, Stitcher or wherever you get your podcasts. You can follow us online on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news that you want us to discuss, please send them to sbi@sydney.edu.au.
