Sandra Peter and Kai Riemer
The Future, This Week 09 March 2018
This week: facing the fake food future, big data big brother, and space junk and cloning voices in other news. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week
Perfect Day – the start-up that makes milk without cows
If you get on China’s blacklist you can be banned from travel
Other stories we bring up
How close are we to a hamburger grown in a lab?
Yes, bacon really is killing us
The Future, This Week 10 March 2017 on Fake Milk
The Dutch cities amassing data on oblivious residents
“Black Mirror” series “Nosedive” the first episode of season three
Researcher admits study that claimed Uber drivers earn $3.37 an hour was not correct
A new app transcribes your conversations in real-time
Future bites
Uber called an MIT study concluding drivers make less than $4 an hour ‘flawed’
Baidu can clone your voice within just seconds of hearing it
You can subscribe to this podcast on iTunes, Spotify, Soundcloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.
Our theme music was composed and played by Linsey Pollak.
Send us your news ideas to sbi@sydney.edu.au.
Dr Sandra Peter is the Director of Sydney Executive Plus and Associate Professor at the University of Sydney Business School. Her research and practice focuses on engaging with the future in productive ways, and the impact of emerging technologies on business and society.
Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.
Share
We believe in open and honest access to knowledge. We use a Creative Commons Attribution NoDerivatives licence for our articles and podcasts, so you can republish them for free, online or in print.
Transcript
Disclaimer: We'd like to advise that the following program may contain real news, occasional philosophy and ideas that may offend some listeners.
Intro: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter and I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful and things that change the world. Okay let's start. Let's start.
Kai: Today in The Future, This Week: facing the fake food future, big data big brother, and space junk and cloning voices in other news.
Sandra: I'm Sandra Peter the Director of Sydney Business Insights.
Kai: I'm Kai Riemer professor at the Business School and leader of the Digital Disruption Research Group. So Sandra where are you?
Sandra: Well I'm still in Palo Alto, down the block from Stanford in the middle of Silicon Valley, trying to have a quick look at the future. As it happens I'm having a look at the future from yesterday, since you're a day ahead of me now.
Kai: Yeah you're looking at the future but you are in the past. That's something to wrap your head around. I hope your Tardis will get you back here for next week's podcast but for this week we'll have to live with doing this via Skype. This is also our fiftieth episode. I was expecting cheering here now but that's all right. Moving on from that. So Sandra what happened in the future this week?
Sandra: So our first story for this week comes from Quartz. It's titled 'Mooving on: meet the start-up that makes milk without cows' and coincidentally it's exactly a year ago that we did our first story about milk.
Kai: So at the time we discussed fake milk - soy milk, almond milk, and the question of whether we can call something milk if it doesn't come from an animal, from a cow. So is almond milk actually milk?
Sandra: And also who gets to decide what gets called milk?
Kai: Yeah but what is real milk?
Audio: "A bottle of milk thanks." "Low fat, no fat, full cream, high calcium, high protein, soy, lite, skim, Omega 3, high calcium with vitamin D and folate or extra dollop?" "Ah I just want milk that tastes like real milk."
Kai: But this is different.
Sandra: Or is it?
Kai: Exactly. Now this is milk that also doesn't come from a cow so genetically it's milk but grown in a laboratory. The process by which this is done is called microbial fermentation and it uses a kind of genetically modified yeast that produces the proteins that you find in milk and the outcome is real milk. Or is it real?
Sandra: So in a process that's quite similar to how we make beer, for instance, food scientists will now program a genetic code into that yeast and we will get a final product that is arguably cow's milk but without any cows.
Kai: So is it cow's milk if there's no cow involved? That's one question. We could discuss at length what we call this and who has the authority to actually name things. If you ask a scientist who does a chemical analysis or a DNA analysis, for them it's absolutely cow's milk. But there was no cow involved. That's one angle of looking at this, but we wanted to discuss a bigger picture here today. This is part of a larger trend in food innovation, and the practice is called cellular agriculture. And it isn't just about milk - there's equally a story this week, incidentally, about what's called clean meat: meat that is produced without the involvement of animals, except for the animal donating the initial couple of cells that are then grown in a laboratory to make up muscle meat that is supposedly indistinguishable from real meat, so to speak, and tastes just the same.
Sandra: So let's have a closer look at this type of food innovation. This idea of cultured meat really took off a few years ago, back in 2013, when Dutch scientist Mark Post went on British television and for the first time cooked a lab-grown hamburger, arguably made from meat.
Kai: It wasn't cheap.
Sandra: It wasn't cheap at all. It was about 300,000 dollars. That's an expensive burger and it was paid for by...
Kai:...an anonymous donor who later turned out to be Sergey Brin from Google.
Sandra: And since then we've seen a few experiments - companies like Memphis Meats that attracted a lot of capital from people like Bill Gates, Richard Branson, Jack Welch and so on. The company in the article we're discussing today, Perfect Day, has now raised millions of dollars and attracted a lot of talent to work on what is currently called a clean animal product, such as the milk or the meat in these experiments. And in theory there's enormous potential in these technologies. In the case of the meat, as you've mentioned, taking a single cell from an animal and growing it for about three months could actually give you enough meat to make 20 trillion chicken nuggets.
Kai: That's a lot of chicken nuggets.
Sandra: That is a lot of food. So how does this come about in the first place? Clearly a lot of food tech innovation has come on the back of other technology innovations, and has come out of the same innovative places as the technologies we see in other industries.
Kai: Yes so on the one hand of course this is driven by new technology and we're doing it because we can. We have the technology now to grow meat and use yeast to make milk and do all of this and do it at scale, soon. But the other angle is that this is done to solve a real problem which is to feed a growing population in a world with environmental problems.
Sandra: So one of the megatrends we look at at the University of Sydney Business School is resource security, and there's a huge need to increase global food production. The estimates are that by 2050 we'll need 35 percent more food than we are currently producing. So this would be solving a real problem, given all the pressures that we have now. Think of food safety or food security as an issue - in China, for instance, milk has been a particular problem over the past few years. So this is addressing a real need. There are also huge quantities of land, water, fossil fuels and other resources needed to produce the meat and the milk that we currently consume. So this could potentially address sustainability in food production.
Kai: A study by the University of Oxford has found that clean meat production could result in 78 to 96 percent lower greenhouse gas emissions and use 7 to 45 percent less energy, 99 percent less land and about 90 percent less water than traditional methods of meat production. We all know that animal farming is very resource intensive. Not only is a lot of land and water used to grow food for animals, animals are also destructive to the environment, such as in Australia where cattle are blamed for erosion and for having a huge negative impact on the native landscape. So one idea behind creating meat in the lab at scale is to solve many of those problems. On top of this come problems of animal cruelty and animal welfare - the idea that it's really not a great idea to farm animals at scale. It's cruel to animals, and animals catch diseases. We have to use antibiotics, which enter the food chain and are at the heart of increasing antibiotic resistance. So the idea is that this technology will do away with all of these problems.
Sandra: But we must not forget that this technology is really in its infancy. There are studies like the ones you've mentioned, also echoed by the London School of Hygiene and Tropical Medicine, which says that producing beef in vitro, for instance, would reduce land use by 99 percent and gas emissions from cattle by 90 percent. But these results are very dependent on how the meat is produced - in this instance they're looking at vats that are fed only by pond scum, and in that case we would see those benefits. However, there are a number of other studies, such as one from Arizona State University looking at manufacturing chicken, which finds that if we use conventional nutrients like glucose we might end up using more energy and releasing more greenhouse gases than farming traditional chickens. The reason might be that we need to heat those ingredients to a certain temperature to be able to multiply the cells. But all of these technologies are really in their infancy, so it's quite difficult at this point to have reliable studies that could evaluate this at scale.
Kai: Yeah, so there are other sceptics, such as Margaret Mellon of the Union of Concerned Scientists, who speculates that the energy and fossil fuel requirements, if we were to do cultured meat production at scale, might be as if not more environmentally destructive than producing food the conventional way. So the jury is still out on the environmental benefits. But one big argument is the health benefits. Proponents of cultured meats say there's not going to be any of the contamination that happens when slaughtering animals, where the intestines can contaminate the meat. We don't need all the medication. We can also grow muscle meat without the fat, which is supposedly unhealthy. We might further engineer the meat to give it extra health benefits. So there's a real belief in the power of engineering a food that is better than the original.
Sandra: So in the same week we have stories like the one from The Guardian that says that yes, bacon really is killing us, looking at decades' worth of research showing that the chemicals we currently use to make bacon do cause cancer. This seems like a real solution to that problem. But we must remember that creating these food products in the first place is only half of the equation. There's also the problem of achieving acceptance by those who have to then eat these products.
Kai: Yeah and my understanding is that they're just creating muscle cells. They can't actually artificially create bacon. And we love bacon. Everyone loves bacon.
Sandra: Everybody loves bacon.
Kai: Although our sound editor Megan gives me a stare. I think Megan is a vegetarian, so okay, not everyone loves bacon. But for those of us who love bacon there's not going to be any artificial bacon as yet, and supposedly we shouldn't eat it anyway. But this is the bigger question: how do we feel about this? My natural reaction is that this is just wrong. This is just unnatural. Why would I eat meat that comes out of a laboratory, that's not coming from an animal?
Sandra: Whereas I on the other hand have absolutely no problem with yoghurt made in a lab or with milk made in a lab for that matter. So it's going to be interesting to see how public conversations will shape our acceptance of it. For instance MIT reported on a survey that about half of vegetarians would eat meat if it came from a lab and not from an animal.
Kai: Yes, so it opens up a whole new angle. Is meat that comes out of a laboratory actually meat? Is it okay? Say I'm an environmental vegetarian and I don't eat meat because of the impact on the environment or because of animal cruelty: would meat from a laboratory be acceptable? How do we think about this?
Sandra: So this is still playing out. We've seen a change in consumer preferences, where only about four years ago, if you asked people whether they would eat this, about 80 percent would say no. As late as last year about a third of people would be willing to try it. And we've seen efforts by some of these start-ups to introduce lab-grown products to high-end restaurants first, to achieve acceptance and make this a little more prominent in the public sphere.
Kai: Yeah so this brings us back to...
Sandra: ...bacon. If you think about bacon, there are a lot of cultural associations that we have with it: the idea of a comfort food, of a nice warm breakfast, of where bacon comes from and how you've shared bacon with your family. We have all these cultural associations with our food, so it's not just about how you produce it and how you market it, it also comes with a whole culture.
Kai: Yeah. So it's absolutely an identity issue. What we eat says a lot about who we are, but how we grow our food also says a lot about who we are, and that's the point I want to make. These things can change, and we know from past episodes about disruptive innovations that those changes are deep-rooted changes in identity and in world view, and they are reasonably unpredictable. So it's very hard to foresee whether this is going to become a thing. But those changes are often very transformational. We might say now: this is really unnatural, we don't want to eat this, this doesn't feel right, what does it say about us if we grow our meat in the laboratory? But let's put it this way - our grandchildren might look back and ask of us: you were really growing animals at scale and then murdering them to eat them? That sounds cruel and disgusting, right? So why wouldn't you just grow your food in a laboratory like civilised people? The way in which we look at the world is very much dependent on what we're used to, on what is normal to us, and on a certain view of the world - and that might well be changing as we go about solving the problem of feeding a growing population.
Sandra: So I for one am really looking forward to trying Perfect Day's yoghurt made from real milk without cows.
Kai: But no more bacon.
Sandra: So let's look at our next story which has nothing to do with bacon. Our next story comes from the Sydney Morning Herald and it's titled 'If you get on China's blacklist you can be banned from travel'.
Kai: So this story reports on China's new initiative which is called the Social Credit System. A way for China to bring large parts of the population into the economic system who haven't previously had any credit scores and were therefore unable to participate in many economic transactions. So what China has done is developed a credit system that keeps track of people's behaviour in economic transactions whether you pay your bills or not but goes much further than that.
Sandra: The notion of credit in Chinese has a cultural meaning that goes a bit beyond what we in Australia would think of as credit rating systems, extending to moral ideas like honesty and trust. Currently China is running about 30 local social credit pilots. These are run through local authorities in smaller towns but also in big cities like Shanghai, and they offer different rating systems which look not only at your credit history but also at your general behaviour. For instance, one type of bad behaviour mentioned in the article was a shopkeeper who had left four electric bikes parked on the footpath in such a way that they obstructed other people. It also included illegal home renovations by a person who had put up a new sunroom without the appropriate permits.
Kai: So we could say these are council infringements which people were fined for, and which were then put on the social credit rating. Now what the article specifically reports is that those infringements, and a drop in credit score, can have a material influence on what someone is allowed to do or not.
Sandra: So the article reports that about nine million people have been banned from things like buying plane tickets, another three million people have been banned from buying business class train tickets under the current pilots.
Kai: And so while many in the West have pointed to this as a large scale social engineering system and the way in which this is done by the state to enforce certain kinds of positive behaviours appears decidedly concerning and alien to many people in the West, what we want to highlight is that something similar is afoot in Western countries in a different way. And there's another article this week in The Guardian titled "Living laboratories - the Dutch cities amassing data on oblivious residents". And we want to make it clear that this is not just about the Netherlands now as much as it is not just about China. This is about data driven engineering of behaviour in public spaces more generally.
Sandra: So while in China you've got different departments involved - places like the tax department, industry, court enforcement, food safety or drug safety - what happens in places like the Netherlands is different in some respects but not in others. One of the examples in the Guardian article is Stratumseind in Eindhoven, the busiest nightlife street in the Netherlands. I used to live close to Eindhoven; this is a street with bars and pubs. About 15,000 young people go there every single night and the bars are packed. There's music everywhere, and there are people on the streets talking to each other, drinking, dancing, making a lot of noise. So it shouldn't really come as a surprise that technology has been brought in to try to tackle this.
Kai: So what are they doing in this case?
Sandra: The lampposts on the street now have wifi trackers, cameras and microphones, all of which are used to try to detect inappropriate behaviour - whether people are yelling at each other or getting into a fight - and this data is used to get police officers to step in. And let's remember, all of this data is being collected and stored whether you're aware of it or not.
Kai: Similarly, in Utrecht the city keeps track of the number of young people hanging out in the streets, their age group, whether they know each other, the atmosphere, and whether or not they cause a nuisance. Special enforcement officers keep track of this information through their mobile devices, and they have something called predictive analytics and intervention, where they use this data to predict where there might be problems, so they can police certain areas and react to potential problems picked up by their sensor networks. So what we're seeing is that more and more public spaces are being fitted with sensors - we have nice names for this, like the Internet of Things - and we're collecting more and more data about people in public spaces to enforce the kinds of behaviours that we think are good, right and appropriate, and to weed out the kinds of behaviours that as a society we don't want to see. In that respect it's a large-scale, data-driven social engineering initiative that is afoot in China at an unprecedented scale, but that is also afoot in many Western contexts...
Sandra:...where it's not unimaginable that at some point private companies might be able to use this kind of data for similar purposes.
Kai: And while the involvement of private companies in these partnerships with cities means that the data is more fragmented, and no one has the large-scale integrated view that China might be creating, at the same time we have very little transparency, because a lot of this data - as the article says - is being kept under wraps and not even shared with the councils whose infrastructure is responsible for collecting it, with the argument that this is commercially sensitive or commercially private information.
Sandra: And let's remember that there are a number of organisations out there who are able to do this, and have looked at doing it at scale. About three years ago Facebook actually patented a credit rating system that would look at the credit histories of your friends on Facebook to try to predict how creditworthy you are likely to be.
Kai: So what I want to do is highlight a couple of things that bother me about large-scale social engineering projects that try to enforce certain kinds of normative behaviours in the population. First of all, what concerns me is the accuracy of the data. Surely we can all agree that data is often faulty, that sensors are not always right, that data can be tampered with, and that people have different kinds of access to the data. My worry is that when an individual is banned from using certain kinds of infrastructure because their credit score dropped, or because some predictive analytics said that someone needs to keep an eye on them, how do I prove that the data about me is wrong when it's clearly recorded in the system? What is the accountability of these systems, and what processes do individuals have access to to actually have their scores or their data changed? And then on a bigger scale: if these systems enforce one type of behaviour and try to weed out other types, are we not creating more and more homogenous societies and social groups? Are we not weeding out diversity that we might otherwise value in society? And will that not hurt the capacity for renewal in a society, or in a larger group such as an organisation - will it not lead to less creativity and problems with innovation down the track? So I think we need to think carefully about the implications for individuals, but also about these systemic effects that we're creating when we are advocating and enforcing one type of behaviour over another.
Sandra: I also think that we need to have a more robust debate about the effects of this sort of big data collection out there in the world. Science fiction series like the Black Mirror episode in which everyone has a social credit score - which they need to keep up to be able to access good products and services, have nice places to live or go to good schools - allow us to explore those worlds. But there are a number of things we can do now to think through the implications of what such big data collection would mean in the world. I spent today, for instance, at the Institute for the Future, and upon going to their bathroom facilities there was a sign that caught my attention. It said: "Attention: this toilet is under the surveillance of the city of Alameda and the state of California. By using this facility you grant consent for your biological waste to be analysed by the city, the state, and any third party with whom the data is shared. Waste data may be combined with other facial recognition or identification technologies if necessary to contact you regarding your contribution." And it's dated January 1st 2027.
Kai: So I think it's great that an institution like the Institute for the Future tries to explore the implications of large-scale surveillance before it comes to fruition - even though I find it totally plausible that this will happen even sooner than 2027 if we're not having those discussions now.
Sandra: And indeed while such technologies might allow you to know that you have a health condition that you weren't aware of in the case of the smart toilets or it might keep our streets quieter and reduce noise pollution or allow people who don't have a credit rating to access certain products and services, the unintended consequences of these technologies are not something that we can overlook.
Kai: So there's no doubt that a lot of these initiatives are conceived with very good intentions to improve the lives of people. The question remains who gets a voice in saying what is a good life and what we should be doing for this and whether it is okay to advocate one way of doing things for everyone.
Sandra: Let's move to something slightly more cheerful.
Kai: Our Future Bites - short stories that we find amusing or from which we learn new things.
Audio: Hahah.
Kai: That was Alexa.
Sandra: Have you replaced me with Alexa?
Kai: You left us. So Sandra what is one thing that you have learned this week?
Sandra: Researchers can be wrong.
Kai: This is the MIT story you're referring to?
Sandra: Yes, a very prestigious MIT research centre published a research brief that looked at ride-sharing driver compensation. It was driven by a Stanford University researcher, and it was a report by MIT that looked at how much people are making on platforms like Uber and Lyft. It was all over the news that drivers earn no more than $3.37 an hour - that after accounting for owning, servicing and operating the vehicle, and gas and all of that, they would actually be losing money.
Kai: So that sounds all pretty bad but then Uber came out and in typical Uber fashion one must say and accused MIT of being an acronym for...
Sandra:...mathematically incompetent theories, at least as it pertains to ride-sharing.
Kai: And indeed MIT came out soon after and, credit to them, apologised and said yes, okay, if you put it that way the survey that they used can actually be misread, and we cannot be quite as certain about these results. So they amended their results - which still looked pretty bad though.
Sandra: They were still below the official estimates, and the MIT researchers have agreed to revisit their findings. Their conclusions are still not in line with what Uber and Lyft have put forward as the average earnings. But the reason I wanted to highlight this story is not that researchers can be wrong and can revisit their study. This study actually highlighted that we do need to estimate the economics of ride-sharing, and that this is really, really hard to do, because we know very little about how people work in the gig economy. The fact that there is no good estimate, and that we cannot put good numbers to it yet, means that a lot more work needs to be done to establish whether drivers are adequately compensated...
Kai: And that's a big take away from this story because the MIT researchers then invited Uber to please share their data with the researchers so that we can know with more reliable data what the actual earnings figures are so that the researchers do not have to rely on survey instruments and self-reporting by the drivers. But it doesn't look like the company is forthcoming with that data.
Sandra: So what was one of your short bites for the week?
Kai: Okay, we have a space junk problem. That's nothing new. There is a lot of shit in space: about five thousand satellites, of which only 1,700 are actually in use. The others are decommissioned - they're dead, just floating around in space. Now, this is a problem for two reasons. The first reason is that SpaceX is currently preparing a new satellite-based broadband system, and they want to put twelve thousand satellites into space - more than double the number currently up there. These are satellites that each weigh 900 pounds, and they have fuel on board so they can manoeuvre, which is all great. But because there's so much junk floating around in space already, this increases the likelihood of collisions. And that's a problem, because in 2009, when an old derelict Russian satellite slammed into a functional Iridium telecom satellite at a speed of 26,000 miles per hour, it shattered into about 200,000 centimetre-sized bits of debris, which now pose a big problem. So what people are worried about is a chain reaction in space. This was first raised as a problem in 1978 by astrophysicist Don Kessler and has been named the Kessler syndrome. The fear is that two satellites colliding in space disintegrate into so many bits of debris, which can then hit other things that in turn explode and disintegrate, that it sets off a chain reaction in space, taking out a large number of functional satellites and potentially crippling essential infrastructure that we depend on. So NASA is now warning that before SpaceX embarks on a program that more than triples the number of satellites in space, we have to have a better solution to manage what's already up there.
Sandra: Maybe another challenge for Elon Musk. Well, I've got another story for this week, which is that Baidu, China's version of Google, can now clone your voice with just seconds of hearing it. In a new white paper, Baidu describes its latest work in artificial intelligence: software that can clone voices after analysing only a few seconds of audio. This software can not only mimic the voice it's heard but can also change the voice to reflect another gender, or slightly change the accent it uses, and so on. There are a number of examples that have been posted, which we will make available in the show notes. Now, the cloning of voices is not new - we have been able to do this, but it usually required 30 minutes of training material and then 20 minutes of audio. Lyrebird, a Canadian company, has managed to do it with only one minute, but it's now down to mere seconds. And this raises some very interesting concerns about the types of fraud you could commit. In Australia, for instance, there are a number of government services where you can authenticate yourself with a voice sample, which would now be very open to fraud. There's also the discussion we had quite recently on the podcast about fake news, and the ability to basically bombard these services with a lot of fake generated content. The fact that this can now be done at scale, and with very, very little input, is something that we need to look at.
Kai: And this is especially concerning when you think about the large-scale data collection initiatives that we just discussed. If someone can synthesise your voice or my voice, for example from voice samples from the podcast, and can use this to do things that would negatively impact your or my social credit score, how are we going to prove that it wasn't us who misbehaved, did the prank or said something on air that wasn't appropriate?
Sandra: Or were loudly partying on the street in Eindhoven.
Kai: Absolutely. So fake voices, fake milk, fake meat. I guess that's all we have time for today.
Sandra: Thanks for listening.
Kai: Thanks for listening. Megan, can you look after Alexa?
Outro: This was The Future, This Week. Made awesome by the Sydney Business Insights Team and members of the Digital Disruption Research Group. And every week right here with us our sound editor Megan Wedge who makes us sound good and keeps us honest. Our theme music was composed and played live from a set of garden hoses by Linsey Pollak. You can subscribe to this podcast on iTunes, Stitcher, Spotify, SoundCloud or wherever you get your podcasts. You can follow us online on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news that you want us to discuss please send them to sbi@sydney.edu.au.
Close transcript