Eighteen months ago we made the decision to try podcasting as a medium for exploring the future of business. Our world is changing. What are the right questions to ask to understand what's happening at the edge of impactful technology, the world of work, sustainable development, and the shifting ground of megatrends and disruptions?
One hundred episodes later, here's (some of) what we've learnt:
There is a bigger picture and we need to pay attention.
We started with our In Conversation and CEO Insights episodes and looked at many topics, from harnessing collective intelligence, to technology and poverty to social entrepreneurship.
One of our most popular episodes was our chat with Hugh Durrant-Whyte. At that time Hugh was Chief Scientific Adviser to the UK Ministry of Defence and Director of the Centre for Translational Data Science at the University of Sydney. These days Hugh is Chief Scientist for NSW, and in a position to deal with the big structural changes he outlined: mining and agriculture will be almost fully automated, managed from central hubs (possibly offshore), and jobs will not be returning to those sectors. Food will be grown in laboratories as much as on the land. Autonomous vehicles are easy to build – but bloody hard to drive. Most of all, though, the real risk is an increasingly polarised society as a result of automation and machine learning.
Hugh was also spot on about a topic that exploded into public consciousness a few months later via the Facebook/Cambridge Analytica scandal.
Sandra Peter [The future of automation]: What do you worry most about the future?
Hugh Durrant-Whyte: Globalised IT companies I think are a big worry, an increasing worry. I think our big thing in the future is going to have to be figuring out the whole data ownership, data privacy issue because I think actually most private companies have gone way beyond what any government would do, already.
We trusted Rachel Botsman (author of 'Who Can You Trust?') to explain how we are being prepped to live in a post-truth society.
Sandra Peter [The trust shift]: So why this very rapid and huge shift from trusting technology to do things for us, to trusting technology to make decisions for us?
Rachel Botsman: I think it has been waiting in the wings for a long time – these trust leaps, when we take a risk to do something differently. And I think we've kind of been training for this. We use Netflix, we don't think about it, but that is outsourcing decision making on what we're going to watch. The Twitter feeds. And so I think now we're ready for these monumental leaps around decision making. I think it's rubbish when people say people won't trust self-driving cars. I think people will take the leap, and then millions will follow very quickly, and we will trust that car, to the point where, within a short timeframe, human decision making will come into question.
There are questions we must dare to ask.
We spoke to many world-leading researchers, both here at the University of Sydney and at universities around the world.
We asked Professor Rae Cooper why she asked women about their working lives – and the answer was remarkably simple: Because no one else had bothered to.
Rae Cooper [Women and the future of work]: There's a real absence of women's voices in this debate. There are more robots being accounted for in the future of work debate at the moment than there are women, or gender and gender differences.
And the surprise for Professor Cooper and her team, who thought wages and hours would be working women's biggest concerns? Nope – for 9 out of 10 women it was respect.
Rae: I think it goes to power. I think it goes to relationships. I think it goes also to gender. I think they’re talking about gendered power relationships as much as they are the material conditions of work.
Behavioural economist Keith Chen taught capuchin monkeys to use money – and found that their economic decision making looks remarkably like ours:
Keith Chen [Uber, money and monkeys]: Interestingly though, when we started to investigate whether or not these groups of monkeys, when put in very similar situations to humans in economic decision making settings – what was amazing was that there was no statistical test we could run that, on some very basic levels, could establish a difference between these monkeys and the median American stock market investor.
Keith now uses behavioural economics to understand how drivers and riders make decisions on ride-sharing platforms such as Uber.
Some of the answers were… surprising.
We learnt that businesses run for social good are a big part of the global economy:
Jarrod Vassollo [Technology against poverty with 40K’s Clary Castrission]: Social enterprises actually do make up a large proportion of the economy. So for example if we were to look at the United States, social enterprises in the US represent 3.5% of the country’s GDP, which is actually the equivalent of Silicon Valley.
And if, like Clary Castrission's 40K Plus, you run a social enterprise, doing good does not necessarily earn respect.
Clary Castrission: We charge for it – that's what makes us a social enterprise. And it does ruffle the feathers of some people, in terms of 'how can you charge people who are only earning three to seven dollars a day?'
One of the privileges of the program is asking senior managers about how they do things. We learnt that the Australian Navy runs its own reality show. Seriously.
Commodore Chris Smallhorn [Innovation in the Navy]: We've unashamedly stolen a pretty popular show's concept, "Shark Tank". It just happens to work for us – the Fleet Air Arm shark tank turns into the acronym FAAST. It's about finding solutions and ideas from every level of our workforce, and giving people an opportunity to rapidly move those ideas into the hierarchy of the organisation, where we have the tools and levers to pull – and the dollars – to turn good ideas that are demonstrably effective in bringing a better warfare effect into reality.
We discovered why being stupid at work can be valued:
Sandra Peter [The Stupidity Paradox]: Can stupidity work sometimes for the organisation?
Mats Alvesson: It certainly does. It makes social life much easier if nobody starts raising serious questions or doubts, or asking other people 'what do you really mean?', 'what's the purpose of all this?', 'why should we do this?'. If questions like that can be avoided – that's the point of functional stupidity – then social life is much smoother; the social machine keeps functioning. So it has its advantages: for people, it means they can relax a bit more, they can be lazy in terms of commitments and thinking. They can sleep well at night and don't have to complicate their existence by asking creative questions, such as 'what in hell are we up to here?'
We need to better understand how to think about the future, every week.
A very significant part of the SBI conversations has been our weekly chats about the future with Professor Kai Riemer, who leads the Digital Disruption Research Group at the University of Sydney Business School.
Kai and I started chatting over coffee about the news coming out every week. Topics like: should we tax robots (and how would that actually work?), are robots taking our jobs (and what does that actually mean?), batteries are a good idea (but do they have a dirty secret?), a Universal Basic Income (which has supporters on both the left and the right), the logic of logos, why women feel oppressed in open plan offices, and so much more.
We decided to record our chats (others must struggle with these topics too) and The Future, This Week was born.
The Future, This Week takes us to unexpected places of knowledge: we learnt how the rise of the cashless economy in Sweden caused a spike in owl smuggling, that open plan offices are sexist spaces, and that Facebook's plan to classify its users by social class will materially impact what you see online – not just on the Facebook platform. We talked about the chicken of tomorrow. Some stories just kept coming back:
Kai [TFTW December 22, 2017]: … we talked about self-driving cars, we talked about the environment and batteries, a surprising number of times.
Sandra: We talked a whole lot about Elon Musk. We ended up with a segment “It’s a Musk”.
Kai: And AI – there was a lot of AI, and interestingly the AI story changed a lot across the year. There was a lot of news that we saw early on that sounded scary – AI is going to take over the world, you know, the jobs will all be gone – so it was a very negative story. It started out with a very techno-positive slant – AI is taking over the world and there will be lots of fallout – but over time we've learnt that it's not quite that simple.
A recent TFTW special was a panel presentation from the Vivid Ideas festival in Sydney – ‘Mummy Can I Marry My Avatar?’ The answer may surprise you.
Season 4 of The Future, This Week starts this week.
We’ve also learnt that sometimes it is important to call bullshit:
Robots are not coming to kill us, but there are real dangers lurking in the algorithms that increasingly permeate our lives:
Cathy O’Neil [Cathy O’Neil on TFTW]: Algorithms – especially the algorithms that can potentially violate people’s rights, or close off opportunities that they deserve, or screw up democracy – those algorithms should be vetted by an FDA for algorithms. And they should be asking the same kinds of questions: Is this better than the human process it replaces? Is it going to destroy a bunch of people’s lives? For whom does this algorithm fail?
Our interview with author and algorithmic activist Dr Cathy O’Neil (Weapons of Math Destruction) also reminded us of the most important lesson we learnt from doing 100 podcasts:
It is up to us to activate the futures we desire.
Whether it is the future of work, sustainability, fake news, diversity, healthy cities, or rising inequality, it is up to us to understand not only how the world is changing but how we are creating it.