This week: we talk with Wired Magazine co-founder Kevin Kelly about artificial intelligence, groupthink, and excellent advice for living.

Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Futures Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

Our special guest this week

Kevin Kelly

Our previous discussions on generative AI and ChatGPT, generative AI and creative work, and weird new jobs like “AI whisperer”

Wired Magazine

Kevin’s blog, The Technium

Kevin’s recent article in Wired on AI art generators and creativity

Kevin at SXSW 2023 in the SXSW Studio

Excellent Advice for Living, Kevin’s soon-to-be-published book

How AI can bring on a second Industrial Revolution, Kevin’s 2016 TED talk

The future will be shaped by optimists, Kevin’s 2021 TED talk


We need your help to vote for our session at SXSW Sydney!

In AI fluency: what’s coming for my job?, award-winning STEM journalist Rae Johnston will be joined by Sandra Peter and Kai Riemer from the University of Sydney Business School and Kellie Nuttall from Deloitte to discuss the state of AI and the future of work.

We’ve all seen how the conversation around artificial intelligence has changed since the launch of ChatGPT, but how should businesses, organisations, and the people who work for them, prepare? It involves knowing how AI works, what it can do, and what that means for the future: being AI fluent.


Follow the show on Apple Podcasts, Spotify, Overcast, Google Podcasts, Pocket Casts or wherever you get your podcasts. You can follow Sydney Business Insights on Flipboard, LinkedIn, Twitter and WeChat to keep updated with our latest insights.

Send us your news ideas to sbi@sydney.edu.au.

Music by Cinephonix. Image of Kevin by Christopher Michel.

Disclaimer We'd like to advise that the following program may contain real news, occasional philosophy, and ideas that may offend some listeners.

Kai We have to talk about generative AI, more ChatGPT.

Sandra It's inescapable, inevitable, really. So we thought we need to rope in one of the leading technology thinkers.

Kai And writers.

Sandra And one of our intellectual heroes, really, Kevin Kelly, the co-founder of WIRED Magazine.

Kai Who today also holds the awesome title of Chief Senior Maverick at WIRED. I have title envy.

Sandra We all have title envy. And Kevin is such a nice guy too.

Kai So Kevin gave this fantastically interesting talk at South by Southwest recently.

Sandra And he was kind enough to sit down with us afterwards for an interview.

Kai And the thing about Kevin is that he's one of those original thinkers. So he had some really interesting observations about generative AI, ChatGPT, all of these.

Sandra Well, yeah, for example, Kevin is an eternal optimist when it comes to technology, but he's not really concerned that we, or anyone really, will lose our jobs. AI is not coming for our jobs, but AIs are coming for our job descriptions.

Kai Yes. And it is plural, according to Kevin, there are many AIs.

Sandra But they will not replace us. They'll be more like, you know, interns, assistants.

Kai And what's important is that we will really have to learn how to work with these assistants.

Sandra He says it will become part of a required skill set.

Kai Everyone's skill set, really.

Sandra We'll all need to continuously skill up, we'll be perpetual newbies.

Kai So let's hear what Kevin has to say about these technologies and how they are changing our world.

Sandra And you really have to stay on till the end because Kevin also has a wealth of really useful...

Kai And funny!

Sandra Really funny life advice that he agreed to share before his upcoming book is released.

Kai Let's do it.

Sandra Let's do it.

Intro From The University of Sydney Business School. This is Sydney Business Insights, an initiative that explores the future of business. And you're listening to The Future, This Week, where Sandra Peter and Kai Riemer sit down every week to rethink trends in technology and business.

Kevin Kelly My name is Kevin Kelly. And I am the Senior Maverick at WIRED Magazine. But I'm better known as the co-founder of WIRED Magazine; there were at least five of us. And these days, I'm also an author of books about the future and the culture of technology.

Sandra Thank you for taking the time to talk to us. Let's start by talking a little bit about AI, it's on everybody's minds these days. Overhyped or underhyped?

Kevin Kelly In the long run it's underhyped, because I think the effects of this will be centuries in the making and will affect us deeply. And not just affect us in a kind of superficial way, but actually reshape humans and reshape us. And so it's probably overhyped in the near-term, because I think some of the employment disruption that people expect is not going to happen in the way they think; it's going to be more positive. And I think the general speed will, again, not continue at the pace it is right now. I think we're gonna just have these little plateaus, little quantum leaps. We're in one right now, but it's unlikely to sustain at this rate for a decade or more. So it's a little bit overhyped right now.

Sandra But you're saying in the long-term underhyped. Why's that?

Kevin Kelly Right. Because, so far, on this planet, we've had a very peculiar history in terms of the galactic possibilities, because on this planet we have no other species that are close to us in our kind of thinking. We could easily imagine other planets where there have been multiple related species that might have been almost like us. But there's a huge gap between us and, say, gorillas or chimpanzees. But now we're going to fill that gap. And we don't have a lot of experience with living in a world where there are multiple species of sentience, or intelligence, and other kinds of minds. And that's sort of what we're going to be doing in the long-term.

Kai So when you say in the short-term it's overhyped, we hear a lot about job losses. You're not too worried about that.

Kevin Kelly Not at all. No one, really, is going to lose their job (very, very few, but let's round it off to no one), but you're going to lose your job description.

Kai Right.

Kevin Kelly Most jobs are bundles of different tasks, and a lot of those tasks will be lost. But the job description shifts, and you have some new tasks including things like working with the AIs.

Kai So as AI comes into our world, into our workplaces, what do you think the relationship will be that we'll form with these AIs as they come in?

Kevin Kelly My guess for the initial time, like right now, in the next five years, is that the relationship will be of seeing them as interns, as assistants, as co-pilots, as colleagues, as someone that we work with, but I think not on an equal basis, but like an intern, where they're supporting us to do our job. Doctors will use them tremendously in diagnosing because they, the interns, can command a much broader wealth of knowledge, and they keep updated. And designers will use them to make prototypes, alternative ideas. Writers use them; coders are already dependent on them. Most people will find them useful. If you find Google useful right now at all, you will find these interns even more useful because they are not just answering questions. They continue to answer questions, but they also can generate things as well.

Sandra What's the space you think this will be particularly interesting in? Because you discuss this idea of the in-between spaces, in-between domains. How do we think about that?

Kevin Kelly Yeah, so one of the surprising and unfilled roles right now that these are very, very suited for, we're discovering, is in imagining, fulfilling, answering things in the white spaces, where you have two distinct fields that don't seem to intersect. To imagine something that combines those is something we can do. But the details and the mastery required of it mean that we would not normally do it unless we were willing to pay the expense of hours and hours and hours of trying to do it. And here, these AIs can do them very, very quickly. And so you can take almost any two random subjects and ask it to combine or to hybridise or to marry them, or to find something that's in-between, the connection. And again, it's something that we can imagine doing, but it would require so much effort that it rarely is done. But here, they can just do thousands of them very, very quickly. And that is a kind of creativity; in fact, it's almost some people's definition of creativity: to take two things that don't normally intersect and intersect them. And so this is tremendously useful. And it's not the only thing they do. But it's one thing that they do very well.

Kai And we find this both with the text generators, and with the image generators.

Kevin Kelly Exactly, right. So you can combine, say, you know, I'm looking at a table here, a wooden table. So let's say wood and dancing. So give me some ideas about wood and dancing. Is it wood that can dance, is it a dancing tree? Whatever it is, we'll just begin to explore that in great detail. And so that can be visual, that can be with text, there are all kinds of ways. And that's what it does very well: it's taking patterns of woodness and dancing patterns, and it's marrying them, it's hybridising, it's saying there's another section here where these patterns can join. And nobody has looked at that before. And that's the thrill, is that as the prompter, or the prompt engineer, you can kind of explore this combination, which probably has never ever been explored. And the way that you do it, no one will probably ever be able to get back to it, because there's a randomly generated element in this. And the possibility space is so huge that it is often impossible to get back to that place unless you have the exact seed. Otherwise, you will never be able to arrive there.

Kai That's both exciting...

Kevin Kelly Right.

Kai ...And also tragic in a way, because we're generating all this wealth of ideas, many of which will never see...

Kevin Kelly Right. Will never be seen again. In real life we have those examples of walking along outside in the wilderness, and the flowers are blooming and the light, and there's something that will never happen again. But you're present. And you appreciate it. And it'll probably never be reconstructed with the exact clouds and everything. And so those moments happen, and we're not disturbed that they're never gonna happen again; that's part of the joy.

Sandra But we've never really had that, like creativity for an audience of one...

Kevin Kelly Right. Exactly.

Sandra At, you know, reasonable cost.

Kai So, you mentioned prompt engineers or prompters; it takes a certain skill, right? So what does it take for us as a species, or as an economy, as a society, to make best use of these technologies? And what does it mean for workers, when you think about skill, for example?

Kevin Kelly Yeah, so my premise is that working with the AIs, and there'll be many kinds, will become an essential skill set for people, in the same way that now you have to be able to use a mouse and a computer and kind of basically understand how menus and things work. This is sort of a basic life skill at this point, which was not true 30 years ago, for sure. And I think we'll come to see that, you know, in your resume, or whatever it is, having some familiarity with these AIs, maybe the popular ones, will be an essential part of your resume: 'I'm proficient in prompting whatever, the Google, the Bard'. And so some people are going to be better at it because they're going to spend more time, and it is evolving into a kind of language, even though we are using whatever your native language is to interact. We call it natural language; you're using your natural language to interact with the AIs. There'll still be specialised vocabulary, a specialised, like, sub-language for those who do it really well. Like a keyboard shortcut, they'll have language shortcuts to get these AIs to do what they want to do. And that kind of language shortcut is a skill you can acquire from 1,000 hours of doing it.

Sandra Does it mean that we need to reskill across the board? We always talk of re-skilling for people whose jobs become outdated, or particular parts of society, or under-skilled workers. But now I'm thinking, you know, we're re-skilling PhD students, we're re-skilling doctors, we're re-skilling people who are at the cutting edge of their field.

Kevin Kelly I think that's a perfect way to put it: perpetual re-skilling. Because I talk about the 'perpetual newbie', and how all of us, no matter what age we are, are going to be newbies, lifelong learning. And the young people, the millennials today, say, 'Oh, I'm a digital native, you know, I have an advantage'. And I'm saying, 'Well, yes, but you are not going to be a digital native five years from now. And so you're gonna have to relearn and re-skill, as you say, like the rest of us.' So that advantage is very short. You're only a digital native for five years. And then you have to re-skill.

Sandra I know, when you talk about technology, you usually have a long-term, optimistic view of where this will take us. But I do want to ask you whether there's anything you're worried about?

Kevin Kelly Yeah, there are a few things. I'm worried about the weaponisation of AI, maybe not in the way you think, because actually I favour, theoretically, robo-soldiers. I think, compared to soldiers, we may actually be able to program them better, so they don't commit war crimes, or they're not, you know, emotional. So we have rules of war. And rules of war are very strange, because it's like, they're strange because we allow war at all. But if you have war, it's better to have rules of war. But actually, it's better not to have war. And so I think the idea of programming robo-soldiers to kill, which can kill more kind of fairly, without war crimes and stuff, will heighten the absurdity of what we have with 'no, we want humans to kill, only humans are allowed to kill humans'; that's like crazy too. So I think it might help us to diminish war if we had robots doing it, and they might do it better. But the kind of AI weaponisation that I'm more concerned about is applying AI to cyber war. And cyber war itself is very scary to me, because we don't have really good rules for it. We don't have an agreement on what's allowed and what's not allowed. And it's very hard to hold anyone accountable, to prove it. And so I think there's just real potential for harm and disruption with AI and cyber war. And so the solution would be to have agreements, to have treaties about what is allowed and not allowed.

Sandra Like we did for nuclear or for biological weapons.

Kevin Kelly Weapons and bio, yeah exactly.

Sandra And the chances of that?

Kevin Kelly Well, you know, I think there are low-level things happening right now. And again, all parties are involved, and I'm sure the United States, my country, is involved. And that's the problem: if you have tanks, you can take aerial photographs and see them right now. There's no transparency in this. And so we don't even know what's going on, which is part of the reason why I'm worried about it.

Kai How worried are you about bias in AI? I mean, at some level, this AI works because there are all these patterns and distinctions in the world to begin with, right? But some of those patterns we tolerate less; we don't want them in whatever outcome the AI generates.

Kevin Kelly Right, right, right. The biases are there, they're real. And they're real because the bots are trained on the whole of human content, human-generated, from the best that we've ever done to the worst. And also, by the way, some of our greatest literature is full of really bad behaviour. So it's inevitable.

Kai Sometimes that makes them great.

Kevin Kelly Exactly, right. So it's like even the noble works depict really horrible things. So it's sort of inevitable that these bots will be biased in terms of ageism, racism, sexism, warmongering, etc. And so we shouldn't really be surprised. But that we collectively are not going to tolerate it, there I agree. But the issue is that, since we want them to act better than us, we don't really have a very good model of what that means. The first approximation, oh, is that you just make them woke, you just woke them up. But, well, first of all, we don't have any consensus on that. And secondly, I'm not sure that that is actually what we want. And so we don't have very good pictures in our head. And we don't have the consensus on that, to actually be able to model it, and to give the code to them. And so I think part of this discussion right now, at least in the States, that we're having about wokeness, is going to be amplified, and expanded, and maybe deepened, as we try to think about, well, what is the superior behaviour? What does that look like?

Kai We're kind of hypocritical...

Kevin Kelly Oh absolutely.

Kai Because they just reflect our own behaviour back at us and we don't tolerate it.

Kevin Kelly Right, Exactly.

Sandra We struggled to agree on standards for autonomous vehicles across different nations and different ethical systems. So I think agreeing on this is one of the fears. How do you think about groupthink?

Kevin Kelly When it comes to my independent opinion, or my groupthink opinion?

Sandra and Kai Both!

Kevin Kelly How do I think about groupthink? Well, it's perfectly fine! I don't know. So to be more serious, a lot of what everybody knows is absolutely correct. So we tend to often dismiss what everybody knows. But most of what everybody knows, most of the acceptable conventional wisdom, is absolutely true. The capital of England is London. Okay, everybody knows that. That's true. So I would say some very large percentage of groupthink is correct. So there's a little bit of tension, because we want only a small amount of it to be independent thinking, to help improve or to advance the groupthink. And so science works by taking 10 new ideas and making them the consensus. So we do want that groupthink.

Kai Are you worried that this is breaking down, the groupthink? Do we need more?

Kevin Kelly No. I think there's plenty of agreement. And I think there are some political areas and others where there is not, but by and large, the majority is still consensus. What we want, though: there are often areas where there is consensus where there shouldn't necessarily be, and that's where science and other places advance, or in cultural change. Most of our thinking right now is about how to facilitate changing your mind. I'm not too worried about where we have agreement.

Sandra We were talking about consensus, common sense. One thing that you're suggesting we do change our mind on is how we think about ownership in this space, how we think about rights in this space. And you were suggesting shifting our mindset from thinking in terms of copyright to reference rights or rights of reference. Can you talk a bit about that?

Kevin Kelly Right, right, right. It's been clear since the advent of the digital era that the old industrial model for intellectual property revolved around copies. And that just has broken down in many ways. And for a long time, there was resistance to streaming because it seemed to have gone against the current of protecting copies, because things were just flowing. And so right now, there's an attempt to kind of take generative AI and think of it in terms of copies of things, and it just doesn't fit. It's just very, very hard. And we have some aspects of copyright law which are focused on human creators, and that also isn't going to survive, because it's very clear that we can have machine creators. And so there's an attempt right now to kind of squeeze the intellectual property regimes of these things into this existing copyright mould. And I don't think it's going to work. It's already not working with the digital stuff very well, but we have some kind of half-baked idea. But now it's just simply not going to work. And so I'm totally in favour of having these temporary monopolies given to creators, even though I believe, my own version of innovation is that if you didn't write the book, or invent the patent, someone right after you is going to; nonetheless, there is an incentive in having a temporary monopoly, but it needs to be very short. And so what should that monopoly be based around, and what rights should you have, etc.? As I said, right now we have just one, which is copies of things. But I think we can change that, or shift that, in the world of AI, and talk about another possible way of structuring it, which is referencing. And this goes to this idea that the humans, or even the machines, that train an AI might have some credit, or might have some stake in what's made later on. What we might say is that it's not just the AI making it, but it's the AI plus all the other people who are being drawn on to create that thing. So what we don't really know, and this is true, is we really don't know how these black boxes are working. Just as you and I can't really unravel who influenced us in our thinking, and to what extent. So when I'm saying something right now, most of what I'm saying has probably come from my background and training and my experience, without it being possible for me to kind of unravel who has influenced me more in what I've just said.

Sandra Oh, I've got this guy called Kevin, Kevin Kelly.

Kevin Kelly So this attempt may not work, but it would work better than copies. And so there is some work right now trying to unravel influence, and assign weights and things to each generation. I don't know if it will work; the researchers don't know if it'll work. But again, it has more promise in my eyes than relying on copies.

Kai My understanding is that at this point, there's somewhat of a trade-off between the ability to explain and assign those references, and the power of these systems: when you make them bigger and more inscrutable, they also become more exciting in the synthesis that they generate, right? So we might solve this, but...

Kevin Kelly Yes, and there are people trying to do it, reverse engineering, going back through it. And so you're right, there may be certain systems that are set up to track that, and they have a cost. And there are also other ways in which we can have kind of green, or clean, training sets, where everybody in the training set starts from scratch, and they have opted in. So you have a curated training set where everybody is participating, and knowingly, with that right of reference. And so I think that's another way that right of reference might work, where you say, 'okay, my work that I'm producing', like we have the Creative Commons licence, we could have a right of reference licence: 'Anything I produce, I'm okay with it being used to train an AI.' And that might be all we need, just right of reference, kind of like a Creative Commons mark, which says, 'I would like to be influential, and have people, you know, use this as training for other things'. So I think there's just a lot of new innovation that we might have to think about in terms of restructuring intellectual property in this age of synthetic creativity.

Sandra Every time one of these new technologies comes up, there are all these ideas that we have to invent and innovate, you know, legal frameworks and all these things around it. But there is also a whole set of ideas that we need to let go of, or unlearn. Like you mentioned now with copyright, the idea that it no longer serves us. Are there any other ideas where we need to let go of common sense, or maybe invent new words, to make sense of what's happening now around AI?

Kevin Kelly Hmmm, that's a good question.

Sandra We've had to do this with platforms, you know, antitrust legislation no longer serves us to think about Amazon.

Kevin Kelly Yeah. Again, just responsibility. So there's a great book by Kate Darling at MIT, who was trying to explore parallels between animal ownership and robot ownership. Like if a self-driving car does something that is a crime, or hurts somebody or something, who's responsible? In the way that, in the biblical era, if your ox gored somebody, there was always the question of, well, how much accountability do you, as an owner, have? You know, the ox went crazy, it was startled, whatever, something happened. Do you have full responsibility, whatever? And so that idea of responsibility, and I think they call it legal tort, is something that will definitely be re-examined. If you have an AI that's making choices, or whatever, and it does some harm that nobody anticipated or that had never been done before, how much responsibility do other people have? Does the user have it? Does the manufacturer, do the programmers have it? So this would be really very new. And some of our older ideas about ownership will have to change. Because do we own the AI that we're using? And this is a little bit like what we see in the copyright question: you know, I generated an image, do I own the image? Does Midjourney own the image, do the programmers? I mean, it's like, who owns it? So responsibility in general, and tort, I think are going to be redone, reimagined.

Sandra It's interesting that we do find ourselves at this inflection point in technology. So I think a lot of these ideas are things that we'll keep discussing, and I hope we'll come back to you to tell us a bit more about that. But you're not only an amazing tech writer; for a few years now, you've been sharing your life learnings. And you have advice not only about how to think but also about how to live. Some of it very, very practical. Your advice that, you know, “if you think you've seen a mouse, you probably have, and there's more where that came from”, is very, very true and useful. Do you want to share some of your more recent ones? I know you're getting ready to launch a book soon.

Kevin Kelly Yes. So I have a new book coming out. It's called Excellent Advice for Living, and the subtitle is Wisdom I Wish I Had Known Earlier. And this is the book, and there are these 450 little tiny, almost tweetable, bits of advice. And it's aimed at young people and the young at heart. And I try to take a whole book of advice about something very complicated and reduce it to a little tiny sentence. And that's my joy, if I can do that. So, you know, just picking at random, it says, "Don't ever work for someone you don't want to become".

Sandra Hear, hear.

Kevin Kelly Right. So I like to distil these into a little maxim, a little proverb, that I kind of repeat to myself. So, like, if I'm, you know, interviewing or looking for work, it's like, would I want to become this person? Otherwise, I don't want to be working with you or for you. Things like, just as a reminder when I want to buy something, "no one is as impressed with your possessions as you are". So you see a guy stepping out of a Porsche, you don't think, 'oh, I want to be you'. No, you're not thinking that, you're thinking, you know, it's like, 'what's the matter with him?' Then there's "don't let your email box become your to-do list, run by others".

Sandra Hear, hear. That's one I'm still working on.

Kai Everyone needs to feed that corpus into a bot and create a Kevin-bot of, you know, daily insights.

Kevin Kelly Yes, people have talked about that. And that would be fun.

Kai To consult before every important decision.

Kevin Kelly Exactly. I haven't yet done that. But I am going to do that to see if we can train it and have it make new ones. It would be kind of fun to do.

Kai It would be fun.

Sandra What's the bit that served you best in your life?

Kevin Kelly There's a couple; they may be more career-advice kind of stuff. My favourite one is, "don't try to be the best. Try to be the only". The only is really where you want to be. You don't want to be, like, the best basketball player. First of all, it's very, very unlikely. And you're competing against a very small group. You want to be the only, you want to be the guy, person, gal who's doing something with sports that nobody has ever done before. And that's a very high bar, to figure out what it is that you can do that nobody else can do. But if you can aim there and land there, that's the golden ticket. Another bit of advice, related to being the only versus the best, is you want to, if at all possible, work in an area where there are no words or names for what it is that you do. So 10 years ago, there were people who were doing podcasting, you know, trying to explain to their mothers, "well, it's kind of like radio, but it's not, it's sort of like an audio documentary, you know, whatever it is". And so that meant that that was a very potent place to be, that we didn't have the language for it. And so today, you want to be somewhere where it takes some time to describe to your mother what it is you do. That's a sign that you're in kind of a good place. And the third bit is, in your 20s you should, if at all possible, spend a whole bunch of time, maybe a year, doing things that don't look anything like success. They're crazy, unpredictable, bizarre, strange, weird, unprofitable, and inefficient, wasting time, whatever it is; it's something that looks nothing like success. And that will become your foundation and touchstone for success later on, that experience of doing that thing.

Sandra And the thing you'll say at dinner parties for the rest of your life, that thing you'll talk about.

Kevin Kelly Exactly. Whether it's riding a bicycle around the world, whether it's building, you know, an ice sculpture that's the shape of, you know, Godzilla, whatever it is.

Kai You could ask generative AI.

Kevin Kelly Exactly, if you want some help in doing something really strange. But yes, it will become a muse to you.

Sandra Kevin Kelly, thank you so much for talking to us.

Kevin Kelly You're very welcome.

Sandra It's been an absolute pleasure.

Kai Thank you.

Kevin Kelly Thank you for taking time.

Outro You've been listening to The Future, This Week from The University of Sydney Business School. Sandra Peter is the Director of Sydney Business Insights. And Kai Riemer is Professor of Information Technology and Organisation. Connect with us on LinkedIn, Twitter, and WeChat. And follow, like, or leave us a rating wherever you get your podcasts. If you have any weird or wonderful topics for us to discuss, send them to sbi@sydney.edu.au.
