
This week: the speed, visibility and hype of generative AI, and goodbye.

Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Futures Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

This podcast was recorded on 27 April 2023.

Our explainer on generative AI

The New York Times’ feature on generative AI

The Economist’s explainer on generative AI and large language models

Vogue on using generative AI in luxury fashion

Our previous discussions, including generative AI with Kelly Kelly, our special on ChatGPT, generative AI and creative work

Balenciaga Pope and Donald Trump arrest generated images

Letter signed by Elon Musk and other tech leaders on pausing generative AI development

Ezra Klein’s podcast on AI and the future of work and the economy

The Australian Productivity Commission’s report on Australia’s data and digital dividend

Sydney Executive Plus

The AI fluency sprint


Follow the show on Apple Podcasts, Spotify, Overcast, Google Podcasts, Pocket Casts or wherever you get your podcasts. You can follow Sydney Business Insights on Flipboard, LinkedIn, Twitter and WeChat to keep updated with our latest insights.

Send us your news ideas to sbi@sydney.edu.au.

Music by Cinephonix.

Dr Sandra Peter is the Director of Sydney Executive Plus and Associate Professor at the University of Sydney Business School. Her research and practice focuses on engaging with the future in productive ways, and the impact of emerging technologies on business and society.

Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.

Disclaimer We'd like to advise that the following program may contain real news, occasional philosophy, and ideas that may offend some listeners.

Sandra The speed, visibility, and hype around ChatGPT, Midjourney, DALL-E, Stable Diffusion, and large language models means that we must, must talk again about generative AI.

Kai Yeah, we are discussing that article, you know, in your stream, like really any article recently.

Sandra Pretty much.

Kai Pick one.

Sandra It's been on the front page of everything from the New York Times to The Economist.

Kai Anything, Women's Weekly, Vogue, you pick it. ChatGPT, generative AI, and sometimes, you know, generative AI outputs, images are literally on the front pages of magazines.

Sandra Okay, we really, really need to do this.

Kai Well, let's do this.

Intro From The University of Sydney Business School, this is Sydney Business Insights, an initiative that explores the future of business. And you're listening to The Future, This Week, where Sandra Peter and Kai Riemer sit down every week to rethink trends in technology and business.

Sandra There has been a lot of conversation of late around generative AI, and especially the need to upskill, reskill, rethink what we do and how we do it. Now everything from, you know, super utopian scenarios of increased productivity to dystopian misinformation, disinformation, they're coming for our jobs again, again...

Kai Still?

Sandra Still. We've had a lot of questions actually coming in around generative AI. So I think we'll try to tackle a few of those. And probably the best place to start is, what is it?

Kai Well, let's not forget that generative AI, for all its hype, speed of development, and visibility, is one part of AI, right. So let's make it clear that a lot of organisations will still build bespoke AI, use their own datasets to solve problems, there's still going to be a lot of business applications of the kind of AI that you train yourself. But generative AI, these big models that are available from companies such as OpenAI, Google...

Sandra So think GPT, think Midjourney, Stable Diffusion...

Kai That Bard thing, and various other models that have interesting names like Claude, and others. So they are out there, and they are looking for ways to be used. And it's the beginning of a journey that has already picked up a lot of speed.

Sandra But what is it?

Kai So AI, generally, are algorithms that derive patterns from large amounts of data to make predictions. You make predictions to make decisions, or you make predictions in the form of what something should look like. So large language models, text-based generative AI, make predictions about what a piece of text would look like that responds to a certain prompt.
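A minimal sketch of the next-word idea Kai describes, using toy bigram counts rather than a neural network. The corpus and code here are purely illustrative, not from the show; real large language models learn patterns across billions of parameters, but the predict-the-next-token principle is the same:

```python
import random
from collections import defaultdict, Counter

# Toy "language model": count which word follows which in a tiny corpus,
# then sample the next word in proportion to those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Sample a next word weighted by how often it followed `word`."""
    followers = counts[word]
    if not followers:
        return None  # no observed continuation
    return random.choices(list(followers), weights=list(followers.values()))[0]

def generate(start, length=5):
    """Generate a short sequence by repeatedly predicting the next word."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

After "the", this toy model is twice as likely to predict "cat" as "mat" or "fish", because that pattern occurred twice in the training text; the same weighting-by-observed-patterns drives what a large model predicts.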

Sandra Image models might make a prediction about what the Pope might look like in a white puffer jacket by Balenciaga.

Kai And you might have seen this around the internet, or what would it look like if Donald Trump got arrested by the police? So generative AIs generate stuff, they use the patterns that get inscribed in these very, very large artificial neural networks to basically divine what a text prompt would look like as a picture.

Sandra So for instance, I can ask it what you would look like as a Pixar character. And it would give me a really cool Pixar character that resembles you. And in the case of large language models, these models are trained on what is basically the internet. And that does mean everything from Wikipedia and the BBC to, you know, that subreddit that you wish you had never opened and should not have forwarded to other people.

Kai But all kinds of text really, theatre scripts, chat conversations...

Sandra Code.

Kai Computer code, yes.

Sandra Same with images.

Kai Same with images, so they embody a lot of patterns of all kinds of different image styles, of all kinds of different kinds of text.

Sandra And it can do some amazing things. We should mention again that these are for instance, in the case of ChatGPT, language models not knowledge models, but they can do amazing things.

Kai So another question would be, what are they actually good for? What can we do with them?

Sandra Well, we can do a lot of assistive tasks. Think of them as your assistant, you know, it might be a young grad student who's just come in, doesn't know a lot, but is very eager to help and provide you with as much stuff as they can gather or help organise things for you or put them in specific formats.

Kai Yeah.

Sandra So you can ask it to write your ad copy about something, transform it into a Google ad that fits the character limit and so on, you can ask it to put information into tables or summarise information for you. A lot of assistive tasks. and the internet is full of examples.

Kai And that assistant, that intern, is not perfect, makes mistakes, so you can't stop using your own brain. And it can also be really creative. Creative in ways that sometimes exceed what we as humans could actually do.

Sandra So rather than trolling for hours for stock photography, you can just describe the image that you want and get a picture that is, you know, as good as the Pope in a white Balenciaga puffer jacket.

Kai Which already makes our life more interesting because we use a lot of visuals in PowerPoints. And now you can just go and create what you like, rather than spending time finding stuff on the internet.

Sandra And this is where some people have started to find this really useful. Writing that cover letter for your new job by giving it the job description and your CV and telling it to figure out what a good cover letter would look like. Or getting it to help you write your next report. But on the other hand, people have been, let's say panicking about it taking over jobs. If I'm a creative person, or if I'm in advertising, if I'm in communication, even if I'm a lawyer, is this coming to replace me?

Kai Well, back to first principles. It's a language model, not a knowledge model. These things are not reliable, they will not replace people. But those who know how to work them will be way more effective and productive. These really are systems that work best when you work with them. So they're not likely to replace us, but they're likely to change the way we work.

Sandra So they're not coming for our jobs, but rather, they are coming for our job descriptions.

Kai They will require us to acquire new skills, and organisations will think about how jobs will actually be sliced, how they will be cut, and how tasks might be redistributed across different roles.

Sandra Generative AI is moving really, really quickly. So lots of people have rightfully had some concerns about it. Because those models are trained on lots and lots of data, much of which is copyrighted, or belongs to people who might not necessarily have opted in.

Kai Social media, right. So we post stuff; things that go into those training datasets are created by people, and sometimes include quite personal information when it comes to social media profiles, or the profiles people have on publicly facing websites. Like academics, we all have profile pages that contain a lot of information about us that is publicly available and would be ingested into those models.

Sandra And this will really not be an easy thing to solve because the data doesn't get stored in this model. It's more information about the data, which is why we've seen all the conversations about copyright, is there actual copyright infringement if it's in the style of, but none of the images are copied or stored?

Kai And this is really quite different to how we used to do computing, right? People think about AI as having this vast data that it's using. But the data that is used to train is not actually in the model. The model is a vast statistical function that stores likenesses, that stores patterns. It doesn't store any one cat picture, it stores 'catness', what a cat is like. So it doesn't really store any personal information about anyone in the traditional sense that you could look up, that you could, you know, change, or that you could remove.

Sandra So me asking you to remove my data from your AI model would mean at this point, retraining the model.

Kai Yeah, it's completely infeasible, right. You cannot retrain the entire model that takes weeks and weeks to do, an enormous amount of energy to produce a model like GPT-3 or GPT-4. So it's not feasible to do that. So legislation, regulators, will have to find a way to square up the requirements of something like GDPR, and how this type of technology actually works. And we don't have an answer to that, as yet.
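A toy analogy for why removing someone's data means retraining, not deleting. Fit a line to four points: the fitted model keeps only two numbers, the slope and intercept, which are the "pattern", not the points themselves. You cannot subtract one point from those two numbers; you would have to refit from scratch, just as a large model would have to be retrained. (The numbers here are illustrative, not from the episode.)

```python
# Fit y = slope * x + intercept by least squares to four training points.
xs = [1, 2, 3, 4]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# The "model" is just these two learned numbers.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The training points are not recoverable from (slope, intercept) alone;
# "removing" one point from the model would require refitting on the rest.
print(slope, intercept)
```

The same holds at vastly greater scale for a neural network: its billions of weights encode patterns distilled from the training data, with no per-record entry to look up or erase.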

Sandra And we said at the very beginning that this is happening very, very quickly, there's a lot of visibility and a lot of hype. There's a lot of hype about where this technology is going as well. We've seen showcases of kind of multimodal generative AI, where you stitch together language models with image models, 2D models with 3D models. Generating...

Kai Video from 2D images, for example, like a fly-through of a house. Looks like a drone is flying, but it's just AI generating stuff.

Sandra Imagine what you can do with kind of Roomba footage. Or using an API to say why don't you generate a recipe for a healthy meal, you know, put the shopping list in my shopping cart and also tweet out pictures of it to all of my friends.

Kai That's really what Silicon Valley is excited about at the moment, to combine the idea of generative AI language models, and all those APIs of various services, to create automated services. Which at once promise to create new services, new business models, but also open up the possibility of, you know, creating content, noise, misinformation at scale, and completely swamp social media.

Sandra But before we get to the misinformation, disinformation, some nefarious uses of this technology, let's remember, and this hasn't been featured as much in the news, there are some amazing new uses of generative AI and even larger models. In, for instance, the medical field, what did make the news was AlphaFold, a project by DeepMind, a company acquired by Google, that managed to predict the structure of every protein known to science.

Kai So a generative AI basically generating protein structures, a very particular application of this kind of technology.

Sandra And the idea here is that this could lead to follow on research where we could use generative AI to make up new proteins to achieve certain purposes. And in this instance, you can think about curing cancer, attacking cancer cells in the body. The same way we imagine pictures on Midjourney or on DALL-E, you could now imagine drugs. So at that intersection of these large generative models and other fields of research might lie some really interesting things beyond, you know, cute new cat pictures on the internet.

Kai And that tentativeness, this 'might lie something', has some people really concerned, because much like we do not fully understand the black box that these large-scale models are, we also do not fully understand what we can actually do with them. And where one person uses the model to fold proteins or predict proteins, other people might use it to predict, you know, more dangerous things, or generate, you know, misinformation at scale, or invent new explosives or new ways of fooling people, tricking people into doing things they don't necessarily want to do. So some people have called for a moratorium.

Sandra Indeed, there's been a now very famous letter signed by Elon Musk and thousands of others, basically demanding a pause in the development of artificial intelligence, and in particular the generative models, like the large language models. They have called on everyone to immediately pause this for at least six months till we figure out what this means. And I think whilst we appreciate that this is developing much faster than we can figure out how to accommodate it in our society and our businesses, we're in agreement with the people who have asked what would need to be in place if we did stop this development. Because at this point, we would just stop the development...

Kai And then what?

Sandra And then what? There is no body in place, there is no structure, no institution, no organisation, no set of legislators that can figure out what to do with it. We'd have the same problem in six months from now.

Kai And we'll put a link to Ezra Klein's podcast in the shownotes, who makes that point quite eloquently. But I also want to point out that fortunately, again, these are language, not knowledge models, we're still dealing with a probabilistic model, not with a real intelligence. So all the attempts on the internet at trying to create a chain of automated language models that would do harm, like, you know, telling a model to figure out how to destroy humanity, have so far yielded very few results other than some automated tweets along the lines of 'Ooh, I'm going to come for you, I'm going to get you'. So fortunately, what makes these models quite interesting, this probabilistic nature, which gives us all this creative potential to explore, also means that the automation potential for doing, you know, the kind of world-destroying harm is limited; the problem will likely lie somewhere else.

Sandra And that is in misalignment in incentives. Thinking about who is building this technology and what they're building them for.

Kai What are the business models?

Sandra And currently, it's companies like Microsoft, like Google, like Meta, that are involved in developing those models, and most of their business models rely on advertising, on capturing people's attention, on mining their data, and so on.

Kai Whenever we think about where could this be used, and the answer is search engines or social media, or marketing, the answer is likely that the business model underpinning this is advertising. And then the step from using the model to predict ad copy, to using the model to actually automate how to influence people to buy things and create content that will engage people to give up more of their time, or indeed, money, is a very small step.

Sandra So it's not yet clear where this technology is going. But a few things we do know. One of them is that in Australia we had low business adoption of AI to begin with; the Productivity Commission found that only about a third of organisations actually use AI. Another is that ASX 200 board members are ill-prepared for this new wave of technology, as are most of us. We definitely will need to upskill, reskill, rethink how we do our work.

Kai And when we say 'we', we really mean, you know, the two of us as much as we mean anyone else because we are at a watershed moment where these technologies will touch upon many parts of our lives, and how we do business as an economy and as a society.

Sandra And much like we had to figure out how to live and work with the advent of graphical user interfaces, or the internet, or the iPhone, we will need to figure out how to live and work with generative AI.

Kai In all its various forms, yes.

Sandra And at The University of Sydney Business School we're trying to do our bit. We've got a new micro-credential on artificial intelligence fluency, trying to upskill, reskill people into how, not just generative AI, but AI in general, will enter their lives. So not about coding, but about figuring out what it means in your context and in your organisation. And we will put all those links in the shownotes.

Kai And we'll do more than that, we'll put a link in the shownotes to a new initiative that the Business School is launching, Sydney Executive Plus, which has at its core lifelong learning and upskilling, not just around artificial intelligence, generative AI, but all kinds of topics that will prepare businesses and individuals for the fourth industrial revolution. For this period of time that we find ourselves in at the moment that changes so many aspects of what we do.

Sandra Not just about technology, but about how we think about sustainability, or inclusive leadership, inclusive management.

Kai So all those skills that leaders and executives need to be prepared for the future.

Sandra Speaking of the future, this is also a watershed moment for us, because after more than six years doing The Future, This Week, it is now time to do more than just talk about the future. So we're gonna go ahead and...

Kai Create the future, together with lots of leaders in the field, to put our energy behind a new creative project for upskilling.

Sandra Upskilling, reskilling, lifelong learning.

Kai Finding new forms of engagement with businesses and executives in trying to figure out how we can productively prepare Australia and its workforce for that exciting future.

Sandra So for the last time, thanks for listening.

Kai Thanks for listening and join us in whatever else lies ahead.

Sandra And before we really sign off, an enormous thank you to our Sydney Business Insights team for making The Future, This Week possible for all these years, and a particular shout out to Megan Wedge, without whom we could have never made this a success.

Megan Woohoo!

Kai Woohoo, Megan!

Sandra And thanks all of you for listening.

Kai Not just today but all those episodes. Thanks for listening.

Outro You've been listening to The Future, This Week from The University of Sydney Business School. Sandra Peter is the Director of Sydney Business Insights and Kai Riemer is Professor of Information Technology and Organisation. Connect with us on LinkedIn, Twitter, and WeChat. And follow, like, or leave us a rating wherever you get your podcasts. If you have any weird or wonderful topics for us to discuss, send them to sbi@sydney.edu.au.
