
This week: our ChatGPT and generative AI special. What is it? How does it work? What to do with it? Where to next?

Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Futures Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

Join us for the AI fluency sprint, our exclusive online experience, and build an understanding of the opportunities, risks and limits of AI.

The stories we bring up

OpenAI’s ChatGPT

Our previous episodes on AI, including generative AI and creative work, the changing business of movies, the implications of using public data, the launch of GPT-3 and the launch of GPT-2

Our previous discussion with Kellie Nuttall from Deloitte on AI fluency in Australian organisations

What is ChatGPT and how it works

What is a VCR?

ChatGPT’s 16-bar rap lyrics in Snoop Dogg’s style explaining the environmental issues associated with using ammonia as shipping fuel

An astrophysicist puts ChatGPT to the test

CNET reviews the accuracy of their AI-written articles

Stack Overflow bans AI-generated answers

27% of professionals are already using generative AI for work

ChatGPT and lawyers

Microsoft invested $1 billion in OpenAI in 2019

Microsoft’s plans to integrate ChatGPT into Office products and Bing search

Satya Nadella thinks that the current generation of AI will spark an industry-wide “platform shift”

How ChatGPT in Microsoft Office could change the workplace

Google declares a “code red” over ChatGPT

ChatGPT crafting sophisticated “polymorphic” malware

ChatGPT passing the AWS Certified Solutions Architect exam

ChatGPT passing the US Medical Licensing Exam

A New York Times opinion piece on how ChatGPT hijacks democracy

Concerns that false information will be reintegrated back into the training data

OpenAI CEO’s interview on the hype surrounding GPT-4


Follow the show on Apple Podcasts, Spotify, Overcast, Google Podcasts, Pocket Casts or wherever you get your podcasts. You can follow Sydney Business Insights on Flipboard, LinkedIn, Twitter and WeChat to keep updated with our latest insights.

Send us your news ideas to sbi@sydney.edu.au.

Music by Cinephonix.

Dr Sandra Peter is the Director of Sydney Executive Plus and Associate Professor at the University of Sydney Business School. Her research and practice focuses on engaging with the future in productive ways, and the impact of emerging technologies on business and society.

Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.

Disclaimer We'd like to advise that the following program may contain real news, occasional philosophy, and ideas that may offend some listeners.

Sandra So we have to talk about ChatGPT. Where do we start?

Kai Well, we've been on a break. It's been a while - maybe ask ChatGPT?

Sandra Okay, hang on. Okay, let me - 'Write an introduction for two co-hosts of the podcast discussing ChatGPT, how it came about, what it is, and why it's important.' Okay, here we go: “Welcome to the latest episode of our podcast where we discuss the latest advancements in artificial intelligence. ChatGPT.

Kai That's right. ChatGPT is a language model that was developed by OpenAI. And it's been making waves in the AI community for its ability to generate human-like text. But before we dive into that, let's give our listeners a little background on how ChatGPT came about.

Sandra Sure. So ChatGPT is the successor of GPT-2, which was also developed by OpenAI. GPT-2 was a big step forward in language modelling, because it would generate text that was almost indistinguishable from text written by humans. But OpenAI wanted to take it even further. And that's where ChatGPT comes in.

Kai That's right. ChatGPT was designed to improve upon GPT-2 in several ways, including better handling of context and better language understanding. And it's important to note that ChatGPT is now open source, which means anyone can access it, and use it for their own projects.

Sandra Exactly. And that's what makes ChatGPT so important. It's not just a research project, it's a tool that can be used in a wide variety of applications from chatbots to text generation. And we're excited to dive into all of that today.

Kai Absolutely. So, without further ado, let's get started and explore the world of ChatGPT”.

Intro From the University of Sydney Business School, this is Sydney Business Insights, an initiative that explores the future of business. And you're listening to The Future, This Week, where Sandra Peter and Kai Riemer sit down every week to rethink trends in technology and business.

Sandra Okay, we could keep going. But that's what ChatGPT wrote.

Kai Well, first of all, it's impressive that it can do that.

Sandra Yeah, it's not half bad. And there's like some subtle mistakes. And broadly…

Kai There are some mistakes, in that it's actually improving on GPT-3, not GPT-2 - and in a way, it does improve on GPT-2 [technically, it doesn't build on it directly], but GPT-2 wasn't actually producing human-like text yet. It also says things like 'understanding of language', which is something we need to talk about. But not half bad, and people are actually excited about this.

Sandra So, we'll spend some time today talking about exactly what is it? What can it do and how does it work? What it struggles with? But also, what are some of the opportunities out there, business models, what the future might bring.

Kai And where this whole field is going.

Sandra But first off, let's bed down, what is it? There seems to be a lot of confusion around exactly what it is. And probably in the simplest of terms, it's just an open text generation AI. Currently the best on the market, probably not for long -

Kai At least, the best we can access because there's been a few that others are working on that are not open access.

Sandra And as our friend ChatGPT mentioned, the technology itself isn't strictly speaking 'new' - there have been a lot of iterations that have gotten us here. The latest one open to the public was GPT-3. But this has sparked a lot of excitement because it is more conversational than previous versions.

Kai It's more conversational, it feels more natural, it's more usable, it's more accessible. And it's really fun to play with.

Sandra And most importantly, it's also available to the general public. It's a free, easy-to-use web interface. There were about 5 million people who signed up to test it in, I think, the first week after it was released - people like us who tried it and played with it. And it is now open to everyone, so you can sign up and give it a go.

Kai So, what can we do? What are people doing with it?

Sandra Well, Twitter's full of what people are doing with it, right? It can do anything from, you know, writing funny texts and blog posts, to writing jokes - whether they're funny or not - to computer code, to assignments and essays for university exams.

Kai You can also put in text and tell it to summarise it in three sentences, which is really interesting, or you put a text in and tell it to rewrite that text in, you know, the style of a famous writer or in a different genre.

Sandra And you can do that in very natural language. So, you could for instance ask ChatGPT to write a Marxist treatise on the institution of marriage for a wedding speech, or - one Twitter user quoted in the New York Times prompted it to write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR -

Kai And we put the link in the show notes. And a footnote for our younger listeners: VCR stands for video cassette recorder. Google it. Someone else asked it to write 16-bar rap lyrics in Snoop Dogg's style explaining the environmental issues associated with using ammonia as shipping fuel.

Sandra Or you could ask it to act as a cybersecurity specialist and tell you how it would implement an information security strategy for a major telecommunications company.

Kai Or do your philosophy assignment, write a four-paragraph essay about the 'concept of being' in the works of Heidegger, Kant, and Hegel. Or maybe just write a joke about doing weekly podcasts. Ha-ha, funny.

Sandra But to understand what to think about the results that it gives you, it's really worth understanding how it works. And there is a lot of innovation here. The text that it gives you back is coherent.

Kai It sounds confident.

Sandra Within a session it can also adapt, you can ask it to regenerate it or to take into account additional details or to give you more examples or in a different style.

Kai And it remembers context, which enables you to actually have conversations that go over multiple interactions where it remembers what the topic was about, and it builds on a topic, which is a real innovation over previous generations of text generators.

Sandra But in order to be able to judge what you can do with ChatGPT, and how to look at these results, it's really, really important to understand how it works, because that will enable you to understand both how amazing some of these opportunities are, but also some of the things that it really struggles with underneath the surface.

Kai First of all, these things are called 'large language models' for a reason, they're not called 'large knowledge models'. This is about language, how language works, making predictions about what words might come next, what would be the appropriate language that responds to a prompt that it is given.

Sandra So, these models have been trained on text, and how text belongs together in a sequence. All of this, as you've said, is based on billions of examples of texts that they pulled from all over the internet. And that might be Wikipedia, or it might be blog or news articles, or chatroom forums, or GitHub -

Kai And movie scripts and computer code. The answer to the question 'what text has it been trained on?' is: all of it, whatever we can…

Sandra All of it up to 2021. But that's a different conversation, we can always update that.

Kai But the important thing is, first of all, underpinning ChatGPT is GPT-3, which includes the body of text from the internet, as it was in 2021. And ChatGPT will politely tell you that when you ask it about current events, as well.

Sandra So, in really, really simplistic terms, what it does is it's trying to make a guess, based on probabilities of what text belongs together with what other texts. And that means that for certain types of problems, it might come up with exactly the right answer. But it also makes it prone to making very simple mistakes.

Kai Yeah, so the way these things are trained: you take a sentence, the algorithm hides a word and tries to guess what that word is, then reveals it to itself and learns what the correct word was. Do this for all the sentences you find in the body of text that is the internet, run it billions of times, and you train a gigantic network of billions of nodes that embodies the structure of human text. That is, in very simple terms, how you train a model like GPT-3, which was released before ChatGPT and was already capable of doing a whole bunch of things - and then they added ChatGPT on top of it.
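For readers who want to see the idea in miniature, here is a toy sketch (our own illustration, nothing like GPT's actual architecture or scale) of learning which word tends to follow which, and then generating text from those probabilities:

```python
import random
from collections import defaultdict, Counter

# A tiny "training corpus"; the real model ingests billions of sentences.
corpus = (
    "the future this week is a podcast about the future of business "
    "the future of business is shaped by technology"
).split()

# "Training": count which word tends to follow which word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def generate(start, length=8):
    """Generate text by sampling each next word in proportion to how often it followed the current one."""
    words = [start]
    for _ in range(length):
        counts = next_word_counts.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

The output is fluent-looking but knows nothing; it only reflects the statistics of the text it counted, which is the point Kai is making.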

Sandra And ChatGPT simply enables you to ask it really simple prompts in a conversational style.

Kai So, we have GPT-3, which basically trained itself on the body of text that is the internet. Then the developers did something over the top of it, which they call 'reinforcement learning from human feedback'.

Sandra And that means...

Kai That means that they used actual humans to work with the output and improve it. For example, they would give humans a prompt, like the one we just read out, and ask 'what would be a good answer?', and they would use that to train the network. In a second step, they would have the network produce multiple answers and get humans to rank them - which feels the most natural, which feels like the best answer to you - to make the output much more acceptable and accessible to humans, basically.
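As an illustration of that ranking step only (our own toy example, not OpenAI's code): a human ranking of candidate answers can be turned into pairwise preferences that a reward model is then trained on.

```python
from itertools import combinations

def ranking_to_preference_pairs(prompt, answers_best_first):
    """Turn one human ranking of candidate answers into (preferred, rejected) training pairs."""
    return [
        {"prompt": prompt, "preferred": better, "rejected": worse}
        for better, worse in combinations(answers_best_first, 2)
    ]

prompt = "Explain what a VCR is in one sentence."
ranking = [
    "A VCR is a device that records and plays back video cassette tapes.",  # ranked best by the labeller
    "A VCR plays tapes.",                                                    # middle
    "A VCR is a kind of streaming service.",                                 # ranked worst (wrong)
]

for pair in ranking_to_preference_pairs(prompt, ranking):
    print(pair["preferred"], " is preferred over ", pair["rejected"])
```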

Sandra But this is not about ensuring that all the answers are correct, no, right - even in the introduction that we read out, it said it's built on GPT-2 when it's actually built on GPT-3. It's about reasonable-sounding language, where they try not to make it offensive and try to correct for obvious biases and stereotypes, but it's still predicting what the next words would be.

Kai Yeah. So, you can weed out really offensive language, you can weed out the most egregious answers that way, and you can make it sound much more human and acceptable to humans. But it is still just working on predicting what the next bit of text might be - averaging out the internet, basically. It works on probabilities, which is why it sometimes gives wrong answers even to simple maths questions. A number of people have put ChatGPT through its paces. There was one astrophysicist (and we'll put that in the show notes) who asked it quite a few questions where he knows there are common misconceptions in the general public about how things work. And lo and behold, the network would give you the common misconceptions, precisely because they're common: they're statistically much more likely to complete the sequence of words than the correct answer, which is not common. So, it's not Wikipedia. It's not knowledge. It doesn't know things. It doesn't understand things like humans understand things. It just predicts what the next text would be. That works in many instances and can create lots of correct answers most of the time.

Sandra But it will always give you plausible responses. It requires a...

Kai A critical thinker?

Sandra Yeah, pretty much it requires someone who has some understanding of the domain that you're asking it to write in. So, it can be a creative aid. But if you're trying to get correct answers, or arguments, it does require critical thinking. And I think you played with it to get it to give you arguments for Germany leaving the European Union.

Kai It's quite interesting. So first up, I asked it to tell me why Germany should leave the European Union, and the safeguard kicks in - it says, 'I can't say that, it's for the German people to decide'. But when you ask it for arguments for and against Germany leaving the European Union, it gives you a list of arguments that are pretty much the Brexit arguments.

Sandra But then you pushed it a bit further, and you got it to basically give you arguments for and against everyone leaving everything.

Kai Yes, I asked it about Victoria and New South Wales leaving the Australian Commonwealth, and lo and behold, similar kinds of arguments - a fairly generic list. And I pushed it further and asked it why the city of Wollongong should leave the Commonwealth. And rather than saying, you know, 'this makes no sense, Kai', it just gives you reasons for why Wollongong would leave the Commonwealth, or why Europe would leave the United Nations. So it just gives you nonsense answers in the same style.

Sandra Or it sounds a bit like a student doubling down on an essay they didn't think much about.

Kai Yep, nonsense or B.S. So many people have pointed out that because it only works on language, not knowledge, it has reliability issues. Sometimes it does what we call 'hallucinate' - giving answers that sound confident and plausible but have no bearing in reality; it just plays with the text.

Sandra But we do want to get to some interesting industry applications and some good use cases and also to challenges it could pose to certain sectors like education. But before we do that, it's important to note a few more things that it struggles with, and it will continue to struggle with for the foreseeable future.

Kai Because it's in the nature of how these models work. It's not a matter of training them more; there are inherent reliability challenges - robustness is a problem.

Sandra Yeah, if you give it a prompt, it will give you an answer. If you very slightly vary a couple of words, it might give you a completely different answer. But you can also ask it to regenerate the same answer and it will give you a slightly different answer.

Kai But sometimes it changes the meaning because it might, you know, go to a different probability setting, it just ends up in a different context.

Sandra So, I played with it trying to get it to give me the best movies of all time and the best books and poems of all time. And it gave me consistently a list of, well, English language movies or poems, which points to a different limitation altogether. But if I got it to regenerate, it gave me a different list of the top movies and top poems of all time, English language -

Kai Because there are different opinions out there. There is different language associated with the same prompt. It doesn't trade in knowledge, it trades in language, which means it's prone to stereotyping because it gives you the most common answers. And the most common answers are often biased towards what the majority thinks, hence, a US perspective, an English language perspective. And you have to actually know how to ask it to be more specific. You can ask it for non-English movies, and then it will comply, and it will give you a list of that. But you have to be the expert and critical about probing those answers.

Sandra There is also the problem of the lack of referencing. So if you do want to check whether what it said is correct or not, you still have to do the work of checking whether it is right or wrong.

Kai And it doesn't give you a source. Because the way these models work is all this text is ingested into a big black box language model, it can give you answers, but for the most part, you don't know where those answers are coming from. It's not like in Wikipedia, where you have elaborate references, and the community is very strict about referencing articles, the lack of referencing is another challenge we have with ChatGPT.

Sandra And I know we might be repeating ourselves a little bit here, but we do have to stress the fact that this is about language, not about knowledge. And probably the easiest way to see that is to realise how wrong it sometimes gets simple maths problems - and, you know, Twitter and the internet are full of this. It was asked simple things like 'A line parallel to y = 4x + 6 passes through (5, 10). What's the y-coordinate of the point where this line crosses the y-axis?', which a high school student would probably work out in a minute. It gave a very confident, worked-out answer - we'll put the link in the show notes - and it was completely wrong.

Kai Completely.
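For the record, the correct working for that prompt (our own, not ChatGPT's output) is short:

```latex
% A line parallel to y = 4x + 6 has the same slope, m = 4.
% Passing through the point (5, 10):
\begin{align*}
  y - 10 &= 4(x - 5) \\
  y      &= 4x - 10
\end{align*}
% Setting x = 0: the line crosses the y-axis at y = -10.
```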

Sandra Yeah. And in these sorts of instances it's quite easy to see where it makes the mistake. Similarly, there was a website that used AI to generate certain articles, and one article there implied that a savings account initially containing $10,000, with a 3% interest rate compounding annually, would accrue $10,300 in interest after about a year - that would be fantastic. And obviously, the real interest is about $300.

Kai It just completed the language wrong, which is what it does, it completes language.
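And the correct arithmetic for that example (again, our own working):

```latex
% $10,000 at 3% interest, compounded annually, after one year:
\begin{align*}
  \text{balance after one year} &= 10\,000 \times (1 + 0.03) = 10\,300 \\
  \text{interest earned}        &= 10\,300 - 10\,000 = 300
\end{align*}
% The generated article confused the closing balance ($10,300) with the interest earned ($300).
```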

Sandra But these things are very easy to spot because they're numerical examples where even lay people are fairly expert. But when we ask it to give us more complex, nuanced answers to, you know, essay questions and assignments, and political views, and so on, things become way more complicated. And it's very easy to miss the fact that it might be incorrect -

Kai Or very skewed or biased towards certain majority understandings -

Sandra Because it sounds so real and so good. So you have to always be quite vigilant about what it gives you, or the content that you come across from some of these generated texts, because it might not be what you're looking for.

Kai And one recent example where that could be seen is Stack Overflow, which is essentially a website for coders that collects code fragments and code examples that help programmers. People started submitting lots of ChatGPT-generated answers, because in many instances they found it gave them good answers and helped them correct their code mistakes. But because it's unreliable, the website ended up with lots of wrong code being submitted. And the way it works is that the community has to downvote everything that isn't reliable. It created so much work for the community that the website ended up banning any ChatGPT-generated responses, because the error rate is just too high.

Sandra But speaking of work, ChatGPT can actually be quite useful. And it's worth having a look at how people are using ChatGPT currently before we talk a little bit about what the future might hold, and that's maybe a more complicated conversation. But there are quite a few professionals already attempting to make it somehow part of the work that they do.

Kai A survey by Fishbowl, a platform for workplace conversations, found that about 27% of professionals - so roughly a quarter - have already actively used ChatGPT in their work, and that included responses from people at companies such as Amazon, Google, IBM and JP Morgan. It was done in early January.

Sandra So if you're thinking about ways to have a tool like ChatGPT assist in your work - simple things like maybe rewriting a paragraph that you've written, like I have, and removing any grammatical errors, or going over an email before sending it, or putting a paragraph in active voice rather than passive voice -

Kai Or you have to digest a long report very quickly. And you can copy longer text into ChatGPT and tell it to summarise it into a shorter paragraph or a few sentences. It does that surprisingly reliably.
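As an illustration, a script doing that kind of summarisation through OpenAI's API might look roughly like the sketch below - the model name, input file name and client interface are assumptions here, and the exact interface varies by library version:

```python
# A minimal sketch of scripted summarisation via OpenAI's API (illustrative only).
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable to be set

long_report = open("quarterly_report.txt").read()  # hypothetical input file

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[{
        "role": "user",
        "content": "Summarise the following report in three sentences:\n\n" + long_report,
    }],
)
print(response.choices[0].message.content)
```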

Sandra You can also use it for content creation and we're not thinking student essays here. But -

Kai No, we're not thinking that...

Sandra No, we're thinking places where you have a very clear problem to solve, and this might be a cost saver.

Kai So writing certain types of texts - say, there was an article about the real estate profession. Writing real estate ads follows a certain formula, and ChatGPT has ingested millions of those ads. So it's quite capable of coming up with the marketing copy you need for these ads, just from a few keywords.

Sandra Or there might be uses for it in the legal profession, or in any place where you have to write certain types of emails or letters that are quite formulaic, where having assistance from something like ChatGPT could actually free people up to do the higher-order tasks.

Kai Yeah, like, for example, in the HR function, in the hiring process - many companies receive hundreds of applications. So ChatGPT could be useful in summarising job applications, which come in all kinds of flavours, and teasing out the most important aspects. Which might ironically lead to a situation where someone first writes the application with ChatGPT, and then the company uses ChatGPT to summarise it back down to the keywords that were used to create it.

Sandra A ChatGPT loop.

Kai Exactly.

Sandra Maybe one last thing to bring up before we try to have a look at the future of ChatGPT is that, notwithstanding the Stack Overflow debacle, where people were using it to generate code, the New York Times article does report that ChatGPT so far appears to be good at helping programmers spot and fix errors in what they have already written - not generating code, but helping them figure out if there are issues with the code.

Kai I think there's a difference between, you know, me just using ChatGPT to generate some code and, you know, submitting it to a website like that.

Sandra I hope you wouldn't.

Kai Or an experienced coder who actually gets some help from ChatGPT - they can run the code, and they will see whether the code is good or has errors.

Sandra Before we wrap up our ChatGPT conversation, it's worth having a bit of a look at the future of ChatGPT because the number of examples that we've given, and uses and so on, can make one forget that this has only been around since the end of November last year, basically. So it's quite new. So there's still a lot of work to be done. And this is still just an emerging field.

Kai With a lot of commercial potential, though - that, at least, is what Microsoft seems to think. Back in 2019, they already invested $1 billion into OpenAI, and then another $2 billion. And because of that, they're in the driver's seat when it comes to owning or using ChatGPT. So reportedly, they're putting yet another $10 billion in, mainly to finance the running of the thing, which also happens on their own cloud platform.

Sandra And let's just remind people that it is very, very expensive to train something like ChatGPT.

Kai And deal with the millions of queries that everyone around the world is putting in every day.

Sandra So Microsoft covers nearly all of the costs associated with training the system, and it's also covering the costs associated with these millions of queries. And currently, it looks like, in exchange for that, they are getting access to the services at no additional cost.

Kai Microsoft's CEO, Satya Nadella, is bullish about it. He predicts that the current generation of generative AI will spark another industry-wide platform shift, you know, on par with what we've seen with the shift to mobile devices or cloud computing over the past 15 years. So they're really thinking about ChatGPT as a platform that will fuel a whole range of other services via APIs. So, the integration of those services - chiefly, rumour has it, into Bing (anyone remember Bing?), the Microsoft search engine, which apparently has already integrated DALL-E; I haven't used it.

Sandra But it could also be just as straightforward as Microsoft integrating it into Office products. So whenever you're trying to do that new presentation or trying to write that email -

Kai 'Dear ChatGPT, this morning, I have to write a report that includes the following aspects' - Bing!

Sandra Or in Excel, you could have natural language formulas, where you might just ask it to give you the average of those two columns and list them in a specific format in a third column, and it would just write the formula for you. Or when you're opening Microsoft Word, it might be something that just comes pre-integrated - as you've said, the way you would have a tool like an image generation AI integrated into something like Adobe Photoshop.

Kai This also has Google on high alert. The prospect of Microsoft integrating ChatGPT into Bing might force Google to also roll out its text generation AI, which has financial implications, because, according to some reports, running queries on these large language models is about five to seven times more expensive than running a normal search query. So if millions of users start doing that, it might have an impact on the financials of the search giant.

Sandra But again, this technology is only in its infancy, so likely the cost of running these queries and training these models will go down in time. But for now, let's not forget that these models are still trained on data up to a particular point in time. So if you wanted to use it as a search engine, you would basically have to retrain these models every day, every hour, every minute -

Kai Which is prohibitive, so, for now they will be an add on to a conventional search engine, which is often used to search for current events and news and things that a large language model cannot deliver.

Sandra This is the point where unfortunately, we do have to bring up some nefarious uses of this technology. And this is besides students trying to cheat on exams or essay papers using the tech. But there are already some reports of either nefarious uses or potential for this technology to be misused.

Kai While we've said that ChatGPT can write computer code, not all computer code is good. Gizmodo had an article about it helping people write malware.

Sandra Computer viruses.

Kai That's right.

Sandra Similarly, you can generate text for not-so-good purposes, spamming anything from online forums to review sites to dating sites. And bot content is not a new thing - many of these platforms have been getting better at removing what is called 'coordinated inauthentic behaviour'. But the fact that the price of creating content is trending towards zero is definitely not helping.

Kai You know how we often get these phishing or scamming emails, which are very easy to spot because they're written in broken English. Well, ChatGPT can help with that, too.

Sandra I did mention before you could use it to correct grammar in emails.

Kai Exactly!

Sandra It's also already been used to fool people on job applications or on getting certifications. People all over the internet have been trying to see what they could get it…

Kai To do, which exams to pass, which licenses to acquire and all that.

Sandra Apparently, the AWS Certified Solutions Architect exam is an option. Google Ads certification you can also get with ChatGPT, and someone got an AWS Certified Cloud Practitioner certification with one of these things as well.

Kai Apparently, someone ran all the questions of the United States medical licensing exam through ChatGPT.

Sandra And it turned out that it was comfortably within the passing range -

Kai Showing moderate accuracy, but nonetheless passing the exam.

Sandra And same thing with various course quizzes on Udemy and Coursera - these are online learning platforms that award certificates - where ChatGPT turned out to be quite good at passing those quizzes.

Kai And here in Australia, the semester hasn't quite started yet. So, the country is yet to have its own experience with ChatGPT in the classroom.

Sandra There was also an interesting article in The New York Times talking about what a nefarious pairing of humans and AI would look like. Because, as we've said, things like content creation or spamming people with enormous amounts of content is something that we've experienced before - this would do it at a different scale and much, much cheaper. But they were talking about pairing different AI systems, many of which can map networks, with the ChatGPT types of natural language generation models. Paired with a human, this would give people the ability to target, for instance, politicians within a network by exploiting vulnerabilities: identifying particular Members of Parliament with leverage in particular policy areas, and then spamming those individuals with crafted messages and letters to influence policy. So pairing these sorts of systems to exploit vulnerabilities, whether in our political systems or in our economic systems, but doing it at incredible speed and scope - this could give a whole new level of complexity to these types of attacks.

Kai But you don't even have to have nefarious intent to get a negative outcome. Just the fact that many, many people are using these systems right now to create more and more content - much of it riddled with mistakes, because, remember, this is just text completion - means that the body of text on the internet is about to swell by an order of magnitude with ChatGPT-generated text. One or two generations down the track, that text will be re-ingested by the next large language model iteration, not exactly improving the accuracy of the body of text that goes into these models, right?

Sandra So as time goes by, we will likely have to spend more and more time reading the answers that ChatGPT gives us and trying to figure out whether they're correct or not, which will put an increasingly high burden on humans - on people - to make sense of what it tells us. And in the process, maybe taking us away from reading content that's written by other humans.

Kai And GPT-4, the next iteration of OpenAI's model is about to be released. We don't know exactly when, but it's rumoured, in the coming few months. That is likely not yet polluted by its own exhaust, created via ChatGPT. But there are real concerns for the quality of text on the internet, as the systems can now automatically create human-like content and pollute and spam the online airwaves.

Sandra So not only the negative but also the positive potential - the business implications, educational implications, societal implications of ChatGPT - are just emerging, and they are way too much to fit into this one podcast. But the thought to leave you with as we wrap this up is that fluency - understanding what this thing is and how it does the things it does - is probably the first step, and then maybe updating some of our practices, institutions, legal frameworks and regulatory frameworks that emerged in a period when some of these things were not possible.

Kai Digital literacy must now include ChatGPT. And there are so many misconceptions out there that a better understanding in the general population, but also among business leaders, is really, really important. And as the New York Times summarised…

Sandra We're not ready.

Kai We're not ready yet. And yet…

Sandra That's all we have time for today. Thanks for listening.

Kai Thanks for listening.

Outro You've been listening to The Future, This Week from the University of Sydney Business School. Sandra Peter is the Director of Sydney Business Insights, and Kai Riemer is Professor of Information Technology and Organisation. Connect with us on LinkedIn, Twitter, and WeChat, and follow, like, or leave us a rating wherever you get your podcasts. If you have any weird or wonderful topics for us to discuss, send them to sbi@sydney.edu.au.

Sandra And it's also covering the costs associated with these millions of, um, queries.

Kai Can you say queries again?

Sandra Queries, queries, millions of queries, millions of queries. 'Can you say queries?' The answer is no...
