In this episode, host Ben Walker is joined by Dr Mike Katell, Ethics Fellow at the Alan Turing Institute, to explore the evolving role of AI in marketing and its ethical implications. They discuss the rise of generative AI tools like ChatGPT, DALL-E, and Sora, considering both their opportunities and their risks. The conversation delves into the challenges of AI hallucinations, bias, and the potential impact on creativity, critical thinking, and job progression in the industry. Marketers are encouraged to take a responsible approach, using AI as a tool to enhance, rather than replace, their skills.
00:03
Intro:
Welcome to the CIM Marketing Podcast. The contents and views expressed by individuals in the CIM Marketing Podcast are their own and do not necessarily represent the views of the companies they work for. We hope you enjoy the episode.
00:17
Host:
Hello everybody, and welcome to the CIM Marketing Podcast. And you know, a perennial favourite topic, one of the topics that gets the most hits on this show, is the issue of AI and AI in marketing, how it's going to affect us and our careers. And we have got a super senior expert with us today in the shape of Dr Mike Katell to discuss this. And Mike is an Ethics Fellow at the Alan Turing Institute, and that might give you an idea as to the angle we're going to take on this show. Mike, how are you?
00:48
Mike:
I'm good. Ben, great to be here. Thanks for having me on.
00:52
Host:
You know, it's great to have you on the show, firstly because it's great to have someone with your seniority and expertise on the show, but secondly because this is such a hot topic for marketers. It's one of the questions we get asked most of all. It's something that exercises people's minds. They're worried about how it might change their work, their job, their livelihoods, and what they should and should not be doing, ethics-wise, in their roles as professionals in the industry. But we'll get to that. We'll start though, if we may, with a little bit of a buzz recap on what's coming forward into the sector in terms of tools and AI programs that we can use as marketers. What are the key developments that you've seen in the last year to 18 months?
01:36
Mike:
Yeah, well, obviously the big one, the big splash, has been the release of ChatGPT back in November ’22, which really brought this idea of generative AI, AI that can create content, into the public consciousness. For the first time, AI, which people had been hearing about and talking about but which seemed like the stuff of laboratories and, you know, the future, the near future, the far future, etc., was suddenly, immediately in people's hands. Anyone could go onto OpenAI’s website and interact with ChatGPT. But ChatGPT was by no means the first generative AI program that was available for people to use. And marketers are probably familiar with some of the ones that were already on the market, such as the image-generating systems: one made by the same company as ChatGPT, called DALL-E, which is an art program, basically. But there were also Midjourney and Stable Diffusion, and a few others out there being used primarily to generate imagery from user prompts, in the same way that ChatGPT generates text from user prompts, and these were already getting a fair bit of attention and a certain amount of press. Then there's Sora for video production, also made by OpenAI. There's been a demo making the rounds, a viral demonstration of how effective Sora is at creating video entirely from nothing, from a few text prompts. And then there's music generation software. So there's actually a lot going on right now.
03:06
Host:
I've heard more than I perhaps would choose to about Sora, because I've got a couple of friends I've mentioned before on the show who work in the video editing, video compositing sector, and it's scaring the living daylights out of them. You know, it really is scaring them in terms of what it can do, the power it can deploy to create these videos. Yes, people will say it's only short clips at the moment, and therefore can't replace, you know, classic cinematography yet. But the feeling is it's the thin end of the wedge, and it prompts the question that one is always led to in these discussions, which is: is this stuff, Sora and its stablemates, an opportunity for marketers and content creation or a threat to our livelihoods?
03:57
Mike:
Well, the answer to questions like this is always: it depends how you look at it, or a little of both, perhaps. You know, Sora is very impressive, at least from the demos that we've seen, but right now it's not yet really ready for most people to use. And there's something deceptive about a lot of these demonstrations we see. For example, in music generation, there was a thing last year, I think, about generating a song that sounded just like it was written by Drake, and another one by the Weeknd, etc. As with the Sora demo, the implication is that this stuff is incredibly easy to use. You just speak a few words or type a few sentences on a keyboard, and presto, you get a complete video or a complete song. But it's far more complicated than that. The tools require a lot more fiddling and a lot more expertise. So this leads to the question of what this means for people who are actually using the tools. Well, number one, yes, they're coming, and we're all going to be expected to use them. All of us in all the creative fields are arguably not going to be able to avoid them. I'm going to have to start using them more in my academic writing, for example; it's just going to become an expectation. On the one hand, that's going to up standards, because if everyone is expected to use them, then everyone is also going to be expected to operate at whatever quality enhancements these tools provide. On the other hand, there are real pitfalls, because, for those of you old enough to remember when Photoshop first came on the scene, you can make a lot of real dreck with these tools, right? The easy part is making something that's really mediocre. The hard part is making something that's actually remarkable and extraordinary. And as with earlier digital tools, a whole industry cropped up of people who became experts at how to use these tools and how to generate content from them. I don't really want to talk down the potential dangers. I'm currently doing a research project in which we're looking at the potential impacts of these generative AI tools on both the economic lives and the creative lives of people in the creative industries, and there's a lot of real concern about the transformation: about whether it's going to be just another digital transformation, moving from fewer tools to more tools, from less sophisticated use to more sophisticated use, or whether it's really going to upset things and diminish the quality of people's experience of their work. And I think marketers, similarly, should have some concerns. If they value the creativity that they currently bring to the work, and some of that creativity becomes rote, even banal, because it's taken over by some of these tools, that could be a real downer for people working in this realm. But we don't know. I think it's an unsettled question as to what the effects are actually going to be. Change? Unquestionably. What that change is going to be, we're still sort of feeling our way through.
06:55
Host:
It's interesting. There's a few things to unpack there. I'll start with the dreck comment. I've heard it also described as slop. So choose your own word, dreck or slop. One thing we do know about AI systems, at least as they are formed currently, is that they are very good at producing cheap, easily replicable content that is very bland and very derivative. That's not to say that they cannot produce fantastic, innovative stuff, but if we're quite hands off, just like the early Photoshop pioneers, we end up creating a whole bunch of stuff which looks very similar and doesn't give us a competitive edge. Let's deal with that first: avoiding the dreck, avoiding the slop. What do marketers need to be doing when they're using these tools to avoid that whole new world of blandness, as I once encapsulated it on this show?
07:46
Mike:
Yeah, I mean, I'd say there's kind of two things they've got to do. Well, they're both hard, really. The hardest one, I think, is probably going to be everyone having to up their game as creatives, not just their abilities but their aesthetic sensibility: to be able to look at all the stuff that's coming out, which may appear to be very high quality, or higher quality than some of the more rote stuff that was done before, and to recognise what's truly remarkable. And that's something creative people are good at already and will continue to be good at. They'll be able to look around and see what's happening in contemporary culture, ask what their peers are doing, and ask how they can distinguish their work from that work. So that'll be the main task, and with these new tools that's going to require mastering additional skills on top of the entry-level ones, to turn what the systems want to give you into something that's actually truly remarkable and interesting and gripping. But another challenge, I think, is a little bit more systemic, which will be pushing back on industry prerogatives to work fast and work cheap. We're always working in that realm; it doesn't matter what industry you're in, there's always that sort of pressure: work faster, work cheaper.
09:04
Host:
We call it the golden triangle in the creative sector. You can have something that's fast, cheap or high quality, and you can have two of those things, but you can never get all three. You can never get something that's quick, cheap and high quality; you've got to choose two. And your implication is that, actually, the advent of AI won't change that fundamental golden triangle.
09:28
Mike:
No, and a big risk that people need to consider, and this is something that's actually really interesting and being talked about in a lot of creative industries, is that the power of generative AI at present is probably not fit for purpose for building really sophisticated, incredibly high-quality material. The quality will continue to increase, most likely. But at present, the quality that these systems are able to crank out is fairly medium to low, adequate quality, right? But this is the stuff that people who are entering the industry are often tasked with doing. They're often tasked with things like: make an icon for this, or make a short video for that, or make just some background music for this bit here. You know, you've just joined. You're an intern. You're in your first six months, your first year, even your first two or three years, working for a big firm. And how you climb the ladder is you start by doing this low-level work. And then, little by little, you prove yourself, and you get handed more and more sophisticated tasks and more and more complex work to do. There's a risk here that in adopting more and more digital tools like generative AI, we're starting to chop some of the bottom rungs out of that career ladder. Now you have to really worry: where's the talent pipeline going to come from if they have no point of entry? As an aside, I was reading something about somebody in the big commercial real estate business who was choosing not to use an AI tool that could do property valuations, because he valued having employees who had to learn how to do that stuff manually, so that as they moved up the ladder and finally got into positions of authority and decision-making, they would have the skills and understanding of how this low-level stuff actually worked.
11:26
Host:
That’s a brilliant example, isn't it? Because actually, you can have a computer make a reasonable stab at how much a house is worth. We see it every day on Zoopla and so on, other brands are available, where it can give you an instant valuation of what your house is worth. But if you're an estate agent, you're going out and you're looking inside a house, and you're making your own valuation. You're not just valuing that house at the market level. What you're doing is looking for detail in that house and thinking: how is that going to affect the value of this property compared to its peers on the same street? And that process is teaching you something very important, very quickly, about how to handle the details and how to quantify these changes in value, which you wouldn't get if you just devolved all of the lower rungs of the industry to an AI. What you're saying is you're inviting a whole bunch of estate agents, or realtors as they're called in North America, into the industry who don't actually know the basics. So when they come to making more complex calculations at more senior levels, they haven't had the grounding, and they haven't actually learned those skills that they need to do the stuff that AIs cannot do.
12:32
Mike:
I mean, thinking about the marketing realm, the marketing context, think about copywriting, for example. There are so many levels of copywriting, from the incredibly basic, just a few sentences or even a paragraph that's really just meant to be skimmed through as you move on to the more substantive material, versus doing high-level copywriting, where it's got to be incredibly impactful. You've only got a 30-second spot on something very high value, or something like that, and you really want to bring all of your best energy and your best attention to it. Well, having the skills of doing that basic copywriting first of all puts you in a much better position to evaluate what, say, ChatGPT or Claude or one of these text generators gives you: to recognise quality, to recognise how to edit and improve what you might get from a digital tool. You get that kind of understanding and experience from having done it yourself, from having performed those tasks on your own. And we're seeing now in universities, for example, that the typical university essay is now at least a third written by ChatGPT, or something like that. And so you have to wonder what kind of skills students are going to come out of university with by relying really heavily on these tools. A new set of skills, perhaps, a new kind of skills. We may not understand what those skills are yet, but early indications are a little worrying: they may not be a set of skills that's completely fit for doing high-level, sophisticated decision-making work, work that would really be enhanced by having a deep understanding of what that low-level work really looks and feels like.
14:07
Host:
It’s pop psychology. Malcolm Gladwell’s book Blink is all about, you know, a collection of experiences giving you what presents as instinct for when something's good or bad, right or wrong. And presumably, if we're saying that the junior copywriter, the junior estate agent, is learning these sort of discrete experiences in series, they then build up a sort of collective intelligence of how their industry works. It helps them make better decisions, yes, but it also presumably helps them have a more instantaneous and more ingrained view of whether something is right or wrong. And you know, we talked earlier about slop and dreck, as you put it. We haven't yet talked about what could be termed bilge: stuff that is nonsense. It's earned this name, hallucination, which is the newfangled word for an error. I was doing a little bit of research before the pod, and there's an interesting story in the papers last week, which is that they've had to pull ChatGPT from answering questions on elections. And the reason they've had to pull it is that it was making hallucinations so wild that it had gone into predicting the future. The story said that ChatGPT had been telling users that Joe Biden won the 2024 presidential election and had given specific figures for European Parliament seats. It had also said that Labour won the 2024 British general election by, and it was very precise on this, 467 seats. And of course, we've seen umpteen examples of it churning out nonsense. So there's two things. The more exciting one is that by doing some of this low-level stuff, you become more talented, you become more skilled, you become better at making decisions. But on the less exciting but just as important point, you are also going to be better at critical thinking and at spotting inaccuracies if you've done the junior stuff that some would ask to be devolved to AI.
16:15
Mike:
You've hit on something that I think is crucially important there. In order to evaluate what these tools give us, we need a set of skills that these tools are arguably diminishing, taking away from our experience, because they are making things a little too easy, right? So, yeah, if you are using ChatGPT or the like instead of a traditional search tool, for example, the soothing voice of the chatbot telling you that this is how it is and how it will be is quite persuasive, perhaps. And think of the critical skills we've accumulated over the many years now that we've been working with, say, Google search or similar search engines: to parse, to try to tell the difference between the good search result and the bad search result, or the reliable YouTube video versus the unreliable YouTube video, or the real news source versus the imaginary news source. Those are skills that took a long time for us to develop to the point we're at now. We're already not doing a great job, I'd say, as a wider society, in parsing the real from the fake, the reliable from the unreliable. But at least the skill building is happening. When you have one voice giving you one answer to a complicated question, it's even harder to bring your critical faculties to bear, and so you need to be prepared, in a way that most of us really aren't, to do the evaluative work.
17:43
Ad Break:
Looking for more ways to learn and upskill? CIM members can register now for our upcoming member exclusive webinars. More details available at CIM.co.uk/content.
17:53
Mike:
The thing about hallucinations that's so interesting is that it's a potentially unsolvable problem. A lot of us think that the hallucination issue is just that they haven't fixed these systems yet, that the next version will be better. And it's true that really purpose-built AI systems, like ones used in, say, medical diagnosis, where they have a very narrow task and a narrow focus, are improving. Their accuracy rates are increasing. But for these more general-purpose, generative AI systems, the problem is really that they're trained on the world. They're trained on an entire corpus of all the data that's available, which turns out to be finite. We thought big data was endless, but it turns out to be finite. They're actually running out of data to train these systems on, and so they're using anything they can get their hands on. Well, think of everything you know about the internet and the world of data. There are some facts out there, but what the ratio of facts to everything else is, is an open question. There's an awful lot of opinion. There are things like humour, which AI systems are not particularly good at understanding. Irony. Things that are actual fiction, or very convincing fiction designed for entertainment. There's, of course, intentional misinformation and disinformation floating around all over the place. There are tons of conspiracy theories that are very well documented, book-length conspiracy theories and so on. And the systems are learning from all of that along with factual information from the world, and their ability to discern the difference between fact and fiction is limited. Well, if we struggle with it, you might think an AI would be better than we are, but humans have a sense of when they're hearing something that's nonsense. We understand, for example, sarcasm in a way that AI doesn't. We can often read cues about when something is intentionally being presented as fiction, where an AI may struggle with that. So this hallucination problem is not really going away, not in a serious way anytime soon, and that means that the product we get from generative AI, when we need it to be reliable and factual, is going to continue to be unreliable and only questionably factual.
20:08
Host:
That is absolutely mind-blowing stuff, because I know it's presented quite often as a teething problem. You know, ChatGPT will get over it. It will be programmed out. It will be decoded so it no longer makes these mistakes, which have earned this title of hallucinations. But you're saying no, it's a fundamental issue with the way that an AI of this type works. It captures all the data that's available to it: true, false or forecast. And of course, the poll example I gave is drawn from forecasting. It's taken a forecast, or a collection of forecasts, come up with this number, and presented it as fact.
20:47
Mike:
And another thing: these tools are now learning from the material that they are generating. So take an image generated by Google Gemini, for example. You put thousands, tens of thousands, hundreds of thousands of images created by one of these tools into the world, and then you continue training all of the image generators on all the images that exist. They're going to start training on the images that they or their compatriots have produced. And so now you have what they call model collapse, which is that eventually the model becomes weighted down with a net plurality of images, text, supposed fact and so on that was actually generated by an AI. And so the reliability thread becomes even thinner and longer, even more fragile, even more brittle.
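[For readers who want to see the feedback loop Mike describes, here is a minimal toy sketch. It is illustrative only and not from the episode: the Gaussian data and the two-standard-deviation cutoff are assumptions standing in for a generative model that under-represents the rare tails of its training data.]

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=100_000)  # generation 0: human-made "real" data
print(f"generation 0: spread (std) = {data.std():.3f}")

for generation in range(1, 6):
    # "Train" a toy model on the current data (here, just fit a Gaussian),
    # then build the next training set entirely from the model's own output.
    mu, sigma = data.mean(), data.std()
    samples = rng.normal(mu, sigma, size=100_000)
    # The toy model never reproduces the rare tails of what it saw: anything
    # beyond two standard deviations is dropped. That small loss compounds
    # every generation, and the data's diversity steadily drains away.
    data = samples[np.abs(samples - mu) < 2 * sigma]
    print(f"generation {generation}: spread (std) = {data.std():.3f}")
```

[Run it and the spread shrinks every generation; nothing has to go "wrong" for the collapse to happen, the loop itself does it.]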
21:36
Host:
And invisible, essentially invisible, presumably, because it snowballs to such an extent that the original truth was lost long ago.
21:43
Mike:
Yeah, exactly. And this is something that the engineers are scratching their heads about, because these systems require so much data to be trained. One thing that distinguishes artificial intelligence from human intelligence is that, yes, we require a great deal of data as we grow up, as we form from infants into adults, to become the people that we are. But it's not done as a brute-force operation the way artificial intelligence is trained: over a period of weeks or months, all the information that can be found in the world is shoved into this model so that it can answer a question about your love life or make a prediction about, you know, the next presidential or parliamentary election. And there's just not enough data, in fact, to train these tools to be coherent, to give productive, useful answers. In part, that's because we already weren't particularly great stewards of reliable data in the world in the first place. You know, Wikipedia? Yeah, fairly reliable, but there's still plenty of nonsense in Wikipedia. YouTube? I mean, don't even get me started about YouTube. How much of YouTube can you honestly say is reliable and factual versus, you know, interesting at best?
22:58
Host:
No, no, exactly. I mean, it does raise the question: what do marketers do? Again, it's another opportunity for marketers looking to the future. What do they do to make sure that they are the guardians of truth in this stuff? That they can spot the inaccuracies? Yes, they can use these AIs as tools, but they shouldn't always believe them. They have to verify them, and then they can take the good bits and get rid of the bilge that comes with it.
23:26
Mike:
You know, marketers, I think, have an almost unique problem in this space, because the work of marketing is already to construct an account of the world that isn't necessarily a 100% truthful account, right? It's an account of the world that's meant to direct you towards a certain set of choices or impressions about the world. So you're going to put in a little more information that favours your view and take away information that disfavours your view. You're already trying to create a fairly skewed version of the world that suits the goal of your client, or whatever the message is that you're trying to present. So then you turn to a generative AI tool to give you some of the fodder for that information. And if you're being responsible, you need to know the truth of the material that you're using, so that you can then shape the narrative in a way that achieves your goal. Now, if you're an ethical marketer, you're trying to achieve a goal that you think will actually have a socially beneficial effect in some way, even if that effect is simply to sell a product that we think people would want or use, or a message that would be useful for people to have. But if you're starting with questionable materials, your ability to do that responsibly is going to be diminished. And so it's setting up a serious ethical dilemma for marketers in doing their best work, in being people who can look at themselves in the mirror and say: the message that I put out in the world today, yes, it serves my client, or it's self-serving or whatever, but at the end of the day I'm not misleading people in such a serious way that I'm doing damage to them, or to their communities, or to our society. When you're working with questionable material to begin with, it's much harder to be confident that you're making those decisions in a responsible, ethical and fully informed manner.
25:18
Host:
So what does responsible marketing look like when you're using this stuff? How do you get there? How do you become that? We at CIM are very much about ethical standards, responsible marketing, being spearheads of the industry. How do you get there when you're working with material which is always going to be questionable?
25:35
Mike:
I think the first strategy always is caution. Caution is a great way to approach these tools, because we know about the risks. And hopefully, through this podcast, people who didn't know about the risks now know about them a bit more. Another way to think about this is that generative AI can augment your work, can help you with your work, but you shouldn't rely on it to do your work. Once you start doing that, that's when you lose agency and control over the work product. I've heard people talk very thoughtfully about how they can use AI to, say, check their work. So let's say you're writing a bit of copy and you're struggling with a few sentences. You just don't feel like you're getting them right. If you feed that copy into ChatGPT or Claude and say, rewrite this, or tell me what this means, or give me three alternate versions of this thing I just wrote, that's a very useful way of using the tool. So instead of asking the tool to tell you about the world, you're simply asking the tool to look at what you've done, use its logic to compare what you've done to thousands, millions of other examples of sentence structure and grammar, and give you feedback about how you might polish the work up. Writers are saying that this is incredibly useful. They can be writing something and struggling with that third paragraph, that sixteenth paragraph, or a whole passage, and they put it through the tool, and the tool gives them something back: ideas about how to rephrase this or that, a different word choice, and so on. And that's the kind of use that I think is very positive and ethical. Essentially, it's enhancing your work rather than trying to replace it.
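[In practice, the difference between the two uses is entirely in the prompt. Here is a minimal sketch of that "check my work" pattern using the OpenAI Python SDK; the model name, the system prompt and the draft copy are illustrative assumptions, not anything recommended in the episode.]

```python
from openai import OpenAI  # official OpenAI Python SDK, v1+

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

draft = (
    "Our new trainers are made with 40% recycled materials, "
    "so every step treads a little lighter on the planet."
)

# Ask the model about *your work*, not about the world: it only sees copy
# you wrote yourself, so there are no facts for it to invent.
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model would do here
    messages=[
        {
            "role": "system",
            "content": (
                "You are a copy editor. Do not add new claims or facts; "
                "only rephrase and critique what you are given."
            ),
        },
        {
            "role": "user",
            "content": (
                "Give me three alternate versions of this line, then one "
                f"sentence on what's weak about my original:\n\n{draft}"
            ),
        },
    ],
)
print(response.choices[0].message.content)
```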
27:11
Host:
You’re asking it not to tell you about the world but to tell you about your work.
27:18
Mike:
Yeah, exactly, exactly. So it's like you're actually having a closer relationship; you're having more of a feedback loop. Rather than relying on these tools to manufacture the world or to reflect the world, having them reflect your world and your work is much more useful. I don't know exactly how this would work in the image generation field, but my understanding is you can give the image generator sample images to work with and say: can you generate something that's like this, but, you know, with this background or with these kinds of characters? It takes a fair bit of work to do that exchange, but you're more likely to get something that corresponds with your original intent than simply starting with a few prompts and saying, show me a picture of protesters outside Westminster. You get some very strange results if you just ask for basic things like that.
28:07
Host:
There are biases in the system. As well as the inaccuracies that we've spoken about, and the conflating of fact with forecast that creates this uncertainty, this questionability of it all, there are also biases in the system. And before we wrap up, I just want to quickly have a chat about that. Do you think big tech are doing enough to combat those biases? When people are either asking it about the world, which we say we should do less of, or asking it about our work, which we say we should do more of, is enough being worked on in the background to fine-tune this stuff? Are we managing to remove bias from the system, which is, of course, a collection of data that is, in and of itself, biased, because everything is to some degree biased?
28:52
Mike:
Yeah, so bias is a very popular topic in my field, in AI ethics, and we think a lot about it. We talk a lot about bias, about trying to deal with bias in AI systems, and especially in the data that's used to train or build AI systems. But as you suggest, bias is in the DNA of AI systems, in part because the data that's used to train AI comes from human activity, and humans are biased. Even from the most practical standpoint: if you have a set of data that you need humans to label, to give names to, let's say it's a series of images, and you need humans to go through and give names to what they're looking at, then a great deal depends on which humans you ask to label that information. There was a famous case where somebody was looking at a big image dataset that Google was using and discovered that if you searched on the term wedding dress, what you would get was a lot of European and North American-looking people wearing a white gown, the kind of wedding we would commonly picture from the context of being in the UK, for example. However, if you showed the same system a picture of a woman from India wearing her wedding dress, the system would not recognise that as a wedding dress, because it wasn't trained to understand that an image like that could be a wedding dress. It was trained on the understanding of wedding dresses from one narrow part of the world; a large part of the world, but a narrowly focused part of it. That's one source of the bias.
The other source of the bias is that we actually have to bias these systems; the bias is actually necessary. So in any example you talk about in which generative AI hallucinates, or in earlier examples of generative AI which failed miserably because they spewed racist or sexist content and so on, the solution to that is filtering and tuning, and that filtering and tuning is itself a set of biases. One of the jokes I heard about ChatGPT, for example, is that it's inoffensive to the point where it just sounds like a corporate trainer. You ask it to tell you something, and it gives you something that's really just kind of boring. But that boringness is a feature, not a bug. It is there to keep ChatGPT from accidentally uttering things that would be viewed as offensive. So the OpenAI engineers intentionally go in, and they work with teams all over the world who are paid piece rates to annotate and label the outputs of ChatGPT and the like, to try to refine them and to identify which utterances might be offensive or incorrect or otherwise inappropriate. Those are then squelched, to get the most acceptable results out of the system, the most coherent results out of the system, which is not the same as saying the most accurate, or even the most fair or the most even-handed.
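[To make the wedding dress example concrete, here is a toy sketch of the labelling-skew mechanism. It is entirely illustrative and not from the episode: the two "features" and all the numbers are made up; the point is only that a classifier trained on one region's examples confidently rejects anything outside that distribution.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "wedding dress detector". Each image is summarised by two invented
# features (brightness, amount of red). The training set contains only
# Western-style white gowns, standing in for a dataset labelled by
# annotators from one part of the world.
western_dresses = rng.normal([0.9, 0.1], 0.05, size=(500, 2))
other_clothes = rng.normal([0.4, 0.4], 0.15, size=(500, 2))

dress_centroid = western_dresses.mean(axis=0)
other_centroid = other_clothes.mean(axis=0)

def is_wedding_dress(features: np.ndarray) -> bool:
    """Nearest-centroid classification: 'dress' if closer to the dress centroid."""
    return bool(
        np.linalg.norm(features - dress_centroid)
        < np.linalg.norm(features - other_centroid)
    )

# A white Western gown sits right on the training distribution: recognised.
print(is_wedding_dress(np.array([0.92, 0.08])))  # True

# A red-and-gold Indian bridal dress is darker and far redder. The detector
# has never seen anything like it labelled "wedding dress", so it says no.
print(is_wedding_dress(np.array([0.5, 0.9])))  # False
```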
31:52
Host:
Or even the most interesting, right?
31:53
Mike:
Or the most interesting, right, exactly (laughs). Because sometimes extraordinary opinions are extreme opinions, right? Or extraordinary views are extreme views. But bias is a major issue. When we talk about bias in AI systems, what we really mean is that we're trying to compete with the bias in human society and cause our technologies to do better, to reach for the aspirations we have as a human society: to be less biased, to be less discriminatory, to be less sexist, to be more sensitive to difference of every type, to different types of identities. We as humans are still struggling with that. Some of us do it better than others. Some societies do it better than others. But we definitely don't want our technologies to do it the way we do it, or worse than we do it. Unfortunately, most of the technologies we're talking about are made by a handful of very large companies, almost all centred in one place, Silicon Valley in California, and they bring a bias to these tools. The engineers bring a certain bias to these tools. That bias may work fine for them and their communities and the worlds they live in, but it doesn't necessarily work for the rest of the world, for other people's worlds, worlds that they have no access to or understanding of.
33:11
Host:
It's interesting, isn't it? The key takeaway there for marketers is that you must not devolve your critical faculties and functions to this stuff. Ask it about your work, not about the world, and when it's presenting things to you, recognise that although there's lots of work going on in the background to de-bias it, it is produced by a bunch of people with a fairly homogenous worldview based out of California, and therefore it's not going to give you a holistic view of the world. The wedding dress one, remember that if you're listening to the show: Dr Mike Katell tells us that if you ask it for a wedding dress, it will show you a white gown, because that's what we're used to in Europe or in European communities in North America. It will not recognise an Indian lady's wedding dress as a wedding dress, despite the fact that there are more Indians than there are Europeans by quite a few hundred million people. So keep your critical faculties. Don't trust everything it gives you. Learn to use it as a tool, as a colleague that can review and help you with your work. Final question before we go, a 10-second answer for a very long question. A 10-second answer, please, Dr Mike, for something that you could probably talk for another half an hour about: should we be scared of it?
34:35
Mike:
Yeah, that's a good question. I don't think we should be scared of technologies. I don't think we should be scared of AI. However, we should question the motivations of the people who are making contemporary AI, what their aims are and what AI is being targeted to do. The fact that we are seeing AI potentially threaten the livelihoods and creative lives of creative workers is not an accident. It's the intention of the people building AI, making certain choices about which technologies to build and which areas of work to target. So that is something I think we should be thinking about and focused on. On the larger question, people talk about the question of existential threat: is AI going to kill us? Is it going to, you know, annihilate civilisation as we know it? I don't think so. Or at least, I don't think AI is going to do that on its own. The threats to our social wellbeing, our collective flourishing and so on don't come from technologies themselves. They come from the producers of technologies and the uses that people choose to put them to. In the hands of people who do not have the greatest or most elevated motivations for our society, AI can be quite dangerous, but it's not the AI that's dangerous. It's the people behind AI that we should be concerned about.
36:02
Host:
Yeah, and as marketers, we have got a role as guardians, to make sure we're pushing back in the right direction as ethical and responsible marketers. We need to do that. We must do that if we're going to make this thing work for us and for the world. Dr Mike Katell, it has been fascinating. This is Dr Mike Katell, Ethics Fellow at the Alan Turing Institute. We're very privileged to have you on the show, and thank you for your time and insights. I've learned a lot from it.
36:27
Mike:
It's been a pleasure, Ben. Thank you so much for having me. Yeah, it's been a lot of fun.
36:32
Host:
It's been a lot of fun and really insightful. Thank you very much indeed.
36:35
Outro:
If you've enjoyed this episode, be sure to subscribe to the CIM Marketing Podcast on your platform of choice. If you're listening on Apple Podcasts, please leave us a rating and review. We'd love to hear your feedback. CIM Marketing Podcast.