Episode 31

Should AI be a Mirror of Society or a Tool to Make Us Better with Mike Nemirovsky

Can artificial intelligence truly be unbiased? Should it mirror the society we have or guide us towards the one we strive for?

This week, we have long-time friend of the show and returning guest Mike Nemirovsky joining us to help untangle this complex issue. We explore the mirror effect of AI, reflecting on our experiences with image generation and the stereotypes it often defaults to. We also discuss the implications of AI bias in various sectors, particularly content creation and education. Acknowledging the potential of AI to revolutionise education, we navigate through its possible applications for personalised learning and enhancing critical thinking skills.

But the conversation doesn't stop there. As we peer into the future, we speculate on the trajectory of large language models and their potential repercussions for various industries. Will there be a battle of biases as companies compete to develop superior models? We contemplate the potential of a freemium model for these language models and the ethical considerations around them. And, of course, we can't ignore the elephant in the room: the progress in deepfake technology and the inundation of cheap AI-generated content on platforms like YouTube.

Join us as we journey through these thought-provoking topics, emphasising the necessity for responsible AI usage and the importance of regulations.

Takeaways

  • AI-generated content is not copyrightable, which raises concerns about ownership and protection.
  • Separating AI competitions based on different rules and regulations can allow for more exploration and innovation.
  • User experience is crucial in AI tools, and different tools can provide varying results and experiences.
  • The AI industry is expected to see more players in the large language model space and increased AI integration in software.
  • Ethics legislation and the fight against deepfake technology are important considerations in the AI industry.

Happy holidays!

//david

To get weekly email updates for the Creatives with AI podcast, visit the AI Podcast Network, scroll to the bottom of the page and subscribe.

Tools we use and recommend.

Riverside FM - Our remote recording platform

Music Radio Creative - Our voiceover and audio engineering partner

Podcastpage - Podcast website hosting where we got started

Transcript

00:00 - David Brown (Host)

So, Mike, welcome to the podcast. What I wanted to do today is really just have a casual conversation more than any sort of structured thing. But the question that keeps coming up for me is this: with all the talk about bias, and everything that's surfacing about what we're finding in the data, everybody's saying there are all these built-in biases, that it's white men creating the algorithms and all this stuff. And I can't help but wonder if that's what we want.

00:46

Like, should AI be a mirror of what the world is really like? Should it actually reflect back the current situation and the data that's out there? If the data shows that there are more women in, sorry, more men in certain careers, or that women seem to be disadvantaged in a certain area, or if somebody requests a photo of an entrepreneur and it pulls up a white man, is that what it should do? Should it reflect back how the world really is, or should we get into the business of tinkering with that and having it present a false version, some sort of fiction that doesn't really exist, but that's trying to nudge us as people into behaving in a different way? It's that question that's been niggling in the back of my mind, and I thought you'd be an interesting person to talk to about it, just to noodle and have a conversation. So what do you think?

01:55 - Mike Nemirovsky (Guest)

Yeah, first of all, thank you so much for having me back on the podcast. I feel so honored to be a repeat guest. I'm the first repeat guest? That's blowing my mind, and I hope I'm worthy of it. But yeah, that is a super interesting question. I love these types of more, let's say, philosophical questions around AI. My initial response, if I had to give one yes-no, Boolean-type response, would be no, we shouldn't have it reflect or be a mirror of our society. And let me, let's jump in, right.

02:45 - David Brown (Host)

And this is why… yeah, let's go, man.

02:47 - Mike Nemirovsky (Guest)

When you first prompted me on this question, and I think you were trying to scare the shit out of me, but no, you didn't, one of the things that came to my mind, as controversial as it might be, is that we've already had multiple systems for thousands of years that have been trying to nudge humans towards, let's say, morality. I'm sure people will argue about the level of their effectiveness, but they're probably not as effective as they might have intended to be. And of course I'm talking about religion, right? Like we've had these, let's say, morality…

03:28 - David Brown (Host)

Nudge is a kind word for it.

03:30 - Mike Nemirovsky (Guest)

Yes, a very kind word for it. And I think the reason my mind went to that is the thing with AI: yes, the data it's been trained on is biased, because it's been trained on more or less the history of human civilization and all the biases that are incorporated in that. But that's also its superpower. One of the use cases I've been playing around with a lot lately is, yeah, it's great at summarizing things, but it's also so good at saying, hey, this is how my company normally positions something; help me see it from a completely different perspective, one I would never have taken. Or how would Shakespeare position this, as opposed to a 21st-century marketer?

04:17

Love it, yeah. And it's because it has no inherent bias of its own. It's just been trained on this data, and it's just making these statistical assumptions: okay, you gave me this prompt and I'm going to give you this output. There's no intent behind it; it's just doing what it's built to do. Keeping it generalistic like that, I think, is important, because the exact reason the other systems of the past thousands of years seem to fail is that there's never one size fits all. It always ends up dogmatic, it always ends up corrupted for the benefit of certain individuals, more than the actual intent in the beginning. And so, to the degree that it's possible, and this is where we can really get into the weeds, I think we should try to keep our AI as generalistic as possible, giving it as much information as possible so that we can have it take these different perspectives, which should hopefully elevate us. But the other part I want to talk about is the nudging. Besides the fact that you and I have spent the past, I don't know, 20, 30 years of our lives working in systems that help humans use data to nudge other humans to do things, to buy things in particular, I think it's just a natural human instinct to nudge other humans. It would be pretty cool, and I think pretty useful, if we could keep AI generalistic but still have it nudge us towards a baseline of ethics. And I think we talked about this briefly last time.

05:59

I'm not talking about any particular East-versus-West philosophy or morality or anything like that, but there's some baseline of human ethics, maybe like Asimov's three laws, you know, do no harm and all that stuff. I really believe there is a morality baseline, and it's probably pretty low: don't kill, don't do something that's going to hurt someone else, don't try to benefit off the pain of others, don't torture people for years on end. Anything like that is pretty easy for, like, 99.9% of the planet to say, yeah, okay, that's at least the baseline to start from. So yeah, I'd say, not just nudging us towards doing better, but keeping the morality, let's say, vague.

06:50 - David Brown (Host)

I think it's a good point. There was a lady who spoke at an event I was at earlier this year, and I may have mentioned this before at some point, but her point was that when you get into the ethics discussion, and this goes back to the nudging discussion as well, the only place you can really start is with international human rights law, right? That's the only thing that has sort of universal agreement across every single country in the world: the core human rights legislation. Aside from that, it starts getting really woolly. So her idea was that that's where we should start and try to build from there. And I think that matches what you're saying.

07:41 - Mike Nemirovsky (Guest)

Yeah.

07:43 - David Brown (Host)

You know, there is some core set of beliefs, but even when you get into things like the rules of engagement in war, not all countries agree to that, so you don't even have universal agreement there. It gets really woolly, really quick. And I guess my worry is… actually, I think AI could be useful as a mirror, because as the data gets updated and we move through time and through society, it's going to continue to reflect that back. So if we start to see a change in the way it responds, we could say, okay, we're actually seeing a change in the data, so we know we're actually changing. But I just don't know.

08:35

Again, like you said, I feel like: what group then takes precedence over another group? If you say, show me images of an entrepreneur, I think at the minute it would probably show you a whole page full of white men. But who decides if we're going to change that? Who decides what goes first, how many women get shown, how many different races get shown, which races, and in what order? Do you know what I mean?

09:11

And it just feels like it's going to turn into a massive bunfight over, well, my group should be ahead of your group because my group is more disadvantaged than yours. And to what degree, I guess?

09:24 - Mike Nemirovsky (Guest)

Do you want the AI to police that? Or maybe not police it, maybe guide you, right? So maybe, if you give the prompt of generate an image of an entrepreneur, and I've been playing around with this very recently, where I'll say… (yeah, which is why I thought you might be a good person for this, because I know you're doing it). I'm literally generating content and I wanted some iconography.

09:50

And I would ask, I think I was using, I guess it doesn't matter, DALL-E, right? Through Microsoft Bing, whatever. I put in something like one colleague talking to another colleague about a problem, with some idea bubbles over their heads. And it was interesting, because it did generate these very generic white people in business suits. Because I didn't want that representation, the iconography made me prompt it again to say, give me a female colleague, maybe of color, not wearing a suit, because who wears suits anymore? Sorry for anyone that likes suits. But I think lawyers still wear suits.

10:37

Lawyers, business or banking probably still wear a lot of suits and ties. But yeah, it wasn't very representative. So I prompted it again, and a lot of people will probably re-prompt it. But maybe we do incorporate things in the AI to say, before I generate this image, can you give me a bit more information about what type of representation you'd like to see? So it already tries to nudge us into helping it not go with the default. But you know, it's interesting: are we 100% sure that it's biased in bringing back those images, or is it just that there happen to be more images of white people in suits with tags of entrepreneurship? Which is probably the case, right?

11:25 - David Brown (Host)

But what it's unearthing is the bias that's already there, exactly, right? Like, I was at an event the other day. Somebody asked me to emcee a local event for the business improvement district, and I went along. There were about 80 people there, and one of the presenters asked the group: how many people in the audience don't have bias? And no one raised their hands. And they asked, how many of you feel that you're not prejudiced in any way? And no one raised their hand. It's a human thing. We all have biases and we all have prejudices and we all know that. So the fact that it's in the data isn't surprising. Which gets me back to the mirror thing: the mirror almost enables us to gauge how much of a bias we have, and I feel like that might be useful somehow.

12:23

But what was interesting…

12:24 - Mike Nemirovsky (Guest)

Go ahead. Well, I think you're right. Especially in today's environment where, like it or not, and I'm probably on the 'not' side of wokeism, I think it's actually done more harm than good. But where it has done good, I believe, though my sample size is just my colleagues and the companies I've worked for, is that it's made bias more of an issue. Which means that people, like the people in the audience you mentioned, now realize: no, I definitely am not without bias, I definitely am not without prejudice, because it's almost impossible to be. I'm simply biased by the 46 years of experience that I've had. No one else has the exact same biases I have, but I probably have similar biases to other people.

13:13

And so, now that we're aware of that, I think it's wonderful that we have this mirror to make it even more glaring: hey, this is the reality of the breadth of, let's say, human literature that we've trained these machines to understand. So, 100%,

13:35

I would say it's good that it is a mirror. And this is where I'd like to talk a little more, because I know you've had a couple of people on about using AI in education, and that whole debate of, well, kids are just going to use ChatGPT to cheat. The thing is, this mirror is exposing a lot of things to us. If we just use it as a mirror and take the default response, well, that's no good, but it's actually going to expose that to us, and then it can even help us. Going back to the example I used: hey, here's a piece of content that my colleagues and I wrote for a marketing piece or a blog post; point out all the biases that I have in here. It's actually good at doing that, and at helping us reword it

14:21

so that it would work for an audience that we are not a part of. And that's the problem: it's really hard to escape your bias, because you've had this lived experience and you haven't had any other. Even if you have multicultural friends and you try to expose yourself to other cultures, you still haven't lived that experience, and neither has the AI. But at least it can go and find whatever amount of content it's been trained on to try to reword it.

14:52

And so I think, yeah, you're right, the mirror is important. And then, if we can push it a bit further, let the AI actually suggest things. Like, actually, what Bard does is give you three draft responses. It'd be nice if it also asked, would you like different perspectives? Of course, it'd have to be relevant to what you're asking.

15:14 - David Brown (Host)

No, no, no, that's…

15:15

That's an excellent idea, and I've made a couple of notes while you've been talking. One of the things I want to go back to, which I think is relevant to this bit, is the prompting questions. I find that really, really powerful: if I'm asking it to do something like help me make a marketing plan, or help me write a business plan, or help me write an intro paragraph for this thing, right?

15:41

You know, the studio thing I'm working on: I had some ideas, so I put a bunch of ideas in and said, help me write this, make this into an intro paragraph or whatever. But I've learned that a lot of times, if I just tell it to ask me questions before it writes the answer, it gives way better results, because then it will come back and say, well, what's the name of the company? What's your target market? All these things that I didn't put in the prompt at the beginning. And I really like that. It feels like maybe there's a version of an AI somewhere that's literally programmed so that every time you ask it to do something, it asks you at least three follow-up questions.
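A minimal sketch of the ask-questions-first pattern David describes, assuming the OpenAI Python client; the model name and all prompt wording here are placeholders, not anything from the episode:

```python
# Sketch: make the model ask clarifying questions before it drafts anything.
# Assumes the OpenAI Python client (pip install openai) with an API key in
# OPENAI_API_KEY; "gpt-4" and the prompt text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Before you write anything, ask me at least three clarifying questions "
    "(audience, goal, tone, missing facts). Only produce the final text "
    "after I have answered them."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Help me write an intro paragraph for my studio project."},
]

# First call: the reply should be clarifying questions, not a draft.
reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)

# Feed the user's answers back in and ask again for the actual draft.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user", "content": "It's a podcast studio aimed at local businesses."})
draft = client.chat.completions.create(model="gpt-4", messages=messages)
print(draft.choices[0].message.content)
```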

16:30 - Mike Nemirovsky (Guest)

Yeah, like it can't just give you an answer until it asks "why" three times, right?

16:35

Which was the classic analytics training that I remember: when somebody asks you for a piece of data, ask them why three times. Yeah, it'd be great, and it's probably just a case of it's going to come. And this is interesting to explore as well: the sharding we've talked about before, and I think Sam Altman talked about a federated system of AIs, how they might be a bit more specialized, even if there does become, let's say, general AI, which is still TBD. The fact is you might have very specific AIs, and nice layers on top of those AIs. So the large language models are one thing, but what got me thinking, okay, let me start tinkering around with my own stuff, wasn't necessarily that I wanted to build a whole new large language model. It was that, as useful as they were, I felt there wasn't enough of a layer on top of them to make them even more useful, if that makes sense. I was a bit inspired by a podcast I listened to, where the guest was…

17:48

He was a co-founder of Optimizely.

17:52

I'm sure you remember that company, the A/B testing software. That was a crowded space he was in, and he was like, it wasn't just about competing; the way he phrased it was amazing: it's not just about competing on the feature side, it's competing on the user experience you give to your customers.

18:07

And as a product manager, I really felt that in my bones, because that's exactly right. It's not about "the competitor has this feature, so we need it as well." It's about, well, does that feature even deliver value to the customer? And I think, with AI, what ChatGPT did, and how Bard and all the rest are doing it, this one input prompt where you can start talking to an LLM is an amazing experience. But it's just the beginning. To be able to put a nicer layer on top of that, which might actually prompt you, especially if it's specialized in a particular industry or use case, abstracts away the technology of this large language model just doing statistical analysis to pick the next word. I love that meme you shared of Gromit laying the next track down.

18:56 - David Brown (Host)

That's all it's doing: just making guesses.

18:59 - Mike Nemirovsky (Guest)

But if you can engineer this beautiful layer on top of it that actually says: no, Dave, you don't just ask me for an entrepreneur. What is your entrepreneur trying to convey?

19:10

And really gets you to express what you want it to create, then you start getting much better outputs from it, and hopefully even more use from it. But now this is where we come back to the debate: it's still on whoever's building that layer on top to decide, okay, how much ethics do I put into this, and how much diversity do I make sure it's trying to evoke from the user?
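A rough sketch of the kind of layer Mike is describing, again assuming the OpenAI Python client; the clarifying questions, model name and prompt wording are all invented for illustration:

```python
# Sketch of a thin product layer that interrogates the user before calling
# an image model, instead of accepting "an entrepreneur" at face value.
# Assumes the OpenAI Python client; the questions and model name are
# placeholders, not anyone's actual product.
from openai import OpenAI

client = OpenAI()

CLARIFYING_QUESTIONS = [
    "What is the subject trying to convey (confidence, collaboration...)? ",
    "Any preference on gender, age or ethnicity, or should it vary? ",
    "Setting and dress code (suit, casual, workshop...)? ",
]

def generate_with_intent(base_prompt: str) -> str:
    # The layer, not the model, nudges the user away from the default.
    details = [input(q) for q in CLARIFYING_QUESTIONS]
    enriched = base_prompt + ". " + " ".join(d for d in details if d)
    result = client.images.generate(model="dall-e-3", prompt=enriched, n=1)
    return result.data[0].url  # URL of the generated image

print(generate_with_intent(
    "One colleague talking to another about a problem, idea bubbles overhead"
))
```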

19:37

And then, going all the way back to the first question, it's kind of like, well, I don't know. Because, although I want it to nudge us in the right direction, since, let's say, I try to be a quite moral, upstanding citizen of the world, I always feel that any type of control that a human, or even a group of humans, puts into it will ultimately not lead in the direction that was intended.

20:05

Maybe at first, and it probably is better than the opposite. But at the same time, maybe it's better to take almost a free-market approach: let the chips land where they may, and the checks and balances of the system will put pressure on people to get better at using AI so that they apply the morality of the day. Because I think that's a huge problem as well: even though it's a mirror of us up to now, it will continue to reflect as things change, and you kind of touched on that too. So, in theory, the market forces and the checks and balances should automatically make it more diverse, and there should be better data sets and all that.

20:58 - David Brown (Host)

Well, the other side of that is that, moving forward, I think we're going to see this: up until now, models have pretty much had unfettered access to train on whatever they wanted, but moving forward they won't. There are a lot of companies and groups out there pushing, obviously, for more copyright protection and that sort of thing. So I think the potential information that's available is going to become much more limited. We could actually slow down a little bit, because not as much current information is going to be available, and it'll still be models trained on data from 100 years ago and stuff that's out of copyright. Do you know what I mean?

21:47 - Mike Nemirovsky (Guest)

…write me an essay on the war of…

23:56

And I think there are parallels. I'm sure if there was an author or a musician on here, I'd get smacked down pretty hard, but I just feel there are parallels there. Yes, your books are currently available on the market, and if I read a book and then write 15 blog posts, or even start a whole new career or a billion-dollar company just because I read your book, I'm not going to give you a penny. I might give you a credit: oh, I was inspired by this book and I created this billion-dollar company. That author isn't getting any piece of that billion dollars; they maybe got $9.99 for the book. So I don't see it as completely different, especially since there are already some safeguards in place to keep the AI from just being a plagiarism machine, literally spitting out the contents of that book. So yeah, what are your thoughts there? Why does it have to be either/or? Is there not a best-of-both-worlds approach to this?

25:01 - David Brown (Host)

Yeah, I literally had a call with somebody about this just before we had our chat, and what's interesting is there's still a well-established principle: if you take a copy of something someone's done and repurpose it in your social media, or you take a piece of music someone's made and use it as-is, that's a clear copyright infringement, right? There are already laws and rules around that. Where it gets really woolly, like you said, is in that "inspired by" territory. And my point was this: if I go into a photography studio and I take black-and-white profile pictures of people, am I copying Rankin? No. Do you know what I mean? Rankin is Rankin, but all he does, and people are going to cane me for this, at its core…

26:01

All he's doing, generally, is taking profile pictures of people. Maybe there's a way he produces his images, how he tweaks his color balance and all that sort of stuff, but at the end of the day he might do a black-and-white profile, and it's really difficult to say, well, you copied Rankin. It's just a black-and-white profile. So there's a massive grey area there, and I agree with you. And, staying specifically on the education topic, I think educators are actually finding it really helpful. I've got a guy named Byron coming on who's an assistant head teacher at a primary school, I think here in the UK, and what he's figured out is that it's super engaging for the students. He adds extra content that's not strictly part of the curriculum but that helps the kids learn better, and he's used tools like ElevenLabs, so shout out to ElevenLabs. He writes a script, puts it into ElevenLabs, has a synthetic voice read it, and then puts that up as sort of a fake podcast the kids can go and listen to. The kids think it's really fun and interesting, and they can help write it. He'll get them engaged in class: hey, let's put some of this stuff together, I'll use some of your work, we'll put it into this tool, we'll make it, and then you can listen to it and have your parents listen to it. It's another way to engage the kids while also teaching them extra stuff at the same time. So that's one really good use for the tools.

27:45

Another one, and I think I might have mentioned this at some point along the way, is that a lot of students are using it to help them prep for their exams. In the UK in particular it works a little differently than in the US, which is what you and I were used to. GCSEs are like your high-school finals, and A levels are sort of like the SAT or ACT. The thing is, students spend an enormous amount of time preparing for those exams so they can pass, get good scores and go to uni. And what the teachers have worked out is that the students are taking questions from old exams, putting them in, writing their answers, and then saying: grade my answer and give me suggestions on how to write a better answer to this question. It's almost like a personal tutor.

28:41

Yeah, and the teachers love it, because it means the students can get almost one-to-one tutoring without taking up more of the teacher's time, and the teachers can work with the students who really are struggling with the concepts. So it frees them up as well. And the few teachers I've spoken to have done a lot of research. They've said: go and try it, bring in what it tells you, because I want to check that it's giving you the right advice. And a lot of times the advice is stuff they didn't even think of. They're like, it's so good. I'm encouraging my son to do that, because he's going to do his A levels next year. I'm like, look, when you start doing your revision, you need to start dropping those questions in, writing your own answers, and using those tools so you can develop better answers and get better scores on the test. And that kind of use isn't cheating; it's the same as just having a tutor, and it's amazing for that, you know.
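The revision technique the teachers describe fits in a single templated prompt; this sketch uses illustrative wording, not a vetted mark-scheme prompt:

```python
# Sketch of the exam-revision workflow: a past-paper question plus the
# student's own answer, asking the model to grade it and suggest
# improvements. The template text is invented for illustration.
GRADING_TEMPLATE = """You are an A-level examiner.

Question (from a past paper):
{question}

My answer:
{answer}

Grade my answer against typical mark-scheme criteria, then give specific
suggestions for how I could write a stronger answer."""

def build_revision_prompt(question: str, answer: str) -> str:
    return GRADING_TEMPLATE.format(question=question, answer=answer)

# The resulting string gets pasted into whichever chatbot the student uses.
print(build_revision_prompt(
    "Explain two causes of the 2008 financial crisis.",
    "Banks lent too much money and house prices fell...",
))
```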

29:48

So I think the potential for education is huge. And if you think about special needs students, maybe dyslexic students who struggle, or students where English isn't their first language, or if they're in France, maybe French isn't their first, it could be any language, those tools can really help students learn and develop their skills. And like you said, it's not about "cheating", in quotes. Part of me wonders: is this whole cheating thing just us being salty because we didn't have access to those tools when we were young?

30:26 - Mike Nemirovsky (Guest)

Right, and we're like, well, that's cheating.

30:28

But if we had had them, we would have totally used them ourselves. Yeah. And beyond the special needs cases, or the learning disabilities, which this will probably help uncover, we could maybe even identify learning disabilities that aren't necessarily defined yet. One other issue is kids getting bored because they're ahead of the rest of their class. If you have a class of 30, it's probably a bell curve, and there are going to be two or three who are just waiting for everyone else to catch up. Well, now the AI can challenge them further, right? That tutor can challenge them to go above and beyond. And in a way, coming back to authorship and letting the AI train on your material, hopefully it's also going to raise all boats, in the sense that it might challenge you as an author to go a step beyond.

31:28

And I guess it's an impossible question to answer. I don't know if I would read a book generated by AI. For sure not yet; at this point there's just a lot of uncertainty about the accuracy. I don't think completely generative text from AI is quite there yet. But that just comes back again to, well, yeah…

31:54

It might get better. And you know, Google is even self-checking Bard's answers with Google searches. What was funny is, I just used it yesterday and did the self-check, and for literally every statistic it gave me, it couldn't find an actual source. That's insane.

32:11 - David Brown (Host)

But yeah, I always ask it for citations, just on that. When I ask it for something that I know needs data, if you just ask in the prompt for citations of where it gets its figures, it tends to be more accurate, because you can then click on the links and go see the source. It has to find a link.

32:30

Yeah, so it takes a little longer, and it's not as creative, but it gives you stuff you can actually reference. So anyway, top tip from my learning.
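David's tip amounts to one extra instruction in the prompt; a hypothetical before-and-after, just to show the shape of it:

```python
# Sketch of the "ask for citations" tip: one extra instruction appended to
# the prompt. The wording is illustrative.
question = "What percentage of UK students use AI tools for revision?"

prompt_without = question  # baseline: faster, more creative, unverifiable
prompt_with = (
    question
    + " Cite your sources with links for every statistic, and say so "
      "explicitly if you cannot find one."
)
# The second prompt tends to be slower and less creative, but every claim
# comes back with a link you can click and verify.
print(prompt_with)
```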

32:43 - Mike Nemirovsky (Guest)

No, for sure. But I guess, slightly coming back to the original question as well, maybe another example is one we've talked about before, and I know you've had other people on the podcast about this too: the creative industries, let's say movie making, movie writing, and music for that matter. But let's stick with movie making. Yes, AI will certainly democratize who can create film now, and the Writers Guild certainly has a right to be concerned about this democratization and the diminishing of their value, maybe. Because I still think if you have the talent to be, let's say, particularly a comedy writer, that's almost impossible to mimic. Yes, forgive me, Hallmark, you can make tons of Hallmark Christmas movies that all follow the same template. You don't need to apologize for that. No, exactly. But I won't watch them; my wife will watch them because they're just on in the background during the holiday season, they're feel-good and all that type of stuff.

33:47

But truly good content, I think, is still going to come from the hybrid approach. Great, now I don't necessarily need studio budgets, but to make a movie like Superbad, which is an instant classic, you can try to copy Seth Meyers's writing tactics, you can try to copy those actors' delivery, but there's something inherently human about that lived experience which is never going to be in the machine, because it's never going to have that lived experience. So I don't know, I think we shouldn't worry so much about copyright infringement and not letting these models train on things. In fact, it would only help you as an author to then go to the next level and collaborate with it.

34:41 - David Brown (Host)

But yeah, I'm glad you said that, because one of the points, and this goes all the way back: the lady I was talking to, I met at the Women in AI fringe event when they had the AI summit here in the UK, and we started talking about this, and I'd be interested to get your thoughts on it as well. It's along the lines of what you were saying. What I wonder is whether you're going to end up in a situation where the creatives that lean into AI and say, actually, yes, it's fine, use my style, use my tools, use my information, use my data, will ultimately end up getting more PR and more coverage and more recognition, because that will start to show up in what other people do. And the example I always use for this is Dave Matthews Band. For people who don't listen to Dave Matthews Band or aren't American, they're like the largest touring band in the world, and the way they became popular is by not copyright-protecting all of their concerts. In fact, they used to encourage the college kids to record them, and they would even let people plug into the soundboard to get really good-sounding cassette tapes back in the day. They were mainly playing at universities, and they wanted those uni students to take those tapes, make copies and give them to their friends, because it got the word out about their music. It meant more people came to the shows, and they were selling out shows really quickly because everybody had heard of them, because they allowed everybody to make all these sanctioned bootleg tapes.

36:29

Basically, yeah, exactly. That was their goal from the very beginning; Dave Matthews has said this in a few interviews. Their goal was to be the biggest band in the world. It wasn't a happy little accident; they had a plan. But they realized at the beginning that restricting people from getting their stuff wasn't the way to do it. They wanted everybody to hear their music, because that meant everybody knew them, and they would get the most exposure. That had a massive impact on me and on my thinking for a long time. And I just wonder if we're not in the same type of situation at the minute, where AI has the potential to generate massive amounts of coverage and scale for artists and writers and designers that maybe they never had the ability to get before.

37:29

And combine that with social media, and you really could take off just by AI kind of copying your thing, and then people start to look at it and go, oh, this is the original person that did that. I don't know.

37:44 - Mike Nemirovsky (Guest)

I don't know, man. No, I think that's 100% right. I think we shouldn't discount it, and luckily we have the technology to trace things back, right? So as long as we're putting in safeguards to make sure the AI does give credit where credit is due, that does exactly give exposure to that. And I really believe in everything you said about Dave Matthews' approach: rather than trying to police it and restrict it, lean into it and ride the wave, whatever it brings. But in particular, from a creative perspective, creating music can be quite hard.

38:21

Talk to even Dave Matthews Band, probably, and any other band: maybe at first, when you're 18 and you're full of rage, piss and vinegar, and you've got lots of things to say, the music's flowing out of you. But at some point you're going to start getting creative blocks and not knowing where to go next. I just watched the Quincy Jones documentary on Netflix, which was pretty nice, and over and over they film him saying things like: it's the same 12 notes. We've had the same 12 notes for whatever, 800, 1,000 years in Western music. And exactly that. First of all, from those 12 notes, look at the amazing breadth of music and styles we've been able to produce. However, there are probably times where you're trying to figure out, what can I do next?

39:07

In the past, maybe you got inspiration the way someone like Quincy Jones did, by exposing himself to different genres, going from jazz to movie scoring to hip-hop and everything in between, pop music with Michael Jackson, and that kept his creative juices flowing. Well, now, not only do you have all these genres, but you have the power of this AI to say: okay, I made this composition, give me some ideas to change it. What are some influences I'm not thinking of? And right away, instantaneously, it's helping you create something you never would have had the exposure to as quickly or as easily.

39:49

…I want to throw in some André…

40:45 - David Brown (Host)

So yeah, and exactly like you said, if it's not happening already, I can pretty quickly foresee that it will be.

40:58

There'll be digital AI art competitions, right, only for AI art, and there'll be AI photography competitions: use AI to create a photograph of something, and you can have a competition around that. And that makes sense; then all the people that want to do that sort of stuff have a vehicle they can use to get their name out there. And those works aren't copyrightable in any country, anywhere, which is quite interesting. If it's created by AI, there's no copyright on it, so people would just be doing it for fun; there's no protection that can be afforded to it, at least at the minute. But it might help to keep things separate. It's sort of akin to what people say about the Olympics and sporting events, right? You almost feel like you need two separate Olympics: the clean Olympics, and then the one where it's like…

41:52

Let them do as many drugs as they want, and let's just see how far it can go. But you'd need to separate those two out. Like, how fast can a human possibly run if you just let them take as many steroids as they want? Could they cut the time in half? I don't know, maybe. But it'd be quite interesting to see, in a way.

42:16 - Mike Nemirovsky (Guest)

Well, you've got the chess competitions already, right, where you have the human-plus-computer competitions. From what I've read about those, they're not trying to overtake human-versus-human, or even human-versus-robot, but the bionic, is that the word for it, when it's a mesh of human and machine? Anyway, they're almost a different style, almost a different type of competition, as long as everybody understands.

42:43

Hey, you can use your favorite AI, I'll use mine. And the combinations of moves and the different tactics that happen almost reinvigorate the game, because people are like, wow, that's so clever, I would have never thought of that.

42:58 - David Brown (Host)

I would have never thought about that, yeah.

43:00 - Mike Nemirovsky (Guest)

These are grandmasters saying this, and they're like, that is so bizarre. And the AlphaGo competition, obviously, was like, why the hell is it

43:07 - David Brown (Host)

doing that? Yeah, I remember watching the Go stuff, and the commentators were just sitting there looking, and they were like, this is totally on the wrong track. They just couldn't understand what the machine was doing.

43:21

And then, at some point about three-quarters or 80% of the way through, everybody just went, oh, and they realized it had been doing this intentionally the whole time. They were just stunned at how far ahead it was actually planning and how it worked out. So, yeah. I just want to go back to something you said before; I have a couple of notes written down here, and we're 45 minutes in already, if you can believe it. You talked about the UI and UX part of it, that layer that sits on top of the different AIs, maybe that federated layer or whatever it is. I totally agree with you on that, and I think that user experience piece is going to be hugely important. And I put that in the context of podcast tools.

44:22

There are probably 10 different platforms I could use. I use Riverside to record my remote shows, but there are tons of other ones out there, Acast and all these different ones. You can record locally, you can do all this stuff, and most of them now have AI tools that go along with that. But there are also other tools like Podium and Swell and Jasper that basically do the same thing: you upload an audio file, and it generates suggested titles, show notes, timestamps for your chapters, social posts, blog posts. All the stuff, right.

45:08

But they are all very, very different, and the results they give are very different: the way they write their show notes, the tone, the words they use, the things they pull out of the conversations. Some are very different and some are subtly different. I probably tried six different tools before I settled on Podium, which is my personal favorite. Loads of people will use different ones, but I found that really interesting, and I just wanted to highlight it because my experience exactly reflects what you were saying. I'm pretty sure all of them use ChatGPT's API in the back end, so they're doing something on top of that, adding their little bit of magic, and that's where it becomes really interesting. You, being a product manager, are probably even more sensitive to how to create that experience than I am. I just use the product and go, well, this is great, and that's terrible. And, shout out to them…

46:35

They're probably going to hate me for this, but when I very first used Podium, it had the worst UI. It was the most raw, plain thing: literally, you upload a file and the only thing you can do is download a zip. That's all it did. It was so simple, and the stuff it gave you was really raw, rough, plain text files. And, kudos to them, they've worked really hard since to improve the UI and user experience we use now. But the core of the content hasn't changed, and that's why I use them. Even though the UI wasn't as good as some of the competitors, I still used it, because the content it generated matched what I felt when I had the conversation.

47:26 - Mike Nemirovsky (Guest)

No.

47:27 - David Brown (Host)

And that's hugely important. I guess that's the core of why there are loads of different tools to do all sorts of stuff: we're all different, we each like one thing over another, and there's not one tool to rule them all.
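If David's guess is right that these tools wrap the ChatGPT API, the "little bit of magic" is essentially a house-style prompt around the transcript; a minimal sketch, with an invented house style:

```python
# Sketch of a show-notes tool as a thin wrapper over a chat API.
# Assumes the OpenAI Python client; the house-style prompt is invented,
# and it's exactly where such tools would differentiate themselves.
from openai import OpenAI

client = OpenAI()

HOUSE_STYLE = (
    "You write podcast show notes. Tone: warm, first person plural. "
    "Return: 3 title options, a 150-word summary, 5 chapter markers, "
    "and 2 social posts."
)

def generate_show_notes(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": HOUSE_STYLE},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# Each competing tool ships roughly this loop; the tone and fields in
# HOUSE_STYLE are the "subtle stuff" that differs between the outputs.
```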

47:43 - Mike Nemirovsky (Guest)

No, that's right. And so, without turning this too much into a product management podcast, one of the big reasons I started with helpereeio was a little bit to scratch my own itch, but a lot to start thinking about what we in the product industry call "jobs to be done". I don't think it's just the product industry, but it's a very popular term. As a podcaster, you have some very specific jobs to be done. And so Podium, and I completely feel their pain, put something out with a probably crap UX, and maybe they did that intentionally, I'm guessing, maybe they didn't, to have you start playing with it and saying, yeah, this is all right, but this sucks, and I can't quite do this. That's how we learn as product people. If there's anything I've learned over the past six, seven years as a product manager, it's that I'd rather put something out now that sucks and is a bit embarrassing, that my early adopters are going to grunt and hate me for, but from which I will learn exactly which jobs to be done I should focus on. Because it's kind of like, okay, I'll put up with this little niggle, it's a bit annoying, but it's really cool that they let me do this. And so you learn what to prioritize. But the other part of it is 100% accurate as well: we all will have our own take on how to deliver that experience.

49:12

I wish I could say it's 100% the case in all products that there's a lot of room for competition; you can very easily showcase that that's not the case. Like Google: the functionality behind Google's search meant the UX could be super simple, just type something in and we'll handle everything in the background, and they more or less took a monopoly position on search. But yes, in a lot of software, no one company, no matter how much competitive intelligence they do, even trying to steal ideas from their competitors, is going to corner the market on the best way to do things, because everyone's going to take a slightly different approach and users have slightly different needs and priorities.

49:58

And so, yeah, I think there's so much space right now for so many companies to create these layers on top of the large language models, which will in turn improve the large language models, and possibly have them compete against each other: hey, this is what's missing from your large language model, because my users don't like the output. Say I incorporated three different ones, and my users kept choosing Bard, or Gemini, for this, but ChatGPT for that. It's all quite self-fulfilling and kind of helps everyone out. And that's exactly why there are so many different paths to take right now.

50:40

If anyone's interested in going into the AI industry, trust me, there's room for competition, and I think the more the merrier at this point. Although it might take time, and finding product-market fit for whatever AI idea you have is not going to be "just build it and they'll come", that never happens, if you pay attention when people start to use it and start teasing out the use cases, I think that will help answer, going back to the original question, how much ethical guidance we need to put on top of this. And then you might choose, right? You might choose the tool that actually helps you be a better person, whereas some other tool just spits out the typical default biased garbage that we don't want to put out there. That's exactly the way I think the free market, let's say, or good product management, helps deliver value to the end user, but also possibly enhances the industry as a whole.

51:49 - David Brown (Host)

100%. What do you think we might see in 2024?

52:38 - Mike Nemirovsky (Guest)

So what I see happening in 2024…

53:45 - David Brown (Host)

Disaster.

53:46 - Mike Nemirovsky (Guest)

Yeah, that was a disaster, I know. But I think it had some consequences: they probably took a bit of a step back from commercial aspirations, and maybe that's exactly what their board wanted, because they never wanted to be a commercial operation. It might not have taken them down three pegs, but at least one peg, in the sense that people saw a chink in the armor. And there are already competitors. I can't believe I'm blanking on the French competitor, the competitive LLM coming out of France…

54:23 - David Brown (Host)

Mistral? Is it Mistral?

54:24 - Mike Nemirovsky (Guest)

Mistral, yeah.

54:26

So you've got Mistral, you've got Gemini, and you've got others that are already quite a bit further ahead than people might think in terms of quality. Then, of course, you've got the Hugging Faces of the world, where people have access to these large language models and who knows what they're doing with them. And it's still quite expensive to really run a large language model at scale, with the full pipeline to keep it up to date and training, and actually outputting content as quickly as the more common ones like GPT-4 and Bard.

55:00 - David Brown (Host)

But, sorry, just to stop your thought there: on that, the obvious business model, I'd have thought, is, say, someone like Elon Musk, when his sort of becomes more available, because he always wanted it to be open source and freely available to everyone. But there is a practical consideration, right, of the cost of training and maintaining and running it all. So do you think it'll end up where they say: okay, look, if you're an individual and you just want to use the platform to help you with your work or whatever, you can have it for free, any individual can have it for free. But if you want to connect via an API, it's the API users that pay for access, and that's what funds the platform?

55:48 - Mike Nemirovsky (Guest)

Yeah, that's exactly right. I think that is, for the most part, the model I can see. These LLMs need users as well; the more people using them, the better for them anyway. So for sure, I think. Well, actually, I can't remember, ChatGPT with GPT-3.5 is still free, right? I actually don't remember the subscription model.

56:11 - David Brown (Host)

3.5 is free, and 4 is $20 a month. I have a subscription, but I've actually considered cancelling it, because, ironically, even though I have a podcast about it, I don't use it directly very much anymore.

56:29 - Mike Nemirovsky (Guest)

…going to see a lot of that in 2024.

57:43

And these companies working on these large language models are not immune to what we were talking about earlier: they're all going to take a slightly different approach. The data scientists at OpenAI work a little bit differently from the ones at Google, and the ones at Mistral, and they have different biases and different end games in mind. These will all bubble up to the surface of their large language models, and I think we're going to start getting into the era of LLM wars, right? Like, mine is better than yours. And then, is Elon Musk's the one that everyone is afraid of because it's spewing so much misinformation, or just horrible hate speech? Or not; maybe, because it's open to everybody, it can police that, or something like that.

58:31

So I think we're going to see a lot of that.

58:32

And then I would anticipate even more AI in the software that you typically use. You're already seeing this in most of the software you use on a day-to-day basis, but even more so. And people will…

58:52

…see more legislation come out in 2024…

59:42

There'll be slightly more regulation, and hopefully more on the ethics pieces of this.

59:50 - David Brown (Host)

I think a lot of the ethics are going to come from the ground up. The governments, in the EU and the UK and the US, can put these guardrails in place, which will be very general guidelines. But I think the hard limits are really going to come from industry bodies that already exist. The legal industry has the bar and all these different things, with ethics boards and all sorts of rules that lawyers have to abide by, and AI is going to have to abide by those same rules and regulations.

::

And so that's how I see that playing out, at least on the ground: those bodies are just going to say, well, we have to treat AI like any other lawyer, right? Anything it does has to be in accordance with all the rules. So if you use AI in your law firm, it's like an employee of yours, and if it does something outside the rules of normal ethical legal behavior, you're going to get in the same trouble as if one of your lawyers did it. Maybe. I don't know, but that seems like the natural way.

:: - Mike Nemirovsky (Guest)

I hope so. I hope so. And somewhat related to this, it made me think about another prediction for 2024 that is perhaps a little bit darker, though I really hope there are people working on this right now, and I know there are to some degree: deepfake technology, which is already scarily good. And apparently next year is massive, one of the biggest political years around the world, between the US presidency, the EU parliament, all this stuff.

David Brown (Host)

So, UK, everything, yeah.

Mike Nemirovsky (Guest)

So yeah, you almost have to anticipate, 100%, that there will be a lot of misinformation out there next year through the use of deepfake technology. Which also means there's an opportunity to help safeguard against that using AI as well, hopefully, to be like, oh yeah, this is very likely a deepfake. One shred of, I guess, optimism is that, because of social media, I think more people are hesitant to believe things immediately. I hope. God, I don't know. You know, I'm in a bubble, so the people around me probably are, but I don't know if everybody around the world is. So hopefully.
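As a purely illustrative sketch of what that safeguard might look like: detector_score stands in for the output of any deepfake classifier, and the provenance flag for signed content credentials; no real model, API, or threshold is implied.

    # Hypothetical: combine a classifier score with provenance metadata,
    # rather than trusting either signal alone.
    def label_clip(detector_score: float, has_valid_provenance: bool) -> str:
        if has_valid_provenance:
            return "verified source"
        if detector_score > 0.9:
            return "very likely a deepfake"
        if detector_score > 0.6:
            return "possibly manipulated"
        return "no manipulation detected"

    print(label_clip(0.95, False))  # -> "very likely a deepfake"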

David Brown (Host)

Sorry, did you get any coverage down there about the Keir Starmer thing that happened here in the UK? No? So there was a recording of Keir Starmer, who's the leader of the Labour Party, and it came out and it sounded like he was at lunch or something, swearing at his assistant and all sorts of stuff. It was very quickly and roundly analysed and decided that it was a deepfake, that it was fake, but it was quite interesting how quickly people were on it. In pretty much less than 24 hours it had been debunked.

Mike Nemirovsky (Guest)

That's good. Okay.

David Brown (Host)

But I think that was a test. I think people were probing, just putting something out there to see if it would be accepted and if anybody would pick it up. And it got tested and, you know, roundly decided that it was bad. So, going on from what you're saying, I think that's happening, and I think there are people out there policing a lot of this content now. But I think this will be the last set of elections where we'll be able to trust anything that we see, because four years from now, forget it, yeah.

Mike Nemirovsky (Guest)

No hope. It's just going to be a constant, you know, chase-the-pirates type thing. Yeah.

David Brown (Host)

Yeah, 100%. I mean, there were so many things I wanted to ask you; we could talk for another hour, dude. So much stuff. One thing that annoys me, though, and I don't know if you've noticed this: particularly on YouTube, there's been an explosion of AI-read instruction videos and stuff. If I hear another one of these AI videos where it's just a fake voice reading a script, it does my head in, and I'm just like, oh my God. So there's the deepfake side of it, where some people are maybe trying to generate really good content, and then there's this flood of cheap AI content, because there are so many videos out there going, yeah, you can just generate content. You can do 100 videos a week, put all this stuff out at three to five minutes each, and it's going to fit with the algorithm.

Mike Nemirovsky (Guest)

Thousands a month in income.

David Brown (Host)

Yeah, and it's killing even YouTube now. So I'm sure YouTube will put some sort of tools in to test whether something is AI-generated content, and figure out how to maybe deprioritise that relative to real human content. But yeah, one of my hopes for next year is that there's going to be a big pushback on that stuff, where people are going to go, you know, basically, fuck off. We don't want to hear that.

Mike Nemirovsky (Guest)

No, me too. I don't need more listicles read out by an AI. It's interesting, because it is happening on YouTube, but famously, you know, YouTube is still owned by Alphabet, yet run very separately from the people at Google. Working in the SEO industry right now, I've learned how Google does punish people for AI content, because it's not really useful; it's just copy-pasting, and that's not useful for people in search. So hopefully YouTube does the same thing, where it's like, yeah, this is just really poor quality. At least use a good AI voice. There are much better AI voices than a lot of these videos will have.

David Brown (Host)

Right, yeah, but they're cheap, and this is the whole thing, right? Or they're free, so they're just using the free ones and cranking out this terrible... God, it's terrible.

Mike Nemirovsky (Guest)

So I hope, yeah, I hope that the YouTube team can incorporate something similar. I mean, on the Google search side, I didn't actually know this, or maybe I did but didn't pay attention to it: Google, from the search perspective, actually employed a lot of people, almost Mechanical Turk style, to randomly check content as well. Maybe that's something YouTube already does, but maybe they need to do more of it, and really penalise the people putting out this rubbish that isn't useful and is just clogging up space, so those people don't get rewarded by even accidental views, and certainly aren't promoted in the right-hand navigation.
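A rough sketch of the kind of triage Mike is pointing at, combining an automated "how likely is this AI-generated?" score with Mechanical-Turk-style human spot checks. All the names, scores, and thresholds here are invented; this is not how YouTube or Google actually rank or review content.

    import random

    def review_priority(ai_likelihood: float, views_per_day: float) -> float:
        """Likely-synthetic videos that are spreading fast get checked first."""
        return ai_likelihood * views_per_day

    def sample_for_human_review(videos: list, budget: int) -> list:
        """Send the top of the priority queue to human raters, plus a small
        random sample so low-scoring content farms can't hide below a threshold."""
        ranked = sorted(
            videos,
            key=lambda v: review_priority(v["ai_likelihood"], v["views_per_day"]),
            reverse=True,
        )
        top = ranked[: budget // 2]
        rest = ranked[budget // 2 :]
        return top + random.sample(rest, min(budget - len(top), len(rest)))

    # Example: three hypothetical videos, budget for two human reviews.
    queue = [
        {"id": "a", "ai_likelihood": 0.9, "views_per_day": 5000},
        {"id": "b", "ai_likelihood": 0.1, "views_per_day": 9000},
        {"id": "c", "ai_likelihood": 0.7, "views_per_day": 200},
    ]
    print([v["id"] for v in sample_for_human_review(queue, budget=2)])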

David Brown (Host)

It's like what Spotify did. Spotify demonetised millions of accounts, because all they were doing was putting up ambient, a lot of ASMR-type sounds and stuff like that, and using it to collect, you know, royalty payments. And Spotify were like, this isn't...

Mike Nemirovsky (Guest)

It's just sounds, and it's, yeah, it's not actually creative.

David Brown (Host)

It's just, you know, the same thing over and over. And I think it was pretty easy for them to identify the accounts that were posting thousands of these sound files, and what those accounts were doing was just picking up the long-tail revenue off that, right? So even if you get one or two pence, or cents or whatever, off of plays, because you've got 10,000 sound files out there you're actually making $1,000 a month, or whatever it was. And they were like, no, we're not going to have that on the platform, because we want to take that money and actually pay people who are really creating actual content and music and songs. So maybe, yeah, maybe YouTube is going to be a bit more aggressive about that, and we'll see. I expect that we'll get some.
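Just to make the arithmetic concrete: David's numbers work out if each file earns a fraction of a cent per play across a huge catalogue. The royalty rate and play counts below are invented for illustration, not Spotify's actual figures.

    # Illustrative only: made-up royalty rate and play counts.
    PER_PLAY_ROYALTY_USD = 0.004  # a fraction of a cent per play

    def monthly_long_tail_revenue(num_files: int, plays_per_file: int) -> float:
        """Tiny per-file earnings multiplied across a huge catalogue."""
        return num_files * plays_per_file * PER_PLAY_ROYALTY_USD

    # 10,000 near-identical ambient files, each played ~25 times a month:
    print(monthly_long_tail_revenue(10_000, 25))  # -> 1000.0, i.e. about $1,000 a month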


I expect to see some M&A, some mergers and acquisitions, going on next year. I suspect we'll see some of the smaller companies start to gather together. As the cost of using the APIs and getting access to those core LLM models goes up, they're maybe not going to be able to afford it at such a small scale. So we might see some of that happening, and I think we might also see some of the bigger platforms that have their own models getting involved; they've always had a business model of buying good technology in. So I suspect we might see a little bit of that going on as this becomes more of a business. But we'll see. I have some other thoughts, but anyway.

Mike Nemirovsky (Guest)

Yeah, it's always impossible. It's fun, or sometimes fun, to predict, but we'll want to come back next year; yeah, we'll do this next December and go, right, what did we say last time?

David Brown (Host)

Oh shit, it's totally different than what we thought. Exactly. Right, thanks, man.

Mike Nemirovsky (Guest)

As always, a pleasure. Keep doing it, man.

David Brown (Host)

I need to come and see you, and we can sit on the beach and have, I don't know, sangria or something, and just talk about this all afternoon, and our wives can go do something else.

Mike Nemirovsky (Guest)

Yeah, there's a prediction they'll want to hear about. Definitely, come out to Mallorca. Let's do an AI on the Beach. Yeah, let's do it, why not?

David Brown (Host)

Yeah, let's do that, Mike. Thank you very much. Thank you, David.

Mike Nemirovsky (Guest)

Enjoy the rest of your day.

David Brown (Host)

Enjoy your holidays and we will speak to you soon.

Mike Nemirovsky (Guest)

Awesome. Have a good New Year's. Cheers, bye.

About the Podcast

Creatives With AI
The spiritual home of creatives curious about AI and its role in their future

About your hosts


Lena Robinson

Lena Robinson, the visionary founder behind The FTSQ Gallery and F.T.S.Q Consulting, hosts the Creatives With AI podcast.

With over 35 years of experience in the creative industry, Lena is a trailblazer who has always been at the forefront of blending art, technology, and purpose. As an artist and photographer, Lena's passion for pushing creative boundaries is evident in everything she does.

Lena established The FTSQ Gallery as a space where fine art meets innovation, championing artists who dare to explore the intersection of creativity and AI. Lena's belief in the transformative power of art and technology is not just intriguing, but also a driving force behind her work. She revitalises brands, clarifies business visions, and fosters community building with a strong emphasis on ethical practices and non-conformist thinking.

Join Lena on Creatives With AI as she dives into thought-provoking conversations that explore the cutting edge of creativity, technology, and bold ideas shaping the future.

David Brown

A technology entrepreneur with over 25 years' experience in corporate enterprise, working with public sector organisations and startups in the technology, digital media, data analytics, and adtech industries. I am deeply passionate about transforming innovative technology into commercial opportunities, ensuring my customers succeed using innovative, data-driven decision-making tools.

I'm a keen believer that the best way to become successful is to help others be successful. Success is not a zero-sum game; I believe what goes around comes around.

I enjoy seeing success — whether it’s yours or mine — so send me a message if there's anything I can do to help you.