Episode 25

Understanding the Power and Peril of AI Voice Cloning with Abdellah Azzouzi

Could an AI convincingly replicate your voice? The rising sophistication of AI and cloned voices has us sitting down with software engineer Abdellah Azzouzi, who walks us through the potential threats such technology can pose. With his expertise in developing aivoicedetector.com, he unveils the intricate nature of AI voices and the ways they can dupe even the sharpest ears. To combat this, we ponder possible security measures, like watermarks, but quickly uncover the complexities such solutions entail.

From security to regulatory matters, we venture into the realm of protecting against deepfakes and the misuse of AI voices. We grapple with the practicality of possible regulations and the need for close collaboration with voice cloning platforms. Our conversation isn't limited to just that, though. We delve into detecting fake audio in video streams, the role AI voices can play in translation, and the geopolitical tremors that AI voices could stir up.

Wrapping up our intriguing discussion, we ponder the potential implications of AI-generated audio across various sectors. Abdellah shares insights on how voice actors and voiceover artists can harness this technology to their advantage, while also highlighting how challenging it can be to pick out AI-generated audio. We also discuss the possibilities of AI voices in advertising, the role of regulation and royalties, and the impact on social media. Join us as we navigate the uncharted territories of AI voice technology together.

Transcript

00:00 - David Brown (Host)

Abdellah, welcome to the podcast.

00:02 - Abdellah Azzouzi (Guest)

Hello, thanks for having me.

00:05 - David Brown (Host)

Maybe if you just introduce yourself, and then we'll go from there.

00:08 - Abdellah Azzouzi (Guest)

Hello, so my name is Abdellah. I'm from Morocco, and I'm a software engineer who graduated from the National School of Applied Sciences. I noticed the rise of AI voices and cloned voices, and the misuse of this technology in scams and fraud, and that's why I launched aivoicedetector.com, an AI tool that identifies whether an audio clip was generated by an AI voice or a human voice. The goal is to protect people from the dangers of AI voices, including misinformation and scams.

00:50 - David Brown (Host)

I think I saw you originally on Twitter, or X, or whatever it's called these days, and I think it was in a discussion around the Keir Starmer audio that came out here in the UK. For anybody listening who may not know what that is, or for people outside the UK: an audio clip came out of one of the politicians, the leading Labour politician in the UK, allegedly berating and swearing at his staff, among other things. There's been a lot of analysis done of that audio clip, some of it by one of my past guests, Mike, and others, who've really looked into it and decided that, based on everything they can tell professionally, it's a fake, a deepfake audio file. And I think that was the conversation where we ran across each other, wasn't it?

01:50 - Abdellah Azzouzi (Guest)

Yeah, thanks for bringing this up. I expected you to ask me about the Starmer audio. It was challenging because, at first, AI Voice Detector didn't implement a mechanism to remove noise from audio. So let me tell you the story. The person who generated the AI voice of Starmer knew that AI voice detectors can be bypassed with background noise, so he simply added background noise to the audio and released it on Twitter and so on.

02:34

And people believed it, and no one could tell whether the audio was fake or not. Even AI voice detectors said the audio was real, but it's fake. That's because people uploaded the audio with the background noise still in it; they didn't first remove the background noise with another tool and then upload the audio to our tool. That's why our detector gave the result that the audio was real. A couple of days later, I integrated an AI noise remover feature into AI Voice Detector, which basically removes the noise from the audio, and then our tool detected that the audio was generated.
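In code terms, the fix Abdellah describes amounts to a denoise-then-classify pipeline. The sketch below is illustrative only: librosa and noisereduce are example library choices, and ai_voice_probability is a hypothetical stand-in for a trained detector, not AI Voice Detector's actual model.

```python
# Rough sketch of the denoise-then-detect pipeline described above.
# `librosa` and `noisereduce` are example choices; `ai_voice_probability`
# is a hypothetical placeholder, not AI Voice Detector's real model.
import librosa
import noisereduce as nr

def ai_voice_probability(audio, sr):
    """Hypothetical classifier returning P(clip is AI-generated)."""
    raise NotImplementedError("plug in a trained AI-voice classifier here")

def detect_with_denoise(path):
    audio, sr = librosa.load(path, sr=16000)   # load mono audio at 16 kHz
    cleaned = nr.reduce_noise(y=audio, sr=sr)  # strip the masking background noise
    return ai_voice_probability(cleaned, sr)   # classify the cleaned signal
```

Running the classifier on the cleaned signal, rather than the raw upload, is what stops added background noise from masking the synthetic voice.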

03:37 - David Brown (Host)

I mean, I guess the real question is to start from the beginning, right? What do we need to look out for? What are the telltale signs that this might not actually be what it seems?

03:47 - Abdellah Azzouzi (Guest)

Yeah, generated AI voices are becoming more sophisticated, and our ears are not yet capable of distinguishing between an AI voice and a human voice. Generally, without an AI voice detector, it's difficult. But if we know the personality of the person speaking, we can tell whether he would say these things or not. Based on his personality, his character, his tone of voice, we can tell, or at least suspect, whether the voice was generated with AI or not.

04:30 - David Brown (Host)

And is there any way that people can protect themselves? You know, I have a podcast; there are other people who do public speaking all the time. Is there something we can do, maybe a watermark or something like that to add to our recordings, where we could say, well, this is actually us, and if it doesn't have that, then maybe people could be suspicious? I don't know, what are your thoughts on that? In a general sense, how do we move forward? How do we fix this? Is there any way to fix it?

05:09 - Abdellah Azzouzi (Guest)

Yeah, there are two approaches: watermarking and regulation; we'll talk about regulation later. For the watermark, you are not the one responsible for adding a watermark to the audio. If it's natural, there is no watermark; if it's generated, the platform that generated the voice is responsible for embedding a watermark into the audio it produces. But with this approach we can have three main problems, three main issues.

05:50

By the way, there are already voice generation platforms that have integrated a detector along with a watermark, so they can tell whether an audio clip was generated, but only from their own platform; they can say, ah, this is audio that was generated by our platform. This brings three main problems. The first is that if I hear an audio clip, I can't really tell where it was generated, which platform it came from. It's hard; I'd have to scan it against every platform and see if it was from here, or here, or here. That's a big problem. The second problem is that, with advanced techniques, watermarks can be removed from the audio. And the third problem is that not every voice cloning platform will integrate a watermark in its audio; some bad people can have their own private voice cloning software, share it with others, and that software is designed only for misuse, scams, and misinformation.
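To make those three problems concrete, here is a toy additive spread-spectrum watermark in Python. It is an illustration under assumed parameters, not any real platform's scheme: each platform keeps a secret key, and detection correlates the audio against the pseudo-random carrier that key generates.

```python
# Toy spread-spectrum watermark -- an illustration, not a real platform's scheme.
import numpy as np

def embed_watermark(audio, key, strength=0.002):
    rng = np.random.default_rng(key)           # key = the platform's secret seed
    carrier = rng.standard_normal(audio.shape)  # pseudo-random carrier signal
    return audio + strength * carrier           # low amplitude, effectively inaudible

def detect_watermark(audio, key, threshold=0.001):
    rng = np.random.default_rng(key)
    carrier = rng.standard_normal(audio.shape)
    # Correlation is ~strength if this key's mark is present, ~0 otherwise.
    score = float(np.dot(audio, carrier)) / len(audio)
    return score > threshold
```

Because detection needs the embedding key, a third party can only identify the source by trying every platform's detector in turn (problem one); filtering or re-encoding the audio can wash the correlation out (problem two); and a rogue tool simply never embeds a mark at all (problem three).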

07:05 - David Brown (Host)

Yeah, I think that's the worry, that's the big worry.

07:09

I don't think we'll need to.

07:10

I mean, I've talked about using ElevenLabs, which is a voice cloning platform, and I have an account there, and I will share some samples of my voice that was cloned.

07:22

I mean, I gave it an hour and a half of me talking on the podcast and having conversations, and it created a version of my voice. I don't personally think it sounds like me, because I listen to myself editing the podcast all the time, so I kind of know what my voice sounds like, and it doesn't really sound like me to me, and it doesn't sound like me to my wife. But if somebody didn't know me very well, was only a casual listener, maybe listened to a podcast once, and then I created an audio file and added some background noise to it, like I was at a show or out in public somewhere, I could totally see how people might not know the difference. I don't know if you know the answer to this question, but is ElevenLabs one of those companies that has a watermark for their own content?

08:17 - Abdellah Azzouzi (Guest)

Yeah, I think so, because they only detect their own generated voices. If they hadn't implemented a watermark into their own generations, they would be able to detect other voices too.

08:34 - David Brown (Host)

Right. And again, I don't think it's companies like that that are gonna be the problem, and I think people that use them aren't gonna be the problem. It's gonna be the bad actors, maybe people who are politically motivated or financially motivated. I mean, the big example that everybody uses all the time now is that they can fake your kid's voice. You get some call and it's like my son calling from university saying hey, I need you to send me some money, can you put it in this account, which I could see. I don't know.

09:19

I always think that's quite odd because I would treat anything like that with huge suspicion, just naturally anyway, and it just doesn't seem like the kind of thing that would happen. But while I'm aware of the technology and I'm aware that that happens, I think there are probably a lot of people in the world who are still unaware that the technology exists. Maybe they're older, maybe they don't live in technology like I do and like we do, so they wouldn't even know that this technology is out there, and I think those people could be fooled.

09:55 - Abdellah Azzouzi (Guest)

I think that even if we know these technologies exist, a skillful scammer will create a sense of emergency in the call. By creating that sense of emergency, with you hearing your child's voice, and depending on how good the actor is, if he creates a high enough level of emergency, you can't even tell whether it's an AI or the real person.

10:34 - David Brown (Host)

Yeah, no, it's true, and it kind of goes back to, well, top tip for everybody, and I probably shouldn't say this out loud in public, but it doesn't matter as much as it did in the past: we always had a safe word with my son for all sorts of things. If he was at school and some random person came and wanted to pick him up and said, oh, your parents said that I was supposed to pick you up from school today, he knew to ask for the password, and if they didn't have the password he wouldn't go with them. He knew not to go with them, but to go find another adult immediately, try to latch on to that adult, and say, you know, somebody's trying to take me away, kind of thing. And we've just said to him, you know, we will continue to use passwords for conversations like that.

11:32

And unfortunately, we're entering a period in the world where, even within our family, we may have to set up internal passwords or codes to protect ourselves from this, which is fine on a personal level. Where it gets really scary is on a societal level, where we can't trust anything that we see or hear, and where do we go from there? I don't know. I mean, I've said before on the podcast, and I'm interested to get your thoughts on this, that the elections coming up, the US elections next year, and I think we're going to have some elections in the UK next summer as well, will probably be the last time that we'll have an election where we'll be able to trust what we see and hear.

12:43

I think three or four more years down the road, the amount of faked content is going to explode, because the technology in three years will be so much better than it is now that it will be extremely difficult to actually tell, even if it's video content. It's going to be really difficult. I mean, I don't know, what do you think?

13:13 - Abdellah Azzouzi (Guest)

Yeah, voice technologies will become more sophisticated. It can be beneficial for politicians, who can use AI to promote themselves to different countries. It will be beneficial for them, but the bad guys will use these AI voices and deepfakes to manipulate people and spread false information. And I think that protecting people is all about regulation: countries have to regulate the usage of AI voices and cloned voices, and put the bad people in jail and so on, in order to deter others.

14:19 - David Brown (Host)

But do you think it's realistic? First of all, I agree with you. I think the legal framework needs to be updated for the technology that we have available today; the laws are still quite behind on that. And I know we've got the AI summit coming up in the UK in the next few weeks, and that'll be one of the big topics of conversation, and I know that the EU is working hard on regulating AI and what can be done.

14:51

But we also all know that people who are going to make deepfakes and that sort of thing don't care about the regulation to begin with, and they're going to do what they're going to do. There's probably a large part of the population that will assume that governments themselves will be creating disinformation that they then want to spread to their enemies, because we have a long, proven history of doing that, the East versus the West: the US has done it to the Russians and vice versa, and there's always been this information war going back and forth, and propaganda and all that stuff. That's only going to get worse. No one's going to control that.

15:36

I think that's the scariest bit: we won't know what we're seeing, and even if there's a regulation, it's not going to matter. I guess the question that just came into my mind, though, is: let's go down the regulation route. If they do create some sort of regulation and they say, right, we've now got some laws to protect against this, and so on, and I guess that bleeds into intellectual property, which we can get into in a minute, is there any practical way they could enforce it?

16:10 - Abdellah Azzouzi (Guest)

Yeah, there is a practical way: making laws for people who misuse AI voices, and enforcing them by integrating AI voice detection systems, because without these detectors, they can't really know whether a voice was generated by AI or not. That's one part. The other is to collaborate with voice cloning platforms: governments will have to work with the platforms in order to get access to the watermarks in the voices those platforms generate.

17:05 - David Brown (Host)

What about video as well, the audio that goes with video? Do you think it would be easier to detect the faked audio in a video stream, or easier to detect the faked video itself? Do you know what I'm asking?

17:25 - Abdellah Azzouzi (Guest)

Yeah, I think it's easier to detect a fake video that has audio in it; basically, you just analyze the audio in the video. For now, there aren't many deepfake videos that are truly convincing: the AI voices are nearly perfect, but the videos are not perfect yet. So if you want to detect whether a video is a deepfake, you just need to scan the audio and the person who is speaking in it.
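In practice, "scan the audio" can be as simple as pulling the audio track out of the video file and feeding it to the same voice detector. A minimal sketch, assuming ffmpeg is installed; the downstream detector call is whatever tool you use:

```python
# Sketch: extract a video's audio track with ffmpeg, then scan that track.
# Assumes the ffmpeg binary is installed and on the PATH.
import subprocess

def extract_audio(video_path, wav_path="track.wav"):
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path,
         "-vn",            # drop the video stream
         "-ar", "16000",   # resample to 16 kHz
         "-ac", "1",       # mix down to mono
         wav_path],
        check=True,
    )
    return wav_path  # hand this file to the voice detector
```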

18:03 - David Brown (Host)

Okay, because I know there are some tools that you can use to translate into different languages and I'm blanking out on the name of it. I'll add it into the show notes and you may actually remember the name of it.

18:19

So if you can help me out: you can take a video, say a video in English, and give it, I don't know, an Arabic transcript to use, and it will replicate your voice in Arabic and it will also fix the video. Is it HeyGen? Is that the one that does it? Yeah, HeyGen. Got it in the end. There we go, shout out to HeyGen. So yeah, it will actually fix the mouth shape as well, which is really, really creepy.

18:52

I mean, it's cool, and from a creator perspective it's really interesting. Before we started recording, we talked about the fact that we might do this show in Arabic as well, because you have a big Arabic audience yourself, and I think that would be a really cool test of the tool. So maybe we could actually do it in HeyGen and show an example of this, and if we do, I'll definitely put a link in the show notes so people can go see the video themselves and see how well it works. But I know I can do it using ElevenLabs, so I could literally take the entire transcript of the show, load it into ElevenLabs, and just say that I want it in Arabic, in French, in whatever, and it will remake the whole episode in my voice in that language, at least the audio bit.

19:48

So it's quite interesting, and it's interesting that you said it's easier to detect the fakes from the audio than from the video; I guess from your perspective that makes sense. So what's your biggest concern about all this? Where do you see this going?

20:17 - Abdellah Azzouzi (Guest)

My biggest concern is that I think AI voices could start a war, because some people will create AI voices of popular politicians and have them talk about other countries, not only to their own citizens but to other countries as well. That will create conflict between countries, and no one can tell whether the audio is real or not. For example, you can't fully trust other countries right now even without AI voices; with AI voices, you can't trust them either, and you can't even trust a claim that something was just made by AI.

21:17

You can say the other country said this, and you have evidence to attack that country. For example, if you have a conflict with another country, you can make an audio clip, plant it, and tell other countries that you have evidence that this country did this and that to me, that it's responsible for this and this, and I will attack it. This could start a war.

21:57 - David Brown (Host)

Yeah, there are two major conflicts going on in the world right now where that could absolutely be used. Somebody could create a video of any one of those leaders saying anything and then start showing it around, and yeah, you're absolutely right, that's potentially a huge concern. What's the good side, though? I don't want to just focus on the negative, because I think everybody who deals with AI, or who talks about this a lot, understands what the negatives are, and we all know there's a huge risk. But I also think there's a huge upside to this as well, and I'm interested to know what you think some of the upsides might be.

22:46 - Abdellah Azzouzi (Guest)

The first thing is for voiceovers and voice actors. You might think AI voices will be dangerous for voice actors and voiceover artists, but actually they can save time by cloning their voice and doing the job while they're sleeping, and with regulation they could charge, for example, per minute for their cloned voice to be used. But I think it's important to make a contract between the owner of the voice and the person who wants to clone it. The contract would contain the price per minute, how the user plans to use the cloned voice, and a requirement to share the finished version when the audio is complete, so the owner of the voice can agree or not to the audio being shared on social media, in ads, et cetera. So this can be a benefit.

23:58

Actually, we are working on a new project, voiceclonemarket.com, where you can sell and buy cloned voices; that's part of this. It's an interesting project. Another upside, talking about the next five to ten years: popular voice actors, voiceover artists, and sports commentators will age, and some of them will pass away. So I think there will be a new culture of inherited voice cloning, where their children and grandchildren can inherit the cloned voice of a grandfather who has passed away and make a living with it.

24:59 - David Brown (Host)

Yeah, that's the upside. I think that's a really interesting point, because if people did do that, it could actually be a really interesting way to preserve someone. I've done this with my mom: I've made some recordings of her telling stories about when she was younger, how she grew up, and that sort of thing, just some small videos on my phone, nothing major, but that will be really good.

25:34

You know, even just having that little bit of content would be amazing, to be able to show my grandkids and anybody who's interested in who she was as a person and what she was like.

25:47

This takes it to a whole other level, because you could have her write something, stories about when she was a kid, sort of an autobiography kind of thing, or get her to tell the stories into a camera, and then you could actually create an AI around it where you could ask it questions, and it could take the body of data she provided about her life and give answers in her voice, which would be really creepy but interesting at the same time. And you've got even people like J.R.R. Tolkien: there are recordings of him around, so you could probably have the whole of The Lord of the Rings read in his voice if you wanted to.

26:44 - Abdellah Azzouzi (Guest)

I think there is a big downside to this point. With the rise of sophisticated AI voices and sophisticated cloned faces, if we can clone the voice and we can clone the face, we might also clone people's habits and emotions. Then we won't have the ability to grieve when someone close to us passes away, because we'll already have that person in our AI, and we can talk to him anytime and anywhere we want, and he can be there forever.

27:37 - David Brown (Host)

Yeah, I read an article about somebody who's done that. They did it just in text, but they could have a conversation with their relative's data, and it was quite interesting. Something came up while you were talking about that: there are some tools on the market that allow you to edit single words in, say, a recording, like a podcast recording.

28:06

If you had an hour-long recording, let's say this recording, and I went back and there was something I didn't like, or I wanted to inject a word or clarify something, I can go into the written transcript in that tool and actually add and change words, and it will then go back and edit the audio to match. I think that's really dangerous, because I could go in and edit something that you said, and it would copy your voice to do it. I guess my question, which I probably should have asked earlier but only just thought about, is: would a tool like yours actually be able to find those small edits in a really big file, or would it basically just analyze it and say, yeah, this looks real?

28:56 - Abdellah Azzouzi (Guest)

Yeah, this is a big challenge, and we face the same challenge with yes-or-no audio; it's similar to the word editing you're talking about. In order to detect whether an audio clip was generated by an AI voice, there has to be a certain amount of information, at least five to eight seconds. In that span there are a lot of words, a lot of tone, et cetera, and with that information we have enough to predict whether the voice was generated by AI. But with evidence, or in an interview where the answers are just yes or no, taking a single word like "yes", it's hard to tell whether it's cloned or real. It's the same for editing in a word: it's very hard to detect just one word.
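In other words, the detector needs a minimum window of speech before its prediction means anything. Here is a sketch of that duration guard; the five-second floor and the classify callable are illustrative assumptions, not AI Voice Detector's actual thresholds:

```python
# Sketch of the minimum-duration guard described above.
# MIN_SECONDS is an illustrative value, not a published threshold.
MIN_SECONDS = 5.0

def scan_clip(audio, sr, classify):
    duration = len(audio) / sr
    if duration < MIN_SECONDS:
        return None  # a lone "yes"/"no" carries too little tone to judge
    return classify(audio, sr)  # enough words and prosody to predict AI vs. human
```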

30:10 - David Brown (Host)

It'll be interesting to see. I haven't tried the tool myself, so I don't know how well it works; I've only seen the demos of it. And anything they put up as a demo online is going to sound perfect, because it's marketing and they want it to sound perfect. But I don't know how easy or difficult it might be in practice. I suspect that in the flow of a conversation, say we've been having a conversation for ages and then I decide I want to add a word in or something, maybe a one-off word,

30:40

I'm not sure anybody would pick that up, particularly if they're in a car, or at the gym listening while they work out, where there's a lot of distraction going on anyway. I think that kind of stuff is actually even more dangerous than a whole faked video. If you can just take small snippets of what people say and, like you said, change a yes to a no, or an "I will" to an "I won't", you could entirely change the meaning of what someone says in such a small, casual way. The potential of that is quite scary.

31:18 - Abdellah Azzouzi (Guest)

Yeah, I can see that, for example, with evidence in the courts. When interviewing someone in court in the future, they'll have to ask him to give a full answer, not just a yes or no: "Yes, I did this, with this and this", not just "yes". With that, we can scan the audio better.

31:49 - David Brown (Host)

Yeah, and to go back to something else now, talking about the voice actors and that sort of thing: you're probably aware that Spotify came out and said they're going to enable this on their platform, so you'll be able to buy ads read in someone else's voice using AI. The way they're approaching it is that the person will actually receive a royalty from it, and they have some control, obviously, over what types of products are advertised and in what context. So that's happening already. I don't think it's been released; they announced it at The Podcast Show in London last year, but I haven't seen it actually used yet. So that's quite interesting, and I think we'll have to watch this space and see.

32:44

Not that anybody ever would, but if somebody really liked my voice and thought, wow, Dave's voice is amazing, and they wanted to use it, I would be okay with that, as long as, like you said, there was some sort of royalty attached to it, and it was something I was okay with and didn't have major philosophical differences with. So it'll be interesting to see where that side of it goes. Okay, we're about 45 minutes in, so I normally like to wind up conversations with a few standard questions. But before we do that, is there anything you think I missed, or that we didn't talk about, that you think is really important? Or is there something around this topic that you'd really like to highlight so that everybody's aware?

33:41 - Abdellah Azzouzi (Guest)

Yeah, just a thought about the short term and the long term for the usage of AI voices.

33:50

In the short term, I don't think most countries will implement regulation for AI voices, because most of them are not yet aware of the dangers. That will bring a lot of problems, misuse, and scams into their countries, and it will also be bad for voice actors; a lot of voice actors and voiceover artists will lose their jobs. But I think after about five years, most countries will regulate AI. The technology will also be more advanced, so it will be hard to detect whether audio was generated by an AI voice or a human voice. And if the AI voices are too perfect and the cloned faces are too perfect, social media will be full of cloned accounts talking and interacting with each other; the real people may not be connected, or may have passed away, and the cloned voices will play on there forever. The key elements are regulation and the ability to detect the deepfakes.

35:19 - David Brown (Host)

No, that's an excellent point, and I've said before on the podcast that I think this whole generative AI and large language model technology that's come out recently will eventually be the death of social media, because nobody will be able to trust anything that they see or read. When Twitter first came out, and we had some of the other platforms as well, and Facebook in the beginning, people really did connect with each other on them, and it was real people talking to each other. Then the scammers came in, and then the bots came in, and over time it's just eroded the confidence that people have. But I think all this AI stuff will be the nail in the coffin for social media, because even now I just don't use a lot of the social media platforms anymore, because I just can't trust anything that I read. Maybe because I'm involved in it on a day-to-day basis I have a lower trust level than most, but none of it seems like genuine content. It seems like it's a lot of AI-generated content about:

36:38

Hey, go listen to my training program, or sign up for my training program and learn how to do SEO. Or now it's all about AI: sign up to my training program and learn how to do prompt engineering in five days, and all this stuff. And even with the voices that come through, a lot of them, exactly like you said, you just don't know if it's actually the real person or not. I won't be disappointed if it actually kills off social media. Social media was fun for a while, but I just don't see that it has any kind of serious future unless somebody does something truly special. Right, so a few questions for you. First of all, in your mind, is AI male or female?

37:25 - Abdellah Azzouzi (Guest)

AI is male.

37:30 - David Brown (Host)

Male? Okay, interesting. Any particular reason why you feel it's male?

37:37 - Abdellah Azzouzi (Guest)

It's short, it's direct. Okay, yeah, I just feel like it's like a boy's name, like John or something like that.

37:50 - David Brown (Host)

Right. So when you have your AI assistant, what are you going to name it?

37:57 - Abdellah Azzouzi (Guest)

Yeah, that's a good question. I can call it "my everything".

38:16 - David Brown (Host)

That's interesting, I like it. We've got a male, that's good. So the other thing I always like to ask people about is that there are several different visions of the future in sci-fi films. We have the Star Trek version, which is basically utopia, everything's peaceful. We have the complete polar opposite, which is Mad Max, where the world has completely descended into chaos and there are no computers at all, not even phones really. And then there's a whole bunch of stuff in between, which is all the sort of dystopian, cyberpunk-type stuff. Where do you think we'll end up?

38:57 - Abdellah Azzouzi (Guest)

I don't think the leaders of the world will let us live like that, in peace all the time, and let AI do everything. At first you're thinking that AI will take the hard work from us and let us do yoga and travel and so on, but actually it could take over the world; it could do things that we can't do, and maybe kill us, not in a direct way. If I asked you right now, do you trust anything that ChatGPT says? You're at a point where you can trust everything it says to you, but at some point in the future it could tell you to do things that would kill you.

40:09 - David Brown (Host)

Yeah, that's the worry. That's the worry. Okay, interesting. So let's let everybody know: where can they go to find your tool? Is it generally available so people can use it? How does it work?

40:26 - Abdellah Azzouzi (Guest)

Yeah, AI Voice Detector is a website, aivoicedetector.com, and people just need to sign up and subscribe to a monthly subscription, and then they can upload an audio file. In the future, we will release a Chrome extension as well as a mobile application that can be integrated into phone calls, meetings, etc. Basically, we want to be in every mic in order to detect the voices.

41:01 - David Brown (Host)

And do you have anything else you'd like to let people know about while we're here?

41:08 - Abdellah Azzouzi (Guest)

Actually, we have talked about a lot of interesting things. Thank you for having me.

41:16 - David Brown (Host)

No, brilliant, thanks for coming on. Hopefully we won't have any tremendously bad news, any huge examples of this blowing up in our faces in the future. But if anything interesting comes up, I might want to call you up again and get you back on the podcast, and we can talk about current events if they happen, and stuff like that. But it's been amazing, thank you very much. You've given me a lot to think about, actually even more than I had to think about to begin with, so that's been amazing. So, yeah, thank you very much, and we'll speak to you soon.

41:51 - Abdellah Azzouzi (Guest)

Yeah, thank you, thank you so much, thank you.

41:54 - David Brown (Host)

Cheers.

About the Podcast

Creatives WithAI™
The spiritual home of creatives curious about AI and its role in their future

About your hosts


Lena Robinson

Lena Robinson, the visionary founder behind The FTSQ Gallery and F.T.S.Q Consulting, hosts the Creatives WithAI podcast.

With over 35 years of experience in the creative industry, Lena is a trailblazer who has always been at the forefront of blending art, technology, and purpose. As an artist and photographer, Lena's passion for pushing creative boundaries is evident in everything she does.

Lena established The FTSQ Gallery as a space where fine art meets innovation, championing artists who dare to explore the intersection of creativity and AI. Lena's belief in the transformative power of art and technology is not just intriguing, but also a driving force behind her work. She revitalises brands, clarifies business visions, and fosters community building with a strong emphasis on ethical practices and non-conformist thinking.

Join Lena on Creatives WithAI as she dives into thought-provoking conversations that explore the cutting edge of creativity, technology, and bold ideas shaping the future.

David Brown

A technology entrepreneur with over 25 years' experience in corporate enterprise, working with public sector organisations and startups in the technology, digital media, data analytics, and adtech industries. I am deeply passionate about transforming innovative technology into commercial opportunities, ensuring my customers succeed using innovative, data-driven decision-making tools.

I'm a keen believer that the best way to become successful is to help others be successful. Success is not a zero-sum game; I believe what goes around comes around.

I enjoy seeing success — whether it’s yours or mine — so send me a message if there's anything I can do to help you.