AIAW Podcast

E128 - Generation AI - Alexander Noren

May 02, 2024 Hyperight Season 8 Episode 14

Join us on the AIAW Podcast Episode 128 as we welcome Alexander Norén, esteemed journalist, news anchor, and economics reporter at SVT. In this episode, we dive deep into the "Generation AI" series, exploring the transformative impact of artificial intelligence across various sectors. Alexander shares his unique insights from his journey through the series, from the surprising artistic abilities of AI to its potential in revolutionizing education and the job market. We'll discuss the most impactful interviews and revelations from the series, including the ethical dilemmas and potential risks that AI poses to our society. Whether you're intrigued by the idea of AI as a creative partner or concerned about its implications for the future, this episode offers a rich discussion on the possibilities that AI holds for shaping a dystopian or utopian future. Tune in to uncover the highlights and profound moments of the documentary, as well as the personal and societal impacts of AI, as seen through the eyes of a leading voice in Swedish journalism and economics.

Follow us on YouTube: https://www.youtube.com/@aiawpodcast



Speaker 1:

The Data Innovation Summit stage last week. We had a conversation, and she said something that really brings home what we're talking about right now. When it comes to data and generative AI, it's almost like you need to split the use cases and value down the middle. On one side, we're talking about the human in the loop, the companion. You can never take away accountability; this is one way we can move into this, and this is great. And then we have enterprise-scale AI, where, the way she framed it, you should worry about the safety nets, because when you want to place accountability, when you want to do something at enterprise scale, then we have a very different ball game, I guess.

Speaker 2:

And I mean, then you can imagine SVT, Swedish Public Television. They wouldn't let Ilex do anything without knowing for sure that it's 100% true, and blah, blah, blah. So I think that's why we're rushing slowly, yeah, careful up front, but we're keen to discover things and explore things.

Speaker 3:

Back to us. So Ilex was this kind of avatar or thing that you created before, where you tried to have some kind of companion or internal... What was it, exactly?

Speaker 2:

Yeah well, my co-worker, I would say. But you have a good metaphor in talking about him, in this case, as an intern, my smart intern. And we're going to get much more back into that.

Speaker 3:

But after this Generation AI series, you said you basically changed, at least a bit, the way you work. You're trying to get more AI into your daily work somehow.

Speaker 2:

Well, it's like we've taken the genie out of the bottle and it's impossible to put it back, because once you've seen what you can do, I wouldn't want to.

Speaker 3:

I can just share an experience from yesterday. It was a holiday in Sweden, and I was sitting with a neighbor. They have a daughter, 10 years old, who has a bit of a learning disability, and still she was very passionate about certain things, like the great era of 18th-century Sweden and such. She hadn't discovered ChatGPT or Gemini from Google, and I just showed it to her and created an account for her. She had a bit of a hard time writing, but you can speak directly to Gemini, and it works surprisingly well in Swedish as well. She just started asking it questions, and for her it was such a mind-blowing experience, being able to get answers to anything, and what extreme levels of education you can have when you can just ask an oracle like that any question and get an answer so quickly. That was really extreme, you know.

Speaker 1:

But I think what you said, the genie-out-of-the-bottle experience, comes when someone, in a private or personal setting, decides to sit down and have a go: I do something, I build something that is for me. That's when this aha moment happens. I don't think you can understand it otherwise. A lot of stuff is going on in the news, and we can now get some of it by looking at the series and all that, but I think the real experience is when you build Ilex or something like that.

Speaker 2:

Exactly. That's the kind of moment, at work or at home. It was, I mean, very, very interesting. One of my kids, my son Jonas, is 17 years old, and you would think that he would be very much at the front edge of everything, because he lives his life on TikTok et cetera, and I'm just this old man who doesn't understand anything. But I'm the one coming home every day talking about, hey, have you seen what you can do with

Speaker 2:

AI. And he says, you always talk about AI, dad, so get out of here. But then I showed him how you could use a chatbot to help with a math assignment, and he was kind of impressed.

Speaker 2:

I would say. He wouldn't tell me, he would never tell you that. But when he actually told me he was impressed was when I helped him write a complaint letter to a travel agency, because his bus on the ski trip was so hugely delayed that I thought he should be compensated. And then I just prompted it: this is the case, this is what we want. We want to give the impression that we know everything about the law on travel compensation, blah, blah, blah. And you should be very formal and correct, but also make sure they get the impression that it would cost more to reply to this person 15 times than to just pay the money.

Speaker 2:

And immediately we got this incredibly well-written letter that referenced the exact places in the law, and they were correct. I checked it and sent it away. And then he said, Jesus, can you do that? If you ask it the right questions?

Speaker 1:

It will. Cool, eh? And then he was a bit impressed, and then we can just extrapolate endlessly on the applications of this, of course.

Speaker 2:

Yeah, and I see this as a way of, I hate to use the word democratize, because it's so overused, but this truly does it. Because in correspondence with authorities, with businesses, with lawyers, not many people are up to writing such letters. No, and we're uncomfortable and scared, and we always end up on an uneven playing field because of this, because of our language.

Speaker 2:

Many people don't even understand what it says when they receive such a letter. And then you could just put that letter in and say please tell me in plain Swedish what the fuck is this all about?

Speaker 1:

Right, you can actually put that in and then get it explained. We should use that more.

Speaker 3:

Yeah, what does it mean for me? And I'm eager to ask whether Ilex is more objective or more biased than you yourself.

Speaker 4:

But I'm going to wait with that question until later.

Speaker 3:

And let me first welcome you here, Alexander Norén, senior business correspondent at Swedish Television, but you also have, I guess, a special mission to cover developments in tech, exactly?

Speaker 2:

I used to be the tech correspondent for a couple of years, and now I'm the business correspondent with a special mission to take care of tech as well.

Speaker 1:

But tech also includes the Nobel Prize and different things like this.

Speaker 2:

No, that's just an extra. My career is developing this way: I do more and more for the same salary. They just say, hey, we want to promote you, you can be this and do this as well. Right, oh, you're a sucker.

Speaker 3:

You have AI to help you with that.

Speaker 1:

Oh, and I can see how the boss goes: oh, you're so good at this.

Speaker 3:

That's why I need AI. Yeah, you need AI. But you're also an author of several books, right? What was it called, Högon Hus, right?

Speaker 2:

On Högon Hus, right, on the real estate market. Exactly, how to house-flip successfully, the theory behind making a deal in the housing market.

Speaker 1:

Making, what do you call it, a career? What was the Swedish word?

Speaker 2:

Bostadskarriär, yeah. Housing career, or something like that, in English. I don't know if that's right; I think house flipping is more the term.

Speaker 1:

The English term. I should have read that last year. But the Swedish term, you know, the Stockholm term, is bostadskarriär. Exactly.

Speaker 3:

And other books: Nudge, så funkar det, right? Also, how should you call it, economic psychology in some way, yeah?

Speaker 2:

Behavioral economics, economic psychology. But I recently read a piece where psychologists lamented: hey, these economists are writing about things that we've been researching for 100 years, and now they've given it a new name and it's become popular. But well, okay, finally economists are incorporating how people actually work and act into their models, how change really works, exactly. So that's behavioral economics, and that has fascinated me, so I've studied it quite a lot and then written one or two books about it, I would say.

Speaker 1:

You've been very productive. Have you done all this while, at the same time, doing your normal correspondent work? And that was all before AI, before

Speaker 2:

AI Wait and see how productive.

Speaker 1:

It is on the home front too. How many kids do you have?

Speaker 2:

Three kids, three kids and a wife. Yeah, and two cats that I have a difficult relationship with. Yeah, well, with one of them.

Speaker 1:

And in the series, I could spot there's a skier guy in here, because you could sort of see, oh, he knows what he's doing. He hasn't rented that stuff; that's his own gear. You're a skier yourself, I guess.

Speaker 2:

Yeah, that's actually my favorite pastime, my favorite hobby, and in my post-career I see myself running a... yeah. But I've understood from people who have done that that you shouldn't run a ski lodge where you actually have to serve food and clean the rooms, because that keeps you busy 24 hours a day.

Speaker 1:

You could link this to Davos, you know. I think there's an opportunity for a segment here, right?

Speaker 2:

Yeah, so something that doesn't take that much work, where I could sit in the Alps writing, perhaps with the help of AI, and do just the two magnificent hours of skiing per day that are worth getting up to the mountain for.

Speaker 3:

That's my dream. But speaking about AI, I think one of the special purposes of you coming here today is that you've spent some time on an awesome piece of production called Generation AI, a documentary series on SVT that I also personally had the pleasure of being part of. I think it was really well produced, and we're really eager to hear more about why you started doing it, and some of the key highlights from it.

Speaker 1:

I mean, you, just as Henrik and me, or the guests you met. I want to be a fly on the wall in some of those interviews. What were you feeling, what were you thinking, you know?

Speaker 2:

Yeah, hit me, I'll tell you.

Speaker 3:

Henrik and me, you know, when we have these podcasts, we did it just for fun to begin with, but it's surprisingly educational to sit on the other side of the table and just learn from the leading people in AI and hear their views. So I guess you have an awesome overview and insight now from speaking to these leading people in AI, and we're really eager to hear more about what you talked about and the themes of the six episodes. It's almost like doing research and a workshop for the next season.

Speaker 2:

Yeah, I'd love that. I've actually had viewers emailing me and saying, where's the next episode? And I said, well, there are six episodes, which one of them did you watch? I've binge-watched the whole thing, I mean, when is season two?

Speaker 1:

Now, okay, I hadn't thought of that, but thanks for asking, because we have Anders on the brain.

Speaker 2:

Yeah, so you'll help me now.

Speaker 1:

Because I see the connection here. He was doing the psychology and the brain thingy, and you can do the AI thingy. That's brilliant, yeah. So if we push this out, I'm sure we can try to get some feedback and inspiration from the data and AI community; they could pitch in as well, perhaps.

Speaker 3:

Awesome. But before we move into all of these topics, we'd love to hear just a bit about who Alexander really is. Perhaps you can describe a bit of your background and passion and what led up to who you are today.

Speaker 2:

Yeah, good question, the big tough one. I'd start out with saying that one defining moment was when I, with my very good grades from school, I was... what's the English word for plugghäst? You know, when you do all the... Nerd, nerd.

Speaker 2:

Yeah, I was kind of the nerd playing the clown in the class. Studious, maybe, yeah, studious, having two of these roles that are available in a classroom. And I could choose. I wanted to become a journalist, I think, but I also thought that Handelshögskolan, the Stockholm School of Economics, seemed to be a place where you could learn a lot about economics, and I also like economics, and I just wanted to go somewhere where I could get paid off for having studied so much. So I'll take something where you have to have high grades to get in. That also says something about my upbringing and how I prioritized.

Speaker 2:

And in hindsight I would say it was a good decision, because I sure did meet a lot of teachers and learned a lot that has been good for me to know. And it was a cool environment, with all these other smart kids that you get together with. And we had a lot of fun, because I immediately went to the student union and parked myself in the cellar room, in the basement of the student union, where they do radio and the student magazine. And back in those days there was an event called Media Expo, which was kind of a fair where businesses meet students and try to recruit them, you know. And the internet was new, and I thought, okay, this is cool, and I became the project manager for it. That was a leap into the internet business. So alongside my studies I worked as an internet consultant, as it was called back then.

Speaker 2:

This is the 90s? That is the late 90s, yeah. What year model are you? I'm born in '75. The same, I'm born in '75.

Speaker 1:

Yeah, I could almost hear it.

Speaker 2:

This is exactly the same trajectory. And now we're into... it's interesting that we've ended up in this part of my life, because I remember how, back then, there was a window of opportunity that was open, and I shut it, and I've regretted that ever since. Which one? Because, together with a friend, we had an idea for a website that would compare prices.

Speaker 3:

You know that I did that in the 90s. No, did you? The buyer's guide. The buyer's guide, before PriceRunner. Yes, yeah, exactly.

Speaker 2:

Well, the buyer's guide before PriceRunner; this was Pris Pressen, before PriceRunner, exactly. Yes, yeah, and we ended up as one of the main featured web pages on this portal called Torget, one of the biggest in Sweden.

Speaker 3:

I had a big collaboration with Torget as well.

Speaker 1:

Yeah okay, because you've been in the same internet business, oh so cool, yeah.

Speaker 2:

And then we started getting huge traffic, and I realized, oh God, we have to take care of this. This can't just be a pet project we do between classes. We have to do something, and we couldn't figure out how we would make the time. So what did we do? We shut it down, put it on pause, because we had to get through our studies and get a degree.

Speaker 1:

And I mean Jesus, you could have gone the other way.

Speaker 2:

Now this sounds so strange these days, when the people at that school are keener than anything to go into startups, and if you're not a startup founder, then you've failed. I did the complete opposite.

Speaker 1:

But born in '75, it wasn't considered as cool to go the startup route back then. I remember this vividly: oh, you go to Handels, and I want to go to McKinsey, because from McKinsey I can get... That was what you were supposed to do. No, that was the cool thing, not only what you were supposed to do. So the management consultant route, to get a lot of experience, that was in vogue, I would say.

Speaker 2:

Thanks for giving me this excuse for shutting that window back then. It's so different now, by the way. But I console myself with one fact that I've learned later on, and that is that it could have turned out badly anyway, because it's not about being first. It's not the first mouse that eats the cheese, and I'm putting a toothpick in a piece of cheese here, because, as it happens, I actually have cheese here.

Speaker 2:

It's not the first mouse that gets to eat the cheese, because, yeah, you know. It's execution, it's perseverance, it's failing and then trying again, and I don't know if I would have done that. So yeah, but it feels like a defining moment that says something about me back then.

Speaker 1:

But do you have a double degree? I mean like journalism and economics?

Speaker 2:

I worked my way into the industry. My first extra job was this internet consultancy thing, where I was a product manager and tried to explain to big Swedish corporations that didn't know anything about the internet why they should have a web page. After that I did student radio, and another extra gig was at a local radio station, and from there on I had that as my side gig. And then after graduating I took a job. I wrote my thesis on the subject of how the internet changes the music industry, and this was before Spotify was even born.

Speaker 2:

This is the Napster times. Yeah, Napster time. And we went around the world meeting the most interesting people at the forefront of startups exploring the new online music business, a bit like going around the world doing a documentary about AI. None of those people are still in that business; none of those companies still exist. But it was interesting to see, even then, that an idea like Spotify wasn't exactly original in itself. People had been trying it, more or less. It's a combination of doing it the right way at the right time and having, I don't know, a slight bit of luck as well, perhaps.

Speaker 3:

And a lot of execution and patience to do it for a long time.

Speaker 1:

Yeah, doing the right thing, and the grit, and the money thing. There are so many facets that in the end play together for a hyperscaler to really be the number one, of course. So yeah, it wasn't enough to have the idea of how it was supposed to be done. And do you remember your first day at SVT? How did you end up at SVT?

Speaker 2:

I worked at Swedish Radio; that was the closest step for me. And then, I still don't understand why, if you work more than about 11 months, you get sacked, because of the labor laws in Sweden.

Speaker 2:

For international viewers, I understand it's very strange for you, but employers are very afraid of hiring someone permanently, because they're afraid they will never get rid of them. So if the genius you just employed turns out to be a real slacker who never turns up to work, you'd practically have to put a bomb in your offices to get grounds to sack him. Somehow they're afraid of that. So I did a good job, but it was still, okay, see you in two years, when you can come back and start a new period of 11 months. So then I went over the fence to the other building on the other side of the park, Swedish Television, and started as a program host for an afternoon news talk show that was broadcast on the digital TV network SVT24, which almost no one watched, but it was a good way to start learning television. When was this? This was about 20 years ago.

Speaker 1:

SVT24 as a brand has been around.

Speaker 2:

Yeah, yeah, and so that's how I ended up there. And they would have said hello and goodbye, come back in two years, to me as well, because of these Swedish labor market laws that are interpreted this way by some employers, if the tsunami hadn't happened. Really? Yeah, very strange. But then they discovered they had a gap: newscasts were not broadcast during the night, and the tsunami happened during the night, Swedish time, so SVT was very late in reporting on this very important event.

Speaker 2:

So all of a sudden there was a new slot in the roster that needed to be filled. They had to hire someone who would sit up at night, ready to do a newscast just in case, and that one turned out to be me. And then I did that for a while, until I found a way to migrate over to the other side of the 24 hours, where you perhaps want to be.

Speaker 1:

There was a major event; it was the foot in the door. Yeah, but it's a major, compelling event. There's something that makes people change.

Speaker 2:

There's also a learning in there: when you want change to happen, it probably takes something major. Awesome. And you've stayed at SVT for 20 years now? Yeah, because I've changed jobs many times at SVT, mostly as a program host, news anchor in the morning, in the evening, but lately also getting more specialized and getting back to my roots, back to economics and business, and now as a business correspondent, and during the years as a tech correspondent. That was very, very... I mean, I could explore things. I had very free rein, very good leeway to do what I wanted and just go out there and explore things. It was so super cool.

Speaker 3:

I mean, I think you're a very famous person these days, and I've always been impressed by all the episodes you've made. So good work with all that. But with that, we're very eager to get into the big topic as well, Generation AI. Perhaps you can start by speaking a bit about how the idea got started.

Speaker 2:

Well, to be honest, it started a year ago with our previous documentary series, Kryptokungen, or The Crypto King. On the surface it was about cryptocurrencies, but underneath it was about what money is, and how a monetary system really works. And then we felt, okay, we want to do something again. What would people need to know more about? What is also, or even more so, shaping the times we're living in? AI, of course.

Speaker 1:

And is this... let's see, if it's a year ago, it's after the big boom with ChatGPT coming onto the market in November, exactly. So from November to March.

Speaker 2:

So then it was obvious that this is happening, and this is putting AI in front of a broad audience, and now we have so many questions that have to be answered. I mean, of course, as a tech correspondent, I'd been covering AI and what the implications might be for the labor market or research and science before, but it wasn't in your face in this way until ChatGPT came and everybody talked about AI, and you really wanted to understand what's behind this.

Speaker 1:

Really. And, as a reporter, getting to the point of pitching a real series, really having a concept.

Speaker 2:

This is different from having one show, yeah, or doing a news piece or something. No, but I was helped by having done this crypto series first, and it was a success, and we managed to do it in a short time but with high production quality. And for the news department at SVT, doing documentary series is not very usual; I mean, it's kind of new.

Speaker 3:

It's a big investment, right.

Speaker 2:

Yeah, but I'd say it's very cost-effective. You don't get that much quality TV per krona at other places as when we do it, especially when it's me and my team, with Marco filming and editing, because we're very tuned in together.

Speaker 1:

When you do it right now, like the real team doing this show, how many is the core team? Is it two, three?

Speaker 2:

Me and Marco Nilsson, photographer and editor, who does the post-production, a hundred percent. And then John, the editor, not a hundred percent, because he has his other job in parallel, except during a couple of weeks when the three of us work intensely together to cut it all. So I would say we're not more than that. So two and a half, yeah, exactly. That is cost-effective, that is very cost-effective, especially when you have gotten to know each other so well that you kind of share a brain, yeah.

Speaker 2:

So then we wanted to do something again together, and then AI was like a no-brainer. A small sidetrack, I'm just curious: how do you pitch an idea for a new show?

Speaker 2:

You have to have some track record if you want to do it on short notice, on the side of the ordinary budget, because then some boss has to find another hole to take the money from and then trust you to deliver on it. So I guess that having done it once helped, and also perhaps that I had, for several years, gotten interviews with people who are well known and are good names, which helps to get new ones. So you had the credibility that you could pull it off. And then, in the end, is it like a one-hour pitch?

Speaker 2:

Very concretely? Oh no, it's more like an elevator pitch that has to be put down in some half-pager, and then bosses have to talk to bosses. And then they got back to me a week later and said, we'll manage this somehow, just start working, this is a cool idea.

Speaker 3:

Get going. That's nice, awesome. And what was the plan originally to start with?

Speaker 2:

Like six episodes? Two or three, but then we discovered that there was so much we wanted to cover. We also think it's good to be very clear, having distinct angles on each episode, so that one would be about school and education, one about jobs, one about the dystopian vision, one about the utopian, et cetera. So you could go in being interested in just one of them, and then, liking the way it's done, maybe you would be interested in watching episode two or three. You could jump between them, back and forth. And that's how it worked for me, because I found the episode you were in, and that was pretty cool.

Speaker 1:

And then I went into the app and, like, oh, there are episodes before this one.

Speaker 2:

Okay, now I'll start from the start. So yeah, and that's very good, I like to hear that, because it's a new way of thinking, and we're experimenting our way to a format that works in the play environment. Because now, at SVT and every broadcaster that wants to survive, you're not doing TV for broadcast anymore; you're doing TV to be successful in an online, on-demand environment. Exactly.

Speaker 2:

So first they said, okay, we're doing this just for SVT Play. And I knew from the beginning it wouldn't end there; they'd want to put it out in the broadcast schedule as well, and sure enough they did. And then I just thought, okay, that's great marketing, great promotion for the series on SVT Play. So at the end of every piece that was broadcast, we had this trailer that said: you like this? Well, then you have these episodes on SVT Play.

Speaker 1:

But did you have stats on this? Because I was thinking about that. I found the first one in the broadcast schedule, watching Anders, but then I did exactly this: I was streaming, binge-watching the way you do in an app, right? So I would bet that you can find that behavior pattern here.

Speaker 2:

I believe in it. Yeah, that's my thesis anyway, and I don't know if we have any stats on it. But I've noticed it from other people as well, who come up to me on the street saying, oh, what a great program you did about AI. Okay, you mean the series? No, I just saw something on TV yesterday. And okay, well, then you should know you have more episodes on SVT Play.

Speaker 3:

Do you have a special way to make a series like this? Is it about the people you meet and interview? Is it some way you form the discussions? Is it some kind of message you want to get out, or if you were to give some advice for people who want to do something similar?

Speaker 2:

What would you say? Well, since you're doing video, you're doing TV, sort of, you have to think about what could be more than talking heads talking into a camera, because those kinds of documentaries, however much you try to edit them to be spicy, just end up as important people talking to a camera. So, finding scenes that are actual scenes: where can we be, to show and tell what is and what could be? What is the image illustrating the theme we're talking about? To think scenographically, to think you're doing a movie. How would you do it? Would you just have a voiceover telling the story, or would you have people doing things, telling some parts of it for you? You have to think like that.

Speaker 1:

That is one thing we're trying to think of. So storytelling, and also understanding how to tell the full story with B-roll and everything like that, exactly.

Speaker 2:

So, for instance, that's why, among other reasons, we went to Bletchley Park for the AI Safety Summit. First of all, it's newsworthy: it was the world's first, bringing together so many heads of state and business leaders to talk about AI safety. But also, as cleverly as they thought themselves, they put it at Bletchley Park, with its history as the birthplace of the computer, and that gave me the story. So we were down there telling that story, and that was a scene.

Speaker 1:

But it's so clear when you say it like that, because when you meet Max Tegmark, okay, you need to go over to his joint. You need to get that feeling, you need to get the whole American vibe. Then you went to Davos to get that vibe. Yeah, it's so clear.

Speaker 2:

Yeah, of course. And with Max Tegmark, going to his joint was the only scene that we could get, apart from sitting in his office, because I was really trying. We were there in Boston, and first of all, he's not very easy to reach. He's a very easygoing person once you start communicating with him, but he doesn't answer all your emails, and I've learned that you have to send a couple of them before you get an answer. And I wanted to go to his home. Because actually, at his home, he started this...

Speaker 4:

I don't remember... the organization... the Future of Life Institute.

Speaker 2:

Exactly. So I was thinking, okay, maybe we could be in the living room where you all gathered for the first time, blah, blah, blah. But he had to get acceptance from his wife, and I understood between the lines that it wasn't a popular idea back home. But I kept pushing gently, all the way until we were actually sitting in our rental car in Boston, saying: hey, Max, we're on our way to your suburb now, just to film some B-roll. We'll actually be in your neighborhood in like 45 minutes. Could we just say hello to you, get some B-roll of you standing on your lawn, perhaps taking out the garbage or something? And I hear him saying: okay, well, I'll ask. Not such a good idea, I'm afraid. We'll do it at the office tomorrow as we planned. Okay. So we really tried to get more scenes, but that was the only one.

Speaker 1:

Yes, okay. I'm keen to go into all the different persons you've met and all that, but maybe we should take that. Which way do you want to go, Anders?

Speaker 3:

Yeah, and I mean, we can take that question. You did meet so many legendary AI people throughout this series, and perhaps you can share some of the highlights. Someone like Max Tegmark, let's start there.

Speaker 2:

We're talking about him? Well, my takeaway from meeting him was actually understanding where he's coming from, what he's talking about when he's giving us this dystopian...

Speaker 3:

Yeah, because people may not know. But I mean, he had a Sommar talk last year that was very dystopian.

Speaker 2:

Yeah, he introduces himself by saying, blah blah, I'm the last one in my family now, and I'm not sure that I will survive AI, or something like that, in the first 20 seconds of the program.

Speaker 1:

What's it called? P1, which is big.

Speaker 2:

It starts like that. It's a tough way of starting. And then I've seen him in interviews on Aktuellt, our evening news show, and in other places as well. But when you have five, six minutes to talk about this, you can only go so deep, exactly, and it's the follow-up questions that get you somewhere. And you don't have time for follow-up questions when you're doing: oh, by the way, and now, from the war over there, we're going to talk about doom. So you don't really get there.

Speaker 1:

I mean, like his book Life 3.0: if you read it from cover to cover, then you get a much more nuanced understanding of what he's trying to say.

Speaker 2:

Yeah, so now my mission was: okay, we take this doomsday warning and we try to understand it, follow-up question by follow-up question. And why would that be? And why wouldn't it be like this instead? And why should I be worried about that? And why would that happen, if the first three things happen? And then, finally: okay, now I understand exactly how you think. Now I can take this understanding with me to a dissident who disagrees with you, yes, and ask all the questions to that person with your view in mind.

Speaker 1:

Yeah, and then you took it to the Norwegian AI superstar, what's her name? Inga Strømke? And she wrote a book, and something like 50,000 people bought it, you know, 10% of...

Speaker 2:

Norway, or whatever she said, has read her book. So she's like the Norwegian AI star, and she wants a nuanced story.

Speaker 3:

But before we leave Max, I'd just like to hear some of your impressions of him. For one, you could wonder: why does he focus so much on being dystopian? Is it because he truly believes it? Or is it perhaps because he has some kind of more strategic goal in communicating that, and that people are not, you know, focused on it otherwise? Or do you believe he truly believes that this is the future we have?

Speaker 2:

I don't have any reason to mistrust his genuine worry. However, I would say that he is a very good communicator who understands that if you are to get a message out, you have to be very clear.

Speaker 3:

Yes, could that be a purpose?

Speaker 2:

Yeah, and one way of being clear is painting a very vivid picture of the worst-case scenario, and he's good at that. But I don't distrust his genuine worry. Okay, good.

Speaker 1:

I think that was a very good summary. When you read the book, he's more nuanced, but it's super clear that he's worried. And he's also trying to say: we need to start the conversation. So if we need a fire starter, let's have a fire starter.

Speaker 3:

I mean cool.

Speaker 1:

And yeah, okay. So, a little bit starstruck around Max Tegmark, as someone interested in AI: what is his office like, how does that work?

Speaker 2:

Well, his office is very, very plain. It's like it's been the same since 1977, when the previous professor had it, and he has perhaps put in some books about the planetary system and some framed quotations from other scientists he is inspired by. And he can never sit behind his desk, because it was crammed with a suit that he perhaps wears when MIT wants to put him in front of some donor, blah, blah, blah, and then cartons with papers and books that were just a mess.

Speaker 1:

And last question: did you get a sense of where his focus was when you were down there? What was he working on?

Speaker 2:

He was very worried about disinformation and how AI enables the multiplication of it. So he was talking about some project he was working on that could be eye-opening for how the news might be affected by it.

Speaker 1:

But that's the segue to the Norwegian superstar, because I think the bottom line that she conveyed really well in the series is: yes, those long-term scenarios may be real, but we have issues here and now, so keep the eye on the ball.

Speaker 2:

Yeah, that was her main message. Okay, theoretically it might be a risk that a superintelligent AI goes rogue and kills us somehow, theoretically. But here and now, what she was mostly concerned about was also disinformation, and how algorithms influence us without us noticing that we are subtly manipulated, en masse. And then we're just talking about plain use of social networks.

Speaker 1:

To begin with. And when you did this, because this was one of the episodes, the dystopian one: when you were editing it, what was the main message? Okay, we're getting all this, but when we put our spin on it, where do we want to take this as a story? What was your thinking there?

Speaker 2:

My thinking was: in this episode, we want to give you an à la carte menu of all the risks that are out there, and in the upcoming episode, which you preferably watch after, we will show you an à la carte menu of all the possibilities that are out there. And then it's up to you to pick your entrée, your main course and your dessert, and see what you make out of this, what kind of menu you think it will be, in order to form your own opinion. Exactly.

Speaker 2:

Because, if there's anything that I have come to, my main conclusion about everything that has to do with this topic is that the future of AI is what we make of it. We should be very careful with saying: oh, we're doomed, AI is going to take over. Or: oh, it's going to be so fantastic, it's utopia, look at all these medical drug discoveries there are going to be, and we won't have to do any boring jobs anymore. Well, it will be what we make of it. If we have identified certain risks, then we should be very good at putting guardrails against them, so we don't fall off the wrong side of the rooftop. And if we identify fantastic opportunities, we should not hinder the development within those areas. And in the end, I think a lot of this will come down to political decisions being taken.

Speaker 1:

Yeah, political decisions. And I like that summary: it is what we make of it. Where do we put our efforts? Where do we put our investments? What guardrails do we put in? So it's the whole ecosystem that we now need to decide upon and act upon. And the worst part is maybe inaction. Or, like the Peter Drucker quote: in times of turbulence, it's not the turbulence that is the scary part, it's acting with yesterday's logic.

Speaker 2:

Yeah, and it's good that you bring up individuals here, because I don't want to sound like: oh, we'll have to leave everything to the politicians. Definitely not. It's you as a student, you as a mom, you as a business leader, you as a teacher, me as a journalist. It's up to us to discover how we use this responsibly, in a constructive, good manner, and then spread the word and the know-how so that the good use cases win.

Speaker 3:

That sounds like one of the awesome learnings you have made throughout all of these interviews. But before we leave the topic of some of the highlight people, so to speak: you mentioned Max Tegmark and the Norwegian star. If you were to mention just one or two more, anyone that you were really struck by in some way, who would that be?

Speaker 2:

Oh well, I'll just take two people up front. One is another researcher at MIT, Regina Barzilay, who is doing research on how to use AI in the fight against cancer, and how she talks about the medical industry as being very hesitant. Everything from hospitals to, well, mainly not the drug discovery part, because they can make money out of this, but hospitals and authorities, the healthcare system. And I understand that they might be very cautious, because you're talking about people's lives here. But it impressed me that they keep working. They have a rather steep hill to climb, because even if they were to show good research results, it would take a lot of work to get it implemented, put into practice and put into production.

Speaker 2:

But she seemed like this energetic person who would not be defeated at the first setback, so she was inspiring in that way. And another person was a student at a high school outside of Stockholm, who reflected very intelligently on how to use AI in her life as a student. It impressed me that you don't have to have formal training in this to discover how to use it well yourself. So that makes me hopeful that you don't need all these upskilling and reskilling courses that are talked about these days. People can do a lot themselves if they just start testing things out.

Speaker 1:

And then the common denominator comes back to almost how we started this conversation: start using it, start getting your own point of view on it, and then you will start thinking about this. So the scary part is when we're not really in tune with it at all.

Speaker 2:

And sure, of course it's cool meeting the CEO of Google, and yeah, how was that? Et cetera. Well, that was.

Speaker 1:

Was that done earlier?

Speaker 2:

That was done before we started doing this AI documentary.

Speaker 1:

Because he was in Sweden, right yeah?

Speaker 2:

he was in Sweden and I got 20 minutes with him, and I actually had time to cover a lot of topics during that time.

Speaker 1:

And what was your takeaway there? What was his main message?

Speaker 2:

Well, the takeaway was okay. We were perhaps a bit late releasing our AI sports car, but better get it out rolling now than not at all.

Speaker 1:

So the race was clear, and he had to get in it.

Speaker 2:

He was number two out on the racetrack, yeah, and it was very clear that now they were doing everything to kind of get ahead of Verstappen.

Speaker 3:

I think the AI race is something that we can speak more about. I think that's an interesting topic to get back to later.

Speaker 1:

And then let's see there were six episodes. So remind us again how they would line up.

Speaker 2:

Yeah, the first one is called Creative Machines. It's about generative AI and the creative industry, how it changes music production marketing. Spotify people.

Speaker 2:

Gustav Söderström there, for instance, and Per Söderström at Pop House, talking about investments they've done in pure AI A&R companies. But also the movies: we went down to the Gothenburg Film Festival, where they had a special AI theme this year, and they had this special remake of Ingmar Bergman's Persona, where they replaced Liv Ullmann, the actor, with Alma Pöysti, a new star. And it was very strange, and also very hush-hush, because it was a one-off experiment: the only people who have seen the movie were us, sitting in that cinema at that time, and then no one will see it. Very strange.

Speaker 1:

But it was almost like a piece, an art piece that goes into the debate.

Speaker 2:

Exactly, it was meant as a debate piece, I think. And then we met this great creative director, Carl Axel. We had him on the podcast as well.

Speaker 1:

He's been on the pod as well. Yeah, the fashion, ai fashion.

Speaker 2:

Exactly, so cool. And he was so inspiring when he talked about how he spends endless hours refining, reiterating, prompting, once again using another AI tool on top of a third AI tool.

Speaker 1:

We've got the whole scoop right here In two hours. We dissected his process right.

Speaker 2:

So it's all AI, but it's not one thing, it's many things. Exactly. And come to think of it, I think I discovered him listening to your podcast, actually, so thanks for that. So that was a really great guy to meet, and that was the first episode, more or less. Then the second episode is about school and the education system, how that will change with AI. The third one is the very big question of how jobs will be affected: will it steal my job or not? The fourth episode is about the risks; it's called The End of the World, more or less. The fifth episode is Paradise, the utopian side of it. And the sixth is called Avatar; it's my personal experience of creating an avatar of myself on top of an AI news bot, to see how much of news production and presentation I can leave to AI. And then you had Henrik Kniberg helping you.

Speaker 1:

Yeah, the backstory here is that he was supposed to come on this pod a couple of weeks earlier, and: oh, I'm busy with SVT doing a piece. And then he showed up more or less the week after, when he was done. I think you worked intensely for a couple of weeks on that.

Speaker 2:

Yeah, they were hacking together some off-the-shelf AI products into an interface that I could use. But it's an interesting thing.

Speaker 1:

He could do a hack in a couple of weeks with off-the-shelf stuff, almost for free, as a showpiece. So think about what you can really do.

Speaker 2:

Yeah, and that's so fascinating: as long as you're in the know of where to look, and you're able to stitch them together, so you have to have some...

Speaker 1:

Some coding, but you're more stitching than coding, yeah.

Speaker 2:

And you have an eye for the use case, yes, then you can be in business. Now we're in this kind of era, and this takes me back to the end of the 90s, and the feeling then, when that window of opportunity opened. That was an era of endless opportunities. I mean, I could go out to big Swedish companies as a 19-year-old, 20-year-old, and they were listening to me, because I came from this cool internet agency that was supposed to know everything about the digital future. So of course they listened to this Swedish kid, and I didn't know very much more than what I had learned from my smart friends there, and from just hanging out and writing in forums online.

Speaker 2:

And now we're in the same kind of era, this window of opportunities, where there's so much that can be done without too many resources, so many people can have a go at it.

Speaker 1:

And I think that's one of the misconceptions. Because, working on this in a large enterprise, you typically think: this is so expensive, this is so big. And of course it is, if you do it in a certain way, with all the guardrails, as a large enterprise. But if you flip it: how easy is it actually to stitch together an Alex in a couple of weeks that becomes the intern and your companion? It's quite amazing as well, right, how easy it is if you flip it.

Speaker 3:

Cool. And you brushed over all the episodes very quickly there, so perhaps we can go through them in bits, one by one. Thinking about Creative Machines in the beginning: is there something you were surprised to learn about what AI can do as a creative co-worker, assistant, intern, or whatever you like to call it?

Speaker 2:

Well, let's go back to Carl Axel, who was a previous guest here on this show. Something that struck me, talking to him and learning how he works, is the insight that AI doesn't mean less work for him. Right, oh, that's a good one. It's a demanding craft, if you want to be really excellent. I mean, of course, if you just want a random picture of Marie Antoinette on an American football field, you'll get it, but it's not the picture that you had envisioned. If you want to get the machine to deliver your vision, that's a craft in itself. It's time-consuming, and it demands skill and experience.

Speaker 1:

Yeah, and this is just the next abstraction layer of: what is your tool to do your craft? Is it a camera, is it your sketch block? Now it's a different set of tools.

Speaker 2:

It's another tool. And then, on the other side, if you talk about it from a jobs perspective: in his industry, the advertising industry, the creative industry, thanks to this tool, fewer people are needed to produce the end result. But the people who work with it are needed, and they work a lot.

Speaker 3:

So okay, jumping to the job sector, then. I guess some people are very afraid of AI replacing people, while some people think that, well, we still have so many things we need to do, it will just allow us to be more productive and do more stuff. What's your main thinking when it comes to the impact on the job market?

Speaker 2:

I was talking with someone about this earlier today, and I think my conclusion was this: there will be a transitional period where a lot of people will become jobless. Yes, I mean, there are sectors where it's obvious that AI will carve out 50% of the man-hours that are done today.

Speaker 3:

Can you give an example? Perhaps?

Speaker 2:

Well, let's say you work in customer service, for instance.

Speaker 3:

Transportation perhaps.

Speaker 2:

Sorry.

Speaker 3:

Transportation, perhaps in the future.

Speaker 2:

Yeah, sure, once we get to self-driving, of course, then that's a no-brainer. Once we get there. In my line of work, I would say, as well. But then you could say: in some jobs, where like 50% of what you do during a day could be done by AI, you could be displaced. If we're in a declining economy, then the low-hanging fruit for the company would be to sack people and say: now we replace you with AI. We don't care if the result will be as good, it's still very much cheaper. And then you will have more unemployment. If we're in good economic times, the company might say: okay, hey, we're in a recruiting mode in general for a lot of different positions. Why don't we start with the people we have? Because we know them, we know they're good people, so we can trust them. So let's upskill and reskill these 50%, and then it won't be that hurtful. So, depending on the economic mood, I think this transitional phase will be more or less painful. And then you come to the big question: there's not a finite amount of jobs, and there's not a finite amount of types of jobs. So how many new kinds of jobs will there be that we can't think of today?

Speaker 2:

Okay, I'm pretty confident that there will be many new jobs that we can't think of today. Not so much because everybody will get jobs as prompt engineers, but more because, if we manage the economic transition in a way that doesn't make all the profits from increased productivity end up in a very, very small number of hands, but disperses them more or less equally, thanks to economic growth that brings many people into the game, then those who are in jobs will have the economic possibility of wanting things and services that perhaps they didn't want before. And then new jobs will be created to fit those needs, needs that we didn't think of before, that we didn't appreciate before. And many of those needs, I think, will be human, because I think we will have a future where handcrafted things, things you can actually put a made-by-human stamp on, will be so much more valued in a synthetic world where most things will be machine-made.

Speaker 2:

So, in the same way as today, you already appreciate a genuinely handcrafted Dala horse, a Dalecarlian horse from Nusnäs. You pay like 500 kronor for one of those, and it's rough around the edges. It's not perfect, compared to the made-in-China, 3D-printed one that is perfect and costs 50 kronor. You would still want the more expensive one. I think we'll have more of that in this synthetic world: we will appreciate human-crafted things that we perhaps don't appreciate today.

Speaker 1:

That's just one thing. But with this storytelling you're doing now, one of the things that I want to put a finger on, going a little bit off track here, is that we've been discussing a lot around what we have referred to as the AI divide in the world. You can measure it in societal terms: the tech giants, how much money they have, and how the divide in economic power is actually going in the wrong direction, you could argue. Then you can talk about the AI divide in society on the micro scale, in our companies: people who know a little bit about data and AI, and people who don't know so much. And you can talk about the AI divide among politicians. And what really scares me now, in this AI divide, is that what you are saying means it's all up to us, and that requires that we understand this better. You don't need a very deep analysis to think about what happens in a high or low economic climate.

Speaker 1:

If you understand that, well, there are going to be a lot of new jobs and we need to have a safety net around this; if we want to do enterprise AI, there are going to be a shitload of new jobs. But if the people in power, the people in leadership positions, the politicians, don't have this point of view, that scares me the most right now. That's why we need so much more education, or immersion, around this topic, in order to build a point of view. Because if you have a point of view, we can have legislation on it, we can have policy on it. If we are policy-less, if we don't have an idea on this, that's what scares me.

Speaker 2:

It's not a controversial thing to say that politicians are not early adopters in this matter; they are kind of laggards. And of course there are a lot of things they should be aware of. For one, if you take the taxation system, and the difference in taxation of human labor versus machines, you have a built-in disfavoring of people, for instance. How many are thinking about that? It's a natural thing that we've had for so long, but it might be more pronounced now, when automation is leaping forward.

Speaker 1:

Just one example. And we have so many opportunities to tax.

Speaker 2:

If you know the new world, yeah. And actually, you don't have to be right or left to approach that issue with policy, because you could either increase certain taxes or lower other taxes. Just be aware of how this is influencing automation, for instance.
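To make that taxation point concrete, here is a minimal back-of-the-envelope sketch. All the figures (salary, payroll tax rate, machine price, depreciation period) are invented for illustration, not actual Swedish numbers:

```python
# Illustrative only: compares the employer's total cost of human labor,
# which carries payroll tax on top of the salary, with a machine doing
# equivalent work, which carries no payroll tax at all.

def human_cost(gross_salary: float, payroll_tax_rate: float) -> float:
    """Total yearly employer cost: salary plus payroll tax on top of it."""
    return gross_salary * (1 + payroll_tax_rate)

def machine_cost(price: float, useful_years: int) -> float:
    """Annualized machine cost: simple straight-line depreciation, no payroll tax."""
    return price / useful_years

# Hypothetical numbers: a 400,000/year salary with a 31.42% payroll tax,
# versus a 900,000 machine depreciated over 3 years.
human = human_cost(400_000, 0.3142)    # roughly 525,680 per year
machine = machine_cost(900_000, 3)     # 300,000 per year

print(f"human:   {human:,.0f} per year")
print(f"machine: {machine:,.0f} per year")
print(f"built-in cost advantage for the machine: {human - machine:,.0f}")
```

The point of the sketch is only the structural asymmetry: the payroll tax applies to one column and not the other, regardless of whether you then choose to raise or lower either side.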

Speaker 1:

And Kassik Kusork.

Speaker 3:

We're really going down a rabbit hole, yeah, but it's the whole unintended-consequences discussion.

Speaker 1:

Everything we do has consequences, intended and unintended, and I think the more we understand about this, the better we will understand the consequences, intended or unintended, that this will have. Sorry for that rabbit hole.

Speaker 3:

Perhaps just one more of the episodes, before we leave that topic as well: you spoke about the educational system. Is there anything that was surprising to you there? Some highlights on how AI will potentially impact the educational system?

Speaker 2:

yeah, there again.

Speaker 2:

We went to this school where they have a teacher who is a front-runner. She uses it herself but, most importantly, she uses it with her students and teaches them how to use AI in a sensible way.

Speaker 2:

But that is not because the Swedish Ministry of Education or the national education authority has stipulated that Swedish schools should now do this and that.

Speaker 2:

To the contrary, there is no such guidance, and when I talk to the Swedish authorities about it, they say: we don't want to give any guidance, because things are moving so fast now that if we give out guidance for how schools should work with or relate to AI, it will be outdated in three months. And they are inundated by requests from headmasters with questions: how should we deal with this thing? It's happening in our classrooms, and our students are using it at home, but we don't know what our policy should be. So that's one takeaway: it's happening from underneath, it's not happening from the top down, and I love things that grow from underneath upwards. But my impression is that there could be a little bit more interest and awareness from the top down, as long as we are organizing ourselves from the top down. There are clusters of knowledgeable and interested people in departments here and there, but from the main leadership it's more like: oh, AI, that's scary. Let's just wait until we don't have to wait anymore to take action.

Speaker 3:

But do you see AI as more of an opportunity for education, or potentially a problem, because people will be cheating more?

Speaker 2:

That's very easy to answer: very much of an opportunity. Because, oh, I'll take this word in my mouth again, it democratizes the availability of an extra teacher, of a study buddy, as long as you use it the right way. And the best example of that is someone I saw on LinkedIn, I'm sorry, I don't remember his name, who was posting how he went into the settings of ChatGPT. Yeah, Amir.

Speaker 4:

Mohamed.

Speaker 2:

Yeah, exactly, thank you. Saying that, for every answer: when I ask you a question, don't give me the answer straight away. Think of me as a student of this grade, blah blah, and show me how to come up with the answer myself, and don't give me the answer until I've really reached that level.
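A standing instruction of the kind described above can be written once and reused. Here is a minimal sketch of what such a Socratic-tutor custom instruction might look like; the wording and the grade placeholder are our illustration, not the exact text from the LinkedIn post:

```python
# Sketch of a reusable "study buddy" instruction of the kind described in
# the conversation. ChatGPT's custom-instructions field would simply
# receive the resulting text; the exact phrasing here is illustrative.

def socratic_tutor_instruction(grade: str) -> str:
    """Build a standing instruction that makes the model coach rather than answer."""
    return (
        f"Treat me as a student in {grade}. "
        "When I ask a question, never give the final answer straight away. "
        "Instead, ask me guiding questions, point out what I already know, "
        "and let me attempt each step myself. Only confirm the full answer "
        "once my own reasoning has actually reached it."
    )

print(socratic_tutor_instruction("9th grade"))
```

The design choice is that the tutoring behavior lives in one standing instruction rather than being repeated in every question, which is exactly what makes it work as an always-on study buddy.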

Speaker 2:

It's an awesome opportunity to basically personalize all of education and cater to the specific needs every student has. Yeah, and I'm thinking of all these poor teachers who have to sit and manually correct things that are more or less standardizable. It's an archaic education system we're living with; we haven't changed it for 100 years. We're still putting 30 kids together, based on their age, in one classroom. And I think there are a lot of great things happening when you're physically there, IRL, but maybe there are more intelligent ways of organizing the education part, apart from the social part. And this becomes the problem now, when the frame is not set up in a relevant way.

Speaker 1:

So the other example, we talked about this here on the pod: well, you need to rethink the way you do your education, your school, how you do tests, how you do home assignments. Maybe now your home assignment is: show me your prompts in relation to your topic, and then we do something here as a test, without any aids.

Speaker 2:

So it's also about, you know, figuring out the curriculum in a different way, and then this is a non-problem. Exactly. And addressing the cheating issue that you brought up, Anders: I'm so fascinated by how today everybody is surprised that, oh, you can cheat on your homework if you have a home assignment that is graded. That's been wrong for 50 years. Before, it was my smart sister that I made do my homework. We have all had different possibilities to get help, and that has been the case for so many years. It's just now that it's obvious.

Speaker 3:

Very well said Cool.

Speaker 4:

It's time for AI News brought to you by AI AW Podcast.

Speaker 3:

Awesome. So we have this middle break where we just reflect on some recent news. Each one of us bring up some interesting articles that we have read about. Alexander, do you want to start? Do you have something?

Speaker 2:

Yeah, well, I'll start digging where I stand: the media industry. I think it's super interesting now how media outlets are dividing into two categories: those who are suing the ChatGPTs of the world, and those who are going to bed with the ChatGPTs of the world. On one side you have the New York Times suing Sam Altman's OpenAI for having trained on their material, copyright infringement. And on the other side you have, recently, the Financial Times, another high-quality journalistic production, partnering with OpenAI. So which one is doing the right thing? I don't know.

Speaker 2:

It's easy to think that partnering with the devil will be nice in the beginning but costly in the end. And I'm calling them the devil because this is, of course, a threat to traditional media. I mean, take Perplexity, for instance, which is getting closer to delivering an article on demand, so to speak. They do reference their sources, thankfully, but how many click through and give click revenue back to the original source? And we all know that advertising isn't enough to support these publications anyway.

Speaker 2:

So it's a real threat to the current media business model, an obvious threat. Some are going for what I think is the easy short-term money, and some are going for copyright, going to court. I don't know if it's part of the New York Times' bargaining or not, but it will be very interesting to see how these court cases turn out, and I hope they won't settle, because I think you need a precedent to establish what's wrong.

Speaker 1:

We need case law here. In the end, we need to test some of these things in order to get guidance.

Speaker 2:

Yeah, and if we get a precedent in the United States, we'll need one in Europe and we'll need one in Sweden. So I think the New York Times is not the last one.

Speaker 1:

No, no, no, someone else will follow. We will need to interpret it in our Swedish jurisdiction.

Speaker 3:

I mean, this will influence the journalistic trade to a large extent, but it's interesting to compare with the music industry. There was this case with the artist Drake, where someone going by Ghostwriter, I think that was his name, basically cloned his style, published a song in his name on Spotify, and made money by trading on his name, and that's obviously wrong and illegal. But then you had Grimes, the other artist, the ex-partner of Elon Musk, who basically said: fine, use my voice, use my kind of lyrics, but I want 50% of the revenue. She made a lot of money from it.

Speaker 2:

Exactly, and that echoes an article I read here in Sweden about synthetic voices and audiobooks. One of the famous actors who lend their voices, reading and recording books, said: never in the world will I let them have my voice, this is an artistic thing, it's beneath dignity. And another, equally famous, said: well, hell, why not? As long as I get handsomely paid, they can clone my voice anytime. But when it comes to this copyright thing, Gustav Söderström at Spotify, whom I interviewed for the first episode, drew a parallel to how sampling in the 80s hip-hop community was sorted out organically, where de facto standards, not laws but standards, developed.

Speaker 1:

Eight seconds that's fine, but not nine.

Speaker 2:

And everybody more or less agreed to them, and then economic transactions happened based on those standards. I don't know if that's what's going to happen now. I think it will have to go to court.

Speaker 1:

Something similar needs to be sorted out.

Speaker 2:

It will happen, yeah. But you have some cases that are difficult. For instance, Getty Images. I don't know if they have sued Stability? Yes, they have, because that one is in-your-face, when the Getty Images watermark appears in a generated image.

Speaker 3:

Yeah, tough. And it will be interesting to see what happens with the news media going forward.

Speaker 2:

I mean, it has to have some business model. If you can get your news on demand, a news piece in video, audio or text, created just like that on exactly the topic you want to know more about, with fresh news and trustworthy analysis, and an AI service can deliver that for you, cannibalizing those who have put down money researching and putting it together, then obviously the news media business will be out of business.

Speaker 3:

if that is the future, unless you can find a way to attribute some kind of monetary value back to them.

Speaker 2:

Exactly. So some sort of joint venture, de facto standard, or lawsuits ending up in law that routes money back somehow, will have to happen.

Speaker 3:

With the Financial Times, if they actually partner and get some kind of attribution back from OpenAI, perhaps that could lead to something.

Speaker 2:

They didn't disclose the terms of the deal.

Speaker 3:

I think they paid a lot, like 50 million dollars.

Speaker 2:

You think so yeah?

Speaker 3:

Perhaps I'm mistaken, but I think they got a lot of money. Anyway, should we move to something else?

Speaker 1:

I'll take local news, and well, it's a little bit personal, but I still think it's newsworthy. Last week we had the Data Innovation Summit, which is all about data-to-value in applied and generative AI. So it's very interesting to see the community of practitioners and the vendors, and to ask: what was the general buzz? It's like taking the pulse of the industry, and I'm not talking about the super tech industry now, I'm talking about the people building this in Sweden. So what was the pulse? I would say there were a couple of standout things.

Speaker 1:

The community has been growing the whole time, but we could definitely see a trend of more new faces, new, younger people. The generative AI scene has entered our enterprises, and it has brought another set of people into the data and AI community, which is very healthy. So there was more diversity this year than ever before, both in gender and in age. Very positive. And then we had something we call a future outlook session. Last year it was Marcus Wallenberg on stage, and this year it was Carl-Henric Svanberg, and it was very newsworthy for us in the community to hear his ideas on how to take on the AI Commission chairmanship right now.

Speaker 1:

So this is about the latest AI Commission. We've done this several times in the past, but I think now it's at the governmental level: maybe we need to really put money behind this as well. It was interesting to hear his thoughts. And what I find really interesting is top leaders coming to a practitioners' conference. It's not his conference, but bridging from the practitioners to the business leaders is really healthy. Goran, did you have any takeaways you want to mention?

Speaker 4:

Or anything like that? Because I think this is quite a big deal in the Nordics. Yeah, I don't think we should do too much promo about it, but I was extremely excited that there were really a lot of new faces, as you mentioned. It's really good to have new talent coming into the technology and getting excited about it, because these are the people who will shape the future as we know it, and the more of them we have, the better it's going to be, because AI is only as good as the diversity of the people sitting behind it, working with the data and the models and everything else.

Speaker 1:

Maybe that's the point, right? If we want to shape AI that reflects society, the practitioner community needs to reflect society.

Speaker 2:

Yeah, but now more and more people will be practitioners, so in a natural way I think it will branch out beyond the ones doing AI in labs.

Speaker 1:

Yes, or in Silicon Valley, right? So the more the merrier: the more perspectives on this we get, the better the AI we will get.

Speaker 3:

Yeah, sounds great. I'm most happy that I actually got to hug and introduce Charlotte Perrelli.

Speaker 1:

That was a big moment for me. You're a dansband hero fan.

Speaker 3:

Anyway, I have one piece of news on one of my favorite news topics in general, which is neuromorphic computing. You found one.

Speaker 2:

Neuromorphic computing? Explain that to me.

Speaker 3:

Okay, I can give some background. Traditional computing, what all computers and mobile phones use, is based on the von Neumann architecture. They have memory, they have a clock, and they have high-frequency CPUs that do the calculation, which is extremely energy-consuming, whereas our brains are very energy-efficient. Exactly.

Speaker 3:

So I think the brain runs at around 12 watts, and if you take training something like ChatGPT, which uses thousands of H100 or A100 GPUs, it's on the gigawatt scale. So it's many orders of magnitude higher energy demand for that type of AI brain.
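As a rough back-of-the-envelope sketch of that gap (the wattage figures below are assumptions for illustration, roughly matching the numbers mentioned above, not measured values):

```python
import math

# Illustrative assumptions, not measurements:
brain_watts = 12        # human brain, commonly cited as ~12-20 W
gpu_watts = 700         # one datacenter GPU under load, ~700 W assumed
num_gpus = 10_000       # a hypothetical training cluster

cluster_watts = gpu_watts * num_gpus              # 7,000,000 W = 7 MW
ratio = cluster_watts / brain_watts               # cluster vs one brain
orders_of_magnitude = math.log10(ratio)

print(f"Cluster draw: {cluster_watts / 1e6:.1f} MW")
print(f"Cluster/brain ratio: {ratio:,.0f}x "
      f"(~{orders_of_magnitude:.1f} orders of magnitude)")
```

Even this modest hypothetical cluster lands five to six orders of magnitude above a single brain, which is the "many orders of magnitude" point being made.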

Speaker 2:

Okay, I see where you're going. If we could get a computer to work like our human brains, much would be won.

Speaker 3:

Yes, and this is basically neuromorphic computing. We had spoken a bit about this before: Sam Altman from OpenAI made a big investment in Rain AI, which is building neuromorphic computing systems. But now, recently, Intel released their big neuromorphic computing initiative. They've been working on it for many years, but now they had a big second release, the Hala Point.

Speaker 3:

Hala Point. In general, you can think of it like this: a normal CPU has a clock working at gigahertz frequencies, while the brain works at kilohertz, a very, very slow frequency. In the brain you have neurons with axons, the output points, and dendrites coming in to other neurons, with synapses in between. They spike, they all work independently, and they work at low frequencies. And the thing is, the brain doesn't separate memory and compute. Each neuron holds some kind of state in it, whereas in a computer the transistors hold no state, it's just ones and zeros, and memory is elsewhere. So when you train a model, you have to move data back and forth all the time, and that consumes a lot of energy at very high frequency. If you instead have the memory at the point where the computation happens, which is what a neuron in the brain does, you get significantly lower energy consumption.

Speaker 3:

This is basically neuromorphic computing. One building block is called a memristor: a memory and a resistor combined become a memristor, so it holds state.
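The stateful, event-driven idea described here can be sketched with a minimal leaky integrate-and-fire neuron, a standard textbook model (not Intel's actual Loihi design; the leak and threshold values are arbitrary choices for illustration):

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Minimal leaky integrate-and-fire neuron.

    The neuron holds its own state (membrane potential), so memory and
    compute live together, and it only "does work" (emits a spike) when
    inputs push it over threshold, rather than on every clock tick the
    way a synchronous CPU does.
    """
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x   # integrate input, leak charge
        if potential >= threshold:
            spikes.append(1)               # fire a spike...
            potential = 0.0                # ...and reset the state
        else:
            spikes.append(0)
    return spikes

# Sparse input: the neuron stays silent until enough charge accumulates.
print(lif_neuron([0.5, 0.5, 0.0, 0.0, 1.2]))
```

The point of the sketch is that nothing happens between events: no data is shuttled to a separate memory on every cycle, which is the inefficiency of the von Neumann design being discussed.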

Speaker 2:

How far have we come in developing that?

Speaker 3:

There are a lot of working lab prototypes. It's hard to make it work economically, but this is where a lot of research is happening right now, so it's basically still at the research stage. It's a bit like fusion: it's easy to get small prototypes working, but making them work economically is still hard. But this is one of the big releases now from Intel, built on what they call the Loihi 2 processor. There are a lot of numbers that are hard to talk about, but it's up to 10 times more neuron capacity than we've ever seen before, and 12 times higher performance. If you take the number of operations they can do, it's something like 15 trillion 8-bit operations per second.

Speaker 3:

But that's not really the point. The point is how much energy it consumes, and this is orders of magnitude less energy than traditional GPUs from NVIDIA or TPUs from Google, et cetera. So it's getting there. It can't really do everything the human brain does, and it still runs at a rather high frequency.

Speaker 3:

So it's not down to the energy consumption of the human brain, but it's getting closer, and I think that's needed, unless we want to see the continuation of this big AI divide. Look at the big investments happening in recent months: the $7 trillion Sam Altman reportedly wants to raise, the $100 billion investment in a new compute center from Microsoft, Meta saying they want 350,000 H100 GPUs by the end of this year, Elon Musk saying xAI will have 100,000 H100 GPUs. Everyone at the top, a very small number of companies, is expanding their investments in compute infrastructure enormously. This will lead to a super big divide, where only a very few, very rich companies can do this.

Speaker 2:

The GPU war. I listened to Elon Musk in an interview recently, talking about how much it would cost, how many GPUs he would need for his next Grok model, or whatever it's called. Yeah, 100,000.

Speaker 2:

I was sitting in my car listening to this podcast, and fortunately there was a queue, so I could think about it at the same time. I don't remember the exact amount, but I was doing the math: okay, how much does one of these NVIDIA GPUs cost, $50,000? And then you multiply that by what he said, and I ended up at something like 60 billion kronor. Okay, you need a good cash flow to be in the business of developing large models.
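As a rough check of that in-car math (all figures are the speaker's assumptions: the unit price, the GPU count, and an assumed exchange rate of about 11 SEK per USD):

```python
# The speaker's rough assumptions, not actual prices:
gpus = 100_000          # H100s mentioned for the next Grok model
usd_per_gpu = 50_000    # assumed unit price
sek_per_usd = 11        # rough exchange rate assumption

total_usd = gpus * usd_per_gpu          # 5,000,000,000 USD
total_sek = total_usd * sek_per_usd     # ~55 billion SEK

print(f"{total_usd / 1e9:.0f} billion USD "
      f"≈ {total_sek / 1e9:.0f} billion SEK")
```

Under these assumptions the total lands around 55 billion SEK, in the same ballpark as the "60 billion kronor or whatever" recalled in the conversation.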

Speaker 3:

That's not sustainable. Either it's going to be these very few, very rich companies doing all the future AI work, or, with this kind of development, there's some chance. Or is it democratizing

Speaker 1:

it again.

Speaker 2:

Yeah, then it's probably like what I read about lately, where they had a good metaphor: instead of having someone go to the library every time to retrieve some information, you get that person to do the job at the library, saving the energy consumed

Speaker 3:

Going back and forth would be saved. Then I get it. It's obvious that today's system is very inefficient: moving data back and forth at insane speeds, with synchronized high-frequency operations instead of working in parallel.

Speaker 1:

But to put this in a bit of context: the von Neumann architecture, when was that invented? The 40s?

Speaker 3:

or 50s or something.

Speaker 1:

So something we are building our GPUs on is from that era. And then, of course, we talked about neuromorphic computing, but the other research field is quantum computing. And I think if you ask Anders, and I've been studying under Anders on these topics, quantum computing has been given a much bigger media spotlight. People love to talk about quantum this and quantum that, but it is way more science fiction than neuromorphic computing is. The distance from the von Neumann architecture to neuromorphic is much more feasible: we are much, much closer, I would argue, to something commercially viable compared to quantum computing. With quantum, we don't even know if it can be scaled, and here we could have a debate right now on whether it's good or bad.

Speaker 2:

I feel guilty because I did a piece on quantum computing, but I haven't done one yet on neuromorphic computing.

Speaker 3:

No, this is interesting, right? We could go into the topic of quantum computing as well. Yeah, but why don't you?

Speaker 1:

But it's an interesting contrast, right? What is quantum computing now compared to this, and why will it work or not work? What is the real challenge here? It's a different one. But it's interesting how much spotlight the quantum race gets, which is way in the future, and how relatively little neuromorphic gets. And then you see a Silicon Valley guy like Sam Altman and where he's putting his money. I find that interesting.

Speaker 3:

Perhaps we should move to that topic a bit: the AI race.

Speaker 2:

We started to speak about that a bit.

Speaker 1:

Yeah, I think so.

Speaker 3:

And if we frame it in those terms, you can see these insane investments happening now: Microsoft putting up a hundred billion, and OpenAI of course, and Google, who probably already have more compute than they say and are probably leading as well, and xAI and Meta, and everyone is doing it. And China, of course, with Tencent, Baidu, Alibaba, et cetera.

Speaker 3:

I mean, these companies are investing in computing infrastructure at levels we have never seen before. Basically, to train a model in the future, you would need the power of New York City or something, and that is not sustainable. Either we're going to have these very few, super big AGI-like systems owned by a small set of people or companies, which is scary, or we can find a technology that can train these models in a much more efficient way, and I think the latter is really needed. I'm really scared if we're going to keep seeing this extreme concentration of power in a few select companies. What do you think about that?

Speaker 2:

Well, first of all I'm thinking: is that also a question of open source or not open source? Perhaps it could be. And I share your rather dystopian, potentially scary future where you have two or three companies in the world that everybody else depends on. Because what do we know happens when you have oligopolies or monopolies? Inefficiencies in the economy, to talk like a boring economist, but in practice that means we pay too much for things that could cost less, and we put too much power in the hands of too few people.

Speaker 1:

And how would you connect this with the open source trajectory?

Speaker 2:

I was thinking: if you don't open up the hood, if you keep the secrets to yourself, then it will be more difficult to innovate on the basis of what has already been achieved, I imagine. So someone branching out, making their own version, or startups developing applications on top of a language model, might not have the same chance of springing up. I think this is so important.

Speaker 1:

And we had a guest here, Erik Hugo, who is South African. He was born in the same hospital as Elon Musk, like two months apart. They don't know each other, but he's from that era, and he's been in tech early. Now he's working with a Silicon Valley company called DeltaTrack, which works with the cold chain: how you transport temperature-controlled goods, like our fruits and vegetables. He basically comes into this topic of the AI divide, but instead of only looking at the super tech giants versus us in Europe, he even pushes the term AI apartheid. So let's not only talk about the tech giants and Europe, let's talk about the full scale of society. Let's go to Africa. What happens when ICA pushes farmers in India to use tech they can't afford in order to be part of their cold chain?

Speaker 1:

And then he said: what really saved Africa was open source, was Bluetooth, was Linux and things like that, which allowed people to innovate at no cost and put things on 2G and all that. So in his mind, when he really zooms out to talk about the AI divide, or the tech divide, he says we have an obligation to think open source. Because in the end, when the real shit hits the fan, it's probably going to be about the inequalities from the lowest to the highest. And if you look at the size of the population in the developing countries, if they are left out, you're going to have two different worlds. How are you going to handle that?

Speaker 3:

Yeah, but open source is interesting. It's very nice to see some companies like Meta actually going very open source. Google is getting more open source with Gemma, et cetera. Even Apple now, finally, are releasing some papers and seem to be moving more in the direction of open source, which is a bit surprising, because if a company wanted to make the most money, you'd expect it to keep the models to itself.

Speaker 2:

Yes, how do you explain the business incentives behind that?

Speaker 3:

It's a great question. For one, if you listen to what they say publicly, they say it's good for our business because we get more people working on it, it gets much more traction if the model is widely used, like what Meta has done, and they get help pushing it out. And if you listen to Yann LeCun at Meta, he basically says that from a security point of view it's better to have everyone able to look in and see what the model does, to know how to prevent it from doing bad things. But I'm not sure that's really the reason. Another underlying reason could simply be that if you want the top talents in AI working for you, they need to be able to publish, they need to work on certain things, you have to have certain principles in place. So I think there are mixed reasons.

Speaker 1:

But open source, don't think it's charity. It's a hardcore business model that has worked. Think about the whole AI community now and what we are doing with Python, TensorFlow: everything is open source. What has made this speed come about is all about open source; none of this is proprietary. We have done open source for maybe 10 years, 2010 to 2020, and that's when everything really started to take off. No one builds a traditional AI model from scratch; you go to TensorFlow.

Speaker 2:

You find your stuff there, right. But if you take Meta, for instance, how could they capitalize on their open source philosophy, those other aspects set aside? Is it that more developers would be working in their ecosystem, so their products end up being the most used, and that, in the end...

Speaker 3:

You know, I'm a big fan of Yann LeCun, so I can almost literally tell you what he said. But you can also look at Spotify, for example: they have a free tier. Why do they give away their service for free?

Speaker 2:

Well, it's one way to get users in, to get me into the family so that I want to stay and pay for it. Adoption of their technology.

Speaker 3:

Exactly, by doing so. And by having PyTorch, which is one of the best libraries for doing AI, they get more people working with their technology, and by publishing their models they get more people using them. And if you listen to what Yann LeCun has said publicly about this: we already have a business model, we are the leading social media company in the world, we don't need to sell AI models to get users. We can only gain by putting these models out there and having other people help us improve them, because the business model is something else. So it's related to what you said, Henrik: it is very well thought through. They think, this is the way we make money, and this is how we get adoption and traction.

Speaker 1:

So that's the core point: Meta has a business model where AI is an enabler, so they open source the models to accelerate that enabler and feed it into their business model over here. Which is a different ballgame from OpenAI, whose core business model is the model itself.

Speaker 1:

But then you have another one that came out recently, which I think is a very interesting take, and that is Databricks, an enterprise software vendor. They build platforms, basically the technology we use in the enterprise to get access to these models in a safe way. Databricks' core business model is to be the platform provider with the data and the data plumbing and all that, creating a data engineering experience. So what do they do? They acquire MosaicML, which builds open source LLMs, and they pump that up, making the LLM completely open source, but then of course they put it into their platform infrastructure, where they make their money on the storage and compute of being in their platform environment.

Speaker 3:

It's exactly the same as Meta. They have a different business model, and if they don't have AI enabling it, they will never compete.

Speaker 1:

So the interesting thing is: OpenAI, for whom this is the core business model, go proprietary. But Databricks have nothing to lose; they don't give a shit which model you use, as long as you store your data in their platform technology. So it becomes super logical for them to push this. And if you look at the whole data and AI vendor spectrum, you'll find some gravitating towards proprietary and locked down, but I would argue that the larger spectrum of technologies, where AI is just one part, will push open source to be a magnitude bigger.

Speaker 2:

But then you have the security issue.

Speaker 3:

That's the big one.

Speaker 1:

Yeah, which one is safer right?

Speaker 2:

Well, at first glance, the answer would be that open source is easier to manipulate in evil ways, since it's open. On the other hand, you know that OpenAI and Gemini have to have all these people working to make the models not say the wrong things, and they still say strange things.

Speaker 1:

But even in your TV series, you debunked that myth, because you could take Oliver and he could crack a model open, you know.

Speaker 2:

Yeah, sorry, we're getting excited here.

Speaker 1:

He could crack the proprietary model and make it do whatever he wanted, in minutes.

Speaker 2:

I think it was an open source model. Was it an open source model? But I'm thinking about these people working around the clock on the other side of the planet, low-wage workers reading feedback, or trying to provoke the language model into saying, doing and writing gross stuff that's not fit to repeat in this room. And they have to. It's like working in a coal mine; they are the coal mine workers of today.

Speaker 3:

I mean, I think there are safety aspects to point out. That is a hard one.

Speaker 2:

And they are obviously needed. Or you could say, and now I'll be Elon Musk: oh, they are kind of censoring things, they are steering, they are tastemakers deciding what is allowed or not. I think there are so many angles to this question. This question is super hard.

Speaker 3:

Yeah, take a simple example: if someone wants to use AI to build a bomb, or to hack a system.

Speaker 3:

Obviously you want to prevent that somehow, and the question is how you do it. There will be a point when a system is powerful enough that it's questionable whether it should be open sourced, because an open source model is easier to hack, even if it has safety guards built into it. They even published this discussion between Elon Musk and OpenAI: of course we want to be open, they said at the beginning of OpenAI, and they're not anymore. But even Elon Musk himself said there will be a point where we can't be open anymore, when the model becomes too powerful, and that is a big problem. But do you believe in this yourself, Anders?

Speaker 1:

Because I've always seen you as an open source proponent, and I'm now very much in the camp of Erik Hugo on this. And he's working more with security now, so he thinks about that all day.

Speaker 2:

But do you think the models today are so powerful that you should lock them up?

Speaker 3:

I think, you know, I'm glad that OpenAI and Meta and all the companies are putting a lot of work into safety. For Meta, someone even counted the research articles, and more than 50% of their AI research is about safety nowadays. So they are spending a huge amount of effort trying to make these models safe, because they are so easy to abuse otherwise. This is a big concern. The question is really how you do it. If Meta now have 350,000 H100 GPUs and are building this big AGI-like system, will they release it? Questionable.

Speaker 3:

There's an example from the past: they built a model to generate research articles and published it openly for everyone. You could just say, write me a research article about quantum computing, and it wrote a very nice-looking research article. It got so much backlash that Meta had to shut it down. What was the backlash about? It was so easy to fake research articles that you could claim: look at this research saying this and that. You could easily make a research article look very convincing, fake the research, and get away with publishing fake claims.

Speaker 2:

But what was the research that was published? Good research, it was just gibberish.

Speaker 3:

It sounded very good.

Speaker 2:

There were even graphs that looked right, but it didn't take science a single step further.

Speaker 3:

No.

Speaker 2:

Okay.

Speaker 1:

Well then, it was crap in a nice package. Yes. But I think this discussion ends up at a fork in the road somewhere. In principle, this is my point of view: I think the world is a better place if we go the open source route, and when we go down that path, we need to work on safety. I think there are some fundamental pathways here where we can go wrong or right, and I don't know which is which.

Speaker 3:

That's not an easy question to answer. Just going fully open source and letting everyone use anything, that's bad. But having some way to...

Speaker 1:

But my argument is: on a principle level, we need to explore safety from an open source perspective. We cannot go bananas, we cannot have anarchy, but let's figure safety out from the open source principle. That's kind of what I believe in.

Speaker 3:

That's well put.

Speaker 2:

We'll have to find a Swedish way of doing that, the third way: we don't choose, we compromise. Open source is the Swedish and Nordic way.

Speaker 1:

This is what made us great, with Linux, with Bluetooth and a couple of other things like that, where we have had open innovation and open, transparent science first. That's what made us great.

Speaker 3:

So I think that is the Swedish way. The French are doing the same with Mistral now, and it's government-supported in a big initiative.

Speaker 1:

But open source not meaning anarchy; open source meaning a path, while staying on the safe side. The safe open source way, okay.

Speaker 2:

Nice one. Alexander, time is flying away here a bit, but...

Speaker 3:

I would love to hear about some of the feedback you have received from viewers, anything you can mention about comments, positive or perhaps negative, that you have heard about Generation AI.

Speaker 1:

Was it a success?

Speaker 2:

Yeah, so far so good, really. And regarding comments, I can honestly say I haven't heard any negative feedback. I mean, the most negative feedback I could get is people not having seen it. So I would say the most negative feedback I've gotten is from my kids. I have forced them to watch at least one episode, and the most difficult viewership is my 17-year-old son, who, I don't know if I mentioned it, is living his life on TikTok as a media consumer. So I'm up against some tough competition with a very, very short attention span. Will there be a TikTok...

Speaker 2:

version of Generation AI? Yeah, well, we've actually done a couple of TikToks, as a marketing promotion thing. We've done that, and I think we've gotten some new viewers from it. But anyway, he said "it wasn't bad, Dad," and I took that as a compliment. That's it, take it and run with it.

Speaker 2:

Yeah, I did, and I didn't push it any further. The good feedback, the best, has been from a couple of teachers who approached me, emailing me and saying: hey, this was really good, I'm showing it to my class. It's a good introduction to the different themes that AI development brings about, so thank you for doing this in an effective, viewer-friendly way. That was actually the most heartwarming feedback I've received.

Speaker 3:

Anything, looking back after you've done this, that you would do differently? Something that you regret, perhaps an opportunity not taken?

Speaker 2:

No. Well, there are always things that we had planned to do but didn't have time for, but that's the name of the game. We do these things quickly, intensively, and when it's a news theme, things happen so fast that you can't do a documentary with a time frame of a year or so. Normal documentaries take ages to do, so we had to move quickly. So no, I'll just say I don't regret anything. But can I take over here, then?

Speaker 1:

So, shall we brainstorm around the next episode? Very good question. Let's start with what you did not have time to do that could potentially fit now. This is the backlog. We start there.

Speaker 2:

One idea was to focus on AI in the military and in wartime. The name of the episode would have been, or will be, "AI with a License to Kill".

Speaker 1:

Yeah, but you know why I'm glad you didn't do that one? Yeah, tell me. Because there's a two-hour Netflix show on exactly that theme, right? So you would have been a copycat.

Speaker 2:

But not the exact name, right? Oh, more or less. Go and watch it on Netflix. Honestly, I haven't seen it. I'll have a look. But the reason we didn't do it was that I felt it is such a huge subject in itself that we couldn't just squeeze it in. You'd also have to do far more research and get close to the actual... I mean, it's massive.

Speaker 1:

That's a two-hour, well-researched documentary.

Speaker 2:

I'd probably have to go to the front lines in Ukraine to shoot the scenes that would make the episode worthy of being broadcast. So we put that aside. Well, what else did we not do? That's the only one I can think of right now. But of course there are themes we didn't cover. We are very much driven by what kind of people we can get who can talk about this vividly and seriously, and what kind of scenes we can make so that it becomes TV.

Speaker 1:

Because I think... you go first.

Speaker 3:

Either you go: I want to make this entertaining, I really want more people to learn about the possibilities of AI. Or you get a bit nerdy, like I am perhaps, and think about what you can really do to make AI as useful as possible. We have one big thing coming up in the coming years, which is the AI Act from the EU. I think the ones who are able to adopt it most efficiently are going to be big winners. It would be interesting to think through what the consequences will be. But in general: how can we maximize the benefits and minimize the risks? To really dig deep into that and see what people can do.

Speaker 2:

Yeah, there's so much more to do. One thing that I would like to do as a follow-up, maybe just covering it in the news, is: what are the possibilities when it comes to deepfakes and disinformation when we have this flood of synthetic media being produced? How can we sort what is verified, where we know the source, from what we don't? Is it the watermark way, or is that a cat-and-mouse race that will never be won? Is there another technology that could be adopted, or some sort of global standard that has to emerge? That, I think, is super interesting, and the need for it is imminent. The entrepreneur or the scientist, or both of them, who team up with the capital or the big player that can establish such a standard and solution will have a lot to do.

Speaker 1:

I have a couple of angles. I'm not sure they're good TV, but in my opinion they would be good for Sweden. One angle could be: now we're talking season one, where we're trying to get a feeling for what this is all about, and we have a smorgasbord of entry points into it, right? If you can imagine that, then: okay, how do I propel Sweden forward with this? So we have a couple of angles. Okay, this AI stuff, apparently it's huge for Sweden and apparently it will change the job market.

Speaker 3:

Or is it? It should be.

Speaker 1:

So then you can start dissecting: what are the future roles, or how do you make AI happen? This is the practitioner's view, and it would of course be for the community. How cool to get a broader understanding and sense. But honestly, if you go up to the top executives in the top companies, they don't really understand what goes into building an AI system or how we should think about that. So you have many angles where you can get underneath, one step below the surface. Not broader topics, but: how for me as an individual, how for us as a society? What become the bigger policy questions? We have the AI Commission. What are the key things we need to make decisions on?

Speaker 2:

You know, I'm spinning. Yeah, and I'm thinking: okay, these are very good and important. I'm giving feedback back to you guys, as if you were the reporters. Okay, the EU AI Act: surely extremely important. Not many viewers, but very important.

Speaker 2:

And then the challenge is: how do we bring this home to a general audience so that they care? This is theoretical, on a super meta level. How do you make it felt in the stomach? How do I feel happy or sad about it? Well, you have to find... and I think the key is the same as when you try to convince a board to invest in your project: you have to have a compelling use case that speaks to the audience. So we have to find things that the EU AI Act will change, things that can make me enraged, or happy, or comfortable about a threat that is minimized, or a new threat that has appeared. And, for lack of better words, you have to be a bit populistic when it comes to packaging the story.

Speaker 1:

And this is so true, because all the stuff I talked about is obviously super important, and everybody senses that it matters that we make these decisions, but it's not good politics right now. Good politics...

Speaker 2:

...is born where you find the use case that people understand. So, for instance, if I were the Minister of Education in Sweden and my mission was to make Sweden a leader in using AI in the educational system, I would probably not talk about productivity gains and blah blah blah, because who cares except the people handling the budget? But if I were to talk about how I care about giving all kids the same opportunity to do their homework, then we're talking. So you have that angle.

Speaker 1:

So "populistic" actually sounds a little bit bad, because what you're really trying to do is connect with the humans. The story needs to connect in the heart, not only in the mind. Like Henrik said, it's important for us to know which roles we'll have and all that. You can do that in a super boring way, or you can find the angle: which one is your next job, or what...

Speaker 2:

...you should educate yourself in. Finding metaphors that we can relate to, finding situations that we recognize from before in our lives, makes it comprehensible what's at stake and what can change. And then another angle on this.

Speaker 1:

I mean, I grew up when we had all these educational television shows.

Speaker 2:

UR still exists. They've changed the skin a bit, but yeah.

Speaker 1:

So UR exists, and we could learn everything from German to English to physics and math. So I think there's an opportunity to do a whole series that is more educational. We have the Elements of AI educational material, so you could take a completely different angle here and focus on getting it out into schools.

Speaker 3:

Or you can do a... what's it called... what's the TV series?

Speaker 2:

Black Mirror. A Black Mirror thing.

Speaker 3:

Where you say that there is no question there's going to be a future with AI; we know it's going to happen. It doesn't have to be as dystopian as Black Mirror. What you did now was: this is today, this is what we see now. But imagine that in the next season you instead say: in 10 years, this is the society we will have, and then you start pointing out different examples.

Speaker 2:

That's very thrilling, the thought of doing Generation AI 30 years later.

Speaker 3:

Yeah, I think 10 years is sufficient.

Speaker 2:

10 years is probably sufficient. Yeah, we're pitching you the pitch. So 10 years from now would be 2034. It would be better to say 2030 or 2040, something like that.

Speaker 3:

And yeah, then we could just extrapolate the different... do some stories like Black Mirror, but both positive and negative. It doesn't only have to be negative like Black Mirror.

Speaker 2:

I need a larger budget though.

Speaker 1:

No, no, no, you can use Sora. Yeah, exactly.

Speaker 2:

That was exactly what came to my mind after having said that I need a larger budget, because I'd have to have actors. No, I don't have to.

Speaker 3:

No, exactly.

Speaker 2:

I just have to be good at prompting it.

Speaker 1:

But to bring it back: if you summarize, there's definitely room for a second season. But I think the core topic is, and I think this is the problem for the whole community: we are so techie, right? So how do we get the tech community connecting with the business community or the public sector? How do we bridge this? The problem is a little bit that you need to take the techie words and communicate with people on their turf, number one, and where their interest lies, number two. So we need to start there and find the story. But then I think we need to bridge a bit more to the "how" topic, and I don't know how to do that. I'd need to be a journalist to do that.

Speaker 2:

Well, then you need a good communicator or a good storyteller. Do you...

Speaker 4:

...remember, in the 90s, when I was growing up, there was this documentary, I don't know if it was on National Geographic or Discovery or something like that, about future prototypes. They had full episodes focusing on, for example, how space will look, how healthcare will look, how education will look. Instead of talking about what it is right now, it was hypothetical: this is how it's going to look, based on the prototypes of today. So, say you take education: you sit down with companies that are already doing AI right now and ask how they envision the future, because those are the pioneers that are changing everything for us. And I completely agree with you.

Speaker 4:

And there is also one thing I think is very important. People basically want their imagination to be aroused, let's call it that, in some kind of way. So the potential of this is very important to sell to people, because that is also nice to look at, right? Black Mirror, many people cannot handle, because people do not like dystopian things.

Speaker 3:

No, but if you make it a mix of both dystopian... How many people would you say...

Speaker 4:

...watch romantic comedies versus scary things?

Speaker 3:

Imagine you have the positive view: you basically have a personalized educational system where people with special needs get so much better help than they do today. But you also have some dystopia: going crazy with the military angle, crazy drones that go and attack specific people, biological weapons happening everywhere.

Speaker 1:

I mean, you could do a mix of both dystopian and utopian use cases, look 10 years ahead and say: ooh, this is actually where we're going. And it can be simpler than that, because you think about the art of the possible 10 years from now while exploring the techniques that are already out there. What you did with ALEKS was exploring the future of journalism. You could do the same elsewhere: look at what tech is out there for education, what tech is out there for, I don't know, elections, whatever. There is so much invention out there already that we don't really have a tech problem; there's enough material to fill up the next 10 years.

Speaker 1:

You could find tech in different areas to paint a picture: if I built a hospital and had all this tech in one place, it would look something like... It's the prototype story.

Speaker 4:

Yes. I would love to see at least one documentary tackling the challenge of loneliness, because if I had all the money in the world right now, I would basically focus on loneliness. It has been argued scientifically that loneliness kills more people per year than many diseases combined. And if you look at the global population, there are cultures where people will never get married because there aren't enough people of the opposite sex for them to marry.

Speaker 4:

Look at our elderly, like my father and so on. Once their spouse dies, they are completely alone in this world. They will be either at home or in what is called an elderly home, where people are lonely. There are technologies right now, have you seen this, where they give them fake cats: basically robots with fur. They could bring the person a normal cat, but they cannot take care of one, so they give the elderly person these cats they can pat, and just that sensation tells you that you are not alone. Because, keep in mind, if you divide the scale of emotions into negative and positive, the most negative emotion is being left alone, and the most positive is belonging to something.

Speaker 2:

Well, obviously there's an AI for that.

Speaker 4:

Exactly right.

Speaker 2:

And there are more or less successful use cases. But I think what's telling is that one of the jobs on AI's hit list, one that surprises people and at first surprised me as well, is psychologist. You would think you would want to talk to a person to get the job done, but think about what a shrink really does: asking questions so that you talk and bring things to the surface yourself, just by being asked the right questions. A good psychologist is not a good talker; a good psychologist is a good listener who asks the right questions.

Speaker 3:

But I would like to flip that. It could actually be a new job called an AI psychologist in the future.

Speaker 2:

Yeah.

Speaker 3:

Because I don't think you can explain these models using traditional technical techniques. You would actually have to employ these kinds of psychological methods to understand why AI models behave as they do. So it could be applied to AI, potentially.

Speaker 2:

Yeah, but what do you mean? Do you mean that the AI psychologist would behave in such a different way?

Speaker 3:

It's a person basically trying to understand: why was ChatGPT being...

Speaker 2:

...evil to that person. Okay, now I understand. You need a psychologist to understand the inner workings of an AI model. That will be a new line of work, exactly. And if I can draw another one, just the last one and then I will shut up: legacy. Legacy.

Speaker 4:

Yeah, I think we as humans fight all our lives to leave some kind of legacy. We plant a tree, we make kids.

Speaker 4:

You build a house; they will sell it, but that is a different story. You write a book. Everything is basically so that we are not forgotten. Imagine one day we are gone, right? I have three kids, and there is enough synthetic data of my voice and my expressions and everything else available that even when I am gone, there is an agent or a robot, or whatever you want to call it at that point in time, to wake them up in the morning: hey, it's time for you to go to school. And all these other things, because emotion is what actually sells. These are the fundamental questions that make us human. Why do I die? Why do I cease to exist? How can I extend my life? Everything we have done until this moment has been to prolong our lives in that sense. It's the search for truth, the search for belonging to something, the search for a living legacy. That would be a powerful thing to see. I would love it.

Speaker 2:

I agree with you, Goran. You're actually talking about episode six of Generation AI, because that's exactly what I touch upon. It's a self-reflection I give in that episode, after having seen how lifelike ALEKS is to me, and thinking: okay, today I can spot the difference, but give it a year or two and I won't be able to.

Speaker 2:

Imagine the same stuff in five years. And then the next thought was: okay, he will live on after I have died, he can still be there. And exactly as you said, Goran, if he reads all my social media posts, if I feed him everything I've left behind that is digitally recorded, will he perhaps be able to think like me, to answer when my kids ask: hey dad, I've now divorced for the third time and I've met this new guy or girl, who is like this and this. Do you think I should give it a go again?

Speaker 1:

How far would his answer be from your answer? I don't know.

Speaker 2:

But that's the question that arose, exactly what you're talking about, you know.

Speaker 3:

Jensen Huang, the founder of NVIDIA, has an avatar basically trained on everything he has said, and it actually has his look, etc. I think a lot of people are going to have that. So when you're onboarding at NVIDIA, you get him as your personal tutor. Imagine the business.

Speaker 1:

The time is flying away here, but I just wanted to hear if Alex had any pitches for us, since we tested pitches on you now from different angles.

Speaker 2:

You talked about the backlog, but do you have anything if you brainstorm on the spot? No, I think we should go off on a tangent from this: think about guests who could talk about the philosophical questions at the intersection of technology and being human. Guests who can answer the question: what is it to be a human in a world of...

Speaker 1:

...AI? I get goosebumps, because now we open up: do we need chief philosophy officers in our companies?

Speaker 2:

Surely that's another job on the rise.

Speaker 1:

You know, because there are such fundamental questions that this is opening up. So: exploring those questions from the human perspective. I mean, we say "AI agent", but have we thought about human agency, and the human perspective on being ready for AI? The whole AI readiness perspective. What is important now for us as humans and as leaders in order to live in this world? I think you nail it if you take the human angle, because then you're fully socio-technical: you're going from the technology into what it means for humans. And now we have many interesting guests.

Speaker 3:

Awesome. Time is flying away, and I'd like to end with our standard ending question, which I think is extra appropriate for you, since you had the "undergången", how do you say, the end of times episode, and the paradise episode. So now asking you, Alexander Norén: the dystopian versus utopian future. If at some point, 10-plus years ahead or whatever, we have AGI, what kind of world will it create? Will it be the dystopian nightmare, the Matrix or Terminator, machines trying to kill us all? Or could it be a more utopian future where humans live in harmony with AI, more free to pursue their passions and interests, perhaps not working 40 hours a week but working when they want to? Where medicine has cures for cancer and whatnot, and we have fixed the energy problem and energy is free? What do you think?

Speaker 2:

Well, we could have it all, both the bad and the good, of course. My take on this is to stress the fact that it will be what we make of it, and not to have a deterministic view, taking our hands off and thinking: oh, technological development will happen by itself, it can never be steered, it will just happen. I mistakenly gave you a metaphor in that direction before, when I said the genie is out of the bottle, because that implies there's nothing we can do. I do think the genie is out of the bottle.

Speaker 2:

We can't put it back, but that doesn't mean we can't do anything. We can be friends with the genie. We can talk the genie into being a good genie and not a bad genie. We can try to nudge it in a certain direction, and that is in our power as individuals, as business leaders, as politicians, as you name it. I would not like to live in a world, anyway, where we give up and say we can't influence our future. That's not the kind of world I would like to live in. Therefore, I believe we can choose.

Speaker 1:

But that really means you must do a second season. Yeah, because you've convinced me. Because think about it: step one, we leave it to technology to take care of itself, bad. We leave it to some tech nerds in Silicon Valley, with no accountability for societal consequences, to invent and use the technology. Who are the other people that need to start acting on this and be part of this conversation now, in order to have a balanced view of what we should be doing? So you really need the second perspective now, of the other people who should be in the conversation. We talked to the tech guys; we really need to talk with the others who are equally important in shaping the future of AI. What's their voice in this?

Speaker 3:

And I think you actually have a big part, or responsibility, here, Alexander, because you have a big platform. If you were to delineate, or really explain, what happens if we get this kind of concentration of power in a few tech companies, where three companies operate the big AI models, what will happen then? Try to paint that picture and really tell that story properly.

Speaker 1:

But you can take that much more pragmatically, micro: what happens when a large-scale Swedish company leaves these questions only to the tech department? Where is HR? Where is corporate governance? What are the other normal perspectives that need to balance what the tech department is doing? I find that super interesting. You can take it on a macro scale or all the way down to the company level.

Speaker 2:

Then I'll end with an appeal to listeners: if you have a good use case or a good practical implementation of AI that is telling of a larger theme or picture, contact me. You'll find me where I can be found, in all creative ways, on LinkedIn, for instance. Just give me a piece of advice on where I should start looking, because it's these stories we need, stories that can take on a life of their own, because the story is what paints the picture. Exactly.

Speaker 3:

Awesome. I'm looking forward already to that season.

Speaker 1:

I can't wait for the next season.

Speaker 2:

Thanks for the enthusiasm. Thank you very much.

Speaker 3:

Alexander Norén, it's been a pleasure. And I hope we can stay on for some after-work, after-camera kind of discussions as well.

Speaker 2:

Maybe. A-W, A-W, A-W. That's right, thank you.

AI and Data Innovation in Conversation
Career Trajectory in Journalism and Economics
AI Documentary Series Format and Content
Navigating the Future of AI
Evolving Role of AI in Society
AI Impact on Job Markets
AI Impact on Education and Media
Advancements in Neuromorphic Computing
The Future of AI
The Future of AI
Exploring AI Themes for TV
Bridging Tech and Business Communities
Future of AI and Humanity
Appealing for AI Practical Implementations