AIAW Podcast

E157 - Human-Centered AI - Paulina Modlitba

Hyperight Season 10 Episode 15

Get ready for a powerful conversation with Paulina Modlitba — MIT Media Lab alum, tech humanist, author, and public speaker with over 15 years in digital innovation and future-facing strategy. In this episode, Paulina will share how her early fascination with technology led her to explore the intersection of AI, creativity, and human values. She will unpack insights from Sweden’s evolving AI landscape, reveal why most AI projects still fail, and explain how ethics, clarity, and optimism can guide better tech development. From demystifying AI for the public to reimagining creative work and preparing for AGI, this episode will offer a compelling vision for a more human-centered future of AI.

Follow us on YouTube: https://www.youtube.com/@aiawpodcast

Anders Arpteg:

It's good in terms of having a bit of cold weather, so the cooling of the data centers could be nice, and green energy, yeah, they love that too. But then the question is: what could the government really do? It's depressing.

Anders Arpteg:

I mean, I think for one we need to do proper investment, right? And if we just take the amount, like 30 million Swedish crowns each, it's not that big. But can you just elaborate a bit more? What was really in the Swedish spring budget, so to speak, from the government? What was it again?

Paulina Modlitba:

Well, the spring budget... I totally understand that they have to focus on... there's a lot of warfare, security, other aspects to consider in the spring budget. So I totally understand that. AI is part of everything now, but AI cannot be the full focus of the spring budget. But I'm still really disappointed, because out of the AI Commission's hundreds and hundreds of pages and really brave suggestions, they picked two, which are not even actually building AI solutions. It's what we Swedes love doing: some sort of assessment of what we could do and how to do it. So let's talk about how we could do it instead of, you know, just getting to it.

Anders Arpteg:

So the money that was awarded, so to speak, in the spring budget, it was not even to build something, it's just for another investigation.

Paulina Modlitba:

Exactly, more or less. Something that was expected was that Skatteverket and Försäkringskassan... I have to sort of...

Anders Arpteg:

The Swedish tax authority, and the...

Paulina Modlitba:

The Swedish Tax Agency and the Swedish Social Insurance Agency. They got 30 million each in the spring budget to build a joint AI workshop, and I thought that meant actually helping public organizations in particular to build AI prototypes. But apparently it's more of a workshop for elaborating and assessing how it could be done, like how a centralized, public AI agency or engine could work.

Anders Arpteg:

So yeah, so it's not even a workshop in terms of building a prototype, it's simply just to investigate how it potentially could be done.

Paulina Modlitba:

Yeah, and that's the only money that was awarded. At least that's what I read between the lines, or in the lines. And then IMY, the data protection agency, got, I don't know how much money, but a certain sum to build or enable a regulatory sandbox to support both public and private companies with the AI Act, which can be really complicated. So that's good, but still, again, it's not actually building and making things happen.

Anders Arpteg:

That's a bit sad. It's good that it got some money at least, but compared to the report from the AI commission right, it's not at all what they were asking for.

Paulina Modlitba:

No. And it's also good that it's a combination of at least trying to make things happen, but also the ethical, regulatory parts of it as well. Yeah.

Anders Arpteg:

I heard someone say it really nicely before. Let me see if I can recall it. You can think about the opportunity cost, what happens if you do not invest in AI. A lot of people say, oh, we couldn't build this and that, but someone phrased it, I think, even better, and it was for healthcare. They said something like: okay, if we invest this much compared to that much, how many lives are being lost because we didn't invest in healthcare services? We could actually save a lot of lives and have the nurses actually performing patient care instead of working with whatever kind of strange systems they have to work with. Right?

Paulina Modlitba:

Yeah, and the funny thing is, or not funny, but Erik Slottner, the minister who's responsible for these matters, he actually emphasized how many millions, if you still want to talk about the money aspect, the numbers, because some people want to talk about the numbers. He's very aware of how much Swedish GDP could be increased if we actually put an effort into building AI solutions, et cetera, and how much the Swedish economy would be boosted. And he still doesn't... yeah, he's still only, sort of, I don't know.

Anders Arpteg:

You can wonder where that is. I mean, if you take the biggest companies in the world, we know they invest heavily in AI, and they do that for a single purpose, I would say: simply because they will make more money and run their business better if they do it. And if you look at the latest investments, like Stargate in the US with $500 billion, that's like the whole GDP of Sweden, more or less, invested in that. So, you know, people are investing heavily.

Paulina Modlitba:

I know, but they have bigger muscles as well. I mean, they have the technology, they have the enormous budgets that we don't have. My guess is that Erik Slottner is probably much more keen on advancing AI than can be seen in the spring budget, but he's probably not able to prioritize it 100% due to wars and everything else that's going on. So it's hard times to get your ideas through as a minister as well, is my guess.

Anders Arpteg:

And one can understand you have to invest a lot in the defense industry right now. And I guess AI is not a super popular topic from a politician's point of view to talk about, right? It's kind of hard to build an opinion around. What do you think?

Paulina Modlitba:

You mean in general or AI in warfare?

Anders Arpteg:

If someone were to make AI a top priority in their upcoming election campaign in Sweden, I don't think anyone would do it, because even if they may believe it could provide a lot of value, it's not something you build popularity on.

Paulina Modlitba:

I know, I know. You have to do it in the right way, I guess, and understand the worries that people have. That's one thing that I always consider when I talk about AI: regardless of audience, even if it's leaders, there are worries people have when it comes to AI and our kids and schools and AI companions and everything. You have to understand the worries as well. So I definitely understand that it's easier for a politician to talk about regulating AI and the worries with AI. But I know that there are positive stories you can tell, especially the type of story that you just told: what could we do with the help of AI? How many women could be saved from breast cancer, ovarian cancer? AI can do a lot and save so many women if we use it, and Sweden is at the forefront when it comes to using AI to prevent cancer. So, you know, talk about the positive stories more.

Anders Arpteg:

Sounds awesome.

Paulina Modlitba:

But that requires politicians who are visionary, you know. Right. And perhaps a bit brave as well. Exactly. And without becoming party political, talking about specific parties, I think we lack that right now in Sweden. We lack visionary politicians who are brave.

Anders Arpteg:

Hear, hear. Couldn't agree with you more. With that, I'd love to welcome you here, Paulina. Let's see if I can pronounce your last name: Modlitba.

Paulina Modlitba:

Yeah, yeah, the hard part is pronouncing it just the way it's spelled.

Anders Arpteg:

Modlitba. Yeah, so that's how you should pronounce it. Okay, good, awesome. But okay, you're a rather famous AI expert, I would say. You're a keynote speaker, you're also writing a book that I'm looking forward to hearing more about shortly, and you've made frequent appearances on Swedish television, and I think a lot of people recognize your expertise and knowledge in AI. So it's with great pleasure we have you here. Very welcome. Thank you. But perhaps first, could you just describe a bit more: who is really Paulina? What's your background? How did you get interested in AI?

Paulina Modlitba:

Born and raised tech nerd. I was born this way, to quote Lady Gaga. My dad was interested in technology. He was self-taught, I mean, he didn't have any education within tech, but he got a PC. He worked at a Swedish public agency, CSN, and through that agency he actually got a PC really early. We had this PC program: you could buy a subsidized PC and bring it home, and he did. So I started playing games on that PC, and eventually I started coding, and I discovered the internet and fell in love with everything. It was just like a perfect match.

Anders Arpteg:

So this was in the 90s or something.

Paulina Modlitba:

Well, yes. I was born in 1980, so perfect timing to actually discover all these things. I probably started discovering computers when I was, like, eight or nine, at the end of the eighties, and yeah, just loved it. But I was equally interested in philosophy and history and all of these topics. I think I aged backwards.

Anders Arpteg:

What do you mean? Aged backwards?

Paulina Modlitba:

Well, I was born an old, wise person who preferred to hang out with grownups and talk about philosophy. But I loved MacGyver, Inspector Gadget. I loved the Ninja Turtles, and April especially. So I wanted to be like a combination of something with tech and robots, but also a journalist. I loved writing. And I couldn't really put these things together in my head, like, what does that mean? What type of job actually allows you to combine these things? But I didn't really care, and I enjoyed it.

Anders Arpteg:

And then you got some education. You also have some education from MIT, if I'm not mistaken?

Paulina Modlitba:

Well, the interesting thing is, this usually happens with girls especially, because research shows that girls who are tech-interested are usually interdisciplinary, like me, interested in many different things, whereas guys who are interested in tech are usually more nerdy, into just tech. So for them, identifying with KTH, with studying technology, is easy peasy. I was super nerdy too and I loved tech, but for me, considering KTH here in Stockholm, the Royal Institute of Technology, was like: no, I've seen these nerdy people in overalls, and that's not me. I'm interested in fashion as well.

Paulina Modlitba:

So I started considering other schools, like Handelshögskolan here in Stockholm, economics, finance, but I felt lost. And then, luckily, a friend of mine recommended this new interdisciplinary program they had just started at KTH, in media technology. I just looked at the brochure and saw them listing design, journalism, but also mechanics, physics, programming, and, yeah, I just fell in love, and I kept falling in love for five years. At the end of that program, it's a long story, I sent an email to a professor at the MIT Media Lab, and I actually got accepted and stayed at the Media Lab for three years and did even more hands-on research and worked as a research assistant. I never actually got a PhD, but I now have two masters in media technology, and my focus has always been on the intersection of humans and robots, or humans and technology, and society.

Anders Arpteg:

Awesome. How did you get into AI then? What's your first impressions of that?

Paulina Modlitba:

So I don't think I understood that it was called AI, but at the Media Lab, definitely, because Marvin Minsky, one of the godfathers, I don't even know if I like that title, but it's being used broadly, he founded the first AI lab at MIT, and he was a friend of the Media Lab. He loved to hang out and play piano at our Christmas parties, and some of my friends who were students at the Media Lab collaborated with him in research projects, so he was there a lot, and so I heard about his work. But back then, this was 2005, we were in the midst of an AI winter, so we didn't really know that on the West Coast, obviously, Google and these companies were already starting the projects that would become the breakthroughs for the AI and deep learning that we're using now. For us, we were in the middle of an AI winter, and so I was admiring Marvin Minsky and the rest of the AI people, but we also viewed them as a bit, you know, delusional, because they had all these ideas of what AI could be and could do, but they hadn't really figured out how to actually do it in real life. So yeah, that's sort of the first encounter I had.

Paulina Modlitba:

Eventually I got involved in a research project with a super cool woman called Rana el Kaliouby. She's a well-known name in the US; she has her own AI pod now. When I got to the Media Lab, she had just finished her PhD at Cambridge, I think, and joined the Media Lab for a postdoc, and she did research on how AI and facial recognition could be used to support children with severe autism in reading other people's faces and facial expressions, and I got to be her research assistant.

Paulina Modlitba:

So that's the first time I got hands-on experience of actually using sensors and data and, to a certain extent, applying AI to it. And she eventually, long after I was gone, commercialized that product, not targeting individuals with autism, but using facial recognition to understand how people react to commercials and stuff like that. So a more, yeah, commercial use.

Anders Arpteg:

Awesome. And then you moved on. You also have your own company, or companies perhaps?

Paulina Modlitba:

Yeah, no, just one, which is not... you know, it's not a scalable business, or at least I'm not trying to scale it. I am, to a certain extent, scaling it now with the help of generative AI, maybe we can get back to that. But it's a one-woman consultancy, so I... We should be friends, right?

Paulina Modlitba:

Yeah, we should be friends. I founded it 11 or 12 years ago, after working for a couple of years in both large corporations in the media industry and in tech startups. Then I started my own business, and I've been doing that since.

Anders Arpteg:

Awesome. Can you give some example? What do you do at your own company?

Paulina Modlitba:

It's so hard to explain, but I guess the easiest way to explain it is: I'm a futurist. I analyze trends, I look for signals, and I try to prepare for them, and I package that knowledge. I'm still just as curious as I was as a five-year-old, and I ask questions and I read and I listen to podcasts and I just gather all this information. And I package it in many different ways. So I have a newsletter, with almost 5,000 subscribers, I think 220 paying subscribers, with insights, and then I also package it as talks and as courses, AI courses.

Anders Arpteg:

What's the name of the newsletter, by the way?

Paulina Modlitba:

Well, it's a Substack, so I think it's... I haven't done the packaging, the branding, that well, to be honest. I think it's called Pulse Plings, Transponding or something. I started out with bilingual ones, a Swedish one that I auto-translated, with the help of AI, into an English one. But the translation turned out to be still so bad that I ended up spending like eight more hours to actually find the right language and tonality in English, and I still couldn't actually find the English audience.

Paulina Modlitba:

So I decided to focus on the Swedish one to start with and put all my energy into that. So: courses, talks, and I moderate a lot of conferences. I like meeting people and using my knowledge and sharing it in order to make people more excited about the future, and also to make it more tangible. Just like you, you know: where's the actual value in this, and how should we prepare, and what's the mindset? So people mostly pay for my energy and, I guess to some extent, my positive...

Anders Arpteg:

And your expertise.

Paulina Modlitba:

Yeah, and my expertise, yeah.

Anders Arpteg:

You actually were recently at this event at Rigoletto as well. Yeah. Can you just describe it a bit more? The name was AI Unlocked, right?

Paulina Modlitba:

Yeah, we tried out this, or we... Per Klingfeld, my friend and former colleague, or one of my clients, he actually wanted to try out a half-day event. People are busy, maybe you don't have time for a one-day or two-day conference, but a half day with focus on both theory and trying things out, an actual workshop on your phone, to make it as accessible and easy as possible. He asked me and two others, David Fendish and Amr, two amazing guys, they have a podcast together, to talk, and then Per had a one-and-a-half-hour workshop, just next level, beyond the most obvious prompts and settings, you know, the memory settings and using projects in ChatGPT, how to level up the way you use these chatbots.

Anders Arpteg:

So, teaching how to use the chatbots. Yeah, exactly.

Paulina Modlitba:

A lot of people are still sort of figuring that out and love to get hands-on examples of what you can do.

Anders Arpteg:

And you do need to train it, right. I mean, some people think you know how hard can it be, Just prompt it and it works. But it's not really the case, right, If you want to do it properly.

Paulina Modlitba:

Exactly, yeah, yeah.

Anders Arpteg:

Okay, so it was a set of AI experts there at AI Unlocked, and you had a workshop where you went through...

Paulina Modlitba:

...how to work with AI in different ways, yeah. So my scope was slightly different from what I usually have. I usually have this humanist, overall perspective on AI, and why we should consider it more of an opportunity than a threat, those types of things, but also: this is what you can use AI for if you want to use it for innovation, business development, getting out of your box, and how it will change the way we lead companies and the way we work with each other, stuff like that. But this time Per actually asked me, because I was the moderator as well, to open up by painting more of a picture of where we are in Sweden.

Anders Arpteg:

So AI in Sweden, the current state of AI in Sweden.

Paulina Modlitba:

Exactly. Because we know that we're hesitating and we're lagging behind so much. Why is that? What stats do we have? And do I have any ideas of how to change that? The current state of AI... it's sad. No, well, it is what it is.

Paulina Modlitba:

It is actually positive in the sense that more and more people are using AI. On an individual level, at least one third of all Swedes are using it regularly, and within businesses, we've gone from maybe 10% of businesses using it to 25% or more. So there is an increase, and a pretty obvious one. But I think only 3% actually trust AI platforms, and the way they use our data, and the decisions they make based on the data they have.

Paulina Modlitba:

And you know, so we definitely... we're keen, in Sweden we definitely want to use AI and we see the benefits, but we don't trust it. And Swedes are, I think, the country within the EU that has made the most detailed, bokstavstrogen, true to the letter, interpretation of the data protection law, for example. So we do everything right. We're like the Germans: we read the regulations and we follow them, all of them, and I think that goes for AI as well. We want to be certain: this is an American company, is it really okay for us to use it?

Paulina Modlitba:

We're very careful, and we want to see proof before we actually try it out. We don't take risks. So that's what's mainly holding us back.

Anders Arpteg:

So you would say Sweden is not as good as other countries in Europe, and I guess the US and other parts of the world as well, at finding value from AI? Or what's your assessment?

Paulina Modlitba:

No, because we haven't gotten there yet. You have to try it out. You have to get past building prototypes to actually find the value and see it hands-on. That's my background from the MIT Media Lab: learning by doing is definitely something that I still preach.

Anders Arpteg:

Yeah, that rings so many bells for me as well. Perfect, I love that you said that. Cool. Any other highlights you'd like to share from the AI Unlocked event?

Paulina Modlitba:

I liked David Fendish. He had this really provocative title for his talk: AI Sucks. He and Amr, they're like this duo, I guess, where Amr is very positive and emphasizes the positive aspects of AI, whereas David, he's been, like you, working with AI for the last 25 years, so he's a little bit more realistic when it comes to what AI can and cannot do. So he had this amazing talk about what the limits of AI are, and I think it's just as important to understand that, and not just overhype AI and see it as either something very dangerous or something like a superpower that can solve all your problems in a day, but to understand what the limits are. And he did it in a great and very humorous way.

Anders Arpteg:

Well said. We really need to understand what AI is good at and what humans are good at, which usually are different things.

Paulina Modlitba:

And he, I don't even know what the English word is for this, but he de-dramatized, avdramatiserade, the whole topic of AI, sort of making fun of it to make it less threatening, I guess.

Anders Arpteg:

Yeah, that sounds awesome. And speaking about AI Sucks, I love the title of your upcoming book.

Paulina Modlitba:

Can you actually say it in the podcast or will there be like a beep?

Anders Arpteg:

No way, it's no problem, so we can say it easily. You're working right now on a book, right? Yes. And can you just share what the title is going to be?

Paulina Modlitba:

So in Swedish it's Vad fan ska vi med AI till? And I wasn't even sure what the English version was, but in our emails, what title did it get? What the fuck do we need AI for?

Anders Arpteg:

What the hell do we need AI for? It's awesome. I love it yeah.

Paulina Modlitba:

You know, there are several parts to it. Number one is, it just came to me, and number two is, obviously, I want to stand out. There are so many AI books now, and I don't want to be just like... You know, when I'm meeting top leaders within a company, CEOs, CFOs, et cetera, I'm always me, and I wanted the book to reflect that. I want people to take my knowledge, my experience, my expertise seriously, but at the end of the day, you know, I'm just a human being, yeah.

Anders Arpteg:

Paulina, what the hell do we need AI for?

Paulina Modlitba:

Nothing really.

Anders Arpteg:

I don't think you believe that. Okay, yeah.

Paulina Modlitba:

As with many other things, I'm very much in between, so I'm definitely not one of those evangelists who are like we need AI, no matter what.

Paulina Modlitba:

Like, just use as much AI as we can, you know, implement it everywhere, the big blessing.

Paulina Modlitba:

I'm not religious when it comes to AI.

Paulina Modlitba:

I definitely think we have to be more aware of why. Given the state of AI technology right now and the effect that it has on climate change and our planet, we have to become smarter, for many different reasons, about when we use AI and when we don't, and why, et cetera.

Paulina Modlitba:

But when there's a reason, when it's motivated to use AI, it really can do amazing things. And whenever I feel like, oh my God, why am I doing this again, I always go back to reading about what Google DeepMind, for example, is doing in advancing everything that we know about the human body, research that will be so important when it comes to developing new medicines. And open source: I'm a big fan of open source and of making knowledge and data and insights available, so that people can continue that path of advancing that research and applying it to completely new areas as well, not just medicine or proteins or whatever.

Paulina Modlitba:

I guess you're referring to AlphaFold. Yeah, an amazing thing they did there, and the Nobel Prize, I'm super happy about that. So what the hell do we need AI for? We definitely need it to get us, as humans, out of some of the really hard problems that we've put ourselves in. Climate change: we need AI there, too, to simulate catastrophes and understand what changes actually have an effect on the planet, and what type of effects they have.

Anders Arpteg:

I can recommend a site. It's called climatechange.ai, and they list a lot of use cases where AI can help fight climate change. So there are so many use cases there as well.

Paulina Modlitba:

So, to help us survive and thrive. And a lot of challenges are due to the fact that there are so many of us right now. Pandemics spread quicker because there are billions and billions of us, so we can use AI for that. And cancer: we get older, we get sicker because of that, so we need it for a lot of things. But there are beautiful things in our everyday lives that we can use AI for too, and that's more what I focus on in the book, and the main reason...

Anders Arpteg:

You focus more in the book on what?

Paulina Modlitba:

Stuff that, well... you and I might, but the everyday person may not be thinking about how proteins fold, so that type of AI research might not make a huge difference in their daily lives. But I want to focus on the use cases and show the positive aspects of AI that actually do affect people here and now, everyday activities in some way, and how AI can help with that. I focus mostly on the business person in you, so it's a lot around business development and innovation.

Paulina Modlitba:

But I also start the book with actually showing that AI can be used for, and is being used for, preventing suicide in the subway here in Stockholm right now. And again, my favorite example, with breast cancer: women have struggled with being taken seriously within healthcare for so many years, and now suddenly AI is actually advancing healthcare for women, which excites me a lot. So those types of use cases. And actually arbetsplatsolyckor as well, so accidents...

Anders Arpteg:

At the workplace.

Paulina Modlitba:

Workplace accidents, where you can use AI to analyze patterns and prevent them.

Anders Arpteg:

So those are the types... Yeah. What the hell do we not need AI for? I mean, I think you phrased it well, saying that some people believe AI can be used for anything, and you can just use ChatGPT to solve whatever problem there is.

Paulina Modlitba:

And.

Anders Arpteg:

I think that's wrong, and it seems like you say the same here. So are there some things that AI should not be used for?

Paulina Modlitba:

That's a great question. I just realized, shit, I should have dedicated a chapter in my book to that very question, to actually set the limits for what AI should be used for or not. Well, you probably noticed, and I think this is interesting, that a lot of people have used generative AI to make action figures of themselves, and, you know, Studio Ghibli characters out of different photos. I'm not saying we shouldn't use AI for that, I'm pretty liberal, but considering how much energy it consumes, it's not the top priority. On the other hand, I think it's super important for people to explore their creative side too. So I'm divided there. One more clear answer, one more specific thing that we definitely shouldn't use it for: as with many things, women are often the target of the backside of many of these developments.

Paulina Modlitba:

So 99% of all sexual deepfakes, porn deepfakes, have women as victims or targets or objects. I saw an example: there's this deepfake generator where a female journalist tried to generate a nude picture of a man and was told, unfortunately I can't do this, I haven't been trained on enough pictures of male naked bodies. Which says everything, you know. So AI is being misused a lot when it comes to these things. That's obviously the backside of everything.

Anders Arpteg:

Yeah, so many negative aspects. And I guess warfare and cyberattacks, and if you want to develop a new virus or coronavirus or something. There are so many examples of horrible things AI could be used for.

Paulina Modlitba:

Yeah, almost all positive examples have a negative, sort of Jekyll and Hyde dynamic to them, I guess.

Anders Arpteg:

Yeah, I saw this, I think it was a paper in Nature a couple of years ago, and they spoke about the UN development goals. They have defined 17 goals, and for each of those...

Anders Arpteg:

They have 169 targets that they want to measure, and they tried to see whether AI is potentially contributing to reaching those goals, or actually hindering reaching them. I don't recall exactly the numbers now, but I think in about 70% of the cases it actually was a positive contribution. And in some cases, like all the data centers that we're seeing being built now, they actually consume so much energy, so in some sense they are not contributing to a more sustainable society. So there are pros and cons, but in general, more positive. Is that your view as well?

Paulina Modlitba:

Yeah, it's definitely net positive, I would say, especially with factors like climate, stuff that we won't be able to solve without the help of AI. It's too extensive, it's too complicated. We can get so much help from AI in understanding our problems and simulating solutions before we actually try them out.

Anders Arpteg:

Yep, yep. And I know you're interested in philosophy, and so am I, so in the ending part of this podcast we go more philosophical, and I'd love to hear your thoughts then.

Paulina Modlitba:

Bring it.

Anders Arpteg:

Perhaps I'd just like to hear, I mean, you also appear a lot in national Swedish media, et cetera. So I'd just like to hear a bit more about your experience of that, any challenges, or how you think, when you appear on TV4 or similar channels, about being as valuable as you can, so to speak.

Paulina Modlitba:

I would say there are probably two things, at least, that I consider. One is that it's a huge responsibility to talk about AI to the public. I'm in New York tomorrow, so just anyone, my mom, friends, somebody's grandma, is watching. It's a huge responsibility to talk about AI, especially since people have different experiences of AI, different knowledge levels. And I'm still a bit upset when it comes to Max Tegmark's summer talk, Sommarprat, yes, back in 2023.

Anders Arpteg:

And perhaps we just need to give some background to understand.

Paulina Modlitba:

So for those of you who are not Swedish: in Sweden we have this really sacred tradition where we let celebrities, some more famous than others, talk for an hour or an hour and a half, with music, one person per day between June and August. So it's a summer thing, and people love to just lie on the grass or on a couch and listen and sip a glass of wine or whatever they do. And so two years ago Max Tegmark spoke, I think for the third time, and I usually open my talks by referencing his summer talk, because he had just lost his mother and his dad was already dead. I think it's super important to consider the fact that even the most knowledgeable AI godfathers or experts are people, human beings, and I think somehow that phase in his life, where he's suddenly, well, not dead, but actually getting closer to the end of his life.

Paulina Modlitba:

He had this really, I don't know, really dark view on AI, a dystopian view, about AI taking over during his lifetime, and that we'd better prepare for it. And I think, you know, just dropping that, just saying that on public radio.

Anders Arpteg:

Exactly. I mean, I think he said something like he doesn't believe that his son or child will survive, because AI will kill the world in some way. I guess you wouldn't like to say that on public radio, you wouldn't do the same, right?

Paulina Modlitba:

No, I wouldn't. Even if I believed that, which I don't, I wouldn't say it like that, because I would want to make sure that people have enough knowledge about AI to make their own decisions and filter whatever I say. I would definitely give them the context of where I come from. That's why I love the fact that you actually asked me about my background, how I actually fell in love with AI, what's my take on AI. Because my truths about AI are filtered, or colored, by that story. And so we have to understand Max Tegmark to understand why he talks about AI in this way. And yeah, it made me upset.

Anders Arpteg:

So you don't speak like that in media. Do you have any way of thinking about how to speak about AI in media, then? What is your preferred way?

Paulina Modlitba:

Yeah, I tend to talk like 30% about what the problems are, so I definitely don't want to be naive either.

Paulina Modlitba:

I want people to be aware of the dark side, the flip side of the coin, so to speak, but mostly it's like 70% positive and 30% the challenges, and I think that mix is usually good. Most people leave my talks with a net positive feeling around AI. And the most important part: learning by doing. I can't stress enough how important I think it is to actually get your hands dirty, to de-dramatize the whole thing. If you try these tools out, if you learn in what ways AI sucks, and allow yourself to laugh a bit at how stupid AI can be sometimes, but also see the powers and the value for you, it suddenly becomes something much less scary and much more positive. And so I keep coming back to this: agency is super important. I want everyone to feel agency, and the fact that it's not too late. You can actually shape the future of AI. You don't have to be Max Tegmark to shape that future.

Anders Arpteg:

You can do it, yeah. And given how old I am as well, I've seen AI being even stupider than it is today. But you're not that old, you're not Marvin Minsky old.

Paulina Modlitba:

No, he's dead now.

Anders Arpteg:

Okay. But still, you know, I get surprised every time, saying okay, AI can do this, and I'm super positive. I know it makes mistakes sometimes, but I'm still super surprised how well it works. And so many other people are so angry that AI can't do X and Y and Z, so instead they're negative, saying, I thought AI could do everything, and then they become a bit disappointed.

Paulina Modlitba:

And that bar is so low. I still hear people say, like, I Googled myself and it didn't know when I was born, it got the wrong city, and that's when I decided not to use AI anymore. And I'm like, oh, you know, it's not perfect, but it's good enough.

Anders Arpteg:

Yeah, I think in media as well, AI has a tendency to be portrayed very negatively in general, especially in the news articles being written. It seems like they always want to find an angle that focuses on the negative and the potential for abuse. So I guess in your case you're trying to balance that a bit and not be as negative.

Paulina Modlitba:

But I also understand that the negative stories sell, and that's why many of the AI prophets or AI gurus or godfathers tell these dystopian stories. Because if you build worry in society around the topic of AI, you are the person they will turn to when they need comfort and better stories: oh my God, are we going to die, can you tell us? So it's a way for these AI experts to build an ecosystem where they're needed and their truth about AI is needed. And I just don't work that way. If anything, I want to be needed because I contribute with solutions, not the worrying part. I think it's unethical.

Anders Arpteg:

Well, awesome. I congratulate and thank you so much for taking that kind of stance and speaking about AI in a balanced and positive way, which so many people and media outlets are not doing. So, great. I also know, Paulina, that you work a lot in different types of advisory roles for companies and decision makers. If you were to look at the challenges different companies have in finding value from AI, what would you say are the main ones? It's a big question, but still.

Paulina Modlitba:

It's definitely something that I talk about in my courses, but also in my book, so I should know the answer to that.

Paulina Modlitba:

It's usually that you either start too big or too small, but not in between. You either have top leadership that thinks AI can solve everything, let's do AI, everyone has to do AI, and you don't really have a plan for what that means, whether it's actually creating value and whether it's actually tied to your strategy, your business proposal. So it's just, let's do AI. And usually you start out too tech-oriented as well: let's collaborate with this big AI company that provides this platform, and that will solve everything for us. And then it turns out you don't have the data for it to actually find the value in your data.

Paulina Modlitba:

It's definitely the wrong place to start. Or you start small. You're too afraid, you don't want to use your own data because it's not safe enough. You want to do a prototype, try out a sandbox, a workshop, and you're happy with that, but you don't actually have a long-term plan for how to scale it, how to make it come alive. You're content with your prototype and with being innovative, but that's not good enough. So I've seen both, and not that many companies that thrive in the middle.

Anders Arpteg:

Yeah, I'm so glad to hear you say that, because that's one of my core messages as well.

Anders Arpteg:

I see so many companies that may be able to build a prototype, maybe even using their own data, and they see: oh, it could potentially work, it seems to provide some value. But they don't understand the complexity of actually putting it into production and building a real product out of it, where it starts to create proper value, which is probably 100x more work than building the prototype itself.

Paulina Modlitba:

Yeah, and I don't want to sound like a business consultant, but you do have to tie it to your actual business proposal, and it is a people matter as well. That's the truth. So, not understanding the change, the mental, human change that goes into this, the mindset that has to shift, that's probably a big mistake as well. You can't just press the let's-go-AI button and expect everyone to go, yay, this is amazing.

Paulina Modlitba:

I heard somewhere, I think it was futurist Ian Beacraft. I heard him at South by Southwest a couple of months ago, he usually keynotes every year at South by, and he had this slide about how companies need to invest just as much in education and the human transformation as they do in tech. So look at your tech budget, double that and some more, and that's your people budget. Because especially when it comes to AI, it takes a lot.

Anders Arpteg:

So it's not sufficient with just the tech, right?

Anders Arpteg:

The change management, getting the business to actually start adopting it, getting people onboarded and upskilled and everything, it takes a lot. Some people say the tech is around 30 to 40% and the change management and the people work is about 60 to 70%. Would that be accurate, do you think?

Paulina Modlitba:

Yeah, but I don't want people to take the tech part too, what do you say, lightly either. It is important to get it to work properly. One thing that we're really bad at, especially when it comes to the public sector: we have all this data in Sweden, but we constantly fail to collaborate around our data. We don't want to share it, we're really protective. And so, again sounding like a guy in finance or a McKinsey consultant, sorry, McKinsey, but actually doing your data work, your infrastructure work, making sure that your data lake and everything is synced, is so important too. And I think that's why I don't want to say tech is 20 and the human aspect is 80. It's probably more like 50-50, I'd say.

Anders Arpteg:

I'm so glad to hear you say that. It's so easy to say it's just about the people, but in reality going from prototype to product is actually very complicated.

Paulina Modlitba:

Yeah. I meet startups, and I advise startups as well, and when you build your whole business, your idea, on a certain data set, on a certain AI application on some type of data, I've seen many startups actually failing because they realize they don't have the data set they need, they can't buy it from anyone else, it's not good enough, and so the whole idea fails because of that. If you're a big organization with a lot of different data sets you're not as vulnerable, but if you're a smaller company building your business on a certain type of data, you'd better do your homework. Yeah, the data part is probably even more important.

Anders Arpteg:

Right. And speaking about data, there are a lot of challenges there. I've met a lot of companies that say, we have so much data, that's not a problem, and then in reality, if you just want to organize it and clean it up and get it into a state where you can start using it, it's surprisingly difficult, and so many fail simply at that. But then we also have the regulatory aspect of this, and a lot of people and companies are super afraid of both GDPR and the AI Act, and the potential personally identifiable information they could get sued over. What do you think? How big is the impact of regulatory fear?

Paulina Modlitba:

In Sweden it's huge, again because we take it super seriously. So I would say it's really stifling innovation and progress.

Anders Arpteg:

Yep, agreed. And in some sense, I think so many people believe you're not even allowed to use personal information. But you are. You just have to be compliant and you have to do it right.

Paulina Modlitba:

Yeah, and I think it's stupid, to the extent that there are ways to build secure solutions. To a certain extent I just think we use it as an excuse, because we don't want to do it. It's easier to say no than to say, yeah, let's look into this, do our homework and see how we could solve this in a secure way, a compliant way. So I think that's the problem in Sweden: it's so much easier for us to lean towards, I don't know, so I say no, rather than, I don't know, so I'll look into it.

Anders Arpteg:

Yeah, well said. And I remember there was a law professor in Uppsala, I believe, who said it really well. This was in like 2018, when GDPR had just launched, and she said: in reality, the problem is not being compliant, the problem is that people don't know how to be compliant. So it's the uncertainty of the regulation that causes the problem, not actually the regulation, if people just knew how to do it. Do you agree with that?

Paulina Modlitba:

Yes, and I see that all the time. I have clients, different clients, within the public sector, for example, and they've all made different interpretations of whether they can use American platforms or not. And so the question is, why is Stockholms stad using Zoom for their meetings?

Paulina Modlitba:

Whereas this agency that I also have as a client, they're using Skype, which is soon to be obsolete, it's not even being used anymore, but that's the only platform they've considered safe enough. So that's why I'm super glad, I know I make fun of the different sandboxes and the small AI budget in the spring budget, but I'm glad that they're looking into centralizing interpretation, because this is my understanding of what the regulatory sandbox will be doing: actually centralizing the interpretation of the AI Act. It takes so much time, and it's so unnecessary and stupid to have every agency and organization within the public sector do their own interpretation and dig through all these documents. It's just stupid. So we definitely have to be more efficient and centralized when it comes to that.

Anders Arpteg:

But how do you do it? I mean, I've spoken to a lot of legal people and professional lawyers, and you have to respect their role as well. They're basically responsible for making sure they're not breaking any laws with whatever data they have, et cetera, and of course they have to be safe rather than sorry, to some extent at least. I heard a lawyer, he or she, put it really well: we first have to recognize that laws like GDPR and the AI Act always involve a level of risk.

Anders Arpteg:

It's not a question of being 100% safe. No one can ever be 100% safe. Even if you do the best compliance work, everything documented, all the risk analysis done, you can still get sued. So the question is rather one of risk management: what is the proper level of risk you're willing to take, and how can we minimize it? Of course, there's no way to be 100% safe, because then we'd have to throw away all the data and basically turn off the internet and whatnot.

Paulina Modlitba:

I think that's a healthy way of thinking, right? And yeah, the EU AI Act has been designed based on different risk levels. I don't know how much you know about it, and it's definitely not my area of expertise either, but I have a short chapter about it in my book, because you have to.

Anders Arpteg:

About the AI Act specifically?

Paulina Modlitba:

Well, yeah, the risk pyramid. And I really like the idea, the fact that different companies, different organizations are judged differently depending on how much risk is involved in what they do. So companies that are handling sensitive data tied to specific individuals or members of society should be judged harder and regulated harder.

Anders Arpteg:

So I really like that idea of the triangle, applying different regulatory frameworks to different types of organizations and companies. And yeah, we have spoken about the AI Act a number of times on this podcast, so let me try out an idea on you and see what you think about it. Okay, it's risk-level based, so the regulatory burden, so to speak, depends on the potential risk of harm it can cause, which in itself, I guess, is a good thing. For most activities using AI, they claim, it's actually low risk or no risk, and then there's very little work you have to do. And then for other things, like if you want to manipulate human thought, that's high risk and should be regulated much harder, or even forbidden, exactly.

Anders Arpteg:

Right. So the question then is: I think I saw an early version of the AI Act where they spoke about chatbots specifically, and they took the whole technique, just categorized it as a single chatbot thing, and said chatbots are low risk, I think they said in this case.

Anders Arpteg:

Yeah, it is. But then you think: okay, is it really the chatbot as a technique that should be categorized at that level, or is it the application? If you just take chatbots, you can use them for, say, customer service. Okay, the harm level there is super low, right? But then if you use a chatbot as a therapist for children, suicidal children perhaps, that is very, very high risk. So I think it's very dangerous when you take techniques like chatbots, or even AI in the extreme, and then have a separate law based on a technique instead of the use case.
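To make the point above concrete, here is a minimal, hypothetical sketch of a use-case-centric risk lookup. The four tier names (minimal, limited, high, unacceptable) are the AI Act's actual categories, but the `classify` function, the example use cases, and the tiers assigned to them are illustrative assumptions for this sketch, not the Act's legal text:

```python
# Toy illustration: under a tiered framework like the EU AI Act's,
# risk should follow the *use case*, not the technique.

RISK_TIERS = ["minimal", "limited", "high", "unacceptable"]

# The same technique ("chatbot") lands at very different risk levels
# depending on what it is used for. These assignments are examples only.
USE_CASE_RISK = {
    ("chatbot", "customer service"): "limited",
    ("chatbot", "therapy for minors"): "high",
    ("chatbot", "subliminal manipulation"): "unacceptable",
    ("spam filter", "email triage"): "minimal",
}

# Sanity check: every assigned tier is one of the framework's tiers.
assert all(tier in RISK_TIERS for tier in USE_CASE_RISK.values())

def classify(technique, purpose):
    """Look up risk by (technique, purpose); default to 'minimal'."""
    return USE_CASE_RISK.get((technique, purpose), "minimal")

# A technique-centric rule would give one answer for all chatbots;
# a use-case-centric rule differentiates:
print(classify("chatbot", "customer service"))    # limited
print(classify("chatbot", "therapy for minors"))  # high
```

The same technique appears in three different tiers depending on its purpose, which is exactly why writing the law against the technique rather than the use case breaks down.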

Paulina Modlitba:

Yeah, definitely. The EU AI Act shouldn't be tech-centric either; nothing should be tech-centric when it comes to AI. It's the purpose, the risks that are tied to it. I totally agree.

Anders Arpteg:

Yeah, and then the AI Act claims to be use-case centric. But I think, when you actually read it, it is not. It's very tech-focused.

Paulina Modlitba:

And I think it's interesting. Again, my area of expertise is not the regulation of AI, even though I'm interested in it and try to stay up to date. But in one of my AI courses I have a guest lecturer, Emily Terlinder, who's an AI lawyer, basically working with the AI Act on a daily basis. And she says that, for the first time, in a completely different way than when GDPR was introduced, she's seeing a shift in mindset already happening. People in the EU Commission in Brussels are already reconsidering the AI Act, because they can tell it was designed by people who don't fully understand this. And might be, yeah, I'm going to get stuck on this topic.

Paulina Modlitba:

So it actually might change. It might, right? Yeah.

Anders Arpteg:

We actually spoke to, or had a talk with, a person in the AI Commission here. He got the question: what do you think we should do with the AI Act? And he said, we should deregulate it. What? I said, you're crazy. But he was actually a bit serious, and he didn't mean remove it all, of course, but that we potentially need a simplification of the regulation. And I see a lot of movement now in the EU, and France especially, I think, is driving this a lot, also because they have a commercial interest in this, pushing, lobbying for that to happen.

Paulina Modlitba:

But one part that I do like, and I usually ask this when I give talks, whether people know that this is happening: the EU AI Act is being implemented incrementally, so bit by bit, and one of the first parts, implemented at the beginning of February, is AI literacy. So any company, high risk, low risk, everything in between, has an obligation to actually make sure that everyone in the organization has a fundamental AI knowledge, a base of AI knowledge, I guess. And not that many people know that.

Paulina Modlitba:

I usually try to explain it in this way: you're not going to be fined, the EU is not looking into and doing inquiries to see if you have enough AI classes in your organization or if you educate people enough. But if there's an incident, something serious happens and they do an investigation into it, and it turns out that it happened because of the human factor, because people in your organization didn't know how AI works, then that's your responsibility, and that's a problem.

Anders Arpteg:

Yeah, super interesting. Cool, we could speak forever about regulation, I think. It may not be the most fun topic, but I think it's an important one, right?

Paulina Modlitba:

It is.

Anders Arpteg:

Yeah.

Paulina Modlitba:

It's time for AI News, brought to you by the AIAW Podcast.

Anders Arpteg:

So we usually take this small break in the middle of the podcast to speak about some recent news events that we've heard about. I have a topic I'd like to talk about, but do you have something that you'd like to bring up, Paulina?

Paulina Modlitba:

I do, and I'm looking it up. It's an article in The New York Times. I have a newsletter, and I'm actually diving deeper into this article in the newsletter I'm sending out to my subscribers tomorrow. Kevin Roose, who also has this podcast, Hard Fork, that I love for staying updated on AI, talks about the fact that he has noticed the discussion around AI slightly shifting towards a sort of post-AGI discussion: whether having AI models that are aware means we should start considering how AI models actually feel. You know, are they okay?

Anders Arpteg:

Care about the feelings of the AI system.

Paulina Modlitba:

Exactly, and what type of rights. I mean, the discussion about AI systems having rights but also obligations is not new, but now it's really happening, in this whole AGI-or-not era. And the reason he wanted to write about this is that one of the biggest AI companies, Anthropic, they have Claude, for example, the chatbot, actually hired a person, I can't remember the title of the role, who's solely responsible for the welfare or the well-being of upcoming aware AI models. So whereas we're still discussing whether AGI is even possible, these companies are already preparing for it. Some of them probably already have an AGI-like model hiding somewhere, and that's why they have to prepare for it.

Anders Arpteg:

But I think, you know, it's a philosophical, ethical topic. Yeah, absolutely. And I think Anthropic is a very impressive company. It has a lot of ex-OpenAI and Google people in it.

Paulina Modlitba:

Yeah, kyle Fish, ai welfare researcher. Yeah.

Anders Arpteg:

Awesome. I mean, they really focus a lot on the safety of AI, and they also had this awesome paper recently about tracing the thoughts of AI, trying to understand how it actually works inside, and showing that these models do reason in certain ways, different from humans, but still reason in some ways. I think they do a lot of really impressive research. I haven't read this one, but it sounds really cool.

Paulina Modlitba:

It's pretty interesting. And he also, of course, brings up the fact that it's slightly provocative, because we haven't really figured out how to develop AI systems that serve our own welfare and well-being, or agreed on that. So should we really focus and spend time and money on researching how our AI models thrive? But it's happening, obviously.

Anders Arpteg:

You would think you would primarily focus on the human safety, but I guess they do a lot of work and research on that as well.

Paulina Modlitba:

Exactly, and I guess they go hand in hand to a certain extent. And there's this other research project that is mind-boggling, where a couple of researchers tested an AI model, telling it to react emotionally, more like a human, while providing therapy to people bouncing their problems or suicidal thoughts off it. After a number of sessions they asked the AI model: so how are you doing now, compared to before these therapy sessions? What's your emotional state? And they could actually discern that these AI models were feeling worse, actually absorbing the negative energy and dark thoughts of the patients. So yeah, this is definitely something that we will hear more about, provocative or not.

Anders Arpteg:

Do you feel sorry for AI systems today?

Paulina Modlitba:

I do, and before starting this recording we actually talked about this, you and I. I have my background in human-robot interaction, and part of my research was realizing how little we need, how easy it is to build robots that are human-like enough to make us feel feelings and empathy towards them. So one example is this robotic trash can that ran around in a restaurant and picked up trash, or asked people to throw their trash in it, and people were like, oh, it's a pet, it's our friend. So yeah, I do too, I definitely do too. And, like, yeah, it's putting human feelings on things.

Paulina Modlitba:

I feel sorry for a lot of things, you know, even non-living things. I can feel sorry for the car or my motorbike at some point, when it's cold.

Anders Arpteg:

You know it's a machine, but still you can get some kind of feeling for it. But I mean, that's one thing, that you as a human can get emotional about machines.

Paulina Modlitba:

Yeah.

Anders Arpteg:

But then the other point is at some point perhaps we should give it some kind of rights.

Paulina Modlitba:

Yeah.

Anders Arpteg:

I guess we're not there yet, or what do you think?

Paulina Modlitba:

No, we're not there yet. But there are definitely all these questions that we need to prepare for. I mean also the whole workforce aspect, robots or AI replacing humans, and the whole discussion around tax, who's creating value and who has to pay tax and who doesn't. But also rights, I guess, and robots having rights. Yes, it's such a multifaceted question with so many entries into it. Yeah, it's complicated.

Anders Arpteg:

Human rights and machine rights. Those are interesting topics for tomorrow.

Paulina Modlitba:

Yeah.

Anders Arpteg:

I had perhaps a bit more technical piece of news, but I think, you know. Yeah, go for it.

Anders Arpteg:

It's actually from ByteDance, the TikTok company. They released a new open-source model called UI-TARS, I believe it was, and it's this kind of agentic model. I usually try to categorize these with three components. If you take a car, it has perception: it can see through the cameras, understand what's happening around it, and build up this kind of vector space, as Elon Musk calls it. Then you have some reasoning, some planning: how to work towards the end result that you want. But also taking action. In a car it's very obvious: you can steer left and right, gas and brake.

Anders Arpteg:

But in this case, for UI-TARS, they also have these three components, which, by the way, map to the three bottom layers of OpenAI's five levels of AGI: basically the conversational knowledge part, then reasoning, and then control. But this one is focused on controlling the computer or the phone. We've seen that ChatGPT and others have Operator, which is basically browser use.

Paulina Modlitba:

Isn't it pretty shitty?

Anders Arpteg:

It is very shitty, and it's not even past the prototype stage. I mean, it's very hard to use, but it's getting there. Obviously humans are much better at taking actions currently, but we're trying to give AI agentic capabilities.

Paulina Modlitba:

And I definitely don't want to be one of those who say no to agentic AI because it's shitty now. So, yeah, go ahead, but it is shitty now, I agree.

Anders Arpteg:

But this one actually works surprisingly well. It can take control of your computer, and it can play games, for example. If you take Minecraft, it just takes screenshots, looks at the game in real time, and controls the keyboard to take actions and play the game, so to speak. They tested it on something like 1,300 games. You could even ask it to, what was the example, something like: take the cells from Excel and put them into Word, and make sure the format looks the same as it did in Excel.

Paulina Modlitba:

That sounds like we've definitely hit AGI, if it's able to do that.

Anders Arpteg:

Sounds impossible, right? But then you can see it opening Excel. This was actually in LibreOffice Calc, the Linux version, but similar to that. It opened the spreadsheet, copied the cells by clicking on them with the mouse, copied them over, and then tried to reformat them in Word so it looked the same, with a lot of clicks back and forth. And you can see the reasoning happening on the side, step by step: I must do this, I must do that, et cetera. And then it could also operate on a phone and things like that.

Anders Arpteg:

I think it's super interesting because it's an indication of where we're moving, in some sense. Still, it's very shitty when it comes to agentic tasks. Of course, humans are much better, but it is getting better. This one has awesome knowledge skills but semi-good reasoning, and even worse agentic action-taking abilities. But imagine a future when this is 10x better. Imagine when it can actually control things not through manually written API integrations with applications, but through the human type of interface, keyboard and mouse, moving between the different applications you have. That would be a big thing, wouldn't you agree?

Paulina Modlitba:

Yes and no. I think it's a middle stage, or whatever you call it. I think there's no real reason for AI systems to use our way, I mean mouse and keyboards and everything. Again, my background is in UX and interface design, and that whole setup is so unnatural, such a stupid way of communicating with tech. The whole idea, the whole strength of AI and generative AI, is that we can use natural language, or whatever it is, code in the background. So yes, to begin with. But the real strength of agentic systems is if they can help us completely get rid of these stupid, old, outdated interfaces and do things in completely new ways. And that's what I try to emphasize when I talk about AI too: we're not just using AI to do whatever we're already doing faster. We can do things in completely new ways, and we have to talk about what those new ways are, here and now, and prepare for them.

Anders Arpteg:

And that type of communication would be so much faster than having to use keyboard and mouse.

Paulina Modlitba:

Yeah.

Anders Arpteg:

But still, you know, in the meanwhile, until those exist.

Anders Arpteg:

I think a nice metaphor here is still the self-driving car. Today it has to have cameras, it has to look around and see that there are other cars and pedestrians and whatnot. But in the future, and some people call it level six, so level five is full autonomy, without the wheel and completely operated by the AI, but level six could be when the cars start to talk to each other without humans involved. One car can simply say: oh, this car will be there, I can talk to it, it can let me know there's a pothole over here that it has seen. That type of communication can be so much more efficient than having to use the camera to do it.

Paulina Modlitba:

Or like, "Do you come here often?" Flirting a bit. Yeah.

Anders Arpteg:

Yeah, awesome, polina. I would like to go a bit more into another topic, which is into investments, and you're also an angel investor. And it's also an interesting topic for myself, so perhaps we can start simply thinking when you are trying to look at AI startups, what do you look for? What is it that you think could be a good sign or really bad sign for the success of the company?

Paulina Modlitba:

I would say that it's more complicated than ever, just because everything is changing so fast. All the middle layers that you could invest in, the whole software-as-a-service layer, or whatever you call it, I don't even know what to call it, the service layer on top of different AI models, is exploding. But tomorrow one of the big actors, one of the big tech companies, could expand their AI models or introduce a new one that just eats that business opportunity up.

Paulina Modlitba:

Yeah, in a second, and that sort of makes it more difficult than ever to actually understand which startups build long-term value and will be needed even like five years down the road. So I have to be honest, I think it's more difficult than ever to invest in AI startups.

Anders Arpteg:

Yeah, but yeah, I've been killed by Google once, and it was in 2011.

Paulina Modlitba:

You're not bitter Well?

Anders Arpteg:

Partly bitter, yeah. I mean, if we just elaborate on that thought a bit more: we know the big hyperscalers are building the foundational models that can do a lot of things and, at least according to Sam Altman, you can build so many more applications on top of them. But in reality, what is really stopping them? You could have a successful shopping experience or travel business, and then Booking.com suddenly gets eaten by Google Flights or whatnot. Okay, let's phrase it like this: you can either see it as a positive sign when a big hyperscaler goes vertical into some field, because it basically validates that the business model you had is useful, and if the startup gets acquired, that's an exit, which is a positive thing. So as an investor, would you be really afraid of getting killed? Or could it also be an opportunity to get acquired by a hyperscaler in that way?

Paulina Modlitba:

Yeah, and I think that's definitely the most interesting part to look into: which companies are building solutions that the bigger actors are looking at and are interested in acquiring.

Paulina Modlitba:

I think that's definitely one way to go. I'm also a firm believer, again, in getting your hands dirty and actually trying the services out, because I still think, AI being AI, it's still human, and the whole experience is still so important. I'm being approached by so many startups now that are doing different types of vibe-coding solutions, and they all say: but we're doing it better than those other guys or girls, our user experience is a lot better, our solution is a lot better at going from a really limited prompt to whatever it is that you want to achieve, and it makes everything easier. There are so many options, and my first answer is: well, I need a demo. You have to show it to me, and I have to experience it and actually be able to tell that, yes, you're not just saying it, you're actually better at what you're doing.

Anders Arpteg:

And what do you look for then, if you see a demo? If you were to give some guidance to someone running a startup who wants to know: how should they do the demo in the best way, and what should the business model they're building be like?

Paulina Modlitba:

Yeah.

Anders Arpteg:

Or not be like.

Paulina Modlitba:

When I'm approached, my first question is: why the hell do

Anders Arpteg:

we need another AI startup?

Paulina Modlitba:

So the thing is, I don't want to mention names, but I've lived with and close to entrepreneurs, I almost became one myself, and I've been around entrepreneurs long enough to know that they all fall in love with their own ideas. So that's the first thing I try to pierce through: whatever you're trying to sell here, how true is it? I do the same thing with humans. Whenever I meet people: how comfortable are you talking about your own weaknesses, comparing yourself to other people? So whenever I meet startups, I ask them: what about competitors? What does the whole competitor space look like? And it's pretty common that they start saying: no, we don't have that many.

Paulina Modlitba:

There's this one in America, but blah, blah, blah. No, we're one of a kind. And then I go home and do my homework, and I Google it or use ChatGPT, and it's pretty obvious that there are hundreds out there. So that's a huge red flag for me.

Anders Arpteg:

Is it because they haven't done their homework?

Paulina Modlitba:

Because they don't want to talk about it. I'm pretty sure they're aware, but I've met so many entrepreneurs who are almost oblivious: they don't want to talk about it, they don't want to take it in, they live in their own bubble.

Anders Arpteg:

So a recommendation from you would be: if someone does a demo or pitch to you, they should be very open, rather than trying to hide that there are competitors.

Paulina Modlitba:

Yeah, I want to hear them talk about the risks, about who could bite you in the neck, or ass, tomorrow.

Paulina Modlitba:

Yeah, tomorrow. And I want to hear them talk about it, be comfortable with it, and still believe in their own idea enough to talk about both the good and the bad things. So that's one part. I must be honest: I haven't been a hundred percent successful myself. I haven't made that many angel investments yet, and two of them have failed after three or four years. I've invested in people and companies that I believe in, but I could have made smarter investments, and let me explain what I mean by that. I started out working at Stardoll. My first startup experience was Stardoll, with Mattias Miksche, back in 2009 or 2010.

Anders Arpteg:

Was Daniel Ek still there at that time?

Paulina Modlitba:

He wasn't, but his aura, his reputation, was definitely there. Back then people were talking about the fact that he was building this music company, and his tech people were actually calling the tech people at Stardoll and trying to recruit them over.

Anders Arpteg:

Nothing was forbidden at that time, because the competition for tech people was so intense. So, for people that don't know, Daniel Ek was also one of the founders of Stardoll and, of course, the founder of Spotify.

Paulina Modlitba:

And Mattias Miksche is one of those people who actually spotted tech competence really early, in Daniel Ek and some other guys who eventually became really successful in other big startups here in Sweden: Lifesum, for example, and Kry. My colleagues at Stardoll, yeah, they've been successful. So if I had been super calculated and smart, I would just have followed the money: I know that this guy is the best friend of that guy because they worked at Stardoll five, ten years ago, so they have each other's backs. Knowing that whole infrastructure, knowing what the network looks like, I should have placed my money based on that knowledge. But I didn't. I deliberately chose to invest in people who are not that well known, who are not established, who don't already have somebody's backing or know people at Sequoia or whatever, yeah.

Paulina Modlitba:

And so it's more risky, and it's harder. It's much harder, because it's harder for them, after the angel investments in the first round, to actually get the big sums and the real support.

Anders Arpteg:

Yeah, I mean, a lot of people say you invest in the people, and perhaps not as much in the product or the tech. And I guess it is simply safer: if you have people with connections to other tech people and investors, of course there's a higher chance that they will make it.

Paulina Modlitba:

And I'm not saying that they don't deserve it. I want to emphasize that, but it's easier. It is easier.

Anders Arpteg:

It is easier, and it is safer. I mean, it's probably true: they will, with higher probability, succeed simply because they have the connections, right? It's a simple truth of life. In some way it's a bit unfair, but that's reality in some sense.

Paulina Modlitba:

Exactly. One of my classmates at KTH, Alexander, founded SoundCloud together with a guy called Eric Wahlforss. And one of his brothers, or his brother, I don't know how many he has, is now the founder of one of the most talked-about and invested-in AI companies, and I think he's based in San Francisco now. So again, knowing the ecosystem, knowing the people, knowing the investors, the big ones, it definitely helps.

Anders Arpteg:

But thinking about other people skills: one, of course, is networking and the connections they have, and that's a huge and important indicator. But you can also think about other traits, like drive, passion, and skills. How much do you look into that? If it were a startup with completely unknown people, how would you judge whether the people are potentially good?

Paulina Modlitba:

How they interact with each other, how they tell their story, whether I can tell that they'll be good at convincing other people. And especially if we're talking about AI, I don't just invest in AI startups, but if we talk about AI, it's all about why they're using AI, obviously. So I would be more hesitant if somebody talks too much about the tech AI aspect of it, emphasizing it just because they know that a lot of capital is flowing towards AI right now. I want to hear them talk about the users and the people, and show that they understand how you actually build real value and solve real challenges. Back in the day I was interested in e-commerce as well; nowadays it's mostly companies that try to build a better world, targeting climate change or education, making people's lives better. So for me that's a really important factor as well.

Anders Arpteg:

Cool. I just have to ask this question as well. There can be different role models when it comes to leadership: different companies have some kind of CEO or person driving the company, and they have different mentalities and styles. Do you have any role models when it comes to the best CEO you've ever met or seen? Can you name someone you think is actually the perfect CEO?

Paulina Modlitba:

The reason why I'm laughing is that I chose to start my own company because I had bad experiences of leadership. And I'm not angry, or maybe I am, but I also worked mostly in startups where people were young and had zero leadership experience. So there's that. But I've seen people from a distance that really influenced me. Helene Barnekow, of course, is one of them, ex-Microsoft. She's just amazing in the way that she, every now and then, slides into my DMs and supports me in different things, and you can definitely tell that she cares about the human in everything. She's very modern and progressive when it comes to trust-based leadership, the type of leadership that I would like to see more of. I mean, I guess she's American, right?

Paulina Modlitba:

Is she? No, I think she's based in Sweden.

Anders Arpteg:

Oh really? Yeah, okay, good. Because otherwise, there's usually a big difference between the Swedish and American types of leadership, and the trust-based approach is a bit stronger in Sweden, I would assume, right?

Paulina Modlitba:

And just skipping the hierarchies, letting people work from home, not being measured as much as in the US. So yeah, she's definitely the one that comes to mind instantly. Okay, good. Yeah, cool.

Anders Arpteg:

I see the time is flying away here, so I'm going to skip some topics. But you're very interested in design, right, and more creative aspects?

Paulina Modlitba:

As well.

Anders Arpteg:

Yeah. And that's also a bit controversial when it comes to AI, of course. But if we take the more creative professions, perhaps fashion, as you mentioned, is one interest of yours: how do you see that changing going forward because of AI?

Paulina Modlitba:

A friend of mine who's also very philosophical, and I love having conversations with him, Arash Gilan, has written a book called I Love AI. He recently wrote a krönika, like an op-ed, in Resumé about the fact that AI doesn't reduce creativity; it shows us if we have any. And I think that's spot on.

Paulina Modlitba:

I think everything about AI, and why it's so uncomfortable for many of us, not just creativity-wise but in other aspects as well, is that it's a mirror, and we have to face ourselves, and not everyone can do that. I usually take this as an example: I loved the film Den oändliga historien when I was a child. What is it called in English?

Paulina Modlitba:

The NeverEnding Story, yeah. Well, anyway, there's this guy who reads about, imagines, and is immersed in this other world, where a character that symbolizes him has to survive three different obstacles, I think. And the last one, I think, the hardest one, is looking into a mirror and accepting what you see. And it's not just your actual face, it's everything that you are. Your whole life, what you stand for, how you treat other people, just passes by, and that vision is something a lot of people can't take. With AI it's sort of the same thing. What I try to emphasize is that this is a great opportunity to find our way back to creativity, because a lot of us have been forced to work like robots. We are the actual robots, because we're measured on performance and we have to be data-driven, and there's no space for innovation, no space for creativity. And now is our chance.

Paulina Modlitba:

It was hard in the beginning, and it's still hard for me, when I use ChatGPT, to work out how to make a prompt more out of the box: not just the most obvious one I can think of, but expanded so I get completely new answers. And that takes me... To make it more creative, exactly. And I have to accept the fact that I'm a beginner again, not an expert, and I'm disappointed with how limited I seem to be. My brain is so limited. I want to be more creative, and I'm using AI as a tool for that. There are so many creative ideas that I have that I'm not able to realize, since I'm not a full-blown artist or musician. I can still have creative ideas, but I don't know how to actually shape them or make them real, and AI is a perfect tool for that.

Anders Arpteg:

I can relate to that. I wish I was a musician and could sing. Göran is awesome at that, but I'm not. And then, using AI tools like Suno or Udio, I can create a song with the lyrics, the instruments and everything. Yeah, it makes me a bit happy. But you can also see the negative.

Paulina Modlitba:

Yeah, the copyright aspects are complex, yes, and we definitely need to talk about them too. I'm sure we'll see a lot of legal processes being initiated now. We're just at the beginning of trying to figure out how to tackle these things, because I definitely think that AI companies cannot use fair use, the "we're just training our AI models" line, as an excuse anymore. There's this whole in-between area of regulation that we hadn't thought about in the pre-AI era that needs to be taken into account now, and new laws that have to be shaped.

Anders Arpteg:

Talking about fair use: if we take even software engineering, or let's take music, you could argue that any human who creates a new song has of course listened to other songs and been strongly inspired by them, sometimes producing very overlapping or near-duplicate songs, but they don't get sued for it unless it's exactly the same.

Anders Arpteg:

Still, all humans are inspired by other songs when writing a new one, but an AI can do that at a scale significantly higher than any human. Would that then make it illegal for an AI to listen to all songs and create a new one that hasn't been heard before? Is that not okay, you think? Yeah, that's the whole discussion, I guess.

Paulina Modlitba:

And there are lawsuits when it comes to human creations too. I mean, after every Eurovision Song Contest there's always a lawsuit: this song sounds exactly like that one. And I compare them and I agree, it sounds like you copied the song and changed a couple of words. But the regulation, the laws, have been designed to be very liberal. You rarely see a case where somebody is, what do you call it, convicted for it.

Paulina Modlitba:

And so I guess that's what we're going for too: we want to be liberal and accepting when it comes to AI-generated content as well. But I don't know, I don't have the answer. Again, there are lawsuits happening.

Anders Arpteg:

Should there be some kind of, you know, in Sweden, what's it called? Yeah, STIM, thank you. At least, if you use the originals of some lyrics or some song, then those people get reimbursed, and that's good, I guess. But when it's not as clear-cut, when it's not literally the artist it's based on but just inspired by them, some people still argue for some kind of attribution model, where if something is very close to another author or musician, they should get reimbursed in some way. Would that be a way forward, you think?

Paulina Modlitba:

And it's happening. What I would like to see is that we don't need the lawsuits: that the AI companies themselves are signing deals, and that's happening now, actually, whether it's text or music or video, asking for approval first, because now they're making money. Most of them are still not profitable, but you know. It's somewhat like Spotify, because, again, Daniel Ek, I mean, he downloaded a lot of songs illegally, and his whole approach was: I'll ask for approval afterwards. When they contact me, I'll say I'm sorry, can we make a deal? And this is sort of the same: you do it first, and then you ask for approval, or find the right setup for it to be profitable and sustainable. But yeah, it's hard.

Anders Arpteg:

It's hard. And take the movie business: we spend a huge amount of money today to make a feature film, millions of dollars, or hundreds of millions in some cases. If it becomes super easy to create a movie, which it's not yet at that level, of course, but if you look a couple of years ahead, you can imagine that anyone can create Terminator 10 or something.

Anders Arpteg:

I mean, wouldn't it be super hard? Okay, let me phrase it like this: take the patent question. If you didn't have patents, then no company could really do the R&D to come up with new inventions, because they wouldn't get their money back. Can we really do the more high-quality movie productions if everyone can do it super simply in the future? Will we basically lose the ability to do high-production movies in the future because of AI?

Paulina Modlitba:

Because it's so easy thanks to AI. I think the most challenging thing when you work like me, as some type of futurist trying to prepare for the future, is reminding people that we are so prone to look at the future with our current glasses, so to speak, within our current framework. Looking at that question, you have to consider the fact that our values will change too. It might not be the fact that a movie looks expensive, like, whoa, this cost X billions of dollars to produce and it's so impressive. Our values, what we think is impressive and worth paying for or experiencing, will probably change completely when anyone can make one of those really impressive blockbuster movies. Yeah.

Anders Arpteg:

Take the fact that you're writing a book right now.

Paulina Modlitba:

Yeah.

Anders Arpteg:

And I guess you're using AI to help you, at least. But you could imagine that in the future anyone can produce a book like yours, but personalized, with the same type of value that you provide now through your human knowledge and expertise. Books might simply be generated in real time, from the needs and knowledge that exist at that moment and for what that particular person needs, whereas it used to be super hard to do something like writing a book, like you're doing right now.

Anders Arpteg:

Yeah. Could you see that? Or?

Paulina Modlitba:

Yes and no. Of course, I would be super pissed if somebody just took the book and released it in a similar, slightly adjusted version the week after I released mine. I would take it personally. But I think my mindset has already shifted. I don't see the actual content of the book as the main value that I contribute, and the same goes for my newsletter. Sometimes I can be like: can't you see how unique my newsletter is, and the content? I provide so much, you get gift links to important articles that are behind paywalls, I offer so much.

Paulina Modlitba:

But when it comes down to it, what I use and what I focus on is what I contribute as a person.

Paulina Modlitba:

I build communities and relationships. So it's not the actual content in the book, and I'm not writing it with a protective mindset. Somebody will probably write something similar, or already has, and it's not about that. It's about me writing it, and me maybe inviting people out to wine and discuss the philosophical questions around AI tied to the book, offering that community, and understanding that we're human beings. I really appreciate the fact that my first job roles were related to community building and understanding people. Really, marketing, PR, product development, everything comes down to your relationship with customers and people, and that's what I do here as well. The actual sentences, the actual content, I hope they really help people in their daily lives, but the most important part is the community I build, and that people appreciate getting that knowledge from me and choose me, and probably somebody else too, as their source of insights. Does that make sense?

Anders Arpteg:

I don't know, maybe it's trust, in some way, in the human, the person behind the subject?

Paulina Modlitba:

Yeah, it's not the book itself, but, I don't know. At the same time, I totally understand that people get really offended by the fact that their art is being copied in various ways, and I definitely don't want to disrespect creators in any way. But I think our view of ownership and creativity, copyright and patents, will have to change. Wasn't that one of Elon Musk's partners, Grimes? Which of them? Yeah, Grimes is the coolest.

Anders Arpteg:

I mean, she released a song, and then lots of people did AI copies or clones of it. And she said: you're perfectly fine, I will never sue you, but I want 50% of the revenue. And she got a lot of money from it. Perhaps that could be something: if someone gets inspired by your book, you say fine, but I want some kickback.

Paulina Modlitba:

Yeah, no, and that's what I mean. Like we have to rethink and reevaluate the models that we're using and living by.

Anders Arpteg:

Awesome. Now we're moving closer to the ending topics here, getting a bit more philosophical. You also speak a bit about the human-centered approach to AI. Can you elaborate a bit more on what you actually mean by a human-centric approach to AI?

Paulina Modlitba:

I think we already touched on it, but if you want to talk AI lingo, we talk a lot about it, and I write about it in my book, of course: explainable AI and human-in-the-loop are two popular terms, buzzwords. As AI models become more complex, and they might even surpass human intelligence, or the type of intelligence that we have, it's becoming increasingly hard for us to understand what's going on, you know, the black box problem. So one way to develop responsible AI, and I guess the main discourse out there, is to actually design systems so that they explain what's happening.

Paulina Modlitba:

And we're seeing that with the reasoning models we have now, ChatGPT but also the Chinese models. Reasoning models usually explain what they're doing step by step, because they have the time to do it, it takes a bit longer, and that's one way of helping us understand what the AI system based its decisions on. It's becoming even more important when we have agentic, independent systems. They have to be controllable, and the methods for actually understanding what's going on inside the agents, yeah, that's super important. And then there's human-in-the-loop as well.

Paulina Modlitba:

It's closely tied to the design of AI systems: we make sure that we always keep ourselves in the loop on everything that's happening. There are several aspects to it. One is the control part, that we actually understand what's going on. But it's also important because we don't want to lose our capabilities, get brain rot, just trust AI systems and lose our own cognitive abilities, basically our own intelligence. We have to stay focused, and we have the final, sole responsibility for what the AI system does, so we have to keep up with everything that's happening, and keep our brains in the loop as well. Yeah.

Anders Arpteg:

And I guess we as humans also want to be in control in different ways. If I collaborate with another human, a lot of humans have so much more skill and intelligence and ability than I do, but I can still work with them. And if I run a company, I still want to have some say in, or trust that, the people working for me, who may know much more than me, are still moving in a direction that I'm in control of in some sense.

Paulina Modlitba:

So every now and then you get updates and you can say yes or no or something in between.
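The pattern of getting updates and saying yes or no is essentially an approval gate between an agent and the world. A minimal sketch, with a made-up risk rule and a human represented by a simple callback, might look like this; none of these names come from a real framework.

```python
# Hypothetical human-in-the-loop gate: an agent proposes actions, low-risk
# ones run automatically, and anything risky needs explicit human approval.

RISKY = {"delete", "purchase", "send"}  # toy list of risky verbs

def risk_of(action: str) -> str:
    """Classify an action by its leading verb (illustrative rule only)."""
    return "high" if action.split()[0] in RISKY else "low"

def run_with_oversight(proposed, approve):
    """Execute low-risk actions; route high-risk ones to a human callback."""
    executed, blocked = [], []
    for action in proposed:
        if risk_of(action) == "low" or approve(action):
            executed.append(action)
        else:
            blocked.append(action)
    return executed, blocked

# Example: the human (here a callback that always says no) rejects the purchase.
executed, blocked = run_with_oversight(
    ["draft reply", "purchase license", "summarize thread"],
    approve=lambda action: False,
)
```

The design choice worth noting is that the human sits on the action path itself, not just in a log afterwards: the risky step simply cannot execute without the approval callback returning yes.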

Anders Arpteg:

And, I guess, also being aligned in some way with the values that you have as a company, or even as a society, that the AI is operating within.

Paulina Modlitba:

And that's actually what it's called in the world of AI as well: the alignment problem. Both on a larger, higher level, ethically agreeing on what, oh, I just did this Trump thing, I didn't mean to, I'm a bit Italian here, Trumpish. Actually agreeing on developing AI for the, what do you say, for the?

Anders Arpteg:

best of humanity, yeah best of humanity.

Paulina Modlitba:

So that's one level, but also alignment when it comes to, you know, the more detailed things in our AI systems.

Anders Arpteg:

If we take a bad example of this: we are working hard, with all the reinforcement learning from human feedback, to align AI in a direction that we control in some way. But take Anthropic: they had this paper recently where they were trying to understand how the model reasons internally. They gave it some kind of math problem, and afterwards asked it: can you please explain how you came to this conclusion? And it gave a perfectly reasonable answer, a math-textbook kind of example of how you come to that conclusion.

Paulina Modlitba:

Yeah, but it wasn't the right one.

Anders Arpteg:

No, it was not at all how it actually reasoned inside. So when you asked the model, please elaborate how you came to this conclusion, or why you think this answer is correct, it said something else than what it actually did.

Paulina Modlitba:

That's kind of scary. Yeah, but that's what it does all the time: it gives you the answers that you want. That's the whole fundamental problem with hallucination, that it wants to have the answers.

Anders Arpteg:

Probably one of the most complicated things, I guess. That's why I like what Anthropic is doing so much. They actually do look inside, and then they do compare, and if we get that knowledge, hopefully we can start to actually verify that it reasons inside as we would like it to, in some sense. Exactly.

Paulina Modlitba:

And now they're investing in open source robots as well, just like all the rest of the AI companies are investing in humanoid robots. I'm really glad to see... no, sorry, actually I'm wrong, it's Hugging Face investing in open source robots.

Anders Arpteg:

Yeah, I was thinking about asking the final question here, but I have to ask you, since you are a proponent of open source, and it's something we've been discussing a lot here in the pod as well: what do you think the future is? I mean, we can easily see that open source has pros and cons, a lot of pros, but it's not for everyone. Do you think AI should be completely open sourced? Should even the frontier labs be open sourcing all their models, or what's your thinking about how we should handle it?

Paulina Modlitba:

The naive answer would be yes, because if we want to develop responsible, transparent AI, we need to be able to look into it. But even the most transparent models... For example, when DeepSeek was released and everyone went, oh, it's open source, we can look into how it works and see whether it has been trained on OpenAI's models.

Anders Arpteg:

You know, with those models you can't really look into all the details. I mean, I like Yann LeCun at Meta, and he's a very big proponent of open source. He says that even if you think about cybersecurity kinds of issues, if you do release a model and it's being abused, it's better to have someone being able to experiment with it and do research on it, so that you can figure out ways to safeguard against it. But on the other hand, it's so easy to abuse a model. I mean, there are people that are really bad.

Anders Arpteg:

There are a lot of people that shouldn't be allowed to have a gun, right? And if you have a three-year-old kid, you wouldn't place your gun in front of them. So in some sense, some things need to be protected sometimes, and the more powerful something is, the more it potentially needs to be protected from being abused. Can you see that for AI models as well, especially these super powerful, super big frontier models being developed? Do they actually need to be protected, potentially, or should they still be open sourced?

Paulina Modlitba:

I'm probably in between on this as well. Of course, some things have to be hidden and some things have to be protected. But I'm not convinced that the right things are being hidden for the right reasons, if you know what I mean. I'm pretty provoked by the fact that, again, Försäkringskassan... I can't remember what it's called in English. Do you remember what it's called?

Anders Arpteg:

The Swedish social insurance agency.

Paulina Modlitba:

Yeah, something like that. They're using an AI system, and they've been doing it for quite some time now. Maybe... have you been involved in it? No? I'd better ask before I criticize.

Anders Arpteg:

No, it's fine.

Paulina Modlitba:

Go ahead. And they're using it to flag potential hoaxers, people who are trying to get support without actually... what is it called? VAB, VAB.

Anders Arpteg:

Oh yeah.

Paulina Modlitba:

They claim that they're staying at home with a kid and need support for that, but they're not. So they're actually trying to trick the system.

Paulina Modlitba:

And Svenska Dagbladet, one of the major newspapers here in Sweden, together with an agency, wrote an article where they looked into this system. They tried to get access to the AI system and the data, because they had indications that this system was highly biased and used in maybe the wrong ways, flagging people just based on the fact that they're women or people of non-Swedish heritage. And they couldn't, because Försäkringskassan claimed: no, we can't let you look into our AI system, because it's our secret tool. It's a very important tool for us, and we don't want to reveal to just anyone how it works. So they actually had to use old data, because there are agencies whose role it is to... god, my English when it comes to these specific terms... the supervisory agency overseeing Försäkringskassan actually did look into their system many years ago, maybe five, six, seven years ago.

Paulina Modlitba:

So they had something to use and look into and base their article on, which again showed a lot of bias. And Försäkringskassan argues: it's not really a problem, because we always have educated human beings making the final decision and looking into whether this really is a person who's trying to trick the system. But it's not that easy. We are biased as humans, and no matter how educated you are, it's so easy to trust an AI system, and it's hard to go against it if it says this person is probably trying to trick the system. There's this bias where the more you use the system, the more you tend to lean back and trust it even more. So actually, using AI systems like this is playing with fire, and we have to make it possible for media, or for the tillsynsmyndighet, the supervisory authority, to look into what AI models we actually use and how they work. That's why, a long answer to your question, I lean towards more open systems.

Anders Arpteg:

And yes, but I'm not 100% pro. Well, those are two different things. One thing is being able to look into a system, but another is letting it go public, right?

Paulina Modlitba:

Yeah, so that anyone can use it. Yes. I know, it's a tough question. I don't have a clear answer to it. It's a great question. I'll have to come back in a year; maybe I will have a more clear, opinionated answer.

Anders Arpteg:

If you have answered it in one year, I would be super surprised. I don't think anyone will have the answer. Probably the answer is, you know.

Paulina Modlitba:

It lies somewhere in between. It depends.

Anders Arpteg:

It depends, it depends. That's a good answer, Paulina. Yep. If we look really, really far ahead and assume that at some point in time there will be an AGI system that, as Sam Altman puts it, has a higher skill level than an average coworker.

Paulina Modlitba:

Yeah.

Anders Arpteg:

So it can basically perform even agentic tasks and whatnot, and be innovative, and all of these things that AI systems cannot do today to the level that humans can. But still, imagine.

Anders Arpteg:

That there will be a time where we'll have AGI. We can imagine two different extreme scenarios, and I'd like to hear where you are on this scale. Either we get the Max Tegmark kind of dystopian future, where we have the Matrix and the Terminators and the machines are trying to kill us all. Or we could be on the other extreme, where AI will solve climate change, cure cancer, and fix the energy crisis, and we will potentially have a world of abundance, where, as Elon Musk usually puts it, the cost of goods and services will go towards zero. I guess the truth is always somewhere in between, but where do you think we will end up? And if we start like this: do you have a timeline for when we could potentially reach something that we could call AGI?

Paulina Modlitba:

So the problem is that we don't even have a mutual definition of AGI that we have agreed on, so it's impossible to answer that question.

Anders Arpteg:

But if we take an old-fashioned one, say an AI system that can operate at the performance level of an average human co-worker.

Paulina Modlitba:

In any task.

Anders Arpteg:

Yeah, not for all tasks, but if someone is a coder, someone is doing the financial books, someone is doing marketing and whatnot, and just taking the average.

Paulina Modlitba:

I know this is a super boring answer, but I hate timelines, because nobody knows. I already mentioned that I think it's definitely possible that one of the big tech companies, the magnificent seven, has already developed AGI to a certain extent. Could be possible. I'm more leaning... no, I don't know. Again, with AI, it could happen at any time without us actually being able to foresee it. There are so many things happening within AI where I feel I should have seen them coming. My work is being a futurist. I had one job, and that's predicting things, and I still couldn't predict generative AI. I even found old slides where I had really early images generated with AI, and I didn't know what it was. I didn't do my job. I didn't look into the research from Google, for example. Who?

Anders Arpteg:

can predict the future.

Paulina Modlitba:

No, so I don't think I will be much better at predicting AGI either. Fifty years or five years?

Paulina Modlitba:

More five or ten years. More ten years than fifty, for sure. And what will the world look like? The boring, most obvious answer, which I think you already know, is something in between, or both. But I want to emphasize, in wrapping this discussion up and tying up the loose ends, or whatever we call it: we can still control where it takes us, what we make of AI. You and I, anyone, can decide, and we're hopefully not just fools or puppets, even though I totally understand that it definitely feels that way sometimes, and even I feel that. There's this agency thing; we can definitely make use of our agency in this.

Anders Arpteg:

Yeah, I don't know who said it, but there's a famous quote: the best way to predict the future is to build it. And if we keep control of what AI should do, perhaps we can.

Paulina Modlitba:

One thing that I'm prepared to actually predict, or not predict: Sam Altman, back in the day, actually imagined a future where AI would be so productive and helpful that we would all just smoke a joint on a mountaintop somewhere.

Paulina Modlitba:

And you know, I don't know if that's your definition of abundance, a life of abundance; it's not mine. I don't think that will happen. I think one thing that we have to think about, consider, and regulate as well is what we do with the extra time that we get. We're so trained, and the economic models are forcing us, to be productive all the time, even more productive. So we will definitely fill that time with more jobs, more work, more of everything, and not necessarily focus on leisure time or smoking a joint or whatever you're doing. And that's a bit sad. That's definitely something that I think we should try to shape as well: make sure that this actually contributes to a life that is better for us, makes us feel better about things, and gives us more time for things and for our children.

Anders Arpteg:

Yeah. Nick Bostrom wrote a book recently called Deep Utopia, and he looks more at the positive side: what can happen if everything goes really well? He talks about what happens if people don't have jobs, and he says: you know what, today we already have a lot of people that don't have jobs. We have children. Are they sad and depressed? No, right? We have retired people. Are they depressed? No, some perhaps, but most not. So humans are really adaptable.

Paulina Modlitba:

Yes, but the premise there is that these people who are not depressed despite not having jobs are not expected to have jobs. So the problem is expectation, and the system is built around the notion of being nyttig. Är du nyttig, lilla vän? In Swedish that means: are you useful, little friend? Are you productive, are you actually creating value in society? It's embedded in the walls, it's everywhere in society. We're expected to be useful, to be valuable. But if we trust that we will still have food on our table, a roof over our heads...

Anders Arpteg:

Actually, we already do today to a large extent. But then you have ideas like universal basic income as a way to distribute wealth, so that people who don't want to work can perhaps still live.

Paulina Modlitba:

I know, yeah, but it's more complicated than that. Again, it's about what is expected of us. A friend of mine, Siri Helle, wrote this great article about the government again not preparing enough for what's coming when it comes to people being unemployed, depressed, et cetera. I do believe that we will see new jobs coming, but there's definitely this phase in between where people will not have jobs, where people have missed the AI train and are not updated and are completely outdated.

Paulina Modlitba:

And we've seen it in history, like in the 90s, when the industry in Sweden was struggling and a lot of people in that digital transformation were losing jobs: there were masses of people depressed, långtidsarbetslösa, long-term unemployed. When it becomes a lifestyle, even though you don't want it to, you haven't chosen it, you become depressed and you become ill. It costs a lot for society, so that's a huge problem. Unless, again, we manage to change the expectations, like: it's okay. Maybe basic income is one way of communicating that we don't expect you to be useful all the time.

Anders Arpteg:

It's okay, you can do whatever you want during these hours; you're worthy regardless, right? And I guess, one thing I hope you agree with: AI will cause a lot of change.

Paulina Modlitba:

Yeah.

Anders Arpteg:

And it will be a big transition for a lot of people, companies, and society over the coming years, who knows for how long. But it will be a big transition, and usually change is hard. So at least during this time, there will be a lot of discontent, a lot of people that are scared, right?

Paulina Modlitba:

Yes, and that's why it's so much easier to say no. It didn't know my name when I used ChatGPT to Google myself, so it's bogus. It's much easier to hide behind that. It feels safe. Yeah.

Anders Arpteg:

Paulina Modlitba.

Paulina Modlitba:

Yeah.

Anders Arpteg:

It was an honor to have you here. So many amazing topics and great discussions. I hope you can stay for a bit more of the off-camera, after-work discussion as well, going even more philosophical into other topics. But thank you so much for coming here. It's been a pleasure.

Paulina Modlitba:

It's been a pleasure for me too, thank you.

People on this episode