AIAW Podcast

E178 - Designing Resilient Digital Systems - Catherine Mulligan

Hyperight Season 12 Episode 6


In this episode of the AIAW Podcast, we’re joined by Dr. Catherine Mulligan, technology strategist, sustainability expert, and author of Designing Resilient Digital Systems, for a timely conversation on the future of digital sustainability. Drawing on her work at the intersection of AI, blockchain, 5G, IoT, and public policy, Catherine explains why traditional digital transformation—focused primarily on efficiency and scale—is increasingly failing to address long-term environmental and societal challenges. We explore what it truly means to build resilient digital systems capable of withstanding climate shocks, geopolitical instability, and rapid technological change. From the energy footprint of massive AI models and the global race for AI infrastructure to Europe’s ambitions for digital sovereignty and the role of human judgment in an increasingly automated world, this episode looks at how organizations can design technology that serves both business and the planet. If you care about the future of AI, sustainability, and responsible digital leadership, this is a conversation you won’t want to miss. 

Follow us on YouTube: https://www.youtube.com/@aiawpodcast

Framing India’s AI Summit And Sovereign AI

SPEAKER_01

Not normally um at the forefront or it hasn't been. It's not, right?

Henrik Göthberg

And what is the thirty-second framing of this conference? What's the context and background?

Anders Arpteg

There were super high-profile people there. I think every major player uh was at that kind of conference.

Henrik Göthberg

Is it the first one or is it uh has it been ongoing for some time?

SPEAKER_05

No. This is the first time.

Henrik Göthberg

Is it the first time, right?

SPEAKER_01

It's the first time, and I think it's in response to some of the work that's been done at the UN, for example, and how does uh a country like India sort of uh help create the impact required?

Anders Arpteg

So I guess it was uh, I mean, there was a lot spoken about sovereign uh AI, right? At that conference, if I'm not mistaken.

SPEAKER_01

Yeah, there was a lot around uh, you know, uh the fact that India doesn't produce its own LLM models. I think that's something we in Europe can uh relate to. Australia can relate to it. A lot of countries can.

Anders Arpteg

So are you from Australia, then? Yeah.

SPEAKER_01

Yeah, I'm Australian. That's the accent.

Henrik Göthberg

So you have another fan of Australia here. I lived in Australia, and my two oldest boys were born in Australia.

SPEAKER_01

Oh, wonderful. Uh I'm from Sydney. I grew up on Manly Beach, to make you really jealous.

Henrik Göthberg

No, no, no. I lived in Freshie. I lived in Freshie.

SPEAKER_03

No.

Henrik Göthberg

No. So the joke is this. I went to the University of Wollongong.

SPEAKER_03

Really?

Henrik Göthberg

And then partied like an animal and surfed when I was there in the 90s. Yes. And then we moved back to Australia again when we were a little bit bored, me and my sambo, in 2004. And then we lived in the blue building on Manly, off Manly Corso.

SPEAKER_01

It's not blue anymore, but it's not blue anymore, but yeah, another one.

Henrik Göthberg

We we still call it the blue building, but it's not blue anymore.

SPEAKER_01

Okay.

Henrik Göthberg

My two boys were born in Manly Hospital.

SPEAKER_01

Oh, wonderful. Well, there you go.

Henrik Göthberg

It's a beautiful... So this is one of the top three places in the world for me. Yeah. I absolutely adore the place.

Anders Arpteg

But when did you leave uh Australia?

SPEAKER_01

Um I left... I mean, I've been back pretty much yearly, but I left in 1999, actually. Yeah.

Henrik Göthberg

So you're a Northern Beaches chick.

SPEAKER_01

I am, yeah. Can't you tell just by looking at me?

Henrik Göthberg

But I think you're Aussie. Okay, but Sydney, we need to recognize that the Sydney accent, and even the Northern Beaches accent, is not the Aussie accent like the rural accent.

SPEAKER_01

No, no, it's not, no. And uh after living in Sweden for 10 years, uh I I had to flatten my accent a lot. More British. Yeah, I had to flatten it a lot because people didn't understand what I was saying when I worked.

Henrik Göthberg

There is a small joke. When I was at university, one of my best mates was uh from New Delhi, and one of my other best mates was from Canberra. Phil Powers spoke a hundred miles an hour, yeah, really fluffy, and Bird speaks with his pigeon accent, perfect English. They both turned to me to interpret between two fluent English-speaking people who didn't get each other.

SPEAKER_01

Yeah, it can happen. I mean, everyone thinks uh English is the same the world over, but really uh we all speak very different languages, to be honest. Yeah.

Henrik Göthberg

We need to get back into the pod.

SPEAKER_01

Sorry.

Human Flourishing And AI For Good

Anders Arpteg

Yeah, yeah, yeah. Small talk. But okay, but please tell us more. You you um had some kind of meeting today speaking about the uh the conference in India, right?

SPEAKER_01

Yeah, so it was a debrief uh that was sort of held by Chatham House and Observer Research Foundation America and uh a company called Energy Unlocked. Um so they've done a lot of work uh previously looking at the climate impact of different types of digital technologies. Uh so they were all sort of reviewing, okay, what is the right way to actually work with um the environmental impact and implications of AI? Super interesting bunch of people.

Anders Arpteg

But uh yeah, I mean, I can imagine. But I think a lot of people don't realize how big or high-profile this summit really was.

SPEAKER_01

Yeah, it was huge, yeah.

Henrik Göthberg

Yeah, because it really passed under my radar. I didn't really... Well, no, I'm sorry, I had other topics. But please, can't we frame how big this is? Because everybody was there. Almost.

Anders Arpteg

And also a lot of, I mean, the framing was a bit about the AI impact, right? And please tell me, you know, what you think. But a lot about that and the sovereignty of AI, and uh also of course a lot about India and their position in the world, uh right?

SPEAKER_01

Yeah, and you know, how they can use it effectively. Um, but there were a lot of reports that came out of it that I thought were quite interesting. Like the one on climate impact was very interesting, but also um there was a lot on how you can actually use it for uh human flourishing, which I thought was a nice way to phrase it. Human flourishing, what does that mean, really? Yeah, well, that was my first question, right? What is human flourishing and how can I get some?

SPEAKER_03

How can I get some?

Anders Arpteg

I love that, baby. Oh, but please tell us, what is really human flourishing?

SPEAKER_01

Well, human flourishing, I think, is about how we use AI to make sure that people still um have dignity and respect and are able to, you know, have agency and control over their lives.

Anders Arpteg

But it's uh I think it's for the best of humanity in some kind of way.

SPEAKER_01

Yeah, a little bit, I guess.

Henrik Göthberg

But we can say AI for good, but we might lose what we mean by AI for good if we are not thinking about humans having a better life and flourishing in the new society. So I get it.

SPEAKER_01

Yeah, yeah, exactly. And I think it's important that we sort of frame some of these conversations in that light as well. I think we focus a lot on efficiency and effectiveness, but actually, you know, even thinking about when we apply these things in the workplace, well, we should make sure that people have nice jobs that they enjoy going to as well, right?

Henrik Göthberg

But it's because it's also about happiness. We have Micael Dahlén in Sweden, who is the first professor of happiness. I love that. That's brilliant. You know, happiness and well-being. And of course, if you put that as the AI objective function, it may be slightly different. We need to have Micael Dahlén on exactly this topic.

SPEAKER_01

Absolutely. I I think the other thing we need to think about as well, though, is what sort of society do we want? And I think quite often we ask that question a little bit too late, right? What do we want this technology to do for us?

Henrik Göthberg

And this is all about unintended consequences. Absolutely. Mostly it's stupidity; people are not evil. We are just not thinking through the implications of a certain trajectory, and we end up in unintended consequences, which is a hugely researched topic on its own, what that means and all that. But this is what we're talking about.

SPEAKER_01

Absolutely, yeah.

Anders Arpteg

Well, perfect. I think that's a perfect segue also into the theme of today's podcast. So, with that, a very warm welcome here, Catherine Mulligan, PhD. But also you, I think, are a very prominent uh leader here, focusing on digital and sustainability in different ways, right? I mean, uh, I think you were also part of founding the world's first academic blockchain institute, right?

SPEAKER_04

Yep.

Anders Arpteg

And you've been uh working with the World Economic Forum um as a fellow there as well, and uh with the UN Secretary-General as well, on the High-Level Panel on Digital Cooperation, and so many more things. So super knowledgeable, and we're really proud to have you here on the AI Afterwork podcast.

SPEAKER_01

Thank you. I'm very happy to be here.

Anders Arpteg

But perhaps you can uh start by just giving a quick background. Who really is Catherine Mulligan?

SPEAKER_01

Well, apart from a Northern Beaches chick, which I think we've already established. Uh if we focus on the technical side of me, basically I I uh started coding when I was about 10 years old and I never looked back. Literally, that was all I've ever wanted to do.

Anders Arpteg

Cool. Um, so, but um, coding still, through Anthropic or through other AI tools?

Catherine Mulligan’s Journey: Telco To Sustainability

SPEAKER_01

I like to try and keep my own skills sharp, but uh I have noticed that my abilities are uh definitely outstripped by uh Anthropic now. Um but it's also a super exciting time. I think we're in the third wave of software engineering. Right. Third era, it's fabulous.

SPEAKER_07

Okay, so a second rabbit hole. This is already a great rabbit hole. I'm not gonna go there then. Sorry, sorry now.

SPEAKER_01

Um yeah, so uh I finished my undergrad in Australia. I did it at UNSW. Sorry, not the University of Wollongong. No, no, no.

Henrik Göthberg

UNSW is the real... it's the best university in Australia.

SPEAKER_01

No, I wouldn't say that. I don't mean to insult my old university. I think it's probably second.

Henrik Göthberg

Because I remember the rankings. I remember the rankings. And which one?

SPEAKER_01

Sydney University, I think, would be classically ranked the highest. Yeah, yeah. Um but UNSW has done a lot of really cool engineering stuff. So they're the ones. Did you ever see the solar panel cars? Yeah. They're the ones who run that kind of competition.

Henrik Göthberg

They run the competition, and uh Wollongong wanted to be part of that. And we had our faculties with strong uh, you know, research in mining, and, you know, BHP. But UNSW is the big brother, I think. And then of course we have Canberra, and Monash in Victoria, and they would argue that the best one is in Victoria, but we know Sydney's the... we don't need to go there.

SPEAKER_01

Melbourne doesn't compete.

Henrik Göthberg

No, exactly.

SPEAKER_01

No, so um, I finished that. Uh I was still in love with technology, and I set off in search of the most difficult telecommunications problems in the world. And unsurprisingly, at that particular point, through a long circle, I ended up living in Sweden working for Ericsson.

Anders Arpteg

What year is this approximately?

SPEAKER_01

Uh 1999. Okay. Yeah. Uh so um yeah, that was because a friend of mine had ended up going to Luleå University. Right. She'd met a guy, fallen in love. Uh I was in London, she called me and said, do you want to come live in Sweden? I went, sure, why not?

Henrik Göthberg

Is there an Ericsson connection already here?

SPEAKER_01

Is that sorry?

Henrik Göthberg

Is there an Ericsson connection already here?

SPEAKER_01

Uh well yeah, there is a yeah, I worked at Ericsson. Yeah.

Henrik Göthberg

Yeah, but already then, when you made the move there, you were working on huge telecommunications problems.

SPEAKER_01

Yeah, I basically went off in search of the biggest, most diff... I was young and stupid. I wanted the most difficult technical problems in the world to work on. Um and I landed at Ericsson and ended up, you know, really falling in love with telco, actually.

Henrik Göthberg

Which problem was the first uh to fall in love with in Ericsson?

SPEAKER_01

I was on the AXD 301, didn't really fall in love with that too much. Uh so I moved quite quickly onto Bluetooth and uh those kinds of things, and then ended up doing core network work, yeah. Yeah, doing IMS. Oh gosh, the IP Multimedia Subsystem. So it was uh taking parts of the core network and putting it onto IP rather than circuit-switched. Yeah, yeah.

Henrik Göthberg

So shifting to other protocols.

SPEAKER_01

Yeah, exactly. But it was actually at Ericsson that I did some volunteer work, for Ericsson Response, I think it was called at the time. I don't know if they still even have that. But I got sent to the North Pole. Um really, they wanted to get rid of me, honestly. I think that's what it really was.

unknown

Cool.

SPEAKER_01

But um yeah, so I ended up uh doing... um, I was technical support for climate change researchers. Um so they were drilling the ice core for, you know, IPCC reports and looking at things like um uh bird migration and tracking bird flow and all this kind of cool stuff. And uh using AI... well, AI relative to that time. I think they were mainly starting to do data analytics, really; we were entering sort of the big data era.

Henrik Göthberg

So already early here we have the convergence of technology and working with technology on topics around environment and sustainability. Um I can see how that already shapes things now.

SPEAKER_01

Yeah, I mean, I literally was standing on the North Pole, just after we'd, you know, gone across on the boat. Obviously not standing on the North Pole, but on the boat at the North Pole. And I sort of ended up thinking, how can I use my skills for this rather than, you know, 4G, and after that, okay, 5G, and what's after that? It's just another G, really, isn't it, guys?

SPEAKER_07

I love it, I love it.

SPEAKER_01

But yeah, so I ended up um going to uh do a master's in engineering for sustainable development um at Cambridge, and then I went on to do a PhD there. I got to the end of my master's degree and realized basically um the only way to convince people to do sustainable development is to tell them how to make money. Yeah. Um so since then, basically my research and my work in the world has looked at how we use digital technologies, all sorts of digital technologies, to balance economy, environment, and society. So a lot of people focus on environment only, from the um sustainability perspective, but I don't believe we have true sustainability unless we've balanced economy, environment and society.

SPEAKER_07

Yeah, that's really good. That's awesome.

SPEAKER_01

So that's what I try and do.

SPEAKER_07

What was your PhD research topic?

SPEAKER_01

Oh gosh. Uh I looked at the evolution of the communications industries over... well, from, what was it, 1875 through to 2010. And that's where we get to the data analytics. That's where I started to get very, very interested in data, because what I did was I basically just collected a whole bunch of data around the industry, as much as I could get, and started to try and analyze how industries were evolving and how things had shifted and changed over time. Um, so I built a model off that, and I didn't publish it. I worked out how to invest off it instead. I love it. I did publish a book, but you know, there's nothing in the book that can help you make money, I don't think. It's very boring. Yeah.

Anders Arpteg

Yeah. But you started to write books at that point in time. Yeah. What was that?

SPEAKER_01

Yeah, I don't know why. I just uh, I did, yeah. Um so I wrote the first book basically, um, a research monograph, but then I got um invited to write some books around telecommunications, so it enabled me to really keep in touch with the technology I genuinely loved, right? Um and uh so we wrote um a couple of books around IoT, a couple of books on things, yeah. The Internet of Things, yeah. Uh a book on the IMS um and how to actually use that for application development, and then lots of books on the core network. Um, while working at Ericsson, or was it... No, no, all of those books actually were after I left Ericsson. So that's how I was able to uh keep my um connection to the technology, at least.

Henrik Göthberg

So these books were intended uh for what purpose? To teach, to learn, for dummies? Who's the target audience for these books?

SPEAKER_01

The target audience was really engineers who needed to get to know and understand the standardization uh that had been done. So for example, I think it was like R19, R20 of uh the 3GPP standards. Yeah. Uh so I worked with the guys who actually sit in the standards uh bodies, and uh we translated that into the real world. So there's a period of time in the telco industry that is super cool. Uh, sorry, I think it is at least, but everyone else probably thinks it's very boring. But um, where the standards have sort of been developed, the technology's ready, but then you have a whole bunch of engineers who really need to be brought up to speed very quickly on those standards, right? They need to understand them, um, but they don't need to go into the depth, they don't need to read the entire specification. They want to know the arc of how it all fits together.

Henrik Göthberg

But uh this is a super interesting uh point in time. Let me see if I understand this right, because I even remember doing consultant work back with Ericsson at some point, a little bit later. And I think this conversation converges a little bit with the fundamental convergence of telco, IT, OT, IoT. All of a sudden now we have uh very standardized protocols, OT within telco, uh, you know, even down to wireline, uh, you know, wireless. And here we have the traditional IT protocols and then a completely different set of engineers. And all of a sudden now, with IoT, these are converging, and we are still having challenges: who actually understands from radio to, you know, 5G and different spectrums, into data, into edge and all that, or edge intelligence? Is it a little bit like we need fundamental learning for engineers who don't need to know everything, but we need to be T-shaped enough, to know enough about the arc in order to work together? Am I hitting it or am I missing it?

SPEAKER_01

Yeah, I mean, I don't think that's necessarily related to the books, but I would definitely say we're in a period where we need to um, you know, we need to do what I call boundary spanning, right? So um IT systems and networks are set up fundamentally differently to uh telecommunications networks. And actually, what you find in a lot of businesses is they're still run as separate teams. Yeah. So people are trained in IT or they're trained in telco. You know, what you actually need is people who can do both. Yes. Right. And that's where I hope we're slowly but surely getting to.

Henrik Göthberg

But were the books moving in that trajectory already, or was it still more fundamentally in the telco space?

SPEAKER_01

Um, the IoT books were definitely moving into that space, but the core network ones are really just hardcore telco.

Anders Arpteg

But we're really here to um celebrate your new book, right? Um congrats on that, by the way. Can you perhaps just give us some kind of introduction? You know, why did you choose to write this book, and uh, yeah, give us some intro to what it's about?

SPEAKER_01

Yeah, um, mainly I seem to have an addiction to book writing, I think. Um but the new book was actually a response to um the fact that we see a lot of work around digital uh sustainability. A lot of people were talking about how to use it. Well, that's the name of the book as well, right? That's actually, yeah, the name of the book. Um and so myself and a colleague who works a lot with uh the UN, they're an advisor for the UN still. Yeah, um, we were sort of talking about, okay, there are a lot of ways to think about digital technologies, there are a lot of books out there on digital transformation, but there's not really much out there that is really helping people understand how to do digital sustainability, or what we have termed digital resilience. Um so uh there seemed to be a space for the book, and I had publishers nagging me, so I pitched this, and um we decided that this was a good way to do it. But also to put it in the context for senior leaders, rather than just, you know... so that's the audience: senior leaders, basically, to understand this properly. It's definitely written for business people and people working more in the policy space. So yeah, it's written from that context.

Henrik Göthberg

But in a nutshell, uh, if you frame the value proposition, the problem for the book, uh, you know, what was the pitch to the publishers? Like, why is this book needed, and how would you sort of uh encapsulate the core of the book?

SPEAKER_01

Yeah, so um, well, I mean, the pitch was basically: there's a market gap for uh digital sustainability; it's been done very poorly so far. Um there are two ways to approach it. One is the very traditional sort of uh "we're using digital technologies for transformation within the company," and there are a lot of good things to do in that space. But there's also another argument that could be made around um transforming the way that companies and organizations are actually structured. And I think that's a broader conversation that needs to be driven also within AI. Um, so I think we're at a point in time where the 20th century methods of organization and structure that we have are really not working very well anymore. We're starting to see some of these fail. And so we need 21st century solutions to 21st century problems. Right.

Anders Arpteg

Um, I think you said something about, um, what was it? Um, you need to move beyond digital for digital's sake, so to speak. Yeah. What do you mean by that?

SPEAKER_01

Well, I think, you know, there are a lot of us out there who are technology uh geeks, really, right? Uh we love technology, I love technology, so my first answer is: how do I fix this with some digital? Um but uh I think it is about stepping back and saying, what is the sort of society we want? What is the sort of economy we want, and what is the sort of world we want to create and leave behind? And I think currently what we're doing is almost too narrow. People are thinking, I'm gonna solve this problem by using, you know, technology; I'm going to solve um for the uh AI water consumption by doing this small measure. And then we get back to your unintended consequences, okay. That small measure fixes one problem, but then there's a whole other part of the system that suddenly flies away and blows up. Exactly. That people have then got to run after.

Anders Arpteg

And I guess you can replace digital with AI here and say you shouldn't simply use AI for the sake of AI, right?

SPEAKER_01

Yeah, but I think a lot of people are starting to realize that, right? For example, the MIT report that we were talking about before, the 95%.

Resilience In Practice: India–UK Hub-And-Spoke Case

Henrik Göthberg

But can we go back a little bit to the book? Because I'm quite curious. Um I think this is a brilliant topic. I think the uh the the value proposition is brilliant. I think the way that we need to reform organizations is actually where the bottleneck is.

SPEAKER_02

Yep.

Henrik Göthberg

You know, we can take this conversation all the way down into the AI strategy in Sweden last week, where basically we talked about all this stuff, but there's a pink elephant in the room, sort of thing. Yeah, yeah. So how did you go about tackling... because I think you're now using resilience as a label, but what you're really talking about is: what are the bottlenecks or pitfalls, or why is the old organization not fit for the new? Could you elaborate on how you went about unpacking that in the book? Because I think it's highly interesting.

SPEAKER_01

Yeah. So, maybe not specifically unpacking that directly just yet, but uh for example, there are lots of really interesting approaches that we have worked on in the real world and also through research, trying to understand what resilience would actually mean in the context of delivering it for society or delivering it for a business. One example that I've used a lot is some work we did in India and the UK. So we did a compare and contrast, and this is where I got very excited. We did a project that was looking at how you create economies of scale and scope using digital technologies for, well, they were rural communities at the time, but frankly you could use it anywhere. It doesn't need to be rural. And what's really exciting was we did a four-year project, provided a huge amount of money by the British government to do it. It was wonderful. But what we came to understand was that by using different types of organisational methods, different types of organisational structures, so we used a hub-and-spoke model to create a company in India, well, the Indian research team did that, you're able to create different types of economies of scale and scope and actually help people step up the value chain in a different way. But what was really exciting was that exactly the same process worked in the United Kingdom as well. So very different nations, very different economies, and very different organizational structures normally used by government and also by companies in those areas. But the archetype worked in both nations, and that means there's something fundamental there that enables us to work slightly differently and think differently.
And perhaps you can just elaborate: what is really the hub-and-spoke model, and um how does it relate to economies of scale, etc.? Yeah, sure. So uh to give a very specific example, um the project in India looked at non-edible almond oil. Uh that sounds very esoteric maybe, but I guarantee every single person in this room has used non-edible almond oil uh several times today. It's in literally every soap in the world. Oh really? I hope you've used it.

Henrik Göthberg

Yeah, I showered today. But it's some of these raw material ingredients that we don't think about, that go into a lot of different products, as an example, right?

SPEAKER_01

Exactly. So the non-edible almonds are gathered from the foothills of the uh Himalayas, actually. Um, it's five rupees a kilo that they were paid at the time; they might get paid more at the moment. But on the global market, like 100 milliliters is about 1500 US dollars, right? So there's a massive disparity there. But through connecting multiple villages across a region and asking them to work together, having a hub model which enabled them to process the non-edible almonds, they can get in on the action, but they can also step up the value chain, right? Because there was also some mechanical engineering um to try and fix it. So, to actually leave more of the value to the first part of the value chain. Exactly. Um and that company is still running today, so that's 15 years later, which uh is very unusual for academic research. So um, you know, kudos to the people who put that together. Um and it still uh functions, but it also is extremely resilient, because if one of the villages doesn't have uh enough almonds, they can still contribute into the supply chain, because they can um coordinate across multiple different hubs.

Anders Arpteg

Interesting. Um, yeah, so that's one example, cool. And you mentioned before that uh sustainability is not just about, you know, the climate uh impact, etc.; it's really about the economy as well. So if you were to give some highlights from the book, you know, what really is digital sustainability? And please, if you could, elaborate on what that really means.

SPEAKER_00

What is it really?

Measuring Impact: Triple Bottom Line vs SDGs

SPEAKER_01

Well, uh, okay, what is digital sustainability? Am I going to get the whole thing? Yeah. I was gonna say two sentences. So digital sustainability is actually two things. It's making sure that the use of digital technologies is environmentally, socially, and economically viable, but it's also ensuring that the impacts of digital technology are uh equally um balanced, if that makes sense. Another good example might be blockchain, um, if anyone has looked at those things, right? So some people have heard about that. So we did a lot of work... so I did a report with the WEF where we looked at the environmental impact of uh blockchain, and we were trying to say, actually, what you need to do is have sensible methods where you can measure economic, social, and environmental impact, so then you can make the correct decision about where to use blockchain and for what purpose. Was it looking at the mining process impact, or what did you do with the blockchain? Well, we looked at everything from extraction of silicon all the way up to the use, so it's an entire value chain approach, um, which is what a lot of the techniques, for example, have done for um concrete. So concrete is, you know, quite... it's one of the most environmentally damaging materials, and a lot of work has been done to look at how you reduce um the environmental impact of uh concrete, and a lot of measurement uh techniques have been developed as a response. So we tried to pick up a lot of those um to see if they could be applicable to uh digital technologies. They can be. Um the issue is that I think politically it's a very difficult discussion for people to have. So what I noticed was, and I mean, if you look at who funded the work for concrete CO2 reduction and environmental impact reduction, it was funded by Microsoft uh to a large extent. But uh I don't think you'd get them to talk so much about the environmental reduction of digital technologies.
They will, however, talk about using AI to measure and reduce the uh environmental impact of the energy system, for example. Exactly: we use our techniques and models, and we help others.

Henrik Göthberg

Exactly, if I'm being a little bit brutal.

SPEAKER_01

Yeah. And when you look across the spectrum, if you look 20 years ago, we were having exactly the same discussions about the telecommunications networks: how do we measure their actual environmental impact? A lot of great work was done as a result at that time as well. So this is something that comes up periodically; with every wave of technology, someone says, what about the environmental impact? And my big concern is that we forget all the work we've done previously.

Henrik Göthberg

Right. In the book now, because I haven't read it, and I just want to make that clear, which is why I'm asking stupid questions. For me, when you're talking about sustainability and then moving into resilience, there are so many rabbit holes here: from a macroeconomic point of view, how do we measure it, how do we benchmark it, how do we understand that this is a problem, all the way to actually creating fundamental resilience and distributed agency, all the fundamentals that become the micro patterns that say something about how we organize to be future-proof for AI. So how much did you spend on understanding the sustainability of digital on a macro, geopolitical level, and how much were you able to pinpoint it down to something that is approachable for the single enterprise?

SPEAKER_01

We spent most of our time looking at how you would do it in a single enterprise. I mean, macro is great, it's exciting to talk about, to be honest, and it gives the perspective of scope if we do this right or wrong.

Henrik Göthberg

Exactly, yeah, but to fix it you need to go down.

SPEAKER_01

Yes, you need to work on an individual level, an enterprise level, a government level, or even potentially at a national level. And one of the key issues that I think is interesting about resilience as well is that it broadens the discussion to what happens when things fall over, when digital systems fall over. Then you get into a whole other rabbit hole.

Henrik Göthberg

How many rabbit holes do you want to go down? We love rabbit holes. We even had a joke about that word: someone did an AI analysis of the whole season, and rabbit hole is one of our favorite words. So, okay: can you highlight three or four major rabbit holes, and then we can pick and choose to go into a couple of them, or one of them?

SPEAKER_01

Sure. A major rabbit hole would be: okay, what does it mean to be resilient? How do we use some of the theories we worked with to create what I call community-based resilience? Everyone is looking at societal resilience, or they're looking at it from a defense perspective or a very large-scale network perspective, and they take very particular views on that. So that's one rabbit hole. The other rabbit hole is how we go about actually measuring that on an enterprise level: how do we measure for sustainability?

Henrik Göthberg

And then there are the KPIs and metrics, and benchmarking our performance on resilience.

SPEAKER_01

Yeah, I mean, you could do that, but I would actually suggest that KPIs are too slow, right? They're a lag indicator.

Henrik Göthberg

Okay, so even my language was wrong then. We need to evaluate and measure, we need to find ways to understand and monitor, or what do you mean?

SPEAKER_01

Yeah, well, a KPI is a lag indicator, right? You measure it at the end. What I mean is we need lead indicators.

I was using the terminology nonchalantly. I fully understand: you need leading indicators, and a KPI is not a leading indicator in your vocabulary.

In my vocabulary, yeah. So how do we go about building those? There are lots of different things you need to think about there. So how many rabbit holes was that? That was two: resilience, and measuring, understanding how to find the leading indicators to work with. Then I guess the third one would be how you engage your leadership to ensure you can actually do this, how you get people on board internally in the company. Or a fourth rabbit hole: policy.

But isn't an organizational rabbit hole, an organizational pattern for resilience and adaptability, a major topic? Because if we organize the division of labor and we don't have agency out in the spokes, so to speak, we build something that is very efficient when everything is happy, but it has no self-organizing, self-fixing property because it's not cross-functional enough. So the pattern around resilience becomes the core DNA pattern of the organization.
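The lag-versus-lead distinction drawn here can be sketched in a few lines. This is a hypothetical illustration, not anything from Catherine's book: the function names, window size, and threshold are all invented for the example.

```python
# A KPI is computed once, after the period is over (a lag indicator);
# a leading indicator watches a rolling window and flags drift early,
# while there is still time to act.

def kpi(readings):
    """Lag indicator: one retrospective number for the whole period."""
    return sum(readings) / len(readings)

def leading_alerts(readings, window=3, threshold=50.0):
    """Lead indicator: flag every rolling-window average above the threshold."""
    alerts = []
    for i in range(window, len(readings) + 1):
        avg = sum(readings[i - window:i]) / window
        if avg > threshold:
            alerts.append((i - 1, avg))  # (index of latest reading, rolling mean)
    return alerts

readings = [40, 42, 45, 55, 60, 65]  # e.g. monthly emissions readings
print(kpi(readings))             # one number, available only at period end
print(leading_alerts(readings))  # early warnings as the trend develops
```

The point of the sketch is only the shape of the two measurements: the KPI arrives once, after the fact, while the leading indicator fires as soon as the trend crosses the line.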

Henrik Göthberg

That must be a rabbit hole in itself.

Power, Data, And The Emerging Data Economy

SPEAKER_01

Yeah. One of the things we've looked at quite a lot is what we call the dynamic strategic network, which is just fancy language for using modular and recomposable organizational structures. And actually that's one of the most exciting rabbit holes for me: digital technologies enable us to reorganize. They allow us to do things faster and more efficiently if you want to, but they also allow us to reorganize how we structure a company, and even more broadly how we structure an economy.

Henrik Göthberg

And this is one of those AI-after-work discussions I want to have with you, because this is exactly what I am super passionate about: understanding those patterns.

SPEAKER_01

I mean, there are lots of different archetypes that can emerge, which are super interesting. We can get there if we want to; we can just continue diving in.

Perhaps to make it a bit more concrete: if we take leading indicators, for example, what do you think some examples of those could be?

Great question. Okay, that's a very generic question, but if we're looking at a company, let's say a company in, I don't know, choose an industry. Telco? Let's go with telco, or let's go with finance. Banking. Let's go with finance. There's a lot of work that's been done on sustainability in finance, right? So if you're looking at how you could think about it: do we want to do it with AI, or do we want to do it just in general?

Anders Arpteg

General to start with, I think. You do step one in general and then you add the AI. Let's do that.

SPEAKER_01

AI afterwards, okay. So if you look at sustainability financing, you could build some very interesting models. A lot of work was done on this in the blockchain era as well, but they didn't quite have good enough AI at the time. You can look at resource depletion in different parts of the world. Let's say it's pollution going into a water body. You can monitor that and automatically impose fines; preferably you'd automatically stop people polluting, but that's a little bit more difficult. So you can actually connect all of that into some of the financing mechanisms, and if you look at what some of the larger banks are doing, some very, very large banks were already working on these kinds of structures. The other one, of course, is looking at how you can link that to the insurance markets. There's some very interesting work going on in that space.

Anders Arpteg

If we just make it super concrete, because I think you lost me a bit here. If we continue on leading indicators and the finance business, perhaps for large banks that have these kinds of indicators: could you give some kind of concrete use case? Would it be that leading banks should know what to invest in, and in that way have leading indicators that are a bit more sustainable, or what do you mean?

SPEAKER_01

Yeah, so I'm trying not to name the bank that we worked with. For example, banks make a lot of loans, and a lot of those loans would be linked to certain types of sustainability funding. At the time it was ESG reporting, so that needed to be measured effectively. A lot of the work was to ensure they were measuring in advance.

Measuring what in advance?

Okay, let's say pollution: chemicals being expelled into a water body, a river basically, from a particular type of processing plant. You can't wait until the end of the year to measure that, because that's going to be disastrous for the ESG reporting. But if you can monitor it on an ongoing basis, you can work out how to stop it, hopefully that's what they do, or the companies will get audited for their ESG reporting.

And the benefit here is really for the bank, to ensure that they give loans or invest in things that are sustainable, or what would it be?

It is useful for the bank to make sure they are compliant with the ESG reporting guidelines, which have since been dramatically reduced. So that work has probably already been shut down.
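The continuous-monitoring idea described here could be sketched as follows. This is a minimal, hypothetical illustration: the `Loan` class, the discharge limit, and the breach thresholds are invented for the example and not drawn from any real bank's system.

```python
# Instead of waiting for year-end ESG reporting, every sensor reading from
# the plant updates the compliance status of the loan tied to it.

LIMIT_MG_L = 2.0  # assumed discharge limit for the pollutant, mg/L

class Loan:
    def __init__(self, borrower):
        self.borrower = borrower
        self.breaches = 0

    def ingest_reading(self, mg_per_litre):
        """Update compliance on every reading, not once a year."""
        if mg_per_litre > LIMIT_MG_L:
            self.breaches += 1
        return self.status()

    def status(self):
        """Escalate as breaches accumulate: compliant -> review -> breach."""
        if self.breaches == 0:
            return "compliant"
        return "review" if self.breaches < 3 else "breach"

loan = Loan("processing plant")
for reading in [1.1, 1.8, 2.4, 2.6, 3.0]:  # mg/L over successive days
    state = loan.ingest_reading(reading)
print(loan.breaches, state)  # the bank sees the drift long before year-end
```

The design choice being illustrated is just the one from the conversation: the same measurement, taken continuously instead of annually, becomes a leading indicator the lender can act on.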

Anders Arpteg

Yeah, okay. But you mentioned AI somehow. Can you elaborate on what you mean by using AI for these kinds of purposes?

Oh, that was mainly just a joke, really. But yes, you can of course imagine a scenario where you have different types of dashboards monitoring those things for you on a daily basis.

But if we move into AI as a field in general: AI can be used for very good purposes, to improve sustainability and minimize climate impact in many ways, but on the other side it can also be really damaging, and the energy consumption we're seeing now with the biggest data centers is insane, right? What do you think about this? How can we find a digitally sustainable solution when it comes to AI?

SPEAKER_01

So I think we need to be a little bit careful when we say things are disastrous. Do you mean disastrous in terms of...

Well, the biggest data centers we have these days, like Elon Musk's Colossus II, basically consume more than a gigawatt of power. That's on the order of the energy consumption of New York City, or bigger. It's an insane amount of power that they require, and that's potentially bad from an environmental point of view.

Anders Arpteg

But on the other side, the models they produce can be really beneficial: for improving our energy efficiency, even improving the data centers themselves, or for science, and for improving environmental impact in other ways. So I'm just thinking: given that a digital technique like AI is double-edged in some way, used for good and bad, how should we model this? How should we come up with a digitally sustainable solution when it comes to a technique like AI?

SPEAKER_01

Yeah, and I think this is where it's quite interesting to think again about the balance, right? You need to understand how to measure things effectively across economy, environment, and society.

News As AI Input And Journalism’s New Role

Henrik Göthberg

I'm very basic. Other people go for the SDGs, but I'm a Northern Beaches girl, I'm very basic. So what you're saying now is: you can have 17 goals, or you can try to have three fundamental dimensions, just to get it clearer.

SPEAKER_01

Yeah. In traditional terms it was originally called triple bottom line reporting: environmental, social, and economic. It was an attempt to include these things in the accounting infrastructures of companies. It came from an HBR article in 1994; the guy's name will come back to me. That's almost the origin of ESG as an idea. It probably is the origin. So the SDGs, for me, are a little bit too fluffy.

Henrik Göthberg

They're a little bit too high level. Could we summarize the SDGs? These are the UN's Sustainable Development Goals, in different categories. It's like 17 or 15 of them, a wheel of different areas, right?

SPEAKER_01

Yeah, there are 17.

Henrik Göthberg

17, in five or six clusters? I can't remember.

SPEAKER_01

I know, and that's the thing: it's too much.

Henrik Göthberg

You get confused. Each of these 17 has separate targets as well, I think it's like 160. But how to make that into a practical leading-indicator model is quite hard.

SPEAKER_01

It's also completely, and I'm being quite dramatic now, but it's completely useless for a company, because the SDGs are measured on a national level. They're for the United Nations; it's a macro approach when you need a micro approach. Exactly. So that's why, when I go back to talking about organizations, I always go back to the triple bottom line: how are we working to understand how you're balancing environment, economy, and society? There are models to do this; there are economic models that would help you measure these things. Organizations, companies, enterprises have all focused on the economic aspect of measurement for a very long time. But what you're talking about is how you actually make the decision that the investment of X amount of energy or money is worth the social output you're getting. That's a fundamentally different question. That's why I developed what I called a performance-in-use model, back in 2014, which was designed to allow you as an organization to measure, compare, and contrast environment, economy, and society side by side and make a balanced judgment. And most people get very scared when they see the outcomes from that.

And why is that?

Because if you take a true first-, second-, and third-order-effect measurement of what we do when we build digital, it's not brilliant. It's not impossible to do it correctly, but it is more complicated. It's harder, and it requires a greater depth of thought than, frankly, most companies have time for. So I'm not surprised they go back. But at the end of the day, if you think about Colossus II: is that really a company decision, or is that a societal decision?

And I think one of the questions that AI brings up for me is who gets the right to make these decisions about our resources and how they're allocated in society.
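The triple-bottom-line idea of judging a decision on all three dimensions side by side, rather than on economics alone, can be sketched very simply. The weights and scores below are invented for illustration; this is not Catherine's actual performance-in-use model, which is not reproduced here.

```python
# Score a decision across the three bottom lines on a common -10..10 scale,
# then take a weighted average so the dimensions can be traded off explicitly.

def balanced_score(economy, environment, society, weights=(1, 1, 1)):
    """Weighted view across economy, environment, and society."""
    w_eco, w_env, w_soc = weights
    total = w_eco + w_env + w_soc
    return (w_eco * economy + w_env * environment + w_soc * society) / total

# A project that looks great on shareholder value alone...
print(balanced_score(9, -10, -10))
# ...versus the same project through an economics-only lens:
print(balanced_score(9, 0, 0, weights=(1, 0, 0)))
```

The point of the sketch is the one made in the conversation: a decision that scores 9 on the economic dimension alone can net out negative once the environmental and social columns sit beside it.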

Anders Arpteg

But we have these kinds of huge AI investments, especially in infrastructure now. We have the big Stargate investments.

Henrik Göthberg

Sorry, Anders, can we lock down this topic? I want to stay here before we move on. Because if I understand what you're talking about here, I want to use another lingo. I think this is a very deep "what did Cassie say now?" moment. I'll use my words to say the same thing: aren't we talking about needing to find other ways to define value than pure financial metrics? When we look at how companies are run, everything boils down to shareholder value, and that fundamentally drives unintended consequences. What we're doing with triple accounting is broadening the view of value. And now comes the real kicker when we go to Colossus, where he is caring about shareholder value.

SPEAKER_04

I'm not sure he's caring about that, but yeah.

Henrik Göthberg

No, but I'm just going with the logic that a company is doing something that looks good from a shareholder-value perspective but is terrible from a societal or energy perspective. And we are only measured on the one piece. Triple accounting is about bringing in measurements that make us do fewer things with unintended consequences. In layman's terms, is that what we're talking about?

SPEAKER_01

Yeah, I would say that's a very good way to look at it. And actually, the organizations that have done best with this aren't the ones who got it perfectly balanced; instead, they used it to surface some of the questions, some of the unintended consequences, and then used that to engage with communities that might be affected, or other people in different ways, or even to think about the impact on their employees.

Henrik Göthberg

Yeah, before we go back to Anders: you dropped a provocative sentence. Let's go back to that one and unpack it, because I think there's a profound question mark there. How did you say it? Elon Musk has Colossus, but we're ending up in a position where what one company is doing has impact in other spaces, like society. Could you come back to your question? Because you said it, and it was heavy. Unpack it for a second.

Anthropic, Defense Uses, And Guardrails

SPEAKER_01

Yeah, so I think every company, actually every human, all of us, as we walk through the world, we make decisions about things and we have impact on other people. Back to Ericsson: Ericsson will make decisions that have impact on society, environment, economy. What I'm concerned about with AI and some of the hyperscalers is that there is now an outsized impact from their decisions on the economy or society, right? If you look at them, they're now as big as the Seven Sisters were in the 70s; we can go back to the oil era, if we want to compare them to it, in terms of monopoly. So the question is then: how do we actually make these decisions effectively?

Henrik Göthberg

So how can it be okay that four or five companies are making decisions that are rational for them but irrational for two billion people in the world? And how the hell do we navigate this space in terms of policy making and understanding what they can do, how far they can take it? Is that the core problem, right?

SPEAKER_01

Yeah, one of the things I get quite concerned about is that I think we're entering an era where corporations are starting to act as governments rather than just as corporations. What we currently have is what I call MAGMA, the seven large companies, the largest AI, data-driven companies really, making decisions on behalf of the entire planet. Before, for example, a very large company housed in the United Kingdom would have to take a discussion with the British government. They don't actually have to do that anymore. Sorry, then. Apologies. Is that you? That's me, sorry.

Anders Arpteg

That was me, before I shut mine off.

SPEAKER_01

Sorry.

Anders Arpteg

Yeah, whereas these kinds of companies are getting so huge. The impact they have, and the economy they have, is bigger than most countries in many cases. It's really big. But then, if we just go back: if we live in a capitalist society, then of course they have to maximize economic revenue.

SPEAKER_01

But do we only live in a capitalist society anymore? I posit that we're moving into a new type of economic structure.

Henrik Göthberg

Yeah, really? I think so. Elaborate, please.

SPEAKER_01

Yeah, I think we're moving into an economy that's fundamentally founded on data. Think about it: what we've had is companies chasing capital for years, so they could have freedom of movement, shall we say, to build their products and do what they want to do. Then what happened? This might be a little bit too esoteric, but...

We love it. We love it.

Okay. I always say it started in 2004. This new type of economic structure emerged in 2020, where companies started actively hunting data, because data is what has given them control and power.

Anders Arpteg

But also capital, right?

SPEAKER_01

Has it given them capital?

Anders Arpteg

I think so.

SPEAKER_01

Has it? Sorry.

Henrik Göthberg

But it's an interesting thing, because we still need capital to run our companies, yeah. But if we look at the MAGMA companies, the common denominator, what they're doing differently, on a completely different scale than anyone else, is chasing data. Everybody's chasing capital, but what are they chasing that no one else is chasing? Data.

SPEAKER_01

They are expending huge amounts of capital to get more data.

Anders Arpteg

Yeah. But not for data in itself, necessarily, is it? Is that the end goal, or is it the means to something else? It's a means. It's a means to power.

SPEAKER_01

But what is money? What is money? It's a means to power too. So it's all power. I think it's all about power and control, really. Without data, they would never have been able to build the models. So you're correct, right?

Anders Arpteg

I think so. And the model, in its turn, gives power, which gives money.

SPEAKER_01

Yeah. But money gives you resources, gives you control over output, right? So it depends how you want to build it. I don't think capitalism is going to go away. I think we're seeing the emergence of something new, maybe a new type of capitalism. We could say that, right?

Henrik Göthberg

But are we talking about a power flywheel? We're talking about a power flywheel. When Putin says geopolitical power will come down to AI, when we talk about Google chasing data, it's all things that push the flywheel around so that we have more and more power and influence, I would argue, in order to then make more money. Isn't it this power flywheel?

SPEAKER_01

I think it's a little bit more than power, actually, because what I find super interesting about AI, in particular in the era we're living through right now, is this: before, you would have a human worker and a machine, or a human worker and a laptop. I would do my coding, or whatever it is, someone else would make some money off it, and I'd get a salary. What we're actually seeing now is the embodiment, the encoding, of my capability, capacity, and skills through AI. It's been captured through data and AI, or maybe algorithms is a better way to say it. And that, I think, is what's driving the power flywheel: you're taking, or capturing, human capabilities and putting them into the digital world, the digital sphere. You're all trapped in your laptop now. And you didn't know it, Anders, but you are. Of course I am.

Henrik Göthberg

Because then I understand what Anders is saying too. We're saying they're all chasing data, but that simplifies the flywheel discussion, because from data, what they're really capturing is human ingenuity and human intelligence. They're bottling my and your human intelligence and ingenuity, which then makes them more intelligent. More intelligence then lets them do something more productive and effective, leading to power. So there's this, and more money.

SPEAKER_01

Yes.

Henrik Göthberg

So it's a consequence chain. This is the flywheel effect, it's a consequence chain. You can simply say, oh, they're all chasing data, but they're chasing data in order to get to there, in order to get to there, in order to get to power, and then back again.

Beyond UN: Designing 21st-Century Governance

SPEAKER_01

Yeah, I guess maybe the better way to say it is that I completely agree with you: they're chasing data in order to have the same level of control that money used to give them. Yes, that could be.

Anders Arpteg

So I mean, there's a lot of useless data out there. In some way, it's really the insights or the knowledge that you're seeking, or, if you use the term intelligence: in the intelligence community, intelligence basically means knowledge; in the AI sphere, it means the ability to acquire knowledge. But still, I think the ability to have information that is in some way useful is something a lot of companies are chasing, perhaps even more so than capital in some sense.

Henrik Göthberg

Sorry, you can go first, but I was trying to connect it back to the AGI race: are we chasing data or are we chasing intelligence? But you take yours first.

SPEAKER_01

Well, I was going to ask, actually, because I think what's really interesting is: who's actually winning as a result of the AI race? How many companies do we know that are genuinely able to make money out of AI? Is it only the hyperscalers, which are not even making money, they're losing money?

Henrik Göthberg

I think it's the industry. I think it's the hyperscalers and the vendors of architecture and infrastructure that are making the money, not the other way around, not us buying it. Yet.

SPEAKER_01

One of the things I saw recently that was really super interesting and just made me sit up was News Corp. Massive, speaking of Australia. Murdoch, yeah, exactly. They've recently signed a $150 million contract with, I think it's OpenAI and Meta. And basically, this massive, very powerful company now considers itself input to AI. They actually went out and said that publicly.

Henrik Göthberg

I guess there's more money that way than in newspaper subscribers.

SPEAKER_01

Exactly. So that's a fascinating shift for me, because that's a mega industry saying, we're now just input to the AI machine.

Anders Arpteg

The whole value chain is turned completely on its head. But one thing we're saying as well is that power is concentrating more and more. The biggest and most valuable companies in the world, if you look at the top-10 list, are basically all hyperscalers, data-driven companies, I would say. Except one, perhaps, Warren Buffett's company, but otherwise they are. And it seems like that gap, that concentration of power and money, is increasing a lot. Would you agree with that?

SPEAKER_01

Yeah, but is that a result of AI or of data? I'm not sure. I think we've seen these kinds of periods before in the world, over several thousand years, actually.

Anders Arpteg

So there's some change. Do you think it's caused by AI in some way? I mean, Nvidia, why do they make so much money? Microsoft is at the top, why do they make money? Apple, well, perhaps Apple is an exception here.

SPEAKER_01

Well, I mean, most of those companies have made money because of data, I would say.

SPEAKER_03

Because of data, yeah. I think so too.

SPEAKER_01

Yeah. So that's why, instinctively, I think there is a new type of economic structure emerging alongside capitalism, or maybe even tightly coupled to capitalism.

Henrik Göthberg

So the capitalist model is evolving. Money is still money, yeah. But now we have another type of instrument: data itself. Is it money? What is it, right? It's another means to this.

SPEAKER_01

Yeah, and if you think about money, it's evolved a lot over 5,000 years, right? I think that's roughly the origin of money. And we've seen a lot of changes around it.

Henrik Göthberg

But can we connect this back? I was going to go down the rabbit hole of: are we chasing data or intelligence? What are we really chasing? We've even had conversations here where people argue we're not chasing intelligence but something else entirely. Can we elaborate a little on that? If we say we're chasing data, what about the AGI race? How does that connect to this conversation?

SPEAKER_01

So I think, well, if I may give you an introduction to that topic.

Anders Arpteg

And the weird thing here, I think, with AGI specifically, is that we're seeing insane investments in recent years: starting with Stargate, $500 billion from OpenAI, SoftBank, and Oracle. Then we saw the 200 billion euros from Europe in the big InvestAI initiative, as it was called. And of course we have Elon investing an insane amount of money, Meta investing an insane amount of money, Google doing the same. Every one of these is investing an insane amount. If we just take OpenAI: they have a revenue of about 20 billion dollars, just 20 billion, but still they are committing to 1.4 trillion dollars in AI infrastructure.

SPEAKER_01

Yeah. That's what I mean. They're spending a huge amount of capital to get data.

Henrik Göthberg

Is it data, or is it something else? There's data, yes, but now we're talking about something else. Is it the model? What is it we're really investing in?

Anders Arpteg

If we were to elaborate on that: suppose we believe AGI will happen. Just take the hypothesis that they're doing this because the one who gets there first wins some kind of winner-takes-all scenario. The one that actually has AGI will be able to build even more powerful solutions and companies, so we'd have a self-recursive-improvement scenario, a fast takeoff potentially, as some people call it, right? Could that be a reason for investing these absurd amounts of money? People have been laughing at OpenAI's investments here, but if you think about it this way, and they actually do end up first in the AGI race, it does potentially make some sense.

SPEAKER_01

Yeah, I actually don't laugh at their investments; I think it's quite a serious investment. I don't 100% agree with the theory, but I do in principle. If you look at some of the things that have been achieved with AI, some of the medical advances, some of the work identifying genes has been done super fast. And the insights it's giving into some of the medical systems we've seen, or the fact that it's creating new proteins, creating new things that are viable and usable. I think there could be something there. Also, the thing that I really like about AI and the race for AGI is that it actually shows us humans how stupid we actually are.

Anders Arpteg

Yes, exactly.

Horizontal “Data Economy” Regulation Over Point Rules

SPEAKER_01

You know, and how we think we know everything, and that opens a set of, you know, it's opening possibilities. Just having new proteins put forward that actually make sense, it challenges what intelligence is.

Henrik Göthberg

But there is a narrative here that basically emphasizes why this is such a great idea, to invest this insane amount of money. Because on the one hand, you have the utopian goal. If you completely unlock the productivity frontier and you sit on that golden egg, it's kind of a no-brainer that that will lead to power.

SPEAKER_01

Yeah.

Henrik Göthberg

But throughout the whole race, from here to there, who influences the narrative and what truth is, what the data is built on, what the model is built on, what the evaluation criteria are? If the end quest is a power flywheel, where we reach a completely different type of productivity frontier, then all the way through, you are building a society with your values. You are, so to speak, winning the geopolitical war without taking up arms. So there is a logic that you are actually influencing the model and therefore influencing the narrative all the way through.

SPEAKER_01

Yeah, absolutely. And I think maybe we're focusing a little bit too much on the American companies here. I mean, there's a whole super cool set of stuff coming out of China too that is just mind-blowing, right? Like the physical AI stuff that they're doing. I want to do that. Yeah, it's brilliant. So, you know, they're investing huge amounts of money, but I also think, you know, I know there's a lot of worry about, oh, this is going to transform how we work, this is going to cause all sorts of problems. And yes, it is causing problems, and it is demanding that we think about what democracy is and how it's going to work in the 21st century. But we're also on the cusp of some of the most exciting discoveries, I think. Yeah. It's just so cool to be alive. Yeah. Sorry.

SPEAKER_06

Yes. It's time for AI News, brought to you by the AIAW Podcast.

Anders Arpteg

Yes, so let's have a small, let's see if we can keep it small, news section. We usually do that in the middle of the podcast to have a small break and speak about some of the most exciting news that we all heard about recently. So let's go around the table. Does anyone have anything you'd like to bring up? Any news that you heard about that you'd like to share?

SPEAKER_01

Well, mainly the one I've already brought up: the News Corp deal. That was the major one for me this week.

Henrik Göthberg

Could you just paint a picture? That came out now, and we're talking about one of the biggest news media companies in the world. And it's Australian-based still, right? Or is it Singapore?

SPEAKER_01

I don't think it's... yeah, I don't know where it's based.

Henrik Göthberg

But we used to know the family, the Murdoch family, owning this stuff. You know, it's massive, right? And I guess they own a lot of different news outlets around the world. And now into the story: could you take it one step further, what you said before? Because it's big news.

SPEAKER_01

Yeah, I mean, for me, that was just astonishing to see. News Corp is effectively saying that they're now part of an AI supply chain. They've basically signed a deal where they allow Meta and OpenAI to scrape all of their news sites and take that data in to continue training the models. I think that for me is really interesting, you know, for people who are working in journalism. I think it raises quite a few questions about what you're going to do with your career, or what you should be thinking about doing. But you know, The Guardian also signed a very similar deal, I think it was last year, to allow OpenAI to scrape its websites.

SPEAKER_05

The New York Times as well. You remember, at the beginning they were actually suing these LLM companies left, right, and center, until they got paid. Yeah. Then it was okay.

Henrik Göthberg

But what I think is shocking, or very, very interesting, is that before, they said: yeah, we will cooperate with them, we will allow you to coexist with us. Which is sort of: we are still the top dog, we are media, we know this stuff, blah, blah, blah. Here, the way they are expressing the same thing is fundamentally rewriting the value chain and where you sit in the value chain.

SPEAKER_04

Yeah, absolutely.

Henrik Göthberg

This has not been done before.

Anders Arpteg

But aren't they really digging their own grave here? I mean, if they actually are feeding the AI, why do we need journalists in the future?

SPEAKER_01

Well, that's what I was kind of wondering. And as someone who writes books, you know, why bother? I can just sit and talk into the AI, maybe.

SPEAKER_05

I need to comment on this. You really think that there will not be any place for a human in actual objective journalism? No, I think this is completely viable. Will AI go to Gaza and report from there, or on what is happening in the world, or be a war journalist, or basically look into the cooked books of a company or crooked dealings? Will AI do that? No. But... no, you don't think so? No, absolutely.

How CEOs Start: Mapping Systems And Leading Indicators

Henrik Göthberg

So I don't think... maybe a cyborg will, but I don't know. That is far away. I have something on my tongue here. I think what they are trying to understand is the old model. That we need journalism is super important. That we need people writing real pieces, which is then used to train AI, is super important. But what is happening is that the fundamental old financial model, the monetary exchange, the value exchange between a consumer and a news outlet, is fundamentally shifting here. So basically, we get to the point where we are actually conversing with our chatbot, we are shopping with our chatbot, we are doing everything there, so we don't have the end consumer relationship anymore. So then News Corp needs to say: okay, we can't win and take that spot in the value chain. But we are still important, and we're going to make more money by Meta paying us for this and the consumers doing their thing over there. It's like, instead of going to the delegates of a conference to pay for the conference, we're going to the vendors who want to sponsor it. You know the idea.

SPEAKER_01

It's even that's a great business model. Yeah.

Anders Arpteg

Sorry, but you know what I mean, right? What is the USP of humans in journalism in the future? I mean, yeah. They certainly have lost the race when it comes to gathering data and trying to summarize it. There is no way a human can even compete today with the AI functionality out there. Then you can think, you know, we have some kind of angle, some kind of bias, that is positive here, saying we want to frame the news in a certain way. And then, okay, who can do that best? Could it be the people that are actually prompting the AI to report the news in a certain way? So the human moves up the value chain here: not really gathering the data, not summarizing it, not writing the articles, but providing the framing, the bias that I want the news to have. Could that be the human angle? Or what should it really be?

SPEAKER_01

I think it's quite a similar question for coding, isn't it? What is the role? Yeah, what is the role? Exactly the same. You don't want someone sitting there writing code anymore, you want someone who's guiding the system, guiding the system architecture.

Anders Arpteg

But then it could potentially be one single person that drives all of News Corp, right?

SPEAKER_01

Yep, and maybe that's what they're ultimately thinking about. There is a question, though, that I think is very valuable, that you were sort of mentioning, which is around this idea of the fifth estate. So what is the fifth estate? Journalism as the fifth estate of, you know, government. So, to your comments, how do we ensure that we have scrutiny? Scrutiny. Because the thing about News Corp giving all of their information to AI is: what happens to the truly investigative journalists? Is it that you have someone who becomes truly investigative, and they're all doing things like Watergate or whatever, and you leave AI to do the average everyday stuff?

Henrik Göthberg

I mean, true journalism died when we just started to repost what everyone else is doing. Social media ping-ponging is not journalism. So the true investigative journalism, the fifth estate, the scrutiny of our leaders, is what keeps us free, free speech and all that. So I think what you are looking at is: take away the noise, and that is what true journalism is all about. And then maybe that is the core objective function of journalism, and then you can reimagine something completely different. I think the scrutiny dimension is still the fundament of journalism.

SPEAKER_01

Yeah, I would hope so at least.

Henrik Göthberg

But this cannot be expertise. I'm used to seeing stuff that, you know, is just ping-ponging what others are saying.

Anders Arpteg

Yeah, well, I think it's clear. I mean, humans, as long as they do have a place in the chain, which I think they will for quite some time, they're moving up the stack, so to speak. Just as a truck-driving company may not have humans driving the trucks, they will still, you know, tell the trucks what to do. And just as with coding, as you said, we of course don't have humans writing the lines of code anymore, but you still tell it what to write and how to write it. And the same for journalism. So I think that seems like a clear way forward. But then the question is, when will humans be left out? There could potentially be a point where AI does that better as well.

SPEAKER_07

But, um, a great first news topic.

Henrik Göthberg

Sorry. No, but it's a great, great, interesting news piece. Thank you.

Anders Arpteg

Yeah, Eric.

Henrik Göthberg

No, I stole the show with the whole AI strategy doc discussion last week.

Modular Orgs, Agency, And Agentic Teams

Anders Arpteg

So I'll go last this time. We can just speak a bit about the weird thing happening with Anthropic and the conflict with the Department of War. And it's really weird. It happened like Friday, Saturday last week, and it's been exploding ever since. But of course, Anthropic and Claude are awesome for coding, and that's actually what the Department of War and the Pentagon have been using for a long time. They were the first ones to really supply the Pentagon with AI, and it's been used for a lot of purposes. They used it in Venezuela when they did the attack there, and now in Iran as well. And now suddenly, you know, Dario Amodei is saying: we are not going to allow you to use it for two things. You can use it for whatever you want, for lawful purposes, but not for mass surveillance, nor for autonomous weapons. And then, what was it, Trump went out and retweeted some extreme things, of course, on Truth Social. I think he said something about the horrible woke company Anthropic: we're going to completely remove all uses of you, and you're going to be a supply chain risk. Something that's usually only been used for Russian companies like Kaspersky, and also Huawei, companies like that. And suddenly a domestic company like Anthropic is being blocked, not only by the Pentagon, but by any company related to the Pentagon. So extremely strong statements here. And then OpenAI just went in and said: are we going to accept that? We are happy to take, you know, the orders from the Pentagon. And of course Grok and xAI did the same. And I think xAI made a lot of progress there, so they're going to use that. But still, it was a weird situation. And I saw an interview with Dario, and he tried to explain this a bit.
And I think no one disagrees that using AI for mass surveillance purposes, similar to China and the social scoring they have, is something no one wants, not in the US, nor in Europe, nor, I think, anywhere, except some parts of the world perhaps. Still, on autonomous weapons, what Dario said about that is simply that we don't trust AI to be good enough yet to make decisions about whether we should kill a human or not. We can see a point when AI becomes that good, but it's not there today. So we don't want our AI to be used for that. And, yeah, then they had these really, really strong statements. I'd have to look up exactly what they said.

Henrik Göthberg

But I picked up on the whole thing from a slightly different angle. So it's also interesting how this is portrayed, you know, by different people on LinkedIn, people commenting, having different agendas, but trying to tell the story. Because I've heard another angle in the Swedish community. So now we're not talking about the real discussion, but about how it's framed in Swedish media and Swedish social media. There the angle has been: Anthropic does not want the Pentagon or the US government to have free access to surveil people through using Anthropic. So this is a completely different angle and spin on the narrative than what you did, which is more about Anthropic being the only ones, while everybody else is giving away your information to the government, and Anthropic is saying no to that. So it's a slightly different spin on the same story. News media can't be trusted anyway, right? I don't know which it is, but it's interesting because you went back to the American side, the original real story. Yes. That story almost didn't come up in my feed. It was more the story about: they are the ones here saying no to giving away your data.

Anders Arpteg

I mean, some media are really misreporting this, and they don't really recognize that Anthropic was the first one to be brought into the Pentagon. They were the first ones to do it, they wanted it to be used for these kinds of purposes, for war purposes, and they still do. It's just that they are prohibiting these two things. So yeah... don't you agree?

Henrik Göthberg

These are two different stories arising from the same fundamental problem or issue, but they're two completely different narratives. Yeah.

Anders Arpteg

And yeah, that's humans really.

SPEAKER_01

But do we ever want AI to make a decision about who should die?

Anders Arpteg

I hope not, right? Or what do you think?

SPEAKER_01

Well, it's definitely not ready for it yet, but I'm sort of wondering again, I mean, there's so many questions here, isn't there, about what kind of world we want to live in.

Anders Arpteg

Yeah. Um, would you like, if you have a drone defense and drones are attacking your house, would you like to wait until a human says stop that drone, or do you want to have that autonomous defense in place?

SPEAKER_01

So you're saying... but then that's stopping it killing someone. Yes. Well, stopping someone killing someone is a different thing.

Anders Arpteg

Yeah, but you agree on that, right? Because it's an easy decision to make.

SPEAKER_01

I guess it is, relatively, yes.

Anders Arpteg

So then the next question would be: if you allow it to kill a drone, would you allow it to autonomously kill a human-flown aircraft that is going to kill you?

SPEAKER_01

But these are... if it's a human-flown military aircraft.

SPEAKER_03

Yeah.

SPEAKER_01

Well, I guess that's a decision made by my government, not by me.

Anders Arpteg

But still you have an autonomous defense here. Yeah. In one case it's a drone without a human, in the other case it's a plane with a human in it.

SPEAKER_01

I would want someone who is far more qualified than I am to make that decision. And I do, I mean, this is part of why, you know, AI is really fascinating, because normally we elect governments and we allow them to make the decision, right?

Henrik Göthberg

We sort of, you know, like they would know better.

SPEAKER_01

They probably don't, but, yeah, what I mean is we delegate that responsibility to government. That's why we have elections. That's how we've decided to do it. So technically, there's still some kind of human in that loop anyway, even if it's autonomous, right? So that's my weasel wording.

Anders Arpteg

Yeah, but I agree with you. I think it should be, as well.

Henrik Göthberg

But there is another angle to this, and we can take it out of this very charged context. How do we have safety by design in an autonomous system when these wheels are spinning faster and faster? And we have had very good conversations around this. Like, when we say human in the loop, what does that mean when something is really autonomous? Because from a practical point of view, you have a human in the loop in the design, in how you set the framing criteria, and how you set the topic. But to think that we will have a human in the loop in the practical sense, in something that is a millisecond war, that someone is going to press a button in the middle of the process...

Anders Arpteg

And you can... That's the trolley problem, yeah.

SPEAKER_03

Yeah, yes.

Anders Arpteg

I mean, it's the same problem, right? And would you require a human in that case to make the decision? Or would you be willing to let the AI do it?

Humans As System Coordinators And Education Shifts

SPEAKER_01

I mean, if it is an autonomous vehicle, there is no human in that loop, right? So it's already there. And you can literally read a book if you get in some of the cars overseas these days, right? It's only really Europe that doesn't have this, right?

Anders Arpteg

But I think the Netherlands is getting close now to making that decision. We'll see.

SPEAKER_01

I mean, yeah, it's a tough decision, but at the end of the day, it's probably about small scenarios, and this is where we can have an interesting discussion about AGI, actually. In that scenario, it's probably going to make a better decision, or a more realistic, quicker, safer decision than a human. Yeah, for sure. Than a human, right? Because a human is going to have a panic reaction, shall we say, and can't deal with the amount of data and reason in the same way. So is there something to be said about the fact that, okay, if we get AGI, do we really want AGI just roaming around doing anything, or do we want specialized AGI that is trained within a particular scope? Well, actually, some of the work I was doing before I moved back to Sweden was looking at how we would use something along the lines of guilds. The guilds were, you know, something that existed in the UK: you would be a guild master in carpentry, masonry, all this kind of stuff. So you were specialized in one particular area. So why can't we have AI guilds that are very deeply trained in one space, and then they are able to exchange data, talk, and discuss amongst themselves? So one can say: oh, look, I think I might have some incorrect data, or I've got something going wrong, or check with a human as well. There's no reason why it can't ask a human to review. Do we actually just want AGI as it is talked about today, which is effectively that it can do everything, right?

Henrik Göthberg

This line of thinking links to our guest last week, Magnus Hütste, many years at Google, who was really talking about the fundamental issue of evaluation: the way we set evaluation criteria to frame and box AI in. So he says all the conversations now are: oh, we can do everything, everything, everything. But what we're actually talking about, in order to do this repetitively and safely, and the more high-stakes it is, is that we need to start boxing things in.

SPEAKER_01

Yeah.

Henrik Göthberg

And the guild approach could be a way to say: okay, we have AGI boxed into a guild, and now we can have eval criteria. And he was giving exactly this example. It's like me as a human going to work: I can't be Henrik the surfer or Henrik the party guy at Vattenfall. I need to be boxed in as my job description is, and then I can be taught within that. So the guild idea is a way to stepwise understand how we can get to AGI, or, you know, a narrow AGI or whatever you want to call it.

SPEAKER_01

I mean, you can think about it almost as an education process, in particular the process that AI uses to learn, right? You know, in the same way that we educate humans: bachelor's, master's, PhD, however we want to classify education in today's world. You could actually say: okay, this is an AI in this particular guild at, I don't know, PhD level, but this one's at master's level.

Henrik Göthberg

This makes total sense, because in a way it goes to Karim and how we build intelligence and learning in recursive systems. So the fundamental pattern of AGI is there, but it's at a five-year-old level, until we put it into a certain guild and we let it grow up and get certified. Now you're a certified mason. Then we can in theory continue: I'm going to use that AI, but I'm going to give it these books or these frames, and now it can be a police officer or a carpenter or whatever. So I kind of resonate with that way of thinking about learning.

Anders Arpteg

I think it moves us actually to an interesting topic that we can continue with. And I think... I can misquote you. No, sorry, quote it properly. But did you have anything, Coran, that you would like to add for the news section?

SPEAKER_05

I will be short because we're running out of time. But to tie into everything that you have said: there is a new pro-human AI declaration signed by many AI scientists and companies, Richard Branson and whatnot, etc., which is one of many. It's looking at keeping humans in charge, avoiding concentration of power, protecting the human experience, human agency, and liberty, and then responsibility and accountability for AI companies. It's basically another declaration saying: okay, one of the big major things is that we should not develop artificial superintelligence before we have the guardrails set in place, which is a good idea. My only observation is that we have a number of those. You have one from OPSE, you have the United Nations, etc. What do they come to when it comes to actually putting those things in place? Because right now the concentration of power is in those companies, like we were discussing with the Pentagon and Anthropic, OpenAI, etc. But they don't care about this right now, because there is no penalty, because they are playing with governments. And as long as that is the case, there is no actual urgency or incentive for them to think about the harm that they're doing, as long as they are protected under the supreme, let's say, ambition of a company or a country or something like that. So how can we actually make a pro-human AI declaration when we have five, six companies that are tightly connected with governments and economic power?

Anders Arpteg

So perhaps that's a great segue into you, Kath. I mean, you've been an advisor for the UN as well. What do you think about the UN potentially becoming a party in trying to regulate or ensure that we have pro-human use of AI? Since we have these kinds of companies that are bigger than most countries, perhaps we need something like the UN to address that?

AGI Timelines, World Models, And New Intelligence

SPEAKER_01

There actually is a new panel that's just been announced, the UN AI panel. It's got some extremely interesting people in it, really exciting. My concern, and I'm going to lose a lot of friends by saying this, but, going back to my statement that the things set up for the 20th century aren't going to serve us in the 21st century: I think the United Nations was set up almost deliberately to have no teeth, if you look at its origins and the story behind it. So it's very, very difficult for the UN to effect change. Okay, there are some teeth in the Security Council, but we can see even now that's not doing brilliantly for the world, is it? Let's put it that way. So my question is: what do these kinds of organizational structures look like? We don't need to look to the past, and I think we're trying to fit current problems into old solutions.

Henrik Göthberg

Yes, old patterns.

SPEAKER_01

Old patterns, and we need new ones.

Henrik Göthberg

We need to understand the patterns that are in line with another objective function.

SPEAKER_01

Yep.

Henrik Göthberg

I use this example all the time. We have an organizational pattern that is built for economies of scale, for efficiency in a slow-moving context, where you can take an ERP system and a process and run it for 10 years, then you do a change, then you do an innovation leap. This is division of labor. This is the kind of pattern Michael Porter came up with. What happens when you have a constantly moving productivity frontier and your core objective is resilience and flexibility and continuous learning and adaptability? The fundamental objective function of the organization is something else. So here it's very simple, right? You have a pattern that is efficiency-focused, and now I want a pattern that is focused on adaptability, flexibility, and resilience. What is optimized for the one cannot be the same as what is optimized for the other.

Anders Arpteg

Yeah, but going back to the question: how is the UN playing a part in this, or not? What are the pros and cons?

SPEAKER_01

Yeah, so I think the UN is very good at convening people. But honestly, I think more impact has been seen from some of the other platforms. When you get business together, you know, at the WEF, they were able to change things, because business is currently more powerful than a lot of governments. However, at the same time, I think the panel for the UN is desperately needed, because, in my experience, when I've spoken to different government entities, most of them desperately need help to understand AI. Actually, they desperately need help to understand any digital technology. And so having extremely high-quality computer scientists on the panel, able to have some of those conversations and hopefully help educate some of the UN, will be very good. But at the same time, the UN's budgets have been slashed, right? Yeah, yeah.

Anders Arpteg

So what else could we have?

SPEAKER_01

But this is what I think is interesting. I think some of the analysis that the world needs to do, and reflect on, is: in an era where we have data and capitalism closely linked, what are the choke points? Choke points, that is, the bottlenecks, the places where we can actually genuinely effect change. And how do we do that in the right way, one that gives people agency? You know, like you're saying, a lot of these declarations have come out. There's sort of a report every week, to the point that... MOUs, reports, white papers, academic papers. To be honest, I don't have the answer, but we need to start looking for new solutions.

SPEAKER_05

Perhaps the next one. Just to add a little bit to this, and then I will shut up, because it's not my point. To the point of Henrik and you as well: I think it's hard to make a pro-human AI declaration in a pocket, so let's say only for Europe, or only for the US, etc. And if we look at those structures like the United Nations, everything that was done post-Second World War was done for the benefit of everyone. That's why we have the United Nations: all the nations come together so they can decide on a global scale what actually can be done and how it's going to be done. Which means that that is the only way we can introduce change globally. Because if Europe introduces change, but China and the United States basically say, who cares about this, let's run, then at some point Europe will say, like we are saying right now: oh shit, we're falling behind, therefore put all these things to the side, no regulation, no sustainability, no things like that. Run, because otherwise we will be outdated. And in a world like that, with competitiveness between regions and companies and countries, I think all of these declarations, and even the European AI Act and everything else, will not serve any purpose. Because it's hard.

Henrik Göthberg

Yeah, and now we go full circle, so I can tie this up: we need the UN, but we need another form and shape that is relevant for the objective function of what we're doing.

SPEAKER_01

Yeah, basically. But also, sorry, I've got some very strong opinions on Europe. Let's go, let's go. Yeah, because I really think, you know, GDPR was a great concept, but the way it was executed was basically the biggest act of self-harm. Because they actually took the entire European Union out of the AI race. We have some of the best AI researchers in the world sitting in Europe, and they weren't able to do the research in the same way, or indeed build companies in the same way.

SPEAKER_05

We have done a lot of critique of this. Tell me, has a version 2.0 or 1.2 of GDPR been done? Has somebody revised it after eight years?

SPEAKER_01

Well, there's GDPR 2.0, right? That came up almost immediately because they realized how bad the original one was.

Anders Arpteg

They are trying to remove the cookies now. So thank you so much.

SPEAKER_02

In the end.

SPEAKER_01

And here, coming back full circle, sorry: every time a new technology comes out, we get a new set of regulations from Europe. So we have cryptocurrency, we have some IoT stuff, we have, you know, blah, blah, blah, GDPR, et cetera, et cetera, the DSA for platforms. Instead of trying to regulate every single new technology that comes out, if they instead start thinking about the fact that there's a data economy emerging, and regulate for that horizontally, you get a lot less regulation. You can still go...

SPEAKER_05

What do you mean by horizontally? That was interesting.

SPEAKER_01

Basically, okay, as you've noticed, I have a theory that we're moving into a data economy. So instead of regulating each technology separately: GDPR and the EU AI Act actually have a lot of overlap, if you think about it.

Henrik Göthberg

Complete overlap. It is a mess in here now.

Utopia Or Dystopia: Guardrails, Transitions, And Hope

SPEAKER_01

And what they're doing is actually creating a lot of cost, a lot of expense, and a lot of complexity for companies to respond to and show compliance with. If instead they take a step back and think sensibly about what is actually happening: we're getting this new layer of our economy that's called data. It's having a lot of impact across different aspects of our society and our regulatory environment. So we're going to regulate for data, not for AI, not for cryptocurrencies, not for platforms, because the DSA, the platform regulation, is also just regulating data, right? Think about it more broadly and more deeply. And I think there are economic models they can use to do that.

Henrik Göthberg

Actually, if you do it like Cathy is saying and look at a data economy, you can on a macro level get coherence with something that is already happening when you get to the engineering level of building products. Yes. Because the problem right now is that as long as I listen to the lawyers, I have 50 different regulations, and some are overlapping. So I need an AI to understand what applies to me.

SPEAKER_01

I built one, by the way, if you need to.

Henrik Göthberg

But it all comes down to the one singular product owner who has built this AI system, whether it's for internal use or for product or consumer use, who needs to go 360 degrees around all the different regulations, i.e., the data economy of this singular microeconomic product. So take a product view, and I've said this to anyone who wants to listen: you need to solve it from the engineering perspective of building products. And what you did now, you simply took the financial value side of the product and did the data economy. It's the same thing.

SPEAKER_01

It's the same thing, but framed so that hopefully policymakers understand it.

Henrik Göthberg

Because if I say you need to take an engineering perspective on policy, they don't get me. But if you say we need to take an end-to-end economic value perspective on the product, you know.

SPEAKER_01

Yeah, what they've done with digital technologies is equivalent to saying: I need to regulate a euro, and then I need to regulate the 200-crown note and the 500, and I need to regulate them separately, right? It makes no sense. That's just my humble opinion, of course.

Anders Arpteg

But perhaps um I mean you you mentioned you're thinking about the next book as well, right?

SPEAKER_01

So my next input to AI.

Anders Arpteg

I think you know AI sustainability perhaps could be a good title there.

SPEAKER_01

It could be, yeah. I think uh it could be. We'll see. Um we'll see what the publishers say.

Anders Arpteg

Cool. If we try to move into another topic here, the time has been flying away a bit, so I'm going to cut some of the discussions. Okay, so think about someone driving a company right now, and you have a book speaking about digital sustainability. We wanted to make it a bit more concrete, not just a theoretical framework; we really want some practical steps that companies and CEOs can take, right, to make this happen. How can we start to speak about this in a practical sense, so that CEOs and companies can start making use of it?

SPEAKER_01

Yeah, I think one of the most obvious things is that you need to know what you have. Quite often when we talk to organizations, they think they know what they have, but they really don't. So one of the first things is to map what I would call the ecology of your digital systems, not just from a box-diagram perspective, but also looking at how they are connected. What are the dependencies? Quite often you can find small dependencies there that create brittleness, shall I say, where resilience will fail at those brittle points. Then I think you need to think about reframing value: maybe not down at the very deep what-is-the-value-of-data level, but look more strategically at how you can use digital technologies across the entire company. If you have the mapping, you can do that, and looking towards adjacent industries is also very useful. And think about what the critical points of value within your company are that need to be protected if resilience needs framing.

Anders Arpteg

Okay, so understanding how the business works, understanding what digital systems you actually have, a bit about their capabilities in some sense, and then potentially reframing the value from a more sustainable point of view.

SPEAKER_01

Yeah, so the triple bottom line is where, you know, we went into that in quite a bit of detail already. So what does it take? You have to map your ecology, what I call an ecology, but then also think about your broader ecosystem, not just the shareholders. So you have to think about the communities, the employees, and others: it could be your partners, it could even be competitors sometimes, in some types of industry where there's a lot of competitive co-creation and a need to work together. But then also think about how you're going to govern that system in a different way. Are there ways you can use a more, shall we say, decentralized or decoupled organizational structure? Can you reorganize, in very simple terms, to create more resilience in your organizational structures?

Henrik Göthberg

Now you have two parts here, because as soon as we start going to another definition of value and we want to govern that value, we need to figure out the new leading indicators. We understand how we have measured EBIT and financial metrics. What are the leading indicators that make sense in the other dimensions? That becomes a critical one.

SPEAKER_01

Yep. And it probably also enables you to delegate some of those indicators out to the boundaries of the firm, right? Rather than having everything centralized, the measurement makes sure that different parts of the organization have greater agency and control, and are able to take decisions at the edge to enable greater resilience.

Henrik Göthberg

Yeah, this now becomes where we can talk about this topic all the way from architectural system engineering and team topologies in software systems down to this. So what we are talking about is federation: where do we decouple? How do we build tighter, more efficient teams with agency and a very clear purpose, and how do they decompose and modularize into different parts? So you're literally doing something that is not a hard, brittle process chain, but more modular and composable.

SPEAKER_01

If it makes it a bit simpler for an AI audience, maybe you can think about it this way: we're going to see a complete redefinition thanks to agentic AI, or whatever agentic AI becomes, where we're not going to see companies or organizations structured in the same way. You're going to see probably one person running a team of agents and a couple of people, and these are going to be connecting and coordinating with each other in what I think are completely different ways. So in order to create resilience, you can think about different ways to organize, to decompose, and then you can think about the company in a completely different way. What does the company actually do? Which is that bit that holds it all together on top. Sorry, you look skeptical. Go on, go for it.

Henrik Göthberg

No, it's a different side story, because we have done research on this topic within our organization, looking at it from the core fundamental patterns, from what we refer to as agent-based organization. But before we got "agentic" from the AI community, it was about organizing for agency. How we understand agency is fundamental to how we understand intelligence as a hive mind. This is such a rabbit hole, but I fundamentally agree with your principal view of decomposition. What we have worked on are the principal design patterns and principles for how to think in order to carve out a good decomposition heuristic.

SPEAKER_01

Yeah, and to try and work out exactly what the largest grouping is that you should have in a team, for example, or in a group of agents. Where does it actually get too big? Have you looked at any of those kinds of things?

Henrik Göthberg

Yeah, and I think it's a fractal pattern, and as a fractal pattern it's ultimately very simple to understand what the key design principles around it are. Our PhD, Mikael Klingvall, is the guy who is the brain behind this thinking. He's a data scientist at work, but we are looking at those kinds of principles. Fractally, the way we have thought about this, complex adaptive systems and AI actually teach us how we need to think about the future enterprise.

SPEAKER_01

Yeah, I would completely agree with you on that, because since probably about the 1980s we've seen a hollowing out of corporate structures, right? What we call the hollowing-out process. And actually, if you think about what a company really is, it's a complex adaptive system. Yeah, and it's also really just an aggregation of, back to money, capital, of human resources and some physical resources, and those can be redistributed and delivered in a very, very different way.

Henrik Göthberg

So Mikael Klingvall is more or less a world leader, in my opinion, in understanding this from an organizational point of view. He's an organizational sociologist and data scientist.

SPEAKER_01

Very cool.

Henrik Göthberg

So he has a very, very interesting angle into this.

SPEAKER_01

Do you have a podcast of him? I want to watch that.

Henrik Göthberg

Yeah, he needs to. He was actually here many years ago, but we were a bit drunk that episode. It wasn't that good an episode because we were actually drunk. It was an awesome episode.

SPEAKER_01

Sorry, I'm drinking water. Yeah, yeah, yeah. No worries.

Anders Arpteg

But continuing perhaps on what you recently said: we are going to reorganize differently, we're going to have AI agents, and that is going to be the team that most humans have and play with. How do you see the human role evolving, then, as we get more and more AI agents as co-workers, or whatever we call them in the future? What do you think the role of humans will be?

SPEAKER_01

Well, I think we'll move into an interim period, like we were talking about before, where you end up being a system coordinator, right? Going back to tech: you start in coding, then you used to work your way up into system architecture and then into more end-to-end system architecture. I think the issue will be that you now need to move straight from university into end-to-end system architecture, and that creates a lot of problems for the education system, actually. I think we need to completely redefine how we educate and how we train people. But over time, hopefully, what happens is we have a lot of agents off doing work for us, maybe making money off our assets for us. It would be lovely: if you're going to have an autonomous vehicle, you might own the car and rent it out when you're not using it yourself. An agent could theoretically drive it, put money into your bank account, take it out for gas, you know.

Anders Arpteg

Do you use do you use agents yourself in any way today?

SPEAKER_01

Mainly to play around with them to see what they can do, actually. I'm a bit too scared; I've done too much security research previously to use OpenClaw. I like trying it, but then I keep going: oh, there's no way I'd put this live.

Anders Arpteg

Just put in all your passwords, give it access to all your files. What could possibly go wrong there?

SPEAKER_01

But Kathy, what I'd love to have actually is an agent that can go through my laptop and organize it for me. That I might like; I haven't got around to it. OpenClaw can probably do that for you, though.

SPEAKER_03

Yeah, right.

Henrik Göthberg

Yeah, but OpenClaw can do that, I think, even better. But you said something I think is quite interesting: there's the interim, and then there's when we can move up an abstraction layer. We had this conversation before. It would be nice when we feel so confident at this level that we can live on a different abstraction level, sending them out to do work and so on. But the system coordinator role, the interim, is a little bit like we are in some ways moving up the abstraction level, but we are nowhere near being able to let go of being part of it, or monitoring it. That's the problem: we're going to live on the same abstraction level as the AIs and try to move up. Is that what we mean by the interim? We can't really let go; it would be stupid to let go too fast.

Anders Arpteg

We do let go a lot in coding, for example. It's really the window into the future in many aspects. We're not really looking at the code. Some people still do; I still do, by the way, because I have a hard time letting go, but a lot of people do not. And the reason...

Henrik Göthberg

Are they mad, or are we there? Are we there now, or are they mad? The conversation on LinkedIn is like: are you mad that you're not looking at the code? And then we say: of course we can do it now.

SPEAKER_01

I think that's a bit arrogant, isn't it? To have a go at people who are not looking at the code. I think it's also a great democratizer, if you think about it.

Henrik Göthberg

Yeah, I wasn't having a go. I was like, I was more like, oh. There.

SPEAKER_01

It is the conversation.

Henrik Göthberg

Are we there or not? And and I think we are there.

SPEAKER_01

Yeah. Though if you think about it, it's slightly irritating to have spent a lot of my life learning how to write good code.

Anders Arpteg

I think it's still valuable to do that. I still think universities should teach people to code, but perhaps in a different way. You still need to understand the basics.

SPEAKER_01

Yeah.

Anders Arpteg

You still need to understand what a CPU is, even though you don't know how to make one.

SPEAKER_01

Absolutely. And it's one step towards Star Trek. We have to accept it, you know.

Henrik Göthberg

But there are so many good sayings here. Like: a coder put all his identity into "oh, I'm so good at coding." Finally, we can let go of that thing and talk about what a real engineer is, what real engineering and software engineering is all about, which has many more facets than just coding. So finally we don't need to have that as our crutch. Someone said it quite well, you know what I'm saying? To build good systems, to build a system with product-market fit: that was always the job.

SPEAKER_01

Yeah, and it enables us to use our higher-order powers of thinking, right? To think: okay, what do we want the system to actually do, as opposed to sitting there trying to debug all the time?

Henrik Göthberg

So if we really look at our role as engineering, 360 degrees around building great products with product-market fit, then coding is just this one piece of my identity. I'm happy to leave that behind, because these are the real questions. Someone said this really well, and I completely ate it up. I think it's true. What do you think? I mean, engineering is so much more than coding.

SPEAKER_00

Yeah, absolutely.

Anders Arpteg

I mean, it's never just been about writing lines of code. Exactly. It's been much more about the systems thinking, and what the systems really should do.

SPEAKER_01

I think most people are worried about AI mainly because it shows us what these traditional 20th-century jobs really are. What was that book? Bullshit Jobs. Yeah. Most of what we do is actually admin, or repackaging someone else's email to give it to someone else.

Henrik Göthberg

But this is functional stupidity that should be taken away anyway. We talk about having a so much higher productivity frontier because of AI, and then I'm a little bit provocative: is AI transparently showing us the functional stupidity that we're cleaning up? What are the efficiency and productivity gains, when some of these fucking bullshit ways of working and stupid things, people not having agency, teams not being smartly organized, suddenly become "with agents, we need to figure that out now"? No shit, Einstein. You should have done it 10 years ago with your team of people.

unknown

Yeah.

Henrik Göthberg

I think there is so much good in thinking about systems and agents, because it actually shows us: did we really give teams agency before, in the right way? Maybe now's the time to do it.

SPEAKER_01

Maybe don't say that too loud. That'll stop everyone using AI.

Henrik Göthberg

I think it's the opposite. I think it's it's a healthy way of looking at the same thing.

SPEAKER_01

I mean, the moment I was convinced to use AI, actually, was with marking for students. I did a lot of teaching before, and you'd write the comments, and then I'd look at them going: oh, well, that's a little bit harsh. I need to go back and be a bit nicer; not that I wanted to be harsh to them, I'd given them a good mark. So I just asked the AI: take this and smooth the language to make it nicer. And I was like: oh, that's beautiful, thank you. That cut probably about 12 hours from marking. Some years it cut two weeks, depending on the students.

Anders Arpteg

Yeah, I think it will be amazing when AI can do that not just in coding but in any kind of job and task, like teaching and grading exams. Super fun. I think we should move perhaps even more philosophical here. Okay. There is this metric from METR: it basically tries to measure, for a given task, how long it takes a human to do it, then sees what AI can do today, and tracks that progress over time. Like half a year ago, I think, we were at around four-hour tasks that AI can do with a 50% success rate, and apparently that doubles every four months or something. I think we're up to around 14 hours now: work that takes a human 14 hours, and an AI can do it successfully. That will continue to improve a lot, so soon we will have AIs that can do what takes a human weeks of work, and do it successfully. If we believe this will continue, then of course we will reach AGI at some point, where jobs that Henrik does or that I do can be done much better by an AI. If that happens: for one, do you think it will happen? And when do you think we'll potentially have such an AGI system? Sam Altman has a nice definition here: AGI will happen when an AI can do what an average human coworker can do, and you can properly replace him.

SPEAKER_01

When do I think that will happen? Hmm. Six months.

Anders Arpteg

Really?

SPEAKER_01

No, I actually don't know. Six months to a year. I think advances are being made. At the same time, I also have a lot of belief in these new world models; I'm really excited to see what happens with those. The reason is that when we talk about AGI, effectively we're being challenged as humans. Before AGI and all this wonderful stuff that could write pages and pages, we kind of thought we were the only intelligence in the world that could have language. If you think about that, it's a massive level of arrogance that humans have had about what intelligence is and how brilliant we were, the supreme animal on planet Earth. I think those advances are going to continue. And are they going to be able to effectively replace a coworker sitting in a very boring, task-oriented job, doing a bit of writing? Yeah, probably quite soon; I don't doubt that. What I'm super excited about, however, is what happens when we see what other types of intelligence can emerge from these world models, physical models, which are much more about learning from the world around us, different types of intelligence. That to me is almost more exciting. Because imagine being introduced to a new type of intelligence that we haven't seen before, one that really challenges our understanding of being human.

Anders Arpteg

It's fun also to see, because we can imagine that humans are really, really bad at some things, right? Take knowledge management: we are horrible at just reading a book and doing recall on it. AI can do that insanely much better than any human. But I also think we sometimes underestimate what humans can do and overestimate what AI can do. I'm not as optimistic as you about one year or six months. There are a lot of things humans do that we do not appreciate. So I wouldn't say six months.

Henrik Göthberg

By 2029? You're sticking to Kurzweil's number the whole time.

SPEAKER_01

Four years, okay. Yeah.

Anders Arpteg

I think it's also a big difference between the digital world and the physical world.

SPEAKER_01

What co-worker are you replacing?

Anders Arpteg

Uh Henry.

Henrik Göthberg

Any. Well, but but the the the core argument here is a little bit.

SPEAKER_01

There is no such thing as "any co-worker," right? There is a co-worker who might, for example, book all my meetings, book all my travel, schedule things; that I think can be done in probably six months. OpenClaw could probably do it now. But then again, maybe not.

Henrik Göthberg

But there's an interesting nuance here. Go back to Sam Altman's definition of AGI: an AI that can be taught to do any job a normal person can do, from A to Z. I think the logical way of thinking about that is that you have the fundamental intelligence, and then you start in one corner with one guild, and you can replace this coworker, and then this coworker, and then this coworker, and then you can move across that whole space digitally, office work, and then you can think about doing it in the physical world; so AGI on many different levels. But if we just take the office work, the white-collar work, I think the AI is smart enough now, but it still has a learning curve, guild by guild by guild. And we still haven't seen replacement. I'm not sure. Not yet.

Anders Arpteg

And please disagree with me, that full replacement hasn't really happened.

Henrik Göthberg

No, we are at task level. We are not at fundamental job level. We're not, we're not.

SPEAKER_01

And it depends what you mean by a job as well, right? These are definitional. But journalists could be replaced; apparently, according to News Corp, they have been. So can book authors, by the way.

Anders Arpteg

No, no, okay. I'm kidding there.

SPEAKER_01

But actually, isn't it a bit of a weird metric? Think about it. Replacement, okay, that's one thing. But what about the invention of a completely new type of coworker? You know, something in the chat, yeah.

Anders Arpteg

That's right, in some way.

SPEAKER_01

A bit, a bit, a bit. But again, they're task-oriented, aren't they?

Henrik Göthberg

But I loved your comment here, because when we go into this replacement rhetoric, we are already putting old models on a new problem. We all know that jobs have been reshuffled and reworked, and new roles emerge, and we call them different things. So we know for a fact that we cannot define the jobs of the 21st century this way. That becomes tricky, because we are trying to box it into a definition of replacing jobs to be done, when in fact this is evolving, it's changing.

SPEAKER_01

There's a fantastic museum that you guys need to go and have a look at. It's called the Museum of Unknown Artifacts.

SPEAKER_07

Ooh, that's funny.

SPEAKER_01

And I use this a lot as an example of what I think AI is going to do for us. I think it's in Nottingham, in the UK. What they have illustrated is that there are all of these implements, and nobody knows what they were used for. They're tools we used probably pre-Victorian era, and they have absolutely no clue. They've got lovely labels that say: we think this has something to do with farming.

Henrik Göthberg

This is a cool tool, and it's clearly a utility, but we cannot figure out what it was used for. Exactly.

SPEAKER_01

And there was obviously some human working with that tool, probably an expert in it, and suddenly it has been replaced. Hundreds of years later, we have no idea what it even is. I think that's where we're going to end up. People are going to look back at us and go: look at these idiots using this instrument. What is this? I mean, imagine this.

Anders Arpteg

Imagine in 10 years, you know, people are going to look back at us and think, oh Jesus Christ, imagine how they worked in that time.

SPEAKER_01

I think they're already doing that, aren't they?

Anders Arpteg

But that was a good framing, Cathy. I like that. Okay, so a final question. Imagine the exponential technological AI progress continues, and we see some uptake and adoption as well. Then there will be a point where we'll have AGI, and potentially ASI as well, and it could end up in two extremes. One extreme is the dystopian kind: The Matrix, The Terminator, machines trying to kill us all. The other extreme is the utopian version: a world of abundance, where AI has cured cancer, fixed the energy crisis, made fusion energy work, fixed so many things, and the price of goods and services goes towards zero; a Star Trek world in some way, perhaps. Where do you think we will end up on that spectrum?

SPEAKER_01

Well, I hope for Star Trek. I've got to. I think we end up a little bit more towards Star Trek, but we've got a long way to go. Okay, showing my supreme geekness now, but if you look at any of the Star Trek canon, you can see they always talk about Earth history going through some kind of massive turmoil and then moving through that; still surviving, though, still surviving, yeah. So it gives us some inspiration, but like all science fiction, it's always just a concept, an idea. The thing is: how do we move as a society? I think we genuinely need to have some very open and honest conversations, and try to work out, across society, the type of world we want to live in. And how do we help people transition as well? Because it's super exciting to talk about how someday someone won't know what a pen is, or how all the tools we use today will be completely unknown. But that does mean a lot of people are going to go through a lot of turmoil. So how do we help that transition? I think that's key; that's something the UN should be working on.

Anders Arpteg

To come into a world of um sustainable AI, perhaps.

SPEAKER_01

Hopefully, yeah. Right. Yeah, that would be good.

Anders Arpteg

I'm hoping. I think you phrased it well. I've been saying that also: I'm more afraid today of people abusing AI than I will be at a point in time where we have AI that can at least supervise other AIs.

SPEAKER_00

Yeah.

Anders Arpteg

And then I feel a bit more secure. Um, but I'm really scared actually today when it's so easy to abuse AI.

SPEAKER_01

Yeah, and understanding how to put AI inside some guardrails would be very valuable.

Anders Arpteg

Especially in the geopolitical situation we have today, it becomes a bit scary, I think.

SPEAKER_01

Yeah, and I guess we could use some of the archetypes, going back to the beginning of the conversation. Some of the archetypes, or design patterns, originally used in telecommunications could be very useful for providing some of those guardrails.

Henrik Göthberg

I think there are. I even go back to the point where we were talking about the core pattern of intelligence. There's a startup in Sweden that is trying to crack the idea of intelligence, right? In my opinion, trying to understand the first principles of intelligence, the feature set of intelligence. And here we are now talking about fundamental patterns. I think these are fractal patterns that go into the design thinking of how we build intelligence, which goes into how we build organizations and teams and so on. There is beauty in that idea, and I think there are things here we can figure out that are reusable.

SPEAKER_01

And is there something also to be said about cognition? We talk about intelligence, but are we actually talking about cognition, the ability to cognize? Sorry, my English is declining. If you think about it, a lot of the things we see around us that have what we would classify as intelligence have a cognitive capacity, right? And there's a lot of really fantastic work being done in that space. There's a great book, Bacteria to AI, by N. Katherine Hayles, that's worth reading.

Henrik Göthberg

Yeah, this is so uh this is we need okay. Sorry.

SPEAKER_07

Now it's time for AI after after work.

Anders Arpteg

So this is the cliffhanger where we bye bye. Yeah, we leave all the business behind now.

SPEAKER_07

Um now we now you guys see why we want to have AI after after work. What a great question.

Anders Arpteg

Yeah, so looking forward to digging deeper into that question very soon. But thank you so much, Catherine Mulligan, for coming here to the AI After Work podcast.

SPEAKER_01

No, thank you very much. This was the sort of conversation I moved back to Sweden for, so I really appreciate it. Thanks. Thanks.

SPEAKER_02

Thank you.