AIAW Podcast
E168 - AI Innovation at Central Bank of Sweden - Hugi Aegisberg
In this episode of the AIAW Podcast, we’re joined by Hugi Aegisberg, AI & Innovation Lead at the Central Bank of Sweden (Sveriges Riksbank), for a forward-looking conversation on how artificial intelligence is reshaping the future of central banking. From rethinking monetary policy and financial stability to building public trust through open-source AI and digital assistants, Hugi shares practical insights from inside one of the world’s oldest financial institutions. We dive into Sweden’s approach to hybrid cloud, data sovereignty, ethical AI deployment, and what it means to empower every citizen with their own AI agents. A must-listen episode at the intersection of technology, public governance, and economic transformation.
Follow us on YouTube: https://www.youtube.com/@aiawpodcast
Perfect.
Hugi Aegisberg:And what was the name of the conference again? EconDat. Yeah. EconDat, like economics data. EconDat.
Speaker 1:Yeah.
Hugi Aegisberg:EconDat. Econometrics data.
Anders Arpteg:And how many participants are there, you'd say?
Hugi Aegisberg:Well, this conference, I think, had about maybe a hundred participants. Yeah, different people flew over from all over the world. I mean, beautiful Canada, nice fall weather, autumn colors. I drove around a bit, you know, checked out the vibes. You know, Canada's great. I was actually really surprised. I hadn't been to Canada before. But I sort of felt like, oh, I could live here. And then I realized, oh, it's because it's just like here. Yeah.
Anders Arpteg:But it is very similar, both in terms of the nature and the weather, but also, I think, the society in general and the way they operate.
Hugi Aegisberg:Yeah, people are just slightly warmer, friendlier, and more helpful than Swedes are. So it's like Swedes, but on a good day. They have weird drinks as well. Beertinis. Did we try that? You know, I didn't have beertinis, but I had poutine.
Henrik Göthberg:What is that? I haven't heard about it.
Hugi Aegisberg:Okay, I'll explain poutine to you. So imagine you make really good french fries, right? You put them on a very big plate.
Henrik Göthberg:Okay.
Hugi Aegisberg:Then you take some gravy, like meat gravy, and you pour that on top of your french fries. And then you take something like a weird halloumi, and you chop up this halloumi and pour that all over the french fries. So it's like halloumi, french fries, and gravy. And then if you want to get really dirty, which this particular poutine had, there's also a mound of smoked meat just on top of it. It is a bomb of all sorts. I was like, I have to try this, and it was two days of punishment.
Henrik Göthberg:And is this a Canadian thing?
Hugi Aegisberg:It's a French Canadian thing. Okay. So you'd imagine the French as, like, refined cuisine. The French Canadians are more like, let's have some fries with gravy. That's their vibe. Loved it though. It's tasty, right? It is tasty and decadent, yeah.
Henrik Göthberg:It's decadent and it sounds pretty good. I could see myself going, ooh, I want to try that, but oh my god.
Anders Arpteg:Okay, but you went to a conference at least as well. I did go to a conference as well, yes. I did manage to make it there as well. Yes, indeed, yeah. Cool. And that was like a summit or conference for central banks around the world, right?
Hugi Aegisberg:And you know what? It's a summit for the nerds, really. That's what the summit is for. It's ML, alternative data sources, forecasting, that sort of thing. So you basically had a bunch of PhDs who were there presenting their papers. Some of it gets very, very nerdy, nitty-gritty, like showing lots of maths and so forth. And some of it is more like, okay, how do we practically work with AI in our central banks? So the first part of the conference is open, actually.
Anders Arpteg:Uh and the more or less.
Hugi Aegisberg:I think if you're an academic you can get invited. The last day is just for the central banks that are there, to be able to talk a little bit more privately. Yeah. Did you present something? I didn't present. I was there. So I'm actually an innovation manager at the IT department. Right. So I was the only guy from an IT department at this conference. Everyone there is from a policy department or is an economist or is a machine learning expert. A quant. A quant, basically, yes. It's a bunch of quants. And I was the guy who was there as, like, you know, I'm in the IT department. For them, it's not the enemy exactly, but you basically talk about IT as, oh, it's fucking IT again, they're asking for this and that, blah, blah, blah. So that's sort of the guy I was there as. Cool. Yeah. Did you see anything interesting? Any highlights from the conference? It was great, actually. Lots of highlights. So, do you know Mila in Quebec? Yeah. So they had the CEO of Mila there. Yoshua Bengio, right? No, Bengio was the founder. But there was a CEO, a woman, I lost the name, but she was great. And then they had one guy who they basically brought in as the doom and gloom, the AI is coming for you sort of guy. I'm really not good with names. His name is Anton K-something. Anton Korinek, maybe. He basically came in and said that, as an economist, he started researching AI a few years ago. And he's the guy who has sort of the Geoffrey Hinton perspective of, yeah, maybe you can be a plumber for a few years, but then, you know. He came in with that perspective, right? So you had that sort of person, and then you had the CEO of Mila, who was really a lot more like, no, I'm in it for the people.
There's gonna be space for people, like, you know, AI isn't gonna take over. So you had the full spectrum, right? And those were sort of the things framing it. And then you had all these presentations of PhDs doing their papers, and then you had some people who basically came in and said, these are the recipes for success. Yes, exactly, Valerie Pisano, that's her. Yes, she was great. The recipes for success in making AI work for you in your organization. So there were also a few of those talks.
Anders Arpteg:I mean, it's interesting. Canada actually is one of the biggest pioneers of AI, especially for deep learning. Yeah, yeah. So the big deep learning mafia, you know, with Yoshua Bengio and Geoffrey Hinton and also Yann LeCun, are all from Canada, actually. That is interesting. Is he French Canadian? Yeah, he's from France to begin with, but he lived there. So they've all been at universities in Canada.
Hugi Aegisberg:You know, Valerie Pisano told a very interesting story, actually, about the founder, Bengio. When ChatGPT had come out, they basically went into his office and asked him, ChatGPT, what do you think about this? And he said, well, GPT, it's generative pre-trained transformers. We know all these things. What's new, basically? He was very unimpressed. And then they noticed that he's not really coming out of his office for the next few days. And then he comes out and he basically says, okay, I get it now. I get it. I think we're past the turning point now. And the difference was not him seeing a paper or understanding something that he hadn't understood before. It was just using it. Sitting down with it and understanding that we're way past Turing tests. We're in a space where he felt we're past the tipping point. Something just happened. And it happened in a way, she said, that nobody really expected.
Anders Arpteg:It's like boiling the frog, you know. It just slowly gets better and better, and suddenly you're surprised how good it is.
Henrik Göthberg:And the difference, I guess, for all of us is interacting with the AI in a casual way. We had AI all the time in recommender systems, but it was hidden from us. We didn't really interact with it. So I think that, even if you're one of the mafia guys who really knows this stuff, it's a different experience.
Hugi Aegisberg:And I think, you know, interacting with it is one thing, but for somebody who uses it a lot for all sorts of things, it's almost like you start to see information in a different way, because I'm expecting how I'm going to work with a language model to interpret this information. Do you see, is it a little bit out there? Do you understand what I'm saying? When I'm putting information into a language model, I'm actually thinking about the embedding space. I'm thinking, if I set the model off with these premises, it's going to go in a particular direction. And I can almost see it in my head visually as a gradient. And I think when you start seeing the AI model like that, suddenly you start seeing your own thinking like that. Yeah, right. And it starts to reflect upon your own thinking. You know what they say about Trump, right? That he's most affected by the last person he talked to. That's what they say. I'm not sure I agree, but that's what they say, right? Primed by the last conversation he had. That's the only context window he has. Exactly. It's the context window. It just goes on some path, right? Yeah. So I think it's like, yes, interacting with the AI, but when you use it a lot, you almost become like a centaur creature with it, right? You start thinking with it. Cool stuff.
Anders Arpteg:But okay, so the conference. Any specific highlights you'd like to mention? Like, how well does the Swedish central bank actually compare to some other central banks, perhaps?
Hugi Aegisberg:So for reasons that are basically legal, we have not been able to employ a lot of models that require significant GPUs for anything significant. It means that we, along with many other European central banks and institutions, are a little bit behind in some respects. But I would say this: we are actually very well positioned in one aspect, and that is the way in which we treat and work with our data, in terms of our maturity as a data-driven organization. Data lakes, data teams, orchestration, and having the responsibility for data integrity lie with the business and with the teams and with the analysts, rather than lying with IT or some integration team. So that is actually an aspect that a lot of the other central banks are still catching up on.
Henrik Göthberg:So architecturally and organization-wise, how you have built data-driven into not only being a word, but actually organized properly. Yes, here we're not too bad.
Hugi Aegisberg:We're really not too bad at all. And we did an AI assessment just to be able to have some KPIs around this stuff. Compared to other public sector organizations, we are in the top tier when it comes to data. Really? That doesn't mean very much. It's not like we're on a five; we're basically on a good 2.5, but that is pretty good, being on a 2.5. Was it AI Sweden's? Yes, AI Sweden's AI maturity assessment. Yeah, exactly. So, in terms of our maturity as a data-driven organization, we're pretty good. That lays good foundations for what we're about to do.
Henrik Göthberg:Right. Cool. Because sometimes it can also be the opposite: if you haven't organized properly as part of the journey, you get yourself into difficult territory technically, when you don't have the systems organized and architected to be robust enough.
Hugi Aegisberg:That's right. And that is what everyone actually is talking about. A lot of organizations, and this doesn't just go for central banks, we can talk generally, companies, corporates, whatever, get caught in what's called pilot hell. Pilot hell. Oh, yeah. Yeah, the prototype graveyard. You can't put anything in production because everything just works with this one pipeline that cannot really be repeated in any meaningful way. Exactly. Yeah. So that's where people end up. And the reason I don't think we'll end up there is because all of our data works through, for example, Dagster pipelines. Dagster is sort of like Airflow: you have these pipelines that you put up. Most organizations don't have that. I mean, in corporations, yes, but in the public sector, you don't really have that sort of thing.
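The orchestration point above can be sketched in plain Python. This is not the Dagster API, just a stand-in showing what asset-style pipelines buy you: each step declares its upstream dependencies, a runner materializes them in dependency order, and the whole run is repeatable rather than a one-off script. The asset names and data are made up for illustration.

```python
# Plain-Python stand-in for asset-style orchestration (what Dagster or
# Airflow provide): declared dependencies, repeatable dependency-ordered runs.
from graphlib import TopologicalSorter

def raw_payments():
    # Stub for an extract step; in practice this would read from a data lake.
    return [{"bank": "A", "amount": 120.0}, {"bank": "B", "amount": 80.0}]

def daily_total(raw_payments):
    # Downstream step, computed from the upstream asset's output.
    return sum(row["amount"] for row in raw_payments)

# Each asset maps to (function, upstream dependencies).
ASSETS = {
    "raw_payments": (raw_payments, []),
    "daily_total": (daily_total, ["raw_payments"]),
}

def materialize(assets):
    """Run every asset once, in dependency order, and return all results."""
    order = TopologicalSorter({name: deps for name, (_, deps) in assets.items()})
    results = {}
    for name in order.static_order():
        fn, deps = assets[name]
        results[name] = fn(*(results[d] for d in deps))
    return results
```

With the stub data, `materialize(ASSETS)["daily_total"]` comes out to 200.0; the point is that adding a step means declaring it once in the graph, not hand-wiring another pipeline that "cannot be repeated."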
Anders Arpteg:Cool. We're getting into a lot of interesting topics already. But before we learn more about how the Swedish Central Bank actually can make use, and does make use, of AI, let me just formally welcome you here. Hugi, let's see if I can pronounce your last name. Aegisberg. Yes, Aegisberg, perfect. And you're both an innovation manager and also a product owner, right? At the Swedish Central Bank.
Hugi Aegisberg:Yes, that's correct. Product owner for a team that up until now used to be DevSecOps and AI. DevSecOps. DevSecOps and AI. It started as a unified team, and now we've just split it up. So now I'm the AI team product owner.
Henrik Göthberg:Oh yeah.
Hugi Aegisberg:Cool, cool stuff.
Anders Arpteg:But before we go into those kinds of discussions, perhaps you can give a few minutes of background. Who really is Hugi?
Hugi Aegisberg:Right. So, the name is Icelandic; I was born in Iceland originally. Yes. I moved to Sweden when I was a kid, so I've grown up here. Studied at KTH, studied biotechnology. Uh-huh. Very different from what I'm doing. But you know what? Here's an interesting fact. My colleague Annika Tourison, who's an innovation manager at Expank in Israel, studied the exact same thing at the exact same school.
unknown:Yeah.
Hugi Aegisberg:I think it has to be complex systems. Innovation and complex systems sort of work together, and bioinformatics is complex systems.
Henrik Göthberg:Absolutely. Yeah.
Hugi Aegisberg:I worked as a data consultant and software developer. I did everything from messing with big, messy SQL database pipelines to coming in and writing some web app. It was very wide, what I did. And I worked with a research network called Edgeryders for many years. There I worked with what we call collective intelligence software. We did large-scale ethnography and research projects, mapped out semantic networks within the conversation that was had within a research project, and delivered semantic maps where you could do something called semantic social network analysis, basically trying to figure out how concepts fit together and who's talking about them, with their consent, in the research, right? We did that in European Commission projects. So I did that for a while. And then I worked with an organization called Open Collective. Open Collective is, you could call it, a fund management and fundraising platform for open source software and also general civil society projects. Did that for a while as well. Sort of analytics and all sorts of stuff there.
Henrik Göthberg:And what was the transition story into Riksbank, the Central Bank of Sweden?
Hugi Aegisberg:So, in all of these situations, I've usually or always worked for myself. I've never been employed before. So it was literally me at a crossroads, trying to figure out what I'm going to do now. For a while, I was thinking maybe I'll start one of these AI companies. I tried a little bit. We did deep research before Deep Research came out, but we didn't have the compute to be able to do it properly. So in the end, we were just like, yeah, you know, someone's going to beat us to it. And yes, they did. So I decided, well, maybe I'll try being employed at a big organization. And this just came up at the right time. They were looking for an innovation manager who could code, for AI stuff. And I was like, yeah, okay, this sounds really interesting.
Henrik Göthberg:That's interesting, how that is framed. Innovation manager, you can think of that as someone who doesn't know how to code, but works on a more conceptual level. It usually is.
Hugi Aegisberg:It usually is somebody who does post-its. Yeah, exactly.
Henrik Göthberg:And here they were more specific. They wanted someone who could not only understand the innovation they were doing, but actually be quite hands-on with it. Indeed.
Hugi Aegisberg:They had run into the fact that, you know, innovation management, of course, predates having to do innovation with AI. And they were trying to approach innovating in the AI space the way they approach innovating elsewhere in the organization. It sort of worked, but the problem is they were running into pilot hell. You try to do something, and then you're dependent on other people who are technical, and you don't really understand what they're doing. So there's miscommunication, and everything just doesn't really work, right? So they thought, huh, now that we have an opportunity to bring in a new innovation manager, what if we look for someone who can code, and see how it works? Okay, and that's what they did. Interesting.
Anders Arpteg:Cool. And I think most people know what the Swedish central bank is, but still, could you give a quick introduction to what the Swedish central bank really does? Sure.
Hugi Aegisberg:So the Swedish central bank, like all central banks, is responsible for maintaining the value of the currency, the Swedish krona, and is responsible for being a sort of guarantor for the financial system in general. We also have a payment system for the banks, through which large transactions pass.
Anders Arpteg:There's an IT system for that, right?
Hugi Aegisberg:Yeah, it's an IT system. Yes, exactly. And then it is also an organization that sets monetary policy, as most people are probably familiar with: setting the interest rates. Interest rates, yes, the governing interest rates or the policy rates. And in order to set policy rates, you need to do a lot of forecasting. So it's an organization that does a lot of forecasting and a lot of reports. In order to do that, it gathers a lot of information: from the banks, from the public, from interviews with corporations, business leaders, etc. And as a part of this, it monitors financial stability. Right. So I think that would be it. I hope I got it right.
Henrik Göthberg:Maybe it's a stupid question, but is that a generic view of what Riksbank and central banks do, or do you see differences in what the Fed is doing in the US versus what we are doing, and stuff like that?
Hugi Aegisberg:Yeah, so there is a difference in that in Sweden there is a split between Riksbanken and Finansinspektionen. In many countries, these are the same organization, but in Sweden they're split. Yeah, and sometimes you might even have Riksgälden as a part of the central bank. And if you look at countries like Norway, the Norwegian central bank is actually split into Oljefonden and the bank itself. So Oljefonden is a part of the Norwegian central bank, and they own about one percent of all the stocks in all companies in the world. So sometimes the central bank does things you wouldn't expect it to do.
Henrik Göthberg:But I think the interesting distinction is the separation in Sweden between the financial supervisory authority and the normal Riksbank duties, which is not always the case.
Anders Arpteg:Yeah. Correct. And just to get some understanding, then, for an organization like the Swedish Central Bank, what are the potentially highest needs to get more data and AI driven? You said forecasting, of course, and I guess monitoring how the current financial systems are doing. But I leave it to you.
Hugi Aegisberg:So those are the three things we're focusing on, right? We actually made a plan for what we want to do with AI in the coming two years. Yeah. And one of them is doing even better analysis for monetary policy. So that's one: gathering information, analyzing information. The second one is monitoring the financial stability of the economic system and becoming better at that through AI and machine learning methods. Okay. The third one is to further increase the stability and the security of the payment systems.
Anders Arpteg:So so not your own payment system, but in general, or is it mainly your uh I mean that's the one that sits within our walls.
Hugi Aegisberg:Okay, okay.
Henrik Göthberg:And in what way would AI and data improve payment system stability? What's the angle there? Just to go one level deeper.
Hugi Aegisberg:Yeah, I'm going to be a little bit general here, and I'm going to speak about what other institutions are doing. I think this is a point at which I have to separate what Riksbanken is doing, where I can only talk about certain things, from what I can talk about in general terms. So this will be in general terms.
Henrik Göthberg:Okay.
Hugi Aegisberg:But of course, one thing that you can definitely do, which is an indirect improvement of security, is just to bring AI into your cybersecurity. Of course. So that is one way in which you could use it. I mean, a system like that has a lot of logs, and that is something you can work with. The Bank of England has run a project called Project Hertha with the Bank for International Settlements Innovation Hub in London. Project Hertha aimed to detect financial fraud and crime within the payment system. And the way they would do that is basically that, because each and every bank only has their own view of what payments are passing through, the central bank actually has a much broader view of the economic system. So if somebody is doing something where they're utilizing multiple different banks, then sending everything to one bank, and then taking it out from that bank into a crypto wallet and sending it off somewhere, the central bank might be in a position to detect that.
Henrik Göthberg:So some patterns can sometimes not be understood and seen until you take one step up and see the holistic view of the patterns. That's correct.
Hugi Aegisberg:And in this case, Project Hertha, and this has been published, they improved detection by 13%. But when you look at the specific subset of new types of financial crime, patterns that you haven't seen before, it actually improved by 26% in this project. And perhaps what that has to do with is that, I mean, this is machine learning models more than it is generative AI, of course. But it seemed to have some good effects. I think they managed to capture people who were doing exactly what I was saying: siphoning money from here and here and here, and then it went into a crypto account. And then they could contact that bank and say, hey, we noticed this thing, you might want to look into it.
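The cross-bank pattern described here can be made concrete with a toy sketch. This is not Project Hertha's actual method, just a minimal "fan-in" heuristic showing why the system-wide view matters: flag accounts receiving funds from unusually many distinct banks, something no single bank can see on its own. The transaction fields and the threshold are assumptions for the sketch.

```python
# Toy system-level fraud heuristic: flag receiving accounts that are
# credited from at least `min_source_banks` distinct banks. Only the
# central operator, seeing all banks' flows at once, can compute this.
from collections import defaultdict

def flag_fan_in(transactions, min_source_banks=3):
    """Return receiving accounts credited from >= min_source_banks banks."""
    sources = defaultdict(set)
    for tx in transactions:
        sources[tx["to_account"]].add(tx["from_bank"])
    return {acct for acct, banks in sources.items()
            if len(banks) >= min_source_banks}

txs = [
    {"from_bank": "Bank A", "to_account": "X"},
    {"from_bank": "Bank B", "to_account": "X"},
    {"from_bank": "Bank C", "to_account": "X"},  # X fans in from three banks
    {"from_bank": "Bank A", "to_account": "Y"},  # Y looks normal
]
suspicious = flag_fan_in(txs)  # {"X"}
```

From each individual bank's vantage point, account X received one unremarkable credit; only the aggregated view crosses the threshold.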
Henrik Göthberg:I think it's good how you said it now: it's machine learning and not generative AI. In this particular podcast, the AI After Work podcast, I think we have a much broader view of what AI is than pure generative AI. And if you look at when we started, there were more machine learning cases than generative AI cases. So, from that point of view, I think it's very important not to discard the machine learning cases that we can talk about in this context.
Anders Arpteg:Agreed. I didn't hear you mention it, but perhaps you did: I guess also forecasting of interest rates. Yeah, no, of course, of course, and everything else.
Hugi Aegisberg:That would be the first thing I said, going into the monetary policy. Ah, okay. Yeah, so you do that in order to set monetary policy.
Henrik Göthberg:Yeah, but I think you said it well. Let me summarize, if I got it right. On the one hand, doing sharper policy setting, but then equally important, being as fast and adaptive in understanding how that monetary policy plays out, in order to make micro-adjustments much faster.
Hugi Aegisberg:I believe that is the way they would frame it. But I'm not going to put words into the mouths of the people doing these analyses. That's a general thing about how we work at Riksbank, not speaking on behalf of others. But I think they would agree with that, yes.
Anders Arpteg:Yeah. Perhaps you can give some highlights, if you can share. I understand you can't speak about everything you're doing, of course. But is there something concrete with the new AI team that you're driving and, as an innovation manager, working with right now? That could be something you could talk about.
Hugi Aegisberg:Absolutely. So we are now starting an AI team, splitting off from this DevSecOps team. And really the reason we're doing it now is that the time is right. We've set up an AI roadmap. Okay. And in this roadmap, we say that we'll have a hybrid strategy that will allow us to use third-party cloud solutions to bring in AI capacity, but, of note, we also need on-prem capacity to be able to do most of these analyses. That is now being set up as we speak. So we will have that capacity on hand from the beginning of next year.
Anders Arpteg:So it's a lot of infrastructure work as well to set that up? Yes, okay.
Hugi Aegisberg:Lots of infrastructure work. And on that note, a few years ago, I would have been very hesitant about starting that sort of project within a reasonably small organization of 500 people. What I feel has happened since then is that the companies providing these platforms have become better at building the infrastructure, which means that you can deploy a model without having to worry about a lot of the stuff that used to be quite finicky to set up. So if you're working with, for example, NVIDIA, you can use the NIMs from their repository, and it just sort of works, to be honest. You can use blueprints, you can use prior work, and the infrastructure around that has become a lot more mature. And this is why I'd say this is possible today in a way that it maybe wasn't a few years ago.
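Part of why a deployed NIM "just sort of works" is that NIM containers expose an OpenAI-compatible HTTP API. A minimal sketch of what a client call looks like, where the base URL, port, and model name are assumptions for illustration, not a specific deployment:

```python
# Build a request for a locally deployed, OpenAI-compatible model endpoint
# (the API shape NIM containers expose). Endpoint and model are assumed.
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build a request for an OpenAI-compatible /v1/chat/completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request(
    "http://localhost:8000",        # assumed local endpoint
    "meta/llama-3.1-8b-instruct",   # assumed model name
    "Summarize the main drivers of this week's payment-flow anomalies.",
)
# urllib.request.urlopen(req) would return the completion once a server is up.
```

Because the API surface is the standard chat-completions shape, existing tooling and client libraries work against an on-prem deployment without modification.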
Anders Arpteg:So, AI is of course a very new technology, and compared to traditional software engineering, the tooling is rather poor, but it's getting more and more mature, right?
Henrik Göthberg:But this is the point. We don't need to name names, but we have other friends working in the traditional software space, the hardware space I'm talking about, where they start building their machines, and from the machines they get into this orchestration software. So we have NVIDIA and the NVIDIA stack with CUDA, but you have other options now, for all the cases where you have a Dell machine or an HP machine and you want to get closer to the features that you have with the large cloud providers, but with the limitation that it has to be on-prem. And we see a lot of work being done in that space. So this is what you're now reaping the benefits of.
Hugi Aegisberg:Absolutely. And I think something else is just the ability to portion up your GPU capacity in a way that makes sense for your organization. I mean, we've come a long way from Slurm, right? That's not what we're doing anymore. Of course, some people are unfortunately still using that.
Henrik Göthberg:Some people still prefer it.
Hugi Aegisberg:Well, yeah, maybe that's the same. I mean, you know, everyone has their own fetish.
Henrik Göthberg:I mean, like, Robert is a Slurmer. Nah, I'm joking. It's not Slurm.
Hugi Aegisberg:Slurmer. It sounds gross, doesn't it? Yeah, Slurm. Yeah.
Anders Arpteg:So portion up GPU somehow.
Hugi Aegisberg:Well, okay. So after Slurm, you started having these semi-hardware splitting MIG setups, Multi-Instance GPU, where you basically portion up the GPU and say, okay, this model lives over here and that model lives over there. And yeah, that gets you pretty far. But in most organizations, where you're not worried about data leakage between compartments within the organization, that's really overkill. And what I think has really enabled a lot of great use now is timeshare GPU use.
Anders Arpteg:So the same GPU for many different things. And okay.
Hugi Aegisberg:You basically run containerization and virtualization on top of it. You lose very little. I mean, NVIDIA bought this company called Run:ai to do basically this. And if you're trying to sell inference to a client, this doesn't work, right? Because the trade-off with timeshare GPUs is that at eight o'clock in the morning it takes this amount of time to get a response, and at three in the afternoon it takes that amount of time. That doesn't work if you're selling inference. But if you're doing analytical work within an organization, who cares if your response time is 10 seconds one time and one second the next? It really doesn't matter. So in that situation, having timeshare really makes sense. And it means that you can oversubscribe your GPUs, like, times eight compared to dedicated allocation.
Henrik Göthberg:Which is all about the cost-benefit, or the total cost of ownership, of trying to do stuff on-prem, where you need to get the utilization up.
Hugi Aegisberg:Exactly. And the problem when you're doing it in these very convoluted ways, where you're saying, well, you get this part of the GPU, and then you need to recombine it, and blah, blah, is that you can't give people what they really need in order to be creative, which is a sandbox. They need to be able to, one day, have a little bit of capacity to accelerate some numerical analysis they're doing. The next day they might want to run a pretty big model, but not for very long; they want to try it out and see if it might work for their use case. This is what you need to be able to do. You need to give people that ability and that creativity within the hardware that you use. Flexibility, right? Exactly.
Anders Arpteg:Flexibility keyword here. Yes, I think you also mentioned a term, and correct me if I'm wrong, but I I think you had a term uh buy to build.
Hugi Aegisberg:Yeah, that's my my uh tagline.
Anders Arpteg:Can you elaborate? What does that mean?
Hugi Aegisberg:Well, people kept asking me all the time, when I was starting to recommend what I thought we should do: should we buy or build? And I started getting tired of this dichotomy. Because buy, what does that mean? What are they expecting? We're gonna buy, like, Microsoft 365 and then everything's just gonna work? We're just gonna have AI now? That's not the way it is.
Henrik Göthberg:Exactly.
Hugi Aegisberg:Like you can't just buy AI and then say, like, oh, we have AI now. You know, some people think so.
Henrik Göthberg:Some people this is such an important topic. And buy to build, I'm 100% into it and come from another angle. So please set this straight now.
Hugi Aegisberg:But I mean, you see a lot of this. I'm not gonna name any names, but you see even companies selling AI laptops. Like, what's an AI laptop? What is it supposed to do? How is it different from any other laptop? I don't get it. Right. Okay, rant over. Buy to build, to me, is about buying the right level of hardened, well-maintained, well-designed components that allow you to put a system together any way you need, with flexibility. So I think all of the hardware providers are working on their own version of this, like NVIDIA AI Enterprise. AMD is working with Silo AI from Finland; they're trying to build a similar thing for the AMD infrastructure. And this is what everyone's getting into, because you understand that once you have containerized models and containerized parts of an infrastructure, you can make that work to your advantage. But, and here's the but: if you're relegated to just downloading whatever's on Hugging Face, your life is not fun, because your security team is sitting next to you saying, stop downloading shit off of Hugging Face, man. We need to have time to actually vet these things, right? So if you have some sort of provider that is vetting the most important things for you and hardening them for you, so you can take it from there, you've already won so much. Your security team says, okay, I trust the NVIDIA security team, or I trust the AMD security team, I think they're good. Let's maybe put it through our own pipelines, but it's gonna come back green, it's gonna be fine, right?
Henrik Göthberg:But to me, this whole story goes all the way back to the very traditional IT and business application era. We had make-or-buy, or buy-or-build, very deeply ingrained in procurement, and procurement is now trying to push that into the data and AI space, incorrectly. It's a little bit like they don't understand that whatever you're gonna do with data and AI will require some build. It's about which abstraction level we want to do this at. What is the hardened level that is relevant for our company?
Hugi Aegisberg:I'm going to take a moment here to just praise and heap love upon the procurement teams with which I've worked at the Riksbank. They're great. And many are. I agree. But I'm just gonna say they are. They're great. And the reason why they're great is that they do get it. They don't have the technical capacity to understand what Kubernetes is, or why we can do this and that. But when I explain it to them, they go, yeah, that makes sense. Okay, cool. They're not trying to push anything on me in that respect at all. And I would say that, coming to what has happened: organizations like the Cloud Native Computing Foundation have done a lot of the groundwork, setting up all these standards that we can use and point to and say, hey, no, I'm not just making something up, it's this thing over here. Another organization that has done so much work is OWASP, the security standards organization. Maybe someone can bring it up on the screen there. OWASP. There you go.
Henrik Göthberg:The OWASP Foundation, and they are focusing on security.
Hugi Aegisberg:Yeah. It doesn't say what it stands for. Anyway, they're not hardening resources; they are compiling lists of known vulnerabilities and the ways you need to think about security when you're employing X technology. So if you use an organization like this and take their standards and their lists, here are the things you should think about if you're employing LLMs in production in your organization, then you already have the answer when the risk team asks, hey, have you thought about what the risks of LLMs are? Yeah, we have, actually. This is the list, and here's how we've gone through it.
Henrik Göthberg:This is a good resource.
Hugi Aegisberg:It's a fantastic resource.
Anders Arpteg:I really, really recommend it.
Hugi Aegisberg:OWASP is fantastic.
Anders Arpteg:I wish they helped to be compliant with the EU AI Act as well, but perhaps not yet, right?
Hugi Aegisberg:Um well, maybe not, but I don't think it would be outside of their scope.
Anders Arpteg:No, I can see them doing exactly the same thing. There is a big initiative in Europe, called harmonised standards, to try to do that. But I think they should really reuse and collaborate with these kinds of organizations to get there quickly.
Henrik Göthberg:Most importantly, OWASP has a pattern for how to get this done in open source that works, compared to the EU, which has no coders and no knowledge of how to deal with these topics, trying to get into harmonised standards. That's right. They can get into deep water.
Hugi Aegisberg:Well, the reason people get into deep water is that they are talking in hypotheticals. Exactly. So they're like, well, what if it's like this? Well, it's not like that. So why are we talking about something that is not the way it is? Yeah.
Anders Arpteg:But just to go back to buy to build. For one, I guess you're also saying that building from open source only, or from scratch, is not a good idea in most cases.
Hugi Aegisberg:I think you can build from open source, but I think that it's probably a good idea to buy hardened open source whenever you can.
Henrik Göthberg:Yeah.
Hugi Aegisberg:Um, because it just saves you a headache. Now, you might not always be able to afford that, and that's fine; then you need to either take that risk or harden it yourself. But whenever you can, buying hardened open source is a good idea.
Anders Arpteg:And I guess you shouldn't underestimate the need for support, or actually being able to get help when necessary. That's right. Right. Yeah.
Hugi Aegisberg:And somebody updating and you know, keeping the vulnerabilities at bay and all that.
Henrik Göthberg:And just for the record, I think this is equally important even if you go for a cloud provider, which is supposedly all hardened. You buy something from one of the big vendors, and you still need to build: you need to put your own architectural patterns in place for how you want to use their tool. That's right. So this is the problem, right? Oh, we bought this blue box, let's call it Snowflake, Databricks, AWS or Azure. Now we're done. No, you haven't even started understanding what you need to build now, and this becomes patterns. And then it's a little bit like, oh, we all have Snowflake now, so now we are standardized. No, you're not. You're working in fundamentally different ways.
Hugi Aegisberg:Yeah, you were standardized last week when Snowflake looked the way you thought it looked, and now it looks different, and now you're not standardized.
Henrik Göthberg:And now you have three hundred guys who've been starting up their own Snowflake accounts, all of them with different folder structures. I mean, you're done.
Hugi Aegisberg:I've run into this situation quite a few times, and this is actually one of the main reasons for saying: look, even if we had all the legal problems and all the data security issues and all the geopolitics of going into the cloud solved, I would still say, okay, when I do it, I want to, again, buy to build. Because if I buy something that already has a lot of preferences baked into it, things change. And I actually just ran into this situation. I'm not going to go too deeply into what happened, but we decided to use something widely within the organization, a famous, very well-known cloud product. Right. It took a long time to pass this through all sorts of compliance and get it done. Right. It's an AI tool. Now we're just about to launch it, and we're just about to have this seminar. And before the seminar, somebody tells us, hey, you can do this thing with it now. And we're like, sorry, what? And it turns out there are all these agent capabilities in it. And I'm like, okay, we need to turn that off, because we haven't put that through compliance. You can't. You can't turn the agent thing off. So it's just starting to do things on its own. And we're like, there's no way in hell we can actually deploy this now. We have to roll it back. So this organization basically lost a champion of this tool by just introducing things as if they were a startup trying to get new adopters, like Cursor.
unknown:Yeah.
Hugi Aegisberg:And it doesn't really work, you know.
Henrik Göthberg:But can I ask, and I also want to get your opinion on this, Anders: is there ever anything other than buy to build? Isn't everything buy to build?
Anders Arpteg:Well, there are cases where you can use open source, but you never start from pure scratch. I mean, you always have some libraries to start with. So, of course, never from scratch. But with open source, yes, you can have a use case where you go all the way to build. Theoretically you could. It's based on something; it's just not necessarily a commercial product. Yeah, exactly. Okay.
Henrik Göthberg:And then I flip it: can you ever get away with only buy? No. No. So when I say there's always buy to build, I mean you can never only do buy. Never ever.
Hugi Aegisberg:Well, in our space, yeah, okay. So, well, in our space, let's see what our space is. For us, yes, sure. But say you're a retailer: you can definitely buy the whole thing from SAP. I mean, you have to mess around inside of SAP, but that's my point.
Henrik Göthberg:You even need to configure SAP for months, my friend. And it's an important thing.
Hugi Aegisberg:Yeah, I mean, that that that is how people make a lot of money, configure.
Henrik Göthberg:And then in the end: oh, we bought this stuff. Well, you haven't filled it with data yet. And how are you gonna fill it with data? And what are you gonna orchestrate?
Anders Arpteg:But there are cases where you can just buy. If you take the game industry, you go and buy Candy Crush. I mean, you don't need to build on that, right? Yeah, yeah, okay, okay.
Henrik Göthberg:So I'm of course completely skewed to enterprise, completely skewed to this. So if I say anything that doesn't hold there, of course you're right: you have B2C, you have apps. Yes. But from the point of view where we're gonna string stuff together, enterprise grade, I think we need to educate procurement, in my opinion.
Anders Arpteg:Well, you usually need to integrate with the systems you do have. It's very rare that you can run something completely independently, without some kind of integration work.
Henrik Göthberg:But I find that traditional IT, which grew up application-centric, business-application-centric, did the buy of the application and didn't really worry about how to integrate it. And that whole mindset becomes very, very tricky when you select your path in data and AI.
Hugi Aegisberg:Yeah, I mean, somebody's going to try very hard to sell you: I'll sell you an AI assistant that I just hook onto your SharePoint, and we're done.
Speaker 1:Yeah, exactly.
Hugi Aegisberg:I mean, yeah, yeah, you can try to do that. I mean, have fun.
unknown:Yeah.
Anders Arpteg:Important topic. I think it's a well-phrased term, and I think more people should use it.
Henrik Göthberg:I'm gonna use buy to build all the time. I'm gonna use that from now on.
Anders Arpteg:And perhaps on that topic as well: I think you actually, at Sveriges Riksbank, the central bank, worked a bit with a public sector digital assistant at some point, right?
Hugi Aegisberg:Oh, we are part of the Svia project at AI Sweden. Yes, absolutely. Now, that doesn't mean that we are using Svia on-prem. It means that we think that what they're doing is important, and we're already a member of AI Sweden. Okay, so we just decided to join this part of AI Sweden too. Right. This gives us access to Svia, so we can evaluate it and see if it fits for us. But yeah, we mainly joined because we're very interested in what they're doing. And also, they have an ambition to annotate a lot of Swedish public sector language, right? This could potentially lead to some pretty good small and mid-sized language models in the future. And I'm all for that.
Henrik Göthberg:Yeah, yeah.
Anders Arpteg:And perhaps people need to have a quick introduction to what the Svia assistant really is. Sure.
Hugi Aegisberg:I mean, so Svia. Yeah, I think you should probably have Patamic on the podcast to talk about that. It's an AI assistant for the public sector; that's what they're trying to build. Yeah.
Anders Arpteg:But you can also then, I guess, integrate the data you have at each organization and then ask questions about it. Yeah.
Hugi Aegisberg:Yeah, basically. But again, we're evaluating a few different options. So we're going to have something on-prem, some sort of AI assistant for the Riksbank. We don't yet know which one it's going to be; we're currently looking into it.
Anders Arpteg:And I see you don't want to go too much into detail here, but I guess it will be some internal use case to start with, at least, right?
Hugi Aegisberg:Oh, there will be a whole bunch of internal use cases, ranging from the really low-hanging fruit that are actually going to yield a lot of benefit, which we're just gonna get out of the way first. One of those is transcription, right? Oh, yeah. You're doing interviews, or you're having meetings, and very sensitive things can be said in those, so you can't send them to a cloud. Transcription can save you literally hundreds, if not thousands, of hours per year.
Henrik Göthberg:I think the first basic use case at Regeringskansliet, where you work with sakkunniga who need to delve into a lot of reports and synthesize stuff, is one where you can have a framework, but in the end you get to a very specific way of feeding your assistant for a very specific topic. That's right. That's a fairly interesting repetitive pattern to look at.
Hugi Aegisberg:I think you're also just in situations where you have tons and tons of documents, and you're basically doing repetitive work in the line of the organization: the same sorts of reports are being written every year. Having a template to start from that's informed by previous work is great. Another thing that I think we'll definitely do is have MCP agents that actually interact with the data lake. So if you want to make some new analysis, you'll be able to see whether the information is already in the data lake. Because today, data lakes get filled up with redundancy real quick, since people don't know what's in there. That's a real good use case.
Anders Arpteg:And I guess in the Riksbank's case, you have a lot of document-based information, right? Yeah, yeah. Could that potentially be found in a data lake, then?
Hugi Aegisberg:Yeah, eventually. Maybe. I think most of the data lake is statistics, actually, not text.
Henrik Göthberg:But I think the whole thing with Regeringskansliet is targeted at the fundamental work of going through tons of reports and documents and organizing those flows more efficiently, even to the point where they can do it quite safely, in the sense of having one person who is the author and responsible sakkunnig, and then giving that person a better workflow without even needing to fix new accountabilities.
Hugi Aegisberg:Yeah, absolutely. But I'm gonna say this; I'm gonna turn it on its head. In the organization, it is my job to build the platform and to inspire use cases, to show what can be done. But right from the start, it's a hub-and-spoke model.
Henrik Göthberg:Okay.
Hugi Aegisberg:The teams and the capacity and the competence need to be built up out in the organization. Because what we're not going to build is some sort of central AI consultancy agency that just goes and does AI everywhere. That's not going to work. This is a general-purpose technology. At some point you learned to use Word and Excel and all these things. Welcome to the future: you now need to learn to use this. And for some of you, that is going to mean the same journey you've taken over the last few years moving from Excel to notebooks. Our economists have had to start learning to write Python and to do things in notebooks in an environment. I think the same thing is about to happen in the AI space. And when you're doing things that are a little bit more advanced than what you can do in a notebook, it might mean that you actually need a DevOps person who can help you containerize things and build this in a way that makes sense. So I definitely think that there will be that sort of decentralized capacity out in the organization. There's good reason to think it's going to be like that, because we already have people out in the organization who are writing code.
Henrik Göthberg:But I think so too. I think one of the main headaches, or question marks, is: if that's the end game, where do we start if we don't have the capacity or experience to manage cross-disciplinary teams? Yeah, because you're essentially talking about a product team in the end, where someone has the domain expertise and the workflow in mind, that's right, but they need people around them who can actually orchestrate an AI compound system, even if it's very small, right?
Hugi Aegisberg:You need interdisciplinary.
Henrik Göthberg:Yeah. But how do you but what has been your thinking of where to start and how to start going in this direction? Because where the end game is going, I'm with you.
Hugi Aegisberg:Yeah.
Henrik Göthberg:But what what do you think is the first baby steps in this direction that works?
Hugi Aegisberg:So the really good thing is that we can, to a certain extent, repeat what has already been done with what's called the analysis program at the Riksbank. That was the move towards having a data lake, towards having the data teams out in the organization being responsible for their own orchestration, for cleaning up their own data, et cetera. Starting to write code, writing notebooks, dashboards, etc. Right. So we know how to do that. And doing it again now for AI is really not that different; it's just a different skill set. So I'm ready to repeat it. Yeah.
Henrik Göthberg:What about governance, central governance? That becomes the question mark when you go distributed. That's the other end of the same question.
Hugi Aegisberg:Sure. We spend a lot of time thinking about these issues, that's what I'll say. There's a lot of thought that goes into understanding how this is going to be governed. In the data space, there is a sort of established way to do it: you have a DSO, and then you have information owners and that sort of thing. When you're doing it for an AI organization, speaking more generally, you probably have a team that allocates resources. So they'll say, okay, you get a workspace or a work group in our GPU cluster that allows you this and that, and you have this priority. That's the job of the AI team. The central team, yeah. Exactly. The central AI team would also say, yeah, you can use this list of packages. If you need another container, please submit it to us and we'll put it through our security scans. If it's green, you'll probably be able to use it, maybe even automatically; maybe it doesn't even have to go through vetting. If it's yellow, talk to the people on the team. So you basically set that up so that as much as possible is automated and standardized, and then have people deal with the exceptions. I think that's the way to go.
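The green/yellow/red vetting flow described here could be sketched roughly like this. All names and severity conventions are illustrative, not the Riksbank's actual pipeline: clean or low-severity scans auto-approve, a middle band goes to humans, and high-severity findings are denied.

```python
# Hypothetical sketch of "automate the standard case, humans handle
# the exceptions" for container vetting. Findings are strings shaped
# like "SEVERITY:description", a made-up convention for this example.

def vet_container(scan_findings: list[str]) -> str:
    """Map security-scan findings to a decision:
    CRITICAL/HIGH -> 'deny', MEDIUM -> 'manual-review',
    clean or LOW-only -> 'approve' automatically."""
    severities = {finding.split(":")[0] for finding in scan_findings}
    if severities & {"CRITICAL", "HIGH"}:
        return "deny"
    if "MEDIUM" in severities:
        return "manual-review"
    return "approve"

assert vet_container([]) == "approve"                               # green
assert vet_container(["LOW:outdated-docs"]) == "approve"            # still green
assert vet_container(["MEDIUM:weak-tls-default"]) == "manual-review"  # yellow
assert vet_container(["HIGH:CVE-2024-0001"]) == "deny"              # red
```

The design point is that only the `manual-review` branch costs human time; everything else is standardized and automatic.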
Henrik Göthberg:And you're essentially talking about the platform becoming an enablement team for self-service.
Hugi Aegisberg:That's right. You build it, you run it.
Henrik Göthberg:So we are providing you with the scaffolding, with the hardened things, which you then have as a toolbox for your own DevOps.
Hugi Aegisberg:That's exactly right.
Henrik Göthberg:I agree.
Anders Arpteg:Yeah. Self-service platforms and training for the rest so they can do it themselves as much as possible.
Hugi Aegisberg:That's right. And then also outreach and education about who do you need to hire in order to do this. Because if you have one department somewhere and they're not able to get things off the ground because they don't have the competence in-house, it's our responsibility to actually explain to them, well, okay, so these are the sort of people you need to look for in order to get this done.
Anders Arpteg:Switching to another topic, potentially. You mentioned MCP agents, and the term agent specifically, and there are a lot of different abuses of that term, I would say. But in some sense, if you take MCP, at least in the sense of the Model Context Protocol, being able to run actions somewhere else using LLMs in an easier way, it at least enables us to go from just being an information or knowledge manager to something that can actually perform some kind of task, a kind of agent. Is that what you think of when you say agent as well? Or what do you think?
Hugi Aegisberg:Well, no. I'd say MCP servers are one thing. When I think agents, I would say they're taking some sort of autonomous steps, actions that go beyond selecting between this and that, into an actually open-ended space. That's when I'd say it's an agent; otherwise it's just software. Okay. Agreed. I think that MCP servers, to me, are sort of a successor to just naive RAG. The way we used to think about RAG, you just embed and index a bunch of chunks, and then you naively, with some bells and whistles, search through them, bring some stuff up, go to the source. MCP servers are a little bit better than that, if you build them right, because they can actually go through things topically. And you can build some RAG into them as well: they might have a tool call that does some RAG against this place or that place, doing a little bit of embedding search.
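A minimal sketch of that idea in plain Python (not the actual MCP SDK; every name here is made up): a server exposes named tools, the model decides which tool to call, and one of the tools can wrap a bit of RAG-style search internally.

```python
# Toy tool-server in the spirit of MCP: tools are registered under
# names, and the model's tool call is dispatched by name. The "RAG"
# inside search_reports is a stand-in for real embedding search.

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("search_reports")
def search_reports(query: str) -> str:
    # In practice this would run embedding search over indexed chunks;
    # here a tiny keyword lookup stands in for it.
    corpus = {"inflation": "Q3 inflation report", "liquidity": "Payments memo"}
    hits = [doc for key, doc in corpus.items() if key in query.lower()]
    return "; ".join(hits) or "no matches"

def handle_call(name: str, argument: str) -> str:
    """What the server does when the model issues a tool call."""
    return TOOLS[name](argument)

assert handle_call("search_reports", "latest inflation figures") == "Q3 inflation report"
```

The hard part, as noted below, is not writing one of these but getting many of them to cooperate in an ecosystem.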
Henrik Göthberg:But you're getting a much more sophisticated understanding of the scaffolding and the right techniques used for the right purpose, not one size fits all.
Hugi Aegisberg:Yeah, that's right. And also, MCP servers are not very hard to build. I've built a bunch of them myself, just in my free time. So that's great, actually; it makes it very useful as a technique.
Henrik Göthberg:Do you use the word scaffolding for this stuff? Or you know, how do you use these terms? Because uh it's all over the place.
Hugi Aegisberg:Yeah, I don't know. Scaffolding, why not?
Anders Arpteg:And the logic in you know how you actually do connect them all into some kind of architecture and landscape.
Hugi Aegisberg:I mean, that's the hard part. Building an MCP server is easy, but how do you get them to work together in an ecosystem? That's the hard part.
Anders Arpteg:But I guess we also agree that MCP is really good because it is a standard, at least. Otherwise it would be horrible to try to interact with all kinds of solutions everywhere, so just having a simple standard is super useful. But then, okay, we can also see that, of course, just having great AI for knowledge management is useful: you want to summarize a big document, you want to write some email or whatnot. That of course is useful. But we can potentially see that proper agents, that even choose to take an action themselves, autonomously, can do other things than that. Oh, for sure. Are there any use cases you can think of at the Swedish central bank where agents could potentially add new value?
Hugi Aegisberg:Absolutely. And I think, if we are a little bit conservative, maybe with humans in the loop, which is probably a good start. Probably a good start. Agents would be great for everything that has to do with internal, or possibly even external, service stuff. Basically vetting questions, vetting issues, putting them down this path or that path: is this about this, or is this about that? And then you have a line of if and when and so forth. I think you can do that. And probably, every time an organization like ours uses agents for the foreseeable future, I'm not sure I would call them agents according to my own definition, because of how careful I think we're going to be. But the way they're going to work, perhaps, is basically by doing a little bit of the pre-analysis for you. I think that's probably where you might go. Yeah.
Anders Arpteg:So look at how Cursor works for software engineering purposes. It has a lot of agents in it, and it can take some autonomous actions: I need to look at that file, and now I need to find another file that has this kind of information in it. That it can do autonomously, choosing to find more information, for example. And that's safe; you don't really need to review that as a human, at least for that kind of information-seeking purpose. And a task like running a test is probably safe too, right? So you can think of a number of actions that it can choose to take autonomously, that you can allow it to do without review, but then there are other tasks, right? That's what we're doing.
Hugi Aegisberg:Well, okay, so let's just get technical there. Is it safe for it to run the terminal command cat? It depends what's on your hard drive, right? If there's something on your hard drive that can't go into that cloud, it's not safe. Right. And when you start to get into that, what can it then do? So I would say that depending on the environment, and depending on the consequences of your information leaking, you make different assessments.
Anders Arpteg:Right. Cool. But we can then potentially see a future, I guess at the Swedish central bank as well, with more agentic kinds of workflows, where at least some of the actions are potentially executed autonomously, without a human in the loop, while others definitely need to be reviewed and approved by a human.
Hugi Aegisberg:Yeah, so I'll say this, and again, I'll speak more generally rather than about the Riksbank. Central banks are looking into agents in various ways. I saw one talk which was about researching how it could potentially look if agents were cash managers.
Anders Arpteg:Cash, okay, cash, like money cash. I was thinking of something else.
Hugi Aegisberg:No, so no, it's good, yeah. Right, yeah. No, so so so at banks you have um uh people who are basically monitoring transactions that are coming in and out.
Henrik Göthberg:Yeah.
Hugi Aegisberg:Not at the central bank, but at the banks that use the central bank as their intermediary for transactions. You need to manage your liquidity. This is actually a bit of a prisoner's dilemma problem, because you have transactions coming in and transactions going out, and you want to maintain your liquidity at a particular equilibrium. If you send things out too quickly, you lose equilibrium, you lose liquidity, and you have to actually borrow money and lose money, or you need to post collateral. If you do it too slowly, you gridlock the system. Right. So this is the sort of problem where they were looking into: well, what if agents were doing this? Because there's an argument in this research that maybe it's more auditable than the humans' decisions in this case. But I think my bet is that agentic workflows are going to be introduced in steps that basically have to do with, you know, analyzing some information that came in, putting it into a data lake, and doing something else with it, in a way where I basically just think they're going to be more like switches in these highly regulated organizations than they are like agents.
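As a toy illustration of the liquidity trade-off (my own construction, not the research discussed here), a rule-based "cash manager" might release queued outgoing payments only while liquidity stays above a floor, trading off gridlock against costly borrowing or posting collateral.

```python
# Toy liquidity policy: release queued payments in order until the next
# one would push liquidity below a floor. Sending too eagerly would mean
# borrowing or posting collateral; holding everything gridlocks the system.

def release_payments(liquidity: float, queue: list[float],
                     floor: float) -> tuple[float, list[float]]:
    """Return (remaining liquidity, payments held back)."""
    held = []
    for amount in queue:
        if liquidity - amount >= floor:
            liquidity -= amount   # send it
        else:
            held.append(amount)   # hold it back (gridlock risk)
    return liquidity, held

liq, held = release_payments(liquidity=100.0, queue=[30.0, 50.0, 40.0], floor=20.0)
assert liq == 20.0        # 30 and 50 went out; 40 would breach the floor
assert held == [40.0]
```

One argument for encoding this in software is exactly the auditability point: the rule and its every decision are inspectable after the fact.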
Anders Arpteg:But think now about, say, detecting criminal fraud or activity. I'm just making stuff up now, in general, not the Swedish central bank specifically. But you could see that, when monitoring transactions, some are fine, and others are perhaps definitely fraud and could just be flagged as such.
Henrik Göthberg:Yeah.
Anders Arpteg:And others need to be reviewed. And and you could see some kind of grayscale there.
Hugi Aegisberg:So there, I think, if we put the word agent aside and say, okay, we're using software to basically flag something as fraud or not, and then take an automatic action if it's flagged as such. Yeah, absolutely. I think that sort of application will be used anywhere in society, really.
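The grayscale Anders describes is essentially threshold-based triage: act autonomously only at the confident extremes and route the middle band to a human. A hypothetical sketch, with made-up thresholds:

```python
# Illustrative triage of a fraud score in [0, 1]: automatic action
# only where the model is very confident, humans handle the gray zone.

def triage(fraud_score: float) -> str:
    """Map a fraud score to an action."""
    if fraud_score >= 0.95:
        return "block"         # almost certainly fraud: act autonomously
    if fraud_score <= 0.05:
        return "approve"       # clearly fine: act autonomously
    return "human-review"      # the gray zone in between

assert triage(0.99) == "block"
assert triage(0.50) == "human-review"
assert triage(0.01) == "approve"
```

Widening or narrowing the gray band is how you tune human workload against autonomy.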
Henrik Göthberg:So, what I like with this one is that you're being careful about blowing up the agentic, or AI agent, definition here. What you're framing and hinting at, I think, is an emergent stance in how we can actuate more and more. And contrasting that, or benchmarking it against Regeringskansliet, I think it is similar, because they are very careful to call it an assistant, and in the core workflow the sakkunnig, the civil servant, is fully accountable. And then, as you go along, you can safely see what things can be automated and given to the assistant. That's right. So in a way, you let the agentic behavior be emergent. That's a good way of looking at it, because we're getting into this very weird hype conversation, where it becomes very dangerous if you don't know what you're doing.
Hugi Aegisberg:You've put the nail on it. You know, why am I being this careful? I'm being this careful because people put a lot of meaning and a lot of weight into this word. Yes. If I say, yeah, I think Sveriges Riksbank should do this and this with agents, I'm gonna have a bunch of emails, you know, not coming to me, but to the press department saying, like, are you crazy? Like, we're not gonna use agents, they could do anything, you know, and they might be right if they're understanding it in that way. Yeah. So the way I'm being careful here is basically to make sure that, you know, people understand we're using this very, very carefully, and we're definitely not gonna rush into, you know, letting some agents loose in the economic system.
Anders Arpteg:I'll like a Swedish prime minister that was, you know, yeah, I'd go anyway.
Henrik Göthberg:It's time for AI News, brought to you by AIAW Podcast.
Anders Arpteg:Yes, so we take a small break in the middle of the podcast to just reflect on some of the recent news and try to keep it short, but we usually fail. But let's see if we can go through some of our personal favorites when it comes to recent AI news. Do you have anything, Hugi, that you'd like to mention? Anything you read about recently that caught your eye? Yeah, let me think for a second. Maybe I'll pass it over to you and I'll come back. Yeah. You start us off this time. Okay, okay. So, a lot of stuff. I think one interesting part was actually that Ilya Sutskever, who is one of the founders of OpenAI together with Sam Altman, actually published, you know, a bit more about the background, what happened when they fired Sam Altman two years ago. Interesting. In the big breakup. And then we got to learn a bit more about, you know, behind the scenes, what really happened. And this was actually part of Elon Musk suing OpenAI. You know, Elon Musk was also part of starting OpenAI in 2016 or whatever it was. And then he sued them for, you know, going against the original vision of being an open AI company and becoming a closed commercial company. And anyway, now as part of those proceedings, Ilya Sutskever had to publish basically what he thinks. And he also brought in a lot of other people, like the CTO Mira Murati, and others at OpenAI, to speak about, you know, why they actually did try to fire Sam Altman. And I think it was really interesting news. Did they say why? Yes. And don't keep us waiting.
Henrik Göthberg:Oh, by the way, he's clear from me like our smell over it over here. Smelly guy.
Anders Arpteg:No, but it was really harsh words against Sam Altman. And basically Ilya said, you know, he was lying to people time and time again. He was pitting people against each other to get them to fight against each other inside OpenAI to bring him, Sam Altman, to the top, so to speak. So he was manipulating people, he was lying, and he was not the person that they thought they could collaborate with. And that led up to basically them, yeah, trying to fire him, or they did. But then, you know, it was, I think, October 16th or 17th in 2023. So two years, more or less exactly, since it happened. Right? Two years, yeah. And I even remember that first day still, you know. I just saw it, what the heck is happening? This is crazy. And then first he got fired, and he just received like a short notice saying, you're fired, and everyone was shocked. Right, American style, yeah. And then Mira Murati was appointed as temporary... no, it was November 17th, I think, not October... anyway, it doesn't matter. And then Mira Murati was appointed acting head of OpenAI, and then she was removed, and then finally, you know, Sam Altman basically tried to get all the employees of OpenAI to go to Microsoft, because Satya Nadella did a super smart thing, saying, Sam Altman, come to Microsoft, you get your new AI department, you can run it as you want and bring anyone you want from OpenAI to us. And then basically 90-plus percent of all the employees said, oh, we're following Sam Altman. Really? And the whole board had to back down, saying, you know, this will kill OpenAI. So either we bring Sam back or OpenAI is dead. So then he came back, and then, you know, later Ilya left, Mira Murati left, and so many other people left OpenAI.
But the level of deception and manipulation that Sam seems to have been performing, and this is what multiple people have now said in written depositions, was staggering to me. You know, I actually said in the beginning of this year, I think this will be the year that OpenAI stops being the frontier AI lab. I think that's already happened, but it's gone bigger.
Hugi Aegisberg:Well, and Anthropic caught up with them in enterprise, didn't they?
Anders Arpteg:In enterprise they're bigger, yeah. But I think even for consumer users they will be, I think. Oh, yeah, for sure. Yeah, yeah, yeah.
Henrik Göthberg:And I think there are two ways to look at this. You can either be appalled about this deception and manipulation, yeah, or you could simply say, welcome to the enterprise world. I mean, the corporate world is cutthroat. If you look at the political battles in any large enterprise and the way people brutally say what they need to say to maneuver to the top, it goes on everywhere. But maybe this is more extreme, and maybe you're not expecting that sort of strife in such a small company. But if you look at it on a mega-company scale, you know, the percentage of psychopaths is higher in these environments than in the general population, for sure.
Anders Arpteg:So I'm not surprised, but it's not normal behavior, though. I mean, it's too much to say that this is normal behavior in enterprises. I mean, this is extreme behavior, I would still call it that.
Henrik Göthberg:It's of course extreme behavior, and of course it's very seldom this extreme behavior gets caught out in the public eye. But if you take any large enterprise that has lived for a hundred years, you have multiple examples of this happening during their hundred-year history.
Hugi Aegisberg:I mean, I'd go further. I'd say in any sort of organization where you live and die on the expectations of a market, you'll have this sort of cutthroat behavior. Another field in which you have it is politics. You have it among people.
Anders Arpteg:But there are good examples, like Satya Nadella, I think, you know, is a good example at Microsoft, or even Sundar Pichai at Google. I mean, if they got caught in this kind of behavior, that would be extreme. Yeah, but they're not founders.
Hugi Aegisberg:If you're not a founder, you're replaceable. I think that's the difference.
Henrik Göthberg:I don't know. Yeah, I don't know. I mean, in a normal case, the CEO would simply have left, you know, in the sense that when the board and the CEO are not happy with each other, they simply replace the CEO. And why did they replace the CEO? If you take any large enterprise that is a hundred years old, they have replaced the CEO and brought a new one in, and it's a battle around getting to the seat, and it's a battle of keeping your seat, and it's a battle around being part of the team. I think this is extreme, I give you that, but I think we should also not be naive. Just because we don't see it in the news doesn't mean it doesn't happen. And then it's a spectrum, of course, of how extreme it is and how violently people have lied. But if you go to Volkswagen, we had Dieselgate, you know. What is Dieselgate if not the same, right? Cutthroat behavior for a middle manager to show the right stuff. Who knew about it, who didn't know about it, blah, blah, blah.
Hugi Aegisberg:Well, I think Enron, you know, et cetera, et cetera. Although Enron is maybe different, I mean, the whole thing was fraudulent. But those are exceptions. Yeah, but I would say this, you know, while I'm saying this about public perception and expectations: the line, then, is between marketing and lies. Right? I would say that the AGI terminology, I think Sam Altman and others who were spouting it for a long time, I mean, they knew that it's not going to be what people are thinking when they say AGI. You know, they say AGI and people think Skynet. I mean, they knew that's not what's happening.
Henrik Göthberg:The real problem is that the influencers everybody has been living by talk like they're telling the truth when in fact they're doing cutthroat marketing. So it's more a reflection, I think, on the whole media and influencer community that goes along on the bandwagon, where you don't even question steps along the way, you know. People are allowed to say outrageous things as marketing claims without even being scrutinized. Yeah.
Anders Arpteg:But I think, you know, a step where you actually lie to board members and your colleagues in top management, that's a bit extreme, I would say. There are certainly similar examples, but I believe and hope that's more of an extreme.
Henrik Göthberg:The main problem is how we can then be there. I mean, if you do that and you get caught out in a large enterprise, you would be removed. But this is the founder problem, I guess.
unknown:Yeah, yeah.
Hugi Aegisberg:Well, on that note, the thing that actually came up as what I'd like to discuss, it's not extremely recent, but it is Mira Murati.
Henrik Göthberg:Yeah.
Hugi Aegisberg:And it is her company, Thinking Machines, because I'm really fascinated with what they're up to.
Henrik Göthberg:Yeah.
Hugi Aegisberg:And she's a cool lady. She is a cool lady. And what it seems to me that they're doing is building a factory to build AI models. They're basically betting hard on that. You know, we're tapering off on the mega-size models, and, you know, the small to medium-size models are where it's gonna be at, but they're gonna be even more powerful when they're fine-tuned and tailored. Yes. And they're basically building a factory for you to be able to do that, to come with your data. And they're like, yeah, okay, we can help you do that really easily. I think that is going to be huge. I really do think that it's going to be huge.
Henrik Göthberg:We've been predicting the same thing with a super.
Anders Arpteg:I think also what you're speaking about here is that they basically provide an API to do post-training and fine-tuning, I believe.
Hugi Aegisberg:Right now, yeah. I think right now that's what they're doing.
Anders Arpteg:Yeah.
Hugi Aegisberg:If I was to guess what I think they're going to do, I think they're going to do way more than that and basically provide the whole thing as a somehow curated, very smooth experience to come in to. And it's not only about you coming in and doing a bit of LoRA in a web interface. I think it's going to be a lot more than that.
Anders Arpteg:I perhaps misspoke there a bit. I don't want to downplay it. I think it's super useful. And I don't think people understand that actually doing fine-tuning and post-training is actually more difficult than pre-training. We spoke about that last week, actually. And um, this is something that is missing today. So if some company could help an enterprise to adapt the model for that specific use case in an easy-to-use way, that's something that would be extremely valuable and is missing today.
Hugi Aegisberg:And specifically, something that I'm really interested in is multimodality for us with regards to time series. Yeah. So there are a lot of these specialized use cases that are too niche for a big lab to take on, but they're extremely useful. And you will have a lot of companies and a lot of quant firms that are going to develop them internally and absolutely not release them to the public, because that's their competitive edge. So you're not going to get these great multimodal niche models for something like, you know, time series and text without having some way to do that yourself as a medium-sized organization, bringing your data there and saying, okay, we just need help actually doing this properly.
Henrik Göthberg:But if you take this back into a Swedish context, an example. If I take the example of the EuroHPC Joint Undertaking and our AI factories, are they thinking both in relation to pre-training and to fine-tuning, distillation and post-training? Because in a way, you know, how do we utilize GPU power in Europe effectively? And what would be the best market-fit product or service they should provide? What you're hinting at is, I think it's more of a killer app to work with distillation and fine-tuning than to work with pre-training. And I don't really know if they have ruled that out. Inference, maybe not.
Anders Arpteg:The latest I've heard, if you take Mimer in Linköping. Yeah, let's take that as an example. I mean, in the beginning it was, oh, we want to do a big pre-training of a Swedish LLM. And it's like, oh no, not once again. A stupid idea. But then, of course, they want to do more, the fine-tuning, the post-training, but they even want to do more than that, which I'm not sure will happen, but I hope will happen, which is that it's not only the training part, it's also the inference part that they can help with.
Henrik Göthberg:That would be amazing, right?
Anders Arpteg:So if they actually make it easy and you don't have to switch from one cloud provider to do the training and then suddenly move to AWS or Azure or Google to do the deployment of the application, that would be great.
Henrik Göthberg:Yeah. Because distillation and post-training, fine-tuning, you know, that I think is a no-brainer. They need to go there.
Speaker 1:Yeah.
Henrik Göthberg:But inference would be amazing. Have you heard, is that something... I've heard people say that they want to go there. Whether it happens, we'll see. Yeah, because with inference you can also imagine that you're doing inference as a service. So, I mean, in some ways you're having your normal product or whatever, but then you're routing stuff to the real GPU power.
Hugi Aegisberg:That would be cool. That would be cool to solve. I think what they'd need to provide in order to make this really useful for most organizations is the step that comes after you train the model. How do you package this? How do you containerize it? What do you run it on? You know, if they can't help you with that, you're stuck with a lab toy.
Henrik Göthberg:Yeah, but this is... simply to take the definition of done to a container, yeah, plus post-training, that would be amazingly much more than pre-training. Absolutely.
Anders Arpteg:Yeah. I think the positive side of this is that there isn't a big solution for this, neither in the US nor in China. If we actually do this, which is reasonable, even in Europe, it could actually be a killer app.
Henrik Göthberg:To take the container as the definition of done for your service, it would be amazing.
unknown:Yeah.
Henrik Göthberg:Sorry, should we have another news item or should we leave it? We got really carried away.
Anders Arpteg:I have some more, but if someone else has something to share... yeah, I want to finish. So if you want, I'll take one more short one. Because it is rather techy, but I think it's actually very interesting. And perhaps it's just me thinking that. But it's a new paper, actually from Meta, and they call it continual learning via sparse memory fine-tuning. So it's continuing on the fine-tuning topic here, which is interesting. That's good. But if you take a step back and think about the big differences between the human brain and the type of LLMs and AI models we have today, it's that we do have pre-training, yeah, we do have post-training, and then we have inference.
Hugi Aegisberg:The network is trained on every inference run in our brains, whereas... yes.
Anders Arpteg:So when we use the brain for inference, it actually trains at the same time. Exactly. So training and inference are not separate in that way, but they are for AI. Right. So now they're trying to find a fix for this. And Meta, and I was surprised, I've been waiting for this for a really long time, to see when we'll actually get a model whose parameters are being updated continuously, in a continual learning kind of way. Before it wasn't possible: if you fine-tune a model and put a new data set into something that was pre-trained, it usually had these kinds of catastrophic forgetting problems, and it forgets a lot of stuff. So now they came up with a new way, and I will try to keep it short here, but they basically have a memory alongside the parameters that they do have. So then, through a key-value kind of mechanism, it can find the most relevant parts of that memory. Let's say you have a million memory slots in this memory, and now you want to train on a new piece of data that comes into the model in a streaming way, and it's asking a prompt about, you know, where should I go on vacation, blah, blah, blah. Then it actually goes in, tries to find an answer, but it also stores that in memory. But it only updates a really, really small part, like perhaps 30 places out of a million in the memory. And it's the memory, not the weights; the weights are not affected. No, no, but it is similar, because, you know, it's still memory, so to speak. But the normal parameters are not changed. Sure. But if you go into technical details, you know, the transformer has these self-attention blocks, and one part is the self-attention, but then they have a big feed-forward network as well. Yes. And then it's just repeated many, many times, right? Yes. So the feed-forward block is then basically replaced.
So that becomes more of a key-value store, where you can look in a big memory on the side, saying, for this specific input, I fetch these 30 blocks from the memory. Uh-huh.
Hugi Aegisberg:And it does a different path through the network.
Anders Arpteg:So think of it like a RAG solution, actually, but inside the model. Okay. But also one that updates the memory. Right.
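Roughly, the mechanism described above can be sketched like this. It is a scaled-down, simplified illustration: the dimensions (100k slots rather than millions), the softmax lookup, and the plain delta-rule update are my assumptions for clarity, not Meta's actual implementation.

```python
import numpy as np

# Sparse key-value memory standing in for a transformer's feed-forward block:
# look up the top-K most relevant slots for a query, and on a learning step
# update ONLY those slots, leaving the rest of the memory untouched.
# Sizes and the update rule are simplified sketches, not the paper's method.
rng = np.random.default_rng(0)
N_SLOTS, D, K = 100_000, 64, 30
keys = rng.standard_normal((N_SLOTS, D)).astype(np.float32)
values = rng.standard_normal((N_SLOTS, D)).astype(np.float32)

def forward(query):
    """Return the memory's output for a query: a softmax mix of the K best slots."""
    scores = keys @ query                    # relevance of every slot to the query
    top = np.argpartition(scores, -K)[-K:]   # indices of the K highest-scoring slots
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                             # softmax over just the chosen slots
    return w @ values[top], top

def continual_step(query, target, lr=0.1):
    """Single-pass learning: nudge only the K fetched slots toward the target."""
    out, top = forward(query)
    values[top] += lr * (target - out)       # K slots change; the other 99,970 don't
    return top

query = rng.standard_normal(D).astype(np.float32)
before = values.copy()
touched = continual_step(query, np.zeros(D, dtype=np.float32))
changed = np.where((values != before).any(axis=1))[0]
print(len(changed))   # only the fetched slots were updated
```

Because each streaming example touches only a handful of slots, new information is absorbed in one pass without overwriting the rest of the memory, which is the intuition behind avoiding catastrophic forgetting here.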
Hugi Aegisberg:So this is cool, of course, amazing. Here's my question, though. Does that mean everyone has to have their own model running on their own hardware? Because that's the thing. When people are saying... because I keep saying this: okay, until you solve this problem with a self-learning system, you're not going to get anything close to what you're thinking about with AGI. Basically, Yann LeCun is more or less right. Yeah. I think. Thank you. Yeah. So take that for a little bit. Yeah, no, no. I mean, Yann is usually right, and people just don't want him to be right.
Henrik Göthberg:Yes.
Hugi Aegisberg:So he's basically right. If you don't have a self-learning network, you're never going to get there. And then people are like, oh, we will have self-learning networks. Exactly how much electricity is that going to take? Because you're going to need one self-learning network for at least every bunch of people. What, do you put a million people into the same self-learning network? That's going to be a schizophrenic network.
Anders Arpteg:But I think, you know, in some ways you will have more generic models that are being updated for a lot of people, perhaps through an organization. So you have an organizational-level model being updated. Some could be on a societal level, some could be on a personal level, some can even be on a task level, right? So you can think of task-specific models, people-specific, company-specific, society-specific, right? So you could think of this kind of hierarchy of different models, where some of them are being updated in a general sense and others in much more specific ways.
Hugi Aegisberg:So in the situation we are in today, running inference on a static model is not even cost-efficient. What is going to happen when you need a self-learning model? Because it's going to be a 10x, 100x, 1,000x energy demand problem.
Anders Arpteg:It doesn't really, I don't think it increases or decreases the efficiency, so to speak.
Hugi Aegisberg:In terms of power energy usage, no?
Anders Arpteg:No, actually not. Because normally training is, you know, a thousand times more expensive than an inference. But not in this case, because it's a single pass. This is actually one of the cool things in the paper. You don't need to run epochs and epochs of training anymore. It's a single pass through, and they could see they avoid the catastrophic forgetting but still gain as much in accuracy improvement, so to speak, with a single pass.
Hugi Aegisberg:Okay. Interesting. I mean, I do follow this space, and Google had the Titans paper that came out. I don't know if you knew about that. I mean, they're doing self-learning models; they were talking about this Titans architecture. Once in a while these things come out which seem to have solved this problem. And, I mean, you know, maybe this is the one. Who knows? I mean, it would be great for Meta, because they're not really at the frontier right now. And, you know, Zuck is sitting there thinking it's Yann's fault.
Henrik Göthberg:It's self-learning, right? Tracking.
Anders Arpteg:So it's not the full solution to the problem, but at least it's a big step in a direction that's been lacking for a long time. Absolutely. So let's see if this particular paper catches on. You know, it's still at the research level. But it's interesting. Five years before we start.
Henrik Göthberg:It's interesting how we can retrace the history back to a certain paper, you know, Attention Is All You Need, you know, stuff like this. So once in a while it's one of those papers. You don't know... how can you tell? Yeah, afterwards you can.
Anders Arpteg:Cool. Okay. Let's go back a bit to the Swedish central bank. Sure. And you've been mentioning a bit the on-prem side. Please just say if this is not something you can speak about, then we'll skip it. But I think a lot of companies are thinking about, you know, cloud versus on-prem here. And, perhaps without going into too much detail, can you just speak a bit about how the Swedish central bank, or central banks in general, are thinking about using data and AI, or compute in general, on-prem versus in the cloud?
Henrik Göthberg:Yeah, what about the architecture, the stack? Like, what are the considerations? Don't be specific on the details. No, no, no. Okay, so about how to think.
Hugi Aegisberg:I'll say this: most organizations that are like ours have not invested that much in on-prem capacity. I think the reasons are that it has been prohibitively expensive, both in terms of investing in it, but also in being able to attract the talent that can run it.
Henrik Göthberg:And the competence to run it.
Hugi Aegisberg:Exactly. So yeah, those two things. Now, they've been doing a lot of stuff in the cloud, but all of them, even the ones that are not in legal situations where they can't put things into the cloud, they still don't with a lot of the data; they're, you know, under agreements and things like that. So basically there's a flora of different architectures. Now, I can speak about ours, and I can say a few things. Okay. Okay. It's going to be big enough that we can run any model out there. Any model? Well, any model that is open source.
Henrik Göthberg:Okay. Any open source model.
Hugi Aegisberg:Any open source model. Yeah.
Henrik Göthberg:So, give us an example. We have a couple that set the frame. I mean, which are the useful models, right?
Hugi Aegisberg:Yeah, I mean, if we wanted to run unquantized DeepSeek, it could. It could do that, but I don't think we will, because there's no reason to run unquantized DeepSeek when you have GPT-OSS that's just as good.
Henrik Göthberg:So GPT OSS is a nice benchmark here.
Hugi Aegisberg:Yeah. Yeah. So GPT-OSS for sure. Yeah. Then we get a frame, right? Yeah. The way we've set it up is that we rely heavily on good hardened systems for containerization, you know, Kubernetes environments, the stack around running it, and so forth. I mean, the sort of NIM ecosystem, all that stuff. Basically, it's set up in such a way that a small AI team can manage allocating resources when resources need to be allocated. What we're depending on a lot, because we are not at this point in time looking into fine-tuning or training models, although I'm sure we will, but that is not the first thing we're going to do. And perhaps doing that using Thinking Machines, I guess. Yeah, absolutely, absolutely. But the main thing for this is actually running inference. And it's about, on one hand, building some sort of capacity for an organization-wide assistant system, where departments have their own spaces, they have their own way to do, you know, documents and what makes sense for them, they can set up some sort of way in which we can build an ecosystem of MCP servers that do things with our other systems, and a way for you to be able to interact with certain tools. I also want there to be a way in which you can do things like deep research, searching on the web, in a space into which you cannot upload documents that are too sensitive to be in that same space. Because what you need to think about when you're doing deep research is that you don't control what ends up in the logs of Google or the site that you are eventually visiting, right? So you do need to have those considerations when you are working in an environment where people are very concerned with their data security.
Anders Arpteg:Okay, I'm biding my time, but okay, please.
Hugi Aegisberg:Well, I think that there's an important distinction to be made in all of these situations between what we think is rational for us, who are able in any situation to understand: is this safe or not? So I actually just had this conversation today, and I think it's an interesting one, so I'm gonna go into it a little bit. Compared to a lot of other government agencies, and we just had a conversation with one today, we're actually a little bit out there in what we allow to be done. And the reason is we've decided to basically have a team where we have people who are security experts, and hence we're evaluating all the time: is this a good idea, is this a good thing to do, is it safe? That's awesome. What a lot of other organizations do is rather just put a lockdown on everything, because that's the safest, easiest way to do it. Trying to be on the edge is hard, and it's a lot of work. You need to constantly re-evaluate everything. And hence you end up with a lot of these sorts of policies where you're like, you know what, 99.999% of the time nothing's gonna happen, but we're not gonna take this particular risk in this case.
unknown:Yeah.
Anders Arpteg:Yeah. Okay, but just going back, I mean, the pros and cons of cloud versus on-prem. Sure. Of course, you know, the pro with on-prem is usually security-related, yeah, or even legal reasons, maybe.
Hugi Aegisberg:Mostly legal reasons.
Anders Arpteg:Yeah. Because I would argue that actually the cloud is significantly more secure than on-prem, right?
Hugi Aegisberg:Absolutely.
Anders Arpteg:But another problem, then, if you cannot use the public clouds, the big clouds, is of course not only the hardware or where the data resides, but the tech stack you get on top of it as well. Which is significant and extremely valuable and extremely powerful for an organization to be able to use. And if you are not able to use the full stack to do the training, to do the inference, to deploy your application, to use Kubernetes on GKE or Amazon's Kubernetes engine or whatnot, you will need to spend a lot of time and resources, and it will not be as functional and well-monitored and secure as it would be in the cloud. So you will lose a lot of value if you can't use public cloud. Would you agree?
Hugi Aegisberg:Yeah, in a sense, I would agree. Of course, that consideration is there in terms of security. But I think, as a lot of European countries are finding out, sovereignty is also important.
Anders Arpteg:So a lot of the public cloud providers, you know, the top ones are from the US, the Google, the Amazon, the Microsoft. They are, you know, deploying regions in Sweden more and more, and Google just launched their big region in Sweden as well. Sure. And of course Amazon has it, and so do, yeah, others. Do you think they can claim to have a sovereign solution then for Sweden?
Hugi Aegisberg:Yeah.
Anders Arpteg:Would you say that just because they have a region and the machine and a data center in Sweden that it's sovereign?
Hugi Aegisberg:No. And, well, again, sovereignty, I'm not sure if it's even a very useful term, but let's work with it. You did name it. I think it really depends on the jurisdiction the company is in. From what I understand, it was even a thing that went to court in France. I think it was someone from Google, a lawyer there, who basically, when questioned enough, said that no, there are certain situations in which we would be required to, you know, give this data up to the US federal government. Yeah. Intelligence services and whatnot. So in that situation, I mean, you know, by any definition of sovereignty, that's not sovereign. Right. Yes. But I think, when you're not dealing with things that have to do with the currency of your country, like we are, or national security or things like that, it's sovereign enough a lot of the time. I mean, you know, you don't necessarily need to care about this most of the time, but I think in certain situations you do.
Henrik Göthberg:Yeah. Yeah, well summarized, I think. Would you use Alibaba, for instance?
Hugi Aegisberg:No, no comment.
Henrik Göthberg:So, one question. In the work done so far, what has been... headaches is maybe the wrong word, but where has the most effort been spent in understanding the architecture and establishing the stack to where you are right now?
Hugi Aegisberg:Or is this more about where you're going with it? So I think, I mean, when I came into it, I basically realized that, due to the way things are, probably the best way to go is some sort of on-prem solution. Things sort of start falling into place. Once you decide that you need that sort of infrastructure, and once you decide that you need everything that surrounds that sort of infrastructure, not just the GPUs, but really what we're talking about here, the whole thing, the stack of licenses, someone who runs it, the usability, etc., then, you know, scaling it up to something that can fit anything you want to run, that's not that much more at the end of the day: adding some GPUs, and basically then you have a bigger capacity. So I sort of looked at it like that, and then it was basically about sitting down and understanding, okay, what is a reasonable size, and how will these things be able to run so that we can utilize them at peak in a good way, without being constrained to, you know, this particular library of models because more than that doesn't fit. So that's how I arrived at the time-sharing thing that we talked about before. It sort of made sense for us because of that. Yeah.
Henrik Göthberg:And were there hard choices in terms of patterns, centralized versus distributed, different things like that? You said you took a very strong node approach immediately. Yeah. Philosophically, you clearly understand it as product teams and platform self-services. Yeah. That leads down to architectural choices. Sure. Can you elaborate a little, not on what you did, but on what options you were considering? Because this is distributed.
Hugi Aegisberg:Yeah, so the way you need to have it is some sort of GitOps setup into which people can deploy their applications through a pull request. You need some sort of architecture like that, right? And Helm is basically the way you do it. So a Helm app pattern where somebody out in the organization can make a pull request and say, hey, I'd like to deploy my app. Well, for it to be deployed, it needs to be in the registry. Have you put it in the registry? Yes. And has it passed the security scanning? Yeah, it's green. Okay, you're probably good to go. Right.
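The registry-plus-scan gate described here can be sketched in a few lines of Python. This is a minimal illustration, not the Riksbank's actual pipeline: the registry host, field names, and status values are all invented for the example.

```python
# Illustrative GitOps admission gate: a pull request that wants to deploy an
# app is approved only if its image sits in the internal registry and the
# container security scan came back green. Every name here is hypothetical.

from dataclasses import dataclass


@dataclass
class DeployRequest:
    app_name: str
    image: str        # e.g. "registry.internal/teams/rates-dashboard:1.2.0"
    scan_status: str  # result reported by the container-scanning pipeline


REGISTRY_PREFIX = "registry.internal/"  # assumed internal registry host


def can_deploy(req: DeployRequest) -> tuple[bool, str]:
    """Return (approved, reason) for a Helm/GitOps pull request."""
    if not req.image.startswith(REGISTRY_PREFIX):
        return False, "image is not in the internal registry"
    if req.scan_status != "green":
        return False, f"security scan is {req.scan_status}, not green"
    return True, "ok, probably good to go"


ok, reason = can_deploy(DeployRequest(
    app_name="rates-dashboard",
    image="registry.internal/teams/rates-dashboard:1.2.0",
    scan_status="green",
))
print(ok, reason)  # True ok, probably good to go
```

In a real setup this logic would live in CI or an admission controller rather than a script, but the shape of the check is the same.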
Henrik Göthberg:Sorry, I want to stay on this line of questioning a bit longer, on exactly the same thing. Yeah. Because working with Scania, we have a lot of discussion. We are working towards a distributed architecture. One of the key question marks we've had, where we have even gone down a path and then reversed back, has been: what level of technical maturity and expertise do we need in the domain teams? You want to find a level for the architecture and the software patterns that is equivalent to their maturity in using it. If you go one path, it's super cool, but you can only hire people from Spotify or Klarna to make it happen. Or you say, we can only hire at this level of competence, so we need to productize, we need to make the usability of the engineering experience a little bit better.
Hugi Aegisberg:No, absolutely. And so if you want to deploy your own completely custom-made dashboards with some sort of model, you can do it. That's what I'm saying. Maybe we'll have to help you a little with some of the DevOpsy stuff and some of the containerization stuff. But most of what we're doing doesn't require that. Most of it can be done in notebooks. Most of the problems that people will be solving, they will be solving in notebooks, and then they don't have to do any of that stuff, right? We give them access to the libraries they need. If they're missing a library, we can pass it through container scanning. So notebooks set the reference point for this.
Henrik Göthberg:Is that a tool already out there that you're going for?
Hugi Aegisberg:So now we need to calibrate to that. I think notebooks, you can think about notebooks in a lot of different ways depending on what sort of notebooks you've used. We have Jupyter notebooks, and we also use something called marimo, which is very cool. marimo notebooks are basically a sort of notebook that you can turn into a fully fledged dashboard, and then the kernel and everything runs in WASM in the client. So you can turn them into dashboards, and then what you need to solve is the data authorization layer, which you can do with, you know, something like Entra ID piped through. So you can do a lot with these notebooks, and I think for most applications, most dashboards, most analytics, even real-time stuff, you'll be able to do it there. But sometimes you want something that is a bit more of a bespoke tool, or you want to run a Python model and be able to query it from the notebook, and it actually has to run, it does a bunch of stuff, it has cron jobs and workers. If you're doing that, then yeah, you need to containerize it.
Anders Arpteg:But we're straying off, I think, from the original topic. Going back to cloud and on-prem, etc. I think everyone agrees we would like to decrease our dependency on American and Chinese cloud providers. But then the question is, what is the alternative? There are a number of initiatives in Sweden as well. We actually got some funding from the Swedish government recently for Försäkringskassan and Skatteverket to build up some kind of common Swedish cloud, if you can call it that, for the public sector. Yeah. What do you think about that? Would that be a good idea?
Hugi Aegisberg:Absolutely. The way I see it is that most organizations are going to be using inference models with OpenAI-compatible APIs. If you build it the right way, you have an API proxy and you can point it towards wherever the API is, be it one of these commercial cloud providers, be it the Skatteverket kit, or be it our internal on-prem setup. And if you build your systems right, you'll be able to classify those different paths as suitable for a certain level of security. So you say: okay, for this level of confidentiality, we're only going to use our on-prem things. And I think the next time a new investment needs to be made in an on-prem architecture, I hope these legal issues will have been solved. I don't think a change of law is required; it's basically a clarification that's required.
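The proxy-and-classification idea can be sketched like this. The confidentiality classes and endpoint URLs below are invented for illustration; a real setup would plug in its own classes and backends behind the same OpenAI-compatible interface.

```python
# Sketch of the API-proxy idea: route OpenAI-compatible inference calls to a
# backend based on the data's confidentiality class. The class names and
# endpoint URLs are made up for this example.

ROUTES = {
    "open":         "https://api.commercial-cloud.example/v1",  # public cloud
    "internal":     "https://inference.gov-cloud.example/v1",   # shared public-sector cloud
    "confidential": "https://llm.on-prem.internal/v1",          # on-prem only
}


def pick_backend(data_class: str) -> str:
    """Choose the backend allowed for this confidentiality class."""
    # Fail closed: anything unknown is treated as confidential and kept on-prem.
    return ROUTES.get(data_class, ROUTES["confidential"])


print(pick_backend("open"))          # https://api.commercial-cloud.example/v1
print(pick_backend("confidential"))  # https://llm.on-prem.internal/v1
```

Because every backend speaks the same API shape, the application code never changes; only the base URL does.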
Anders Arpteg:Agreed.
Hugi Aegisberg:Yeah.
Anders Arpteg:But okay, let me go back a bit to the topic, because I agree that having a Swedish cloud is something everyone would like. We can't have every authority, or even every Swedish company, build their own cloud solution from the hard drive to the application, so to speak. But sometimes I think we should phrase it like this: the top cloud providers in the world, the Googles, the Microsofts, the Amazons, built their cloud solutions by first spending 10 to 15 years building an internal cloud for their own purposes, and then coming to the realization: ah, we can actually flip this into a kind of public cloud and earn a shitload of money on it, which they do. But who in Europe or in Sweden could build something with similar, or at least not too horrible, functionality compared to what they have?
Hugi Aegisberg:I mean, now I'm speaking just for myself, and I'm going to step out of my Riksbank role, because I also build things on my own. So this is me building things on my own, looking into alternatives, right? Good. It is already public knowledge, by the way, that at the Riksbank we do use Berget for inference. Berget AI. And that's a good idea. Yeah. It's a Swedish AI inference provider, and I think they're great. We use them for a pilot use case.
Speaker 1:Okay.
Hugi Aegisberg:When I've looked at it myself, I have thought that Scaleway is pretty good. I think Scaleway provides a reasonable amount of the stuff you usually need to start building with these big platforms. Of course, they provide maybe 5% of what AWS provides, but I don't even understand most of the stuff that is on AWS; I don't need it, right? So yeah, Scaleway is pretty good. I think the biggest problem with some of these providers is actually not the services they provide but the UX, which somehow manages to be even worse than AWS's. I don't think that should be possible, but they really, really manage.
Henrik Göthberg:But that's a good point. I mean, we have friends who love GCP, which is still basically in the big league. They have the best usability, right? And they have the core features, which are really, really useful, and the way I can work my flow is better there. I don't care if AWS has three times the depth of GCP in terms of different features.
Hugi Aegisberg:Who is the Amazon of Europe? I don't think there is one. It's Lidl. Did you know that? You're joking. No, I'm not joking. You know, Lidl has a cloud.
Henrik Göthberg:We had this as news when it came out; we talked about it. But you have this now, so let's see. And then there was the SAP news. We didn't take that news. Did you hear about that? SAP Cloud, the new partnership with SAP. Was that launched today? Yesterday. Yesterday, right. So we could have said something about that. Yeah, we could have.
Anders Arpteg:But we also have companies like Evroc and others. There are companies that try, but we have to realize they're very far behind. Although, as you say, we perhaps don't need all the functionality. I also don't think people understand the level of security that exists in these cloud providers. They are pouring billions and billions of dollars into just the security, intrusion detection systems and whatnot. And that's something they don't share.
Hugi Aegisberg:Yeah. Do I think we're going to catch up? To some extent, yes, in Europe. Not with everything. Perhaps it's going to be more of a flora of specialized providers that each do a little bit of this and a little bit of that. But I think the real reason it's possible to catch up is the Cloud Native Computing Foundation and all the open standards that have come along.
Speaker 1:Yeah.
Hugi Aegisberg:And basically that you can run a private cloud for yourself now. It's still a headache, but it's not a mind-numbing headache.
Henrik Göthberg:But can I just go down one rabbit hole? You were opening it up a little bit, and I want to go back to it.
Hugi Aegisberg:Can I just say Hetzner? I would just like to say that Hetzner is actually great.
Henrik Göthberg:Hetzner, okay.
Hugi Aegisberg:How do you spell that? H-E-T-Z-N-E-R. So I'll just tell a little story here. I have my own Kubernetes cluster on Hetzner. Okay. Like my own private, well, private in the cloud.
Henrik Göthberg:I mean, yeah, but for your own fun, yeah, yeah.
Hugi Aegisberg:Yeah, but it's not set up through Hetzner's Kubernetes service. It's set up on just virtual servers with a control node, a Kubernetes cluster that is basically managed through my own Git repos. And it works really well. It's really fast. I can bring it online and offline like this, with code. Just kill it, bring it back up, three minutes. That's pretty cool. I can deploy apps on it with Argo CD, through Git. Nice. So I just push some apps to Git, make some changes, and Argo CD automatically rolls them out. It's cool. It sounds German. Is it a German one? Hetzner is definitely German. Yeah, all right. Yeah, absolutely.
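The "push to Git, Argo CD rolls it out" flow revolves around Argo CD `Application` resources with automated sync. Here is a hedged sketch of generating one; the repo URL, app name, and namespaces are placeholders, not the actual cluster's configuration.

```python
# Hedged sketch: an Argo CD Application manifest pointing at a Git repo, with
# automated sync so that a commit becomes a rollout. All URLs, names, and
# namespaces below are placeholders for illustration.

import json


def argo_application(name: str, repo_url: str, path: str,
                     namespace: str = "default") -> dict:
    """Build an Argo CD Application manifest that auto-syncs from Git."""
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": name, "namespace": "argocd"},
        "spec": {
            "project": "default",
            "source": {
                "repoURL": repo_url,  # the Git repo Argo CD watches
                "path": path,         # folder holding the app's manifests
                "targetRevision": "HEAD",
            },
            "destination": {
                "server": "https://kubernetes.default.svc",
                "namespace": namespace,
            },
            # Automated sync: a push to the repo rolls the change out;
            # selfHeal reverts manual drift, prune removes deleted resources.
            "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
        },
    }


manifest = argo_application("my-app", "https://example.com/me/apps.git", "apps/my-app")
print(json.dumps(manifest["metadata"]))  # {"name": "my-app", "namespace": "argocd"}
```

Applying a manifest like this once is what makes subsequent deploys pure Git pushes; tearing the cluster down and back up in minutes is then just reapplying the same declarative state.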
Henrik Göthberg:But let me go back and pose a hypothetical question, and I think this is a very important one. We now have the AI-verkstaden, and we're investing from the government's budget to really have an AI workshop or factory, a platform that we then want to utilize across the different parts of the public sector. And here we have the Riksbank, which needs to take a bit of a platform view versus your nodes. Yes. So the question is: if you could dream, and it wouldn't be about someone trying to push idiotic tech or setups on you, how would you frame the most useful service? How would you work out what they should be doing for you, the things you shouldn't build yourself, but rather they build, you use, and you put out to the nodes?
Hugi Aegisberg:I think there's a very, very simple solution to that: just provide inference as a service, man. Just have a bunch of really good models, the models where we're going to end up with some sort of semi-standardization. There's going to be some model that is the default; we have Llama now, and GPT-OSS. Provide those through an API to other government agencies in a way where we've already dealt with all of the governmenty legal stuff. We've already standardized it. We say: if you're under this jurisdiction, you can do it like this; if you're dealing with OSL, you need to do it like that. Just deal with this. Most of it is not technical; most of it is legal coordination. That is the really boring answer. Most of it is going to be legal coordination.
Henrik Göthberg:So if they think really carefully, and listen carefully to some of the leading actors that have started going into this space instead of trying to figure it out on their own, they can get very clear killer apps, if they structure their focus and service around the legal coordination and inference. Yeah. And then maybe help for the less mature; you need to enable them, or be enabled by them. Then you get a core, fundamental value proposition.
Hugi Aegisberg:Yeah, you need to solve two main things. One, just make inference available to other government agencies. Yes. And two, have some sort of legal consultancy you can turn to for advice: we have this and this and this setup, will it be compliant for us? And they say yes, or maybe they say no.
Henrik Göthberg:Because they have some great ideas, but at the same time I'm scared they will spread themselves too thin and not do anything well, rather than having two or three killer things that really make sense and get the ball rolling.
Hugi Aegisberg:Yeah, I think so too. And something I'm quite reluctant about: again, I say inference and I say APIs. Just have something that processes the information and sends it back. Don't try to deal with storage and RAG and all that stuff. That can be done at the domain level. Yeah. You should probably have your own databases, and not put those over in the AI-verkstaden.
Henrik Göthberg:But once again, they can provide the infrastructure patterns. Yeah. Once again, a self-service: if you need this type of functionality in your domain, don't invent it from scratch. We have those patterns, we have those scaffoldings, but then you need to fill it yourself and own it.
Hugi Aegisberg:If the patterns are good enough. A lot of the time this is the problem, right? When you put out something that is average, and you don't want to be average, you can't use the pattern. If you just say, okay, we're going to be average like everyone else, go ahead and use the pattern. But sometimes your ambitions are higher, and then maybe using the pattern is not the right way. So I'm just saying: it's going to work for a lot of organizations, but maybe not everyone.
Henrik Göthberg:But can you sense a couple of killer apps that really would make sense for you?
Hugi Aegisberg:No, no, but the one killer app is just that. Okay, so what are the other things you could do, right? You can do inference with models: models that are multimodal, models that are text, time series models, forecasting models, all these things. On the other side of what you do with GPUs, you have GPU-accelerated calculations. Are you going to send workloads over there? When will you need that? I mean, yeah.
Anders Arpteg:I think what you said sounded really easy but is super hard: to have some kind of legal consultancy. I would certainly love to have it. I think so many companies would just love to be able to say: we have a risk appetite that we're actually satisfied with. We know it's not 100% safe against being sued, but we're willing to take this risk, so now we can use AI or data for this purpose. If we could get to that point for a lot of companies, it would be a huge benefit, but we're not there. And I think it's surprisingly hard. Any startup or company that solves that would make so much money.
Henrik Göthberg:Let me test another one. You highlighted the transcription use case. Sure. There is obviously a transcription use case in every single public-sector organization. Absolutely. And you could also argue there's a scale from sensitive, super secure, down to simple, right? Sure. Why shouldn't they simply build the most awesome transcription tool for the public sector, and you use it?
Hugi Aegisberg:I think they could, but I think it would be much more useful for them to just provide a transcription endpoint, because setting up an on-prem containerized transcription tool is not hard. They can do that, but what's going to happen is that you'll look at that tool and think: it's sort of what I need, but maybe not exactly, because I also need this and that. And then a lot of the time you'll say: okay, we'll just build the thing ourselves, or we take it from somewhere else.
Anders Arpteg:Just build a reference architecture, a starting point. Here's a transcription service, here's a translation service. It's been properly vetted; now you take it, you tweak it, you adapt it, and you apply it.
Hugi Aegisberg:Yeah, I think the key thing for them to focus on is the stuff that requires GPUs.
Anders Arpteg:When we say they, I'm not quite sure what you mean.
Hugi Aegisberg:Only, like, that stuff. Only the AI-verkstaden.
Anders Arpteg:Yeah, okay.
Hugi Aegisberg:Yeah.
Anders Arpteg:Not only I was about case. Yeah.
Hugi Aegisberg:So when I think about it, maybe I'm misunderstanding what you mean. That might be the case. Because when you say killer apps, I'm thinking about services, but you're thinking broader. Yeah.
Henrik Göthberg:I think we're getting too deep here. In a nutshell, they are envisioning buckets of types of services. Okay. Some services are really proper apps for the very untechnical, for a municipality with very little capacity. I see. And then they have a spectrum going up to the sophisticated agencies in Sweden, which is more what we're talking about.
Hugi Aegisberg:I understand what you mean now, and then I have a different answer. The Department of Defense in the US has something called, what's it called, a repository of hardened infrastructure: applications you can run yourself. Basically a Git repository of hardened stuff. It's not GovCloud.
Henrik Göthberg:It's hardened and vetted tools. Hardened and vetted stuff that allows you to build your own shit faster.
Hugi Aegisberg:Yes. And it's vetted by the Department of Defense. Yes. Something like that would be amazing. If some central authority took care of actually hardening and vetting things, patching the stuff that doesn't work for our legislation or doesn't keep up with our standards, then as long as you use things from that repository, you're basically good to go. Exactly. And then I think they should also supply those things as a service when it makes sense.
Anders Arpteg:Yeah, so basically what Amazon has partly done for the CIA and other agencies; even the UK is using Amazon GovCloud. But I get goosebumps. You can go on my LinkedIn, actually.
Hugi Aegisberg:I posted it on my LinkedIn if you want to find it. It's the second post.
Henrik Göthberg:But I'm getting goosebumps, because when you dig down into the philosophy of what they want to do in the AI-verkstaden, they actually are this mature, right? They understand that you can have some dumbass transcription service that you can use if you're a small municipality. Why not? And then they have a spectrum of, they haven't used this terminology, but I really like it, hardened and vetted building blocks. Yeah. That's a good call, right? Because that allows you to be a proper domain and build your stuff, while getting past some of these problems, what is really inertia, for what you're doing.
Hugi Aegisberg:And you get past the most difficult question for most of these government organizations, which is: yeah, but how do we know that this is safe and secure? Exactly. Yeah.
Anders Arpteg:That was a long rabbit hole. I love my rabbit holes. Perhaps, as time is flying by, we can move on a bit from that one, trusted by the DoD.
Hugi Aegisberg:This one. Yeah, right. It was Peter Clamp that posted it, actually. Iron Bank. It's called Iron Bank. It's a cool name. Iron Bank.
Anders Arpteg:Okay, let's skip a number of things here now, because we don't have the time. But let's take this perhaps: if we move a bit more into trust, security, transparency, and ethical considerations, something that's important for all of us, not just the Swedish central bank. What do you think we could do there? If we go back to the big problem of Europe versus the US: someone famous said something about how, in Europe, we want to be leading in AI regulation, in a very negative sense. And we already see the big tech giants not deploying models or making them available in Europe because of the uncertainty about how to even become legally compliant. And that can be a problem, of course.
Henrik Göthberg:Sure.
Anders Arpteg:How do you see this balance? Do you think we currently have too high a level of regulation in Sweden and Europe, such that it actually stifles the innovation we have today?
Hugi Aegisberg:I'm not sure I'm going to comment on that, because of my role and what I'm representing. I think this is one of those things where, yeah, I probably shouldn't comment.
Anders Arpteg:Okay. Let me then quote Ulf Kristersson, who said we need to deregulate.
unknown:Yeah.
Anders Arpteg:Um and even people from the Swedish AI Commission have said the same. Absolutely.
Hugi Aegisberg:And many people have said it.
Anders Arpteg:I mean, I agree with that, so you don't need to stand for it. A lot of people have said it. And I think no one really wants no regulation; that would be absurd.
Henrik Göthberg:Yeah, yeah.
Anders Arpteg:That would be horrible. But the question is: we know that too much regulation is also a problem. And we also know that it's easy to add new regulation but really hard to remove anything. Absolutely. So as the years pass, we get more and more regulation, even to the point where rules sometimes conflict.
Hugi Aegisberg:Most regulation has the best intentions, yes, and a lot of it comes from there. And then, yes, it's hard to remove.
Henrik Göthberg:Let me go into the same topic from a slightly different angle. We had Luis here from Assa Abloy, who is responsible for AI compliance for their external products, and he speaks warmly about harmonized standardization. So the problem is not the regulation itself; it's the legal uncertainty and the lack of clarified standards for what it means, right? It's too ambiguous; we don't know what it means. A way to think about that: when you're doing regulation, the real problem is that the definition of done, so to speak, is not in place. We're putting things out there, but there's too much legal uncertainty to navigate. So I think it's fair to ask: how can we be much stronger at putting out not only the regulation itself, but also the standards that go with it? That's key.
Hugi Aegisberg:So the legal uncertainty is the problem. A lot of the time it is. Sometimes regulation is just regulation, but legal uncertainty is most certainly often the problem.
Henrik Göthberg:Yeah. And I think if we can frame it like that: we don't need less regulation or more regulation, we need less legal uncertainty.
Hugi Aegisberg:Yeah.
Henrik Göthberg:I mean, that is obvious, right?
Hugi Aegisberg:And when new technologies come out, where you can go into hypotheticals, things get real murky real fast. I've heard people use the argument that if you store a large language model on your servers, and that model will deterministically output personal information about a particular person every time you ask a certain question, then you basically have that information on your servers and are hence breaking GDPR. You can go down rabbit holes that become stupid, right? It becomes completely useless. Absolutely, completely useless. And this is what I'm saying: what is needed is basically, yes, you can interpret it like that if you enjoy that, but here is an official clarification. This has already been dealt with and clarified.
Henrik Göthberg:This is the clarification. This is the framing of what the documents are and what you need to do in relation to the different categorizations. Absolutely. And here's how you do it. Absolutely. So we need less legal uncertainty. I'm not even saying we should take away any regulation, but we should definitely take away the legal uncertainty that goes with it. And then the question is: is it wise to bring in regulations that create so much legal uncertainty? It's a flawed process if you can drop something that is allowed to carry that much legal uncertainty; then it's not ready.
Anders Arpteg:So perhaps one more, even more difficult question. It's not a technical one, and not a legal one. Right. But being at the Swedish central bank, you look at the job markets and the financial markets a lot, and it would be interesting if you have any thoughts on how AI will potentially impact the job markets and the financial system, and what the impact might be in the coming years. Any thoughts on that?
Hugi Aegisberg:Yeah, small questions. Okay. But let me say this again as a caveat: I can't speak on behalf of the Riksbank, and if I give my opinions, they will be interpreted in a certain way. But I'll say this. If you look at the Bank for International Settlements again, they put out a lot of reports, and one of the things they're concerned with is this question. They've looked at a meta-study of what different academics and researchers in the space are saying about the productivity gain that's going to come from AI. And it really runs the gamut. You have people saying it's going to increase productivity by hundreds of percent, 200%, 300%, like the McKinsey report said, something like that. And then you have some very reputable people saying, ah, it's more like 10 or 15%. And those people are basically saying: look, this is like every other technology. It might hold potential, but the implementation is not going to be as easy as you think, guys; calm down a little. And other people are saying this is going to completely revolutionize the space. Honestly, I don't think any central bank really has an opinion on exactly what they think, because that would be guesswork. Isn't that also what we do, to a large extent? Yeah, but you only do guesswork when it's useful for policy: guessing and speculating about what this is going to mean. There are a lot of different alternative scenarios, and you need to account for them.
Anders Arpteg:I guess one of the things you need to forecast, at least, is unemployment rates, right? And, sure, this would potentially be heavily impacted by AI, right? Potentially.
Hugi Aegisberg:Yeah. And I would say this: I know, and I believe, that they do model these things, but of course they will look for indicators, not hypotheticals. That's what I'll say, because that's what I know about. It's a different thing to do scenario analysis based on indicators rather than on speculation. For example, one of the things I always think about if I'm going into media, or going to talk to anyone on a podcast, is basically: don't speculate. Okay, why is that? Here we speculate a lot about all sorts of stuff, but now we're actually in the domain of what the Riksbank does, and then I don't speculate. The reason is that speculation is not useful for stability. I agree with that.
Anders Arpteg:But okay, let's go beyond the central bank and think about your kid, for example. Right. He's going to grow up soon; he's going to have to choose an education. Yeah. What would you recommend he study?
Hugi Aegisberg:Let me put it like this. I, as somebody who uses a lot of AI agents for a lot of stuff in my own tinkering, whatever I'm working on outside of my job, have come to realize that structured thinking is more important than ever. Being structured in your thinking, and being able to understand a lot of complexity and how a lot of things work together, is more important than ever. Because think about this: if you give a random person a hundred people that work for them and tell them, create something useful, of value, how easy is that? Okay, suddenly you have a hundred people working for you, go. You're not going to automatically be able to create value just because you have a hundred people working for you. It's hard. You need to think a lot about how to create value and what is actually valuable. And teaching people that skill, that ability to understand a lot of information and process it, actually, I think, requires them to go through something that is pretty close to a regular education. Because guess what, you need to train your brain in order to do that. We can go back to how AI models are trained as an analogy, because we're an AI podcast, right? Training them on poetry actually makes them better at chemistry. It's a generalized thing. So what I'll say is this: don't go to university and waste your time doing something easy just to get a degree. That's dead, I think. Dead and useless, and probably has been for quite some time. Go to university and do something that's hard, hard for you, something that challenges you. And of course, if you also find it fun, that's even better. If it's hard and you find it enjoyable, awesome. If it's easy and you find it enjoyable, just go do it, man. You don't need university. Why are you wasting your time? Just go do the thing.
So, yeah, I think I think education and to do hard things that actually really, really challenge your brain. Yeah.
Henrik Göthberg:I like that view, and I also like how it answers the argument on structured thinking: that we don't need to learn how to code because they will do it for us, or we don't need to learn this and this and this, and it sort of boils down to, oh, we should be more on the humanities. Yes, I can see that. But I think what you just said is a good analogy. So imagine you are dealing with 100 agent co-workers in 20 years. Then it's really about how to deconstruct and decompose the problem, sorting problems, sorting stuff like this. And then you're back to fundamental STEM in terms of complex thinking and systems thinking theory. It's not like you are gonna do all the work, but you damn well need to orchestrate some of this work, and ever more of it.
Hugi Aegisberg:Yeah, and you need to be able to ask the right questions. And if you don't have any background, asking the right questions is really hard.
Henrik Göthberg:That's also a very simple analogy. Where do we get the right critical thinking in order to frame questions and objective functions?
Hugi Aegisberg:Yeah, that's right. That's right. And I also have another answer to this, which is: do anything relational.
Anders Arpteg:So people-relational? Relationships?
Hugi Aegisberg:Yes, people-relational. Do anything relational, because here's the thing. One of the things we haven't talked about is that for a while I did a completely different thing in my life. I was actually in arts and culture, and I ran an art center in Frihamnen in Stockholm. I co-founded an art center in Frihamnen called Blivande, which is still there. I can tell you this: it doesn't matter how smart the AI is, the AI cannot run an art center. Because the whole complexity of it is the interrelational human dynamics of the thing. Somebody's ego is bruised, somebody doesn't like this person, somebody has a really big, ambitious project, and you know that they're a bit difficult to work with, but it's still better for the whole to let them do it, because it's going to attract so much beauty around this thing. There's no way you can outsource that sort of work to something that is not flesh and bone and able to go up to somebody and give them a hug. You just can't do it. So anything that is relational, truly relational, I think is very safe from AI disruption.
Anders Arpteg:Could it be AI-relational as well, you think? Well, let me back that up a bit. One way that I usually speak about this is to use OpenAI's AGI ladder. They have five levels of what we need to make AI good at before we have AGI. Level one is basically where we are today; on the rest, AI is not really better than humans yet. Level one is what they call conversational, or just knowledge management: being able to work with large amounts of data in some way. Then we have reasoning, then autonomous, the agentic part, then innovation, and then organizational.
Henrik Göthberg:Yeah.
Anders Arpteg:So in some sense, AI is starting from the bottom. Humans cover all five of these levels, but now they can start to delegate more and more to AI for the bottom layers, potentially. We're still mainly on level one, where AI is actually better than humans.
Henrik Göthberg:Yeah.
Anders Arpteg:But then, as years pass and it gets better at reasoning and also agentic tasks, humans will actually move up the ladder. Yeah, meaning they perhaps will have a team of agents working for them. So you become an agent manager, perhaps, as a human, and every human potentially has a set of 10 or 100 agents that they need to manage somehow. And just because you have 100 people, as you said, doesn't mean you can actually create value. But if you have the skill to manage people, or AIs, you potentially will be successful in the future.
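[Editor's note: the five-level ladder Anders refers to can be sketched as a simple data structure. The level names follow OpenAI's published five-level framing as described in the episode; the `delegable_levels` helper and the numeric `current_ai_level` idea are illustrative assumptions, not anything from the conversation.]

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """OpenAI's five-level AGI framing, as described in the episode."""
    CHATBOTS = 1       # conversational AI / knowledge management
    REASONERS = 2      # human-level reasoning and problem solving
    AGENTS = 3         # autonomous, agentic task execution
    INNOVATORS = 4     # AI that can come up with new things
    ORGANIZATIONS = 5  # AI that can do the work of an organization

def delegable_levels(current_ai_level: int) -> list[AGILevel]:
    """Per the 'humans move up the ladder' idea: everything at or
    below the level AI currently handles well can be delegated."""
    return [lvl for lvl in AGILevel if lvl <= current_ai_level]

# With AI mainly at level one today, only chatbot-style work is delegable:
print([lvl.name for lvl in delegable_levels(1)])  # ['CHATBOTS']
```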
Hugi Aegisberg:I actually think that having an internalized sense of empathy, in my experience, makes you a better prompter. I guess I can agree. Because so much of it has to do with putting yourself in the mindset of a fictional person.
Henrik Göthberg:Right.
Hugi Aegisberg:I had this great thing. I was working on... so I don't really know Rust, but I'm using Rust right now, and I built a whole back end in Rust, which was so much fun because it solved a real problem. The thing was running really slowly, and I was like, what if I rebuild this in Rust? Solved it. Amazing. Beautiful, isn't it? And one of the things that I did when I was building that is I was like, okay, I need this thing, long story, I need this thing to work with a sort of template language and so on. And I thought, okay, who would I want to review this to understand if I'm doing this in a stupid way? And I was like, you know who I'd like to review this? Linus Torvalds. Because he would really, really give it to me, you know. So I just thought, okay, I'm going to imagine Linus reviewing this. And I had the AI just like: you are Linus Torvalds, don't hold back. Did you try this? Yes. It was super useful. It was super useful. And that set me on a track where I was like, okay, so this comes from the Linus Torvalds archetype, right? But I'm not sure this is right, because the Linus Torvalds archetype is very full of himself, right? So I need somebody who's also wise to basically go, okay, now I've used this archetype to break through to me; I need another archetype to take it and soften it, because I don't think this exact approach is necessarily right. So then I had a wise programmer like Woz, you know, sort of like Wozniak, coming in and going, yeah, I see what he's doing, but he's being an asshole. Here's how you can think about it; it's a little bit different. Like that. My approach to this requires me to be able to actually empathize with people that don't exist. I need to have an empathetic creation of an archetype in my head that I'm actually having a conversation with.
If you don't have a developed sense of empathy and relational being, I think that's really hard. And I sometimes see people struggle with it who are very engineer-y engineers. They don't really get talking to an AI. And I think it's because they're not really relational. That's my hypothesis.
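[Editor's note: Hugi's archetype-review trick is, mechanically, persona prompting. A minimal sketch, assuming the common OpenAI-style chat message format (a list of role/content dicts); the function name and the elided work strings are illustrative, and the actual model call is deliberately left out.]

```python
def persona_review_messages(persona: str, style_note: str, work: str) -> list[dict]:
    """Build a chat message list that asks a model to review a piece
    of work in the voice of a chosen archetype."""
    return [
        {"role": "system",
         "content": f"You are {persona}. {style_note} "
                    "Review the user's work honestly; don't hold back."},
        {"role": "user", "content": work},
    ]

# First pass: a blunt reviewer archetype to surface real problems.
harsh = persona_review_messages(
    "Linus Torvalds",
    "You are famously direct and intolerant of sloppy design.",
    "Here is my Rust template-engine back end: ...",
)

# Second pass: a gentler archetype to soften and reframe the critique.
gentle = persona_review_messages(
    "Steve Wozniak",
    "You are kind, wise, and constructive.",
    "Here is the harsh review I got; help me see what's useful in it: ...",
)
```

The point of the two-pass structure is exactly what Hugi describes: one archetype breaks through, a second one tempers it.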
Henrik Göthberg:And then we come back to something we talked about in another pod, which is an important skill, which is fundamentally the communication and delegation skill. Sure. And Henrik Kniberg made a good comment, I think it was Henrik who made it, which basically said that some super, super good software engineers can 20x themselves with a Claude setup, and others who are equally good as engineers are not getting shit out of it. Absolutely. And then we realized what we are dealing with here: some people cannot communicate or instruct or delegate, and some people have that skill. Yep. And all of a sudden, wow. So the engineers who had the communication, leadership, and delegation skills, they blossomed. And the ones who didn't have that fundamental skill had more problems. Absolutely. So I think that also says something about how communication, empathy, leadership, and delegation become fundamental traits.
Hugi Aegisberg:Yeah. And at least the models as they are right now are not good at this, because they're psychopathic. You're not a good leader if you're psychopathic. "Yeah, that's great, man. Yeah, that's great, man. Yeah, that's great." Okay, well, now you just have a sycophant running things.
Henrik Göthberg:Interesting stuff.
Anders Arpteg:Yes. Awesome. Hugi, awesome topics. And we had a lot more topics that we would like to discuss, but the time is flying by, so I'd like to end with an even more philosophical topic. Absolutely. Let's get into it. So we spoke a bit about AGI. Perhaps the first question: do you believe it will happen? And if so, approximately in how many years?
Hugi Aegisberg:I think that AGI, the way people understand it, would require a breakthrough into another model paradigm that we're currently not in right now. So it becomes sort of the same question as when we didn't think the LLM breakthrough would happen. How long will that take? People then were saying maybe it's 30 years off. I don't know. I think it's very difficult to say, because I really do think it's going to require at least one, probably multiple, breakthroughs the size of "Attention Is All You Need". We needed a couple more papers like that. A couple more papers, and then not just the papers, but a little bit of the dumb luck of having some guy with a crackpot idea like the one behind the chatbot, like ChatGPT: what if you feed the conversation back into itself and just, you know, talk to that? And it works. And everyone who was working with this guy, I don't remember his name because he's sort of lost to history, I mean, we know his name, but I don't remember it, everyone thought his idea was sort of dumb. What, you're just gonna feed a conversation back into it, and you think that's gonna work? And then it worked. So you need luck, and then you need the right innovation. It was Ilya Sutskever who basically came up with the GPT model, but then for ChatGPT, the thing is, you have completions, but in order to actually make it into what feels like intelligence, you needed to understand how you can turn that into chat. Because completions on their own didn't really get us there. We had that for a while, you know, but it wasn't until people saw the chat form that it took off.
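[Editor's note: the chat-versus-completions point can be made concrete. A chat interface is a completion model that is fed the whole conversation back on every turn; the model itself has no memory. A minimal sketch, with a stub standing in for a real completion API; all names here are illustrative.]

```python
def fake_complete(prompt: str) -> str:
    """Stub for a text-completion model; a real system would call an LLM here."""
    return f"[reply to {prompt.count('User:')} user turn(s)]"

def chat(history: list[str], user_msg: str) -> str:
    """One chat turn: append the user's message, feed the WHOLE
    transcript so far back into the completion model, append its reply."""
    history.append(f"User: {user_msg}")
    prompt = "\n".join(history) + "\nAssistant:"
    reply = fake_complete(prompt)
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
chat(history, "Hello!")
chat(history, "What makes you a chatbot?")
# After two turns the transcript holds four lines; that growing
# transcript, re-submitted each time, is what feels like a conversation.
```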
Henrik Göthberg:Cool. Okay, so at some point it will happen, but there are some steps left. 5, 10, 30 years, it really depends on how fast those steps come.
Hugi Aegisberg:I think that there's going to be some sort of generalized artificial intelligence that we could call artificial generalized intelligence. However, I think that generalization might not be quite what we think it is.
Anders Arpteg:And you don't believe Elon Musk saying that with Grok 5 this year, we'll have AGI? No. Good. Okay, but imagine at some point we will have that. And you can take whatever definition; I still like Sam Altman's old definition that AGI happens when an AI can perform equally to an average human coworker. And that's certainly not the case today.
Hugi Aegisberg:Yeah, but do they smell good?
Anders Arpteg:Good question.
Hugi Aegisberg:You know, I'd say this: I think that, surprisingly fast, whatever that technology is, it's going to start to seem mundane. And I don't think the experience of being human, as in how it feels, is necessarily going to change all that much, because we'll find ways to live alongside it. We've done this the whole time; we'll find ways in which to exist in that space. However, we will increasingly realize that different things were not as important to being human as we thought they were. And I think one thing that's sort of starting to show is the whole cogito ergo sum thing. Dreaming, imagining, I mean, that's basically random generation. So it seems like, okay, machines can do that. I'm not worried about that. It doesn't take away from my humanity, because my humanity is in my feeling of myself and my feeling of the people around me.
Anders Arpteg:Okay, so you basically answered my question already here. But then imagine that it will come at some point, five, ten, or whatever number of years. And then we can think of two extremes. One is the dystopian: we have The Matrix and The Terminator, and the AI is trying to kill all humans. And we could have the other extreme as well: basically coming to a world of abundance, where AI has cured cancer and fixed the energy crisis, fixed fusion energy, and basically makes education super easy for every human, so they are so much more knowledgeable. And we basically live in a world where the price of goods and services goes close to zero, so we don't really have to work unless we really want to. Potentially that could be more of a utopian view. On that spectrum, where do you think AGI could go?
Hugi Aegisberg:I think where we land on that spectrum depends less on the technology and more on how we wield it, because I'm not scared of AI, I'm scared of what humans will do with AI. So what we need is real consequences for people who do evil stuff with AI. And we cannot tolerate it.
Anders Arpteg:Yeah, yeah. Agree. That's usually what I say as well. I'm not afraid about the future when we'll have AIs that can supervise other AIs; that would be a good thing. But I'm really scared about evil people abusing AI, because we don't have a good way to supervise that today, and certainly not as humans. So there will be a period where I will be really scared about people abusing AI. Agreed. Cool, thank you so much for coming here, Hugi Aegisberg. It was a true pleasure, and I hope you can stay on for a little bit longer to speak more about a lot of the things that we started to speak about before the podcast as well. I'll hang out for a while. And thank you. Super cool, thank you so much.
Hugi Aegisberg:Thank you very much for that.