
AIAW Podcast
E161 - Responsible Use of AI - Luis Martinez
In this episode, we’re joined by Luis Martínez, AI Compliance Expert at Assa Abloy, for a thought-provoking conversation on one of the most urgent topics in the field: the responsible use of AI. With a background in telecommunications and regulatory affairs, Luis brings a unique perspective to the challenges of building trustworthy AI systems inside complex, global organizations. We explore what it actually means to be AI compliant in practice, how harmonized standards and certifications can accelerate adoption, and why AI governance needs to evolve beyond checkboxes into something more agile and integrated. Luis also shares his views on regulatory sandboxes, the contrasting approaches of the US and the EU, and the ethical dilemmas that loom as AI systems grow more capable. Finally, we look ahead to the possible future of Artificial General Intelligence—will it usher in a dystopian surveillance state or a utopia of human creativity and abundance? A must-listen for anyone navigating the intersection of AI, policy, ethics, and enterprise transformation.
Follow us on YouTube: https://www.youtube.com/@aiawpodcast
...internal tools in practice, right? No, but it establishes a distinction between the deployer and the provider of an AI system.
Anders Arpteg:So, in short, if I understand you correctly, you're saying that when actually acquiring or building or getting some external tool, then we have a somewhat better grasp of what the best practices should be, but not so much for internal tooling or development. Is that what you're saying?
Luis Martinez:Yeah, the obligations again. If we look again at the AI Act, the obligations about what to do when placing a product on the market are for the provider, and most of the obligations are for the providers of high-risk systems. So it's like, okay, there are obligations for providers, but a big portion of these obligations are related to those developing high-risk AI systems, and there are a lot of categorizations. So the point here is that we as companies shouldn't overthink the governance and, let's say, the compliance when dealing with AI. My invitation is to make it as simple as possible and start thinking: okay, are we going to develop AI to be commercialized, to make it available in, for example, the European market, yes or no? If the answer is yes, okay, you are a provider. So we need to look at what the obligations of a provider are, and most of them are related to, let's say, trustworthiness.
Anders Arpteg:But does the same apply for an internal provider in a company then?
Luis Martinez:So if you build something for the employees, in some way, yes, but in this context this will be just best practice. So it doesn't have to go through, let's say, the conformity assessment process required for a product that will be commercialized, that will be placed on the market and made available for public use.
Anders Arpteg:This was news to me. So you mean that if we develop some internal tooling, we don't have to go through the same kind of compliance process as if we were to procure an external tool?
Luis Martinez:That's the analysis that we are establishing right now: that the implementation of this regulation, or these obligations from the regulation, should be applied as a best practice if we are talking about the development of internal tools. If we are talking about the deployment of tools developed by external actors, let's say Microsoft, AWS, OpenAI, they are the providers of that solution, so it's their obligation to fulfill the requirements and go through the certification process, if it's required.
Anders Arpteg:But if a hospital develops a new tool to do AI surgery, for example, and then they deploy it internally, and then something bad happens, would the AI Act not be just as applicable in that case?
Luis Martinez:It is, but the context is different, because it's different from, for example, developing a chatbot or, let's say, some tool for your intranet, the intranet of the organization, where you are not facing external customers or external actors with this type of solution. In the example you've given, Anders, the main goal of the app being developed is to face an external customer. Probably that's not the best word to use when talking about a patient.
Luis Martinez:It's still high risk, right? It's high risk, and you're facing an external actor, an external stakeholder outside the hospital organization.
Anders Arpteg:That's different, that's not internal use, because you're facing a customer, and in that context you're placing the product on the market just because the user, so to speak, is an external customer. Exactly. So if, let's say, a construction site is using AI to detect risks or control an elevator on the construction site, so AI is now controlling the safety of that risk, and then suddenly it breaks, and a person in that company is killed because the AI system controlling the elevator, so to speak, is broken?
Luis Martinez:In that scenario, we are talking about a safety component, AI as a safety component, and that goes directly to high risk and falls under the scope of the AI Act.
Anders Arpteg:So even if the user is internal, it would still be applicable.
Luis Martinez:Yeah, in this context, yes, because it's seen as a safety component. The AI system is a key element in guaranteeing the safe operation of this type of solution or system.
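To make this triage concrete, here is a minimal sketch in Python of the reasoning described above: provider versus deployer, the internal-use carve-out, and the safety-component exception. The class and function names are hypothetical illustrations of the conversation's logic, not the actual legal test, and real cases need legal review.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Facts about a system, as discussed in the conversation."""
    developed_in_house: bool    # built by us, rather than bought from an external vendor
    placed_on_eu_market: bool   # commercialized / made available on the EU market
    faces_external_users: bool  # patients, customers, anyone outside the organization
    safety_component: bool      # e.g. AI controlling an elevator on a construction site

def triage(system: AISystem) -> str:
    """Rough first-pass reading of the roles discussed; not legal advice."""
    if system.safety_component:
        # Safety components go directly to high risk, even for internal use.
        return "provider obligations, high-risk track"
    if system.developed_in_house and (system.placed_on_eu_market or system.faces_external_users):
        # Facing external users counts as placing the product on the market.
        return "provider obligations apply"
    if system.developed_in_house:
        # Internal tooling for employees: treat provider obligations as best practice.
        return "best practice, no conformity assessment required"
    # Bought from an external provider (e.g. Microsoft, AWS, OpenAI): we are the deployer.
    return "deployer obligations apply"
```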
Anders Arpteg:Really cool. And you had these kinds of discussions yesterday, right?
Luis Martinez:Yeah, we had these discussions. It was interesting, because the topic of one of the presentations during the session was AI governance, and before that there was another presentation about the implementation of AI at SVT, and SVT, more or less, has a kind of non-existing governance.
Anders Arpteg:Swedish television.
Luis Martinez:Yeah, exactly. They don't have clear rules on governance. So we triggered this discussion after these two presentations, where it was presented how SVT is working and dealing with AI in a kind of anarchist approach.
Luis Martinez:I'm not going to label it that way, but close to it, and then the relevance of governing AI, which was presented in the last session. So we started this discussion, and I would say that this is still a work in progress. Many of the questions you are asking here are the questions that we are discussing internally. The invitation here is probably to start working in a collaborative environment with other experts in the field: engineering, R&D, legal, compliance. Because the analysis of these kinds of situations, the analysis of these potential business cases, requires, let's say, a holistic approach. It's not only about my interpretation, based on my first impressions of how the system should work.
Luis Martinez:We should incorporate other perspectives.
Anders Arpteg:And what was the event yesterday?
Luis Martinez:By the way, the event was the Top 33, organized by Hyperight. And yeah, it was a cool event, because the idea was to talk about AI, agentic AI, how to incorporate these elements in the organization. And, as was mentioned during the session, it's a perfect scenario to realize that we are all in the same boat: we are facing the same challenges, we are facing the same problems. These types of events open the floor for collaborative environments where we can work together on finding solutions and alternatives to the problems we are facing in this new field.
Anders Arpteg:Cool. I feel we have so many uncertain answers about regulation like the AI Act, which is actually enacted already, but we still don't really know how to comply with it, which is kind of an interesting situation, I think, for many companies. But to discuss that a bit more, we are really proud to have you here, Luis Martinez, who is an AI compliance expert at Assa Abloy. You've been working at a number of really prestigious companies, like Volvo and Ericsson, and you also received the DAIR Award for Responsible AI Person of the Year last year. Congrats on that.
Luis Martinez:Thank you.
Anders Arpteg:Awesome. Well, with that, we'd love to get into some of the discussions here, but before we get more into the details, perhaps you can give a quick introduction to yourself. Who is Luis Martinez, and how did you get into the role that you have today?
Luis Martinez:Okay, Luis Martinez is a Colombian engineer who came to Sweden following the dream of doing a PhD in wireless communications. That's the reason why I came to Sweden. I'm a family guy, wife, two children, really interested in contributing to the industry and understanding the compliance challenges connected to AI. Passionate about, let's say, taking a holistic approach when dealing with different types of problems, because usually the solution for a specific situation or a specific challenge doesn't come directly from one specific perspective. It's important to try to cover the different perspectives in order to find a solution, or to fine-tune a solution, that can help us overcome specific challenges. That's been the approach I've been applying during my professional career, and here in Sweden it's natural, because it's part of the way of working here: the consensus approach we were discussing before this.
Anders Arpteg:And when did you come to Sweden, by the way?
Luis Martinez:Fifteen years ago. It was for the PhD, and the original plan was to go back to Colombia, but life happens, exactly.
Luis Martinez:My wife and I found that this is a nice, stable place to have a family, to raise children, and that's why we have two Swedish children in the family.
Anders Arpteg:Awesome. And perhaps you can just elaborate a bit more: what do you do today in your role at Assa Abloy?
Luis Martinez:A little bit of everything now, but when I was recruited by Assa Abloy, the main challenge of the organization was to build the compliance framework.
Anders Arpteg:And perhaps you can just quickly remind people, what is really Assa Abloy and what do they do?
Luis Martinez:Assa Abloy is the leading company in the production of access systems, access solutions, locks. Yeah, we are famous for locks and keys, but this is one part of our extensive portfolio. Assa Abloy is a company that's been growing since, okay, it's a 30-year-old company.
Anders Arpteg:And how many employees?
Luis Martinez:63,000.
Anders Arpteg:Jesus Christ.
Luis Martinez:Yeah, it's big, and it's growing every second week just by acquiring new companies. The organization has this strategy of looking for organic growth, but also growing by acquiring other companies. So under the Assa Abloy umbrella, or brand, you can find a lot of big and small companies, all of them related to access systems, access solutions, identification. HID, for example, and Yale are part of this consortium.
Anders Arpteg:Awesome.
Henrik Göthberg:So Assa Abloy, for many of us, is known as the consumer brand, Assa Abloy, the locks and the keys and all that. But of course there is a quite large portion of the portfolio which is more B2B, more industrial, like organizing the access system in a jail as a larger type of system.
Henrik Göthberg:So it's B2B and B2C in that sense, but I think that's the thing that is not well known outside the industry, so to speak. Everybody knows Assa Abloy in Sweden, but we recognize Assa Abloy as the B2C part only. I'm like that as well, if I'm not looking carefully at the website.
Luis Martinez:Yeah, actually, when I moved to Assa Abloy, I told my friends, I told my family: okay, I'm moving from Volvo Cars, where I was working before, to Assa Abloy. When you talk about Volvo, it's like immediately you think, okay, this is the product, there is a car, this is the brand. Assa Abloy is like, what is that? What's Assa Abloy? And those that knew about Assa Abloy, the first thing they asked me is like, it's the key company, right?
Anders Arpteg:Yeah, it's the key company, but it's much more.
Luis Martinez:And the first question is, what are you going to do in a key company? So it's like, it's hardware-based, it's heavy stuff.
Henrik Göthberg:What was the AI here? Where is the AI here?
Luis Martinez:Really, are you going to help them make the metallic key smarter? What's in it? And then I realized that it's a big challenge, because the first couple of months I was in this learning process of getting to know the organization and realizing that this is a company that, yeah, started as a hardware company, the keys, the locks, the typical and classical products, but it's evolving, and it's interested in becoming more and more digital.
Luis Martinez:And part of the challenge for this organization is: okay, if we are entering the digital area, we also need to be sure that we are compliant. We have a strong culture of compliance and quality on the hardware side of our business, and we have to follow that approach when dealing with the digital world. That's why they looked for someone who can help them, or guide them, in this process: what needs to be done, implemented and set in place if we want to be compliant in the digital area, especially when we talk about AI. Because AI touches basically everything, and, from the Assa Abloy perspective, it touches processes like biometrics. Biometrics is a big thing for a company like Assa Abloy. When access systems are in place, the identification systems, the access solutions, they all require a lot of digital components.
Anders Arpteg:Sorry for this kind of very general question, but then in Assa Abloy, what needs to be done to be compliant?
Luis Martinez:To follow the rules and follow the framework, the compliance framework. Of course, we need to understand the regulation, what the regulation states. And, let's say, in order to show compliance with most of these regulations, we also need to identify the available standards that make the process of showing compliance easier. I would say that not all the compliance processes have a standard, but in most of these processes we can rely on existing standards to structure our compliance framework and be able to go to the market with a certified product that at least shows our customers that, okay, we are following the rules when developing this product.
Anders Arpteg:So you're speaking about ISO standards now to become compliant, or what kind of standards are you thinking about?
Luis Martinez:ISO standards, harmonized standards from the European perspective, IEC standards. These kinds of standards are here for, let's say, helping the market, the consumers, to rely on the quality of the products developed and placed on the market, that they fulfill a certain level of legal requirements. Not only quality in the sense of how good the product is in terms of construction or implementation, but how this product can guarantee that we fulfill requirements in terms of privacy, in terms of security, in terms of safety. Those are the relevant elements when dealing with compliance and regulations.
Anders Arpteg:And, more concretely, what do you need to do? I mean, to ensure that you, for example, follow the GDPR kind of privacy compliance rules, just to take some example. What do you in practice need to do as a company, if you were to give some concrete example for companies that want to be compliant?
Luis Martinez:The first step is to look externally: what are the applicable regulations, depending on the market? Because every market or every region might pose different types of requirements, different types of obligations. So the first thing is to identify the relevant regulations in that market. If we are going to place a product in Europe, what do we need to do in this case? Second, or let's say as part of this process, we need to go into the details: from this regulation, what specifically applies to our product?
Luis Martinez:If we are, let's say, relating this to the AI Act: if we are going to develop an AI system and place it in Europe, we need to say, okay, the AI Act is the relevant regulation to look at. Now we need to see what's applicable for us in this context. Are we going to develop a high-risk or a non-high-risk AI system? Depending on that, we identify the obligations, the requirements from the regulation. Then, once we have identified those, we need to think: what do we need to do to show compliance? What are the requirements from the authorities to show compliance? What kind of reports, what kind of documentation, what kind of processes do we need to implement?
Anders Arpteg:Because that would be a major thing, right? To make sure that you have documentation. So if you get sued, or there is some kind of audit, then you need to be able to show that, right?
Luis Martinez:Yeah, exactly, so that we can show that we are following the regulation: the documentation, the processes, the certificates and all this stuff. And to help us in that process, we rely on standards. So, okay, there is a standard, or there are harmonized standards, that operationalize what the regulation states and make it understandable for R&D, for product development: what are the specific things I need to do in order to fulfill what the regulation says? The standard is, let's say, the translation into concrete words, into concrete parameters, of what the regulation says. So we follow the standards, we apply the standards, we secure that we follow all the steps and requirements set in the standard. And then, if the regulation states it, we need to go through a certification process, a third-party conformity assessment. So some external reviewer, a notified body, will look at the documents, the processes, all the information we provide them according to the standard, and say: okay, you are following the rules, you are compliant, here is the CE mark, or here is the certificate, and then you can place this product on the market.
Luis Martinez:But besides, let's say, this process related to placing the product on the market, we also need to support the organization in understanding the regulation. We need to secure that everyone within the organization understands what they have to do and why they have to do it. We also need to make things easy, practical, understandable for them. So if we identify something in the regulation or in the standard, for example, coming back to the AI Act, the prohibited practices: you can read the text and say, okay, there are eight types of prohibited practices, and someone will read it and say, yeah, but how can I interpret this when I'm developing something? So we need to help the organization land that knowledge and prepare some material, documents, checklists, at least something that can be concretely used.
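To illustrate the "land the regulation as a checklist" idea, here is a minimal sketch, assuming a hypothetical internal tool in Python. The question texts are loose paraphrases of the AI Act's Article 5 prohibited-practice themes, not the legal wording, and the data structure is an invented example of how such a checklist could be encoded for design reviews.

```python
from dataclasses import dataclass

@dataclass
class CheckItem:
    """One concrete yes/no question derived from a regulation article."""
    article: str   # where the obligation comes from
    question: str  # plain-language question a product team can answer
    blocking: bool # True means a "yes" answer stops the project

# Illustrative paraphrases only; a real checklist must be drafted with legal review.
PROHIBITED_PRACTICES = [
    CheckItem("AI Act Art. 5", "Does the system use subliminal or manipulative techniques to distort behavior?", True),
    CheckItem("AI Act Art. 5", "Does it exploit vulnerabilities of specific groups, such as age or disability?", True),
    CheckItem("AI Act Art. 5", "Does it perform social scoring of natural persons?", True),
    CheckItem("AI Act Art. 5", "Does it build facial recognition databases through untargeted scraping?", True),
]

def review(answers: dict[str, bool]) -> list[str]:
    """Return the blocking findings from a filled-in checklist."""
    return [item.question for item in PROHIBITED_PRACTICES
            if item.blocking and answers.get(item.question, False)]

# Usage: a product team answers the questions; any "yes" on a blocking item is escalated.
findings = review({"Does it perform social scoring of natural persons?": False})
print(findings)  # [] -> nothing to escalate in this toy example
```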
Anders Arpteg:I mean, is that a role that you have as an employee, or is it someone else, through the standards, that can do that? Or how do you get this kind of concrete help, with the checklists or something else, to do this?
Luis Martinez:In some cases it's part of my work to start drafting this. We can also take a look at official material published, for example, by the European Commission, on how to land this in concrete stuff.
Anders Arpteg:Do you think the material provided by the EU Commission is useful to the organization?
Luis Martinez:They have a good intention, absolutely no question there, I cannot question the intention of that kind of material. But from my perspective, they miss the point of making things accessible for non-legal experts. So, for example, when we talk about prohibited practices: besides what was stated in the regulation, they also published guidelines later on, guidelines to identify prohibited practices, and if I'm not wrong it was 108 pages.
Luis Martinez:That was supposed to help you understand what a prohibited practice is. That's not useful, that's not concrete stuff. So when I approach the people in the organization and say, okay, here is the guideline, who has read this guideline? One or two guys in a team of 20. How much did you understand? Nothing, because it wasn't written in a normal language, in a simple language. So my job was to take that material, read through the 108 pages, and try to make it understandable and accessible for people that are not dealing with this daily, to summarize it somehow.
Luis Martinez:But the human oversight is always important. You cannot rely only on the interpretation of the AI.
Anders Arpteg:So, since you're an expert in this as well: I think a lot of companies want to at least minimize the risk of being sued and having to pay a fine, and I guess you agree that being compliant is always a risk game. I mean, you can never be 100% compliant, or would you agree with that? It's always a risk that you get fined, right?
Luis Martinez:Yeah, but the work is to be compliant, always compliant. That's the aim.
Anders Arpteg:But do you think anyone can be 100% compliant, with zero risk of getting fined?
Luis Martinez:We can do our best and we can get close to it. I remember the reliability levels on networks: five nines, 99.999% of the time you are successful.
Luis Martinez:So not 100%, but a minimal risk. And I would say that the advantage of having regulation and standards is that the standards provide you a guide that brings you closer to being compliant with what the regulation states.
Anders Arpteg:I mean, the standards help and certification helps, right? But still, for a normal company, perhaps, I mean, you're extra good at this, but normal companies can never be sure, right? They need to at least have some kind of risk management, I would say, where they say: we can't be 100% sure if this is high risk or not according to the AI Act, but we think this is the level that we should end up with. And you never know until it actually goes to court properly, right? So you can never be sure, but you have to do the best you can, of course. And if you really want to be 100% sure, you should never deploy anything, right?
Luis Martinez:Yeah, it's like applying the principle of cybersecurity: what's the best way to be 100% cybersecure?
Anders Arpteg:Pull the plug, yeah.
Luis Martinez:Yeah, exactly that's the principle.
Henrik Göthberg:But I was thinking a little bit, you were onto this quite well.
Henrik Göthberg:So a lot of the stuff we do now, it's regulation and compliance, and it's even legal documents to some degree, and you have guidelines, which is even more legal text. And then you get to the point, which I think you pinned down quite excellently, that it becomes not really useful, because it's really hard to interpret.
Henrik Göthberg:I think one of the key questions now is: if we want to be compliant with AI, who should we look at as our target audience, the target person who really needs to understand and work with these topics? And my take on this is very much that one of the key challenges is that we need to take this away from the legal arena and make it into the fundamental compliance-by-design discussion that you have as a product owner building stuff. When we build stuff, what do I need to do practically, as part of my lifecycle, to manage this risk and also to validate that I'm working towards a compliance path? So I think this is the main problem, and I want to understand how you see this: we have a legal text and we have regulations, and it's not useful once we get it into the hands of the people actually building stuff, and they should understand it.
Luis Martinez:Exactly, that's one of the key points. I would say it's one of the most important things: that the people actually in the field, the people developing, the people working with the stuff, understand the implications of what the regulation states. If they get this close contact with the regulation and embrace it in some way, if they incorporate it in the processes and in the way of working, like you say, compliance by design, compliance by default, all these principles about privacy, trustworthiness and all this stuff, it needs to be landed in concrete ways and needs to be, I would say, embraced by those actually developing the technology, actually providing it.
Henrik Göthberg:Let's stay here, because if we zoom out into this compliance problem: if I go into a bank and now I'm the product owner, I'm thinking about building a system that has some AI in it, that has some data in it, all of a sudden.
Henrik Göthberg:Now the first question is, how many compliance directives apply, not only the AI Act? Is it one, is it two, is it ten? So then you come to the next topic: when we do frameworks now, should we do one framework that works a little bit differently for each and every regulation, or do I fundamentally need to flip it into the fundamental use-case lifecycle, having the engineering process as the core guardrail that we now want to look at? Okay, in the ideation phase you have a step to check which compliance regulations apply in your case; in the prototype stage, how can you fail fast? What I'm saying is, I think there is a how-to here. As Anders says, you need to, at some point, really take the headroom of the developer or the product owner, look at their process, and understand how they should incorporate compliance tasks as part of their lifecycle.
Luis Martinez:So how do you do that? I can speak from, let's say, the experience we have at Assa Abloy. There is an internal process, a framework we manage for all the product development activities that we carry out in the organization, and this framework identifies steps with responsible stakeholders. It goes from, let's say, the ideation phase to the post-market monitoring, so it's integrated into the lifecycle of a product.
Luis Martinez:Yes, exactly. We are in the process of updating this framework, because in the beginning it was mainly focused on, let's say, hardware stuff, and on how to incorporate compliance by design in different stages of this framework. It was structured in a way where the main focus, or the main customer for this, was the product development area: how to incorporate different stages, different gateways in the process, to evaluate and assess the compliance requirements associated with those phases. And part of the work we are developing now with the compliance team is how to integrate the digital compliance part. Hardware, yeah, we are okay with that, but how do we integrate data regulation, cyber resilience regulation, artificial intelligence regulation into this context and into the development process?
Anders Arpteg:So how do you do that at Assa Abloy? Do you literally write the documents? Do you have upskilling? What do you do?
Luis Martinez:It's a kind of vast operation with multiple approaches. Of course we need to document stuff, so we are creating the typical stuff: the documents, the guidelines and the material. But we are also approaching the community of our product developers with different training and information processes, so they can embrace and own compliance as part of the development process. It also requires the development of different types of tools that enable them to run some of these compliance assessment processes, so that they can do some assessments on their own. So we need to provide them with the tools, the material, the checklists, the information. It comes in different ways.
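As a sketch of what such a gated lifecycle could look like as a self-service tool, here is a minimal illustration in Python. The stage names and checks are hypothetical; they only mirror the ideation-to-deployment gates discussed here, not Assa Abloy's actual framework.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Gate:
    """A lifecycle gateway with the compliance checks that must pass to proceed."""
    stage: str
    checks: list[Callable[[dict], bool]] = field(default_factory=list)

def regulations_identified(product: dict) -> bool:
    # Ideation: have the applicable regulations been listed for the target markets?
    return bool(product.get("applicable_regulations"))

def risk_class_assessed(product: dict) -> bool:
    # Prototype: has an AI Act risk class been assigned, so we can fail fast?
    return product.get("risk_class") in {"minimal", "limited", "high"}

def conformity_evidence_ready(product: dict) -> bool:
    # Deployment: does the technical documentation exist for audits or certification?
    return bool(product.get("technical_file"))

LIFECYCLE = [
    Gate("ideation", [regulations_identified]),
    Gate("prototype", [risk_class_assessed]),
    Gate("deployment", [conformity_evidence_ready]),
]

def first_blocked_gate(product: dict) -> str | None:
    """Return the first stage whose checks fail, or None if all gates pass."""
    for gate in LIFECYCLE:
        if not all(check(product) for check in gate.checks):
            return gate.stage
    return None

# Usage: a product owner self-assesses and sees where the process stops.
draft = {"applicable_regulations": ["AI Act", "GDPR"], "risk_class": "high"}
print(first_blocked_gate(draft))  # "deployment" -> the technical file is still missing
```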
Anders Arpteg:And is that something you develop in-house, then?
Luis Martinez:There are some things where we rely on external tools, and there are some other tools that we are developing in-house. And this is, I have to say, a work in progress. We are on this journey right now.
Henrik Göthberg:We're on exactly the same journey at Scania, and the core question becomes: if you take the compliance-by-design area, you want to understand how I can assess something and understand risks early, to fail fast, or classify and not do it. And then it comes to the fundamental: we go to deploy, and what are the certifications we need to do now? And the main challenge, and I think it's similar for Assa Abloy and Scania, these large companies, is that in the end it needs to go beyond guidelines. We need to, in a way, build a data engineering or an engineering experience where the lifecycle is in some way checked in, checked and monitored, so we can scale this as a self-service tool. So, first of all, it can't be documentation.
Anders Arpteg:It cannot be documentation?
Henrik Göthberg:Not only. If you want to scale it on a large scale, you need to take it away from that.
Anders Arpteg:Of course you have to have documentation, right?
Henrik Göthberg:You need to have documentation. But what I'm trying to say, if you listen carefully, is that when we are doing it in such a way that we are producing a lot of manual work to get this done, or when the compliance team is supposed to work with every single product developer, we get bottlenecks. It will be very costly to get this off the ground. So how do we create self-service ideas around this, that allow the teams to do assessments, do them in a correct way, and then be able to print their own documentation, so to speak, out of this? So it's a little bit like: how can the team be self-reliant as far as possible around compliance?
Luis Martinez:Yeah, and empower the product developers and the product owners with these, let's say, compliance-related tasks, which will reduce the impact on time to market and will also reduce the potential bottlenecks, as you mentioned, when developing a product. I should mention a previous experience I had when working at Volvo Cars. That was like two, three years ago, when this boom about generative AI came, and there was a need to start governing and controlling what's going on within the organization around generative AI. And one of the ideas, and it actually was implemented; yesterday I realized that the process is still going on, was to create a generative AI committee: a committee with people from regulatory affairs, from legal, from compliance, from data management, that had regular meetings to evaluate and assess different business cases or use cases implemented within the organization.
Anders Arpteg:Is the committee the right way to do it.
Luis Martinez:Is a committee the right way to do it? I would say, based on the experience I have after two or three years, it's not the best way. It doesn't scale well.
Henrik Göthberg:Yeah, exactly.
Luis Martinez:You need to start somewhere, maybe. Yes, you need to start somewhere and at least try to offer a solution. Probably it's not the best, not the optimal.
Henrik Göthberg:The challenge is, of course, when you get everything through a committee, it becomes a massive bottleneck, right?
Anders Arpteg:Yeah, and it's not compliance by design then.
Luis Martinez:No, it's not.
Luis Martinez:But then, gradually, the process was improved by implementing some checklists, some types of internal tools, to try to take some of these compliance decisions, the evaluation or the assessment of some of the products, closer to those developing the product, so that only the critical, unknown, or really complex cases are escalated to the committee.
Henrik Göthberg:But we need to get to the Elon Musk story here soon, yes. Because I really think the problem is when we look at compliance and regulatory work as a side part, and not as a fundamental definition of done in your engineering process. If you can get compliance by design, you can have these things as part of: well, when I'm producing a new car, or if I'm changing the headlights of the car, it's a definition of done that something should be checked, or certified that I can use this component. So I think this is where we need to go a lot deeper into the fundamental: what is your engineering lifecycle? And ultimately, understand how we can do each part just in time, and not as an afterthought.
Anders Arpteg:I guess you can take this story quite well. I will go there shortly, but I'd just like to try out a theory of mine on you. Where I work and have worked: if you actually can define the process properly, you can bring compliance by design to the people actually building it, and just do the Sherlock Holmes method. The Sherlock Holmes method meaning: if you have a complicated problem, like being compliant, and you just break it down into sufficiently small pieces, everything becomes easy. So I think, and I've seen, that it can work. If you want, for example, to be GDPR or AI Act compliant, if you just break it down into sufficiently small questions that are clearly described, and you have a checklist and your documentation, and do the risk analysis that you have to do, then more or less anyone can do it. Would you agree with that?
Luis Martinez:Yes. And actually, let's say the bet we have is that the people developing the products can, independently of the implementation of these tools, which will be useful and will help in the process of evaluating and assessing compliance, feel and understand that the values guiding the organization, and even the values guiding that person as a professional engineer, can be reflected in the product we are developing. It's understanding that if we have a clear ethical understanding of what's wrong and what is right, it should be, by default, incorporated in everything we do. It's probably kind of philosophical.
Henrik Göthberg:I can relate to that. And I really want to latch on to that, because I think you're on the right path.
Anders Arpteg:What I'm trying to say is that it's a very stigma-connected area, being compliant, and I don't think it needs to be. I think if we remove the stigma and say, actually, it's easy to become compliant if you just know how to do it, then the question is really, how can we know how to do it? And if you break down that problem into sufficiently small pieces, in Sherlock Holmes style, then actually it is easy to work with personal data or work with AI. It's not really a problem to do that if you just know how to document it properly, do the risk analysis, do the classification, et cetera. Right? But do you agree with that?
Luis Martinez:Yeah, I agree. And, let's say, the utopian perspective or way of thinking would be that we probably wouldn't need to have all this documentation and all these controls and guardrails if everyone knew the distinction between what's good and what's right, what is okay and what is not okay, and we could develop and, let's say, reflect in everything we do these high-level principles that we have developed.
Henrik Göthberg:I want to have feedback now on a thesis about how to actually go in exactly this direction, using slightly different words. We've been working at Scania, and we're looking into a collaboration with the researchers at RISE to understand how to build a test and evaluation facility. So Scania is thinking carefully: do we need to build our own internal methods? How do we scale this out in European markets? And we took the approach with the RISE researchers that, first of all, we are doing compliance as a way to guide companies to be more risk conscious in relation to AI, or whatever it is.
Luis Martinez:That's a good point.
Henrik Göthberg:So compliance is not the end game. The compliance is there because we clearly are not risk conscious in the stuff we do. So then we took an approach: if we look at this from a lifecycle perspective, what are the dimensionalities we need to look at to assess and validate risk in an AI compound system? And then we started to break this down: if we want to assess an AI system, you have all the guidelines, but if you think about it from a system perspective, what are the risk vectors? And, coincidentally, the risk vectors of building an AI compound system are also the value vectors; they're all the things and features you need to figure out. So we looked at it: one obvious risk vector is the model itself. Another risk vector is the data. A third risk vector is the UX, that you have a stupid UX that makes people go wrong. A fourth one is the system integration and the system part. So all of a sudden, we are starting to break this down into actually a very, very good practice for being value and risk conscious in your system development. And the risk vectors highlight the core dimensionality of building a sound system, nothing else, right? So, in this sense, now we can get it into very small compartments, and then we can say: how will we assess the model risk, how will we assess the data risk? And then here come the standards, and it's all exactly the same thing. So you break it down into sufficiently small pieces, and the key point is to have a framework; here, we call it risk vectors.
Henrik Göthberg:We look at an AI compound system, and all of a sudden it's a decomposition topic. And if I take that understanding and look at those dimensions, the risk vectors, they're actually very, very good vectors that look at the value and at how you build a usable system. It's the same question you want to ask: what's the right UX and what is a risky UX? What's the right system integration that is robust, and what's the risk in this? It's two sides of the same coin, and you can go to a good practice, and all of a sudden you can get it by design. And I think if you get clear on this, now you can take the next step: how can I assess this? Because if you don't know what you're assessing, how the fuck will you do self-services on it? So you need to go best practice first, and then you know.
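As a minimal sketch of the risk-vector decomposition described here, assuming invented vector names and levels (this mirrors the conversation's four examples, not any actual Scania or RISE framework):

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class VectorAssessment:
    """One risk/value vector of an AI compound system, assessed independently."""
    vector: str  # e.g. "model", "data", "ux", "integration"
    risk: Level
    notes: str

def overall_risk(assessments: list[VectorAssessment]) -> Level:
    """A conservative aggregation: the system is as risky as its riskiest vector."""
    return max((a.risk for a in assessments), key=lambda level: level.value)

# Usage: decompose the system into the four vectors named in the conversation.
system = [
    VectorAssessment("model", Level.MEDIUM, "Fine-tuned model; evaluation suite exists."),
    VectorAssessment("data", Level.HIGH, "Contains personal data; GDPR applies."),
    VectorAssessment("ux", Level.LOW, "Output is clearly labeled as AI-generated."),
    VectorAssessment("integration", Level.MEDIUM, "Writes to a production system."),
]
print(overall_risk(system))  # Level.HIGH -> drives review depth and documentation
```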
Luis Martinez:And actually trying to connect your two opinions or your two ideas.
Luis Martinez:When we take a look at these standards and the way they are structured, at least in the field of AI, they are trying to break down the problem. Of course, there is this risk consciousness or risk awareness perspective when developing the standard, but they are also trying, and this is a good point, to break down the analysis of an AI system into different components. That's why they are trying to develop a standard on data quality and data management, on cybersecurity, on trustworthiness, on quality management. So they are breaking the problem up into these standards, and when these standards are ready, the idea is, as Anders mentioned, that it will be super simple: just by following this structured way of breaking down the problem, following these small pieces and integrating them, you will be able to show compliance. In the near future, you will be able to identify: okay, I have broken down the problem, the potential risk represented by the AI system that I'm developing.
Luis Martinez:I'm assessing the different vectors you have mentioned, and I'm able to say: okay, these are fine, we are compliant.
Henrik Göthberg:But here you're triggering me, sorry for interrupting. Isn't this one part of the problem then? If we only have legal people working on this and trying to explain this, we now have a massive effort of translating that into real risk factors for building systems and all that. It would be so much more helpful if the holistic engineering understanding was there from the beginning. So when the guidelines come out, it's not 108 pages of legal text, but something that fits in a simple engineering context: alpha assessment, beta assessment, you know, whatever. You see what I mean? This whole flipping into something useful or adoptable.
Anders Arpteg:Perhaps we can phrase it as a general question here. I think most companies consider being compliant a complicated process, so I guess the question then is: how can we make the process of becoming compliant easier? I've seen frameworks that make it easier, so I know it can be done, but many companies, and I think most companies, have not. What do you think? How can we make it easier for companies to become compliant?
Luis Martinez:I think it's about massification, it's about communication, it's about sharing and working together and not reinventing the wheel. Because, yeah, the solutions are there, and the potential to make things easier for the organizations is there. But I think one of the challenges...
Anders Arpteg:What do you mean, it's there? I mean, do you really think there is a clear process today for a new organization or some company to become compliant with the AI Act or GDPR or something else?
Luis Martinez:Just following the standards. Yeah, I'm sure that following the standards, once they are structured, will be the key to being compliant.
Anders Arpteg:But if you take the AI Act, is there really a good standard today that makes it easy to become compliant with it?
Henrik Göthberg:If you decompose it, there are standards on data, there are standards on data quality management, there are standards on cybersecurity, et cetera, et cetera. So the problem is, they're all over the place, and you now need to collect them and understand which standards apply.
Luis Martinez:But I would say that this is one of the first times, or the first time, that the Commission, we're talking about European regulation, tries to connect the release of a new regulation with concrete elements for its implementation, let's say the certification of it, by developing standards. That was, for example, one of the mistakes that we have identified: when GDPR was released, it was just a legal component, and there was no way to interpret or operationalize the principles.
Anders Arpteg:But I must say, I haven't seen a clear standard on how to become AI Act compliant. Would you say that they exist? I mean, I saw the guidelines for GPAIs. They are horrible, the guidelines.
Luis Martinez:Yeah, I totally agree, the guidelines are not there. But I'm involved in this; the standards are currently under development.
Henrik Göthberg:We're going in this direction.
Anders Arpteg:Shouldn't they have done that before enacting the law?
Luis Martinez:Yeah, that's the tricky part.
Anders Arpteg:That's the tricky part. But imagine for the companies, then, it's ten times trickier. They have a new law, they have to be compliant with it, but you don't know how.
Goran Cvetanovski:They have consultants, right? They say, they do.
Henrik Göthberg:What annoys me is that it simply feeds consultants. The problem is that now there are 50 guys at Scania doing this, and there's another 50 guys doing exactly the same work at Assa Abloy, and another 50 guys doing exactly the same work at Ericsson, and in the end they would all be better off if we could have a standardizing body, or if we could organize this. So I can understand that everything is not perfect, but you need to have an implementation and operationalization story, and it needs to be much sharper. We have the AI Act, and of course it goes in this direction, but it's too loose, right? So imagine if, as part of the enactment of the law, these bodies that work through the standards and the guidelines were there. They're not. We are now muddling through: we understand we need them, but they are not there as a designed implementation. There's no budget for it.
Anders Arpteg:But perhaps you can give an example. I mean, you mentioned the harmonized standards. If you were to just describe them: what are they? How do they work?
Luis Martinez:Okay, the harmonized standards are basically the documents that will operationalize the implementation of the AI Act. They try to offer, let's say, a simple translation of the principles stated in the AI Act, and make them understandable for those companies interested in being compliant with what the regulation states.
Anders Arpteg:And they are still under development, right? So there are no clear standards today for being compliant?
Luis Martinez:Yeah, unfortunately there are none, and there are some deadlines that need to be met in order to be compliant. The aim is to be ready with the standards before August 2026.
Henrik Göthberg:Here's the problem: if you had made a package when you enacted the law, where the harmonized standards form part of the package, it would have made more sense. It would have been so much more. And you should even have the standards like two years before the law becomes enforceable.
Anders Arpteg:So you had a chance to be compliant.
Henrik Göthberg:It's actually not until you have the harmonized standards that you can pressure-cook and pressure-test the regulation.
Luis Martinez:But the challenge here is how the regulatory framework is structured. Because to officially start working on a harmonized standard, you need to have a working document; there should be a standardization request prepared by the Commission, and that's connected to the existence of a regulatory framework that kicks it off.
Henrik Göthberg:So you're highlighting something appropriate: there's a due process, a due process with very gated steps. So, in a sense, you need to respect the due process: you need to lock that away, you need to have a memorandum signed here, and now we can lock the next sequence. The problem, I think, is then to enforce a law before the actual thing is really sharp. But that's actually how it works, I guess, in most cases, I don't know.
Luis Martinez:Yeah, and one can argue that the implementation of standards according to the AI Act will only be needed after, or from, August 2026. Even though the AI Act was enacted and published last year, it has set a timeline.
Anders Arpteg:But some parts of the AI Act are already enforced, right? So August next year is one part of it, but some other parts already apply.
Luis Martinez:Yeah, what is already in force is related to AI literacy and the prohibited practices.
Anders Arpteg:Can we just go there? Because I think the AI literacy part, and we're jumping all around now, I feel, but anyway.
Anders Arpteg:I think the AI literacy is something that a lot of people don't realize, and I've heard multiple stories on this, so having an expert here is interesting. But, in short, I guess AI literacy means that the company, and this was enforced by the law and it's already in place today, should have plans in place to make the employees AI literate. But does it apply to everyone? Is it just the providers of AI models, or is it also for consumers of AI models?
Luis Martinez:Providers and deployers, yeah. So those providing and deploying must be aware of the challenges.
Anders Arpteg:So if you're an internal deployer of an AI model, just for employees, you're not required to have AI literacy?
Luis Martinez:You are.
Anders Arpteg:So then it means basically any kind of company that uses AI, of a certain size?
Luis Martinez:No, no.
Anders Arpteg:Or what is the requirement? When do you have to?
Luis Martinez:The idea in this case is quite broad. Once I was speaking with someone from the Commission, and she said that the purpose is actually to make it broad enough so the organizations can interpret it, and take it more as an invitation to start activating or triggering some training actions within the organizations about AI, the challenges, the risks, the potentialities of this technology, rather than setting, let's say, a strong framework of standards for what should be done. But the requirement is quite broad. It's for deployers and providers: they must ensure that the employees, and actually the whole organization at different levels, are aware of AI, the potentialities, the challenges, the potential risks of this technology. And that's really broad. It can go from a tutorial about what the prohibited practices are, to an explanation of what AI is. But once again...
Anders Arpteg:Now you're describing how it should be done. But right, this should be a checklist somewhere. Okay, we know it's a law. You can get fined today as a company if you do not follow this law, and people don't know: how do I fulfill the requirement to provide AI literacy? There is no standard, and there is no checklist. In some way, it's a law that no one knows how to be compliant with, and it's just a small part of the AI Act.
Luis Martinez:It's an open requirement. Yeah, I agree, and that was part of my conversation with this person from the Commission. I asked her: what is the body of knowledge, or the structure, or the content that this AI literacy program should have?
Anders Arpteg:Could you send an email saying, this is what AI means, and then you're AI literate? I mean, what does it really mean? To make it clear.
Luis Martinez:It's quite open.
Luis Martinez:It is, right? It's quite open, I agree with that. And in the company, we are in the process of defining our own body of knowledge: what are, let's say, the key skill sets, the key areas that should be part of the basic knowledge on AI for different levels or different roles within the organization? But it's up to us, it's a de facto framework. We have decided, okay, probably we should cover these areas, but nobody's guiding us, nobody's telling us: this is the content list for this literacy program.
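As an illustration of what "defining your own body of knowledge" could look like in practice, here is a minimal sketch with a made-up set of roles and topics. Nothing here comes from Assa Abloy's actual program, and the AI Act itself does not prescribe a syllabus; it is only one way such a de facto framework could be encoded.

```python
# Hypothetical role-to-topic mapping for an internal AI literacy program.
CURRICULUM: dict[str, list[str]] = {
    "all_employees": [
        "What AI is, and what it is not",
        "The prohibited practices, in plain language",
        "When and how to disclose AI use",
    ],
    "product_owners": [
        "AI Act risk classes and what triggers high-risk",
        "Provider versus deployer obligations",
    ],
    "developers": [
        "Data quality and documentation duties",
        "Human oversight and logging requirements",
    ],
}

def missing_training(role: str, completed: set[str]) -> list[str]:
    """Topics a person in `role` still needs, including the baseline for everyone."""
    required = CURRICULUM["all_employees"] + CURRICULUM.get(role, [])
    return [topic for topic in required if topic not in completed]

# Usage: a developer who has only done the intro still owes the rest of the list.
print(missing_training("developers", {"What AI is, and what it is not"}))
```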
Anders Arpteg:I'd love to come back to the Elon Musk story here shortly, but before that, perhaps we can just close that question about how to make the compliance process easier. I mean, we know the standards are lacking; we would love to have them. We want to have a clear, broken-down checklist that makes it easy for anyone, without legal expertise, to still be able to do it, and to have compliance by design. But we don't have it. What would be your thoughts about how we can make the compliance process easier, given the situation we have today? What would be your recommendation to a company, or to authorities like the Swedish IMY, the Authority for Privacy Protection, or whatever they are called in English? How can we make it easier for companies?
Luis Martinez:I think the answer is not simple, but probably the answer is also simplicity. In the sense that perhaps the most urgent thing about the regulation, what should be regulated, let's say the key target of the AI regulation, is high-risk AI systems, and I think that not all the companies in Sweden, not all the companies worldwide, are aiming at or implementing this type of solution.
Anders Arpteg:But just knowing that is a risk the company takes. Probably only a few percent of companies are actually going to be categorized as high risk, but if you don't know it, then you still have a super high risk of being fined, and you don't know about it, right?
Luis Martinez:But in this case, I think one point is to probably try to reflect on the guiding principles behind the AI Act. This respect for fundamental rights is the guiding principle; it's the philosophy behind the text. So these principles about human oversight, about trustworthiness...
Anders Arpteg:But what should a company do? I think the best thing is to employ a person like yourself at the company, right?
Luis Martinez:But also to collaborate, and actually that's one of the things I would like to propose to the community. Because something that we see is that all the companies are facing the same challenge. Some of them have resources, some of them lack these resources, but we face the same challenge, and we can do two things. One is to develop this work in separate silos, separate tracks, where basically every one of us reinvents the wheel and tries to find, by ourselves, the best approach, the best solution to face this challenge. Or we can try to work together, share some of our best practices, share some knowledge, and work as a community, trying to develop at least a basic baseline framework that helps us be compliant and understand the challenges linked to the compliance area in AI. I've been in different spaces in standardization, in SIS. I've also been talking to Combient, this organization, and even to Teknikföretagen, about probably integrating these groups. We have been talking about why not create a joint group of interested companies, trying to work together on the definition and interpretation of this framework for compliance.
Anders Arpteg:But shouldn't, you know, we have authorities that have the responsibility to assess this, and we have the Swedish Authority for Privacy Protection, or whatever they're called. Why shouldn't they provide the guidelines? Why shouldn't they have a process? You know, they're speaking about the sandboxes and whatnot that should be provided.
Anders Arpteg:Do you think that's a good way forward, or, you know, how should we do it?
Henrik Göthberg:Can I answer first? I have a very strong opinion here, because there actually is some clarity here that we are simply not following.
Anders Arpteg:Okay, you answer first.
Henrik Göthberg:Because we had Petra Dalunde here as a guest, and she's the director for the RISE TEF, the Testing and Experimentation Facility, and she's working as co-director for the new AI Factory, looking more at small and medium enterprises and how they could use it, and here they're talking about compliance by design. Connected to them we have IMY and a couple of other actors. So Petra actually has a very, very nice picture of how all the different EU initiatives around these areas fit together. Basically, there's so much going on in the EU that the left hand doesn't know what the right hand is doing. But if you look carefully, the TEF has a Swedish idea of bringing this together, and they are looking in these directions. And on top of the AI Factory and the TEF, you have the engagement with the AI Verkstad for the public sector, and we did a collaboration.
Henrik Göthberg:We took the work we'd done with Scania and with RISE and the TEF, and me and Petra went and talked to Dan, the program manager for the AI Verkstad. So I think, even if it's unstructured, those key groups have got some sort of management around this. If they can build a center, or something like that, around safe and smooth AI, that's the perfect start. Then you have something that you could grow into open innovation around, because what we are talking about is fundamentally open innovation. Scania wants to do stuff, you want to do stuff, everybody's investing money in it. If we all pool our resources, it's not only that we're getting more for the money; we're getting to one message, which is more important in my opinion. So what I'm saying is: the problem is that we have a TEF and we have people who are supposed to work on these areas, but we don't know about it. We're not finding an arena.
Luis Martinez:Yeah, probably one of the problems is that it's more about sharing and communicating these kinds of experiences.
Henrik Göthberg:But you need to design this arena for it to work. Someone needs to, and it's work to create this: to create the meetings and to document things from the meetings. But it's all there if you want to do it.
Luis Martinez:Yeah, it's more about who wants to take the initiative and who can put them together. Can Scania? Can the TEF? Can the AI Factory? Can the AI Verkstad? Potentially, but someone needs to have the ownership of this.
Luis Martinez:Someone needs to be assigned and told: now you need to make this happen. You don't need to do all the work, but you need to make sure you have the right arena and the right authorities to really drive it. And there is a need, because part of the message I'm delivering by sharing this discussion with SIS and Combient and all the other authorities or groups is that if we wait until the standards are ready, it's probably too late.
Henrik Göthberg:Actually, we want to influence them so they make sense, if I'm flipping it.
Luis Martinez:And there is also a need for, let's say, developing something where we can integrate our experiences and knowledge. Yes, and there is also the potential of collaboration between the Swedish companies and, why not, the Nordics, which share a lot of things.
Henrik Göthberg:I'm going to put you in contact with people in Scania immediately after this, because that would be great. We are discussing exactly this.
Henrik Göthberg:Because the reality is that the TEF or the authorities cannot develop this without collaboration with the real companies, because it's not until you hit Scania or Assa Abloy that you ask: how will this work in practice for us? How will we interpret this? So that's why they can't do it by themselves either.
Luis Martinez:And one of the problems is that if we rely only on the authorities doing this, there is a risk: they will offer an answer to the problem that is tailored to their own view of how the situation should be solved. Sorry to say this, but most of the time that's disconnected from reality. That's what happened with these guidelines.
Henrik Göthberg:That's what happened with the prohibited practices: this is why it would never work, we weren't engaged in that. I get goosebumps, because, sorry, you said it from the beginning, at the starting point: how do we get the balancing act right between control and autonomy, or control and innovation? And that's why we need to do it together.
Anders Arpteg:I'm sorry to interrupt this very engaged discussion about the very, very exciting topic of regulations.
Goran Cvetanovski:Did you hear that Henrik said goosebumps? Yes, two seasons.
Anders Arpteg:So we normally take a small break in the middle of the discussion to speak about some AI news. It's time for AI news, brought to you by the AIAW Podcast. With that, we usually take a few minutes to reflect on the last week's newsworthy stories. I actually have some regulation-related AI news that I could share, but before that, Luis, do you have any news, AI-related or otherwise, that you'd like to share?
Luis Martinez:Probably something I would like to dig deeper into is what's going on with Oracle. It seems to be the new NVIDIA in this context.
Anders Arpteg:It's like Larry Ellison is now richer than Elon Musk. Exactly, 40%.
Henrik Göthberg:In how many days? One day right.
Luis Martinez:In one day. In one day, I would like to get that formula.
Henrik Göthberg:But what triggered it? Do we know what was the trigger? I don't know.
Luis Martinez:Agreements at the.
Henrik Göthberg:The agreements.
Goran Cvetanovski:Yeah, the agreements. Like Stargate.
Anders Arpteg:It's insane. I mean, Oracle, it's going well for them, but I don't think I've seen a 40% jump in one day on a huge company like Oracle before. It's insane. I think it will fall back, though. What do you think?
Henrik Göthberg:But what's the substance?
Anders Arpteg:Is there any substance in this 40% jump?
Henrik Göthberg:No.
Luis Martinez:Yeah, probably there will be some rollback, from 40 to 20.
Henrik Göthberg:But what triggered it was that there was more clarity on Oracle's involvement in Stargate, and therefore people could extrapolate: ooh, they're going to get a lot of work from this, there's a lot of money going Oracle's way now. Isn't that what this is sort of indicating, that this is how people interpret the whole thing?
Anders Arpteg:I think they published some results as well, some really positive results in a quarterly report or something, and that's what triggered it.
Luis Martinez:Yeah, and then the announcement of this agreement with OpenAI, the collaboration with OpenAI, has opened the door for, let's say, involving them more. But it's also about how much we can rely on the cloud for leveraging the results of AI.
Anders Arpteg:Yeah, I don't think he's actually the richest man anymore, I'd have to look it up. But he was for a brief period of time, and then Elon Musk took it back.
Henrik Göthberg:All right, other news?
Goran Cvetanovski:Still rich, yes. So basically, they're integrating cloud technologies into their product portfolio. They're really good at building data centers, apparently, and even Elon Musk has employed Oracle before. And they're the number one CRM company in the world.
Anders Arpteg:CRM company.
Goran Cvetanovski:Yeah, the biggest cloud provider for CRM.
Henrik Göthberg:I didn't know that. Interesting.
Goran Cvetanovski:So I think the Oracle cloud is coming up, that's right. But keep in mind it's 40% because they have said they would deliver quite a lot: you see here the forecast of OCI revenue growth of 77%, to 18 billion this fiscal year. So if they miss that it's going to come back down hard, but I think it's going well for them. We actually spoke about Oracle many times as the sleeping giant.
Henrik Göthberg:You tagged them, you said: no, no, they're a sleeping giant. You've been pulling that story all the time, I give you credit for that. Did you invest in it before?
Goran Cvetanovski:No, I invested in Novo Nordisk instead. I'm laughing, but yeah, sorry.
Anders Arpteg:But yeah, sorry, that was some cowboy style investment.
Goran Cvetanovski:Yes, but they're coming back up.
Henrik Göthberg:I think that's all right. More news, more news. Yeah, there's something; maybe you have some of the data as well, Goran. There is some more budget proposition stuff that came out this week. Ulf Kristersson is starting to articulate more details. If you go back to the Swedish AI strategy, well, we don't really have an AI strategy; I think the most important thing is to see money being dedicated. I think that's even more important than a fluffy document. So it's good to see that the AI Verkstad, this collaboration around how to build practices and tooling in the public sector, led by Skatteverket and Försäkringskassan, got more clarity on money to continue that journey.
Anders Arpteg:And it's not bad money. I had to look it up; it's like 480 million kronor in one year.
Henrik Göthberg:No, I think it was like 100 million per year or something, for a five-year period. I can't remember.
Anders Arpteg:No, it's 479 million in 2026, and then 500 million kronor each coming year up until 2030.
Henrik Göthberg:Was it 500? Yes, so it's a lot; it's basically 500 million per year. Because Ulf explicitly talked about the AI Factory and what they want to do with it, that's Mimer, together with Linköping, and that is Petra Lannala. And then, explicitly on another budget line, money towards the AI Verkstad. So it's not like they are focusing on one and killing the other; both are getting a share.
Anders Arpteg:I looked it up here, I can give you a quick breakdown. So it's basically 500 million per year until and including 2030. And then Försäkringskassan and Skatteverket, they already had the assignment of building this kind of AI Verkstad.
Henrik Göthberg:But now these guys are getting specific AI money earmarked to deliver that service. Yes, and it's really important that it's visibly earmarked for AI.
Anders Arpteg:And then there's the AI Factory, the Mimer thing. And then there's also extra money for IMY to build the sandbox, for regulatory purposes.
Luis Martinez:The sandbox.
Anders Arpteg:So yeah.
Henrik Göthberg:We'll see if they actually do that properly. We didn't have it in the news, but I said IMY, I said AI Factory, I said AI Verkstad. So, hint, hint: these guys have the money, and we should come to them with our...
Anders Arpteg:Let me just finish here, before you speak too much, Henrik. There is more stuff here. They're also actually digitalizing and making data available from Riksarkivet, so it can be used for AI training purposes.
Luis Martinez:That's kind of cool, yes.
Anders Arpteg:They also give money to Love Börjeson and the Royal Library to build more language models, something they are already doing.
Henrik Göthberg:So that's really cool. That's also a feather in the cap, a big testament to the Royal Library and Love and the work they're doing in their AI lab.
Anders Arpteg:What else? They also have some Digital Europe project, 100 million per year, which is a lot of money. And also, right, they're going to have a data steward, some kind of new function in the government. A data steward in the sense of a person who is in the business, so to speak, who can basically evangelize, drive and help guide the work forward in the government. So really good.
Henrik Göthberg:I like it. I don't know how you interpret this, but sometimes I get much more excited when I see real numbers on the table, with a clear direction for where they go, than any fluffy document. This is actually surprisingly good: clear actions, clear money, clearly earmarked money that can't be used for anything else, and clear owners of the money. And not a small amount of money either. I think this is the best AI news; this is way more important than the AI Commission report or the AI strategy. This right here.
Luis Martinez:Because this shows concrete actions, concrete stuff.
Henrik Göthberg:And finally, the people who are working on this, like Danne at Skatteverket: they have been building a thesis on how to spend the money, and now, oh shit, he needs to get his action plan going. This means much more to us; this leads to action. You need to have the money, and you need to have it earmarked, otherwise it doesn't exist, it's just talk. I think this is much more important than people understand, because we all didn't even see the digital strategy. Fuck that.
Anders Arpteg:I want to see this. And there's also the value, from a signaling point of view, that the government actually understands the importance of AI; I think that is a very good value in itself. Yeah, so really good stuff. Let me just finish off with some more legal news as well. Perhaps one of the bigger ones was that Anthropic, one of the biggest AI labs we have, made a settlement of 1.5 billion dollars. I think it's the biggest settlement ever, at least in AI. So it's a huge settlement.
Anders Arpteg:And what they've done is use books for training their models. Now, the point here is really the fair use doctrine; you probably know this better than me, Luis, but as far as I understand it, fair use allows certain uses of copyrighted material, as long as the use is transformative, and there are four factors for assessing that. And apparently the court found that using the books to train a model is transformative, and therefore actually fair use of the copyrighted material in this case. So that was not what they got fined for. This is, I think, really big news.
Anders Arpteg:It is okay to train AI models in the US on copyrighted material; this, I think, will set a precedent, and that is really, really big. However, they got fined because they sourced the copyrighted material, the books, from piracy sites: they downloaded the material from sites that didn't have permission to publish the books, and that was illegal. So they got the big fine, and they had to pay like three thousand dollars per book, and it was half a million books, so it became like 1.5 billion dollars. It's an interesting twist; for that money they could probably have bought the rights to all the books, even from the publishing companies. They should have just checked them out on Kindle.
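As a sanity check on the arithmetic (the per-book figure and book count are as stated above; the exact totals in the case may differ slightly):

$$500{,}000 \;\text{books} \times \$3{,}000 \;\text{per book} = \$1{,}500{,}000{,}000 = \$1.5\;\text{billion}$$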
Goran Cvetanovski:That would have been fine. Yeah, but they knew what they were doing, right?
Henrik Göthberg:I don't think they thought they were going to get sued on the fundamental copyright question, because if they had known it would play out like this, they could have just paid for downloads of those books.
Goran Cvetanovski:But right now they also have the ability to pay for it, right? They built the company, they built the valuation.
Anders Arpteg:Yeah, but still, it's a big chunk out of their wallet, like 1.5 billion.
Goran Cvetanovski:There are a lot of other people, too.
Anders Arpteg:They could have bought a lot of GPUs for $1.5 billion.
Goran Cvetanovski:They can ask Oracle, all right.
Henrik Göthberg:Any more news?
Goran Cvetanovski:For sure. I will focus on this a bit, because we're talking about Sweden and what is happening right now with investment. We cannot miss the Spherical AI news: the new company that Wallenberg Investments, together with AstraZeneca, Ericsson, SEB and Saab, is creating to operate new sovereign AI supercomputers that will prepare the country's leading industries for the age of AI. So it's a new supercomputer cluster in Sweden with, I believe, 1,152 interconnected advanced NVIDIA GPUs in DGX SuperPODs. So now we have Mimer, we have Berzelius, we have a couple more, and this is quite a lot.
Henrik Göthberg:But how should we understand Spherical? Because this is now completely in the private sphere, in the private sector, where these companies decide exactly what they want to do with it.
Goran Cvetanovski:Well, if you look at it, the key words here are sovereign AI supercomputer, right? We have been talking about this: organizations will more and more go into hybrid and on-prem solutions. Because, especially keeping in mind Ericsson and AstraZeneca, if they do R&D, is it safe to do R&D on the cloud, on American cloud providers and so on, with OpenAI and all these other things, when one patent can actually be worth billions and billions? So I think this makes sense. How do you protect that? This cluster of companies has a joint interest, with a common shared owner as an investor.
Goran Cvetanovski:Yeah, exactly. So I think it makes sense, and more and more companies are actually doing this. And to tie into this and prove the point: why do you think ASML bought Mistral?
Anders Arpteg:They didn't buy it, they invested. They're the biggest shareholder; they have 11% of the entire organization, right? So they don't have a majority in any way, but they invested in them. They didn't buy them. But why?
Goran Cvetanovski:Oh yeah, so ASML now has an 11% share in this, so they are the biggest owner. What is your take on that?
Anders Arpteg:And just to give some background, ASML is actually a super important company. This is the company that makes the machines used to make AI chips.
Goran Cvetanovski:Yes.
Anders Arpteg:So they are the sole provider of the lithography machines that TSMC in Taiwan is using to build NVIDIA chips.
Henrik Göthberg:So this is the Dutch company behind the whole miracle, with the absolutely unique capability that no one else can match. Yeah.
Goran Cvetanovski:So one of the theories is that, and it makes sense as well, they have invested because they want to utilize Mistral for their R&D purposes, for their AI workloads, everything they build internally. They also need to secure what is called AI compute, right? And again we come back to the same word, sovereignty: Mistral is a French company and ASML is a Dutch company, so they're very near, they can collaborate and make the best of it. They don't need to outsource and put all this in the United States or somewhere else. So you see, it's becoming a pattern that more companies follow.
Henrik Göthberg:Are we getting a European sort of superpod here? I mean, the collaboration with ASML below the chipset in terms of competencies, and then Mistral on the LLM side. It's almost like, what is happening? Are they looking for more? Is this a conglomerate?
Anders Arpteg:For one, ASML of course made so much money now that NVIDIA became the most valuable company in the world, because NVIDIA is using TSMC to build its chips, and TSMC is using ASML's machines to build them. So they made a shitload of money, and it's a European company. Then you can guess why they invested in Mistral, but they do have a lot of money, and I think, and hope, it's because they want to build a European vertical here.
Henrik Göthberg:Yeah, that's what I'm talking about. Are we looking at the formation of a European vertical?
Goran Cvetanovski:Yes.
Henrik Göthberg:Are we looking at that? I think so, I hope so.
Goran Cvetanovski:And I have to say something. This is going to be very stupid, but I think that Donald Trump is a king.
Henrik Göthberg:He finally made Europe great again. Thank you, Donald.
Anders Arpteg:But it's good, right. He makes America good and he makes Europe great. Who knows, I'm not going there.
Henrik Göthberg:I love that. That's my t-shirt. Thank you, Donald: make USA good and Europe great.
Anders Arpteg:MEGA, MEGA. Yeah, not MAGA, but MEGA. Sounds better.
Henrik Göthberg:Make Europe Great Again, MEGA. We need to get a MEGA t-shirt. One final one, a very quick one: we have a new unicorn in Sweden.
Goran Cvetanovski:Did you read about it?
Luis Martinez:You mean Lava Ballona?
Henrik Göthberg:No, not Lava Ballona, that was a while ago. Lava Ballona is also on the stock exchange now.
Goran Cvetanovski:Yeah, stock exchange. No: Flightradar.
Anders Arpteg:Flightradar is a new unicorn.
Goran Cvetanovski:It's a new unicorn, so shout out to Mina.
Henrik Göthberg:Mina, yeah, she's been here.
Goran Cvetanovski:Fantastic.
Anders Arpteg:Fantastic job. And, of course, Lovable as well.
Goran Cvetanovski:We talked about Lovable, so we know about Lovable.
Anders Arpteg:They're also a unicorn rather recently, so two unicorns in two or three months. But it's an interesting one.
Henrik Göthberg:Klarna made it to the New York Stock Exchange, and they are now...
Anders Arpteg:I mean, now they're like an American company, not a Swedish company, in a way, right? And they went in at 130 billion, I think, and I think they rose like 30 percent after one day or something. Same as Spotify did.
Goran Cvetanovski:Yeah, and one of the founders of Klarna invested in a new defense company that is forming in Sweden. Did you see that? Half a billion Swedish crowns.
Anders Arpteg:You mean in Helsing, or what?
Goran Cvetanovski:No, no, Helsing is German. Okay, we are getting into...
Henrik Göthberg:But Oliver Molander made a nice post on this, where it's sort of interesting to see how many companies in Sweden and Finland have gone this way. It's surprising a lot of people, in the end.
Anders Arpteg:Well, we're going down a rabbit hole here, yeah.
Goran Cvetanovski:Let's leave it. Oh right, cool. The final one is defense, yeah.
Anders Arpteg:Defense is moving forward very rapidly. 833 million, wow.
Anders Arpteg:Investments into it, Flat Capital, yep. Interesting, cool. Okay, so let's get back to the interesting and very engaged discussion we just had with Luis. Perhaps, before we forget, we should go to the Elon Musk story and speak a bit about that. I can give some background, Luis, and it would be fun to hear whether you think the Elon Musk approach to compliance is a good one. They basically have a very agile approach to becoming compliant, so let's call it the agile compliance process. Just to give an example: normally, if you build a car and you change a water pump or whatnot, you need to have certification from some kind of external authority to make sure the car can be put on the road, right?
Anders Arpteg:Now, normally that takes a lot of time: you have to submit documentation and wait weeks or months or more to get a certification. And that doesn't work for Tesla; they actually update the car every day, so every car that comes out each day is different from the day before. So how can you make a car compliant when you change so much, so rapidly? What they did, as far as I understand it, is send small updates to the oversight committee, basically daily, with super small changes, very, very iterative. Instead of one big submission, they send a small document to the authority saying: we're just going to tweak this water pump to be 1% more efficient, and this is the only change; is this still compliant? The authority basically gets flooded with these kinds of requests, and after a while they realize: it's only this small change, that can't really break anything, and Tesla gets a super quick approval and becomes compliant in a very rapid way. What do you think about that?
Luis Martinez:It's agile. Yeah, I have to admit it's agile; it's almost overflowing the authority with really small iterations, but it's looking at compliance the same way as a continuous release train of code, or whatever.
Henrik Göthberg:So it's the coding mentality: we're not going to have one release per year, we're not going to have three releases per year, we have two-week release cycles. And if you follow the engineering and hardware of Tesla, it's really, really hard to tell a 2024 model from a 2025 model, because all of a sudden they've changed stuff like the suspension. I have a Tesla, and I was nerding out about this even when I was going to buy the car; it's really difficult to keep track. So it means they have a continuous deploy-to-production cycle for hardware too, exactly the same as we've been used to seeing in software. And therefore they had to decompose their whole certification process into a simple definition of done. When you run an agile cycle, the definition of done covers what it takes to go to production, and whatever compliance components apply to that little thing you've done, they take care of them there and then. And it goes down to the detail.
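As a rough illustration of that compliance-as-definition-of-done idea, here is a minimal sketch in Python. The structure and names are hypothetical, not how Tesla or any authority actually works, but it shows the shape of the loop: every small change carries its own compliance delta, and a change is not done until that delta is handled.

```python
from dataclasses import dataclass

@dataclass
class Change:
    """One small, self-contained hardware or software change."""
    component: str
    description: str
    safety_relevant: bool

def compliance_delta(change: Change) -> dict:
    """Build the minimal delta document describing only what changed,
    mirroring the small daily submissions described above."""
    return {
        "component": change.component,
        "what_changed": change.description,
        "safety_relevant": change.safety_relevant,
    }

def definition_of_done(change: Change, submit) -> bool:
    """A change is 'done' only when its compliance delta is handled.
    `submit` stands in for whatever channel the authority offers."""
    if not change.safety_relevant:
        return True  # trivial changes pass with internal documentation only
    return submit(compliance_delta(change))

# A stand-in authority that fast-tracks small, well-described deltas.
def fake_authority(delta: dict) -> bool:
    return len(delta["what_changed"]) < 200

done = definition_of_done(
    Change("water pump", "efficiency +1%, same interface", safety_relevant=True),
    fake_authority,
)
print("release approved:", done)
```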
Luis Martinez:I would say, from what I've heard, that they go into the detail, and it actually shows great care for the compliance concept, because even for small details they realize it's important to report to the authority and say: we are compliant with this, yes or no, and then move forward. It's agile, instead of waiting for, let's say, the whole car. Yeah, exactly.
Henrik Göthberg:And this is because they're a software company on wheels. It's literally showing again that they have a continuous deploy mentality that no one else can copy, because fundamentally no one has done that with a car before. And it doesn't matter if Volvo says they can do it; they are not. It's not in their DNA to be a software company like this.
Luis Martinez:But, I don't know, in some way one can say it stands in some contradiction to what we see of Elon Musk's views on regulation and the authorities. Because the case you are describing might exemplify a good setup of collaboration and good communication with authorities: you are keeping the regulator as part of your, let's say, confidence loop.
Henrik Göthberg:Yeah, but everybody needs to take a step back here. Everybody perceives Elon as a cowboy, and now even a crazy cowboy. But look at the verticals he's chosen to compete in: space rockets, satellites, Neuralink. Neuralink is probably among the hardest you can think of from a compliance perspective, right? And he has not treated compliance as a bureaucratic process; he's taken a hardcore engineering mindset to it and solved it like any other engineering problem. In SpaceX and all that, it's just a way of looking at engineering, and compliance is part of the definition of done. And then he has obviously pushed the authorities, because they couldn't deal with it in the beginning. But in the end he's just following the law. He's in his fucking right to come with compliance requests as small as he wants; if he wants them small, that's his prerogative, and they need to change. And so he pushed change through with SpaceX, across the whole space industry.
Luis Martinez:And this might also be an example of how we can connect quality with compliance, how these two concepts can go hand in hand. Because what we can see in Elon Musk's approach is: we are designing technology, we are creating new products, we are innovating, but also trying to offer the best quality possible. At least that's the main statement across the projects he's engaged in: quality and innovation are two concepts that go hand in hand when the companies he owns develop products. And now what we see is a connection between that concept of quality and compliance.
Luis Martinez:In a good sense, yes. So this is something that more companies and authorities should adhere to, right. And actually, that might be the introduction to the topic of regulatory sandboxes.
Henrik Göthberg:Yeah.
Luis Martinez:Because in some way the regulatory sandbox is, on a small scale, the implementation of this kind of practice, where you have, let's say, a test scenario.
Anders Arpteg:Let's go there properly, okay. So if we start with that, what is really a regulatory sandbox?
Luis Martinez:Yeah, it's a test scenario. It's a safe space where you can develop and deploy certain kinds of technology.
Anders Arpteg:What do you mean by deploy in that case? Because there's no customer or user that's going to see it, right?
Luis Martinez:But it's a controlled scenario where you can basically, in close collaboration with the regulator, test and try out a product or a solution as if it were in the market, and be in contact with it, in a controlled scenario.
Henrik Göthberg:In a testing scenario, so it's not a real market. Simulated: as if this was a real market, we simulate it in a controlled environment.
Luis Martinez:In a controlled environment where you can ask questions directly.
Henrik Göthberg:So it's of course not for real, real.
Luis Martinez:It's not real, but you are in contact with the regulator, and the regulator is in touch with you: collaborating, answering your questions, providing you feedback and information about the key considerations to take into account when testing the product in this simulated scenario of operation.
Anders Arpteg:But how do they work in practice? Because I haven't seen a clear definition of the process, of how it actually works. We know the Swedish Authority for Privacy Protection has the assignment to actually build out these kinds of sandboxes, and we have the TEFs et cetera as well, and it's kind of unclear to me how they really should work. Will you actually have access to a proper privacy-protection agent from the authority who will review it, will you basically have free time with that person, and do you basically get a certification?
Luis Martinez:Not a certification in that case, but some initial feedback on how this will operate in a real scenario, and whether there are considerations we need to take into account before placing this kind of solution on the real market.
Anders Arpteg:But is that really the case? Let's say you have a small startup and you want to put a new product on the market. Can you go and, potentially for free, get guidance?
Luis Martinez:Access to this regulatory sandbox, and guidance from the authority about, okay, I'm implementing this solution.
Anders Arpteg:So it's a free consultancy for getting compliant?
Luis Martinez:I don't think it is, by the way. Something similar, in the sense that in this context you can contact the regulator, ask for, let's say, input on the operation of the system or product, test how it behaves in a controlled, close-to-real context, and try out what's going on and what we need to take into account.
Anders Arpteg:You know, I heard of a company that tried this. They basically sent an email saying: we want to evaluate this, can we get some help from you? And the answer was: no, no way, we are not a consultancy company and we are not allowed to compete with them.
Henrik Göthberg:But I think this is tricky, in terms of the idea of how this should work versus what it in reality is and is not. I just want to highlight: we've had quite deep conversations about this with Petra Dalunde at the TEF, and she paints quite a nice picture which is similar to what you are saying, Luis, but a little bit different. The TEF is an evaluation facility, and I want to differentiate it from the sandbox. Petra is trying to highlight that in most cases things are within the frame, and maybe over time we'll have harmonized standards, so when something is clearly within the frame and boundaries it's clear cut: this is category three, we should evaluate and test it.
Henrik Göthberg:So Petra's vision is that the TEF does the tough work of how we should test and evaluate in an evaluation facility; in a sense, a test environment. Her way of interpreting the regulatory sandbox is for when there is quite a lot of legal uncertainty: is this really a category three or four? I don't really know how to interpret the law in this particular case; this case is really hard to judge by the criteria. So her way of describing it is a little like: we have the highway, and then we have the regulatory sandbox for the things that are not super clear cut, that don't fit the highway. That's her vision. None of it exists for real all the way yet, but that's the idea she has when she differentiates the regulatory sandbox from what the testing facility should be doing.
Luis Martinez:Yeah, because in the regulatory sandbox the idea is also to try to identify potential risks.
Henrik Göthberg:Oh, legal uncertainties, yeah. Identify potential risks or legal uncertainties.
Luis Martinez:Yeah, exactly: what are the legal uncertainties? And if we look at the regulation, the AI Act, I'm actually looking at Article 57, which defines the regulatory sandboxes. It states that they should be operational by August 2026. So the national authorities need to figure this out.
Luis Martinez:Yeah, exactly, they need to figure it out, work on it, and ensure that the competent authorities allocate resources to comply with the requirements of the article. And the idea here is to provide a controlled environment that fosters innovation and facilitates the development, training, testing and validation of AI systems for a limited time before they are placed on the market.
Anders Arpteg:So yeah, it's not clear. There's no question that the intent is good, right?
Anders Arpteg:The intent is there, but how are they going to implement it? I had to look it up a bit as well, and I think what you said is actually very accurate: they want to have a discussion with the authority in some kind of collaborative dialogue. So it sounds good. I just haven't seen it yet; I don't think it's in place, and that's why the person I'm thinking of got a no, and there was no way in.
Anders Arpteg:But I hope they can do it. I think you have to submit, or apply, to get into this program in some way, and there should be a national authority for it.
Henrik Göthberg:Yeah, but the whole application process you highlight is exactly right, because there's no way in hell everybody can go to the regulatory sandbox.
Anders Arpteg:They would be overloaded like crazy.
Henrik Göthberg:So the trick here is, it's almost like going to Hovrätten, the Court of Appeal. It's going to be: okay, we have a case now, it's not clear cut, we think there's legal uncertainty here because of this, so we apply to work through those legal uncertainties. That means you can fix an amendment to the actual law, and we get a clear-cut confirmation. So this is what Petra is trying to push: the regulatory sandbox is like Hovrätten, for legal uncertainty that needs to be ironed out, and then we have a mechanism for that. It should not be for everyone; everyone else should be serviced by the TEF, or whatever, I don't know. Otherwise they will be overloaded for sure.
Anders Arpteg:The intent, I think, as you say, is clearly good. I just don't see how they would implement it, or who would really be allowed to go through this process. Everyone would want to; I don't think there's any company that wouldn't love to go through this.
Luis Martinez:Yeah, but perhaps one of the concerns for companies is how to deal with, for example, copyright or intellectual property issues when applying to these regulatory sandboxes and making their solutions operate there. What type of documentation should be handed in, and how do you guarantee protection of intellectual property when taking part in a regulatory sandbox?
Anders Arpteg:I think you need an AI approach for this: IMY should be using AI agents to help the companies, perhaps.
Henrik Göthberg:But can I, if we are leaving that topic, set up another topic here that is very close and adjacent? There are different ideas on how to set up a testing and evaluation facility. So now we're not really talking about assessing documentation, but about how to assess code, or understand the real technology, the real engineering, the real data. And here there are two lines of thought, and I want to hear how Assa Abloy is thinking about it. Imagine you've built a model and you now need a formal evaluation of the code, so to speak. Should you take your code and your system and bring them to an authority and upload them there? Or does this need to be more of a federated approach, as they call it, like an API where you can connect in and basically bring the testing to your system?
Henrik Göthberg:So the core question is: if we want to test and validate code, should we bring the code to the testing facility, or should the testing facility be able to plug in? One is a central approach and one is more of a federated approach, as they call it. Have you thought about this? Because now we are going beyond documentation assessment; now we're going into the hardcore.
Luis Martinez:Looking at the engineering, yeah. And what would be the best approach?
Henrik Göthberg:Do we bring the system to the tester, or do we bring the testing mechanism to the system? Do you see what I mean? Have you thought about this practically?
Luis Martinez:Practically. It's an interesting question because it might bring some potential risks.
Henrik Göthberg:Both ways it's tricky.
Luis Martinez:Yeah, it's tricky, and this is something we need to think about, because it comes with certain risks and challenges. Let me go from this to something I experienced when I was working for Ericsson, with the certification of base stations. In China at that time, to certify a product, the authorities requested basically the blueprints of the base station, the antenna and the design. So imagine releasing that type of material to the Chinese authorities. Internally there were concerns: should we do that and basically reveal all our intellectual property, just to fulfill a compliance requirement? What should we do in such a case? Because the idea should be to evaluate the system by the outcome it generates, not by how it is structured. So, up to a certain point, it would be good to weigh the need to release, let's say, reveal, the code and the mechanisms behind the outcome, against evaluating the system based on its outcome.
Anders Arpteg:I guess the challenge here has a number of dimensions. One is the security aspect, of course: if you release it to the Chinese authorities, it is highly likely to be copied in some way or form. But if we just take the AI Act, and you are potentially classified as high risk, there are also requirements on describing the training method and even what data sources you have used, et cetera, and that could be considered part of the intellectual property you have. Some companies would react really badly to being told to give away their secrets, what makes their system good. Do you have any thoughts about that?
Luis Martinez:Yeah, but in this case, most of the certification process according to the standards will be based on documentation and sharing documentation.
Anders Arpteg:But it would have to be shared in a public database as well, right? The data, the samples, how it works, what data you used, the method, everything. I mean, why would a company that is building a commercial business model on that give away its secrets?
Luis Martinez:Why should they give away their secrets? It's tricky, because on one hand, what they are looking for from a compliance perspective is to understand how the system generates the information, how the system generates a certain output. The intent is good, but the effect can be bad, and it's also a question of how much to rely on the competence and integrity of the authorities, or the notified bodies, you deliver this documentation to.
Henrik Göthberg:But this is why, when I talk to Petra about this, this is where maybe a more federated approach is more feasible: basically a certification service that you bring inside, into Scania. So Scania is thinking: we actually need to look at stuff and find issues internally. Forget about compliance for a moment; from a pure risk-management point of view, we want to verify and find risks and bias, whether the system is bad, before we release it.
Henrik Göthberg:So: how do we build that verification service internally? And potentially, and this is the Scania argument, if it can be built to harmonized standards, to a certain level of quality, such that if you have verified your system this way it is considered safe, then the TEF could release that as a component, like any other software component you buy, something you can use internally in Scania. That would take away the whole problem of sending all your documentation away, or sending the system itself away. So it's rethinking it from another perspective: the test comes to the system, so that we don't divulge our IP and all that.
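To make the "bring the test to the system" idea concrete, here is a minimal sketch in Python. Every name in it is hypothetical, there is no such regulator API today: the authority ships a test suite, the company runs it locally against its own model, and only an aggregate pass/fail attestation leaves the building.

```python
import hashlib
import json

# Hypothetical regulator-supplied test cases: an input plus an acceptance rule.
# In a real scheme these would arrive signed from the authority.
TEST_SUITE = [
    {"input": [0.2, 0.9], "max_output": 1.0},
    {"input": [0.5, 0.1], "max_output": 1.0},
]

def run_local_certification(model_fn, suite) -> dict:
    """Run the regulator's tests inside the company's own environment.
    Only the aggregate result leaves; weights, data and code stay internal."""
    results = [model_fn(case["input"]) <= case["max_output"] for case in suite]
    attestation = {
        "passed": all(results),
        "n_tests": len(results),
        # Hash proves which suite version was run, without revealing the model.
        "suite_hash": hashlib.sha256(json.dumps(suite).encode()).hexdigest(),
    }
    return attestation  # in practice this would be cryptographically signed

# A stand-in model: any callable from input features to a score.
toy_model = lambda x: sum(x) / len(x)
print(run_local_certification(toy_model, TEST_SUITE))
```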
Luis Martinez:In some way we'd basically be assessing the system by the output generated, I don't know.
Henrik Göthberg:I mean, when you peel the onion on this: how do you assess the system, right? It's the dimensionality of the data and the data quality: the data sets, are they robust enough, or will they drift? That's one type of test you would do. Another would be on the model itself: is there model drift? All these things that you know better than I do.
Henrik Göthberg:As data scientists we have different ways to test this, and we have all done it; professional data scientists do this all the time: validity, reliability, whatever we do in order to say what the problems with the system are, how accurate it is, and so on. Now, imagine a mechanism that the authority develops, and that mechanism is like a guardian stamp, but you need to be able to buy it and plug it into your own environment as a way to get a certified, sound system. These are crazy brainstorming ideas, but the other way, where you bring your whole system to someone else's facility, has so many problems with it, you see.
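For the drift checks mentioned here, a minimal sketch in Python (the feature values and the significance threshold are illustrative, not from the episode) could compare the distribution a model was trained on against what production sees, using a two-sample Kolmogorov-Smirnov test from scipy:

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted(train_values, live_values, alpha=0.01) -> bool:
    """Flag data drift if the live distribution of a feature differs
    significantly from the training distribution (two-sample KS test)."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # what the model saw
live = rng.normal(loc=0.4, scale=1.0, size=5_000)   # what production sees
print("drift detected:", drifted(train, live))       # True: the mean shifted
```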
Anders Arpteg:I'd love to move to another question if we could, unless you have some last remarks.
Henrik Göthberg:I want to hear your comment on that first.
Anders Arpteg:No, please go ahead.
Henrik Göthberg:Could you finish on that? I just want to hear your take.
Anders Arpteg:Can you just repeat the question?
Henrik Göthberg:I mean the whole idea: can we make a certification service like a software plugin that you can take in and use?
Anders Arpteg:Is that a more feasible approach than bringing the system to the tester, which I think is very unfeasible given the IP problem and all that? I think we still need a documentation-based approach, similar to what Elon Musk does with external authorities. Giving full access to source code or something is not a proper approach. So I think it's more about finding the proper balance of what you need to document and describe; and then, if a company lies in that, they will get fined.
Luis Martinez:But there is also something I forgot to mention when talking about harmonized standards: once they are implemented, there is also the possibility of applying a self-assessment process. So, instead of going to a third party to assess the conformity of your product, you can just show that you are following the standards and following the metrics.
Anders Arpteg:And that's the normal way of doing it, right? In most cases, most companies don't need an external certification. They simply do the self-assessment, and if they get sued, they are required to show that they have done it. Yeah, right, exactly, and then it's not really a problem.
Henrik Göthberg:I would say that, from a practicality point of view, maybe we're getting ahead of ourselves: oh, we need to build a verification model and we need to test the code. Are we really doing that in other industries, in other areas, or is this a documentation process?
Anders Arpteg:For one, so many other fields, like the medical field, are heavily, heavily regulated, and it still works. So we know it can be done; I don't think that's the problem. The problem is that we don't have clear guidance and standards for how to do it. A really good point.
Anders Arpteg:But perhaps we can move to another question; the time is flying away here, so we soon have to start to finish off. I would love to hear your thoughts, Luis, on a more philosophical level: what do you think the proper level of regulation is? We know that too little regulation is horrible; if we lived in anarchy, as you said in the beginning, that would of course be a situation no one wants, not in China, not in the US, and certainly not in Europe or Sweden. But too much regulation is of course also problematic: it will hinder innovation, and it can be a very big problem for companies that don't dare to use data or AI. And just to give some more background before I hear your thinking: the US recently, like a month ago, released their AI Action Plan, and Elon Musk's fingerprints were all over it, I can tell you.
Anders Arpteg:One thing they said there is that they really want to deregulate a lot in the US and make sure they can take the lead; they want to be the world leader in AI, of course, and regulation shouldn't hinder that progress in any way. We've heard a lot of similar thoughts, not least from the Swedish prime minister, actually, who says we need to consider deregulating in Sweden and Europe. So I guess the question, Luis, is this: no one is asking for anarchy, but taking a balanced view, is the level of regulation in Europe and Sweden too high?
Luis Martinez:For AI? I have to say that I like the approach of the AI Act. And when people say: no, that's over-regulation, this is a terrible attack on innovation because they are trying to over-regulate everything, my first thought is that perhaps it would be good if they took a look and read through the AI Act and tried to identify what is really regulated.
Anders Arpteg:The AI Act is like 300-plus pages, right? It's not really a consumable piece of material. Yeah, that's a communication problem; I think the problem is communication.
Luis Martinez:It's a communication problem, because probably not everyone has my role, where the job is to go through the regulation and interpret it, and that's a communication problem. But if you look at the structure of this regulation, what really requires control by the authorities and demands strict compliance is a small portion of the whole AI ecosystem, because it's only targeting the high-risk systems.
Anders Arpteg:But no one knows that, right? And that's exactly the problem. So is this a problem of over-regulation, or a problem of under-communication?
Luis Martinez:Or poor communication. Because if you look at the pyramid, this famous pyramid showing the risk levels, it's just the tip of the pyramid that's heavily regulated.
Anders Arpteg:There are some requirements at the other levels as well, but they are minimal. At the limited-risk level it's mainly transparency; there are still some requirements, it's not zero. But I agree, the high-risk level has by far the strongest requirements.
Luis Martinez:Yeah, for example, informing a person that they are interacting with an AI.
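A rough sketch of the risk pyramid being described, as a small Python structure; the tiers are from the AI Act, while the example systems and one-line obligation summaries are common illustrations, not an authoritative legal mapping:

```python
# The AI Act's risk pyramid, roughly as described above.
# Obligation summaries and examples are illustrative, not legal advice.
RISK_PYRAMID = {
    "unacceptable": {"obligations": "prohibited outright",
                     "example": "social scoring"},
    "high": {"obligations": "conformity assessment, documentation, oversight",
             "example": "CV screening for hiring"},
    "limited": {"obligations": "transparency, e.g. tell users it's an AI",
                "example": "customer-facing chatbot"},
    "minimal": {"obligations": "no new obligations beyond existing law",
                "example": "spam filter"},
}

for tier, info in RISK_PYRAMID.items():
    print(f"{tier}: {info['obligations']} (e.g. {info['example']})")
```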
Henrik Göthberg:But I have an anecdote that sort of backs up your thinking here. I remember when GDPR came into Vattenfall, and we took it super seriously, because we had four or five million consumer customers, electricity bills and all of that. And because we were a big company, and a state-owned company at that, we fucking needed to know this better than anyone. We put all the lawyers we had on it and basically read and dissected the law, in and out, better than the guys who wrote it, until we found a way to educate people on legitimate interest.
Henrik Göthberg:You know: when can you do this? Do you have a legitimate reason to use this data? So we got to a point where we didn't need to do the consent thing, the cookie thing, for every fucking thing, because in order to take care of you as a customer, to manage your electricity, we need to know certain things for the service to work. So in the end it came back to a super high degree of knowledge and understanding of the law, and then understanding how to make that simple.
Anders Arpteg:So we have a problem of communication, yes. But regarding the level of regulation specifically: let's imagine we had perfect communication, which we don't, but imagine we had. Is the level of regulation in Europe too high, with the current AI Act and the other acts we have? The US claims so, they believe it very much; they even believe the level of regulation in the US, which is potentially lower than Europe's, is too high, so they want to deregulate a lot. And we have a lot of politicians now; even Ulf Kristersson came out and said we need to pause the AI Act because it's not ready, companies are going to be hindered by it, they don't think it's proper to do it like this. Do you think the level is too high or not?
Luis Martinez:I think there might be some overlaps that need to be adjusted, some connections between regulations that could be, let's say, cleaned up. There are some things from GDPR that are already in the AI Act, there are some elements from the Data Act, and then the cybersecurity rules and the Radio Equipment Directive overlap and even conflict in some ways.
Luis Martinez:And conflict, yes, delivering a conflicting message to organizations. The Radio Equipment Directive was just updated, so you need to look at its cybersecurity requirements, but the Cyber Resilience Act is also requesting cybersecurity elements. So it's probably good to clean up and narrow down the scope of some of these regulations, so that companies get a clear perspective on what is really needed and what we really need to take care of, without these overlaps.
Anders Arpteg:A good point, because it's so easy to add new regulation, but very seldom do we actually remove a regulation, right? So it creates fragmentation and overlap, and that is becoming more complex than ever.
Henrik Göthberg:I really like that. If I summarize what I hear now: we can understand deregulation in two ways. Someone can understand deregulation as simply taking away regulation, going bananas, anarchy. But maybe we shouldn't use the word deregulation at all; rather, take away ambiguity, take away overlaps. Right now, the reason we can say we are over-regulated is that we have not one act but four or five different acts, and that creates regulatory uncertainty. From that perspective, as long as we get a very clean, simplified setup, the net of everything is not less regulation; it's fine. But we need to deregulate in the sense of removing complexity.
Anders Arpteg:In software engineering you'd call it refactoring.
Luis Martinez:That's perfect.
Henrik Göthberg:I love it. We're not going to deregulate, we're going to refactor regulation.
Anders Arpteg:I love it. But still, let me phrase it as a super clear question: are you afraid that the level of regulation in Europe is hindering innovation for companies in Europe compared to other parts of the world?
Luis Martinez:I would say that, rather than the level of regulation itself, it's the fear and the lack of knowledge, the uncertainty about what the regulation actually states, that is hindering innovation.
Anders Arpteg:That's more the problem. Yes, agreed.
Henrik Göthberg:But I think you nailed it. Let's not use the word deregulate; we need to refactor regulation. That's a very interesting way of putting it.
Anders Arpteg:Yeah, I think so. Cool. Okay, we should start to wrap up here a bit. Let me see. We've spoken a lot about compliance and regulation, but there are a lot of other ethical challenges that AI creates for society. What are you afraid of when it comes to AI and societal challenges beyond regulation?
Luis Martinez:I would say it's what we can already see, for example, with the weaponization of AI.
Anders Arpteg:The weaponization of AI.
Luis Martinez:AI as a tool or as a component in weapons and the military industry. We have reached a level of democratization of AI where it is basically at everyone's fingertips. So it can be misused, and it can be incorporated into weapons and war components. That is beyond what we can imagine, and in those scenarios the ethical uncertainties are quite broad.
Anders Arpteg:And then, sorry for going back to regulation, but the AI Act, for example, doesn't cover military use, right? Nor would any kind of bad actor ever care about it. Cybersecurity criminals, why would they ever care? So how does the AI Act really help in these bad-actor use cases? Do you think we have some way to manage that? Because the AI Act, at least, will not handle it at all.
Luis Martinez:Yeah, but I think one of the contributions of the Act is that it at least makes us realize the risks that come along with AI, the risks these bad actors can bring to the table. It becomes part of the conversation when we talk about AI; it makes us aware of the risks.
Anders Arpteg:There's no question that the regulation has a good purpose; we don't even need to argue that. It's just: how do you get to the bad actors? To give an example, red teaming is a technique that a lot of AI companies use, where a set of people act as bad actors and do their best to hack the model, remove the safeguards and use it for bad purposes, trying it all out before anything is released. Doing red teaming is good practice for anyone releasing a model. So someone built this kind of AI-agent framework for doing red teaming.
Anders Arpteg:Then, suddenly, the bad actors got hold of it and used it to attack companies instead. It's ironic. And of course they won't care, because they know what they're doing is already illegal. If they use it to hack into a system or for other bad things, they are already breaking the law. So in some sense, what I'm asking is: we have a lot of regulation, even though it's not AI-specific, that already catches a lot of bad use cases. If you use AI to break into a system, we already have regulation for that. If you use it to kill a person, we already have laws for that. A lot of AI use cases are already captured by normal regulation.
Anders Arpteg:Do you see, this is such a broad topic. We have spoken about this a bit in the past, Henrik. But do we really need so much AI-specific regulation, focused on the technique rather than the use case, when most use cases are potentially covered by existing regulation already? I don't know. Wouldn't you say so?
Luis Martinez:Again, I think that at the core of this regulation, when they devised it, they were thinking about the protection of fundamental rights. We have laws for that, yes, we have laws about how to protect some of these fundamental rights. But how do you make them accessible and understandable in a new context like AI, for a population that is not aware of the technology, that is not following the technology? Probably the intention was also to reflect them in this new field. I agree that there are regulations already in place that might take care of some of these criminal offenses and bad behaviors where AI is basically a tool to achieve them.
Luis Martinez:It's a technique, right, exactly. Perhaps that is the scenario with the prohibited practices. But when we talk about the use, the deployment or the development of AI, what they try to do in this regulation is to reflect some European philosophy, some European values, and I think it's valuable to see in a tangible way how that is reflected when we are deploying technology.
Henrik Göthberg:To some degree I'm very much in the camp, philosophically, that asks why we need AI regulation at all when the application of AI is already regulated by normal law.
Henrik Göthberg:You cannot kill people, you cannot do this or that, and I come from that camp quite strongly, almost saying we don't need AI regulation, we simply need to make sure the underlying regulation is strong. But I'm flipping a little bit on this. When we've worked through it and tried to take a step back: regulation is not the end game, it's a way for authorities to make us risk-conscious. The way I then interpret the need for the AI Act is that this technology is so new, and there is an AI divide in know-how and knowledge of what it can do and how it works. So by putting regulation on top of AI, you create an arena for risk consciousness around AI. My fundamental philosophy is that it shouldn't be needed, but I can understand why it is needed from a risk-consciousness point of view.
Luis Martinez:It's a good point, and it's something we have also discussed internally: the question of developing a compliance framework just for AI. Something I mentioned within the organization is this:
Luis Martinez:This company is well known for certain principles. We have a mission, we have a vision, and we have guiding principles that we reflect in the products we currently deliver to the market. So I agree with the point that when we develop a new product using AI, the idea is basically to carry those same principles into the AI development. Why would we need to define responsible-AI principles for the organization if we have principles outside the scope of AI that are also applicable in the context of AI? But as one of my colleagues mentioned, this is a new field, a new arena, and we probably need to bring some kind of reminder, some mental or guiding tool, to the people using this technology, so that the principles we already live by are translated into the new digital arena.
Henrik Göthberg:It's an interesting one, because you could potentially have gone a different route: fundamentally live by the real, underlying regulation, and then make the AI Act about how those existing laws apply, how they are implicated or how they manifest in AI. You could have approached the whole thing as: we don't need a new regulation, we need an amendment.
Anders Arpteg:To fix the old laws, and also a standard for how to combine them.
Henrik Göthberg:I heard the argument for why they didn't do that: they realized it would be spread all over the place and very hard to find, so they thought they needed to centralize. But you could just have a standard instead. And if you think about it, what is most important here: the regulation, or the harmonized standards that come out of it? Which will, in the end, deliver the real value from a practical point of view?
Luis Martinez:I think the harmonized standards are where it gets real. Exactly, that's the real stuff we are going to implement and use to guide us. But anyway, it's a tricky one.
Anders Arpteg:It's a tricky one, and I hope everyone understands that none of us, and I think no one in China or the US either, believes that we should have no regulation. I think regulation is super important, and it would be horrible if we didn't have it.
Henrik Göthberg:Just to make that super clear. It's interesting: I listened to a couple of the latest podcasts with Geoffrey Hinton, who is now quite outspoken. He, Hassabis and many others basically say that we cannot expect the money makers to also self-regulate, so there needs to be a way to steer this, like we have in air traffic or the medical field. But the problem is: are we doing the right things? I'm thinking we spend too much time on a stupid legal text when we should be spending that time on the harmonized-standards approach. We're getting stuck on the wrong thing, that's my pet peeve here.
Anders Arpteg:Anyway, Luis, time is flying away and I'd like to ask you a rather philosophical question. Do you believe AGI will come? I think so. Any timeframe, potentially, for when it will happen?
Luis Martinez:Probably, I would say, yeah, five to ten years.
Henrik Göthberg:I would say it's the whole thing of defining it first.
Anders Arpteg:Probably it will be a spectrum. Of course it will continue to increase in small iterations, so it's super hard to say exactly when it will happen.
Henrik Göthberg:But it's not an if, it's a when. I think so too.
Anders Arpteg:Cool. But when it does arrive, we can think about two extremes. One extreme is the horrible Terminator or Matrix world, where the AI will try to kill us all and that will be the end of civilization as we know it. The other extreme is the utopian future, like Nick Bostrom wrote about in his book Deep Utopia, where we live in a world of abundance, where the cost of intelligence, products and services goes to zero, a Star Trek kind of world, and humans are basically free to pursue their freedom, creativity and happiness as they see fit. Where do you stand on this spectrum?
Luis Martinez:I would add a third alternative: the WALL-E alternative. Have you watched the WALL-E movie? What happened to the population there?
Henrik Göthberg:Everybody is fat. That utopia was also dystopian.
Luis Martinez:Yeah, exactly. It was a kind of dystopia. One of the concerns I have with the situation, and we're probably starting to see it already, is the impact of AI use on critical thinking.
Luis Martinez:We are starting to see a reduction in critical thinking about what AI is generating. More and more often, people just take for granted that what AI generates is the truth.
Henrik Göthberg:Making us lazy.
Luis Martinez:And it's making us lazy, not thinking.
Anders Arpteg:Some people claim it's making people stupider. There are actually scientific experiments on it.
Henrik Göthberg:I've seen scientific research claiming that we get stupider by using it. That's what you're onto here.
Luis Martinez:Probably, we will see. I don't think this will be the apocalypse or the end of the world, but I have some concerns about how society, how humanity, will lose certain abilities if we don't take action to keep AI as a nice tool that leverages our potential, that helps us and boosts our potential.
Anders Arpteg:And to fix our illnesses, poverty, education.
Luis Martinez:Yeah, exactly, instead of replacing us. Because in some cases it seems we are just giving it the role of representing us, like the kids at school basically using it to write their reports.
Henrik Göthberg:It's good in a sense, right? Yeah, but the child has not learned a thing if they don't do it themselves. It's a tricky one, because the whole spectrum of dystopia here comes down to the fact that we need to take the reins and have a very clear objective for how we want this to go. That's number one. Another key comment, which a guest of ours once made and which I agree with more and more: why wouldn't we have both at the same time in the world? There have always been rich and poor, utopia and dystopia at the same time. We have war zones in some parts of the world and quite healthy nations in other parts. So it's not really a question of full utopia or full dystopia.
Luis Martinez:The reality is both. I would dare to say that the level of impact, the level of presence of this utopia or dystopia connected to AI, will differ depending on the country, depending on the region. The impact will probably be different in rich societies compared to poor areas of the world. That's my view.
Henrik Göthberg:Will it increase the divide between rich and poor, between the ones with AI and the ones without? That's a very real risk, of course. We're talking about people who can work at cybernetic speed, like cyborgs, versus the rest of us. And will the big divide towards the few tech giants in the US and China continue to increase, or will it decrease?
Anders Arpteg:Some people claim, and I heard this and thought it was rather wise, that the winner after AGI arrives is whoever has the capital and the electricity. Electricity, I guess, AGI can help with, by fixing fusion or something. But with capital, you can think of which parts of the world or which companies have it, and then you know the answer, which is potentially a bit dystopian, I would say.
Henrik Göthberg:Yeah, because we get the fundamental distribution problem here: how to have equality, inclusiveness and diversity in a society where everybody has their piece of it. And this is where we've had another, deeper philosophical question: why open source? We had a guest who drew the parallel of how open source and Linux saved Africa. Think back to what happened when something was available in abundance, so people could innovate on their own, versus everything having to be paid for and proprietary.
Anders Arpteg:So, Luis, are you more positive than negative about AI?
Luis Martinez:I would say that I'm in between. I'm both.
Anders Arpteg:Luis Martinez, it's been a pleasure to have you here. I hope you can stay on for some additional off-camera discussions. I have a lot of questions I still want to ask you, so I'd love to have a continued discussion. But thank you so much for coming to the AI After Work podcast.
Luis Martinez:Thank you so much.