AIAW Podcast

E158 - Where AI Meets Intellectual Property - Rasmus Fredlund and Daniel Enetoft

Hyperight Season 10 Episode 16

Join us on AIAW Podcast Episode 158 as we delve into the complex world of AI and intellectual property with Rasmus Fredlund and Daniel Enetoft, European Patent Attorneys at EIP. From the challenges of patenting AI-generated inventions to navigating open-source strategies, the EU AI Act, and global regulatory contrasts, this episode is a must-listen for anyone at the crossroads of tech, law, and innovation. Discover how legal frameworks are evolving to meet the pace of AI and what it means for the future of ethical leadership and invention. 

Follow us on youtube: https://www.youtube.com/@aiawpodcast

Daniel Enetoft:

expressions in the letters, and the same way of explaining why this or that cultural value we have applies to them.

Anders Arpteg:

So they were talking about their personal input, based just on the company and the job ad.

Daniel Enetoft:

Yeah, exactly, and they even had sort of the same reason as to why diversity, I think, was good in this line of work and so on.

Rasmus Fredlund:

Was there any sign of their CV being used as input as well, or was it just?

Daniel Enetoft:

Yeah, I don't remember. I just remember they had the same sort of language in several of the letters.

Anders Arpteg:

We have some collaboration with a university in Norway, and they basically forbid using AI for any kind of input they provide for the homework and whatnot, and I think that's wrong in some sense. Of course they need to review it or have a lot of manual input to it, but to use nothing at all? Okay. But going back to your use case, it's basically that you had a job ad for trainees, people just out of school, or PhDs, okay.

Daniel Enetoft:

And then some people applied and you could see it was AI generated, more or less. Both because of the style, since I've been using AI a lot I know what the style of the language is, and also because it was so similar between the personal letters. It was obvious.

Anders Arpteg:

So multiple people had a very similar kind of letter? Exactly, exactly.

Daniel Enetoft:

And those were removed, all of them, I think, because the others had good experience and so on. But it was a bad thing from our point of view at least. Yeah, I guess I can see that.

Anders Arpteg:

But if you were to give some advice to someone seeing a trainee ad like that, what would you recommend? To not use AI at all?

Daniel Enetoft:

At least start with thinking on your own: why would I like to have this job? Why would I be a good fit for it? And then, at the very least, prompt based on that.

Anders Arpteg:

Yeah, it's a tough situation, but I guess we will have to adjust to a future where everyone is using AI for any kind of content that's being produced, right?

Daniel Enetoft:

Yeah, absolutely.

Anders Arpteg:

Well, with that, I'm very much looking forward to a discussion about legal matters when it comes to AI, patents and IP rights, and having two experts like you here to discuss that will be very interesting. So I'd like to start by welcoming Rasmus Fredlund, right? Yes, that's correct. Please introduce yourself. You're an authorized patent attorney at EIP, right?

Rasmus Fredlund:

Exactly yes, I'm a European patent attorney. I've been with EIP since September last year. I have a background in mechanical engineering. I studied vehicle technology at KTH.

Anders Arpteg:

Yeah.

Rasmus Fredlund:

And well, I've been mainly working with mechanical inventions, medical devices, vehicles and things like that.

Anders Arpteg:

But for some reason you turned to something else?

Rasmus Fredlund:

Yeah, well, when I joined my first law firm I was introduced to telecom, so I started working with telecom back then. Okay, telecom in what way, for some company? Well, for telecom companies, protecting their inventions. And it's quite fast-paced, because the inventions, or at least the ideas behind them, are entered into the standards.

Anders Arpteg:

Everyone wants to protect their contribution to the standard, so there was a lot of work in the telecom industry. Exactly, yeah, there was a lot of work within that field.

Rasmus Fredlund:

So I kind of transitioned over to telecom back then, and I've worked mainly with telecom for the last 10 years. Ten years. And what's your current role? My current role.

Anders Arpteg:

Well, it's kind of the same, but now you're at EIP. At EIP, yeah. And could you just briefly describe what EIP is?

Rasmus Fredlund:

EIP is, well, let's see, quite an international patent firm. The company was founded in the UK, so we have offices in London, Cardiff, Bath and Leeds, but now we're also in Sweden. So you have an office here in Stockholm? We have an office in Stockholm, and an office in Malmö.

Anders Arpteg:

Malmö as well. How many people are you, approximately, throughout the company?

Rasmus Fredlund:

200 something, I think. And then we also have an office in Düsseldorf in Germany and an office in Denver in the US.

Anders Arpteg:

I think this will be a super interesting topic, and I actually have a patent myself, so it will be fun to discuss what your thinking is about this going forward. Love to hear more about that. Yes, I'm happy to talk about that. But before that, I also would love to introduce Daniel Enetoft. Right, yeah. And you're also a patent attorney at EIP, right? Yeah, exactly. And a partner as well.

Daniel Enetoft:

Yeah, I am. So I was one of the founders of the Swedish part of EIP three years ago. Well, I have a background in computer science. I started out as a software engineer and worked for five years in medtech, image analysis, stuff like that. I immediately got thrown into all sorts of software inventions, where AI has become more and more common over the last, I don't know, seven or eight years. Back in 2015 or 2014, I think, it started to increase. So I work a lot with different machine learning inventions.

Anders Arpteg:

Okay, so more like IP or patent-related questions connected to AI.

Daniel Enetoft:

Yeah, exactly. So protecting AI inventions and making sure that our clients get the best possible protection for their R&D, more or less.

Anders Arpteg:

Super interesting. And perhaps before that: the idea of patents on AI or software or mathematics is an interesting topic in itself. But given that you have worked for a number of years in IP, and given the recent progress in AI that we've seen just in the last three years, it must have changed a lot, right? Have you seen a change in the type of tasks that you have to work with when it comes to protecting more AI-related work?

Daniel Enetoft:

It's definitely increased, I mean the number of cases. And what we discussed before is that even Rasmus, who works a lot with mechanical companies, is starting to see more and more AI. You see that in every industry, that they are starting to use AI to improve their systems and all that.

Anders Arpteg:

So you see it as well. Yeah, exactly, AI is coming into the field a lot. Yeah, it is. Okay, can we just jump to that question then? I mean, what would you say the pros and cons of being able to patent AI are? Perhaps you could start by giving some example that you have recently worked with that was some kind of patent related to AI.

Daniel Enetoft:

Yeah, so I had a medtech invention just recently where they used AI to determine if a medical device was put in the right place in the body. They take sensor data output from that device and use it in various ways to inform the doctor whether he should push it further into the body or not.

Anders Arpteg:

So I mean, it's everywhere. But otherwise it's a lot of image analysis applications, of course, within surveillance and monitoring. And if we just backtrack a bit here, there are a lot of people wondering whether patents are a good or bad thing. But I guess, especially in medicine, it would be really hard to even do the development and research necessary unless you could have a patent. If you were to backtrack, why do we need patents?

Rasmus Fredlund:

Well, development is expensive, so there needs to be some way to get your money back for that, and if you can't patent it and you don't have any rights, anyone else could just take it.

Anders Arpteg:

Just once you're finished with your development, everyone could steal your idea. So someone develops a COVID vaccine, they deploy it, and someone else just copies it and produces it for so much less. Then no one could really do the research necessary. Well, they could, but...

Rasmus Fredlund:

There would be no incentive to invest that kind of money to develop the vaccine if you wouldn't get a return on that investment.

Daniel Enetoft:

And another thing is that, I mean, the patent right is a sort of trade. You trade by explaining what you have invented and you get the patent right back, so you're alone on your market, but in return you explain what you've done. Without patents, everyone would keep their development secret instead. So, at least as I see it, it's a good way of trading an explanation of technical concepts against getting a patent right back.

Anders Arpteg:

So it's not just the financial incentive; it's also about spreading knowledge in some sense, you would say?

Daniel Enetoft:

Yeah, exactly, I think so. I mean, otherwise people would for sure keep it a secret. And in AI, for example, in reality you could keep almost everything a secret, because it's more or less a black box. But people patent it anyway because they get a financial incentive for it. On the flip side, then you have to tell everyone what you're doing, and others can use that as a starting point for the next development cycle.

Anders Arpteg:

And how does it work? If you continue with your example of the medtech company, they have some kind of sensor device that they put in the body somehow. I guess they need to have some level of innovation for it to be patentable, right? Or how does it work?

Daniel Enetoft:

Yeah, exactly. So AI and algorithms as such are not patentable.

Anders Arpteg:

Math. Okay.

Daniel Enetoft:

But the use, the application of the math or whatever algorithm, that's patentable. So you need to have a technical advantage, you need to achieve something technical, otherwise you can't patent it.

Anders Arpteg:

Yeah, so a solution to a technical problem that uses AI. Exactly, that can be patented. And just elaborate on that, because I guess the US and Europe are a bit different as well, right? So in the US you can patent software, or algorithms in some sense?

Daniel Enetoft:

Yeah, actually, just a couple of years ago it was much easier to patent stuff in the US, and you can still patent somewhat broader there than you can in Europe. But they have actually also started to get to a point where the technical advantage is super important. So it's leveling out, actually.

Anders Arpteg:

Okay, so it's becoming a bit more similar between Europe and the US, what you can patent. Okay, so just to try to understand, what does the use versus the algorithm really mean? So in your case with the sensor device, I guess it has some kind of AI in it that interprets the output from the device, or what was it?

Daniel Enetoft:

Yeah, I can give an example. Let's say you have invented a thing that makes it possible for different AI implementations to output the same data based on the same input data, even if the structure of the AI is different, or the machine on which you have implemented the AI is different. So in reality it should probably output different data, but you have done some student-teacher thing or whatever. That idea as such is not patentable. But let's say you implement it in a camera system where you want to be able to track people between cameras, and the cameras have different implementations of the AI and different hardware and so on. If you implement your smart idea of making sure that the output is similar even if the AI is different, in a tracking system, that could be patentable. Because then the technical problem is: how can I track an object between various cameras in the best way?
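
To make that student-teacher idea concrete, here is a minimal sketch. It assumes a PyTorch setup, and the two toy networks, layer sizes and training loop are illustrative assumptions rather than anything from the actual case discussed.

```python
# Hypothetical sketch: train a small "student" network (weaker camera) to produce
# the same embeddings as a larger "teacher" network (stronger camera), so that
# person re-identification works across cameras running different models.
import torch
import torch.nn as nn

class LargeCameraNet(nn.Module):
    """Stand-in for the model running on a powerful camera."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return self.backbone(x)

class SmallCameraNet(nn.Module):
    """Stand-in for the model running on a constrained camera."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=4, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, embed_dim),
        )

    def forward(self, x):
        return self.backbone(x)

teacher, student = LargeCameraNet(), SmallCameraNet()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

# Distillation loop: push the student's embedding toward the teacher's, so both
# cameras describe the same person with (nearly) the same vector.
for step in range(100):
    images = torch.randn(16, 3, 64, 64)          # dummy image crops of people
    with torch.no_grad():
        target = teacher(images)                 # teacher embedding (frozen)
    loss = nn.functional.mse_loss(student(images), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At inference time the embeddings from either camera are directly comparable, for example via cosine similarity between a track seen on camera A and candidates on camera B; that cross-camera tracking application is the kind of technical framing the discussion refers to.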

Anders Arpteg:

So would it be fair to say that it's really the application, not the internals or the algorithm, that is patentable?

Daniel Enetoft:

Yeah, exactly. Well, you can patent the internals if they specifically target specific hardware. So there's one more thing you can patent: an algorithm that fits into a chip of a specific architecture, for example. But that's the only other case you can patent, other than the implementation.

Anders Arpteg:

So for this specific example, then, the medtech innovation, how do you actually go about doing a patent for it? Someone needs to describe it. I actually did a patent myself, so I have some experience with it from the Spotify days. But if you take this example, how does someone from that company that you worked with go about creating a patent? If you could just describe the process.

Daniel Enetoft:

Yeah, so typically they have decided to patent something and then they come to us as experts. They may have some internal people with patent experience, or they may not. But typically, even if they have in-house patent experts, they don't have the time to do the actual work. So they come to someone like Rasmus or me, and they come with an invention disclosure, like a one-pager of their idea. But that's just the start.

Daniel Enetoft:

And then you need to interview the inventor, try to understand all the details, try to understand what is known since before, because we are not experts within that specific technical area. So you need to understand what the prior art is, what the state of the art was before. And then you work in discussion with the inventor, and with the business idea of the company in mind, because you want to patent something that will provide a business advantage to the owner of the patent. So it's a back and forth, typically two or three hours or something.

Rasmus Fredlund:

Yeah, to discuss and make sure that we have all the details we need to start drafting the first patent claims, of course, and then the description of the patent.

Anders Arpteg:

So you have a number of interviews, you start to gather this information, and then you start to produce some kind of patent application? Or what's the next step?

Rasmus Fredlund:

Usually we try to understand the scope of protection that we want to achieve, and that is done by drafting the patent claims, which are the part that defines the scope of protection, and that takes a while.

Daniel Enetoft:

Yeah, that's almost the hardest part: to define it and make sure that it provides business value and that it's new and inventive based on what we know about the prior art.

Rasmus Fredlund:

And that we don't limit the scope too much, because the inventors usually describe all of the details of their invention.

Anders Arpteg:

Yeah, right, but you want to keep it general? Oh, you want to keep it general.

Rasmus Fredlund:

Yes, as general as possible. So we usually start with a very broad first claim, and then we have fallbacks in dependent claims where we add further features to the first claim.

Anders Arpteg:

And then you do a lot of research, I guess, in other patents, and try to see what is really the unique part of this. Is that how it works?

Rasmus Fredlund:

Yeah, sometimes we do it and sometimes the client has already done it and provides that to us. But of course we need to review everything to make sure that we find something that is different from the prior art but still covers as much as possible.

Anders Arpteg:

Can I just try a claim on you, or a question? Because I heard someone saying that, supposedly, if you work for some company and you do a search in some kind of patent database and later find that, okay, this actually looks very close to what we just developed, but you ignore it and just continue to use it, versus if you didn't do the patent search at all, and then someone comes to sue you, they may look up all the emails or messages you have sent to colleagues and find out whether you actually did a search or not. And the fines you get for infringement of the patent are higher if you did do the search and knew about it. Is that true?

Daniel Enetoft:

Yeah, that's true in the US. So they have the double or triple damages. Is it triple? I think it's triple damages if you knew about something and just disregarded it, compared to if you didn't know anything.

Anders Arpteg:

So would your recommendation to a company be that, unless you really want to do a proper patent application, you should never even search for it?

Daniel Enetoft:

If you operate in the US, yeah. But still, I mean, the professional way to do it is to know about your competitors and what they have done, and, as we talked about before, reading their patents can help your development. But you shouldn't disregard something you know about, that's probably the advice.

Rasmus Fredlund:

And those are also two different aspects of patent law. Of course you want to know the prior art to draft the best patent application yourself. But if it's just a question of freedom to operate, are you allowed to do this, then if you want to know whether you're allowed to do it, you should do the search.

Anders Arpteg:

But if you just want to sell the product and hope for the best... Cool. And I'd love to get more into what the difference is between patenting AI innovations and traditional ones. But just before that, when you actually work with this, of course you interview the people, you try to understand what the patent claims really are and build up all this research. Do you use AI yourselves in the application process?

Daniel Enetoft:

Yeah, absolutely. For quick googling, or to summarize, like, what would the internet answer if I ask this question, and you get that in a condensed way. But you need to be very cautious, so you review it, or try to understand whether there's been hallucination going on, because that's quite common.

Anders Arpteg:

Yeah, that makes a lot of sense. Okay, so let's go into that a bit. If you were to try to explain the difference between having a patent for a traditional innovation versus something that is a bit more AI related, do you see some kind of difference in trying to make a patent for an AI innovation versus a traditional innovation?

Daniel Enetoft:

It's the technical effect, I think, that you need to get something that is patentable. Normal inventions, like in the mechanical area, are always technical, but in software, and AI in particular, there's a lot of stuff that is not technical, like, I don't know, user interfaces or algorithmic inventions.

Anders Arpteg:

What about training data? I mean, if an AI is used to do the sensor analysis, if you call it that, and it's trained on a lot of training data that potentially comes from some other, very similar sensor device, can that cause the patent application to not work, so to speak?

Daniel Enetoft:

Well, on the other hand, you can actually patent the training of the AI as well. Oh really? Yes, because that's part of solving the technical problem downstream. So you can patent ways of producing training data, for example filtering training data, or augmenting training data, or whatever.

Anders Arpteg:

But isn't that an algorithm? Can you really patent that? No, because the end use is to improve, I don't know, facial recognition or whatever technical application you can phrase it as, and then it's okay to patent the way you collect the data for training the model.

Daniel Enetoft:

Yeah, I mean, in the case I talked about before, we patented both how to get the training data using the same device and chunking up the data in smart ways, and then using a trained AI model and inputting data from the actual sensor in smart ways. So we did both.
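
As a small illustration of what "chunking up the data in smart ways" could look like in practice, here is a sketch. The windowing scheme, sizes and labels are assumptions for illustration only, not details from the actual patented method.

```python
# Hypothetical sketch: split a continuous sensor trace into overlapping windows
# so each window becomes one training example for a model that judges placement.
import numpy as np

def chunk_signal(signal: np.ndarray, window: int = 256, stride: int = 128) -> np.ndarray:
    """Split a 1-D sensor trace into overlapping fixed-length windows."""
    starts = range(0, len(signal) - window + 1, stride)
    return np.stack([signal[s:s + window] for s in starts])

trace = np.random.randn(5_000)          # dummy trace recorded while the device is inserted
windows = chunk_signal(trace)           # shape: (num_windows, 256)
labels = np.zeros(len(windows))         # e.g. 0 = "not yet in place", filled in by annotation
print(windows.shape, labels.shape)
```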

Anders Arpteg:

Interesting. Okay, so in this case, the sensor device had some AI in it, I guess?

Daniel Enetoft:

Yeah, it was a machine learning algorithm to learn how to understand whether the device was far enough into the body or not.

Anders Arpteg:

Okay, so do you see any different challenges? Is it harder or easier in some way to patent AI innovations, or would you say it's rather similar, that it's still just the application that matters?

Daniel Enetoft:

I think there's so much prior art now, so it's becoming harder and harder. I mean, the development in this particular field is sort of exploding, as you know. So from that perspective, from the prior art perspective, it's difficult, possibly more difficult.

Anders Arpteg:

But on the other hand... What do you mean by prior art? Can you just elaborate on what it means?

Daniel Enetoft:

Yep, prior art is a concept covering what is known before, so everything that is known before. The patent office can use whatever information was public before you filed a patent application to argue that your invention is not inventive. So they can find whatever; I mean, they can find a blog post, or a post on Stack Overflow, for example. I've seen that as prior art, and of course a lot of patents as prior art, but also research articles or whatever. Everything that is publicly available is prior art.

Anders Arpteg:

And just so I understand it, if you take these kinds of huge foundational models that we have today, they are trained on so much data in general, scraped from the web and books and whatnot. Perhaps it's more a question of IP, but isn't this a problem? Or do you think it's okay to use this kind of data without really having the IP for it? What's your thinking here? No, I mean, you're laughing a bit.

Rasmus Fredlund:

Of course, I mean, a lot of the data used is copyrighted material, and that's a problem unless the owner of the copyright is aware of the use and can claim their rights.

Anders Arpteg:

But if I were to ask a stupid question: if I, as a human, go to the New York Times and read some articles, and then use that information in my head to write another article that is, of course, inspired by some of the content there, in what way is that different from an AI doing the same?

Daniel Enetoft:

I guess the scale of things, that it's possible for an AI to output stuff at a scale that is not possible for a person. I mean, I've heard that many times: well, I can listen to a Metallica song and produce a similar Metallica song, so why can't the AI train on Metallica and produce Metallica songs? But from the legal perspective we need to set the bar somewhere. Is it allowed to just use whatever material someone has produced to output similar material at an enormous scale, or is it not allowed?

Anders Arpteg:

If we just take a recent example, the latest GPT-4o model, which was great at generating images, is also great at just copying the Ghibli style. So there's this artist that is really good at doing a certain type of images, and the model seems to be really good at copying that type of style. But I guess in this case style is not patentable. Or no, then it's copyright.

Daniel Enetoft:

But it's a similar concept. If you look at the legislation for what is allowed right now when using someone else's copyrighted material: in the US it's called fair use, and in Europe it's the text and data mining exception, and it's allowed for transformative purposes.

Daniel Enetoft:

So the output should be something different from the input, and it shouldn't compete on the same market. I think that's the big problem right now. If I use text data to analyze the progression of language, then my output, the report I produce, is probably not affecting the producers of the training data, because they have just written books or whatever, while I produce a report on what words are more or less common, for example. But if I output a book instead, then I compete on the same market as they do. So, at least with the current legislation, if you output something that is competing with the copyrighted data that you took, then it shouldn't be allowed.

Anders Arpteg:

So, in short, the GPT-4o kind of Ghibli output that you can get from whatever prompt you write, it shouldn't really be allowed.

Daniel Enetoft:

No, for sure not, and it's directly competing with the artist. And I mean, there was an interview with Sam Altman where they showed, I think, Snoopy or some comic strip, and you couldn't see the difference compared to the original. Even if perhaps there were different words in the speech bubbles, you still couldn't tell it apart from the original. And they asked Sam, should this be allowed? And he couldn't answer; he was like, well, if you want to use it, then use it, or something like that.

Anders Arpteg:

Okay, so who is really responsible here? I mean, either you can say that it's the OpenAI company that has the model that is responsible, or it's the one prompting it and asking it to please produce this kind of image or text that is responsible in some way. What do you think here? Is it the service provider or is it the user?

Daniel Enetoft:

Yeah, that's a good question. I mean the copyright infringement as such is done by the service provider, and they could be liable for that.

Anders Arpteg:

But just using it as training data, is that an infringement in itself?

Daniel Enetoft:

Yeah, no, but if you sell it, if you make money out of it, then it becomes a different thing. Then you can be liable as a user as well.

Anders Arpteg:

As a user as well. So it's both the provider and the user? Exactly.

Daniel Enetoft:

So if you start printing T-shirts with Snoopy images that you have got from ChatGPT, then the owner of Snoopy can sue you, and not ChatGPT.

Anders Arpteg:

So if OpenAI makes money from the subscription you have there, they could be sued, but also, as a user, if you actually sell some kind of T-shirt, then you can also be sued. Exactly. Super interesting. So how should we fix this then? I mean, we still want to build AI that is using data from the New York Times or Ghibli or so many more things, or musical content. Should we simply not do that, or should we have some kind of attribution or credits or payback to the content creators? What's your thinking? How should we fix this?

Daniel Enetoft:

I mean, you can compare it to The Pirate Bay and that transition into Spotify. In the beginning we just downloaded Metallica, and now I listen to it via Spotify, and the contributor, the producer of the music, gets money back from my subscription. So I think that's probably the best way: to put out reasonable business models for licensing out copyrighted material to be able to train on. But then you have to make it traceable as well, so you know no copyrights are infringed and things like that.

Anders Arpteg:

Exactly. I mean, it's easier for music, because you can really see that this is the actual song that you are now streaming, but for an AI model that is inspired by so many different things, how would it even work?

Daniel Enetoft:

It seems like there have been a lot of articles or LinkedIn posts showing that you can prompt an AI model in a certain way, with enough detail, to get out one of the specific training data pieces. So I guess it's possible somehow. But, for example, in the EU AI Act there will be a responsibility for a general-purpose AI supplier to provide a sufficiently detailed explanation of what training data they have used.

Anders Arpteg:

Let's get back to the AI Act. I think it's interesting in itself what kind of implications that will have for how service providers train AI models. But still, the current legal environment and the laws we have, are they really equipped to handle this? Can we say that the current copyright laws still work, or how do you see that we can potentially fix this in the future?

Daniel Enetoft:

I mean it wasn't built with generative AI in mind, that's for sure.

Anders Arpteg:

Even the AI Act, right?

Daniel Enetoft:

No, the AI Act is one way of getting there, but the AI Act still says that you need to follow the applicable copyright law.

Daniel Enetoft:

So it just points to general copyright law, and in the end it will still be the copyright law that sets the bar. But yeah, it's a difficult question. From a fairness perspective, I don't see that it's fair that someone like OpenAI or Meta can earn ridiculous amounts of money by just using someone else's hard labor and creativity. It doesn't feel fair. It feels like Goliath is killing David all the time, because they are the big guys and they have the power to output all these models and earn money from that.

Anders Arpteg:

And they just steal the little guy's cultural development. So what would be the ethical or legally proper way to do it? Is it simply to say that they shouldn't use any kind of copyrighted material at all, or should they license it to begin with and pay for it, which OpenAI is starting to do, at least with some news outlets, right?

Daniel Enetoft:

Yeah, licensing is a good way, and then also implementing opt-out, as it's called in the legal industry, where you opt out from others being able to use your data, so they can never use it.

Anders Arpteg:

Okay, so you should always have a choice to say that my data should not be used for training. Yeah, at least that's one way of solving it. Okay, cool. Perhaps you could go a bit into the EU AI Act. If you were to describe it briefly, what is specific about the EU AI Act? How does it work?

Daniel Enetoft:

Well, it puts AI applications into various levels, called high risk and low risk, or no risk. And then there's forbidden stuff as well. It explicitly forbids AI from being used, for example, for predictive policing, the sort of Minority Report thing where you predict where something will happen before it even does. Yeah, exactly, that's not allowed. Or biometric identification, following people around on an individual level and tracing them outdoors.

Daniel Enetoft:

That's not allowed. But then we have the high risk, where the use of the AI can cause damage to persons or property. So that's things like safety components, or educational applications where you can cause damage by someone learning the wrong thing, and medtech, as we talked about before, obviously. And then there's the rest. For high risk there's a bunch of things that you as a provider need to do that are regulated, and if you don't do them you can be fined quite substantial amounts, even based on your revenue.

Anders Arpteg:

So it can be a lot of money. Including saying exactly what training data you used, and it even needs to be in some kind of public database, right? If it's high risk.

Daniel Enetoft:

Yeah, yeah, exactly, and being able to prove that your training data is not biased in a bad way.

Rasmus Fredlund:

Sort of quality.

Daniel Enetoft:

Yeah, quality and fairness.

Anders Arpteg:

How should you prove that the data is not biased in a wrong way?

Daniel Enetoft:

I mean, it should be sufficiently diverse, as it's put. But I agree, it's a very difficult thing. You should be able to trace or understand why the algorithm is taking the decisions it does, and then you should be able to fix wrong decisions, or whatever you think is a wrong decision, by training it in a better way.

Anders Arpteg:

So it has some requirements on being explainable as well, right? Yeah, yeah, traceable.

Daniel Enetoft:

I mean being explainable. Being transparent is the word I'm looking for, and fair is sort of the guiding word.

Anders Arpteg:

But we have no technique to do that for large language models today, right? So I mean, if you use them for some potentially high-risk application, are they then basically all illegal?

Daniel Enetoft:

Yeah, I mean, it feels very dangerous to use algorithms that can hallucinate for high-risk stuff. We discussed this earlier; in the educational area, for example, it feels like some companies are not thinking that through enough. For example, they could use DeepSeek's open-access, open source model, and obviously this model is biased; I mean, you can't even ask it about certain things that have happened in the history of China. I think it's super dangerous for companies to use these kinds of open source models, or even to buy the Western models, because if it can be proved later on that it somehow failed, you will be liable and that will cost you a lot of money.

Anders Arpteg:

So the open source question is super interesting and I'll get back to that shortly. But let me take an example. I actually worked on a project myself, like five or six years ago, looking through patient journals. It was for dental care and the use of antibiotics, and we wanted to understand whether there was over-prescription of antibiotics, given the journal text for the patients. LLMs are super useful for that, and they can really understand the content to an extent that no other model can. But it seems more or less impossible to use an LLM.

Daniel Enetoft:

If you want to be compliant with the AI Act, especially since this could be considered high risk, right? Yeah, I mean, you could always use it as one layer in your model and then have some kind of verification layer put on top of it. That's like a human, or some other algorithm that is not specifically an LLM but perhaps trained on specific data, for example.
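
A minimal sketch of that "verification layer" idea follows. All names here (call_llm, rule_check, the dummy logic) are assumptions standing in for whatever LLM service and domain rules a real system would use; the point is only that the LLM proposes, and a separate checkable layer plus a human make the final call.

```python
# Hypothetical sketch: an LLM flags possible over-prescription, a deterministic
# rule cross-checks it, and only agreements are escalated to a human reviewer.
from dataclasses import dataclass

@dataclass
class Finding:
    journal_id: str
    llm_flag: bool          # LLM thinks antibiotics may have been over-prescribed
    rule_flag: bool         # simple deterministic cross-check
    needs_human_review: bool

def call_llm(journal_text: str) -> bool:
    """Placeholder for an LLM call that reads the journal and returns a flag."""
    return "antibiotic" in journal_text.lower()   # dummy logic for the sketch

def rule_check(journal_text: str) -> bool:
    """Placeholder deterministic check, e.g. no diagnosis that justifies antibiotics."""
    return "abscess" not in journal_text.lower()

def review_journal(journal_id: str, journal_text: str) -> Finding:
    llm_flag = call_llm(journal_text)
    rule_flag = rule_check(journal_text)
    # The human reviewer sees only escalated cases and documents the final decision.
    return Finding(journal_id, llm_flag, rule_flag, needs_human_review=llm_flag and rule_flag)

print(review_journal("J-001", "Prescribed antibiotics, no abscess noted."))
```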

Anders Arpteg:

So as long as you have a human that is also verifying or validating the output, then you're typically safe.

Anders Arpteg:

Would you trust a human? Well, the thing is, in this case, I can explain it a bit.

Anders Arpteg:

We had some doctors that were over-prescribing antibiotics, and we wanted to be able to identify who it was. If we asked humans to do it, it was more or less impossible for them to see why a certain person was doing it, or who was really doing it. You can see some statistics, okay, these people are prescribing more than others, but then you come to the conclusion that it's simply because the more senior doctors handle more difficult patients, which is why they prescribe more antibiotics. So that didn't really work. Then you have to really look into the content of the journals, and if a human is doing that, it's super hard. They did small sample checks and whatnot, but it's more or less impossible for a human to do that properly. An LLM could do it much better. So saying that it works as long as a human is involved seems a bit strange to me.

Daniel Enetoft:

Yeah, I guess it's the liability. That's at least one reason why the legislator tries to put so much emphasis on transparency and so on: if something goes bad, you need to be able to point the finger at someone and say this is the one to blame. And I mean, there's a difference between trying to go through all the information available and just reviewing the results. Oh, that's a good point.

Rasmus Fredlund:

So a human could potentially do at least that. At least that, yeah. Then you use it as a tool again, but it's the human decision that is the final decision, and they should be able to answer why.

Anders Arpteg:

And they should be able to answer why, in some case, they found there to be over-prescription of antibiotics. Yeah, I guess.

Rasmus Fredlund:

So I mean, that's what the human would do anyway, without the help of the AI. Yeah, I wouldn't trust a human doing it, though.

Anders Arpteg:

Okay, anyway. So the EU AI Act has these kinds of levels, and either it's completely unacceptable or it has more and more requirements on you the higher the risk is, so to speak. There was, I think, something that came into force now in February this year, which was some kind of AI literacy requirement. Do you have any details about that? Do you know what I mean?

Daniel Enetoft:

No, it doesn't ring a bell.

Anders Arpteg:

I think it was something like any supplier of an AI model should be required to show that they have ensured AI literacy among all their employees or something, and whether that's actually really enforced...

Anders Arpteg:

It doesn't matter, I'm not a lawyer so I'm not sure, but this is something I heard at least. So this is super interesting to me, because we have discussed the EU AI Act so many times on this podcast, and let me try something on you, because I'm of course not a lawyer, so you will probably shut me down very quickly here. One argument that we have had a number of times is that the EU AI Act is actually very tech focused rather than use case focused. It's supposed to be use case focused, saying that the risk level is based on the use case it's used for, but in reality, when you look at it, if you just take real-time biometric identification, having facial recognition in the camera or whatnot: if you were to do that without using AI, you're perfectly fine, right? If you had a thousand people looking at a set of cameras and saying, now this person is running in this direction, you wouldn't be fined under the EU AI Act, correct?

Daniel Enetoft:

No, not by the AI Act. Perhaps you break some other rules.

Anders Arpteg:

Which one? I would challenge you to find one. So then, in reality, it is tech focused, right? It's just because you're using AI as a technology that you're breaking the law. It's not the use case of doing real-time biometric identification, because if you were to do it manually, then you're fine, but if you're using a technique like AI to do it, you're not okay. Right?

Daniel Enetoft:

Yeah, I guess it's the scale again. I mean, using automatic algorithms it's possible to do it without having a thousand persons, so perhaps that's why they see it as a risk. But then perhaps it should be illegal anyway. It should, right? Yeah, but I mean, there's harassment laws or something.

Anders Arpteg:

It's a Big Brother kind of society. I mean, I also saw some early version of the EU AI Act saying that chatbots in general are considered low risk. And then I thought, okay, well, if you take a technology like a chatbot, where you ask some kind of textual question and get some kind of answer, in some cases, say getting some music recommendations or something, that's of course low risk, no problem. But if it's used for children who have some kind of mental illness, that's certainly high risk. So to say that chatbots as a technique are low risk in general is very strange to me, right?

Daniel Enetoft:

Yeah, it sounds super strange. I'm not sure if it's still in the Act, but yeah.

Anders Arpteg:

But okay. So we used a metaphor, me and Henrik, when we were sitting here. I usually say: if I have a hammer, I can use that hammer to hit a nail, and that's okay, and I can use the hammer to hit Henrik on the head, and that's not okay. But then you wouldn't create a law against the hammer; it would be against the use case. It seems that in this case, a lot of the regulation in the EU AI Act is about the hammer, meaning the technique, meaning AI, not about the use case, because it is okay to have a thousand people looking at cameras, but it's not okay to use AI to do it.

Daniel Enetoft:

It's okay to be a bad teacher, but it's not okay to have an AI be a bad teacher, right? Yeah, no, absolutely.

Anders Arpteg:

It has some flaws, obviously. I was hoping that you would say something like...

Daniel Enetoft:

That you're wrong and this is actually how it works? No. But yeah, I think it's the automation that is scaring, and rightfully so scaring, the EU, and it should scare everyone to a certain extent at least.

Anders Arpteg:

Yes. So okay, trying to see it from the other side: it is about scale when you use a technique like AI, because it can review so much more data than any human ever could, so the effect of it would be higher. Still, you would hope that you could have a set of laws that is use case based and not technology based, because technology will change all the time, and then it will be super hard to keep the laws up to date. I agree.

Rasmus Fredlund:

I mean, we have laws protecting the use case. Those are the general laws we have.

Anders Arpteg:

So should we simply use them?

Daniel Enetoft:

Yeah, perhaps. I mean, the thing that the EU is trying to do is the transparency, that you should be able to understand why it has taken the decision it has. In normal society, you can at least ask the person doing this stuff: why did you do it?

Anders Arpteg:

Yeah, I mean, there are so many difficult questions here, and I don't want to come off as saying we shouldn't have regulation for AI. I do think we should, because it can so easily be abused, and the effects are horrible if you use it for bad purposes. So of course we should have proper regulation for AI. I'm just trying to nail down how it should really be implemented properly.

Rasmus Fredlund:

We have regulations for guns as well. That's kind of the same thing. It doesn't have to be used in a bad way, but it can be, so you have to regulate it in some way, and if it's used in the wrong way we have general laws taking care of that, of the use case. So it's kind of the same.

Anders Arpteg:

I mean, guns are a great example. You're allowed to use guns for hunting or, you know, for self-protection in the US and whatnot, but you're not allowed to use them for crimes, or to shoot someone outside of self-protection purposes. So then it's use case based, right? It's not against the gun as a technique as such.

Daniel Enetoft:

Yeah, but there is also the use case. The EU is not trying to say that AI is not allowed, so at least it's somewhat use case based, even there.

Anders Arpteg:

Granted, yes, fair point. I mean, it's some kind of compromise. But if someone were to come and say that the EU AI Act is only use case based, I would disagree. It's some kind of middle ground, in some way.

Daniel Enetoft:

They even have, I mean, for the general-purpose AI engines, a sort of computational limit.

Anders Arpteg:

If you're above, what is it, 10 to the power of 25.

Daniel Enetoft:

Yeah, exactly, then you have to do a lot of stuff, and if you're not, then it's okay.

Anders Arpteg:

So that's very weird. I mean, I love that you bring up that point, because that's a very recent addition to the AI Act as well, right? So if you use a certain amount of compute when training the model, 10 to the power of 25 I believe it is, which is on the order of what GPT-4 used, then you have a set of regulations you have to comply with, and if you don't reach that, then it's okay. What do you think about that? I mean, if you take a model like DeepSeek R1, it is potentially as powerful as o1, which handles images as well, which of course requires so much more compute than a purely textual model like R1, yet R1 used significantly less compute. Isn't it strange to have a technique threshold like an amount of compute?

Daniel Enetoft:

I think they had to define general AI in some way, to be able to use it in the Act. It could have been based on the size of the training data, for example, or some kind of level of generality or something. But the good thing with that threshold is that it's a number; you can say whether you're above or below it. And in the US it's 10 to the power of 26 instead, so it's 10 times more.
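
To make the thresholds concrete, here is a rough back-of-the-envelope sketch. It uses the common 6 x parameters x tokens rule of thumb for training compute, and the example model sizes are assumptions for illustration, not figures from the episode.

```python
# Rough sketch: estimated training compute ~ 6 * parameters * training tokens (a
# common heuristic, assumed here), compared against the thresholds mentioned above.
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

EU_THRESHOLD = 1e25   # EU AI Act systemic-risk threshold (FLOPs)
US_THRESHOLD = 1e26   # US threshold mentioned in the conversation (FLOPs)

examples = {
    "7B-parameter model, 2T tokens": training_flops(7e9, 2e12),     # ~8.4e22
    "1T-parameter model, 10T tokens": training_flops(1e12, 10e12),  # ~6e25
}
for name, flops in examples.items():
    print(f"{name}: {flops:.1e} FLOPs, above EU: {flops > EU_THRESHOLD}, above US: {flops > US_THRESHOLD}")
```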

Anders Arpteg:

I don't envy the people having to write these kinds of laws. It must be super hard, of course, to just keep up with it. Okay, so perhaps you should explain what GPAI really is, the general-purpose AI part of the EU AI Act. Can you explain a bit what it means?

Daniel Enetoft:

Yeah, I mean, it's the general case, where it doesn't have a specific use case. It's phrased in a very wordy way in the AI Act that I don't remember exactly, but it's something like: it could have a big impact on society if it's used in the wrong way. But the purpose of the technology is general and not specific.

Anders Arpteg:

I think that's perhaps the easiest way to put it, and it goes completely against the whole point of use case based laws, right? This is saying it's general purpose; you don't even talk about the purpose or the use case.

Daniel Enetoft:

No, I think, from that perspective, it's trying to make sure that the output is fair. I think that's the main reason. The main scare they have is that an engine will become very popular and output stuff that will have the wrong end result for society.

Anders Arpteg:

So I can buy it in some sense. I mean, we need to find laws in some way and it's super hard, we know. The way AI works today is that you create these kinds of foundational models. They're not trained for a specific purpose; this is really how generative AI works.

Anders Arpteg:

You just train it to predict the next token, and in that way it creates a very generic model that can be used for so many things, and it has these kinds of emergent skills. So it could potentially be used for bad things, and I guess in that way one could reason that we should have some kind of specific laws for general-purpose AI, since you don't know that it will not be used for bad purposes. No, exactly. Then it could be used for high-risk or unacceptable purposes and therefore should potentially be banned. Exactly.

Daniel Enetoft:

Yeah.

Anders Arpteg:

So I can buy it in some sense, but it must be so tough to enforce. I would love to see the first court case here, because all the big AI labs that produce these kinds of frontier models are already far above these thresholds, and they will never publish all the training data they've used, or how the model works, or everything, right?

Daniel Enetoft:

Yeah, or they will be fined big fines. I don't know what the end result will be, but I agree it will be very exciting to see the first court case.

Anders Arpteg:

And also problematic. I mean, we've already seen that the big tech giants are not publishing their models in Europe. It's been happening time and time again. Now it seems more and more of them actually are publishing and releasing in Europe, or allowing the use of them in Europe. But how can they not be sued for this?

Daniel Enetoft:

I mean, the AI Act is not fully in force; all the parts of the AI Act are not in force yet. So that could be one reason why not much has happened yet. Half a year or a year, yeah, exactly. No, I agree. I mean, maybe that's a chance for us in Europe to develop our own models.

Anders Arpteg:

We still have half a year to get some good AI models, and then... Yeah, okay, exciting times for sure. But how should we deal with this? I'm struggling to find the question here. Of course, I think we all recognize that we need to regulate AI, and there are so many bad use cases that you can use AI for, especially the large foundational models that we have today: developing a new COVID virus, or crashing the financial markets.

Daniel Enetoft:

Automatic weapons.

Anders Arpteg:

Warfare, and so many other use cases. I mean, we need to regulate it, and I think we need to do it quickly, even. But how should we do it? I'm really scared about what the future holds here, and I really hope that we will find a good way to do it, but I don't see how.

Daniel Enetoft:

No, yeah, it's the big question. I guess we try not to think about it too much.

Anders Arpteg:

Either you have this kind of self-regulation, where companies need to find ways to do it themselves, or you have these kinds of oversight committees that try to assess it.

Anders Arpteg:

But yeah, we'll see. Anyway, you spoke about open source a number of times, and I'd love to speak a bit more about that. So if we were to take open source just for a quick discussion here: let me just give my background and thinking on open source a bit. It's easier to prevent abuse if you put the model behind an API. We know OpenAI had Chinese actors, or state actors, that were using it for bad purposes, and they could identify that and shut it down. If you publish it as an open source model and they get their hands on it, they can download it and run it; you have no way to protect it. How do you think open source should be managed? Should there be more restrictions on the ability to open source a model, or should you rather enforce an API to ensure that you have the ability to control the use in some way?

Daniel Enetoft:

I mean, that's a good question. As we talked about before, if you're allowed to do whatever, you're also liable for the stuff that you're doing. So, in a sense, putting it behind an API will make the liability a bit riskier for you as a provider.

Anders Arpteg:

If you take Meta, you have the Llama models, which are open source; you can download them. China doesn't care anyway, so let's not go there. But if you provide an open source model like Meta does, which is a US company, wouldn't they be liable for the use of their model?

Daniel Enetoft:

Yeah, perhaps, but they are one of the biggest companies in the world, so they can afford the lawyers and keep things going in court forever. I mean, the main thing, I think, is to not put all the power in the big companies' hands. Yeah, that would be nice. And it's the same again with all the copyright things that we've discussed: by allowing everyone to scrape everything, all the power will be concentrated in the people with the biggest resources, and the little guy producing the intellectual property will get nothing.

Anders Arpteg:

But speaking about that, another topic that we have spoken a lot about in this podcast is the AI divide, meaning we can see the concentration of power here going to the tech companies, the data giants and the hyperscalers that we have. But who is really the most capable of being compliant with the law, or of having the legal teams to handle it? Well, it's these hyperscalers, right, that have huge resources. So with the very demanding laws that the AI Act brings, being compliant and doing all the risk analysis and documentation necessary will only be within reach of the few companies that have the resources to do so, meaning the top companies. So wouldn't it even increase the gap, the AI divide, where the top companies can use AI and the others cannot?

Daniel Enetoft:

Yes, maybe for the general-purpose stuff, but for the specific use cases I think it's still very much possible to produce technology that follows the rules, and IP has a big role in that, because with patents, or intellectual property in general, you have the possibility to protect and start building your own tech stack, step by step improving your ownership of your own technology. Without that... I mean, it's not surprising that Elon Musk, a week ago or something, said abandon all IP laws, because he's big enough now that he can use his power and his financial leverage to just take everything he can and produce and improve. Smaller players don't have that, and they need to protect their own R&D.

Anders Arpteg:

Yeah, difficult questions for sure. But okay, open source: do you think it's a good or bad thing when it comes to the good of society, protecting against bad use cases of AI? Because Yann LeCun at Meta, for example, is arguing that by putting models out there, it allows people to try them out and find the bad use cases more easily than having them behind an API where you can't really do introspection on them.

Daniel Enetoft:

That's a fair point, I don't know. But what does open source really mean? Is it open weights? If it's open weights, then it's also, again, based on whatever training you have done.

Anders Arpteg:

So it depends on what the open source model really is. I guess, just before we go into the news soon, my prediction, and I would love to hear what you think about this, is that for the bigger models, the frontier models... If we take Llama 4 now as an example, they just released Llama 4 and they had three different versions: one small, one medium and one really big called Behemoth, which is, I think, 2 trillion parameters. They haven't released that one; they say it's still in training. They may release it as open source in the future, but I think not. Potentially there will be a point where, if the model is too big, it will be too dangerous to release, and it is at risk under the EU AI Act's GPAI clauses and certainly above the 10 to the power of 25 threshold, so they could potentially not release it in the EU.

Anders Arpteg:

So I would argue that in the future, when we see even bigger models being released, it will be really, really hard to make them open source.

Daniel Enetoft:

Would you agree? It sounds reasonable. But I mean, why can't we use a smaller, targeted model instead? Does everything have to be bigger and bigger all the time?

Anders Arpteg:

Very good point, and I certainly agree. I think there will only be a very small number of really big foundational models, and the majority, the thousands and millions of models, will be the smaller ones that are more targeted, and then it's easier to handle.

Daniel Enetoft:

Yeah.

Anders Arpteg:

But there will be a small set of companies that do put like hundreds of billions of dollars into training a single, super huge foundational model for text, audio and images, and I don't think those will ever be open sourced.

Daniel Enetoft:

That sounds fair, but yeah, we'll see what happens. We'll see what happens.

Anders Arpteg:

We'll see. Who knows, Time for some AI news perhaps.

Goran Cvetanovski:

It's time for AI news, brought to you by the AIAW Podcast.

Anders Arpteg:

So we have this small break in the middle of the podcast to just share some reflections on recent news articles about AI that you've heard. Do you have anything that you'd like to share?

Daniel Enetoft:

Well, I have one. You know, you read about Trump everywhere these days. It's hard to avoid, and now actually he's putting his foot into intellectual property as well. I guess it's because of his close connection with the tech bros that, as I said before, do not like intellectual property right now. Both Sam Altman and...

Daniel Enetoft:

Elon Musk and others. Exactly. So the director of the Copyright Office in the US came out with a report, or a preliminary report, saying that generative AI companies should not be allowed to scrape whatever data they want, because it will cause problems, as we talked about before, for the contributors of the training data. Their financial situation will worsen because they will have a lot of competition from output from the AI. They put out this report and it was quite balanced. I mean, it was also saying that in certain circumstances, for some reasons, they should be allowed, and it should be a balance. But they came out with this and, I don't know, it was 12 hours later or something, Trump fired the director.

Daniel Enetoft:

Really the director of the Copyright Office in the US and she was actually put in that position by him, I think.

Rasmus Fredlund:

It was someone else that hired her, I think, and she was also fired.

Anders Arpteg:

Aha okay, I'm not sure if we should laugh or be sad, but it's tragic and comical.

Daniel Enetoft:

Yeah, I'm pretty sure that Musk or someone phoned Trump up and said, well, this is not okay, you should fire her. And then, yeah, that's what he's good at: firing.

Rasmus Fredlund:

Yeah, you're fired.

Anders Arpteg:

Yeah, it's such a wild time right now in the US. It's, yeah, laughing or crying, I'm not sure what you should do, but it's certainly exciting times, right. So I had one story. It's actually from China, and they released something last week called Absolute Zero. They basically fine-tuned an LLM to do reasoning without using any human data at all, and you can compare this with, if you remember, AlphaGo and AlphaZero from 2015.

Anders Arpteg:

DeepMind had these kinds of models that can play chess or Go and StarCraft and many other games, and they did that much better than humans. First they had AlphaGo, and AlphaGo was basically, let's say, playing Go by combining human expert games with self-play, synthetic games if you call it that, and then it became better than humans. But then they had AlphaZero. AlphaZero then beat the AlphaGo computer significantly by using no human data at all, just using synthetic data, or self-play, for training the model. Now, this is basically what this Chinese model is doing.

Anders Arpteg:

So before, if you think about GPT-3.5, it was tuned with RLHF, meaning reinforcement learning with human feedback. You had humans annotating data, saying this is good or bad, and you have this kind of supervised fine-tuning and then reinforcement learning saying this is preferential or not. But this one is actually learning how to reason, adding this kind of o1, o3 capability, without using any kind of human feedback at all. So that's why they call it Absolute Zero. And so many people are saying that it's impossible, or that we have a problem because of lack of data, because the human data is running out; we have trained on all the internet data we have. But this one completely goes around that and says there is no lack of data, we can train forever, and we are not capped anymore by the human data, because humans are only so smart and can only reason so well. Now we can actually do self-play for reasoning in math or in code or medicine or law without having any human data at all, and then it can surpass human capabilities significantly. And they now have state-of-the-art results by not using human data at all. What do you think about this?

Daniel Enetoft:

How do they? I mean, in game theory I can understand it: you have a specific rule to tell if someone is winning or losing or improving. How do you do it in this case?

Anders Arpteg:

Yeah, I can go a bit more into technical depth.

Anders Arpteg:

So they have three types of reasoning, and I think they are abusing the terms here, but they call them the deductive, abductive and inductive types of reasoning.

Anders Arpteg:

Deductive is simply taking, you know, you have an input and some kind of model, and you run the input through the model and get some output. They call that deductive reasoning. I would actually disagree with that being deductive reasoning, but still, that's what they call it. Then they have abductive, meaning they have the output and the model and they want to predict what the input was. Now, predicting what the input was is super important, because it means it can actually propose new questions, not just predict what the answer should be, but actually predict what the question should be. And then they have inductive, meaning if they just have the input and output, they want to predict what the model should be. They can do that as well. So it learns to produce, or predict, the output, the input and the model, and in that way they can actually do the self-play, saying: I can now generate what the next reasoning or math question or coding question or legal question should be, because it's training on that all the time.

Anders Arpteg:

So it's self-playing and it's continuously improving the ability to predict the output, the input and the model. And it actually does that by doing code execution as well. It has to generate Python code, and then it can run the Python code and see: does it run? And if it does run, is the output accurate or not? So they can actually do verification as well.
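To make that loop concrete, here is a minimal sketch in Python of the kind of propose-solve-verify cycle described above. It is not the actual Absolute Zero code; the llm_generate callback and the task format are hypothetical placeholders. The key point is that rewards come from executing code rather than from human labels.

```python
import random

def run_program(code: str, x):
    """Execute candidate code (assumed to define a function f) on input x."""
    scope = {}
    exec(code, scope)
    return scope["f"](x)

def verified(task_type, program, x, y, answer):
    """Check an answer by running code, not by asking a human."""
    if task_type == "deduction":          # given program + input, predict the output
        return answer == y
    if task_type == "abduction":          # given program + output, propose an input
        return run_program(program, answer) == y
    return run_program(answer, x) == y    # induction: the answer is itself a program

def self_play_step(llm_generate):
    # llm_generate is a hypothetical callback: it returns a (code, input) pair
    # when asked to propose a task, and a plain answer when asked to solve one.
    # 1. The model proposes its own task: a small program and an input.
    program, x = llm_generate("propose a small Python function f and an input")
    y = run_program(program, x)           # ground truth comes from execution, not humans

    # 2. Pick one of the three reasoning views over the (program, input, output) triple.
    task_type = random.choice(["deduction", "abduction", "induction"])
    if task_type == "deduction":
        prompt = f"Given program:\n{program}\nand input {x!r}, predict the output."
    elif task_type == "abduction":
        prompt = f"Given program:\n{program}\nand output {y!r}, propose an input."
    else:
        prompt = f"Given input {x!r} and output {y!r}, write a program f mapping one to the other."

    # 3. The same model solves the task; execution-based verification gives the reward.
    answer = llm_generate(prompt)
    return 1.0 if verified(task_type, program, x, y, answer) else 0.0
```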

Rasmus Fredlund:

But what is the... so there's no user input at all?

Anders Arpteg:

No, no human input. Well, to be honest, they do start from a pre-trained large language model.

Daniel Enetoft:

So they must have something.

Anders Arpteg:

So they do start from Qwen 2.5, or the Llama models I think they use as well. So they start from that, and that's simply trained on predicting the next token on a lot of human data.

Daniel Enetoft:

So it's not absolute zero, as they call it, perhaps, but at least the fine-tuning, being able to build up the reasoning part, is really without any human data at all, which I think is a big thing. Yeah, for sure. I wonder if they can come up with new ideas, if you look at it from a technical perspective. But then it should be possible to try them in the physical world as well, in order to be able to verify to the full extent.

Anders Arpteg:

So it's still in digital space here, yeah. But then, you know, when it comes to physical space and having a robot doing things and then doing self-play to train itself, then potentially they could do the actions in physical space. But this is still in digital space. I haven't heard that much about it, but I actually think it's a very big thing.

Daniel Enetoft:

Sounds a bit scary.

Anders Arpteg:

It is a bit scary, yeah. Goran, did you have something?

Goran Cvetanovski:

I had two very short ones, actually. The first one was, you remember, last week Google had a little bit of an issue and a dip in the market, because investors started worrying that they are losing traction, because many people are now using Perplexity and ChatGPT to google stuff instead of going to Google, which means that it's going to directly impact their advertising revenue. So they went 8% down. Now there is news that they are actually starting to test some kind of AI mode directly in Google Search, so they can optimize it and improve it. This has been available since 2023 to some internal users, but now it's becoming a little bit more visible, and I think that for Google this makes real sense, because that is their entire business, and you know, if they're not doing great on that... I myself google more and more in Perplexity and ChatGPT, at least not in Google. You ask questions that you would usually search for answers to somewhere on Google, right, but now you're using different types of tools for that. So this was good.

Goran Cvetanovski:

And then one interesting part here was also that they had a new, what was it called, a new investment round or AI fund or whatever it is, where Google is now searching for innovative startup companies that would like to have early access to the AI labs of Google, so they can use these alpha, beta types of AI capabilities first, et cetera. So it's quite good, and Google seems to be back now, fighting for their, not life, but at least I think they need to do some remedy and stuff like that. You know, last week we were at the Data Innovation Summit, and one of the presentations was about this edge computing in space. Yeah right, exactly.

Goran Cvetanovski:

This actually came out today. So China just launched a new rocket with around 12 satellites, which are going to be part of a 2,800-satellite AI space computing constellation, which means that basically they're building an entire edge computing infrastructure in space, and they're the first ones to do it. But I think that this is going to be something that many of the countries that have, you know, satellite power or space power will have to do in the future, because it enables faster compute in space, and probably for military and scientific capabilities this is very advantageous.

Anders Arpteg:

Don't you think Elon Musk already has it in Starlink? Yeah, but he will not say it right, so I don't know.

Goran Cvetanovski:

Probably he has it, but this is actually the first official one. So the race for space edge computing has begun.

Anders Arpteg:

So there it is. That's super cool. Space edge computing, yes, exactly.

Goran Cvetanovski:

So those were the top two. Then there is quite a lot of boring stuff online as well, but that doesn't matter. I think these two were the most noticeable for this week. It's quite a week, yeah.

Anders Arpteg:

And the talk by NASA at the Data Innovation Summit, when he spoke about the autonomous cars driving on Mars. I'd never heard about this moon of Saturn that he spoke about, with this snake robot that crawls around in a super weird way, like an artificial snake. Super weird and cool stuff. It was the biggest and best presentation at...

Goran Cvetanovski:

Data Innovation Summit.

Anders Arpteg:

Everybody just, like, it was so cool. Cool. Okay, let's get back to the more fascinating topics of law and legal and IP and patents. I think we already spoke about the fact that, of course, you're using AI for legal work in some way already. And I was just, you know, today at this AI Sweden event where they spoke about different use cases for AI in law, and we had the consultancy firm PwC that spoke about their Harvey AI chatbot that they have internally, and you can simply ask it a lot of legal questions and it's really good at that. And they had another one, a Swedish one called Familjejuristen, I think it's called in Swedish, and they also have this chatbot that is specifically trained for legal questions. What do you think about this? Will this be the future, that people, when having legal issues about patents, about IP, et cetera, will speak more to an AI chatbot, or will they still resort to humans?

Rasmus Fredlund:

Well, we believe that you should resort to humans when it comes to patent questions, at least. There are tools that draft a patent application from scratch from just a description, but to really understand the invention and to provide the best scope, I think you really need a human.

Anders Arpteg:

Yeah, agreed. And I must be honest to say that they just use this to get leads into their own company. So I mean, they say you can never use this for real practice; it's only a way to get a first kind of insight into what you potentially need to do, and when you really want proper use, you should, of course, hire the consultancy to do it right. So it's just a way to get leads. But if you think three years ahead, perhaps it will change, or what do you think?

Daniel Enetoft:

I think, in the same way that you can't really just download a general license agreement online and apply it to your own company, right now at least, you should never trust a machine to provide advice, as long as you have the risk of hallucination. Don't you get that risk from humans as well, if you have, like, a junior? Yeah, but then at least you put the trust in the consultancy firm that you use.

Goran Cvetanovski:

That they will use the senior people.

Daniel Enetoft:

So, yeah, I think that when you patent your developments, you have an intent of patenting a certain specific feature. As we talked about before, it should be aligned with your business goals and based on whatever prior art and so on. And the importance of the legal language around the patent claims is so big that I would never trust a machine to draft that part, because every single word counts. I have one example I can actually bring up, just to explain how important every single word is, and this is from a real experience we had a couple of years ago. We had an invention that relied on comparing two values with a threshold, and if both values were below the threshold, you did something. So it's very simple. There was a lot of other stuff, but that was the important part.

Daniel Enetoft:

Two values had to be below the threshold, and in the patent claim it said: compare value one with the threshold and compare value two with the threshold, and if both are below the threshold, do the stuff. And then the allegedly infringing product that they tried to sue did the same thing, but differently: they compared the two values with each other first, and then they compared the highest value with the threshold. If that was below the threshold, then of course both values were below the threshold, but it was a different implementation. And that just shows how important every single word in the patent claim is, because if just one single thing is done differently in another product, then it won't infringe. So you need to be as broad as possible, and that is something that AI is not good at at all right now: understanding how to generalize using words that are not limiting in an unnecessary way.
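As a small illustration of why that wording mattered, here is a simplified Python sketch (a stand-in for the actual claim and product, of course): both functions end up requiring that both values are below the threshold, but only the first performs the two separate comparisons the claim literally recited.

```python
def claimed_wording(value1: float, value2: float, threshold: float) -> bool:
    # Compare value one with the threshold, and compare value two with the threshold.
    return value1 < threshold and value2 < threshold

def accused_implementation(value1: float, value2: float, threshold: float) -> bool:
    # Compare the two values with each other first, then only the larger one with the threshold.
    return max(value1, value2) < threshold

# Functionally identical result for any inputs...
assert claimed_wording(1.0, 2.0, 3.0) == accused_implementation(1.0, 2.0, 3.0)
# ...yet the second never performs the two separate comparisons the claim recites,
# which is why the narrow wording mattered.
```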

Anders Arpteg:

And the reasoning capabilities, I mean, they're really good at knowledge management but not really at reasoning and seeing what the consequences can be of a single word being wrong. But that could change in the future, of course.

Rasmus Fredlund:

And I mean, if you're an inventor, or you're not an expert in the field of patents, you would probably get a result that looks really good. It looks like a patent application, and you have the claim language, which looks nice, but you don't really know what the claim actually covers. So it's easy to, well, you get this result and then you file it, and then you realize a couple of years later that there was something in the claim that wasn't right.

Daniel Enetoft:

Yeah, exactly, and then it's worth nothing.

Rasmus Fredlund:

Then you save some money in the beginning by letting AI do this, maybe.

Goran Cvetanovski:

But it bites you later. Cool. Before you go forward, just to bring a little bit of different energy to this. I was looking forward to speaking with you guys about this research. This is the latest research from Stanford University.

Goran Cvetanovski:

It's called the AI Index. It came out just recently, and part of it I actually presented at the Data Innovation Summit in my huge dashboard that I had there. One of the parts, which is under the research and development of AI, is about patents, and I was amazed, because we have been talking about patents all the time on this podcast, and I have been reading quite a lot that IBM is doing a lot of patenting and stuff like that. But when this research came out, it was extremely interesting. If we look at this particular diagram here, you can see that it's probably actually identical and symmetrical if you tilt it a bit.

Goran Cvetanovski:

But it looks like North America, Europe and most of the countries are actually declining in patents, while you have China, mostly China here, and South Korea actually increasing their patent filings. So I would like to ask you a question, because you're experts in this. First of all, is there any value in an AI patent, and why is this important for those countries, let's say China, Saudi Arabia, Japan probably, et cetera, and not very important for us here in Europe and North America? Or is it the fact that it's so hard to get a patent in these countries, maybe North America and Europe, that it's not even worth considering, while the application in China just gets a stamp immediately and that's it? So how would you interpret that diagram?

Rasmus Fredlund:

I mean, this is just patents in the different regions, yes? It's not by country.

Goran Cvetanovski:

There is a map as well. If you can read it, then it becomes a little bit clearer. You see, for example, China has increased 66% since 2020, I think it was, et cetera. Then there is per capita: they're still the biggest one, together with Korea and Luxembourg. For some reason, they like patents as well.

Anders Arpteg:

Just take companies that want no tax; they want to be in Luxembourg. But keep in mind...

Goran Cvetanovski:

So this is the devil about charts. Basically, this chart is about granted AI patents per 100,000 inhabitants by country, and in Luxembourg it's enough to have 10. That's it. So it's a little bit faulty there, but you can see China and South Korea are at the top, and then the United States and Japan after that, and then Germany and the rest of the ones that are coming, Singapore included.

Daniel Enetoft:

Interesting. It's definitely easier to get a patent in China, and there's also a lot of government funding around patents and intellectual property. There's been a strategy from the government for a while now. In the beginning, like 15, 20 years ago, they wanted to copy, to earn more money, but now they want to develop. So the incentives are super high; you can get a lot of money from the government to patent stuff, and most of these patents are only in China, because they only get money for patenting in China.

Rasmus Fredlund:

Yes.

Daniel Enetoft:

So it's a bit skewed.

Goran Cvetanovski:

Yes, but for South Korea, for example, it's probably not the same. Yeah, and this is also because I was doing a trademark some time ago for my company, so I learned that it's not the case that having a trademark in Europe means you have it internationally. So let's say that you have a trademark, or a patent, in China or South Korea. Is that something that is applicable in Europe or in the United States? Or do you need to pay a little bit extra or apply separately in order to get an international, worldwide patent? Or is it just country by country? How does it work?

Rasmus Fredlund:

Well, patents are country-wise, or there is a way to get a European patent now, but you only have protection in those countries that you enter and where you pay annual fees and everything. So there are international routes where you can get a patent, sorry, not granted but examined, and if you have a positive result you can enter other countries. But you still need to have national or regional patents.

Goran Cvetanovski:

So, for example, let's say that I have made some kind of AI application that is quite novel, et cetera. In order for me to be protected in each country in the world, I need to apply for the patent in each country in the world?

Daniel Enetoft:

Pretty much, pretty much. No one does that, no.

Rasmus Fredlund:

So you would choose those countries that are relevant, and since China is spending a lot of money on AI, that's probably a country where you want to have a patent, and that's probably why the numbers are so high there as well.

Goran Cvetanovski:

All right, okay, well, that makes sense. And now the whole report gets debunked a bit, because when you read it, it's like, oh my God, China is actually increasing the patents and the United States is not. But eventually, who cares...

Anders Arpteg:

...if they're just doing the patents for China, right? Exactly. All right, cool, thank you. I'd like to move into perhaps a more difficult question. We normally think of IP for the data that you use to train a model, et cetera, but I'm thinking of IP for the model itself.

Anders Arpteg:

So imagine now, if you remember the DeepSeek R1, then we more or less know that they actually scraped or actually did knowledge distillation from OpenAI, because when you ask the R1 model what is the name of the company that trained you, they actually respond OpenAI.

Anders Arpteg:

They were that bad at scraping data from OpenAI that they didn't even remove that name. But when it comes to that kind of knowledge distillation, that's usually what it's called: if you want training data, instead of using real human data you simply give some kind of input to a big model and get an awesome answer back, and then you train your own model on that. Then, of course, you have high-quality data, which is super hard to come by, and then you can, with a much smaller amount of data, train up a new model that has really good performance. But in some sense you're then taking the value from the big teacher model, if you call it that, and training a student model from it, and in some sense I guess it's infringing on the IP of the teacher model in this case, right?
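In code, the distillation recipe being described is roughly the following sketch; the teacher_generate and fine_tune callbacks are hypothetical placeholders, and whether you may do this at all depends on the teacher provider's terms of service.

```python
def distill(prompts, teacher_generate, fine_tune, student_model):
    """Train a smaller student model on answers produced by a large teacher model."""
    # 1. Generate synthetic training data from the teacher.
    dataset = []
    for prompt in prompts:
        answer = teacher_generate(prompt)          # high-quality answer from the big model
        dataset.append({"input": prompt, "target": answer})

    # 2. Fine-tune the smaller student on the teacher's outputs.
    return fine_tune(student_model, dataset)       # the student learns to imitate the teacher
```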

Daniel Enetoft:

Yeah, and they themselves stole the data from other people, so it's ironic that OpenAI was complaining about this. It was a big laugh in the IP community, at least. But yes, you can laugh about this.

Anders Arpteg:

You know, they stole the data to begin with and they didn't have the IP to train on it. But if you just think in general: if you do have access to a model and you use it to do knowledge distillation and train your own model from that in a way that's not allowed, I mean, OpenAI has a clause saying, basically, you're not allowed to use this model to train a model that competes with us in some way, some terms of service that they have. How should we protect this? I mean, is it as simple as saying, if you have terms of service saying you're not allowed to use it for competitive purposes, then it's fine? Or how can we protect IP when it comes to the models themselves, instead of the training data?

Daniel Enetoft:

Yeah, the terms of service are probably a good way to start. You can compare it with the terms of service for a lot of free tools out there, where it's specifically stated in the terms of service that, to be allowed to use this software,

Daniel Enetoft:

you can't sue us for anything. So it's a non-sue clause, which Google and a lot of other big companies have with their free software. So if you sue them, they just remove your access to all the tools; you're not allowed to use them anymore, like Google Maps or stuff like that.

Anders Arpteg:

Can that be legal, to say that you're not allowed to sue us?

Daniel Enetoft:

Yeah, I mean, it's their product, they can do whatever, as long as it's not... I mean, there are competition laws and everything, so there are a lot of other layers of laws, but in principle they can do whatever in their license.

Rasmus Fredlund:

Isn't this the same thing as the Tesla patents, when he provided them for free as long as you didn't sue them with your own patents or use them in a competitive way?

Anders Arpteg:

But having patents, okay, so I remember the Spotify days. We said we're going to produce patents at Spotify, but we did it for defensive purposes. We would never attack another company, but we needed to have some protection if someone sued us, and companies sued Spotify all the time over so many patents. Do you think that's a proper way to see patents, that more and more tech companies should do it for defensive purposes rather than offensive purposes, if you call it that? Or what do you think about this? I mean, it's similar to Tesla, right?

Rasmus Fredlund:

They did the same thing as you did with Spotify, yeah, but they made a big thing out of it, that they provided their patents for free. But yeah, it's basically the same thing as you did. They were...

Daniel Enetoft:

They were just defensive, yes. Yeah, I'd just say it's a difficult question, and the problem is that there exist patent trolls, as they're called; non-practicing entities, NPEs, is the fancy name. They don't have a product, so you can't sue them back. They only own patents and sue other people. So I mean, that's the downside, I think, of the system, at least in my mind. And many people like patent trolls because it puts a sort of economic value on patents, and they should be tradable like any other property that you may have.

Anders Arpteg:

That's a very good point, and I can elaborate on patent trolls in a couple of ways in a Spotify context as well. Spotify used a peer-to-peer technology to begin with. This is like 15-plus years ago, so you could actually listen to music through other people's phones or computers, a traditional kind of peer-to-peer technology, and they got sued for that, and we don't have any peer-to-peer technology anymore in the way you listen to music. Okay, a bit sad. They also got sued, you know, as soon as they went into the US and did the IPO, the introduction to the market there. They got sued for so many things by patent trolls.

Anders Arpteg:

So these were companies that were buying up these kinds of patents, and then they were waiting for a big company to do an introduction in a country, and then they sued them. One of the things that I believe they were sued for, and I do think it's public knowledge, so it's not a problem, was the use of a distributed file system. Everyone used Hadoop and this kind of distributed file system; it's just a way to manage big data, a very, very common technology. As soon as they launched and did the IPO, they got sued just for having HDFS, the Hadoop file system. And I mean, that can't really be a good thing, can it?

Daniel Enetoft:

I think the main problem here is actually not specifically patents, but the US legal system, where you have to put in the money yourself as a defendant, no matter the outcome of the court proceedings. That's why patent trolls are so common in the US, while in Europe it's almost unheard of. So there are ways of fixing this problem. I don't know if the patent trolls and non-practicing entities have sort of leverage or power over the government or something. Non-practicing entities, NPEs.

Anders Arpteg:

Cool, yeah. Non-practicing entities, yeah, NPE, okay. But I guess you agree, I mean, you can see so many positive reasons for having patents. Like, you know, if you want to develop a new vaccine or whatnot, of course it requires so much research to do so, so if you didn't have the patents to protect you, no one would ever do it.

Daniel Enetoft:

So I mean, there are so many good uses for patents, but in this case, with patent trolls, it is hard to find a, you know, positive impact of this, right? Yeah, I agree, but, as I said, many people, or some people, like the sort of possibility of putting a value on a patent and selling it like any other property that you might have.

Goran Cvetanovski:

Some kind of market for patents.

Anders Arpteg:

Yeah, exactly, okay, cool. The time is flying away here a bit and I'm choosing a bit more like philosophical and the futuristic kind of questions here when it comes to patents and ip, and I guess one question is you know, some people are, you know, trying to compare innovation versus regulation, in some sense saying that if you have too much regulation, it will hamper innovation. Do you have any thoughts about this? What's the proper balance? Do you agree there is a balance here where too much regulation and laws about this can actually hamper innovation? What's your thoughts here?

Daniel Enetoft:

I mean, we are innovating and we are regulating. So, it's just to have the proper balance, I think.

Anders Arpteg:

But you can actually argue the opposite, I think. In the medical case, I mean, you wouldn't even have innovation unless you had...

Rasmus Fredlund:

No, no, exactly, that's what we said before. Yeah, exactly. When it comes to the patent case, I think it's really important to be able to protect your innovations. And also, since you have to make your invention publicly available, that's kind of the starting point for further development.

Anders Arpteg:

If you didn't have patents, then everyone would start from a much lower level and, right, probably develop the same thing in parallel. So yeah, most probably. But a lot of companies are still choosing to build very, very costly products, either by doing it open source or by putting it into production very quickly, like Tesla, et cetera. If you take Tesla as an example, they don't make money from their patents; they do it for defensive purposes. So is that the proper way to do it?

Anders Arpteg:

Or what do you think?

Daniel Enetoft:

I think one alternative to open source is standardized technology. Both Rasmus and I work a lot with standardized patents, or patenting standardized technology, and there is, I mean, the telecom industry and all the video and audio coding, as you spoke about with Spotify.

Daniel Enetoft:

Everything is standardized in those technical areas, and it works. I mean, it's amazing to see the number of companies working together to develop 5G and 6G, and it works. They get money for their development, and it provides value to society, because everyone can connect, everyone can use the same products and technology, and so on. So I think one thing that could come out of, for example, the AI Act, where you need to provide transparency and quality, could be that more and more things get standardized. It could be standardized how to control an autonomous vehicle, for example, what data needs to be input to the safety models for an autonomous vehicle to be able to make sure that it will brake in the right conditions, and so on. Everything like that could be standardized, and the telecom industry and video and audio encoding and other industries are proving that it can work.

Rasmus Fredlund:

So yeah, I mean, and that's what Tesla wanted to do as well, wasn't it? They wanted everyone to choose their charging standard and use their charging network, right, and of course they had patents on that technology as well. So if everyone uses it and it becomes the standard, like an unofficial standard, then they still have an advantage there.

Rasmus Fredlund:

How interesting. And when it comes to open source, the question is also: is everything open source, or are there patented inventions in the background? So you get everyone to use your open-sourced technology, and then you're kind of locked into their system that is protected.

Daniel Enetoft:

Yeah, you own everything.

Anders Arpteg:

Good question. Okay, let's say that we actually may end up in an AGI future, who knows, many years from now, but at some point potentially. Would you, for one, agree that we are moving towards a future where AGI will come into place? Or what are you thinking here? Meaning, if we define AGI as the point when an AI system, as Sam Altman defined it, is as capable as an average human co-worker. And it's easy to see, I mean, a human co-worker is really good at, for one, reasoning, and also agentic capabilities that AI is really bad at today. But AI is really good at more of the knowledge management and can parse data and recall information in a much better way than humans can, but it can't really reason that well, nor can it take action in digital or physical space that well. But if we imagine it will, do you think it will come to the point that we will have AGI in the coming 5-10 years? What's your thinking?

Daniel Enetoft:

I mean, I think it depends on how you define intelligence, but at least my opinion is that digital devices will never have intelligence in the same way as a human has. What's the difference? I mean, what's the difference? Are we just trying to simplify the human brain to a bunch of activation functions? The brain cell in itself is so extremely complex that you have no idea how it works. It's for sure not just a switch.

Anders Arpteg:

But it's a spiking network, right? So you have some kind of dendrites and an axon coming out of it, and when the state in the neuron cell goes above a certain threshold, it makes an electrical, chemical kind of spike. It's not much more complicated than that.
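For what it's worth, that "integrate until a threshold, then spike and reset" picture can be written down as a toy leaky integrate-and-fire neuron in a few lines; the parameter values below are arbitrary illustrations, not biologically calibrated.

```python
def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron: accumulate input, spike above threshold, reset."""
    state = 0.0
    spikes = []
    for current in inputs:
        state = state * leak + current      # leaky integration of incoming current
        if state >= threshold:
            spikes.append(1)                # fire a spike
            state = 0.0                     # reset the membrane state
        else:
            spikes.append(0)
    return spikes

print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))  # -> [0, 0, 1, 0, 0]
```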

Daniel Enetoft:

That sounds right, perhaps, but I mean, on the other hand, my brain functions on a couple of cheese sandwiches.

Anders Arpteg:

The energy consumption is so much more efficient, so I think that's the proof in itself.

Daniel Enetoft:

Yeah. But if we get to true intelligence in machines, then probably intellectual property is our least concern, I would say.

Anders Arpteg:

If we compare it to, like, a bird and an airplane, I think it's interesting. We know an airplane consumes so much more energy than a bird does when it comes to flying, but an airplane is so much more efficient at going from point A to B than a bird ever could be. Could we think the same of an AI kind of capability: that even though it consumes so much more energy, it's potentially so much more intelligent, even if it does it in a very, very different way? I mean, it doesn't flap its wings.

Daniel Enetoft:

Yeah, I think, I mean, yeah, that's almost a philosophical question. I choose to believe that the human, or the sort of biological intelligence, will be better.

Anders Arpteg:

Fair enough. But okay, just hypothetically assume that there will be a point where we have some kind of machine intelligence that will be superior. What do you think that would mean, when any kind of content can be generated and the IP for it is created by machines? Machines don't really have a legal status per se. Should we have legal entities for machines then? I think there actually is a patent application where they have an AI as part of the application itself, if I'm not mistaken. Yeah, it's been a test case thrown out all over the world, whether an AI can be an inventor or not. Yeah, I mean...

Rasmus Fredlund:

According to patent law, it has to be a human.

Daniel Enetoft:

According to patent law it has to be a human, and also according to all other law it needs to be a human that is liable for something.

Rasmus Fredlund:

I mean, that's the big question. Who's liable in that case?

Anders Arpteg:

I think Saudi Arabia has an AI citizen. So in that case, I mean, if it's a citizen of the country, it could be a...

Daniel Enetoft:

What will happen if that person, or that entity, commits a crime? It would be interesting to see. I have no idea.

Anders Arpteg:

Okay, let me end with a more philosophical question. Assuming there will be a point when we have an AGI system, we can imagine a different set of scenarios. And if we just think of the two extremes here, a really good one or a really bad one: the really bad one would be the Matrix, the Terminators of the world, where machines try to kill all the humans, and yeah, you know, we've seen the movies. And the good one could rather be that the AI is trying to find a cure for cancer, it is finding some kind of way to fight climate change.

Anders Arpteg:

It's being the educational teacher that helps, you know, a lot of people to be so much more knowledgeable than they ever could be otherwise. It's potentially building up this kind of world of abundance where you don't need to work 40 hours a week anymore. You can choose to work if you want to, but otherwise you can feel free to pursue your passions and creativity as you see fit. And that could be the other extreme. Where do you think we will end up? Will it be more towards the dystopian one or the utopian one? Daniel, if we start with you?

Daniel Enetoft:

I'm just wondering if the utopian one would be as good as the vision. I mean, the good thing, or what motivates a lot of people, is to be able to be creative and to feel valuable in the context where you're placed. And if machines would take over all creativity and all development and all functions of society where you provide value, what would we have left?

Anders Arpteg:

I mean, it's a great question, and we hear that quite a lot. Nick Bostrom wrote a book about the deep utopia, and he basically asks, you know, what will happen if that's the case. He also wrote about the dystopia. But still, you know, you can think about some people already today in society who are not required to work. It could be children, it could be the super rich, it could be retired people. You know, are they really that depressed? Are they really not finding value in life? Right? Couldn't you imagine that we could reach a point where we don't have to work, at least not as much as today, while still finding happiness in some sense?

Daniel Enetoft:

Yeah, hopefully. Hey, it sounds great. I would play more padel tennis. But wouldn't you get...

Anders Arpteg:

Yeah, be careful then. I'm wearing out my arms. Rasmus, what do you think?

Rasmus Fredlund:

No, but I agree with Daniel. I mean, there are a lot of super rich people that are still doing the work that made them rich. And children, I mean, they don't have anything to do for a couple of years, but then they also seek a meaning in life or want to do something for society. So I think people will still do that, but AI will be a tool to achieve those goals. So, more positive or more scared about the progress of AI?

Rasmus Fredlund:

Positive, but I think that depends on the people using the AI as well. But I believe in the good of people.

Anders Arpteg:

I usually phrase it as: I would be more comfortable when we do have AGI, but I'm really scared before we do, because then we're dependent on people abusing AI that is stupid, and that will be dangerous, I think. Okay, in any case, it was a pleasure to have you here, Rasmus Fredlund and Daniel Enetoft, and I hope you can stay on for some more after-work discussions and speak about even more interesting stuff in a more off-camera setting. Thank you so much for coming in.

Daniel Enetoft:

Thank you for having us.

Anders Arpteg:

Yes.
