AIAW Podcast

E176 - Scaling Enterprise AI Agents - Danilo Nobrega

Hyperight Season 12 Episode 3


In Episode 176 of the AIAW Podcast, we sat down with Danilo Nobrega, Founding Go-to-Market Lead for the Nordics at LangChain, to unpack what it truly takes to scale enterprise AI agents beyond the prototype stage. Drawing on his background in data infrastructure at MongoDB and his work expanding LangChain across Scandinavia, Danilo shared practical insights into the shift from simple LLM wrappers to fully orchestrated, stateful, production-ready agentic systems. We explored LangGraph, agent memory, feedback loops, observability, and cost control—breaking down how organizations can build reliable, explainable, and economically sustainable AI applications at scale. From enterprise use cases and ROI to the Nordic AI ecosystem and the future of AGI, this episode delivers a grounded and strategic look at where agent engineering is headed next. 

Follow us on YouTube: https://www.youtube.com/@aiawpodcast

A 5‑Minute Agent For Real Work

Anders Arpteg

When did you build this agent? It was recently, right? Yeah, just this week. This week, okay. So how did it get started?

Danilo Nobrega

You run this agent builder on your laptop, or... No, no, it's basically part of LangSmith, so it's a SaaS product. Yeah. You basically sign in and then you can just prompt and create your own agent with natural language. And what did you do? What kind of agent did you build? So I built an agent that scans my Gmail, my Slack, and my calendar for the next week. Yeah. And then it basically sends me a report at 3 a.m. every morning. 3 a.m.? Yes. Okay, very specific. You never know what time I might wake up, but usually it's around six. I receive it a bit before, so by the time I start working, I know what the important things are to do, from that perspective at least.

Anders Arpteg

And nothing falls between the chairs, yeah. So how do you start doing it? Do you give it some prompt saying this is what I want the agent to do, or how does it work? Yeah, you just describe it in natural language.

Danilo Nobrega

And what do you write? So I just wrote: I want an agent that looks at all my emails from customers that are not answered. I also want you to scan Slack for any messages that I haven't answered, scan my calendar for any upcoming customer meetings this week, and send me a report over Slack.

Anders Arpteg

That's it. So you get like a structured message somehow on Slack?

Danilo Nobrega

And yeah, I must confess the first iteration was a bit weird, with the emojis and whatnot. But you can always go in, edit the instructions and the format of the report, and make it as you want, you know.

Anders Arpteg

Yeah, cool. And you've been using it for a week now then?

Danilo Nobrega

No, I actually just did it, I think, the day before yesterday. Okay. So I'm still tweaking it, but it's a start and it works. And I'm looking forward to seeing how it's going to increase my productivity. Yeah, it's like having a manager there.

Anders Arpteg

How long did it take for you to build the first version of it? You would say five minutes? Five minutes. Yeah, impressive. Have you tried any other kinds of agents that you built?

Danilo Nobrega

Yeah, so I have one that's been running a bit longer. It basically looks at all Nordic companies that have AI job posts that mention LangChain, LangSmith, or LangGraph, and then sends back a report with the companies and the job ads, so I can go in and have a look. Yeah.

Agent Builder On LangSmith

Anders Arpteg

Cool. Amazing what you can do these days, right? Yeah, it is. Well, very welcome here, Danilo Nobrega, is that correct?

Danilo Nobrega

Nobrega da Cunha.

Anders Arpteg

Ah, thank you. Portuguese. Okay, I'm trying. I am horrible at pronunciation, but thank you so much. And you're the founding go-to-market lead for the Nordics, right? Yes, at LangChain. And I guess we're going to hear a lot more about what you can do with LangChain and LangSmith and the Agent Builder that I hadn't heard about before. It sounds amazing, all the things you can do there. But you also have a long background, like 15 years, right, of technical experience in different leadership positions, and then MongoDB, right?

Danilo Nobrega

Yes, yes, six years at MongoDB.

Anders Arpteg

Yeah, amazing. Well, before we get into all the details of LangChain and what's possible there, perhaps you can just give a bit of a personal background to who Danilo really is.

Danilo Nobrega

Yeah, so Danilo is a Brazilian guy who ended up moving to Sweden after living in five different countries. He's married to a Canadian woman and has two kids that were born in France and Sweden, so it's quite an international family. And then from a professional perspective, I started as a computer engineer. I think my first job was in the year 2000, so it's been a while, more than 25 years that I've been working with tech. After that, I quickly realized that I had to get some extra education, because I was promoted to manager of a development team. I was actually designing, delivering, selling, and supporting software; I would go to the data center with the Red Hat Linux CD and install everything. And when I got a team, I decided to get an education and got admitted to KTH here.

Anders Arpteg

KTH, yeah.

Danilo Nobrega

Yeah, KTH, the Royal Institute of Technology, so I ended up moving to Sweden. What year was this approximately? This was in 2005. Yeah. Then after that, I did my thesis at a company called Amadeus, which is travel tech. So I actually lived in the south of France for over 10 years. It was quite a nice experience, and over there I had different roles: project and product management, managing product managers and product owners. Then I moved into pre-sales and then into sales, at MongoDB basically. So I would say five years with telecom, ten years with travel tech, about six years with data, and now AI.

Anders Arpteg

Yeah. So you just recently moved to LangChain then?

Danilo Nobrega

Yeah, tomorrow it's going to be a month, basically. Yeah, yeah.

Anders Arpteg

Yeah. How come you decided to move to Langchain? Any specific reason?

Early Results And Productivity Hopes

Danilo Nobrega

Yes, there are many, many reasons, but let's say that when generative AI took off about three years ago, I was at MongoDB and I saw it, and I remember that ChatGPT moment. We had launched MongoDB Vector Search before that, so we started working with a lot of customers on RAG. That was my first glimpse into what was going on. And of course, LangChain was a partner back then, so I knew the company.

Anders Arpteg

You met them at MongoDB as well.

Career Journey To LangChain

Danilo Nobrega

Yeah, yeah, we worked together. And then, fast forward three years, I realized that things were changing: now enterprises were actually starting to use this technology, right? There was the Klarna moment, but now many, many companies started to use it. And I also got the realization that this is a giant wave. At that time only a bit more than two years had passed, but it's going to be something like 20, 30 years. It's going to be bigger than what cloud was, you know. And when the opportunity knocked at my door, I couldn't say no, because it's LangChain. Harrison started this and released the first version, I think, a month before ChatGPT came out. So they were at the very, very beginning. An iconic company, a great opportunity, and somehow I knew that things were moving up the stack, so to say, and I wanted to be a part of that.

Anders Arpteg

How long has LangChain been around, by the way? Three years. Yes. Okay, so was it right after the ChatGPT moment then, or... well, they launched before.

Danilo Nobrega

I think it was like a month or a few weeks before ChatGPT came out that LangChain came out, yeah.

Anders Arpteg

Yeah, it's changed so much in so many ways. And today, can you describe a bit more what your current role is?

Danilo Nobrega

Yeah, so my role is that I'm responsible for all customers within the Nordics. I basically manage existing customers but also work on introducing the technology to new customers and getting them on board. A lot of these customers are already using the open source, so it's really about getting them to understand and see the value in LangSmith. Some companies may not understand exactly what the LangChain company does compared to the open source, etc.

Anders Arpteg

Perhaps you can just explain a bit more: what does LangChain really provide?

Danilo Nobrega

Yeah, so LangChain has the goal of really being the experts on all things agent engineering, and the main thing we make is an agent engineering platform called LangSmith. It covers the full cycle of agent engineering: build, observe, evaluate, and deploy. Within build, you have LangChain, which coined the name of the company, LangGraph, and Deep Agents. You also have Agent Builder, the no-code one we were just talking about. And then observability, evals, and deployment are all in the LangSmith platform, which is paid, yeah.

Anders Arpteg

So open source, we have LangChain and LangGraph then, right? Or what is really available open source? So open source is LangChain, LangGraph, and Deep Agents.

Danilo Nobrega

Deep Agents as well.

Anders Arpteg

What is Deep Agents, by the way?

Danilo Nobrega

Deep Agents is very interesting. It's only been released very recently. The idea came from Harrison, who had it while observing how deep research or Claude Code work. Basically, with a prompt you can create a whole app, or with a prompt you get this really intricate, long-running research that comes back with so much information. So the idea was: can't we make this open source so people can do their own thing? And that's basically Deep Agents.

Anders Arpteg

So you can basically build your own deep research by using Deep Agents.

Danilo Nobrega

Yes, you literally do pip install deepagents and you're up and running, and you have an SDK and a CLI that you can use to build that.
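
For readers who want to try this, here is a minimal, hedged sketch of the open-source deepagents SDK. It assumes the `deepagents` package, a model API key in the environment, and a made-up `search_web` tool; parameter names (for example `instructions`) may differ between releases, so treat it as an illustration rather than the definitive API.

```python
# pip install deepagents
# Minimal sketch: a deep agent with one (stubbed) tool. The agent plans with a
# to-do list, can spawn sub-agents, and returns a final report in its messages.
from deepagents import create_deep_agent

def search_web(query: str) -> str:
    """Search the web and return a short summary of the results (stubbed here)."""
    return f"(stub) top results for: {query}"

agent = create_deep_agent(
    tools=[search_web],
    instructions="You are a careful research assistant. Plan first, then execute.",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Summarize recent LangGraph releases."}]}
)
print(result["messages"][-1].content)
```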

Anders Arpteg

Yeah, it's very kind of you to release that as open source as well. I think a lot of people would love to have that.

Why Join LangChain Now

Danilo Nobrega

Yes, yeah, it's what we call our own agent harness, right? If you think about it, deep research and Claude Code are not really using a completely new model. No, they're using the same model as everyone else, but it's how you use the model.

Anders Arpteg

It's a scaffolding, right?

Danilo Nobrega

Yeah, yeah. The whole harness, and you can really get much more out of the model through these techniques. So the idea was to package that as open source and make it available for other people.

Anders Arpteg

I personally hate the term scaffolding, meaning all the things around the model, but do you have a... how would you phrase it? What do you call the whole system, so to speak, surrounding the model itself?

Danilo Nobrega

Yeah, so in Deep Agents' case, we call it a harness. Okay. In LangGraph, we call it a graph. Right.

Anders Arpteg

And LangChain was just like a chain of... Perhaps we should do a quick description of the difference between LangChain and LangGraph for people that haven't heard it.

Danilo Nobrega

Sure. You can picture it like this: LangChain is like a recipe, so it's very sequential, right? If you're baking a cake, first you get the flour, you break the eggs, you add the milk, you put it in the oven, and then you have the cake.

Anders Arpteg

As long as nothing goes wrong, you just follow the steps. Exactly.

What LangChain And LangSmith Provide

Danilo Nobrega

But think about it: if you're making some nice sourdough bread, the consistency of the dough might not be right, so you might want to add a little bit more of this or that to get things right. And LangGraph allows you to do that, because you have nodes, which are basically functions; you have edges, which dictate what's going to happen next; and then you also have state, with checkpoints at every node. You can also do branches and loops, so now you can do if-then and while types of logic. Because let's face it, things in the real world are not always sequential, right? They're messy, and you need that to control it. And then with Deep Agents, you have different things. You have the planning part, which is like a to-do list. So before it actually goes out and does anything, it will create a very simple to-do list. Funny enough, think about a person: if you don't have a to-do list and you're not really focused, you're going to be doomscrolling on LinkedIn or whatever, right? It's the same thing. The attention span of these models is very small, so they need to be taken step by step. And then there are other parts which orchestrate and can spawn children, sub-agents that run in parallel as well, doing specialized tasks. The idea is always to avoid context rot, so to say. Imagine if it was only one agent: the context would get so big, similar to what I told you before, right? You can mess up the context, or it's not going to be very precise, or it's going to cost you a fortune, because it's a lot of tokens. So by having these specialized sub-agents, they do whatever they're going to do and then send a summary back to the main one that's orchestrating. That's basically an optimization: it keeps the context clean, reduces cost, and does things in a more efficient way.
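
To make the "sourdough" picture concrete, here is a small LangGraph sketch: nodes are functions, a conditional edge decides whether to loop back or finish, and a checkpointer stores state after every node. The dough example and node names are made up for illustration; the API calls are standard LangGraph.

```python
# A minimal LangGraph sketch of the "sourdough" idea: loop until the dough is right.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

class DoughState(TypedDict):
    hydration: float
    attempts: int

def mix(state: DoughState) -> DoughState:
    # Each pass adds a bit more water (the "add a little more of this or that" step).
    return {"hydration": state["hydration"] + 0.05, "attempts": state["attempts"] + 1}

def check(state: DoughState) -> str:
    # Edge logic: loop back while the consistency is off, otherwise finish.
    return "done" if state["hydration"] >= 0.75 or state["attempts"] >= 5 else "again"

builder = StateGraph(DoughState)
builder.add_node("mix", mix)
builder.add_edge(START, "mix")
builder.add_conditional_edges("mix", check, {"again": "mix", "done": END})

# The checkpointer saves state after every node, which is what enables retries
# and resuming a run later on the same thread.
graph = builder.compile(checkpointer=MemorySaver())
print(graph.invoke({"hydration": 0.6, "attempts": 0},
                   config={"configurable": {"thread_id": "bake-1"}}))
```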

Anders Arpteg

Super cool. And quickly, if you were to describe the commercial offerings a bit more, what do you have there?

Danilo Nobrega

Yeah, so on the commercial side we have LangSmith, the agent engineering platform. There's Agent Builder, which we were talking about before, a no-code agent that creates agents: you can use natural language to create an agent in five minutes, and that's one piece of build. Then you have LangSmith observability, which can observe all of the traces generated within an agent. So for every execution, the whole trajectory from A to Z, what is happening under the hood, you can see it there, both in terms of latency and cost. It does much more than that, but that's the basics for now. Then there's evaluation, which is evals: being able to evaluate these things, whether with LLM-as-a-judge, a human in the loop, conciseness, or many other criteria you can use. And then there's the deployment part, where you can deploy an agent with one click, basically. And you don't have to use all of it; you can just use observability and evals, for example. So yeah, that's how it works.

Anders Arpteg

And they're all part of LangSmith as a package, so to speak. Yes, a platform, yeah. Yeah, amazing. And if we just try to compare a bit the MongoDB background that you have with LangChain... I'm trying to get more into the enterprise sector here, and of course LangSmith is targeted, I guess, mainly at enterprises.

Danilo Nobrega

Yes, right. I mean enterprises, but there are also a lot of startups that use it.

Anders Arpteg

Okay. So why do you need... okay, if we phrase it like this: why do you need LangSmith?

Deep Agents And Open Source Stack

Danilo Nobrega

Yes, so that's a very good question. I think to answer it, I need to tell a little story first. The way that building agents works is completely different from developing code. When you develop code, test it, and put it into production, it's very deterministic. You know that, say, a tax application is always going to calculate 30 or 60 percent tax on this revenue. That's not going to change. You can test for that. Sure, maybe the data changes and so on, but it's very straightforward to test, and therefore you can do quality assurance and put it into production with very high certainty that the code is working well. Now, if you move into LLM apps, which are basically the LLM, maybe with some retrieval, the LangChain example we were talking about, and you get a result, it's no longer deterministic. It's already very probabilistic, or stochastic, right? Because depending on the prompt you give, even if you give the same prompt twice, the answer might vary. So then how do you control that? How do you ensure that you will get good answers for your use case? And if you take this further to the agents of today, which we were talking about before, with multiple LLMs, multiple different models running at the same time, multiple agents running at the same time, with human in the loop and so on, the complexity just increased. So then it's even more important to control that. The main point here is that the code has moved from the code itself to the traces. The traces are effectively the code, or the behavior, of these agents, because the LLM is a black box, right? So if you're able to, first, observe the traces, and then evaluate them and introduce that feedback loop, then you can really control it.

Anders Arpteg

I mean, for normal software or IT development, you can simply read the code and understand a bit how it works, but that doesn't work for LLM-based applications. Instead, you can observe the traces to understand how it works.

Chains vs Graphs: State And Control

Danilo Nobrega

Exactly, exactly. And that's the whole new discipline, I would say, of agent engineering. It is, because A, it's a lot of work, and B, it requires many different disciplines. It's not just the developer anymore, the engineer. You need someone from product, because the person from product will know how claims are supposed to work and what types of behaviors are expected. And you also need a data scientist, because when we're talking about production, we're talking about not just one run; we're talking about thousands, hundreds of thousands of traces where you then need to find patterns, both positive and negative, make sense of them, and feed that back to improve the quality even more, right?

Anders Arpteg

So yeah, improve it, of course, and try to understand how it works. But I guess also, from a security point of view, you sometimes want to guardrail some behavior. Yes. How does that work in LangSmith? Is there some way to prevent it from doing certain actions?

Danilo Nobrega

Yeah, for sure. But usually you would put that inside of, let's say, your LangGraph, if that's what you're using. We have what we call middleware, so you can put it before and after the graph finishes. You can modify the input a little bit, or a lot, and the output as well. You can ensure that if the output is not good, you don't send it out into the world, right? And this is very important, because when we talk about agent engineering, the discipline, there are a few things that matter. First, I talked about the convergence of these three different types of roles; ideally it should be one person, but if not, they should work in a team, not in silos, with the same objective. But then you can also set evals to observe certain things, for correctness as well. I was talking about conciseness, but it could be correctness too. So you can set whatever criteria you want and observe that, and that means you will get signals. And based on these signals, you know if your agent is performing better or worse. Right.
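
One way to express that before/after guardrail idea in plain LangGraph is to wrap the core model node with an input check and an output check, so a bad answer never leaves the graph. This is an illustrative sketch, not LangChain's actual middleware API; the node names and the blocklist are made up.

```python
# Illustrative guardrails: a pre-hook node and a post-hook node around the model call.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class GuardedState(TypedDict):
    question: str
    answer: str

BLOCKED = ("credit card", "password")

def check_input(state: GuardedState) -> GuardedState:
    # Pre-hook: redact anything we never want sent to the model.
    q = state["question"]
    return {"question": "[redacted request]" if any(b in q.lower() for b in BLOCKED) else q}

def call_model(state: GuardedState) -> GuardedState:
    # Stand-in for the real LLM / agent call.
    return {"answer": f"Draft answer to: {state['question']}"}

def check_output(state: GuardedState) -> GuardedState:
    # Post-hook: if the draft fails a check, replace it before it goes out.
    if "[redacted request]" in state["answer"]:
        return {"answer": "Sorry, I can't help with that."}
    return {"answer": state["answer"]}

builder = StateGraph(GuardedState)
builder.add_node("check_input", check_input)
builder.add_node("call_model", call_model)
builder.add_node("check_output", check_output)
builder.add_edge(START, "check_input")
builder.add_edge("check_input", "call_model")
builder.add_edge("call_model", "check_output")
builder.add_edge("check_output", END)
graph = builder.compile()
```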

Anders Arpteg

And the evals, if you define them in LangSmith, are they available then in LangGraph, so you can use the signals to control how it behaves?

Danilo Nobrega

That's a very good question. They are in LangSmith, okay? You have the data, and then you have to modify and create a new version of your graph. So let's say you modify a prompt; you're then going to observe what happens with that. And there are two types of observations you're going to do: one which we call offline, and one which we call online. The offline one is something you get as an aggregation, say at the end of the day, to see what is happening, and you get some intelligence from it. The online one is when you put something into production right away and get the signal immediately: is this working or not? Is it better than my previous version or not? So you can very quickly fall back if you need to, right?
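
For the offline side, here is a hedged sketch of running an experiment over a dataset, assuming the langsmith SDK's evaluate helper, a LANGSMITH_API_KEY in the environment, and an existing dataset named "support-questions"; the dataset name, the target function, and the conciseness rule are all made up for illustration, and in practice the evaluator is often an LLM-as-judge prompt rather than a word count.

```python
# Offline evaluation sketch: run the agent over a dataset and score each output.
from langsmith import evaluate  # requires LANGSMITH_API_KEY in the environment

def my_agent(inputs: dict) -> dict:
    # Stand-in for invoking your deployed graph or agent.
    return {"answer": f"Short reply to: {inputs['question']}"}

def conciseness(run, example) -> dict:
    # Trivial rule-based evaluator; swap in an LLM-as-judge for real use.
    answer = run.outputs.get("answer", "")
    return {"key": "conciseness", "score": 1.0 if len(answer.split()) <= 50 else 0.0}

results = evaluate(
    my_agent,
    data="support-questions",      # name of a dataset already in LangSmith
    evaluators=[conciseness],
    experiment_prefix="prompt-v2", # compare this run against earlier prompt versions
)
```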

Anders Arpteg

Okay, and the online one is the one that is running directly, right? And the offline one is like once per day, you get some kind of aggregated metric. Yeah. Okay. I mean, super cool. And I can see that being very important to have as well.

Sub‑Agents, Memory, And Cost Control

Danilo Nobrega

I would say much more than very important; it's fundamental. It's the difference between having something that is just a prototype that you download to your laptop and run once, hey, it works, and something that is production ready. So this is another thing that we say with agent engineering: it's shipping into production and observing what happens in production that actually makes or breaks the agent.

Anders Arpteg

You're touching on one of my passion topics as well: how to really put it in production to build real value, because the prototypes are usually not providing value properly. Perhaps, even though you've only been there a month, and I know it's hard to speak about this, what would you say the main challenges are for LangChain's customers to build real value in their business? Is it that they are working a lot on prototypes and can't put them in production? Or what do you think the main challenges would be?

Danilo Nobrega

Yeah, I would say the technology is there. It's proven that it provides value. I mean, just look at Klarna and some others; what they've done is impressive. Now there are two other parts to the puzzle: the process and the people. So this is really about organization, right? Does the organization have a top-down mandate for agents? And if so, how are these people organized? What we see is that the organizations with more success usually have a dedicated group to do this, with the different expertise, the different profiles or personas I mentioned before, that work together. And that was product people and data scientists, perhaps. And engineers, software developers, yes. And then they are able to produce value. Once that value is produced, you can disseminate it within the organization. That's what we've seen works very, very well. Yeah.

Commercial Platform: Build, Observe, Evaluate, Deploy

Anders Arpteg

But still, I guess some people believe... I'm trying to speak a bit from my own experience here, but I think people underestimate the complexity of actually using agents properly. They think: I can just take my chatbot, I build a RAG application, it has some tools it can use, and now suddenly I can have an invoice management tool that fixes all the invoices and takes whatever action using some lightweight tooling. But it is a bit harder than that, I would argue. Do you see what I mean? I mean, yes, it's much, much harder.

Danilo Nobrega

And that's why we coined the term agent engineering, because it is an engineering effort to get it to work well in production. It's very easy to build an agent, very hard to make it work in production.

Anders Arpteg

Operations are much harder, right? Yes. So the development, if you think in DevOps terms here, is very easy; I mean, you build something in five minutes. But then to actually run it properly and make sure it's running in the right way is much harder. Yes, yes, it is.

Danilo Nobrega

Yeah, yeah. And it's a continuous improvement thing. It's not like you ship and forget about it. No, it's constant observing, because you don't want the cost to spike through the roof. You don't want it to be a liability. I always say the most expensive agents are cheap agents, because that's a liability for your business, right? It's going to do things that will damage the brand or whatever. So you really have to take it seriously and have the right tools to make it work in production. And what we see is that the companies that have observability from day one are doing good things. We have a report that we release every year around this, and there are a lot of interesting statistics there. Yeah.

Anders Arpteg

I love the term agent engineering. We've had a guest on here called Jan Bosch, who is a professor in software engineering, and I think he was the person who coined AI engineering a while back. And I'd love for him to discuss more about what agent engineering really is, because it is, as you say, a very different type of engineering in some sense, right? Exactly. Yeah, okay, cool. But how successful is LangChain really? I think you hit over a billion downloads. Can that be accurate?

Danilo Nobrega

Yes, it is. Every month we have about a hundred million downloads. Every month. Every month. And it's pretty impressive when you think about it, yes.

Anders Arpteg

And still, for all the people using LangSmith and LangChain and LangGraph and all the different parts that you have, there are a number of challenges, I guess, to getting them to use it. And I know a lot of people that are trying to use the open source versions and trying to get by without having to pay for LangSmith. But I think you put it really well: we need the observability and the evaluations that LangSmith provides, otherwise it will be tough. But still, a lot of people are moving to the cloud as well. Perhaps you can just speak a bit about that, because to my understanding at least, you have a very close collaboration or integration with Google Cloud, right?

Danilo Nobrega

Yeah, so LangSmith, the managed version or the SaaS, runs on Google Cloud, yeah.

Why Traces Are The New Code

Anders Arpteg

So it seems like it's gotten significant distribution already, even at the top with the cloud providers, which is really impressive. Perhaps you can speak a bit more about what's next. Okay, this is what we have right now, and I'm sure you have a roadmap ahead for what's missing today. And I personally am a bit interested in the security and compliance aspects as well. I'm not sure if you're doing any work there, but perhaps you can speak a bit about what is coming up for LangSmith.

Danilo Nobrega

So I can't really talk roadmap here, unfortunately. Yeah, but what I can say is that here in the EU we have the EU AI Act, right? And I think it's great, because for agents and for agent engineering, that type of traceability, visibility, and accountability will automatically translate into really well-working agents.

Anders Arpteg

Yeah, and I'm thinking while I'm speaking here now, but I think, for an organization that is moving more and more into agent-based development, being able to have that kind of traceability of how the system works will be crucial for being compliant with the AI Act, even though the AI Act doesn't really speak about agent engineering, which it should, I guess, right?

Danilo Nobrega

Yeah, I don't know if you need the exact terminology, but when you think about it, I remember a few years ago there were these really lively discussions with politicians and scientists. They were talking about the black box of foundation models and LLMs, and how it was impossible to understand what was happening under the hood. And people were saying, but we need to know what's going on, because how else can we create policy or whatever? And the traces right now are the best visibility we have into the behavior: what happens when you poke it in some way, when you give this input and it gives this output, right? And once you start having more and more input and output, you actually see patterns emerging. So then you can actually gain a glimpse of what's going on inside the black box. Yeah.

Anders Arpteg

Yeah, and the more I think about it: if you try to really look inside a trillion-parameter model, of course, it's impossible to really understand what's happening. And the traditional way of IT software development, as you mentioned, just trying to understand what's happening by looking at the code, doesn't work. So you can't look at the code, and you can't really look at the parameters either. The only thing you're left with is the traces and trying to understand what's happening from them. But can you perhaps elaborate a bit more: what is really a trace? What can you see in a trace in LangSmith?

Guardrails, Middleware, And Evals

Danilo Nobrega

You can see the runs, which are each individual step in a trace. It looks a little bit like when you open your file system. I haven't used Windows for a number of years, I use macOS, but I remember when I had Windows 95 and opened the file system, there was a tree, right? All the folders. It looks very much like a tree. So you start here, and whatever is happening, any LLM that is triggered, any calls that are made by the agent, everything is tracked. And there are two things you see there: the latency, but also the cost. So you're able to know exactly, from a trace... a trace is basically: I asked a question, the agent gave me an answer, and it's everything that happened in between. And then you have a thread, which is like the whole conversation, right? So that's a trace, basically, in a nutshell.
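
Here is a small sketch of how that run tree comes about: with tracing enabled, every decorated function call becomes a run nested under its parent. It assumes the langsmith SDK and the environment variables shown; the function names are illustrative, and your LangChain or LangGraph calls are traced the same way without any decorators.

```python
# Tracing sketch: each decorated call becomes a run; nested calls become child runs.
#   export LANGSMITH_TRACING=true
#   export LANGSMITH_API_KEY=...
from langsmith import traceable

@traceable(name="retrieve_docs")
def retrieve_docs(question: str) -> list[str]:
    return ["doc about refunds", "doc about shipping"]

@traceable(name="answer_question")
def answer_question(question: str) -> str:
    docs = retrieve_docs(question)            # shows up as a child run in the tree
    return f"Based on {len(docs)} docs: ..."  # a real LLM call would be another child

# One call = one trace; open it in LangSmith to see the tree with latency and cost per run.
answer_question("How do refunds work?")
```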

Anders Arpteg

It sounds like something you need an agent to help you with observing the traces.

Danilo Nobrega

But that is exactly what an eval is.

Anders Arpteg

All right.

Danilo Nobrega

Yes, it's something that's observing the agent, right?

Anders Arpteg

So the evals are basically agents going through the traces and then trying to compute some metric from that? Or... behind the scenes, I don't know technically how it's implemented.

Danilo Nobrega

So I can't say if it's actually an agent; I'd need to speak to the engineering people for that. But it is what is observing your agent at a certain point, yeah.

Anders Arpteg

And I saw you also comment a bit about the need for LangGraph and having this kind of state-based reasoning, not just LangChain and that kind of sequential programming, if you can call it that, for agents. Can you just elaborate a bit more: why do you think we need these kinds of graphs with states and different loops that you can run through potentially?

Danilo Nobrega

Yeah, so I think Harrison had a quote that the right level of abstraction for agents is not more but less, meaning that in order to have something working really well, you need to have all those things defined: the orchestration part of it, so you can really get the right behavior out of the agents. Another thing, and I compare it a lot with databases because I'm coming from that world, is to have something which is reliable, where you can have disaster recovery. If something breaks, it's very easy to understand why, because each of the nodes has a checkpoint. It's very similar to a database transaction that is saving information in a log: if it breaks, it can somehow remember what was going on. So it has those checkpoints, and that also gives you human in the loop for free. For example, let's say you're doing an approval flow for some sort of generated artifact, and you need a VP to approve, and you give the VP three days. It can wait for three days, and when the approval comes, it will continue the execution, right? You need to have state for that. And then we also talked about retries, the loops. If it hasn't worked the first time, you might want to try again and have these loops, so the agent can actually finish executing and be successful, right?
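
A hedged sketch of that approval flow, assuming LangGraph's interrupt/Command API (available in recent versions); the artifact and approver are made up. Because state is checkpointed, the run can sit paused for days and resume exactly where it stopped once the VP responds.

```python
# Human-in-the-loop sketch: the graph pauses at an interrupt and resumes on approval.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command

class ApprovalState(TypedDict):
    artifact: str
    approved: bool

def draft(state: ApprovalState) -> ApprovalState:
    return {"artifact": "Q3 press release draft"}

def wait_for_vp(state: ApprovalState) -> ApprovalState:
    # Pauses the graph and surfaces the artifact; nothing runs until a resume arrives.
    decision = interrupt({"please_review": state["artifact"]})
    return {"approved": bool(decision)}

builder = StateGraph(ApprovalState)
builder.add_node("draft", draft)
builder.add_node("wait_for_vp", wait_for_vp)
builder.add_edge(START, "draft")
builder.add_edge("draft", "wait_for_vp")
builder.add_edge("wait_for_vp", END)
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "release-42"}}
graph.invoke({"artifact": "", "approved": False}, config)  # runs until the interrupt
# ...three days later, the VP approves and the same thread picks up where it left off:
graph.invoke(Command(resume=True), config)
```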

Shipping To Production And Learning

Anders Arpteg

Yeah, and I think you also mentioned the feedback loop. Having the memory, of course, is important, and the state, and you can have the checkpoint and restart from that if necessary. But for the feedback loops, I guess it's from humans, right? Or can you elaborate a bit more on what you mean by feedback loops?

Danilo Nobrega

Yes and no. If we take the scenario we were discussing before with LangSmith, where you have all the data that's generated on the platform with both observability and evals, you can actually get insights from that, where patterns emerge and so on. So let's say I take all the thumbs-up ones and all the thumbs-down ones; I can get an aggregation of that and then feed it back to my LangGraph, to tweak it, so to say, based on those insights, right? But if you look at Deep Agents, it's really interesting, because Deep Agents has short- and long-term memory. It has semantic memory, episodic memory, and procedural memory. Really? And it has a little file called instructions.txt, and it will actually automatically do that feedback loop and update that file as you start using the agent. Is that only for Deep Agents? Deep Agents is the one that has that, yes.

Anders Arpteg

Yes, it's actually a bit similar to OpenClaw, you know. It's also trying to improve its own skills in the soul.md file that they have, stuff like that.

Danilo Nobrega

I haven't used OpenClaw yet, but Deep Agents does that. So the first day you're using it, maybe you will tell it: don't use corporate jargon, I don't like "synergy", give me things as a bullet-point list, and always be very concise. And the second day you use it, it will already remember that. Then at the end of the day, it also does a summarization of everything that happened and updates itself, much like what our brains do when we go to sleep, right? And over the course of a week or two, it really feels like, oh, it knows me so much better. And it does. And it's very simple: it's just a freaking text file that's updated, but it works. It's brilliant. Yeah.

Anders Arpteg

Just saying "text file" is oversimplifying it, I think, but yeah, it is an important part. And what types of memory did you say it had? You had semantic and procedural memory. Can you just go through them a bit and explain?

Org Design And Skills For Agent Teams

Danilo Nobrega

So semantic is like: Danilo doesn't like corporate jargon, he wants things in a bullet list, and he likes things summarized. Some personal taste, that type of definition. Then the episodic one is about experience, and the procedural one is about rules. Yeah. So it will take these three different types, and, I forgot, there's the hot update as well: maybe during an exchange it will realize, ah, I have to update this, and it will update it on the fly so it doesn't make that mistake again, even in the same session. So that's more or less how it works. But the memory, I think, is a really interesting topic, and it's something that is going to evolve a lot more. Because if you think about it, where is all the IP in the old software engineering? It's in the code, right? But if you take agents and agent engineering, where's the IP? It's the memory. Of course, there's the architecture, there are the evals, there's the logic around that. But somehow the memory will also gain a certain value, right?
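
To show why "a freaking text file" is enough to make an agent feel like it knows you, here is a deliberately simplified illustration of the instructions.txt idea: preferences learned during a session get appended to a plain text file that is prepended to the system prompt next time. This is not the actual Deep Agents implementation, just a sketch of the feedback loop.

```python
# Simplified illustration of a self-updating instructions file (not Deep Agents internals).
from pathlib import Path

MEMORY_FILE = Path("instructions.txt")

def load_instructions() -> str:
    return MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""

def remember(note: str) -> None:
    # "Hot update": written as soon as the preference is noticed, mid-session.
    with MEMORY_FILE.open("a") as f:
        f.write(f"- {note}\n")

def build_system_prompt(base: str) -> str:
    # Next session, the accumulated preferences ride along with the system prompt.
    return f"{base}\n\nKnown user preferences:\n{load_instructions()}"

remember("No corporate jargon; avoid the word 'synergy'.")
remember("Answer with a concise bullet-point list.")
print(build_system_prompt("You are Danilo's daily briefing agent."))
```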

Anders Arpteg

The way you save the memory would be IP as well, right?

Danilo Nobrega

Yeah, I mean, it's a combination of everything, but you no longer have the code, so everything around the agent somehow becomes your IP, right? So you can sell your memory in the future, perhaps? I don't know.

Anders Arpteg

Yeah, I mean, I can see that.

Danilo Nobrega

This is just a thought I had while I was preparing for the podcast. Yeah, there are a lot of interesting things that don't work the way we're used to in the software world, right?

Anders Arpteg

I mean, if you just go to the big frontier labs as well, I know Sam Altman and Gemini and Google and everyone are working on adding memory to the models as well. You already have it. I guess they're not going to have memory as part of the normal parameters; it's going to be some kind of other model or module around it. But in my view at least... I'm not sure if you've heard about Yann LeCun's JEPA architecture.

Danilo Nobrega

I've heard about it, yeah. I'm not sure if I remember it; maybe you can refresh my memory.

Anders Arpteg

I mean, it's simply saying that it's not sufficient to have a single LLM that does auto-regressive next-token prediction. You need more. And the more could be that it has a working memory, it has a world model. Yes. And with a world model, you can basically judge whether a step in some direction will be good or bad, like a policy network of some kind, or a value network, sorry. And having both short-term, working memory and long-term memory is part of JEPA. So it sounds like you're moving in that direction with LangChain.

Observability, Evals, And Scale Lessons

Danilo Nobrega

I don't know, and I can't confirm that. What I think we're doing is just agent engineering, and Deep Agents is something new. There is a whole discussion around harnesses versus orchestration, right? Which one is going to be better or worse? We don't know right now. LangGraph works really well, and Deep Agents is brand new. So we will see what happens, both in production and in what developers do, because it's thanks to our community of developers, the billion downloads, that we get so many insights and learn so much. I'm really grateful for all of them. That's what actually helps us understand, okay, what feature should we develop here or there, right? So it's hard for me to say if that's what's happening. What I know is that there are people in the industry who are very interested in doing the next thing, who think that with what we have right now we've reached the limits. But from an enterprise perspective, it's just the beginning. Already with what we have, remember what we talked about: it's the same model, but depending on what you put around it, you can get so much value from it. The enterprises are already getting value, and everyone's trying to figure out the best way for their organization. This is happening right now as we speak. So for scientists, I understand, yes, there's a new frontier, and they're good. But where I work is with industry and practical things, things that work, provide value for the business, and that we know work in production. That's what we're focused on right now.

Anders Arpteg

And if you're listening to this and you want to try it out, I think a lot of people would think: okay, I'll try a small prototype, perhaps I build something with Agent Builder and see, ah, cool. But then you want to put it in production, as you say. What would you say the biggest hurdles are to really putting it in production properly? Because I'm thinking you have to use real data that is potentially protected by GDPR, you may have to comply with the AI Act, you may need to integrate with a lot of different systems to have the data properly available, and you may need to integrate into other systems that you have. There are a lot of things that could potentially be problematic, and there's also the whole change management part of having people that did one thing in one way for 10 years and suddenly, when agents are connected to it, have to change. Given all of this, if you were to give some advice to a company that really wants to find value by using LangChain, what would you say are the most important points?

Danilo Nobrega

I know it's a big question. I just need to look at my notes a bit because I don't want to miss anything.

Anders Arpteg

Yeah.

Danilo Nobrega

Yes. So the first thing is quality. Okay. We have statistics from our yearly report: 32% of the respondents called quality the main barrier to production. Quality of the data, or what? No, quality of the agent itself, like what type of response it's giving. So agent performance versus the expected behavior. Right, right. Okay. And it's funny, because most people would think cost. Yeah, okay, but it's not number one.

Anders Arpteg

It's not cost.

Danilo Nobrega

And cost has even moved down since two years ago; in the report, number two now is security. So it's interesting to see how things are moving. It's because of the non-deterministic aspect we spoke about before: you develop something, you put it into production, and then someone asks something you were not expecting, and how is that going to behave, right? So how do you get past this? Well, we do have the Klarnas of the world, LinkedIn, Clay. They've all internalized one principle, which is basically that shipping is how you learn, not what you do after learning. You can't perfect an agent in a staging environment. No. Put it into production quickly, iterate a little bit, observe what happens, iterate fast. Yeah, and that's my number two: observability, or more precisely the lack of it. There's a stat that actually blew my mind: 89% of organizations have now implemented some form of observability for their agents.

Anders Arpteg

Among teams... but that is among customers of LangChain then, right?

Pace Of Change And Framework Strategy

Danilo Nobrega

Well, among people who answered the questionnaire for the report, for the survey; there were 1,300 who answered the last one. And among teams that actually have agents in production, that number goes to 94% having observability. And that tells you something: the teams that go to production figured out that you have to have observability early on, right? Right. And that's really why LangSmith exists; that's the whole reason for LangSmith existing.

Anders Arpteg

Number three... But do you think, if 94% say they have observability, do they really have it? Or is it more of a manual kind of process?

Danilo Nobrega

Yeah, it's interesting you say that, because yes, there are different tools out there, and I know companies that right now might be using Excel to observe, right? Exactly, and that can work if you don't have that many people using your agent and maybe just one or a couple of agents. But we're talking about scale here, right? So this is the exact same problem I had at MongoDB: people would download the open source, run it on their laptop, and think, I can put this into production.

Anders Arpteg

Right.

Danilo Nobrega

But when scale hits, well, you need security, you need high availability, you need to make sure these things work with performance. You perhaps also need to change the architecture, because now you need sharding, you need replicas. And it's very similar with agents: by the time you ship to production and observe, you're going to change the strategy, perhaps change the architecture. And there's that conference I told you about, Interrupt, that LangChain has. Last year there were quite a lot of customers; the videos are on YouTube, you can go and search for LangChain Interrupt 2025. Very soon, in May, we're going to have the 2026 one. And you can see that the customers, by and large, start with a very simple design. Usually it's the ReAct design, which is just a simple loop, basically. And they iterate and get more and more complex. They add different parts, maybe sub-agents and whatnot. And this is all from testing and from the real data; they observe which approach works better for their use case until they get to something which is quite stable, performant, and working well. So that's what we see. And then, this is really interesting, the third thing is evaluation. Only half, only 52% of organizations, have implemented evals for their agents.
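
For reference, the "very simple design" most teams start with looks roughly like the prebuilt ReAct-style loop below, where the model alternates between reasoning and calling tools until it can answer. A hedged sketch assuming LangGraph's prebuilt helper, LangChain's init_chat_model, and an OpenAI key in the environment; the tool, model choice, and prompt are illustrative, and parameter names can differ between versions.

```python
# A simple ReAct-style agent: the typical starting point before a custom graph.
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent

def lookup_order(order_id: str) -> str:
    """Look up the status of an order by its id (stubbed here)."""
    return f"Order {order_id}: shipped yesterday."

model = init_chat_model("openai:gpt-4o-mini")
agent = create_react_agent(
    model,
    tools=[lookup_order],
    prompt="You are a support agent. Use tools when you need facts.",
)

result = agent.invoke({"messages": [{"role": "user", "content": "Where is order 1234?"}]})
print(result["messages"][-1].content)
```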

Anders Arpteg

Interesting.

Danilo Nobrega

So most teams can see what the agent is doing, but only half are systematically measuring whether it's doing a good job or not.

Anders Arpteg

Interesting.

Danilo Nobrega

Yeah. So this is basically your testing, your QA, right? So it's quite important to get something like that in place. And then number four is one that surprises people. We already touched upon it: it's organizational.

Anders Arpteg

Yeah, okay, right.

Interrupt Conference And Agent Builder News

Danilo Nobrega

So it's not a technical hurdle, it's a skills gap. We talked about agent engineering, we talked about the three skill sets that need to work together. Right. So this is another thing, because if you don't have that in place, it's not going to be successful in production, or it's going to be a lot harder to make it work.

Anders Arpteg

And I guess it's not only organizational for the development. For development, we need to have these kinds of skills combined, but then the users, I guess, also need to have the right culture and mindset and openness to be able to use agents in this way.

Danilo Nobrega

Yeah, I would say a good agent is an agent the user is happy to use, right? If it's working really well, working better than if it was handled by regular people, just because it can handle more requests and respond faster with a very good level of accuracy, then me as a user, I'll be delighted, right?

Anders Arpteg

And perhaps the border between developers and users gets a bit blurred here, right? Since a user can potentially tweak the prompts and actually use Agent Builder to tweak their own software, so to speak, or agent in this case.

Danilo Nobrega

Yeah, it's a good question, but if you've developed your agent well, it should work, and you should be able to guardrail against those things. Now, an interesting thing, though: you talked about it not being just the developers and the users. Within the organization you also have the business, you have leadership, who want visibility into what's happening: not only what's working well and what's not, but also the cost aspect.

Anders Arpteg

Right.

Danilo Nobrega

So: what business value am I getting out of this, and how much is it costing me? With LangSmith you can get that not only for the traces; you can also put tags on teams, features, and specific agents. So you can compare: is team A spending more than team B? And why is that? Maybe they're using a more expensive model, maybe it's not optimized, and so on. So that's also interesting. Number five is the infrastructure gap. You mentioned in the beginning that most teams start with an LLM wrapper, right? A simple API, then maybe some prompt engineering and basic RAG. That works for a demo, but the moment you put it into production, you might want the human: you might want to pause it at some point so a human can approve or review something, some supervision, and so on. And we also talked about a runtime, because LangGraph is not just a graph, it's also a runtime. By the way, all three, LangChain, LangGraph, and Deep Agents, run on the LangGraph runtime.
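
A hedged sketch of the tagging idea: LangChain runnables and compiled LangGraph graphs accept tags and metadata in the invoke config, and those end up on the trace, so cost and latency can be sliced per team or feature in LangSmith. The tag and metadata values here are made up, and `graph` is assumed to be a compiled graph like the earlier sketches.

```python
# Attach tags and metadata to a run so its cost can be attributed in LangSmith.
config = {
    "tags": ["team-a", "invoice-agent"],
    "metadata": {"feature": "invoice-triage", "environment": "production"},
    "configurable": {"thread_id": "customer-9-session-3"},
}

graph.invoke({"question": "Which invoices are overdue?"}, config=config)
# In LangSmith you can then filter or group traces by these tags and compare,
# for example, what team A's agent costs versus team B's.
```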

Anders Arpteg

Ah, yeah, okay. So that's the base of everything.

Model Updates And Computer Use Benchmarks

Danilo Nobrega

Yes. So that infrastructure is quite important: the checkpointing we talked about, and that enables you to have something which is production ready, not just a toy, because you have disaster recovery and all of the things we discussed before. And finally, there's the pace of change itself. 70% of regulated enterprises update their AI agent stack every three months or faster. But you and I know that these models are changing almost every week. The frameworks also evolve fast, best practices shift, and there are new things we're learning all the time. So the reality is that it creates paralysis in a lot of teams. And you do have organizations that unfortunately don't want to use AI because they're scared of losing control or not being able to manage it. And of course, you have the organizations that are using it, doing it well, and getting the value out of it, and you have organizations that are using it, doing it poorly, and getting liability and damage, as we've all seen in the news, right? So it's basically these three categories, yeah.

Anders Arpteg

Yeah, so interesting to hear, and so difficult actually to do. I guess we should have a new research field in agent engineering to really understand this properly.

Danilo Nobrega

Yeah, I'm passionate about this now that I'm working here.

Anders Arpteg

Yes. Well, cool. Perhaps it's time to do a small news section as well.

Goran Cvetanovski

It's time for AI News brought to you by AI AW Podcast.

Anders Arpteg

So we usually take a small break in the podcast to discuss a bit of the latest news that is filling and overflowing us all every day and week in the world of AI. Do you have any news topic or story that you'd like to bring? Something from LangChain or something else?

Danilo Nobrega

Yeah. So we already spoke about Deep Agents, and I think we touched upon Interrupt. It's happening in San Francisco. It's a conference dedicated to agents, so you have developers, a lot of companies, researchers. As I said, the past years' talks are on YouTube; you can search for LangChain Interrupt 2025. And the tickets are now for sale, so if you want to go, you can get them. And if you're from an enterprise here in the Nordics, you can reach out to me; I might have some special discounts. Nice. And then the other thing I wanted to call out is Agent Builder. I think we were talking about it in the beginning. It's a new product that we launched recently, a no-code agent, so an agent that builds agents. You can do something in five minutes. It's pretty mind-blowing; of course, getting it to work exactly the way you want requires some iteration, but you can get started quite quickly. And it's available on LangSmith, so you can go to LangSmith.

Anders Arpteg

Can you try it out without having to...?

Danilo Nobrega

I believe so. I'm not a hundred percent sure, but if you want to try it out, I can arrange that, yeah, for sure.

unknown

Yeah.

PowerPoint, Tools, And Human Strengths

Anders Arpteg

It sounds amazing, and it's cool to hear your own personal experience with it. It seems really, really powerful to be able to build your own agents like that. Cool. I mean, a lot is happening in the world of AI, of course, and sometimes people get a bit fed up, saying, oh, it's a new version of Opus 4.6 now, or a new version of Codex 5.3 and whatnot. But it's still kind of interesting. Last week we got these new models, Opus and also Codex in GPT 5.3, and then just this week Anthropic also released the smaller model, Sonnet 4.6.

Just to give some more general remarks about that: normally they have to build the big model first, the big Opus, which is probably a trillion-plus parameter model, train it, and it becomes amazingly good. And all of these models, including Opus and Codex, are moving into agent space. They are specifically trained to work in an agent environment, not only in a chatbot or code-assistant setting, but actually trained to take actions properly. I think this is really interesting. And then if you take Sonnet, which is probably, I'm just guessing here, 100 to 200 billion parameters instead of a trillion, so perhaps a tenth or a fifth of the big Opus model, it is basically almost as good as the big one. Meaning they probably do knowledge distillation from the big one: they train a smaller model by simply having the bigger one generate outputs, which gives them really high-quality data they can use to train the smaller one. And this Sonnet model, which is of course smaller, faster, and cheaper, is even better in some use cases.

The Sonnet one also went up to a one-million-token context window, which the Opus one did as well, and which Gemini has had for some time. That opens up a lot of things, especially in coding, where you need a big context window. They also have this kind of adaptive thinking, meaning it's not only choosing between a quick answer and a long answer; it has an effort measure where it can adapt how much reasoning it should do before providing an answer.

Then what I think is interesting is partly the computer-use metrics. Computer use is basically using keyboard and mouse, and they are training a lot on that these days. Being able to use agents to also control the human-facing digital interfaces that we have will be very powerful. And now the Sonnet one, even though it's so much smaller, is actually on par with Opus, and Opus is the best in the world right now on computer-use benchmarks. So many cool things happening.

Another interesting metric is, what was it called, GDPval. GDP meaning gross domestic product. It's basically a benchmark of like a thousand different tasks that humans could do, like building a financial spreadsheet, dividing some cost across different cost centers and summarizing it properly. And it needs to do that in a very similar way to how a human would do it, and then people judge, in a blind test, whether the agent actually does it as well as humans do. These models are moving very quickly up towards human-level performance there, which is both exciting, I think, but also a bit scary, right? Or what do you think about this, that agents are moving up to more and more human-level performance also for taking actions like this?
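
To make the knowledge-distillation idea mentioned above concrete, here is a minimal sketch in Python; the toy teacher and student models, sizes, and training loop are illustrative placeholders, not Anthropic's actual setup:

```python
import torch
import torch.nn.functional as F

# Toy stand-ins: a large "teacher" and a smaller "student" model.
teacher = torch.nn.Linear(128, 1000)   # pretend this is the big model
student = torch.nn.Linear(128, 1000)   # smaller, cheaper model to train
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
temperature = 2.0                      # softens the teacher's distribution

for step in range(100):
    x = torch.randn(32, 128)           # stand-in for a batch of inputs

    with torch.no_grad():
        teacher_logits = teacher(x)    # "outputting data from the bigger one"

    student_logits = student(x)

    # Train the student to match the teacher's soft output distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```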

Danilo Nobrega

For me, if they can do the boring admin work, I'm happy, you know? I actually welcome it. I'd rather be strategizing and being creative and let them do the manual work, like scrolling pages, scraping pages. I'd rather just tell them, this is what I'm looking for, find it. And it works while I sleep, and when I wake up I have everything ready for me, rather than wasting time with that, to be honest.

Anders Arpteg

Yeah. It's interesting. There's this other metric called METR, M-E-T-R, which basically measures how long a task would take a human, and then looks at when an AI can reach a 50% success rate on a task that takes five minutes versus five hours, for example. And the best models now are up to hours at least; they can have a 50% success rate on multi-hour kinds of tasks, which has happened in just the recent months and years, and it's moving very quickly. So it's also going exponential.
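
The 50% time-horizon idea can be illustrated with a toy calculation: fit how success probability falls off with task length, then solve for the length where it crosses 50%. The numbers and the simple logistic fit below are made up for illustration, not METR's actual methodology:

```python
import numpy as np

# Made-up results: (task length in minutes, 1 = agent succeeded, 0 = failed).
results = [(2, 1), (5, 1), (15, 1), (30, 1), (60, 1),
           (120, 0), (240, 1), (480, 0), (960, 0)]

x = np.log(np.array([t for t, _ in results], dtype=float))  # log task length
y = np.array([s for _, s in results], dtype=float)

# Fit P(success) = sigmoid(a + b * log(t)) by plain gradient descent.
a, b = 0.0, 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(a + b * x)))
    a -= 0.1 * np.mean(p - y)
    b -= 0.1 * np.mean((p - y) * x)

# The "time horizon" is the task length where predicted success drops to 50%.
horizon_minutes = np.exp(-a / b)
print(f"Estimated 50% time horizon: {horizon_minutes:.0f} minutes")
```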

Danilo Nobrega

Is this still in the browser, the thing you're talking about?

Anders Arpteg

No, METR doesn't need to use a browser in this case, to my knowledge. It can be different tasks, where it needs to work with files and create some kind of document, but in my understanding it also sometimes needs to do tasks in a more advanced way. It's not simply creating a Word document or a PowerPoint or something.

Danilo Nobrega

I haven't used these browsers with computer use recently, but when they came out I did try a few, and they were awfully slow back then. I don't know if that has improved or not.

Standout Use Cases: Klarna Support

Anders Arpteg

But they're still very much below humans, and I think people don't realize that. AI is really good at some tasks, but humans are better at other tasks, and we shouldn't underestimate what humans are good at. We should, of course, try to adapt as an organization to make sure humans work on the things humans are good at, and let AI do what it's good at. And memory, I think, is one thing that AI is extremely good at and humans are horrible at, so let's divide the work there. Yeah, agreed. Ah, so many cool things happening there. And then Grok also came out with a new model, 4.2, and I'm waiting for Grok 5. I think that will be super fun to see when Elon releases that kind of model. Goran, do you have any news update you'd like to talk about?

Goran Cvetanovski

I think the most interesting thing was that Sam Altman hired the OpenClaw creator, yeah. Right. So it's going to be exciting to see what's going to happen with that. It's quite a move, I have to say. It's very interesting.

Anders Arpteg

Yeah, I think we mentioned OpenClaw last time, but we didn't really go into it; we just named it briefly.

Goran Cvetanovski

But I think it's one of the most interesting things that has happened in a year or so, actually. Of course, models keep coming, but this was something new. This was something interesting.

Anders Arpteg

So a single person, Peter, I don't recall his last name right now.

Goran Cvetanovski

Peter Steinberger. Yeah, Steinberger, yeah.

Anders Arpteg

He created this open source product called OpenClaw. It was called ClawBot first, then renamed to Moldbot, and then finally renamed to OpenClaw.

Danilo Nobrega

Are you using it personally?

Anders Arpteg

Yeah, I played around with it. It's a bit scary to use, though, because you have to give it access to your social media accounts and passwords, and to the files on your computer and whatnot. But it's amazing what it can do. And if you take the latest, there was a new model called Minimax 2.5 from China that was released recently, and that one is specifically used to build things like PowerPoints, something that most of the frontier models are still really bad at. But this one is specifically trained for it, by writing code first to actually build the presentation.

Danilo Nobrega

Claude does it quite well. Or is that one better? Have you tried both?

Anders Arpteg

I haven't tried Claude for that, actually, I've just seen others try it. But usually, if you do a one-off from scratch, that works reasonably well. Meaning if you just want to create a new presentation, that usually works somehow okay. But if I have a presentation and I want to edit it and keep the current style, in my experience it hasn't worked well, something that a human would do very, very easily. But apparently Minimax 2.5 is really good at that as well, so it can adjust a deck and try to keep the style in some way. And then there's the whole acquisition: SpaceX acquired xAI, and xAI had acquired X, and now Elon is moving into space data centers. So many things are happening.

Jobs, New Roles, And Human‑AI Teams

Goran Cvetanovski

Yes, yes, super good. And I think there was another thing, I'm actually just checking it right now, because Google released a music model called Lyria 3. It's right now still in beta, up to 30 seconds. So I'm just testing it now, because I'm deeply into Suno and so on. Let's see if they will manage to bring some competition, although I don't think it's possible. Suno is right now at the top level there. Yeah. Ah, Suno is great.

Anders Arpteg

Well, cool. So many things are happening in the world of AI, as usual. So let's go back to LangChain a bit here as well. And perhaps we're going a bit more philosophical now, but before we go there, perhaps you can elaborate a bit more: what are the most impressive use cases you've seen? Some example use cases where companies have been using LangChain and LangSmith, either for building products, perhaps, or for internal workflows to optimize their internal operations, or for personal development? Do you have any favorite example use cases?

Danilo Nobrega

I mean, for me the number one is Klarna, because it was iconic when it happened, and I think it was one of those inflection points when people actually realized, oh, this is not just a toy, I can actually get value from this. And the numbers are just crazy. They published it, right?

Anders Arpteg

Can you elaborate a bit more? Do you mean the customer support part?

Danilo Nobrega

Yes, yes, the automation of that, right? For me it's very impressive, and it was very early on as well.

Anders Arpteg

Do you have any more details for people who haven't caught what Klarna used agents for?

Danilo Nobrega

If I were to explain it, yeah: it was for customer support, automating that part. I don't have the exact figures from memory, but I remember it was a drastic reduction in the time it took to resolve issues. And as a consequence they were able to be much more productive because of that. The humans were, right. When a case did get routed to humans, they really got the cases that needed humans, and not just an easy case that could be solved by AI. So it was really about optimizing those things. And yeah, they spoke publicly about it. They use LangGraph and LangSmith.
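
The routing pattern described here, where the agent resolves easy cases and hands only the hard ones to humans, can be sketched roughly as follows; the confidence threshold and the placeholder ai_resolve function are hypothetical, not Klarna's or LangChain's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Resolution:
    answer: str
    confidence: float  # model's self-reported confidence, 0..1

def ai_resolve(ticket: str) -> Resolution:
    # Placeholder for an LLM/agent call that drafts an answer and a confidence.
    # In a real system this would call your agent framework of choice.
    return Resolution(answer=f"Suggested reply for: {ticket}", confidence=0.4)

def handle_ticket(ticket: str, threshold: float = 0.8) -> str:
    result = ai_resolve(ticket)
    if result.confidence >= threshold:
        return f"AUTO-RESOLVED: {result.answer}"
    # Low confidence: escalate so humans only see the cases that need them.
    return f"ESCALATED TO HUMAN (confidence {result.confidence:.2f}): {ticket}"

print(handle_ticket("Where is my refund for order #123?"))
```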

Anders Arpteg

Oh, they did? Okay, interesting. So moving into that space then: of course, some people are a bit afraid that they will lose their job, perhaps sitting in customer support these days, as AI and agents are coming in. How do you see the job market being impacted here?

Nordic Ecosystem Strengths

Danilo Nobrega

Yeah, so it's an interesting question, right? We were talking about agent engineering before. It's completely new; this didn't exist. Even before I joined LangChain, I didn't have so much insight into what was going on. Now there's a whole new category of people. If you're in the market and you somehow have a combination of product, software engineering, and data science, you're super valuable right now. You also have new roles that are created because of this shift. Obviously it's something that is happening, and then it's just a matter of how do I adapt to this and make the most out of it. And it's not something I want to comment on too much, because I know it's also very polemic, and of course I feel for people who lose their jobs; I don't think that's anything funny. But the change is happening, whether we like it or not. So it's really about how do I adapt in the best way possible. And there are all these new roles. We spoke about one today, but there are many more being created. Creating things is so much easier right now: I can create a hundred different new images, but now what's hard is actually deciding which one is the best. So there's this sort of curator role as well, and there are a lot of people who are good at that and maybe today are not using that skill. So humans are needed, like you said, combining what humans are good at with what AI is good at. And we talked about evals; that's a whole practice, understanding how to use them, which one where, making sense of all the data and so on. So I think there are already new roles being created, and there will be many more.

Anders Arpteg

Yeah. Some people say something like this, and I would like to hear if you agree with it: AI won't really replace people, but people who use AI will replace people who do not use AI. Yeah, the combination, yeah. Would you agree with that?

Danilo Nobrega

Yeah, I can speak for myself. I've been using it since it came out. And I'm really into computer graphics, VR, AR, all of that, so I've been following it since the beginning. And I remember thinking, okay, these things are going to get so small, it's going to be like a contact lens that you can put on, and you're going to get all sorts of insight. I'm going to be looking at you, I can see your heart rate, I can read your facial expression, I can read your LinkedIn, your address, whatever. And compared to Goran, who maybe doesn't have these contact lenses, I'm going to be much more efficient and do my job better. So after a while it becomes, okay, do I use it? How do I use it? I think the important thing is to use it in an ethical way, in a good way.

Anders Arpteg

Yeah.

Danilo Nobrega

If it's for the benefit of all, if it's something that is aligned with your values and you're not corrupting any ethics, I think it's all good. That's how I see it personally.

Trust Through Traceability And Feedback Loops

Anders Arpteg

And some people phrase it like, you know, OpenAI had this kind of pyramid with five levels, saying AGI will happen when an AI can go through all five levels. One level is simply knowledge management, being able to work with large amounts of data, information, and knowledge somehow. And I would say AI is really, really good at that today, much better than humans. Then we have reasoning, then we have autonomy, basically an agent layer, then we have innovation, and at the top some kind of organizational level, everything working together in some way. And we can see that AI is really dominating in the bottom layer here, the knowledge layer; I think OpenAI called it conversational, but it means more or less the same thing. No human can ever take 10 books, put them in a prompt, read them in seconds, and have more or less perfect recall the way an AI already can today. So in that sense, just in terms of knowledge management skills, AI is significantly better than humans today. But I would argue that for the rest, humans still succeed and are better than AI. Would you agree so far that this is where we are today?

Danilo Nobrega

I guess it depends on what.

Anders Arpteg

Yeah.

Danilo Nobrega

Right? Like image classification: AI is much, much better at that.

Anders Arpteg

But that's a knowledge management thing, right? You can go through a million images and see these are dogs, these are cats, and then simply take a new image, see that it matches the pattern of all those other images, and recall from that.

Danilo Nobrega

Yeah, I think everything that requires the use of your body, we're still not there yet, right? But regarding what we do for work, if you take the entire spectrum, it's not automated, but there are parts of it where it's very, very good, right?

Anders Arpteg

Yeah. Then you could argue that humans should move up the ladder, so to speak, move up the pyramid, and AI should take more and more of the bottom layers. And of course, we're seeing agents become more and more capable now, and AI that can reason better and better; I would say still not to the extent of humans, but it's improving all the time, day by day. So potentially then, if you think about the job market impact AI could have, would a good way to phrase it be that a person having those three skills you mentioned, from a product point of view, an engineering point of view, and a data science point of view together, is what a human should aim for? I mean, that they should be more generalized like that?

Danilo Nobrega

That's a very big responsibility, right? For me to say what a human should do. I think everyone should do what they think is best for them, so I don't want to take that responsibility. But personally, I am a generalist. And there's a very interesting book, I think the title was Why Generalists Triumph in a Specialized World, and it goes through examples like this. It talks about Tiger Woods, for example, how he was groomed since he was very young, his father was training him in golf, and so he got excellent at it, of course. But if you take a robot, you can program a robot to be just as good.

Anders Arpteg

Right.

Health Scans As Human‑AI Augmentation

Danilo Nobrega

Like Tiger was, because it's something very precise that you do. Yes, it's a lot of skill, but it's still very specific, what you do. And then it talks about a very famous tennis player; I don't want to get his name wrong, the name coming to my head is Federer, but I'm not sure. His dad did something very different: he had a whole variety of sports that he tried as a kid. He could try football, he could try tennis, swimming, many different sports, and in the end he ended up going into tennis. And it turns out that tennis is much more like the real world, because you can skid on the court, you have to use your whole body, you don't know where the ball is coming from, so there are many more variables you have to deal with. And it turns out that for him, that was the best sport. If we think about the world today, my point of view is that AI is very good at specializing. And I think it's not the how that is going to be valued anymore; that is being automated. It's the why and the what. So it's about defining things, it's about strategy, and also creativity.

Anders Arpteg

Right. And I would argue that's more of a generalist thing, and it aligns very well with the pyramid, because as you move up the pyramid, at the top you have basically the CEO of a company, who really is a super generalist in some way. It's even like a politician: they are normally super generalists, and they have to make decisions about a lot of things they know nothing about, and still they have to thrive in some way. And as a human, you may not be a super expert in how a language like TypeScript works, but you know at least how a product works, or should work. So you continuously move up the pyramid and become increasingly general, so to speak, and then have AI or a team of agents working for you all the time.

Danilo Nobrega

Yeah, and funnily enough, that's also the architecture that works well. When we talked about multi-agent setups, it's like a team, right? You have the manager, and then you have the other ones that do the specialized work.
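
That manager-plus-specialists idea can be sketched in plain Python; the worker functions and keyword routing below are hypothetical stand-ins, and a real system would typically express this as a supervisor graph in a framework such as LangGraph rather than hand-rolled dispatch:

```python
# Hypothetical specialized "worker" agents; in practice each would wrap an LLM
# with its own tools and prompt.
def research_agent(task: str) -> str:
    return f"[research] findings for: {task}"

def writer_agent(task: str) -> str:
    return f"[writer] draft for: {task}"

WORKERS = {"research": research_agent, "write": writer_agent}

def supervisor(task: str) -> str:
    """The 'manager' agent: decides which specialist handles each subtask."""
    # A real supervisor would ask an LLM to plan and route; here we route by keyword.
    route = "research" if "find" in task.lower() else "write"
    result = WORKERS[route](task)
    return f"supervisor delegated to '{route}' -> {result}"

print(supervisor("Find recent news about agent frameworks"))
print(supervisor("Summarize the findings into a short report"))
```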

Anders Arpteg

Yeah, is that what the deep agents are doing? Yeah, pretty much, yeah. Cool. Well, that's an interesting question and a very difficult one, of course, how the job market is going to play out. Speaking about the Nordics, you have a special responsibility here for LangChain in the Nordics. Do you see any specific strengths or weaknesses that we have in the Nordics when it comes to AI and agents?

AGI, Benchmarks, And Realistic Timelines

Danilo Nobrega

I think it's a very interesting market, and a market that maybe doesn't get all the glory it deserves, because look at Spotify, Klarna, King, Ericsson, all these companies that have been created here, which are now global and in many cases category-defining. There's a lot of talent here, which means that the founders of these companies, who are now very wealthy, are obviously going to finance new founders here. I think it's a place where a lot of interesting things are happening. What I also like about the big enterprises that are already here is that they are always very pragmatic. No one is jumping at the new shiny thing; everyone wants to work with what's actually giving results and working. At least in Sweden there's the whole lagom concept, a lot of pragmatism, which is good. And these companies are already working with agents and implementing this. You told me how long you've been working with AI, but big companies like IKEA and H&M have been publicly speaking about AI for years now. So they're well into it. It's not like they're deciding, oh, are we going to do it or not; they're on second-order questions, like what works. So for me it's very interesting in that sense, because the adoption, at least of the open source, is already there. People are already seriously looking at what they're going to put in place as a platform for observability and evals, like we discussed before. This is happening right now, this year, not next year. And it's very interesting to see; the companies are all very forward-thinking. I definitely enjoy working in the Nordics. I think the region also has a big advantage when you think about AI. If we step a little outside of agents: data centers, for example. There are already quite a few, and more are being built now because of the energy that's available, but also we have free air conditioning, right? You can really run efficient data centers here. So yeah, maybe not a lot of people, but a lot of intensity. Lots of talent, I would say.

Anders Arpteg

Lots of talent for sure from the universities, and when it comes to adoption of AI, I think Sweden and Denmark are in the lead in Europe at least. Cool. If we move on a bit: you spoke a lot about traceability and the need for that, and in some way you could argue that we need to trust agents and AI in some shape or form. How do you really do that? I guess traceability is one way to gain that kind of extra trust. But how can you as a company say, okay, I'm going to turn over all my sales processes now to a LangSmith agent? How do you really build that kind of trust?

Danilo Nobrega

Yeah, so trust is about knowing, right? You need to know that it works somehow.

Anders Arpteg

Yeah.

Danilo Nobrega

I remember when I moved to Sweden, the president of KTH at that time said the Swedes are like a ketchup bottle. You try to get the ketchup out and it doesn't come out; you really have to tap, tap, tap, and all of a sudden everything comes out. He was talking about how to become friends with a Swede, and he said that process can take anywhere from one to two years, right? They really need to get to know you and trust you and so on. It's a little bit similar with agents, but I would say it's a lot faster. You need to make sure that the behavior is according to what you would expect, and that you're actually getting the business outcomes that you set out to get. There needs to be a net positive there, hopefully much, much more than what you would expect. And the way you do that is by observing, by measuring, by evaluating and iterating, and by shipping fast, with that team we talked about. You need to start somewhere; it's not going to be perfect from day one. But as long as you observe, learn from those feedback loops, and make the necessary modifications soon enough, slowly but surely you will get to a place where it is production ready and it can work. Others have done it, the numbers are out there, so it is possible. And I see a lot of companies now running to do that with their internal processes. We always talk about internal use cases and customer-facing ones, right? With the customer-facing ones, people always hesitate a bit, because there can be damage and so on. The internal ones are the ones people are starting with first, and I think that's a good approach: start with the internal ones, learn how it works, and once things are working well, then go to external-facing ones. But yeah, that's what I see happening.
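
The observe-measure-iterate loop can be reduced to a very small skeleton: run the agent over a fixed set of examples, score each run, and track the pass rate before shipping changes. The run_agent and grade functions below are placeholders for illustration, not LangSmith's API:

```python
# Minimal offline-eval skeleton: placeholder agent and grader, not a real SDK.
examples = [
    {"input": "What is our refund policy?", "expected": "30 days"},
    {"input": "Do you ship to Norway?", "expected": "yes"},
]

def run_agent(question: str) -> str:
    # Placeholder for the agent under test.
    return "We offer refunds within 30 days."

def grade(output: str, expected: str) -> bool:
    # Simplistic grader; real evals often use LLM-as-judge or richer checks.
    return expected.lower() in output.lower()

scores = []
for ex in examples:
    output = run_agent(ex["input"])
    passed = grade(output, ex["expected"])
    scores.append(passed)
    print(f"input={ex['input']!r} passed={passed}")

print(f"pass rate: {sum(scores) / len(scores):.0%}")  # track this across iterations
```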

Anders Arpteg

Where is the limit for you? If you had an AI or an agent that, for example, works as, let's say, a doctor, and you get a diagnosis from an AI, would you today trust the AI more, or would you prefer to have a human? It's funny you say that.

Futures, Policy, And Practical Optimism

Danilo Nobrega

Once a year I go to Neko Health. Yeah, you do? Okay, interesting. Yeah, you go as well. And the thing I like about it: okay, right now it's still not, like, wow, revolutionary, mind-blowing. But in half an hour you get measurements for many things: cardiovascular, skin, sunspots and things like that, imagery, thermal imagery, and some other things; you measure strength as well, all analyzed by a doctor, plus a blood test. You would need to go to maybe at least three or four different doctors to get all of that. So in half an hour they get all that data, they compile it, and they tell you everything looks good, or, if something doesn't look good, we're going to review it with our team and get back to you. So it's very efficient. Instead of four or five appointments, I go to one of half an hour. And the thing I like the most is, because of AI and because of computers, that imagery of my body with all the sunspots: next year they will do the diff, the difference. Is there something new? Is it growing or not? You cannot do that as a human, because some people have thousands of them.

Goran Cvetanovski

The interesting thing is that I have been three times, and on the third time they managed to find a difference. It was just one, and I have a lot of them. After three years, one, and I was sent to a doctor and they basically checked it. Essentially it was nothing serious, but you know, in Sweden they remove it immediately if they see it could potentially be something. But that service you don't have otherwise.

Danilo Nobrega

That's perfect, yes. Good preventative care, 15 minutes, yeah.

Anders Arpteg

Yeah, this is a perfect example of something that AI does really well: going through a huge amount of data, doing knowledge management, and being able to find these kinds of differences, right?

Danilo Nobrega

Combined with the humans, right? You have the doctor who is supervising, and then of course you can go get a deeper look, right?

Anders Arpteg

But isn't that a perfect example of humans getting more generalized? They have a single doctor in this case who is more general, doing what five other doctors needed to do separately before. Now, with AI augmentation, a single doctor can do the same work.

Danilo Nobrega

I would say that even if AI wasn't there, you would still go to a generalist, and then they send you to a specialist, right? It is already like that in the health system. The difference is that you would need to go to maybe five different appointments, and here you just go to one. So they're not doing any specialist work there, well, maybe the skin, yes. Yes, you are right. So yeah, in a way, I guess it makes sense.

Anders Arpteg

Yeah, so many things are going to happen, and I guess we will slowly build trust with AI and agents for one service after the other, like healthcare, of course, but in so many other cases. Personally, for my finances and bookkeeping and whatnot, even today I would trust an AI even more than a human. So I will take that path. Okay, sounds like you had a bad experience, Goran. But interesting. Cool. Danilo, it's been a true pleasure to have you here. I would like to end off on an even more philosophical level, and try to look a bit further ahead into the future. I'm not sure if you have any thoughts about AGI. Do you have a preferred definition of what AGI is, by the way?

Danilo Nobrega

Well, it's been changing so much, right? For me, AGI is here in a way, because the original definition, I think, was that we are able to talk to it and it can understand us, and I do that with the models today. I don't know what definition people have, but for me what's already here is super impressive. Okay, it's not perfect, but I can have a conversation, a long conversation, to learn about something, to explore a topic, which wasn't possible before.

Anders Arpteg

The Turing test, you know, just being able to understand and speak like a human, I think was passed a long while back. And it's amazing how, if you do coding with AI, you can prompt it in a very, very short manner and the AI understands you, but if you do the same with a human, they would not understand you. So I would argue that an AI today is actually better at understanding, for coding at least, in a very narrow and quick way, than humans are. So in that way it's passing Turing tests and even surpassing the abilities of humans significantly today.

Danilo Nobrega

Yeah, so the Turing test is what I was referring to. So when you speak about AGI, what definition do you have?

Anders Arpteg

Yeah, there are a lot of them. My preferred one is actually from Sam Altman. He basically says that we have artificial general intelligence when AI systems are able to do a task at the performance level of an average human coworker at work. So if you take whatever coworker you have, and you consider when an AI system can actually do what an average-level human coworker can do, then we have AGI. We're starting to get there, right?

Danilo Nobrega

You don't think so? I think we have startups now doing what they call the rise of the digital workers, startups specialized in certain functions. There's a company called 11X that is a customer of ours, and they basically do SDRs. So it's fully automated; I'm not saying it's perfect, and I don't know what the latest state is, but it's something that's happening right now.

Anders Arpteg

Like whole companies, yes, it's happening, but we're still seeing that we can't really replace humans that well. For some tasks, of course, like Klarna customer service to some extent, but they still have humans there. So not even for customer support.

Danilo Nobrega

So you're saying you don't need any humans anymore. Is that when it passes?

Anders Arpteg

Yeah, it should be able to replace an average-level human coworker. And I think that's harder than people think, because at some level, its normal action-taking abilities are still far inferior to humans'.

Danilo Nobrega

I would say so.

Anders Arpteg

You can automate up to 80 or 90 percent, but that last mile is very hard. I mean, if you take the PowerPoint example, I still can't today really fully automate even creating a single PowerPoint, which should be super simple. You can create something from scratch, but you can't really edit it that way; you can't really use it practically, even though the Minimax one looks really promising. So it's getting closer and closer. And I think the GDPval kind of benchmark is really cool in showing how it's getting there, and the METR benchmark is also trying to measure that in some way. So we're getting closer, but I would argue it's further ahead than people think. But who knows?

Danilo Nobrega

Yeah, from everything I've been reading, it's one of those things that's very hard to predict, right? But it's kind of on an exponential curve, and when you least expect it, it's here, right? Yeah.

Anders Arpteg

So we will see. Do you have any date, potentially? Oh no. I still go with Ray Kurzweil; he predicted 2029, so in three years.

Danilo Nobrega

I read his book, what was it called? No, not that one, it was the other one, about transhumanism. Yeah, very interesting. Very interesting.

Anders Arpteg

Yeah. But if we still imagine that at some point in the future we will have AGI, I guess you do believe we will have that at some point, right?

Danilo Nobrega

Well, at the rate things are going, I don't doubt anything anymore, right? So we will see. The only thing I've given up on is predicting, because it's one surprise after the other. So I tend to just go with the flow, work with what's here now, and then when new things come up, adapt.

Anders Arpteg

Yeah. But we can think about it in two extremes. It could move in a very dangerous direction, where humans abuse the extreme power that AI provides, and it could move towards a dystopian kind of future, where the Matrix or Terminator kind of vision happens and machines try to kill all the humans. Or it could be the other extreme, some utopian future, what Nick Bostrom wrote about in Deep Utopia, or what Elon Musk calls the world of abundance, where basically AI solves health issues like being able to cure cancer, fixes the climate crisis that we have, fixes the energy needs that we have, with how we can use fusion energy and whatnot, and simply, as he says, moves the cost of products and services towards zero.

Danilo Nobrega

Yeah.

Anders Arpteg

And then it means that anything you need in terms of housing, food, or even entertainment could be free. And then you don't need to work; you can work if you want to, but you may not need to, and it becomes a Star Trek kind of future.

Danilo Nobrega

Yeah, I think these are very fun thought experiments, but every time I'm offered a binary solution, like it's either gonna be this way or that way, I get suspicious.

Anders Arpteg

But on that spectrum though, if you do believe it's somewhere in between, perhaps, where do you think we will end up in 10 years, for example?

Danilo Nobrega

I mean, first it's hard to predict, and second, this is a personal thing, right? Yes, of course. But if we look at the past, fire, tools, all of these technologies that we were able to master, they all can be used for good and bad. And overall, so far, we have been managing that, right? So I think the most important question is: what are we doing right now? Because things build on top of each other. If we are doing things right now, with the AI that's available today, in a responsible way, being able to audit, trace, and understand why things are happening, if we do this in a good way, I think we are on a good path towards that good future. I think we cannot neglect it, because whatever happens right now is what's going to lead to the future. So the moment we're living in right now, maybe not many people pay attention to it, and we're talking about it on a podcast; maybe a few years from now they will say, oh, this was a crucial moment in history, because we established this law and that law and these standards, and because of this we were able to avoid that. I don't know. But I like to focus on the present, because if we think too much about the future, it can create anxiety or other sorts of feelings. So focus on the present, do whatever we have to do with AI right now. Things are going to evolve fast, so navigate the best way possible.

Closing Thanks And Next Steps

Anders Arpteg

Adapt to it. And adapting is something humans are really good at as well, right? Yes, yes. But it sounds like you're positive about the future.

Danilo Nobrega

I'm an optimist, so good. Me as well.

Anders Arpteg

Thank you so much, Danilo Nobrega. It was a true pleasure to have you here. I wish I could have learned more about the future of LangChain; I'm sure it will have a lot of cool features coming up soon. I wish you the very best of luck with that. Thank you so much for coming here. It's been a pleasure to talk with you.

Danilo Nobrega

Thank you. It's been a pleasure to be here. Thank you.