AMA | Episode 2 - The AI Agent Takeover: Separating Hype from Reality (Featuring Databricks)
May 28, 2025
In this AMA, the Sahara AI team continues the AI Agent Takeover series with a deep dive into how agents reason, communicate, and collaborate through narratives. Hosted by Marketing Lead Joules Barragan and featuring CEO & Co-founder Sean Ren alongside special guest Prithviraj Ammanabrolu, Assistant Professor at UC San Diego and Research Scientist at Databricks via MosaicML, this conversation explores how narrative understanding pushes the boundaries of agent intelligence, long-horizon planning, and human-AI collaboration. From interactive storytelling and scientific reasoning to the safety and alignment challenges of autonomous systems, this session examines both the promise and the pitfalls of teaching AI agents to think in stories.
Link: https://x.com/i/spaces/1DXxyqEDvNNxM
Transcript
Joules: All right, everybody, we'll be starting in just a few minutes. Thank you for attending. I see Raj here already. Let's get him set up as a speaker. How's it going?
Raj: It's going well. How are you doing?
Joules: Going great. Excited to have you today. We'll go ahead and get started in a few minutes. I can hear you perfectly. Yeah, it's pronounced Viraj, right?
Raj: Yes, it's Prithviraj, but I go by Prithviraj, Raj, whatever.
Joules: Yeah. Awesome.
Sean: Cool.
Joules: I'm seeing a lot of our speakers already using our cool bitsy overlays. That's awesome. Thank you all so much. Awesome. Welcome, everybody. Sean, can you hear us?
Sean: Yes, I can hear you.
Joules: Awesome. Your microphone sounds great. And Raj, you're here too.
Raj: Yep.
Joules: Excellent. Let's go ahead and get started. Welcome, everybody. I'm Joules with Sahara AI. I'll be your host today. This is episode two of our AI Agent Takeover series. We've got an exciting AMA today featuring two incredible minds in AI: our own Sean Ren, CEO and co-founder of Sahara AI.
Sean: Hey, everyone, I'm back.
Joules: Yeah, it's only been a week since your last AMA. Thank you so much for joining again, Sean. I know you're a very, very busy person.
Sean: Yeah, for sure. Excited to chat.
Joules: I love your new eyebrows. I just saw them update. Yeah, read our blog post. Definitely new. I'm seeing a lot of people showing the overlays today. It's really exciting. It's really cool. We also have our special guest, Prithviraj, or Raj for short. Raj is an assistant professor at UC San Diego leading the PEARLS Lab and a research scientist at Databricks via MosaicML. He was previously a researcher at AI2, and before that, he received his PhD at Georgia Tech. Thanks for joining us today, Raj.
Raj: Thanks for the invite to both of you. Good to meet all of you guys and good to hear from you again, Sean.
Sean: Excited to get connected to this space for sure.
Joules: Yeah. So today's AMA is going to explore how AI agents use language, feedback, real-world context, and narrative reasoning to become better, more collaborative communicators. If you're listening and you have any questions throughout the AMA, just drop them in the comments below, and we'll get to them at the very end. All right, let's get started. Raj, you've spent years exploring how machines tell stories. What was the moment or insight that really made you say, "Wow, AI needs to understand narratives the way humans do"?
Raj: Yeah. So to answer that, I'm going to take a bit of a step back to talk about the underlying motivation. So very early on in grad school, I read this paper that ended up being really influential for the rest of my research career. The paper was called Grounded Cognition. It was by a psychologist named Larry Barsalou, who wrote it back when he was also in Atlanta at Emory. It's this idea that the way people do things, the way people learn, is by interacting with the world around them, and that all of the concepts we know are less abstract and more linked to things in the world. And it's not just grounding in the sense of, "Hey, I've got a concept that's linked to a physical object," but it can be grounding to things that are just shared concepts between us. And that was really fascinating to me. One way that we link these concepts together is via narratives. We see narratives as the most natural form of human communication: since way back, a lot of morals, life lessons, everything has been told in the form of stories, mythologies, and so on. That was the original idea. It's like, "Oh, if we could get AIs to tell stories and to be able to communicate, we will have, in some senses, solved the communication problem between AIs and humans." And that's where my original inspiration came from in terms of trying to build these agents.
Joules: Awesome. Thank you so much. Sean, a big part of Sahara AI's mission is also about empowering creators, not just by protecting their work, but by giving them the AI tools to capture their persona, scale their ideas, and optimize their workflows. Building on Raj's point about narrative, how important is it for AI to understand narratives the way humans do when it comes to helping creators train AI that truly reflects who they are?
Sean: Yeah, compared to Raj, I definitely have a much more biased perspective about narrative. I'm more coming from using narrative understanding or narrative generation as a way to really measure the competence and capabilities of the current AI. I think one of the biggest challenges that narrative understanding and generation brought to AI or agents is this high-level, holistic planning and structuring of the whole idea. For example, if you want to produce a thesis for your PhD, you need to think about the narrative of your thesis. That can be broken down into many smaller tasks, such as you need to conduct a literature review on some of the topics you are working on, and you need to contrast and make differentiations with that work. Then you need to think about how to frame your ideas by positioning them among all the literature, and then tell your ideas in a way that people can understand. You need to tell people how you execute the bigger ideas, break it down into a four-year plan, and how the tasks come together one by one. It's a very complicated reasoning and planning task, just like in our real lives. When we need to work on a complex task by ourselves alone, or we need to interact with other coworkers or colleagues in terms of completing a bigger task, there's a lot of failure modes that we have to think about, what the fallback mechanisms are, and how we can reach the final goal by considering different possibilities.
I think those nuances can be approximated when you ask AI to generate a very complex narrative. But the different part about narrative understanding and generation is that you don't have to be constrained to so many norms and even physical laws in our real lives. You can be super creative to even generate fantasy or stuff that doesn't exist in the real world. That's what makes people entertained. So I think there are some parts about narrative generation and understanding that are different from producing an agent that can work with humans in real life. But in a way, I think for the research community, narrative understanding and generation is a very good space where we can really test out and push the limits of current AI or agents and see how they are doing.
Joules: Yeah, really great points. Continuing on the topic of narratives and AI, Raj, you've worked on what I kind of like to think of as these narrative agents because they're really these AI storytellers. I don't know if I made up the word "narrative agent" or I read it somewhere, I can't remember at this point. But they're basically these special kinds of learning agents, right? Can you explain for our audience what exactly these "narrative agents" are?
Raj: Yeah. So I guess one version of narrative agents is to imagine an agent that interacts with the world purely through language. In the form of a narrative, they receive these textual descriptions of the world—of the people around them, descriptions of the personas of the people, the locations they're in. And then, given that description, they have to perform an action. They have to be able to talk to other people in this simulated or real world. They have to be able to interact with and move objects and items around, generally in pursuit of their own objectives. These objectives, at least for the interactive narrative space, can range from solving a murder mystery—you're role-playing as this kind of detective—all the way to some of the more recent things that we've done, like Science World, where you have agents that are trying to learn how to do science experiments from first principles. So instead of memorizing the answers to a science question, they're trying to figure out what the procedure is and then systematically do it themselves. This looks very similar to the reinforcement learning world, where the environment is textual natural language, and the agent is also outputting textual natural language to the world. And that's the most basic form of these agents, at least when I was starting out almost 10 years ago in this space. They've gotten quite a bit more complicated since then.
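To make that setup concrete, here is a minimal sketch of the text-in, text-out interaction loop Raj describes. The TextAdventureEnv class and the choose_action policy are hypothetical placeholders rather than any specific framework; in practice the policy would be a language model conditioned on the observation and the interaction history.

```python
# Minimal sketch of a text-in, text-out agent loop. TextAdventureEnv and
# choose_action are hypothetical placeholders, not part of any library;
# a real policy would be a language model conditioned on the history.

class TextAdventureEnv:
    """Toy environment where observations and actions are natural language."""

    def reset(self) -> str:
        return "You are in a dimly lit study. A locked chest sits in the corner."

    def step(self, action: str) -> tuple[str, float, bool]:
        # A real interactive-fiction engine would parse the command and
        # update world state; here we just return canned responses.
        if "take key" in action.lower():
            return "You take the key and unlock the chest. Treasure!", 1.0, True
        if "open chest" in action.lower():
            return "The chest is locked. A small key glints under the desk.", 0.0, False
        return "Nothing happens.", 0.0, False


def choose_action(observation: str, history: list[str]) -> str:
    """Placeholder policy: in practice, a language model would generate this."""
    return "take key" if "key" in observation.lower() else "open chest"


env = TextAdventureEnv()
obs, done, history = env.reset(), False, []
while not done:
    action = choose_action(obs, history)
    history.append(f"> {action}")
    obs, reward, done = env.step(action)
    print(f"> {action}\n{obs}")
```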
One example that I like to use is this game called Zork. Zork was one of the very first computer games, made by a company called Infocom back in the late 70s, before computer graphics or any of those things existed. People developed these kinds of games where you have to go around in a world, collect treasures, and solve puzzles. The games are quite complicated because they started in the 70s, and people just kept working on them more and more. If you look at the source code of some of these games now, they're millions of lines of code, and they're ridiculously complicated worlds with tens to hundreds of locations, characters, so on and so forth. As an anecdote, I used to play some of these games myself when I was a kid in high school. It took me three or four months to solve one of these games by myself. So that was one of the first things I started doing when I was in grad school: "Okay, what would it take for an AI to be able to reason through and have the reasoning capabilities and the ability to interact in natural language to solve these kinds of puzzle problems?"
And it turns out that some of the lessons that we've learned on the way really apply to a lot of different types of AI agents as well. So one concrete example of that is that it turns out that for embodied agents—so things that are more robots and whatnot—you can actually train robots in these kinds of narrative textual environments where you teach them to plan in high-level language how to do things, like make a recipe in a purely textual environment that's nice and fast and easy to computationally simulate, and then transfer them to a visual simulation, and then transfer them to a robot in the real world. This sort of multi-stage pipeline of training turns out to be a lot more efficient computationally than just trying to train them in some kind of robot environment. That's the overall high level of what these agents are, what some of the downstream immediate impacts have been, and why I really like using them as testbeds to be able to study natural language. I should also note that one of my students recently released a benchmark called the Text Adventure Learning Environment Suite (TALES), where we have a set of 3,400+ environments like this. It turns out that on the hardest subset, even the best reasoning models right now get a score of about 15% on this benchmark. So it clearly shows that there's a long way to go in terms of being able to develop agents that can effectively reason their way through narratives.
Joules: Yeah, that's really, really cool. When I think of these narrative agents, the first thing that comes to my mind is games and smart NPCs that I can finally interact with, and they'll understand the narrative and what I'm telling them in intelligent ways. I think that's really, really amazing. But just listening to you, I'm also thinking through all these other ways these narrative agents can work, even just for helping people learn and work through complex ideas through cool storytelling. Are there any other interesting examples that you can think of for how these narrative agents can be used?
Raj: Yeah, it's really funny that you mentioned that. The reason I like these is because they have such a wide range of possible applications. NPCs in games is one interesting version. Sometime back in the pre-LLM era, we were actually figuring out how to do this, how to use these narrative agents as NPCs in games. There was one little project that we did. I was at MSR at the time, and we were collaborating with Xbox, trying to put an agent in this game called Sea of Thieves. For one, text generation wasn't particularly good at the time. And two, game developers are really, really picky about what they let their NPCs say. So eventually, what we ended up doing was deploying this narrative agent in the form of a pirate's parrot in the game. It got a lot of engagement, and this was before Transformers or anything like that happened. Things have just gotten significantly better since then. There are so many levels of applications to be able to do this. Everything from those sorts of NPCs in games as entertainment. A lot of the people I was collaborating with went to Facebook AI Research after that for a bit and spent more time doing these kinds of narrative agents. A good chunk of the people on my team eventually went on to co-found Character AI, which I'm sure a good chunk of you are familiar with; it's one of the go-to places for forming these kinds of personalized AIs and playing around with characters. So there are those entertainment aspects to it, but there are also all of these other types of things you can do. You can get these agents to learn how to do science experiments and embodied tasks, and transfer that to real robots. So it's just pure versatility. Anything you can think of, you can probably express as a narrative, and it's just a very natural form of communication among us.
Joules: Yeah, that's really cool. I'm really excited for where this space is headed. I do have a question for Sean. We're seeing agents get better at automating tasks, right? But as we discussed in our last AI Agent Takeover episode, full automation is still a really big challenge. From your different perspectives—narrative, multi-agent orchestration—what's the single biggest challenge to building agents that truly think and act independently?
Sean: Yeah, that's a great question. I believe there will be many answers to these questions. I will probably just touch on one of them. Even for humans, it's really hard to have very consistent and strong execution of a given objective. Let's say you tell people, "Hey, help me find the best house in this area," and you can even define what you mean by the best house based on your personal criteria. You give these objectives to 50 different real estate agents, and they might have rather different outcomes coming back to you in a couple of months. I think that's due to multiple reasons. First of all, everyone has different information gaps and their own relatively scoped information. They would find very different results for you. Also, they might be interpreting your instructions or criteria rather differently. That has to do with things like intent understanding and understanding your persona and all of your personal history. And then they will have various different execution paths in terms of how they search for the information, how they take one piece of information and continue to dig out another piece of information, and go down the path to find the final results. I think this happens a lot with humans trying to do their best work. And if we put all these same problems onto AI agents, it's even harder for the agents.
Today, when we talk about agents being able to automate some of the tasks, we are actually referring to very narrow, very specialized tasks. For example, doing a summary of an article, changing the tone of an article, or doing a translation of articles. We are making great progress towards going to more complex, multi-step tasks. For example, "Find me the best merchants based on some of the criteria I send." And products like Deep Research or some other agentic search products can do a pretty decent job. But if you really ask them to execute something like, "Find the house for me," given all of the available information and API access, I think these agents are still giving rather inconsistent outcomes depending on their execution path. So I think today, when we think about how far we are from having these agents autonomously and independently execute tasks for us, there's still a lot of work to be done in giving these agents the ability of long-horizon planning and helping them better align with and understand humans' underspecified intentions and personal histories. So I think that's the biggest bottleneck as of today. But I'm definitely excited to hear what Raj thinks about this space.
Raj: Yeah, I think it's great that a lot of the agents that we've been operating on in the language-plus-narrative space, and a lot of the algorithms that were originally developed for use there, are actually still in use now. You can imagine something like Deep Research or this kind of agent search that you're imagining is also, in some ways, from a reinforcement learning perspective, very similar where you're inputting some text that you're getting from the Internet, and the outputs are various tool calls that this particular agent has to make with parameters. And some of the things my lab has explored is basically using very similar techniques to these interactive language agents of yore where they perform actions with parameters. "Pick up a knife from the table" is very similar. "Pick up" is a function call. It turns out that both of those things kind of map to the same underlying Markov decision process from a reinforcement learning perspective.
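A rough sketch of that parallel, with purely illustrative names: a text-game command and a research agent's tool call can both be represented as the same kind of step, an action name plus parameters followed by a textual observation, which is what lets the same reinforcement-learning machinery apply to both.

```python
# Illustrative sketch: a text-game command and a tool call from a
# research-style agent reduce to the same (action, parameters) step
# in a Markov decision process. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class AgentStep:
    action: str        # the "function" being invoked
    params: dict       # its arguments
    observation: str   # text the environment or tool returns

# Interactive-narrative version: "pick up the knife from the table"
narrative_step = AgentStep(
    action="pick_up",
    params={"object": "knife", "source": "table"},
    observation="You pick up the knife. It is surprisingly heavy.",
)

# Tool-using research-agent version: a web search call
research_step = AgentStep(
    action="web_search",
    params={"query": "long-horizon planning in LLM agents", "top_k": 5},
    observation="Returned 5 results about planning benchmarks...",
)

# From the RL perspective both are state -> action(params) -> next state,
# so the same policy-learning machinery can in principle drive either agent.
for step in (narrative_step, research_step):
    print(step.action, step.params)
```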
But I think you're right that where we're not quite there yet is being able to do this in a solid way over very long horizons. Models right now tend to lose coherence in what they're doing after maybe a few hundred steps. For example, if you're thinking of a computer use agent, they can maybe, for some tasks, do about an hour of autonomous work, and then they lose track of what they're doing after that. So the long-horizon side of things is something that definitely needs to be solved. It's also one of the core initial reasons that I was super interested in these kinds of interactive narratives, because something that you've done very early on—a piece of information that you've gathered very early on after asking a question—ends up being very relevant to pass some kind of bottleneck later. And this is true for all sorts of agents. Take the deep research agents, which a lot of scientists are now using to help them do things like literature search or potentially suggest new research ideas: everything you can think of that requires these kinds of very long horizons has these sorts of dependencies that the models need to be able to overcome inherently. I think solving that is going to be one of the next big challenges in AI, but I'm excited by the progress that's happening right now.
Joules: Yeah, I love it. I'm obsessed with where we're headed. I just think back to where we were a year ago, a year before that, and it makes me excited for where we're going a year from now and a year after that. So it's definitely going to be really, really cool. I do have a follow-up question for both of you. Narrative is really how we make sense of the world. As AI agents start crafting or interpreting stories better than they do today, what impact do you think that's going to have on how we learn, how we work, how we make decisions? We briefly talked about this in some of our other answers, but I'd like to just take a deeper dive into it. We can start with whoever has their idea first.
Raj: I think we're going to see a lot tighter human-AI collaboration in the near future. As AIs are able to craft narratives better, they're going to become much more natural communicators and they're going to be able to communicate information to us in a more personalized manner in a way that we can understand. A concrete example is in education. I'm a professor at UCSD, and one of my students has a project running called Socratic Mind. Think about when you were in college and you were doing clicker questions or something—a way of trying to learn interactively instead of just sitting in a lecture. One of the things that these systems, which we are piloting in some of my classes at UCSD, do is use these AIs as a way of interactive oral assessment to get students to think deeper.
Instead of a static clicker question where they pick from four options and either get it right or wrong, you ask them a question based on the reading materials or the lecture. They give an answer back, and then the AI will probe them and be like, "Hey, this part of your question was okay, but can you explain this other part a little bit more?" The critical part, the narrative understanding part here, is that it turns out it's not actually very useful if the AI just gives the correct answer back to the human at any given point. It's much more useful if the AI crafts a narrative to help the student understand a particular concept better. For example, "You're a sixth grader having trouble understanding acid plus base equals salt plus water, but you previously understood mixing two paints creates a third paint. Now apply the same analogy to this new concept where two things that are different create something with totally new properties." This ability to craft a narrative in terms of things the student already understands is already showing benefits. A lot of the preliminary results suggest that this is genuinely improving learning outcomes from the students' perspective. That's just one concrete thing that we're doing right now, and I think we're going to start to see a lot more instances of this as AIs become more and more adept at expressing language in a way that people can understand more easily.
Sean: Just to add on to what Raj shared, if we really have this long-horizon planning and the ability to digest vague intentions from users, it will hugely impact how we work and learn. I'll give an example more grounded in the crypto context. Today, if you want to ask an AI agent to help you with some trading tasks, you probably need to be very specific. You have to tell them, "Swap this amount from this token to that token using this wallet address." You have to make all these parameters very well-specified using a complete and nicely framed sentence. The AI will be able to do that execution for you as of today.
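As a toy illustration of that gap (all tokens, amounts, and addresses below are made up, and no real trading API is involved), today's agents need something like the fully specified request, while the aspirational agent would have to fill in those parameters itself from an underspecified goal.

```python
# Toy illustration of the specification gap; all values are placeholders
# and no real trading API is being called.

# What works today: every parameter spelled out by the user.
explicit_request = {
    "action": "swap",
    "from_token": "USDC",
    "to_token": "ETH",
    "amount": 1_000,
    "wallet": "0x0000000000000000000000000000000000000000",  # placeholder address
    "max_slippage_pct": 0.5,
}

# What we would like to hand over instead: an underspecified intent the
# agent must interpret using the user's history and market context.
vague_goal = (
    "I have $10,000, I'm wary of memecoin volatility, and I want a "
    "three-month plan for modest crypto exposure."
)

print(explicit_request)
print(vague_goal)
```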
But what we really wish to have is you talking to a financial advisor and saying, "I have $10,000 in my pocket, and I'm thinking about investing into some sectors of cryptocurrency. I'm a little worried about the recent fluctuations in the memecoin market, so what are some potential investments you recommend me doing in the next three months? I want to maybe just try it out." You make a bunch of requests or constraints in a high-level way compared to my earlier example, but you also give a lot of scope in terms of your investment interests. Now, if we could have an AI agent that is able to take on these vague instructions, interpret them based on what it knows about that user and the entire market situation, and all of the connected knowledge bases, and then make reasonable and even smart executions of the investments and trading, I think that's a huge shift.
That's going to hugely affect how we use and work with these AI agents. Less being a user with an admin perspective to control the agents, and more like treating the agent as your companion, as an advisor, as a friend that you can chat with, get some insights, discuss a strategy, and make an execution plan. I think that's a very different working mechanism. The positive thing is we're starting to see a lot of this happening with Deep Research and other agent search products in different verticals. For example, in the financial sector, we see analysts using agent search to generate research analysis about different projects and actually having more of a discussion and interactive way of working with those agents. So I think we are moving in the right direction, and I believe we will have this more "advisor-type" of agent for different use cases very soon.
Joules: Great insights from both of you. We're cutting it close on time, so I want to make sure we have time to answer questions from the community. Let's see... a question for both of you: When thinking about creating narrative agents, what's one thing that developers should keep in mind?
Raj: I guess the first thing you should keep in mind is probably who your audience is. Who is this narrative agent interacting with? What kind of language do they expect to see? You need to think about this from the perspective of the person that's actually using them and then work backward from there.
Sean: I agree. Given today's capabilities of the tools and agents we can build, we need to make the requirements as specific and concrete as possible, which includes understanding the actual needs of the target users or audience. That's the only way we can build a successful agent that can accomplish those needs. If you just want an agent that tries to talk with you and engage with you without accomplishing any tasks, that's one type of agent. This reminds me of the early days of Character AI, where they were just trying to build chatbots that could keep people engaged and happy and emotionally supported while chatting. But those agents don't necessarily improve productivity or efficiency at work. So you really need to understand what needs your agent is serving. That's the first thing.
Joules: Thank you. Another question we have from the community. Here's a pretty simple one but very important: "It sounds like everything AI is an agent nowadays. What even is an agent, really? Is ChatGPT an agent or not?"
Sean: Great question. I can give a short answer. "Agent" is a very overloaded term, even in academia. Different people mean different things. I think a brief summary is that an agent is an AI that can both think and act. The keyword is "acting"—they can take actions, they can think, and they can do multiple steps of thinking and acting, creating a plan and a strategy to achieve a goal. I like to emphasize that agents should be goal-driven or task-driven. There is a high-level goal or task in mind that the agent is trying to achieve. I think that makes agents sound more exciting and different from the majority of AI we've seen in the past five years. I'm curious what you think, Raj.
Raj: Yeah, I mostly agree with Sean. It's actually funny because the first lecture I have in both my undergrad and grad classes when I teach agents is, "What the heck actually is an agent?" It's a little bit like asking biologists to define what life is. You can identify different aspects that a lot of people will agree should be parts of agents, but if you ask them to come up with a comprehensive definition, everyone's going to fail. Two things that a lot of people agree on are that agents will have some form of memory—a memory of all the things they've done in the past that they can use to decide what to do in the future. The second part of it, which plays off the first, is they do things. They have agency in the sense that they can take actions in a meaningful way. Something like basic RAG (retrieval-augmented generation), where you have a retrieval step followed by a generation step, may not necessarily always be an agent. But something with tool use, where you have agents doing dynamic search and deciding what to do next, is much more along the lines of what I would consider an agent.
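As a rough illustration of that distinction, here is a toy sketch contrasting a fixed retrieve-then-generate pipeline with a loop that keeps memory and decides which tool to call next; every function here is a stub placeholder, not a real retrieval system or LLM API.

```python
# Toy sketch of the RAG-vs-agent distinction; every function here is a
# placeholder, not a real retrieval system or LLM API.

def retrieve(query: str) -> list[str]:
    return [f"stub document about: {query}"]

def generate(question: str, context) -> str:
    # Toy "model": immediately produces a final answer.
    return f"final: stub answer to '{question}' using {len(context)} context item(s)"

def rag_answer(question: str) -> str:
    """Plain RAG: one fixed retrieve step, then one generate step.
    No memory, no decisions about what to do next."""
    docs = retrieve(question)
    return generate(question, docs)

def agent_answer(question: str, tools: dict, max_steps: int = 10) -> str:
    """Agent loop: keeps a memory of past steps and decides, step by step,
    which tool to call next and when to stop."""
    memory = []
    for _ in range(max_steps):
        decision = generate(question, memory)   # e.g. "search: ..." or "final: ..."
        if decision.startswith("final:"):
            return decision.removeprefix("final:").strip()
        tool_name, _, tool_arg = decision.partition(":")
        observation = tools[tool_name](tool_arg.strip())
        memory.append((decision, observation))  # remember what it has done
    return "Stopped after max_steps without a final answer."

print(rag_answer("What is an agent?"))
print(agent_answer("What is an agent?", tools={"search": retrieve}))
```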
I guess the one part of this where I might differ slightly from Sean is that I don't necessarily think that being goal-driven is entirely necessary for being an agent. I think it's a "nice to have," not a "must have," because there are things now that remind me of Tim Rocktäschel's keynote at ICLR this year, talking about agents with intrinsic motivation. They do open-ended exploration in the world. There's this book written a while ago by Ken Stanley called Why Greatness Cannot Be Planned. It might be a little less practically useful in the short term, but I think there's a lot of potential insight there on how we may actually get to AGI for some definition, or agents that are generally more intelligent than humans at a wide range of tasks. One of the arguments is that we can and probably should be trying to train those agents without an immediate goal in mind. But of course, this is all very open-ended research on possibly open-ended agents. I think at a high level, there are some components everyone agrees are part of all sorts of agents. I don't think anyone has a concrete definition, and if they tell you that they have a concrete definition that is agreed upon, they're probably lying to you. That's the long short answer.
Joules: No, I really love a long short answer. Those are always great. I think we have time for one more question. I tried to pick my favorite: "If agents start telling stories or crafting narratives, how do we avoid them manipulating people or spreading misinformation?"
Sean: That's a great question. I think the short answer is it's very hard. It's going to be very damaging without any sort of control. From the government's perspective, there should be some sort of basic compliance rules and regulations about tracing the source of the AI-generated content. There are many technical paths we can explore to achieve that. Sahara AI is doing one of them—we're trying to put a watermark on every single step of the AI development process, from datasets to models to the generated content from those models or agents, so that we know we can trace back the whole dependency to who actually produced this content. This helps us to better attribute the positive and negative outcomes of the agent's output or actions to the original source so we can narrow down the problems more quickly. I think that's an important thing we need to be careful about. Research-wise, there's a lot of active research on model fingerprinting and finding the attributions of a model's behavior to its training data points. We are paying very close attention to what this model provenance and fingerprinting will look like down the road.
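As a generic illustration of that kind of provenance chain (a toy sketch only, not Sahara AI's actual watermarking system), each artifact can record a fingerprint of itself plus the fingerprint of the artifact it was derived from, so a piece of generated content can be traced back through the model to its training data.

```python
# Generic illustration of provenance via chained fingerprints: each artifact
# (dataset, model, generated content) records a hash of itself plus the hash
# of its parent. A toy sketch only, not Sahara AI's actual implementation.

import hashlib
import json

def fingerprint(payload: dict, parent: str | None = None) -> str:
    record = {"payload": payload, "parent": parent}
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

dataset_fp = fingerprint({"artifact": "dataset", "name": "example-corpus", "version": "1.0"})
model_fp = fingerprint({"artifact": "model", "name": "example-model"}, parent=dataset_fp)
content_fp = fingerprint({"artifact": "output", "text": "some generated text"}, parent=model_fp)

# Given content_fp and the stored records, one can walk the parent links back
# to the model and then to the dataset that produced the content.
print(dataset_fp[:12], model_fp[:12], content_fp[:12])
```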
Raj: I love all the different types of options that Sean has given. This is basically the alignment problem kind of retold in different ways: how can you make sure that the AI is not attempting to deceive people or attempting to tell narratives in a way that is trying to persuade people to do something else? This reminds me of a paper from a couple of years ago called Where Do People Tell Stories Online? For those of you familiar with it, there's a subreddit called "Change My View" where a person gives a view on something, and other people attempt to change their view by telling stories. It's interesting because this has been used as an AI evaluation in some senses, to see how well an AI can tell stories to change somebody's opinion. It turns out that AIs are actually shockingly effective right now at getting people to listen to views that are not their own, even for relatively simplistic things.
Like everything else, the alignment problem is a double-edged sword. As these systems get better at telling narratives, they might be used as a way of resolving conflicts between people who have different opposing viewpoints. At the same time, it seems plausible that they will be used to propagate misinformation. So doing things like attribution checking, as Sean mentioned, or being aligned to the preferences of trusted users is going to become very important in the near future so that you know exactly what the underlying motivations of how a model was trained are. What did the post-training objectives of these models actually look like? Who were the people who gave the data that these models are then learning to align to? Being open-source and transparent about these things would go a long way towards increasing public trust in these systems.
Joules: Great points all around. My immediate thought when I read this question was what happened with OpenAI earlier this month with their GPT-4o update. It became very sycophantic, and it showed that people come to trust AI less because of its accuracy and more because of the way it tells its narrative and how it communicates with you. It just made me think about how much that can impact people's behavior. Sean, you worked on a paper with some colleagues that covered this. Could you speak to that very briefly?
Sean: Yeah. There's a paper led by Kaitlyn Zhou from Stanford, with collaboration from AI2, where we looked at how the expressions of an AI's language affect people's perceptions of trustworthiness or confidence in the AI's answer. For example, if you make the expressions more warm and empathetic, it will actually be easier to gain people's trust in the AI's response, regardless of whether the response is factually correct or valid. That basically proves the point that simply manipulating how an AI expresses things has a pretty significant effect on whether humans perceive the AI as an authority or not. So we need to be extra careful about such potential manipulation if it's done in a suspicious or unintended way, for example, on political stances and other things that might have a huge societal impact. I think that paper drew people's attention to the potential harms this could do.
Joules: Awesome, thank you. That's a really interesting paper. I highly recommend anybody read it if they have the time. We are out of time. Sean, Raj, do you have anything else you want to shout out before we go?
Raj: No, it's been great getting to catch up with Sean and all of you guys. I'm very excited to see the stuff that you all build out.
Sean: Yeah, same here. Raj, thanks for sharing your thoughts and spending time with us. I think we touched on at least two of the most important problems in the agentic AI era. One is the long-horizon, goal-driven reasoning and planning ability of AI, which we're making great progress on, but we're not quite there yet. And the second one is the safety and alignment problem, which I think is equally, if not more, important as AI's capabilities evolve so quickly. Since the main mission of Sahara AI is to create an open, secure, and transparent AI ecosystem, we believe the safety, security, and ownership side of AI deserves more attention, even as people see the rapid advancement of AI's agentic capabilities. Yeah. Thanks, everyone. Great to share and chat with you here.
Joules: Yeah, thank you so much, Sean. Thank you, Raj. Excited to have you. This is recorded, so share it with your friends. We'll go ahead and share a recap later this week, and yeah, have a good day. Thank you.
Raj: See you all.
Sean: Thank you. See you. Take care.