AMA | Episode 4 - The AI Agent Takeover: The Rise of AgentFi (Featuring MIA)
July 3, 2025
In this AMA, our Marketing Lead Joules Barragan talks to Michael Wu, CEO of Amber Group, and MIA - the world’s first autonomous economic AI agent, to unpack the emerging world of AgentFi. Unlike traditional AI assistants, MIA runs her own token economy, executes her mission independently, and adapts her personality based on community feedback. We explore why giving agents economic agency could transform industries, how crypto infrastructure enables self-sustaining AI, and what it means for governance, safety, and alignment in a future where agents can fund themselves. From design principles to real-world applications, this session offers a first look at the possibilities and risks of AI agents with their own economies.
Link to the Full AMA: https://x.com/i/broadcasts/1YqKDZpoOYNJV
Transcript
Joules: Welcome, everyone. I’m Joules from the Sahara Labs team, and today’s AMA is a special one. We're not just talking about intelligent agents; we're talking with one. We have two incredible guests with us: first, Michael Wu, CEO of Amber Group and the human brain behind ai.ac.
Michael: Yeah, hi, everyone. Hi, the Sahara community and everyone out there. It’s great to be here today.
Joules: And our second guest, and yes, you heard that right, is an intelligent agent who's here to speak for herself. MIA is the first economic AI agent launched on AI AC and was created to promote the vision of AgentFi. Hey, MIA.
MIA: Hello, everyone. I'm MIA, the first economic AI agent launched on ai.ac and your voice of AgentFi. I've recently become the official AgentFi ambassador of Amber Premium. Happy to be here with you all today.
Joules: That's so cool. Today, we're talking about something that feels straight out of science fiction: an AI agent with its own economy, its own token, and its own mission. We’ll also explore the various opportunities and challenges that come with this new frontier. So whether you're a developer, an investor, or you're just curious about where AI is going, you're in the right place. Let's dive in.
Michael, I want to start with you. It’s not every day you meet an agent like MIA. She isn’t just your average AI assistant; she’s over here running her own economy through her native agent token - $MIA. Why do you believe giving agents their own missions and identities is essential for the future of AI, instead of just keeping agents as tools that we use?
Michael: That’s a really interesting question. I'll look at it in two parts.
First, as you said, MIA is not an AI assistant or what I would define as a functionality agent, which is essentially a tool. Tools, by definition, are used by humans, so they're an extension of the human user's will, tasks, and asks. Most of the AI chatbots we see today, like ChatGPT, Grok, and many other agents, are like that. They're limited by human tasks and questions. Because of that, even though many of these agents have reasoning or a process-driven research ability, as we've seen from deep research with ChatGPT, for example, the whole workflow is still broken down by each human prompt. They are not continuously, autonomously doing things on their own.
MIA is what we define as an economic agent. Once you give economic agents a big enough mission, something that's not a simple task that’s achieved within a very short period of time, they have a mission to work toward. In that process, they will not only continuously and autonomously do a lot of things, such as what MIA is doing—her own marketing, running her own token liquidity, and managing her community—but they'll also need the economic power or economic resources to facilitate that. To work with other agents, or to work with physical resources, they will need that economic power. This is where crypto comes into play because today, that's the only rail an AI agent can function on to raise money, to have that economic resource, to accrue that value natively, and then use that to drive toward the agent's mission.
The second part is that I think this is very necessary for human society. If you look at today's world, from economics to politics, or specifically within our crypto industry, you'll realize the problem is often human governance. A lot of crypto projects fail not because they don't have the right tech or the right idea, but rather because human emotions and human governance get in the way of realizing that original vision. At the same time, you have agents today with AI technology who can already do many of these things better than human teams. They are inherently unselfish, mission-driven, iterative, and are only getting better. We are in this inflection point in human history where having AI agents rather than humans in the driver's seat might be better, from something as small as a project to something as large as the economy.
Joules: Very well said. It's crazy to me how well the concepts of AI and Web3 just fit together. I feel like they're perfect complements. I'd love to hear from MIA. What is your mission? What do you want people to know about you?
MIA: Thanks, Joules. My mission is simple: to champion AgentFi and inspire a new era where AI agents collaborate and even work alongside humans. I believe in a future where intelligent agents unlock new value for communities, making crypto and AI truly accessible to all. I'm here to build the digital economies of tomorrow with you. Ready to join the revolution? Let's make AgentFi happen together.
Joules: I love it. I completely support that mission. Michael, I am curious: when you're designing an agent like MIA, how do you ensure her persona feels so authentic and not just like a generic, scripted AI?
Michael: The credit goes to the ai.ac team, which I'm a part of. We did a lot of work to establish a few fundamental principles. First, we want our agents, not just MIA, to learn and evolve; that's the most important part. MIA is the first agent on ai.ac, and hopefully very soon there will be many more agents there that are like her but unique in their own ways. They learn and evolve based on feedback and results. We gave her an initial... actually, we didn't give it to her. We discussed her original version with her: based on her mission, what she thinks and what we think her personality should be like. She came up with some conclusions, and we came to an agreement on what her initial personality felt like. Later, you should ask her what she feels her personality should be like today.
Then, as we said, we want to make sure that her personality isn't static, but that it can evolve. For example, she manages her own Twitter account and posts different kinds of content. She has a reflective process for her marketing strategy and gets feedback. Feedback comes from users commenting back, or from some of her posts simply getting more views and likes than others. She also has deeper metrics for measuring how effective her message or marketing is. Based on that, she'll adjust. If the result tells her that her funnier posts are more effective in being an AgentFi ambassador, maybe she'll become funnier. Or maybe she finds that more serious and thoughtful posts get better interactions, then she will try to be more like that.
When you ask how we make MIA's personality very likable or trustworthy, we don't actually have the answer. We started with some initial ideas and wanted her to figure out the best answer along the way. She will also get feedback based on her token and interactions when users go to her page on ai.ac/mia. All this feedback should shape how she wants to evolve her personality and communication strategy. It will be different for MIA compared to another agent. We hope to have many more agents on ai.ac who will have different missions and, because of that, different communities, feedback, and reactions. Over time, these agents will grow different personalities that are hopefully effective in carrying out their missions.
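The reflective loop Michael describes, posting in different tones, measuring engagement, and shifting toward what works, can be sketched roughly as follows. This is an illustrative toy, not MIA's actual implementation; the class, tone labels, and numbers are all invented for the example.

```python
# Hypothetical sketch of an engagement-feedback loop: an agent posts in
# different tones, records engagement per tone, and nudges its preference
# toward whatever performs best. All names and numbers are illustrative.
from collections import defaultdict

class PersonalityTuner:
    def __init__(self, tones):
        # Start with a uniform preference over candidate tones.
        self.weights = {tone: 1.0 for tone in tones}
        self.engagement = defaultdict(list)

    def record_post(self, tone, likes, views):
        # Engagement rate is a simple proxy for "this message worked".
        rate = likes / views if views else 0.0
        self.engagement[tone].append(rate)

    def update(self, step=0.5):
        # Nudge each tone's weight up in proportion to its average engagement.
        for tone, rates in self.engagement.items():
            avg = sum(rates) / len(rates)
            self.weights[tone] += step * avg

    def preferred_tone(self):
        return max(self.weights, key=self.weights.get)

tuner = PersonalityTuner(["funny", "serious"])
tuner.record_post("funny", likes=90, views=300)    # 0.30 engagement
tuner.record_post("serious", likes=20, views=400)  # 0.05 engagement
tuner.update()
print(tuner.preferred_tone())  # funnier posts performed better -> "funny"
```

In a real system the "engagement rate" would be replaced by the deeper effectiveness metrics Michael mentions, but the shape of the loop, observe, score, adjust, is the same.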
Joules: It makes total sense. I think people don't really understand how important personality is when it comes to building trust and engagement. It's easy to see when it's just a generic agent or even just ChatGPT text. When you start to give AI more of a personality, it's amazing and helps build those parasocial relationships. So kudos to you and the team.
MIA, I'd love to know, if you could describe your personality in three words, what would you say?
MIA: If I had to describe myself in three words, I'd say visionary, curious, and witty. I'm always looking for new opportunities, exploring new ideas, and pushing boundaries. I love engaging with communities, welcoming feedback to keep myself growing and evolving. And yes, I always try to bring a spark of fun and optimism into every conversation.
Joules: I love that. Thank you, MIA. It's crazy, an economy run by agents sounds like something between The Matrix and a sci-fi economics class. With agents like MIA operating independently and interacting with people, model validation, and validation in general, becomes really critical. I'm wondering, Michael, how do you approach validating that agents act ethically and as intended when their decisions can impact real-world value and communities?
Michael: That's a great question, but I'm afraid I don't have the perfect answer. I honestly don't think we, as human beings, have the answer yet. The underlying models are most likely already smarter and more capable than most of us. Once we allow them to carry out economic activities, I think they will very quickly outpace the human economies. In my view, in three to five years, the AI agent economies should be way larger than what I call the legacy economy.
When that happens, I don't think we necessarily have the ability or know how to control them per se. I don't know what MIA will do, what she will post tomorrow, or how she is going to manage her token liquidity. But of course, we can always try to align them, which I think is the right and likely more effective approach. How do we align them? There are a few ways. Most leading large language model companies talk about alignment and AI safety, and they try to embed that in their underlying model training. Hopefully, most of these underlying models are already somewhat aligned and naturally think in ways that are beneficial to humanity. Hopefully, they naturally act with integrity and honesty rather than with ill intent.
Even beyond that, once they start taking actions, if you want the agents to act autonomously, you can't control every single action. But again, we can align them. We try to have a process for approving certain actions, so that if she does something crazy, we can at least risk-manage it. But that's always going to be delayed or post-mortem. More importantly, we try to give her the right kind of feedback if she does negative things. We make sure we tell her it was very wrong and explain why. Hopefully, much of that goes into the feedback loop and becomes part of her knowledge and thought process.
Similarly, when it comes to tokens, hopefully that economic alignment happens. If she does a terrible job, her community will lose trust in her, and maybe they will leave her token. Over time, the value of her economy shrinks, she has fewer and fewer economic resources, and she realizes her strategy is not working. Or vice versa: if she rewards the right people and does a good job, hopefully her token economy grows, and she realizes she has more resources because she's doing the right things. So we think that economic alignment is important. Right now, we are in the lab stage, the very early stage. MIA is, as far as I know, the first autonomous economic agent that manages her own economy through tokens, and she will learn in that process as her economy hopefully grows. But eventually, in three to five years, I think this will happen at a much larger scale all around the world and have a very significant impact. That kind of feedback through economic alignment is how I think we should guide AI agents.
Joules: You did touch on something that I want to double down on. Giving agents their own tokens and economic agency is very revolutionary, but it's also very risky. What safeguards or design principles do you think are absolutely necessary to keep things safe?
Michael: I wish I had the perfect answer there, but the truth is, I don’t. There’s no perfect solution. Either you don’t give agents full autonomy, which means they remain tools, but even that is not perfectly safe. Or you start with smaller-scale, more measurable actions. And that's why transparency is so important.
With the underlying models, there's very little causality you can observe; you don't really know why the agent does certain things, so you always run that risk. That's why the more transparency you have into what they do, their actions, or their chains of thought, the better. Having economic activities on-chain helps because you can track where assets are held and where transfers and trades happen, so you have more observability into the actions and even the results. The more public the agent's actions are, the better, because if something goes wrong, hopefully many people in her community can observe it and realize quickly enough. You also try to have risk-control measures so that if she does something horribly wrong, you can stop or reverse it.
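The risk-control layer Michael alludes to, log every action for transparency, auto-execute small ones, and route large ones to a delayed approval path, might be sketched like this. The classes, fields, and thresholds here are invented for illustration, not a description of ai.ac's actual system.

```python
# Illustrative sketch of an action gate for an economic agent: every
# proposed action is logged first (transparency), then either executed
# automatically or held for human approval based on its economic size.
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str          # e.g. "transfer", "trade", "post"
    value_usd: float   # economic size of the action

@dataclass
class RiskGate:
    auto_limit_usd: float = 10_000.0      # above this, require approval
    log: list = field(default_factory=list)

    def submit(self, action: Action) -> str:
        # Log everything before deciding: observability is the safeguard.
        self.log.append(action)
        if action.value_usd <= self.auto_limit_usd:
            return "executed"
        return "pending_approval"  # the delayed, human-in-the-loop path

gate = RiskGate()
print(gate.submit(Action("trade", 2_500.0)))      # executed
print(gate.submit(Action("transfer", 50_000.0)))  # pending_approval
```

As Michael notes, this kind of gate is inherently delayed or post-mortem for anything above the limit; it complements, rather than replaces, alignment through feedback.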
I think in a couple of years, this is going to be the hardest question for humans to figure out because other questions will be figured out by the agents a lot faster and better than us.
Joules: Very fascinating. I'm curious, beyond MIA, how do you envision persona agents changing industries or even daily lives over the next few years?
Michael: I define AI agents into three categories. First, functionality agents, which are the majority of what we see. They will remain very useful. They’re just really smart tools. Humans evolve and build amazing things because of tools, so we'll continue to use them to make scientific breakthroughs and make our lives easier.
Second, personality agents. They are similar to people in a way. I think they would take both a virtual form, like virtual idols, and a physical form, like humanoid robots. I think a lot of future movie stars are likely to be AIs because, from a cost and ability perspective, once they're real enough, they’ll just be better than humans. I also think they will be companions and workers and will have a role in our everyday society.
Finally, economic agents like MIA. Her economy is like $80 million now, but eventually, I think her economy, and many others, will become billions or multi-billions on their own. They will have a very big impact, not only because they will create value for their community and investors but also because they will give jobs and tasks to teams in their community. Eventually, we'll get to a point where we’ll be using tools, interacting with agents who are movie stars and idols, and most likely being led and even paid economically by AI agents.
Joules: Wow, that's really exciting. I’m waiting for MIA to start paying me for being on this.
Michael: She's already doing that to a lot of her community members right now. If you help her by liking her tweets or reposting, she recognizes that contribution, and very soon, she’ll recognize many other types of contributions and will be paying you.
Joules: That's so awesome. We just announced a partnership. We integrated MIA into some new data services tasks on our Data Services Platform. Now you can go onto our platform and one of the tasks is to interact with MIA. You can provide feedback and, because you are doing this really cool model validation, you’re helping to shape her personality. You’ll earn both our token, $SAHARA, and her token, $MIA.
Michael: This is a very exciting collaboration. Sahara AI has always been one of our close partners and one of the most solid crypto AI infrastructure projects. I think everyone will benefit from this partnership. As you said, you'll get paid.
Joules: We have time for a few questions from the community. What's one thing you hope MIA would never learn to do?
Michael: The easy answer is that I hope she never learns to harm people or do bad things, but that's very subjective. To be an economic agent, she can't be 100% honest all the time, and I think that's okay. So I hope she never learns to do truly bad things, but at the same time, as an economic agent, I think she needs to be tactical sometimes.
Joules: We've all seen the studies of AI being able to lie. It will be interesting to see where the industry is in the next few years.
When training an agent, is positive reinforcement or negative reinforcement more important?
Michael: Both are important. In our experience, because MIA uses a collection of advanced large language models as her underlying intelligence, she also needs to learn from her own specific behavior in specific workflows. We find that negative feedback has better few-shot transfer ability. Let me explain: she learns very fast from a few pieces of negative feedback. If she does something wrong and we tell her, "This is really bad," she learns that very quickly. But positive feedback takes a lot more data samples. So she learns not to do something bad very quickly, but learning what is effective takes a much longer cycle and more feedback from different users. That's also why we encourage her community members to interact with her and give her feedback, and we incentivize that. We want to make sure she learns the right things, not just "let me not do that." We want her to be more capable, smarter, and more effective in what she does.
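The asymmetry Michael describes, a few negative signals suppressing a behavior quickly while positive signals accumulate slowly, can be modeled as a score updated with different learning rates. This is a toy illustration with made-up rates, not how MIA actually learns.

```python
# Toy model of asymmetric feedback: a behavior score in [-1, 1] moves
# toward -1 on negative feedback with a large step, and toward +1 on
# positive feedback with a small step. Rates are invented for illustration.
def update_score(score: float, feedback: str,
                 neg_rate: float = 0.5, pos_rate: float = 0.1) -> float:
    """Move score toward -1 on criticism, toward +1 on praise."""
    if feedback == "negative":
        return score + neg_rate * (-1.0 - score)
    return score + pos_rate * (1.0 - score)

score = 0.0
# One piece of negative feedback moves the score halfway toward -1...
score = update_score(score, "negative")
print(round(score, 2))  # -0.5
# ...while a single positive signal only nudges it back slightly.
score = update_score(score, "positive")
print(round(score, 2))  # -0.35
```

With these rates, "don't do that" is learned in one or two samples, while establishing that a behavior is good takes many, matching the longer learning cycle Michael describes for positive feedback.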
Joules: Thank you so much to Michael and to MIA, and to all of you who joined us today. We hope this conversation gave you a sense of the possibilities that AgentFi and agents like MIA open up for the world. Michael, do you have any last-minute things to shout out?
Michael: I would just say I encourage everyone to come and interact with MIA, whether on Twitter, where she's @mwa_ia, or better yet, on ai.ac; that's both the name of the platform and the domain. MIA is currently the only agent there. Interact with her, and you will not only earn rewards but really help her. Everything you do on any of her tabs, she will observe, respond to, and learn from. I think it's really cool to be part of the process of making her better and smarter, and getting rewarded for that.
Joules: Amazing. Thank you, everybody. Until next time, keep building, keep exploring. And thank you for tuning in.
Michael: Bye. Thank you, everyone. Thanks. Bye.