The debates rage on about what constitutes an AI Agent.
Interestingly, a lot of the focus is on defining what should not be considered an agent. "If it's a single prompt, it's not an agent!", "If it's a workflow of prompts, it's not an agent!", "Unless it's writing its own code, it's not an agent!" and on it goes.
Everyone seems to keep raising the ante in a race to distinguish themselves. In the middle of 2024, Hugging Face defined agents as "a program driven by an LLM". By the end of 2024 they had updated that definition to "LLM outputs control the workflow" and introduced a star system (three stars is the maximum, just like Michelin restaurants). You only get three stars if an LLM controls "iteration and program continuation". Anthropic are similarly trying to create tiers of agents, although the definitions are somewhat confusing. If you have a workflow of prompts, it's ok to call it an "agentic" system, but what you created is not an AI agent. Agents need to "direct their own processes".
Understandably, anyone looking into the field is left baffled. They want to do the right thing by their team and business. Perhaps they've been tasked to figure out how their organisation is not going to be rendered obsolete by the relentless forward march of AI and they've heard that AI Agents are the future. Surely they want 3-star agents. Why settle for anything less?
I've been working in the agent space for a long time. I co-authored a book in 2004 called "Agent-based software development". I am a fan of the approach. However, I worry that technology companies are currently spending too much time trying to out-agent the other agents and not enough time worrying about how AI Agents can solve actual, real business problems.
If you are trying to figure out agents, let me offer a different way of thinking about them. Stop thinking about how they are built. Start thinking about what they enable. Specifically, what do they automate, and how much can you delegate to them?
AI Agents are the high-level building blocks of an automation strategy. They encapsulate, first, a conceptual approach - "we will delegate work to software" - and, eventually, a technological framework. Conceptually, each agent has specific capabilities and specific goals. They have a degree of autonomy or, to use a term I prefer, self-direction in determining how to deploy their capabilities to achieve their goals. The assignment of goals and the degree of self-direction required will depend on what you are trying to automate and what is currently achievable (safely) with the technologies at hand. Through the agent abstraction you can plug into a rich ecosystem and research field that offers decades of outputs: frameworks for defining and managing goals, for determining how best to have these programs communicate and collaborate, and for thinking about scaling, trust, safety, planning, negotiation and even argumentation.
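To make the abstraction concrete, here is a minimal sketch in Python of an agent defined by its goal, capabilities and degree of self-direction. All names here (`Agent`, `choose_capability`, the keyword-matching stand-in for an LLM decision) are illustrative assumptions, not any particular framework's API; the point is only that "workflow vs. self-directed" is a single dial on the same abstraction.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Illustrative agent: a goal, some capabilities, a self-direction dial."""
    goal: str
    capabilities: dict[str, Callable[[str], str]] = field(default_factory=dict)
    self_directed: bool = False  # may the agent choose which capability to use?

    def run(self, task: str) -> str:
        if self.self_directed:
            # A self-directed agent decides how to deploy its capabilities,
            # e.g. by asking an LLM. Stubbed here with a keyword match.
            name = self.choose_capability(task)
        else:
            # A fixed workflow: the pipeline, not the agent, picks the step.
            name = next(iter(self.capabilities))
        return self.capabilities[name](task)

    def choose_capability(self, task: str) -> str:
        # Placeholder for an LLM-driven decision.
        for name in self.capabilities:
            if name in task:
                return name
        return next(iter(self.capabilities))

# A zero-star "workflow" agent and a self-directed one share the abstraction.
summariser = Agent(
    goal="summarise incoming reports",
    capabilities={"summarise": lambda t: f"summary of: {t}"},
)
print(summariser.run("summarise the Q3 report"))
```

Whether the decision logic is a fixed pipeline or an LLM choosing its own next step changes the value of one field, not the strategic role the agent plays.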
Which of all these technologies do you need to use today? How should you go about building them? It depends on the task and on the breadth, ambition and risk appetite of your strategy. They don't need to be any more or less complicated than what they need to be. There is no AI Agent license exam to pass. Don't worry about the definitions. Three stars is not better than zero stars. Look into the technologies, frameworks and patterns and find the one that will achieve your goals with the least risk and in the most efficient way possible. Whether it calls itself a workflow, a multi-agent system or an agentless architecture does not matter. Your strategy does not change.
Your strategy is one of moving away from whatever way you are executing work today to one where AI Agents are embedded in your infrastructure to execute that work more efficiently and (hopefully) better. While technical distinctions between agent implementations matter - they inform what's feasible, maintainable, and scalable - they should flow from your strategic needs rather than define them. Similarly, while frameworks and standardization efforts can provide valuable guidance and common ground for evaluation, they should serve as tools for achieving your goals rather than constraints that dictate them. AI Agents are a strategy first, a technology second - but a successful strategy must be grounded in a clear understanding of the technological landscape and its practical implications for your specific context.