Everyone by now knows what an AI agent is: an autonomous system that pursues a specific goal with minimal human intervention by designing a workflow and using the tools available to it. Beyond natural language processing, it handles decision-making, problem-solving, and interaction with external environments, leveraging an LLM to interpret inputs, execute actions, and decide when to call external tools. Agents analyze data, adapt their decisions as conditions change, and interact with other agents, systems, and users to optimize outcomes.
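To make that decide-act-observe loop concrete, here is a minimal Python sketch. It is illustrative only and not tied to any particular framework: the `call_llm` stub, the `check_inventory` tool, and the prompt format are all placeholders I made up for the example.

```python
def check_inventory(sku: str) -> str:
    """Stubbed tool: a real system would query an inventory service here."""
    return f"{sku}: 42 units on hand"

TOOLS = {"check_inventory": check_inventory}

def call_llm(prompt: str) -> dict:
    """Stand-in for a real LLM call; returns canned decisions for the demo."""
    if "42 units" in prompt:
        return {"action": "final_answer", "input": "Stock is sufficient; no reorder needed."}
    return {"action": "check_inventory", "input": "SKU-1001"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    context = goal
    for _ in range(max_steps):
        decision = call_llm(context)            # the LLM decides the next step
        if decision["action"] == "final_answer":
            return decision["input"]
        tool = TOOLS[decision["action"]]        # otherwise, route to the named tool
        observation = tool(decision["input"])
        context += f"\nObservation: {observation}"  # feed the result back to the LLM
    return "Stopped: step limit reached."

print(run_agent("Should we reorder SKU-1001?"))
```

The point of the sketch is simply that the agent, not the developer, chooses when to consult a tool and when to stop.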
Every other post on LinkedIn is about these agents (the current flavour, of course, is #deepseek), and a huge number of them already exist (check my earlier post on what the criteria to qualify as an agent should be). 2025 will also see many of them deployed across enterprise applications, with agents and virtual workers covering critical functions such as procurement, logistics, finance, and customer support.
However, just as human teams experience conflicts due to differing priorities, AI agents can clash when their goals misalign. What if an Inventory Management Agent flags a potential stockout and places an urgent order, but the Demand Forecasting Agent predicts a drop in demand and recommends reducing orders?
Things can get worse. Picture the Supplier Negotiation Agent working tirelessly to secure a long-term contract with a key supplier in #Chicago: lower costs, better terms, the works. But just as you are about to sign, the Risk Assessment Agent flags the supplier for potential supply chain disruptions due to geopolitical tensions in its region. The Supplier Negotiation Agent is pushing hard to close the deal, while the Risk Assessment Agent screams, “Abort mission!” The result? A deadlock.
Or say the Sustainability Agent is pushing for a new supplier in the #USA with a strong eco-friendly track record, while the Cost Optimization Agent advocates for a cheaper, less sustainable option from #China. The result is another standoff.
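To see why these standoffs are more than a thought experiment, here is a toy Python sketch of the sustainability-versus-cost case: two agents score the same supplier against different objectives, disagree, and neither has the authority to override the other. The agent names, thresholds, and supplier fields are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    agent: str
    action: str       # e.g. "approve_supplier" or "reject_supplier"
    rationale: str

def sustainability_agent(supplier: dict) -> Recommendation:
    # Prefers suppliers with a strong eco track record (threshold is made up).
    action = "approve_supplier" if supplier["eco_score"] >= 0.8 else "reject_supplier"
    return Recommendation("SustainabilityAgent", action, f"eco_score={supplier['eco_score']}")

def cost_optimization_agent(supplier: dict) -> Recommendation:
    # Prefers the cheapest acceptable option (threshold is made up).
    action = "approve_supplier" if supplier["unit_cost"] <= 10.0 else "reject_supplier"
    return Recommendation("CostOptimizationAgent", action, f"unit_cost={supplier['unit_cost']}")

def detect_conflict(recs: list) -> bool:
    # A conflict exists when the agents do not agree on a single action.
    return len({r.action for r in recs}) > 1

supplier = {"name": "EcoParts", "eco_score": 0.9, "unit_cost": 14.5}
recs = [sustainability_agent(supplier), cost_optimization_agent(supplier)]
for r in recs:
    print(f"{r.agent}: {r.action} ({r.rationale})")
if detect_conflict(recs):
    print("Deadlock: neither agent can override the other; escalation is needed.")
```

Each agent is locally rational, yet the system as a whole stalls, which is exactly the gap a mediating layer has to fill.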
How do we fix this? In my next post, I will introduce the Central Governance Agent (CGA) framework to mediate conflicts, ensure transparency, and uphold accountability.