Agentic AI: How Can We Govern It Without Stifling Innovation?
                
Not long ago, artificial intelligence (AI) systems waited for you to tell them what to do. You asked a chatbot a question, and it answered. You provided data to a machine learning model, and it made a prediction. But a new chapter is unfolding – one in which AI tools do not just respond. They act: increasingly embedded in enterprise workflows, consumer products, and platforms, they navigate across systems and respond to new or unexpected situations.
This emerging class of tools, known as agentic AI, can set goals and act on them with little or no human prompting (while keeping humans in the loop), with significant personalization and emerging cultural fluency. Agentic AI can not only find answers; it can also make decisions and execute them.
Increasingly, multiple agents may collaborate to execute a business transaction or draw on other agents with specific domain expertise. That shift presents enormous opportunities for businesses and nonprofit organizations. Challenges are to be expected, including highly publicized examples of AI agents failing in industry settings, but the potential is enormous.
At its core, agentic AI combines autonomy (the ability to operate without step-by-step human guidance), goal orientation (the capacity to plan toward achieving specific outcomes), and action capability (the power to make changes in the real or digital world with customization).
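To make those three ingredients concrete, the minimal sketch below (written in Python; the `Agent` class, its `plan`/`act`/`run` methods, and the invoice scenario are hypothetical illustrations, not any vendor's framework) shows how goal orientation, action capability, and bounded autonomy might fit together in code.

```python
# Illustrative sketch only: class and method names are hypothetical and not
# drawn from any specific agent framework.

from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str                       # goal orientation: the outcome to plan toward
    max_steps: int = 3              # a simple bound on autonomous operation
    log: list = field(default_factory=list)

    def plan(self, observation: str) -> str:
        """Goal orientation: choose the next action (stand-in for an LLM planner)."""
        return f"step toward '{self.goal}' given '{observation}'"

    def act(self, action: str) -> str:
        """Action capability: make a change in a (simulated) system and report back."""
        self.log.append(action)
        return f"completed: {action}"

    def run(self, observation: str) -> list:
        """Autonomy: iterate plan/act without step-by-step human prompts."""
        for _ in range(self.max_steps):
            observation = self.act(self.plan(observation))
        return self.log


if __name__ == "__main__":
    agent = Agent(goal="keep supplier invoices current")
    for entry in agent.run("new invoice received"):
        print(entry)
```

In a real deployment, the plan step would typically be backed by a large language model or planner, and the act step would call enterprise systems rather than appending to a list.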
Practical Steps Agentic AI Might Take For Your Enterprise
In customer service, agentic AI might automatically approve an invoice, refund a purchase, reorder stock from a supplier, make travel plans, review and respond to emails, purchase and optimize advertising, or escalate an issue to the legal or compliance team. In a financial management context, agentic AI could analyze market trends, adjust a portfolio, and execute trades instantly. Some agents are general and others are specialized.

The leap from reactive to proactive AI is significant. Traditional AI processed input when asked. Agentic AI is more like a colleague who notices a problem, drafts a plan, and starts implementing it – sometimes with other agent colleagues before you have even read your morning emails.
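As a hedged illustration of the customer-service scenario above, the sketch below (Python; the thresholds, the `handle_refund` function, and its outcomes are invented for this example) shows one simple way an agent's proposed refund could be executed autonomously, queued for human approval, or escalated to the legal or compliance team.

```python
# Hypothetical policy routing for an agent's proposed refund.
# Threshold values and outcomes are invented for illustration.

AUTO_APPROVE_LIMIT = 100.00      # refunds at or below this run autonomously
HUMAN_REVIEW_LIMIT = 1_000.00    # refunds at or below this need a human approver


def handle_refund(amount: float, flagged_for_fraud: bool = False) -> str:
    """Return how a proposed refund should be handled under the sample policy."""
    if flagged_for_fraud:
        return "escalate to legal/compliance"
    if amount <= AUTO_APPROVE_LIMIT:
        return "execute autonomously"
    if amount <= HUMAN_REVIEW_LIMIT:
        return "queue for human approval"
    return "escalate to legal/compliance"


if __name__ == "__main__":
    for amount, flagged in [(45.00, False), (450.00, False), (4_500.00, False), (45.00, True)]:
        print(amount, flagged, "->", handle_refund(amount, flagged))
```

The design point is the escalation path: below a certain value the agent acts on its own, and above it, or whenever a risk flag is raised, a human or specialist team is pulled back into the loop.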
What Agentic AI and Its Personalization Power Mean for Your Organization
Productivity leaps, new service models, and faster innovation cycles are just a few of the many potential benefits. Agentic AI in marketing could design and launch new campaigns in days instead of weeks. Supply chain agentic AI could predict shortages and reroute orders before a disruption hits.

But with autonomy comes potential risk and the need for compliance guardrails and best practices. Agentic AI can act in ways that surprise, or even harm, its creators and users, and some fear that organizations are not ready for the risks.
The Governance Challenge
AI governance has always been complex. But agentic AI complicates the equation because these tools and systems are actors, not just advisors. That raises questions of accountability, safety, human-in-the-loop oversight, privacy, and transparency. Agentic AI is advancing faster than laws can adapt.

Around the world, governments are experimenting with different approaches. The European Union’s Artificial Intelligence Act sets broad requirements for “high-risk” systems. Earlier this year, the EU dropped efforts to advance the AI Liability Directive, which would have allowed consumers to sue for certain damages. In the U.S., the regulatory environment is more sector-specific, with agencies like the FTC and FDA applying existing laws to AI contexts and the recent Administration Executive Orders and AI Action Plan encouraging innovation. Not surprisingly, China has introduced rules emphasizing state oversight and control.
There are a few options, none of them mutually exclusive, for governing agentic AI in the U.S.:
- Government Regulation: Traditional regulation offers accountability and enforcement power. Governments can set minimum safety standards, require testing, and punish bad actors. The downside is that technology evolves faster than statutes, overregulation can stifle innovation, and the government regulatory process does not always produce optimal results.
- Industry Self-Regulation: Industries have long developed voluntary codes of conduct, technical standards, and certification programs. Advertising self-regulation, for example, has enabled responsible practices and effective dispute resolution while preserving flexibility. Self-regulation can be faster and more adaptive. It can also solve problems unlikely to be handled by broad-based legislation, such as model dispute resolution mechanisms for when an engagement between two agentic systems goes off track; appropriate consent mechanisms and user control; customized autonomy and privacy preferences; special guidelines for children, teens, seniors, and the vulnerable; and auditing (see the sketch after this list). Moreover, who is responsible for errors in execution caused by miscommunication, bugs, negligence, or deployer development error remains undefined in many cases, and self-regulation could be the way to work through these complex issues.
- Co-Regulation: Hybrid models marry government oversight with industry implementation. In finance, regulators set broad risk management rules, while banks develop detailed compliance procedures. This model allows for agility but requires trust and cooperation between public and private sectors.
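To suggest what the auditing piece of a self-regulatory program might look like in practice, here is a small, assumption-laden sketch (Python; the record fields, the `record_action` helper, and the log file location are hypothetical) of an append-only log of agent actions that an internal reviewer or outside auditor could later inspect.

```python
# Hypothetical audit trail for agent actions; field names and file path are illustrative.

import json
import time
from pathlib import Path

AUDIT_FILE = Path("agent_audit_log.jsonl")   # assumed location: one JSON record per line


def record_action(agent_id, action, outcome, approver=None):
    """Append one auditable record of an agent action and return it."""
    record = {
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "outcome": outcome,
        "human_approver": approver,   # None means the action ran autonomously
    }
    with AUDIT_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record


if __name__ == "__main__":
    record_action("procurement-agent-7", "reorder stock", "purchase order issued")
    record_action("support-agent-2", "refund $450", "queued for review", approver="j.doe")
    print(AUDIT_FILE.read_text())
```

An append-only record of which agent acted, what it did, and whether a human approved the action is one building block for the dispute resolution and accountability questions raised above.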
Of course, safeguards are essential regardless of the regulatory framework. Business leaders have a unique role and responsibility in shaping the safe adoption of agentic AI. As I have written previously, industry self-regulation is not a substitute for public oversight, but it can be the first, fastest line of defense.
Governments, platforms, companies, academics, and civil society all have a role to play. Industry can lead by setting high bars for safety and transparency, while policymakers ensure those bars are met consistently and fairly.
Many spheres of society are looking closely at agentic AI. The Knight Foundation is funding a project that is “studying how advanced AI systems may harm, or help strengthen, democratic freedoms.” As Arvind Narayanan and Sayash Kapoor recently observed for that initiative, AI is becoming “normal technology,” which only ups the ante on how we treat it from a public policy and business operations perspective.
If we get governance right, businesses would see clearer guardrails and trust-based market advantages. Consumers would see safer, fairer, and more transparent interactions with agentic AI systems.