AI Agents Are Moving Fast Into Production—But Trust Issues Are Slowing Them Down

A new report from Dynatrace reveals that autonomous AI systems are rapidly advancing from experimental pilots to real-world production, but organizations are hitting significant barriers around trust, visibility, and control.

Widespread Adoption Across Critical Functions

According to “The Pulse of Agentic AI in 2026” report, which surveyed 919 senior leaders globally, 72% of organizations are already using AI agents for IT operations and DevOps. Half of all companies are deploying these autonomous systems for both internal operations and customer-facing applications. The technology has gained particular traction in mission-critical areas like cybersecurity, software engineering, and customer support—precisely the functions that can least afford failures.

Budget commitments remain robust, with current spending typically in the $2-5 million range annually. Three-quarters of organizations expect their agentic AI budgets to increase over the next year, signaling sustained priority despite economic uncertainties.

Production Readiness Hampered by Technical Barriers

While 50% of respondents have projects in production for limited use cases and 44% report broad adoption in select departments, significant obstacles remain. The top technical barrier? Organizations struggle to set clear rules for when AI agents should act autonomously versus when they should require human approval.
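In practice, those rules often take the shape of an escalation policy that routes each proposed action either to autonomous execution or to a human queue. A minimal sketch of such a policy follows; the field names, risk scores, and threshold are illustrative assumptions, not anything specified in the Dynatrace report.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    name: str
    risk_score: float        # 0.0 (benign) to 1.0 (high impact) -- assumed scale
    reversible: bool
    affects_customers: bool

def approval_mode(action: AgentAction, autonomy_threshold: float = 0.3) -> str:
    """Decide whether an action runs autonomously or waits for human approval."""
    # Customer-facing or irreversible actions always escalate to a human.
    if action.affects_customers or not action.reversible:
        return "human_approval"
    # Low-risk, reversible internal actions may run autonomously.
    if action.risk_score <= autonomy_threshold:
        return "autonomous"
    return "human_approval"

restart = AgentAction("restart_stale_pod", 0.1, reversible=True, affects_customers=False)
rollback = AgentAction("rollback_prod_release", 0.8, reversible=False, affects_customers=True)
print(approval_mode(restart))   # -> autonomous
print(approval_mode(rollback))  # -> human_approval
```

The hard part the survey respondents describe is not writing such a function but agreeing on the risk scores and thresholds that feed it.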

Other critical challenges include limited visibility into how agents make decisions, difficulty monitoring performance at scale, and problems tracing downstream impacts of unexpected behaviors. These observability gaps are preventing organizations from scaling their AI initiatives with confidence.

Humans Firmly in the Loop

Despite the “autonomous” label, human oversight remains paramount. The report found that 69% of agentic AI decisions are currently verified by humans, with data quality checks, human review, and drift monitoring serving as the primary validation measures.
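Of the three validation measures named, drift monitoring is the most mechanical: compare an agent's recent behavior against a trusted baseline and alert when the two diverge. The sketch below flags drift in an agent's confidence scores using a simple mean-shift test; the window sizes and z-score threshold are assumptions for illustration, not a recommendation from the report.

```python
import statistics

def drift_detected(baseline: list[float], recent: list[float],
                   z_threshold: float = 3.0) -> bool:
    """Flag drift when the recent mean departs from the baseline mean
    by more than z_threshold standard errors."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return statistics.mean(recent) != mu
    stderr = sigma / (len(recent) ** 0.5)
    return abs(statistics.mean(recent) - mu) / stderr > z_threshold

# Baseline confidence scores collected while the agent was known to behave well.
baseline = [0.90, 0.88, 0.92, 0.91, 0.89, 0.90, 0.93, 0.87]
stable   = [0.90, 0.89, 0.91, 0.90]
drifting = [0.60, 0.55, 0.62, 0.58]
print(drift_detected(baseline, stable))    # -> False
print(drift_detected(baseline, drifting))  # -> True
```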

Notably, 64% of organizations are building both fully autonomous and human-supervised agents, while only 13% are creating solely autonomous systems. Future automation goals indicate a 60/40 split favoring human-AI collaboration over full autonomy for most business applications.

The Observability Imperative

Organizations are discovering that traditional monitoring tools designed for deterministic software fall short when applied to probability-based AI systems. Nearly 70% use observability during implementation to integrate with existing systems and detect anomalies, but respondents report clear gaps in understanding AI behavior, risks, and business outcomes.

Leaders describe troubling “black box” behavior, fragmented monitoring, and weak linkages between technical metrics and business results. Manual reviews of AI communication flows—still used by 44% of organizations—simply cannot scale.
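One way manual flow review gets automated is with cheap structural checks that screen every agent trace and surface only the suspicious ones for human eyes. The sketch below flags runaway tool-call chains and loops; the flow format and the limits are illustrative assumptions, not Dynatrace's telemetry schema.

```python
from collections import Counter

MAX_STEPS = 20    # assumed limit: flag runaway chains of tool calls
MAX_REPEATS = 3   # assumed limit: flag loops where one call repeats too often

def flag_flow(flow: list[str]) -> list[str]:
    """Return human-readable reasons a flow needs review (empty = looks normal)."""
    if not flow:
        return ["empty flow"]
    reasons = []
    if len(flow) > MAX_STEPS:
        reasons.append(f"runaway chain: {len(flow)} steps")
    step, count = Counter(flow).most_common(1)[0]
    if count > MAX_REPEATS:
        reasons.append(f"possible loop: '{step}' repeated {count} times")
    return reasons

normal = ["fetch_ticket", "summarize", "draft_reply"]
looping = ["query_db"] * 6 + ["give_up"]
print(flag_flow(normal))   # -> []
print(flag_flow(looping))  # -> ["possible loop: 'query_db' repeated 6 times"]
```

Checks like these do not explain why an agent misbehaved, but they shrink the review queue from every flow to only the anomalous ones, which is the scaling problem respondents describe.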

What’s Next

The report concludes that scaling agentic AI safely requires a fundamental shift: observability must function as a control plane, providing deterministic, context-based foundations for generative AI decisions. Organizations that establish this foundation—combining real-time monitoring with intelligent guardrails—will lead the next phase of autonomous operations.

As one executive noted, “Current tools lack real-time insights and proactive anomaly detection.” For AI agents to deliver on their promise, that must change.
