How Palo Alto Software Uses Global Agent to Experiment at Scale

Amplitude AI Agents help Palo Alto Software stay oriented as experimentation scales—and ensure insight keeps pace with execution.

Mar 16, 2026

6 min read

You know the feeling of returning from time away and realizing nothing has stood still? Sign-ups kept flowing. Promotions kept running. Feedback kept showing up in places that no one was watching. When you finally log in, it’s not panic you experience—it’s uncertainty. What happened while I was gone?

At Palo Alto Software, we build products like LivePlan that help entrepreneurs plan, fund, and grow their companies. These tools sit at the center of high-stakes decisions, so even small changes in our onboarding, pricing, or feature flows can have a real impact on conversion and retention. Understanding customer behavior is not optional, and guessing is expensive.

That means we are always testing. On a typical weekday, users take hundreds of distinct actions within the platform; over the last 90 days, those actions have generated millions of events.

Having access to this data gives us clarity, but it also creates a lot of pressure. When experiments multiply, insight can easily lag behind execution. And at our scale of experimentation, keeping track of critical test results was starting to make every day feel like coming back from a two-week vacation.

When experiments outpace attention

Experimentation is how we run our business, and for us, experimentation means Amplitude. Nearly 40% of our Amplitude usage is tied directly to experimentation and feature flagging. We’re logging hundreds of millions of experiment assignments and exposures across onboarding, monetization, and core product workflows.

The hard part isn’t launching experiments—it’s staying oriented once the experiments are live.

Before AI Agents, teams spent a lot of time manually monitoring dashboards. We would scan charts daily, rebuild familiar views, and try to piece together what had changed overnight. Which experiment caused this shift? Is the drop meaningful? That work was careful and accurate, but it demanded constant attention.

As experimentation scaled, that approach stopped being sustainable.

How Global Agent changed the way we work

Global Agent changed how we start analysis.

Instead of opening dashboards and hoping the right signal jumps out, teams begin with questions for the agent: Explain this chart. Why are users dropping off here? What is driving this spike? Plain, direct questions like these make it easy to pull the information we need out of our data fast.

That speed compounds because Amplitude AI answers follow-up questions just as quickly. Experiments rarely tell a clean story in one pass. You need to follow threads, challenge assumptions, and refine interpretations. Being able to have an iterative conversation with your data is a game-changer.

What makes the agent work at our scale is transparency. Every answer is grounded in the underlying Amplitude data. Funnels, segmentation, cohorts, and Session Replay are always visible. Teams can verify results, go deeper, and debate conclusions before acting.

Always-on insight for always-on testing

As experimentation has accelerated, another issue has become clear: no one can monitor everything all the time.

Specialized Agents helped fill that gap. We use them to summarize changes across key dashboards, monitor critical flows like sign-up and onboarding, and surface anomalies automatically. They also help extract insights from session replays at a scale no team could realistically manage by hand.

This has been especially valuable during busy testing cycles or when people are onboarding or stepping away. Experiments keep running, and the system keeps watching.

Rigor still comes first

With millions of events flowing through the system, data quality is non-negotiable. We rely heavily on advanced analytics features like outlier removal, cohort refreshes, and asynchronous exports to keep analysis clean and activation reliable. We also pay close attention to implementation details because small issues get amplified quickly at our scale.

Amplitude AI does not remove complexity—it helps teams navigate it without cutting corners.

Experiments overlap. They touch the same flows. They influence the same users in different ways. In that environment, misunderstandings compound quickly. A small drop that goes unnoticed can linger. A false signal can trigger changes that mask the real problem.

AI Agents help us keep pace with that complexity. They give teams a faster way to understand which experiments are influencing outcomes, where behavior is diverging from expectations, and what actually deserves attention. That clarity lets us stay deliberate instead of reactive, even when dozens of tests are live at once.

A different kind of leverage

The biggest shift has not been in what we test, but in how we stay aligned around what the data is telling us.

AI Agents make it easier to start from a shared understanding instead of fragmented interpretations. Our agents help ensure that important changes surface even when no one is actively looking for them. Together, they reduce the cognitive load of running a highly experimental product organization.

We still rely on rigor. We still validate results. We still debate what the data means.

What has changed is that insight keeps up with execution. Even with scaled experimentation, Amplitude’s AI Agents allow us to step away and rest assured we aren’t missing a beat.

About the author
Shawn Hymer

Data and Strategy Analyst, Palo Alto Software

Shawn Hymer is a Data and Strategy Analyst at Palo Alto Software. He has 10+ years of experience analyzing data for social/mobile games and SaaS products and is passionate about tracking patterns and analyzing user behavior.
