Experiments

Guides and Surveys works with Amplitude Experiment to test what your users respond to best. When you install the Guides and Surveys SDK, you get everything you need to run experiments on your guides and surveys.

Manager or Administrator role required

Running an experiment on your guide or survey requires the Manager role at a minimum. For more information about how roles impact who can use Guides and Surveys, go to Getting Started | Roles and Permissions.

How experiments work in Guides and Surveys

Guides and Surveys experiments test whether showing a guide or survey affects user behavior:

  • Control: Users in the control group don't see the guide or survey.
  • Variants: Users in variant groups see the guide or survey you create.

For example, to run a 50-50 test on whether a guide improves feature adoption:

  • Set the control to 50% (these users don't see the guide).
  • Set variant A to 50% (these users see the guide).

This split lets you measure the impact of your guide or survey against a baseline of users who don't see it.
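The split described above can be sketched as deterministic bucketing: hash the user ID to a stable value and compare it against the control percentage. This is only an illustration of the idea that each user lands in exactly one group and stays there; Amplitude Experiment's actual bucketing algorithm is internal, and the function and variant names here are assumptions.

```python
import hashlib

def assign_variant(user_id: str, control_pct: float = 0.5) -> str:
    """Hash the user ID to a stable value in [0, 1) and bucket it.

    Illustrative sketch only; not Amplitude's actual bucketing logic.
    """
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return "control" if bucket < control_pct else "variant-a"

# The same user always lands in the same group across sessions.
assert assign_variant("user-123") == assign_variant("user-123")
```

Because the hash is uniform, roughly half of a large user population falls into each group, which is what produces the 50-50 split.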

Run an experiment

To add experimentation to your guide or survey, select Add experiment.

After you add an experiment, Guides and Surveys controls the experience, and Experiment controls user targeting and variant distribution based on the experiment type you choose.

Choose an experiment type

Guides and Surveys offers two experiment types.

A/B test

Choose an A/B test to create two variants of the same guide. Amplitude decides the winner based on the data it receives. Access results the same way as any other Amplitude experiment.

Multi-armed bandit test

Choose a Multi-armed Bandit test for a more dynamic approach. The system allocates more traffic to the higher-performing variant in real time, which helps you optimize faster.
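One common way a multi-armed bandit shifts traffic toward the stronger variant is Thompson sampling: keep a success/failure tally per variant and serve the variant whose sampled conversion rate is highest. Amplitude's actual allocation strategy is internal; the sketch below, with made-up variant names and conversion rates, only illustrates the general technique.

```python
import random

class BetaArm:
    """Per-variant tally used for Thompson sampling (illustrative only)."""

    def __init__(self) -> None:
        self.successes = 0  # e.g. completions
        self.failures = 0   # e.g. views without completion

    def sample(self) -> float:
        # Draw from the Beta(1 + successes, 1 + failures) posterior.
        return random.betavariate(1 + self.successes, 1 + self.failures)

def choose_arm(arms: dict) -> str:
    # Serve the variant whose sampled conversion rate is highest.
    return max(arms, key=lambda name: arms[name].sample())

random.seed(7)
arms = {"variant-a": BetaArm(), "variant-b": BetaArm()}
true_rates = {"variant-a": 0.10, "variant-b": 0.30}  # simulated ground truth
served = {"variant-a": 0, "variant-b": 0}
for _ in range(2000):
    name = choose_arm(arms)
    served[name] += 1
    if random.random() < true_rates[name]:
        arms[name].successes += 1
    else:
        arms[name].failures += 1

# Over time, most traffic flows to the stronger variant.
assert served["variant-b"] > served["variant-a"]
```

This is what "allocates more traffic to the higher-performing variant in real time" means in practice: the allocation is updated continuously as completion data accumulates, rather than waiting for the experiment to end.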

Configure variants

After you select an experiment type, Guides and Surveys adds a control and two variants with autogenerated keys. The control serves as your baseline for measuring impact, and you can create multiple variants to test different versions of content or design.

To rename a variant, select it and click More options. From this menu, you can update the variant's name, duplicate it, or delete it.

Complete experiment setup

Adding variants is only the first part of experimentation in Guides and Surveys. To make sure users experience variants as expected:

  1. Make sure the experiment is running. Define a goal, review targeting, and click Start Experiment. For more information, go to Manage the experiment.
  2. If a specific user doesn't experience a variant, confirm that the user is part of the experiment's target audience.
  3. Remember that variant assignment is sticky: after a user sees one variant, they continue to receive that variant. To check which experiences a user has seen, navigate to Users > User Profiles, search for the user, and open their profile. Go to the Guide and Survey tabs to view which experiences the user has seen.
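The sticky-assignment behavior in the list above can be sketched as a cache in front of the bucketing function: once a user is assigned, later evaluations return the stored assignment instead of re-bucketing. Amplitude persists this server-side; the class and names below are purely illustrative assumptions.

```python
class StickyAssignments:
    """Illustrative sketch of sticky variant assignment (not Amplitude's API)."""

    def __init__(self, bucketer):
        self._bucketer = bucketer          # callable: user_id -> variant key
        self._store = {}                   # stands in for persisted profile data

    def variant_for(self, user_id: str) -> str:
        # Bucket on first evaluation; return the stored assignment afterwards.
        if user_id not in self._store:
            self._store[user_id] = self._bucketer(user_id)
        return self._store[user_id]

assignments = StickyAssignments(lambda uid: "control" if hash(uid) % 2 else "variant-a")
first = assignments.variant_for("user-42")
# Every later evaluation returns the same variant.
assert assignments.variant_for("user-42") == first
```

The cache is what guarantees a consistent experience: even if targeting or rollout percentages change mid-experiment, users who already saw a variant keep receiving it.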

Manage the experiment

Click Manage Experiment to open the experiment editor in a new tab. The experiment takes the name of your guide or survey, and contains any variants you added.

Updating variants

Variant names stay in sync between your guide or survey and the experiment when you save the guide or survey.

For more information about working with experiments, go to Feature Experiment.

Exposures and assignments

Exposure events in Guides and Surveys experiments work the same way as in a standard experiment. However, some cases can cause an uneven split between control and variant exposures, because the targets and limits you set affect how often treatment exposures occur.

Consider the following example:

Amplitude assigns User A to the control, and User B to the treatment.

  • If Amplitude serves User B another guide or survey that blocks the display of the treatment, no exposure event fires. The exposure event fires only when User B actually sees the treatment.
  • If the same blocking occurs for User A, the exposure event still fires: not receiving the guide is the expected experience for the control group.
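The two bullets above reduce to a small piece of logic, sketched here with illustrative names (the real exposure-tracking implementation is internal to the SDK): treatment exposures require the guide to actually render, while control exposures do not depend on display at all.

```python
def should_fire_exposure(group: str, guide_displayed: bool) -> bool:
    """Illustrative sketch of the exposure rule described above."""
    if group == "control":
        return True           # not seeing the guide is expected for control
    return guide_displayed    # treatment exposure requires the guide to show

# User B (treatment) is blocked by another guide: no exposure yet.
assert should_fire_exposure("treatment", guide_displayed=False) is False
# User B later sees the treatment: the exposure fires.
assert should_fire_exposure("treatment", guide_displayed=True) is True
# User A (control) never sees the guide, which is expected: the exposure fires.
assert should_fire_exposure("control", guide_displayed=False) is True
```

This asymmetry is the source of the uneven split: blocked treatment users delay or lose their exposure events, while control users never do.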

End the experiment

To end the experiment, navigate to the experiment's configuration page, click Stop Experiment, and choose one of the following options:

  • Complete experiment: Declare a winner. If one of the variants is the winner, Amplitude archives the losing variant and publishes the winning variant. If you select the control as the winner, the experiment returns to its initial state and sets the control rollout to 100%, which means no users see the guide or survey.
  • Continue running experiment: The experiment remains live so you can collect more data.

Insights

The Insights tab is the dashboard where you track how users engage with your guide or survey. Monitor trends in views and completions over time, and track how different variants perform relative to one another.

Time-based analysis

Track guide and survey engagement trends over predefined time periods:

  • Hourly
  • Daily
  • Weekly
  • Monthly
  • Quarterly

Use these presets to find when users are most likely to engage with the guide or survey, and whether engagement changes after a new product release.

Date range selection

Select a predefined range based on the unit of time, or click the calendar icon to define your own range. Choose from:

  • Rolling window (Last # complete days and today)
  • Since date
  • Between dates

Use the advanced settings to:

  • Add a date offset to a rolling window
  • Exclude Today
  • Enable Time Range
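The rolling-window settings above amount to simple date arithmetic, sketched here under the assumption that a date offset shifts the whole window back and Exclude Today drops the final day (the function name and exact semantics are illustrative, not Amplitude's implementation):

```python
from datetime import date, timedelta

def rolling_window(today: date, n_days: int, offset_days: int = 0,
                   exclude_today: bool = False) -> tuple:
    """Illustrative "Last N complete days and today" window computation."""
    end = today - timedelta(days=offset_days)   # date offset shifts the window back
    start = end - timedelta(days=n_days)        # N complete days before the end
    if exclude_today:
        end -= timedelta(days=1)                # drop the (partial) current day
    return start, end

# Last 7 complete days and today, as of June 10:
assert rolling_window(date(2024, 6, 10), 7) == (date(2024, 6, 3), date(2024, 6, 10))
```

For example, adding a one-day offset to the same window returns June 2 through June 9 instead.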

Performance overview

The top chart on the Insights tab is the Performance Overview. Amplitude displays high-level metrics that track how your guide or survey is performing:

  • Guides / Surveys viewed: The number of times the guide or survey was shown to users.
  • Guides / Surveys completed: The number of times users completed the guide or survey.
  • Trend graph: Tracks the view or completion count over the time range specified in the date range selector.
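The view and completion counts above can be combined into a completion rate for comparing variants; the dashboard does not necessarily surface this derived metric, so the arithmetic and the sample numbers below are purely illustrative.

```python
def completion_rate(viewed: int, completed: int) -> float:
    """Fraction of views that ended in completion; 0.0 when there are no views."""
    return completed / viewed if viewed else 0.0

# Hypothetical counts for two variants of the same guide:
variants = {
    "variant-a": {"viewed": 400, "completed": 120},
    "variant-b": {"viewed": 380, "completed": 95},
}
rates = {k: completion_rate(v["viewed"], v["completed"]) for k, v in variants.items()}
assert rates["variant-a"] > rates["variant-b"]  # variant-a converts better here
```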