The virtues of A/B and multivariate testing are well-documented, but the response to a test run by Adobe earlier this month is a reminder that testing can carry risk.

In one of Adobe’s tests, the company’s popular $9.99/month Photography plan was replaced by a $19.99/month plan that offered the same software but included 1TB of cloud storage instead of 20GB.

In a statement issued to PetaPixel, Adobe confirmed that the plan changes being seen were tests. “From time to time, we run tests on Adobe.com which cover a range of items, including plan options that may or may not be presented to all visitors to Adobe.com. We are currently running a number of tests on Adobe.com,” the statement read.

The company pointed out that even though some visitors weren’t seeing certain subscription plans, including the $9.99/month plan, those plans were still available and could be purchased through a special link or by calling Adobe sales.

Of course, many if not most of the users who visit Adobe.com probably didn’t know about those workarounds in the first place.

Can testing go too far?

Adobe isn’t the only company that has found itself dealing with bad PR as a result of testing related to packaging and pricing.

Last year, Netflix caused a stir when it tested changes to its European plans that cut the number of simultaneous streams permitted.

“We continuously test new things at Netflix and these tests typically vary in length of time. In this case, we are testing slightly different price points and features to better understand how consumers value Netflix,” a company spokesperson stated at the time.

Netflix customers voiced concern about the company’s plans, with some commenting that the test, if made permanent for everyone, would represent a stealth price increase.

As testing becomes more common, it’s not just big tech companies like Adobe and Netflix that are agitating customers with their tests. For example, Australian movie theater chain Village Cinemas halted a trial of dynamic pricing last year after it caused a customer backlash.

Perhaps inspired by the surge pricing model popularized by Uber, Village Cinemas decided to test raising concession stand prices on busy nights. Affected customers complained that the tactic was price gouging and the company was forced to backtrack, issuing a statement that read in part, “Village Cinemas confirms that we were running pricing variation trials over the summer period which we appreciate may have caused angst and concern to our customers, we can now confirm that all pricing variation trials have been stopped effective immediately.”

These examples raise the question: even if companies can test just about anything, should they?

It’s understandable that companies want to explore possible changes to packages and pricing. After all, markets change and it’s important for companies to evolve their offerings with them. Companies that are dominant in their markets are especially incentivized to explore how they can take advantage of their positions to maximize revenue and profit.

At the same time, testing associated with popular products and services clearly has the potential to cause agitation, especially when it relates to how those products and services are packaged and priced.

For this reason, companies are wise to make sure their testing interests are considered in relationship to the interests of their stakeholders.

How can that be done?

Obviously, this needs to be assessed on a case-by-case basis. In Adobe’s case, for instance, the company arguably could have taken greater care to test a “soft” subscription plan removal.

This could entail moving its $9.99/month plan to a secondary page listing or updating the copy associated with its sales phone number to hint that other plans were available by phone.

Such an approach may still have allowed Adobe to gauge how users respond to plan changes, while drawing less attention to the change itself.
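For readers who run these tests themselves, a "soft removal" trial like the one described above is typically implemented with deterministic variant bucketing, so that a returning visitor always sees the same version of the page. The sketch below is a minimal, hypothetical illustration; the function and variant names are assumptions, not anything Adobe has described.

```python
import hashlib

def assign_variant(user_id: str, test_name: str, rollout: float = 0.5) -> str:
    """Deterministically bucket a visitor into a test variant.

    Hashing user_id together with test_name yields a stable
    assignment, so the same visitor sees the same page layout
    on every visit for the duration of the test.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits of the hash onto [0, 1].
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "soft_removal" if bucket < rollout else "control"

# Hypothetical usage: "soft_removal" visitors would see the cheaper
# plan demoted to a secondary listing; "control" visitors would see
# the pricing page unchanged.
print(assign_variant("visitor-42", "photography-plan-soft-removal"))
```

Because assignment is derived from a hash rather than stored state, no per-user database record is needed, and the rollout fraction can be adjusted without reshuffling visitors who are already in a bucket.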

At the end of the day, there’s no disputing the value of testing and the need for companies to experiment. But it’s equally clear that companies need to be careful about what they test, because the negative effects of a poorly received test can outweigh the benefits of testing in the first place.
