Is A/B testing worth the added expense and time?

@Murphy175

Posted in: #ABTesting #WebDevelopment

A client is inquiring about blind A/B split testing for an online application focused mostly on usability of the design and application workflow. Someone in the client's office read an article about how all the large agencies do A/B testing of concepts prior to going live... so it must be important for them to do too!

--edit--
About the application: it's a public-facing application, but users will be paid members, so it's not for free, open consumption. Users will be direct clients, who in turn will resell the product to their clients. Traffic at peak would likely be around 1,000 users; some of them would be daily users, while a smaller portion might visit just once a week.
--edit--

Did the results justify the added expense and effort?





5 Comments


 

@Lengel546

A/B testing is fairly inexpensive and easy to do these days: you can use Google Website Optimizer, which is free, or something like Visual Website Optimizer, which has inexpensive monthly plans. I recommend A/B testing any goal-oriented tasks (conversions, subscriptions, etc.). At worst you'll get no statistical winner and some wasted effort; at best you improve conversion rates and your client makes more money.
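To make "statistical winner" concrete, here is a minimal sketch of the standard two-proportion z-test in Python; the visitor and conversion counts are invented for illustration:

```python
# Two-proportion z-test to decide whether an A/B result is a
# "statistical winner". The conversion/visitor numbers are made up.
from math import sqrt, erfc

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Return (rate_a, rate_b, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail of the standard normal
    return p_a, p_b, p_value

rate_a, rate_b, p = ab_significance(conv_a=48, n_a=1000, conv_b=70, n_b=1000)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  p-value: {p:.3f}")
# A p-value above ~0.05 means "no statistical winner" in the sense above.
```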

From looking at your description it does sound like they are looking for usability testing rather than conversion optimization. If that's the case, I've used both ClickTale (already mentioned in this thread) and UserTesting.com, and both are well worth the money in my experience. UserTesting.com allows for easy general usability tests, and ClickTale lets you see what actual users are doing.

You can run both A/B tests and usability tests for a modest total cost, so if the budget is there and you have clear goals, then it's absolutely worth it.



 

@Caterina187

If you have a situation where developers/managers have several things in mind for a certain page, then A/B testing with enough user traffic is a good way to try and decide what works better.

If the decisions are mandated and there is not all that much room to experiment (the reality in most web-dev shops), then superficial A/B testing will not bring the results you might expect.

The trick of A/B testing is getting enough statistical data and then analyzing the results - and the analysis is the hard part. Setting up an A/B test is not all that time-consuming next to the time it takes to understand the data.
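To give a sense of how much data "enough" is, here is a rough sketch using the common normal-approximation sample-size formula; the 5% baseline rate and the one-point lift are assumptions for illustration:

```python
# Rough per-variant sample size for detecting a lift in conversion rate,
# using the normal approximation (95% confidence, 80% power).
def sample_size(p_base, p_variant, z_alpha=1.96, z_beta=0.84):
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    return ((z_alpha + z_beta) ** 2) * variance / (p_variant - p_base) ** 2

# Assumed numbers: 5% baseline conversion, hoping to detect a lift to 6%.
n = sample_size(0.05, 0.06)
print(f"~{n:.0f} users per variant")  # on the order of 8,000 per variant
```

Even a modest one-point lift needs thousands of users per variant before the data means anything, which is why the analysis dwarfs the setup.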

You might also want to consider using live usability tests, described on useit.com.

Or even automated tools like ClickTale that will show you how users are using your pages.



 

@Sarah324

"Someone in the client's office read an article about how all the large agencies do A/B testing of concepts prior to going live... so it must be important for them to do too!"


All well and good if you're being paid large agency money for large agency results on a large agency site... unfortunately, as Kris and Thomas Owens have noted, statistical sampling doesn't scale down very well.

Users who have already paid for the service are not the people multivariate testing generally focuses on (and wisely so). The idea is to find out how to convert people who aren't subscribers yet, because there are more of them and their attention is worth more than the potential confusion caused to existing subscribers, who have already signed up anyway.



 

@Pope3001725

I hope I'm understanding your situation correctly. If I'm not, leave a comment and I'll clean it up a bit.

In your situation, I'm not sure how well A/B testing will work - I'm concerned with being able to get usable results. There are two problems to overcome - getting statistically valid results and then understanding those results.

The first problem is that the ~1,000 people who are using your service aren't your entire user base, and you can't be sure that they are representative of your users. Just because these 1,000 users show certain tendencies in A/B testing doesn't mean that other groups of users will share them. That also clouds the statistical validity of the results, because you're working from an unrepresentative sample.
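As a rough illustration of the size problem alone: assuming the ~1,000 users split evenly into two arms and an (invented) 10% task-completion rate, the margin of error on each measured rate is already several points:

```python
# With ~1,000 users split into two arms of ~500, the uncertainty on each
# measured rate is large. 95% margin of error for an assumed 10% rate:
from math import sqrt

n_per_arm = 500   # assumed split of the ~1,000-user base
p = 0.10          # assumed task-completion rate
moe = 1.96 * sqrt(p * (1 - p) / n_per_arm)
print(f"{p:.0%} ± {moe:.1%}")  # 10% ± 2.6% -- differences between A and B
                               # smaller than ~5 points are hard to trust
```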

You also have two categories of people who use your particular deployment - the frequent and the infrequent users - and who knows how many categories exist among those who use the resold systems. If your changes would affect those users as well, you might be impacting their ability to achieve their goals with absolutely no data on their experience of either A or B.

And understanding any results you do get will be difficult, especially if you deploy your A/B testing across multiple deployments of your service. If you are gathering data from several distinct populations, you might see that A achieves your desired results in some and B in others - you then have to decide whether that split is real and, if so, what to do about it.
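A sketch of what that per-population check might look like, with invented numbers for two hypothetical deployments:

```python
# Hypothetical per-deployment results: the same A/B test can point in
# opposite directions in different populations, so check them separately.
results = {
    "deployment_1": {"A": (60, 900), "B": (75, 910)},  # (conversions, users)
    "deployment_2": {"A": (40, 300), "B": (28, 310)},
}

for name, arms in results.items():
    rate_a = arms["A"][0] / arms["A"][1]
    rate_b = arms["B"][0] / arms["B"][1]
    winner = "B" if rate_b > rate_a else "A"
    print(f"{name}: A={rate_a:.1%} B={rate_b:.1%} -> {winner} ahead")
# deployment_1 favours B while deployment_2 favours A -- aggregating them
# would hide the disagreement rather than resolve it.
```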

Honestly, in this case, I would recommend surveys. Find out about the background of your users - age range, gender, computer experience, profession - and how they use the software. Then find out which features they like or don't like, which are easy to use or not, and so on. This survey should go out to as many people as possible - both people who use your deployment and people who use other deployments.



 

@Angela700

A/B testing has its place, but it requires some things to hold. Most importantly, you need a clear success metric: it must be straightforward to judge which performed better, A or B. That mainly holds if your site is oriented toward a specific commercial goal (e.g. getting people to place purchase orders, or to click on ad links).

You also need clear alternatives to test and a large enough user base to get statistically valid results for each A/B pairing. The number of pairings and the duration of the test are factors in this as well.
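One practical detail if a test does run for a while: each user should see the same variant on every visit, or the pairings aren't valid. A minimal sketch of hash-based assignment (the experiment name and user id are placeholders):

```python
# Stable variant assignment: hashing a user id together with the
# experiment name keeps each user in the same arm for the whole test.
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-42", "signup-flow-test"))  # same answer every call
```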

If you are more interested in improving overall usability, you would be better off doing cooperative usability evaluations (sitting down with test users and observing them work through a task list, noting issues). It is cheaper and produces very useful results.

Ultimately it is all about what you are trying to accomplish.


