I don’t know if this is a universal trait – and I’m not entirely sure why – but in my experience, Agile dev professionals tend to resist testing. I’m not talking about standard software Quality Assurance (incredibly, that’s not always a given either). I’m talking about A/B or multivariate testing, and true UX testing. Here are things I have actually heard from dev groups in my career as a marketing professional.
“We don’t have the resources to do A/B testing – it would double development time.”
The person who said this misunderstood what A/B testing is. In most cases, you’re testing something new against your existing site – which is already developed and in production. So you’re only developing the new thing – and you were doing that anyway. Where’s the extra work?
Introducing multivariate testing does mean developing multiple versions of something new – extra work for a dev team used to building only one version at a time. But that’s the right way to do it for maximum iterative improvement.
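And for what it’s worth, the bucketing mechanics are trivial next to the feature work itself. Here’s a minimal sketch of deterministic variant assignment – every name in it is hypothetical, not taken from any particular testing tool – that covers the A/B case and the multivariate case alike:

```python
import hashlib

# Hypothetical variant list: the existing page plus any new versions.
# "control" is the site you already have in production.
VARIANTS = ["control", "new-hero", "new-hero-alt-copy"]

def assign_variant(user_id: str, experiment: str, variants=VARIANTS) -> str:
    """Deterministically bucket a user into one variant.

    Hashing (experiment name + user ID) keeps assignment stable across
    visits, so the same visitor always sees the same version.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: this user lands in the same bucket on every visit.
print(assign_variant("user-12345", "homepage-hero"))
```

Hashing a stable user ID means there’s no extra state to store – the split is a few lines of routing, not a second development effort.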
“We are strapped for time – we don’t have time to develop things that won’t make it into production.”
You’re saying you don’t have time to find out whether the change you’re working on to boost click-through rates will ratchet up the bounce rate instead. That’s dangerous to your website’s health. It means the whole enterprise needs to guess right 100% of the time – which just doesn’t happen. You’re also essentially saying you only have time to develop and deliver something once – and that’s it. Not very iterative, is it?
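That click-through/bounce trade-off is exactly what a test is built to catch. Here’s a rough sketch of the readout – the numbers are made up purely for illustration:

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-statistic: is variant B's rate really different
    from variant A's, or is it noise?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: the new page lifts click-through...
print(two_proportion_z(200, 5000, 260, 5000))    # clicks: z ≈ 2.9, a real lift
# ...but it also drives up bounces. Ship it blind and you trade one for the other.
print(two_proportion_z(1500, 5000, 1680, 5000))  # bounces: z ≈ 3.9, also "real"
```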
“Our dev team/marketing team/product manager would be demoralized if all their hard work never saw the light of day.”
That’s why they call it work. That’s why they have to pay you to do it. Certainly, we can make it fun. But ultimately, decisions on what does and doesn’t go into the final software release MUST be based on how that software accomplishes business goals. Team members who only feel validated by having their work seen can start a blog. Or therapy.
“Doing UX testing on only six people isn’t statistically significant.”
The same person who told me this also told me that he wanted a study on the top 1% of revenue-generators because that would be enough customers to be statistically significant (but surely you understand that the top 1% isn’t a representative sample for…aw, screw it, where’s my propeller hat?). UX testing is qualitative, not quantitative – five or six subjects are plenty to surface most usability problems. If you’re not sure why, read Jeff Sauro’s great blog post about user testing and sample sizes.
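If you want the math behind the small-sample claim, it’s the problem-discovery model Sauro writes about: a problem that affects a proportion p of users gets spotted by at least one of n test subjects with probability 1 - (1 - p)^n. A quick sketch, using the oft-cited average problem frequency of about 31% from Nielsen and Landauer’s research:

```python
def discovery_rate(p: float, n: int) -> float:
    """Probability that at least one of n test users hits a problem
    that affects a proportion p of all users."""
    return 1 - (1 - p) ** n

# With an average problem frequency of ~31% (Nielsen & Landauer),
# five users already surface roughly 85% of usability problems.
for n in (1, 3, 5, 6):
    print(n, round(discovery_rate(0.31, n), 2))
# 1 -> 0.31, 3 -> 0.67, 5 -> 0.84, 6 -> 0.89
```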
“We don’t need to test – we have a good feel for what users want.”
Robust, qualitative user testing tells you where you need to tweak to enhance your software’s usability. It is necessary, in part, because you may be too much of an insider, or too in love with or defensive about your own work, to spot the usability problems that can torpedo software adoption. It also helps remove our own biases and opinions (the dreaded “I think users would want”) as impediments to truly successful software. Skip it and you risk shipping software that works but that users won’t use.
“I had my frat buddy/luddite office mate/mother test it, and they said it was great!”
User testing can’t be successful with insiders. Or insiders’ friends. Or insiders’ mothers. Real users won’t forgive you if it worked on your machine but doesn’t work on theirs. Users won’t let you explain the jargon that makes no sense to them and perfect sense to you. UX is a professional discipline – and no, you can’t do it just as well.