By R. Christopher Haines, Executive VP and Chief Operating Officer

Software needs to be tested. It’s that simple.

Before all the software vendors in the room get in an uproar, that’s not a knock on software vendors. It’s the nature of the beast. Anyone who knows the software development lifecycle knows testing is a part of it. So why do so many software vendors fight the fact that their software should be tested once it’s deployed by a customer? Isn’t it more important to be appropriately functional in the customer’s environment than it is in the vendor’s development shop? User Acceptance Testing (UAT) is as important as the testing software vendors do pre-deployment. But many software vendors seem to resist UAT.

What if a customer finds a bug the vendor didn’t catch? Is this a fatal flaw? Will the vendor’s reputation be irreparably damaged?

There are really two dogs in this fight. First, it should be the software vendor’s desire to have happy customers. Period. If customer-side UAT finds some issues before going into production — and the customer is happy to have found those issues — everybody wins.

Second, customers have to be educated about the fact that software is, well, software. It isn’t perfect. There are no rules that say everything is going to work as intended on first delivery. In the new, agile world, if you want it fast, something has to give. Often, it’s vendor-side testing. Fact of life.

Here’s an idea, admittedly self-serving: What if vendors pump out the software, and customers or their contracted testing providers do the testing? Developing software isn’t easy. So, vendors already have their hands full. Customers know what they want and how they want it to work. But they may not be accomplished at UAT.

To be fair to vendors, it’s not fair that customers and analysts get caught up in defect metrics that somehow become scorecards for vendors and their software. If we’re going to judge vendors by defect metrics, we’d better take close looks at their software, clearly define what counts as a defect, and state our reasons for judging anything to be one. (Before judging any software to be defective, did anyone look at the requirements submitted by the customer? If the customer expected A through Z but only specified A through H, that’s not a software defect.)

What if we recognize we’re all players on the same team? What if we grant that all of us in our respective roles are trying our best to do what we do? And what if the object of the game is to ensure the software — any software — meets the customer’s expectations?

If you want software to be perfect out of the box, give vendors all the time they need to develop and test it themselves. If you don’t have that kind of time, be prepared to encounter a greater number of issues on the customer side. And be okay with that. Most important, everyone involved (vendors, analysts, implementation partners, testing providers, et al.) needs to educate customers that software has many moving parts and even more variables.

Testing is just a part of playing the game to win.