Knowing how much time to spend testing software can be difficult. The pressures of time and cost often skew our perception of what it means to properly assure the quality of our products, and the move towards rapid software development can exacerbate the problem.
Taking the time upfront to consider the focus of your testing can pay dividends. Testing early to decrease the cost of fixing issues shouldn’t be the only priority; testing in the right way from the start, so that we are meeting the needs of our users, matters just as much.
Focus on your objective
It’s very easy to focus on functionality - plugging blocks of code together to create journeys through the software. But the most important aspect is the context within which the end user will actually use the software.
It sounds like the most obvious statement in the world, yet many projects focus their testing energies on “X% test coverage”, “finding all defects”, and other supposed panaceas of software delivery. These approaches are fraught with danger: they are unequivocal statements of intent that can’t necessarily be achieved, and chasing them increases reputational risk through the failure to deliver meaningfully tested software.
Instead, take the time to truly consider the objective for your product, what it aims to achieve, and, importantly, the context of its use in light of the problem that it is intended to solve. Allow this to drive your testing efforts.
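To make that concrete, here is a minimal, hypothetical sketch of what objective-driven testing can look like in practice, using Python with pytest. The `ExpensesApp` stub is an invented stand-in so the example runs on its own; the point is that the test is named and asserted around a user goal rather than a coverage figure.

```python
# test_submit_expense_claim.py - a sketch of an objective-driven acceptance
# test, written with pytest. "ExpensesApp" is a hypothetical in-memory
# stand-in so the example runs; a real suite would drive the actual product.

from dataclasses import dataclass, field


@dataclass
class Claim:
    amount: float
    description: str
    status: str = "awaiting approval"


@dataclass
class ExpensesApp:
    """Minimal stub of the system under test, just for illustration."""

    claims: list = field(default_factory=list)
    current_user: str = ""

    def log_in(self, user: str) -> None:
        self.current_user = user

    def submit_claim(self, amount: float, description: str) -> Claim:
        claim = Claim(amount=amount, description=description)
        self.claims.append(claim)
        return claim

    def claims_visible_to(self, user: str) -> list:
        return list(self.claims)


def test_employee_can_submit_claim_and_see_it_awaiting_approval():
    # Named for the user objective it protects, not the module it covers,
    # so a failure reads as a broken user goal rather than a missed line.
    app = ExpensesApp()
    app.log_in(user="employee")
    claim = app.submit_claim(amount=42.50, description="Taxi to client site")

    # Assert on what the user actually needs: the claim is in the system
    # and visibly awaiting approval.
    assert claim.status == "awaiting approval"
    assert claim in app.claims_visible_to(user="employee")
```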
Don’t wait until UAT to consider usability
So you’ve got the context right, and you’re testing your product according to the principles that made it a great idea in the first place. Those defects you’re finding are more meaningful and relevant already, but are you also considering what the user experience is like?
A fundamental oversight is to assume that usability defects are best found in User Acceptance Testing (UAT) by real users, and that while we’re delivering functionality we should focus on functionality alone. Building usability testing techniques into the ongoing acceptance of a product, from the design phase onwards, is crucial. It increases the likelihood that you’re finding and fixing serious usability issues early, and therefore more cost-effectively. Ultimately, though, you’ll be engineering usable, enjoyable software. That’s right – enjoyable!
If end users feel that your software has been created to be easy to learn and use, rather than merely ‘doing what it’s supposed to do’, they’ll have a better experience and feel more satisfied. It also means they won’t be tempted to look elsewhere for alternative products instead of waiting for a major release to prove the software’s value.
Users are increasingly discerning about their software, more demanding of its quality, and more expectant of the experience they will have using it. So increase your usability activity by introducing inspection methods and user-centric testing throughout your delivery cycle. This will enhance the customer experience, and ultimately the reputation of your software.
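As a rough sketch of what lightweight, automated inspection can look like on every build, the checks below use Playwright for Python against a placeholder URL. The three heuristics are deliberately crude examples (they ignore, for instance, `<label for=…>` associations) and would complement, not replace, human-led usability inspection.

```python
# usability_smoke_check.py - a sketch of automated usability heuristics
# that can run on every build, using Playwright for Python.
# The URL is a placeholder and the checks are illustrative only.

from playwright.sync_api import sync_playwright

URL = "https://example.com"  # placeholder: your application under test


def run_usability_smoke_checks() -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(URL)

        # Heuristic 1: every image should offer alternative text.
        unlabelled_images = page.locator("img:not([alt])").count()
        assert unlabelled_images == 0, f"{unlabelled_images} images lack alt text"

        # Heuristic 2: the page should declare a human-readable title.
        assert page.title().strip(), "page has no title"

        # Heuristic 3 (crude): visible inputs should carry an ARIA label.
        # This ignores <label for=...> associations, so treat a failure as
        # a prompt for human inspection rather than a hard verdict.
        unlabelled_inputs = page.locator(
            "input:not([aria-label]):not([aria-labelledby]):not([type=hidden])"
        ).count()
        assert unlabelled_inputs == 0, f"{unlabelled_inputs} inputs lack labels"

        browser.close()


if __name__ == "__main__":
    run_usability_smoke_checks()
```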
Engineer your performance while you still can
Refactoring and re-engineering your code is an expensive process, and it will only become more time consuming and difficult as your product grows.
Performance testing is key to ensuring not only that an application performs within the boundaries of acceptable usage and response times, but also that it can scale and grow successfully and meet peak demand based on customer usage.
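At the whole-system level this usually means load testing against modelled transaction volumes. As a minimal sketch, the Locust scenario below simulates users browsing and searching a hypothetical shop; the endpoints, task weights, and wait times are placeholder assumptions rather than a real usage model.

```python
# locustfile.py - a minimal sketch of a load test against modelled user
# behaviour, using Locust. Endpoints, task weights, and wait times are
# placeholder assumptions; a real test would be driven by usage data.

from locust import HttpUser, between, task


class ShopperUser(HttpUser):
    # Simulated "think time" between each user action, in seconds.
    wait_time = between(1, 5)

    @task(3)  # weighted: browsing happens roughly 3x as often as searching
    def view_product_listing(self):
        self.client.get("/products")

    @task(1)
    def search_for_product(self):
        self.client.get("/search", params={"q": "widget"})
```

You would run it with something like `locust -f locustfile.py --host https://your-app.example` (the host is a placeholder) and ramp the simulated user count towards your modelled peak.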
Why, then, wait and treat performance testing as a single-pronged, parallel stream of activity that potentially outputs a long list of performance concerns just as you are readying a release for your users?
Building performance factors into your testing from the start will give you confidence. An application can appear performant, yet early testing can surface small-scale indicators of what could become large-scale performance issues in a production release, allowing you to tune your codebase as you deliver.
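As a small-scale sketch of that idea, the pytest test below times a critical operation and fails on a deliberately generous budget, so gross regressions surface as you deliver rather than in a late performance phase. The operation, data volume, and 200 ms budget are illustrative assumptions only.

```python
# test_search_performance_indicator.py - a sketch of a small-scale
# performance "canary" inside an ordinary pytest test. The operation,
# data volume, and 200 ms budget are illustrative assumptions only.

import time


def find_matching_orders(orders, customer_id):
    # Stand-in for the real operation whose performance you care about.
    return [o for o in orders if o["customer_id"] == customer_id]


def test_order_search_stays_within_budget():
    # A deliberately modest dataset: we are hunting for small-scale
    # indicators (gross regressions), not modelling production volumes.
    orders = [{"customer_id": i % 1000, "total": i} for i in range(50_000)]

    start = time.perf_counter()
    results = find_matching_orders(orders, customer_id=42)
    elapsed = time.perf_counter() - start

    assert results, "expected at least one matching order"
    # A generous budget: a failure here signals a trend worth investigating,
    # not a breached non-functional requirement.
    assert elapsed < 0.2, f"search took {elapsed:.3f}s, budget is 0.200s"
```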
Combined with performance metrics from a more considered performance test phase, these kinds of insights generated at the integration or acceptance test level can build a much clearer view of the performance of particular aspects of your application. Such insights may otherwise go undetected if you’re concentrating solely on testing against modelled transaction volumes for the system as a whole.
While such low-level performance-factor tests may not fully satisfy specific non-functional requirements, they are a good indicator of potential problems that can be addressed in time, ensuring the optimal performance of your application and the best possible customer experience.