We test and evaluate our clients' existing digital products to see where they can be improved for users. We test our own user experience designs with real users, to prove they work well. And we assess user satisfaction with products after they launch, so they can be continually improved.
Testing is often seen as the last thing to be done when developing a digital product.
But that mistakes just one kind of testing — acceptance testing — for all the testing that should be done during the lifetime of a product.
Concept testing, guerrilla testing, and in-person and remote usability testing are just as important, as are evaluation techniques such as expert (heuristic) review and customer satisfaction methods such as Net Promoter Score.
Testing and evaluation have different aims depending on when they are done.
When a client first engages us, we evaluate their existing digital products to discover how successfully these meet the needs of their customers or users. This often gives us our first insights into users' unmet needs.
Then, as we move into user experience design, we test sketch-based concepts before they become wireframes, wireframes before they become a clickable prototype, and the clickable prototype before we send the UX design on for visual design and development.
In all these cases, we are testing to ensure that we head off costly mistakes before they become embedded in the solution, and that the solution meets users' needs.
After a digital product is launched, we measure customer satisfaction and elicit ideas for improvement from users. The aim is to establish a process of continual improvement.
The output of our testing is usually a concise report, structured around these themes: