Working in an agile environment means continuously improving the work process. We felt the need to have a second look at our testing methods and found some interesting tools to step up our game and offer more guidance to our customers during the UAT (User Acceptance Testing) period.
Want to know more?
Why is testing important?
Needless to say, when you buy a car you expect it to have been thoroughly tested before you start driving it. You don’t want to discover something is faulty when you’re going 120 km/h on the freeway. The exact same thing goes for digital applications.
UAT, easy as 1-2-3?
We have different ways to test built features during sprints. The UAT (User Acceptance Testing) period typically takes place immediately after a sprint. It gives the customer the opportunity to test all the features that were built during that last sprint.
Sounds easy, right? Just surf around, check that nothing is broken, and if it all works, you’re done! But it’s actually not that straightforward. Or at least not for all customers. So we decided to simplify the process.
Why change something?
We felt that customers didn’t always understand how they were supposed to test features during the UAT period. They could definitely use some guidance in the process.
The customer’s interpretation of the features isn’t always what we intended, which can lead to misunderstandings and even frustration on both sides.
In an attempt to offer more support, we went looking for a way to write test scenarios into our testing environment in language a layperson can understand, free of technical jargon. And we found one.
Our testing environment
Every test starts from the user stories that were determined at the beginning of the project. Linked to these user stories are acceptance criteria. As soon as we have these, we can set up the test scenarios.
Our testing environment is partly automated. We’ve set up what is called a ‘continuous integration server’ that is used for testing only. The most recent version of the application is deployed there regularly. It issues a report and flags malfunctions when they occur. Here's a quick overview of how our testing environment is currently set up:
- Testing the code, also called ‘unit testing’. We check whether the app works at the architectural level: each test verifies that a piece of code behaves as it should in a specific context. We use PHPSpec for this.
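To illustrate the idea, here is a minimal sketch of a unit test. We use PHPSpec for our PHP code, but the principle is language-neutral, so the sketch below uses Python's built-in unittest module; the Cart class and its methods are purely hypothetical examples, not code from a real project.

```python
import unittest

# Hypothetical piece of code under test: a minimal shopping cart.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

# Each test pins down how the code should behave in one specific context.
class CartTest(unittest.TestCase):
    def test_empty_cart_totals_zero(self):
        self.assertEqual(Cart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add("book", 10)
        cart.add("pen", 2)
        self.assertEqual(cart.total(), 12)
```

On a continuous integration server, tests like these run on every deployment, so a regression shows up in the report immediately.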
- Testing the (expected) behaviour: Behaviour Driven Development (BDD). We test the behaviour that you would naturally expect. To do this, we use Behat. It works from the acceptance criteria of the user story: it allows us to formulate sentences in human language (no technical jargon) that are then translated into code by the system.
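Behat scenarios are written in Gherkin, a format built exactly for those human-language sentences. A sketch of what such a scenario might look like (the feature and its steps are hypothetical, for illustration only):

```gherkin
Feature: Newsletter subscription
  As a visitor, I want to subscribe to the newsletter
  so that I receive product updates.

  Scenario: Subscribing with a valid e-mail address
    Given I am on the homepage
    When I fill in "E-mail" with "jane@example.com"
    And I press "Subscribe"
    Then I should see "Thank you for subscribing"
```

Each line maps to a step definition in code, so the customer reads plain sentences while the system executes real checks behind the scenes.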
- Testing the usage. Enter the human factor: a dedicated tester continuously goes through all functionalities manually. This manual test surfaces anomalies that cannot be caught in BDD. The added value: the manual tester writes out detailed test scenarios, which can later serve as a manual for customers to perform their own testing.
- Testing the context. We call this automated browser testing, performed with BrowserStack. It takes screenshots of the most important pages and the most critical user interactions on the different devices the application will run on. This saves frontend developers a lot of precious time, and it also produces a test report that can serve as a deliverable for the customer.
- Testing the performance. We use New Relic to monitor an application’s performance.
Advantages for customer and supplier
These changes offer advantages to both the customer and our own developers:
- development goes faster, so we can deliver faster
- customers receive guidance during testing
- automated testing alerts us sooner when something malfunctions
If you want to learn more about the way we set up our testing environment, or if you have an upcoming project you would like to discuss, please give us a call.