Our Client needed to extend an existing test automation framework.
In addition, they needed to stabilize the existing framework by automating regression tests.
Qualitest redesigned an inherited test framework in which over 75% of the features were unusable.
We spent two months prototyping and automating small tasks with in-house Python tools.
Over the following two months, test pass rates rose from 50% to 98%.
This reduced the setup and execution time for API and GUI tests by 72%.
Our Client is a multinational technology company specializing in internet services, artificial intelligence, machine learning, data analytics, and AR/VR applications. They have developed a set of vision-based computing capabilities that can understand what you’re looking at and use that information to copy or translate text, identify plants and animals, explore locales or menus, discover products, find visually similar images, and take other useful actions.
The product then tries to return the most relevant and useful results, and its algorithms are not affected by advertisements or other commercial arrangements. If, for example, you see a cool building or landmark that you don’t recognize, our Client’s technology can tell you what you’re looking at and provide links to learn more. Similarly, whether on the road or in your garden, it’s not uncommon to discover plants and animals that you can’t quite identify or describe perfectly with words. Our Client helps you search what you see and learn all about it, like whether that beautiful plant can grow indoors.
Our Client was looking to bring on dedicated QA automation engineers to extend an existing test automation framework and add approximately twenty automated tests. The goal was to stabilize the existing framework and increase speed and efficiency by automating regression tests that were time-consuming and labor-intensive.
Qualitest deployed its technical ability and experience to analyze, design, and develop solutions that would extend the automation framework, allow new features to be added over time, and reduce regression test time.
This would have the added benefit of freeing the manual QA testers to focus on testing the latest features.
We took over an existing QA project and developed a successful test automation framework and a reliable automated regression testing program. A third-party company had previously built an unreliable testing framework, so we essentially picked up where that team left off. As we began our evaluation, we found that much of the earlier company’s framework was not fully functional or usable. We worked with engineers from the several teams that owned the applications under test, but these engineers were not fully dedicated to test development and lacked some of the expertise and background needed for test automation.
Our biggest challenge was the lack of a solid foundation for the test framework. Our Client did not have reliable emulator tests, and we had no way to fully automate tests on devices without being present during setup and execution. We had to think outside the box to come up with a solution, especially because the client’s AR/VR application underwent a significant redesign that made most of the previous tests obsolete.
Once assigned to the project, our automation engineers spent the first two months in discovery: executing manual test plans to familiarize themselves with the product, and trying different approaches to automating individual tasks with the client’s technologies to see what worked and what did not. We then focused on prototyping as we identified the type of framework required to test the client’s application effectively over the remaining six months of the project.
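The case study does not show the in-house Python tools themselves, but a discovery-phase prototype for automating a single device task might look like the following minimal sketch. It drives one step, launching the app on an attached Android device, through adb; the device serial and package name are placeholders for illustration, not the client’s actual identifiers.

```python
import subprocess

DEVICE_SERIAL = "emulator-5554"        # placeholder device/emulator ID
APP_PACKAGE = "com.example.arviewer"   # hypothetical package name

def adb(*args: str) -> str:
    """Run an adb command against one specific device and return stdout."""
    result = subprocess.run(
        ["adb", "-s", DEVICE_SERIAL, *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def launch_app() -> None:
    # Start the app's launcher activity via the Android monkey tool.
    adb("shell", "monkey", "-p", APP_PACKAGE,
        "-c", "android.intent.category.LAUNCHER", "1")

def app_is_running() -> bool:
    # pidof prints a PID only while the process is alive; it exits
    # nonzero otherwise, so don't treat that as an error here.
    result = subprocess.run(
        ["adb", "-s", DEVICE_SERIAL, "shell", "pidof", APP_PACKAGE],
        capture_output=True, text=True,
    )
    return result.stdout.strip() != ""

if __name__ == "__main__":
    launch_app()
    print("app running:", app_is_running())
```

Small, disposable scripts like this let the team learn quickly which tasks could be automated reliably before committing to a framework design.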
We also invested time upfront working with the client’s engineers to extend the existing framework. We set up and ran two existing tests, then created another 20 tests while iterating on and evolving the test infrastructure. At this point we made a design decision to build as much of our own infrastructure as possible, giving us better control over what was executed without breaking other internal tests.
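The case study does not detail that infrastructure, but one plausible shape for it, sketched here with pytest and adb (both assumptions on our part; the package name is again a placeholder), is a fixture that hands every test a freshly reset app, so a test only ever touches its own state and cannot interfere with other teams’ internal tests.

```python
import subprocess
import pytest

APP_PACKAGE = "com.example.arviewer"   # hypothetical package name

def adb(*args: str) -> None:
    # Thin wrapper so tests never shell out directly.
    subprocess.run(["adb", *args], check=True)

@pytest.fixture
def fresh_app():
    """Reset and launch the app before each test, stop it afterwards,
    keeping tests isolated from one another and from other suites."""
    adb("shell", "pm", "clear", APP_PACKAGE)          # wipe app data
    adb("shell", "monkey", "-p", APP_PACKAGE,
        "-c", "android.intent.category.LAUNCHER", "1")
    yield APP_PACKAGE
    adb("shell", "am", "force-stop", APP_PACKAGE)     # clean shutdown

def test_app_survives_launch(fresh_app):
    # pidof prints a PID only while the process is alive.
    result = subprocess.run(
        ["adb", "shell", "pidof", fresh_app],
        capture_output=True, text=True,
    )
    assert result.stdout.strip(), "app should be running after launch"
```

Owning the setup and teardown path like this is what made it safe to grow the suite from two tests to over twenty without destabilizing runs elsewhere.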