The Evolution of Test and the Enterprise Data Mart
Apr 25, 2018
Software testing has seen significant evolutionary changes that have driven greater efficiency and improved test coverage. However, this evolution has not removed the need for testing to be tracked, managed and communicated to others.
Without this information, stakeholders and decision makers can’t determine whether a release has been sufficiently tested. Certain levels of risk are acceptable and unavoidable for business operations, but this lack of transparency makes it impossible to effectively manage the risk inherent in every software release. Considering the dire consequences of a glitch or bug making it into production, decision makers simply cannot leave that risk unquantified.
The challenge facing enterprise development teams is that, traditionally, recording testing progress has meant pausing the testing itself. Imagine a race car driver having to stop after each lap to log the completed lap, their lap time and any issues they experienced – highly inefficient.
As software development continues to evolve, so do the activities associated with testing. While it seems automation is being pushed at every possible stage of the delivery chain, complete automation of every test isn’t the answer. Neither is relying solely on exploratory testing. There needs to be a healthy mix of both.
Every organization will have a different ideal mix of automation and exploratory testing that works best for its product, environment, culture and delivery needs. The challenge is to find the blend that maximizes release quality while still meeting delivery timelines. But how can you determine the ideal proportions of this mix when you lack the necessary testing data?
The Evolving Role of Testers
Traditionally, the tester’s role has been to sit at a desk and pound out test cases. While this is still a large part of a tester’s typical responsibilities, their role is also evolving. The smart tester will always work with a key purpose in mind – ensuring that business-critical testing and testing records are correctly managed.
Correct management can include collating the data needed for decision makers to act decisively. It can also involve identifying which test results warrant further exploratory testing to hunt down potential issues.
The role of the tester has further evolved to include managing product quality. They do this by driving quality discussions and other activities that improve the product. On many teams, testers become gatekeepers – they are the ones who determine whether a release will progress or be sent back to the developers.
The most valuable testers know why, when and how the solution was tested. They know the quality of each release and of the overall product. A good tester will find ways to raise the bar on quality for the entire product team. By providing crucial data, they make the decision maker’s job easier and more quality-centric.
The role of tester can include activities such as:
Unit Testing
Test Environment Management
Continuous Integration Testing
Automated Testing
Exploratory Testing
Requirements Testing
End User Testing
Continuous Deployment Testing
Continuous Delivery Testing
Production-side Testing
Test Reporting and Analytics
Requirements Coverage Analysis
And much more.
While figuring out the ideal mix of testing activities is essential, the less talked about but equally important challenge is how to accurately capture the resulting test data.
Managing and Recording Testing Activities
This might seem simple at a glance, but upon implementation the complexity escalates quickly. It’s one thing to have a tester take an hour out of their work day to record their tests, results and so on. It’s quite another to implement this strategy for every tester across a multi-national enterprise and have all of those reports roll up into a manageable and meaningful dataset.
This becomes exponentially more challenging when hundreds of projects, releases, requirements, release gates, compliance criteria, audits and thousands of test environments are in play. The complexity and scale involved only compound the question faced by decision makers – what is the risk of releasing this build to production?
Of course, this question is really made up of multiple sub-questions. Have all the requirements been adequately tested? Have any other potential issues been introduced? Has each release been tested by the delivery team, the performance team, end users, etc.?
Without current and complete information, there is really no way to effectively manage this type of risk. It’s this type of scenario that drives the crucial need for a testing data mart.
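To make that concrete, here is a minimal sketch, assuming a hypothetical per-run record and a made-up summary function, of what rolling individual test runs up into release-level answers might look like. None of the names or sample data reflect any particular tool’s data model.

```python
# A minimal, hypothetical sketch of rolling test runs up per release.
# The record fields, names and sample data are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class TestRun:
    release: str          # build or release identifier
    requirement_id: str   # requirement the test traces back to
    team: str             # delivery, performance, end users, etc.
    passed: bool

def summarize_release(runs: list[TestRun], all_requirements: set[str]) -> dict:
    """Roll individual test runs up into the answers decision makers ask for."""
    covered = {r.requirement_id for r in runs}
    passed = sum(r.passed for r in runs)
    return {
        "requirements_covered": f"{len(covered)}/{len(all_requirements)}",
        "uncovered_requirements": sorted(all_requirements - covered),
        "pass_rate": round(passed / len(runs), 2) if runs else 0.0,
        "teams_reporting": sorted({r.team for r in runs}),
    }

# In practice these records would be collected automatically from every
# team's tooling rather than typed up by hand after the fact.
runs = [
    TestRun("1.4.2", "REQ-101", "delivery", True),
    TestRun("1.4.2", "REQ-102", "performance", True),
    TestRun("1.4.2", "REQ-102", "end users", False),
]
print(summarize_release(runs, {"REQ-101", "REQ-102", "REQ-103"}))
```

Run against the three sample records, this reports two of three requirements covered, one requirement untouched, a 0.67 pass rate and which teams have weighed in – exactly the kind of answer the sub-questions above demand, and exactly what is impossible to produce when the underlying test data is never captured.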
Establishing a Testing Data Mart
How can a data mart be established that tracks and associates user requirements with test environments, edge testing, test data, load tests, unit tests, regression tests, user testing, test cases, test plans, etc.? How can all of these things be tracked and communicated to decision makers? And finally, can the decision makers use this information to confidently release the product to production?
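As a rough illustration only – the tables, columns and sample rows below are assumptions invented for this sketch, not a description of any specific product – a simple relational schema shows one way those artifacts can be associated so the questions above become queries:

```python
# A hypothetical, greatly simplified schema for a testing data mart.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE requirements (
    id TEXT PRIMARY KEY,
    description TEXT
);
CREATE TABLE test_environments (
    id TEXT PRIMARY KEY,
    name TEXT
);
CREATE TABLE test_cases (
    id TEXT PRIMARY KEY,
    requirement_id TEXT REFERENCES requirements(id),
    kind TEXT  -- unit, regression, load, exploratory, user, ...
);
CREATE TABLE test_runs (
    id INTEGER PRIMARY KEY,
    test_case_id TEXT REFERENCES test_cases(id),
    environment_id TEXT REFERENCES test_environments(id),
    executed_at TEXT,
    result TEXT  -- passed, failed, blocked
);

INSERT INTO requirements VALUES ('REQ-101', 'User can log in');
INSERT INTO test_environments VALUES ('ENV-QA1', 'QA cluster 1');
INSERT INTO test_cases VALUES ('TC-7', 'REQ-101', 'regression');
INSERT INTO test_runs VALUES (1, 'TC-7', 'ENV-QA1', '2018-04-20', 'passed');
""")

# Because every run links back to a requirement and an environment,
# "has this requirement been tested, where, and with what result?"
# becomes a query instead of a manually compiled report.
query = """
SELECT r.id,
       COUNT(tr.id) AS runs,
       SUM(CASE WHEN tr.result = 'passed' THEN 1 ELSE 0 END) AS passed
FROM requirements r
LEFT JOIN test_cases tc ON tc.requirement_id = r.id
LEFT JOIN test_runs tr ON tr.test_case_id = tc.id
GROUP BY r.id;
"""
for row in conn.execute(query):
    print(row)  # ('REQ-101', 1, 1)
```

The point is not this particular schema; it is that once requirements, test cases, runs and environments are linked in one place, coverage and risk questions stop depending on manually assembled reports.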
There are a variety of test tools available that can help with nearly every conceivable testing scenario. While these tools have bells and whistles that set them apart from their competition, the smart tester knows that clear analytics and reporting are vital. What’s needed is a data mart that functions as a central source of truth rather than an isolated tool – one that automatically collects data and distributes analytics to decision makers and stakeholders regardless of their organizational or geographic location.
For environments where efficiency, process and reporting are paramount, Plutora provides a solution that competitors just can’t match. In addition to streamlined test environment management and improved workflow efficiency, its comprehensive data mart tracks everything: user information, versions, builds, test environments, test cases, requirements coverage, change management, defect management, automation, audit trails and even results and events from your favorite integrated tools – everything is captured.
The Plutora data mart automatically captures every detail and result from every test throughout the enterprise. Powerful reporting and analytics tools then organize and feed real-time data to reports and dashboards, allowing testers, stakeholders and decision makers to see progress and results at any given moment.
Experience the freedom and flexibility of Plutora now with a free demo.