Plutora Blog - Test Case Management, Test Management
Unit Testing in a CI/CD Enterprise Environment
Reading time: 10 minutes
Nearly every aspect of modern business is driven and controlled by software. The more efficiently new software can be brought to market, the more competitive a company can be. This creates strong motivation for a company to expedite every step – from concept to deployment – of the ALM process.
Almost as quickly as a software need or idea is identified, it can be translated into a requirements document and end up in front of a developer. To keep up with this increased pace, development teams are equally motivated to improve efficiency and are always seeking better practices, processes and tools. As a result, development life cycles are moving at a faster pace than ever before and will continue to do so.
The next challenge becomes “How does a development team develop software faster without jeopardizing quality?” The high stakes have been shown in a variety of real-world examples – glitches and bugs in prematurely released software. Some glitches are small, others can mean life or death for the company.
Example 1: Knight Capital Group.
Knight Capital Group’s development team rolled out an updated software release on Tuesday. First thing Wednesday morning, the new update automatically began buying up billions of dollars of stocks. In just 45 minutes, it carried out operations that should have been spread out over several days. The glitch cost the company $440 million and pushed it to the brink of bankruptcy in under an hour.
Example 2: NASA
NASA’s Mars Climate Orbiter burned up in the Martian atmosphere due to a unit-conversion error (one team worked in imperial units, another in metric). Later that year, another glitch prematurely shut down the engines of NASA’s Mars Polar Lander, resulting in a catastrophic crash. These two software glitches cost NASA $357 million.
Example 3: American Airlines
In 2017, American Airlines experienced a software glitch of its own – a new release of its scheduling system inadvertently approved every pilot vacation request submitted for the month of December. The company found itself with roughly 15,000 flights without assigned pilots. The holiday travel season is a make-or-break time for an airline, and the glitch left the company no choice but to offer pilots 150% of normal pay to cover the open flights, at an estimated cost of $8 million.
Even development powerhouses like Amazon and Facebook aren’t immune from notable software glitches and have had newsworthy glitches more than once. The question is, how do we avoid becoming one of these headline statistics?
Protecting the Product Quality
It goes without saying that all software needs to be thoroughly tested. But with shorter development life cycles and increased throughput, how can sufficient testing be carried out to maintain quality? This can be especially challenging in a Continuous Integration/Continuous Delivery (CI/CD) environment.
Incomplete or low-quality testing only increases the risk to the quality of the final product. It is then up to project managers and development managers to evaluate how much risk they are willing to accept for any given build and release.
That said, how can we continue to test in this environment of expedited delivery?
As in any software development organization, the key to effective testing and issue resolution is to identify the issue as early in the development life cycle as possible. This is called shifting left. The further left you can shift the identification of a glitch or bug, the better. A development rule of thumb states that each test phase a bug passes through undetected makes it ten times more expensive to fix.
Consider a simple bug: In the Coding phase, if discovered, it would have taken just 1 minute to resolve. If it passes undetected to the Unit Testing phase, an estimated 10 minutes would be needed to identify and resolve it. If it still goes undetected, it would then take the testers and developers an estimated 100 minutes combined in the integration test phase. With this model in mind, the importance of catching as many of these issues as possible in the Coding or Unit Test stages becomes clear.
For unit testing to be efficient, the individual tests need to be small and quick to execute. These tests need to be readily accessible and organized in such a way that they can be quickly located, identified, and executed. This way, when future code changes and various design and compliance iterations take place, the unit tests can be quickly reused to test only those modules of code that may have been impacted.
Unit testing is the front-line effort to identify these issues as early as possible. But what is unit testing?
“Unit testing is a software testing method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures, are tested to determine whether they are fit for use.”
Unit testing involves breaking the code down into small, bite-size units of code or logic that can be quickly and easily tested.
An example would be to create a unit test for the multiplication module of an online calculator. If the code included this mathematical formula:
A x B = C
This would be a good candidate for a unit test. The unit test would then plug in known factors for “A” and “B” and compare the calculated product against the expected value. If the result is as expected, the test will pass. However, if it is anything other than the expected answer, the test will fail.
Additional unit tests would be created for the same code segment using all potential types of factors – including negative numbers, decimals, non-numeric characters and various other combinations. Using this online calculator example, a separate set of unit tests would be created for each of the different mathematical functions (addition, subtraction, multiplication, division, etc.).
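The multiplication example above can be sketched as a small test class. The article mentions .NET frameworks such as NUnit and xUnit.net; the sketch below uses Python’s standard `unittest` module instead, and the `multiply` function is a hypothetical stand-in for the calculator’s multiplication module.

```python
import unittest

# Hypothetical multiplication module under test; in a real project this
# would be imported from the calculator's source code.
def multiply(a, b):
    return a * b

class MultiplyTests(unittest.TestCase):
    def test_positive_factors(self):
        # Plug in known factors "A" and "B" and compare against the
        # expected product.
        self.assertEqual(multiply(3, 4), 12)

    def test_negative_factor(self):
        self.assertEqual(multiply(-3, 4), -12)

    def test_decimal_factors(self):
        # Floating-point products are compared with a tolerance.
        self.assertAlmostEqual(multiply(0.5, 0.2), 0.1)

    def test_non_numeric_input_fails(self):
        # Non-numeric factors should raise an error, not return garbage.
        with self.assertRaises(TypeError):
            multiply("a", {})
```

Each test covers one type of factor, so a failure points directly at the case that broke. The class would typically live in its own file and be run with `python -m unittest`.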
The unit test is a segment of code created for the express purpose of testing a specific segment, module or class of the application code. Many development tools readily support unit testing within their existing framework. For example, if you are using .NET, some of those tools include MSTest, NUnit, MbUnit and xUnit.net.
A good pattern for arranging and formatting unit test code is the “AAA” or “Triple A” pattern, which stands for Arrange, Act and Assert.
- Arrange is where you initialize the specific objects and define the values, or data that is passed to the method being tested.
- Act is where you invoke the method being tested.
- Assert is the verification that the method being tested produces expected results.
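The three steps above can be made explicit with comments inside the test body. This is a minimal sketch in Python’s `unittest`, and `apply_discount` is a hypothetical function invented for illustration.

```python
import unittest

# Hypothetical method under test (not from the article).
def apply_discount(price, percent):
    return price * (1 - percent / 100)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_discount(self):
        # Arrange: initialize objects and define the input values.
        price = 200.0
        percent = 10
        # Act: invoke the method being tested.
        result = apply_discount(price, percent)
        # Assert: verify the method produces the expected result.
        self.assertAlmostEqual(result, 180.0)
```

Keeping the three sections in the same order in every test makes each one easy to read at a glance: setup at the top, one action in the middle, verification at the bottom.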
Once you’ve created your unit test, we recommend giving it a name that will make it easy to find and use in the future. Roy Osherove developed the following naming convention specifically for unit testing, and it is commonly used throughout the development community.
- UnitOfWork – the name of the method, class or project being tested
- StateUnderTest – represents the scenario or input values for the method, class or project
- ExpectedBehavior – expresses the expected result for the specified input
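Applied to the calculator example, the three-part convention produces names like the ones below. Osherove’s original convention uses PascalCase parts joined by underscores (e.g. `Multiply_TwoNegativeFactors_ReturnsPositiveProduct`); this sketch adapts it to Python’s snake_case, and `multiply` is again a hypothetical unit of work.

```python
import unittest

# Hypothetical unit of work.
def multiply(a, b):
    return a * b

class CalculatorTests(unittest.TestCase):
    # Name pattern: unit of work, state under test, expected behavior.
    def test_multiply_two_negative_factors_returns_positive_product(self):
        self.assertEqual(multiply(-2, -3), 6)

    def test_multiply_factor_by_zero_returns_zero(self):
        self.assertEqual(multiply(7, 0), 0)
```

A failing test named this way tells you what broke, under which input, and what should have happened – without opening the test file.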
There are several authoritative articles and books on these topics that will provide much more detail than we have space for in this article. One such book that is highly recommended is “Continuous Delivery” by Jez Humble and David Farley.
Who carries out Unit Testing?
Unit testing is primarily performed by the developer while modules are being created or reworked. However, unit tests can also be performed by a white-box tester.
Ideally in a CI/CD environment, the developer will receive a change request, make the requested changes to the code, and immediately run the associated unit tests for the impacted modules and classes. These unit tests should be easily accessible and searchable in a test script library and should be short and fast to run.
The book “Continuous Delivery” states that, ideally, compile and test should take around 90 seconds and shouldn’t take longer than 5 minutes; 10 minutes is the absolute maximum. Any longer than this and the developer runs the risk that multiple commits will have piled up before the build can be run again.
As mentioned before, in an ideal world, unit tests would be created at the same time as the original code. However, if this didn’t happen, code segments that still require unit tests can be identified during code reviews, or as additional code is added or modified for iterative changes and enhancements.
In Test Driven Development (TDD) environments, unit tests are created before the code itself. This practice is commonly used in both Agile and Extreme Programming (XP) methodologies. Again, there is not enough space in this article to get into detail but there are several advantages to this method.
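A brief sketch of the TDD rhythm: the test is written first and fails (red), then the simplest code that passes is added (green). The `is_leap_year` function below is a hypothetical example, not from the article; in practice you would see the test fail before writing the implementation.

```python
import unittest

# Step 2 (green): the simplest implementation that satisfies the tests.
# In TDD this function would not exist when the tests were first run.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 1 (red): these tests are written first and initially fail.
class LeapYearTests(unittest.TestCase):
    def test_year_divisible_by_four_is_leap(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_not_divisible_by_400_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_year_divisible_by_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))
```

Because the tests exist before the code, they double as an executable specification of the requirement, which is one of the advantages the TDD literature emphasizes.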
A Focus on the Objective
The goal of testing is to identify and resolve as many issues as possible before the product is released to production. Unit testing makes this effort much more efficient by shifting the discovery of those issues as far left as possible. This saves significant project time and money, and substantially improves the overall quality of the product.
Good unit test practices in a CI/CD environment are essential to the ALM cycle. When refactoring and design changes occur, unit tests can ensure that the core functionality remains intact and accurate. And when those “quick” or “minor” changes are made, unit testing can be quickly executed to ensure the impacted code continues to function as expected.
Unit Testing for the Enterprise ALM
Unit testing on an enterprise-scale has clear requirements: a series of unit tests that can be run quickly and easily for any module of the application code. These test scripts also need to be tracked and managed in a library that can be easily accessed, quickly filtered to the impacted modules and executed efficiently.
It would also be ideal if the given solution tracked every test and matched them up with the associated user requirements, code release and project, so stakeholders could quickly see status and test coverage in relation to requirements. This would allow them to effectively manage the risk associated with each release – and avoid becoming an unfortunate headline.
For enterprise development and test teams, Plutora Test meets and exceeds these requirements. It enables geographically dispersed teams to create an easy-to-use library of test scripts and test plans, including automated and manual tests for all stages of the ALM (unit, integration, regression, load, performance, user tests and more).
The built-in Requirements Traceability Matrix provides a real-time view of test coverage with each associated change request, release and project. With the ability to fully integrate existing tools like Jira, Version One, Jenkins, and many others, dev teams can custom create their own ideal end-to-end ALM solution.
Integrating these solutions, Plutora acts as the central hub, collecting, communicating and managing results and defects from one tool to another. It also acts as a data mart for all your ALM data using powerful built-in analytics tools, so stakeholders will always have accurate up to the minute information without taking the developer or tester away from their work.
Plutora is an enterprise ALM solution that enables your software delivery teams to streamline processes, collaboration and reporting like never before.