Plutora Blog - DevOps, Test management

Integration Testing in an Enterprise DevOps Environment


Stages of Development Testing

The need to test a new piece of software before it's released to customers is as old as software development itself. However, it's typically left to each organization to determine what types of testing are needed, and how thorough the testing should be. Testing methodology and procedures are often left to the individuals supporting each development team within an enterprise, creating scenarios where the right hand is unaware of what the left hand is doing. This continues until, at some point, things go horribly wrong. Management then steps in and demands more effective testing, and developers struggle to increase test effectiveness with their limited resources. The result is a constant balancing act between spending enough hours to test effectively and not missing deadlines.

Testers are frequently viewed as merely a second line of defense, there to catch issues the developers miss. Because of this mindset, the sad reality of testing is that when budgets become strained, it is often the first area to be trimmed, or in many cases completely cut. With this reality in mind, we are always looking for ways to improve testing efficiency, which is what we examine in this article.

Who Carries Out Integration Testing?

When people think of testing enterprise-scale software, they typically envision a group of dedicated testers hammering the newly developed software with a barrage of tests to expose its flaws and weaknesses. While an enterprise DevOps team may have a team of testers, they are not necessarily the only ones running tests.

In a DevOps environment, responsibility for testing falls on developers, testers, and a variety of other team members. However, each of these groups is generally responsible for different stages of testing. For example, developers perform both unit testing and integration testing, dedicated testers perform system testing, and various user groups perform user acceptance testing.

 

Integration Testing V Model

 

Purpose of Integration Testing
Exploring every stage of testing would require a much longer article, so here we’ll focus primarily on Integration testing in a DevOps environment. We’ll start with a simplified definition.

Integration Testing: The testing of a component or module of code to ensure it integrates correctly with other components or modules of code.

Integration Testing Modules
When we talk about integrating one component with another, we mean making sure that the two segments fit together correctly and exchange data correctly. Using our illustrated example, the search function (module #1) sends user-defined search criteria to module #2 via XML. This second module then translates the criteria and creates the search parameters to send on to the database via JDBC.
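To make this concrete, here is a minimal Python sketch of the idea. The function names, the XML format, and the test itself are all hypothetical stand-ins for the diagram's modules: one function plays the role of the search module serializing user-defined criteria, another plays the middleware translating them into query parameters, and an integration test verifies that data crosses the module boundary intact.

```python
import unittest
import xml.etree.ElementTree as ET


def build_search_xml(criteria):
    """Stand-in for module #1: wrap user-defined search criteria in XML."""
    root = ET.Element("search")
    for field, value in criteria.items():
        ET.SubElement(root, "criterion", name=field).text = value
    return ET.tostring(root, encoding="unicode")


def xml_to_query_params(xml_payload):
    """Stand-in for module #2: translate the XML into query parameters."""
    root = ET.fromstring(xml_payload)
    return {c.get("name"): c.text for c in root.findall("criterion")}


class SearchIntegrationTest(unittest.TestCase):
    def test_criteria_survive_the_module_boundary(self):
        # Integration test: data sent by module #1 must arrive at
        # module #2 unchanged and correctly structured.
        criteria = {"title": "widget", "status": "active"}
        params = xml_to_query_params(build_search_xml(criteria))
        self.assertEqual(params, criteria)
```

The test exercises the boundary between the two modules rather than either module's internals, which is exactly the scope that distinguishes integration testing from unit testing. It can be run with `python -m unittest`.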

The challenge with integration testing is that when you have multiple developers simultaneously developing multiple modules, you can’t test how two or more modules integrate together until all the modules are ready… or can you?

The answer is yes. There are different methods of integration testing that can be useful when testing integration of various modules of a product.

Types of Integration Testing

Big Bang Testing – This method waits until all modules of a given product are completed before any integration testing is carried out. Using the example in the image above, your new product under development has been split into six modules to maximize developer resources. With Big Bang testing, you would have to wait for all six modules to be completed before even starting integration testing.

This type of testing can be both cost and resource intensive, because in reality some modules will be completed long before others. Some of your developers sit unproductive while waiting for the last module to be completed, then scramble again when it's finally time to test. And when an issue is inevitably discovered, it is far more difficult and time intensive to track it down to the specific segment of code that needs to be fixed.

Incremental Testing – Using our same example of six total modules for a given product, when two connecting modules are completed, they can be integrated together and tested to make sure that the data being communicated is exactly what is expected.

While this type of testing is clearly more efficient, there is another form of incremental testing that is more efficient still. When development of the Search function is completed, instead of waiting for module #2 to be completed (to translate the search criteria into a JDBC database query), the developer can create a Stub to test the Search function against. Testing against stubs and drivers does not mean that further issues will not arise as modules are integrated and tested together; it means that problem code is identified and corrected in efficient stages, rather than in one final panicked rush.

  • Stub – A Stub is a small segment of code that simulates the response of the connecting lower level module. Using our example from the above diagram, it would receive the user defined search criteria from the Search module and provide a simple pre-defined response, or set of responses that would simulate what might be sent back from the database.
  • Driver – A Driver is similar to a Stub, but it simulates the data response of a connecting higher level, or parent module. Using our example from the diagram above, if the middleware component was completed first, a driver would simulate the sending of user defined search criteria, and also the receipt of the search results.
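Both test doubles can be sketched in a few lines of Python. All of the names below are hypothetical stand-ins for the diagram's modules: the stub answers the Search module from below with a canned database-style response, while the driver exercises the middleware from above by simulating the parent module sending criteria and receiving results.

```python
# Stub: stands in for the lower-level database/middleware module so the
# Search function can be tested before that module exists.
def database_stub(query_params):
    # Pre-defined canned response simulating what the database might return.
    return [{"id": 1, "title": "widget"}]


def search(criteria, backend):
    """Hypothetical Search module: forwards user-defined criteria to
    whatever backend it is wired to and returns the results."""
    return backend(criteria)


def middleware(query_params):
    """Hypothetical lower-level module under test: normalizes the
    criteria values before they would be sent to the database."""
    return {key: value.lower() for key, value in query_params.items()}


# Driver: stands in for the higher-level Search module so the middleware
# can be tested before the real parent module exists.
def search_driver(module_under_test):
    # Simulates the parent sending criteria and receiving the result.
    sent = {"TITLE": "Widget"}
    return module_under_test(sent)


# Top-down style: test Search against the stub.
assert search({"title": "widget"}, database_stub) == [{"id": 1, "title": "widget"}]

# Bottom-up style: drive the middleware from above.
assert search_driver(middleware) == {"TITLE": "widget"}
```

The two final assertions preview the Top Down and Bottom Up approaches described below: the same module can be isolated from either direction depending on which of its neighbors is finished first.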

In a DevOps environment, Incremental testing is typically the preferred method of testing because it offers the most efficient use of development and testing resources. Tests can be carried out on a module as soon as development on it is completed. Developers can typically test it very quickly against their stubs and drivers, making any necessary corrections to the code for the module on the spot.

Top Down Testing – This form of testing involves testing the high level or parent module(s) first, then testing lower level or child modules as development is completed and they are integrated. Stubs are used to simulate the data response of lower level modules until they are completed and integrated.

Bottom Up Testing – Lower level modules are tested first to ensure the individual modules are working correctly before they are integrated with their parent module. Drivers are used to simulate the parent module's data response until development of the parent module is completed and it is integrated.

Regression Testing

Anytime a development team performs any type of iterative development or issue resolution, testing needs to be done. Not just testing for the results of that one change, but testing to make sure that the change didn’t inadvertently break or change expected results elsewhere in the application.

To ensure that these other areas of the application are still functioning correctly, every aspect of the application needs to be re-tested to ensure it’s functioning as designed. This end-to-end precautionary testing is called regression testing.
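A regression suite is simply the accumulated test cases rerun after every change. As a small Python sketch (the function and its behavior are hypothetical), the new test covering a fix runs alongside the pre-existing tests that guard behavior elsewhere:

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical function that was just changed to fix a rounding bug."""
    return round(price * (1 - percent / 100), 2)


class RegressionSuite(unittest.TestCase):
    # New test written for the fix itself.
    def test_rounding_fix(self):
        self.assertEqual(apply_discount(19.99, 15), 16.99)

    # Pre-existing tests, rerun to confirm the fix broke nothing else.
    def test_no_discount(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

    def test_full_discount(self):
        self.assertEqual(apply_discount(100.0, 100), 0.0)
```

Running the whole class with `python -m unittest` after every change is regression testing in miniature; the challenge at enterprise scale is that the suite grows to cover every module, which is where automation becomes essential.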

Because full regression testing should be completed for each release, it can be very repetitive, as well as time and resource intensive to manually execute the full test case for every conceivable scenario, and every application module. This is where automated testing tools earn their value. That said, we will discuss Regression Testing in much more detail in a future article.

Integration Testing in an Enterprise DevOps Environment

Attempting to identify and discuss every possible factor for every possible DevOps scenario would be a ludicrous venture. Our objective here is to provide some solutions and tools to consider for increasing the efficiency of your integration testing. For additional reading on this subject, a great eBook is A Practical Guide to Testing in DevOps, written by Katrina Clokie.

In an Agile or DevOps environment where continuous delivery pipelines are common, integration testing should be carried out as each module is completed or adjusted. For example, in many continuous delivery pipeline environments, it’s not uncommon to have multiple code deployments per developer per day. Running a quick set of integration tests at the end of each development phase prior to deployment should be a standard practice in this type of environment.

To test efficiently in this manner, the new component must be tested either against existing completed modules in a dedicated test environment or against Stubs and Drivers. It's generally a good idea to keep a library of Stubs and Drivers for each application module, enabling quick, repeated integration testing. Keeping Stubs and Drivers organized like this also makes it easy to update them iteratively so they continue to meet your ongoing testing needs.
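One possible shape for such a library, sketched in Python with purely illustrative module and response names, is a single shared module of canned responses from which any test can request a stub for whichever neighboring module is unavailable:

```python
# test_doubles.py: a shared library of stubs, kept under version control
# alongside the application modules they stand in for (names illustrative).

CANNED_RESPONSES = {
    "search_backend": [{"id": 1, "title": "widget"}],
    "inventory_backend": {"sku": "W-100", "on_hand": 42},
}


def stub_for(module_name):
    """Return a stub callable that answers any request with the canned
    response recorded for the named module."""
    def stub(*_args, **_kwargs):
        return CANNED_RESPONSES[module_name]
    return stub


# Any test file can now borrow a stand-in for a missing module:
backend = stub_for("search_backend")
assert backend({"title": "widget"}) == [{"id": 1, "title": "widget"}]
```

Because every team pulls its test doubles from the same place, updating a canned response when a module's contract changes updates it for every test that depends on it.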

Another option to consider is a solution originally developed around 2002, called Service Virtualization. This creates a virtual environment, simulating module interaction with existing resources for testing purposes in a complex enterprise DevOps or Agile environment.

 


Wikipedia: Service virtualization emulates the behavior of software components to remove dependency constraints on development and testing teams. Such constraints occur in complex, interdependent environments when a component connected to the application under test is:

  • Not yet completed
  • Still evolving
  • Controlled by a third-party or partner
  • Available for testing only in limited capacity or at inconvenient times
  • Difficult to provision or configure in a test environment
  • Needed for simultaneous access by different teams with varied test data setup and other requirements
  • Restricted or costly to use for load and performance testing
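The core idea can be illustrated in a few lines of Python. The `VirtualService` class below is a hypothetical, minimal sketch: it records the request/response behavior of an unavailable component and replays it during testing. Real service virtualization tools do far more (protocol emulation, latency and load modeling, shared environments), so treat this only as an illustration of the concept.

```python
class VirtualService:
    """Minimal sketch of service virtualization: record the behavior of
    an unavailable or constrained component, then replay it in tests."""

    def __init__(self):
        self._recordings = {}

    def record(self, request, response):
        # Capture one request/response pair observed from (or specified
        # for) the real component.
        self._recordings[request] = response

    def handle(self, request):
        # Replay recorded behavior; unknown requests fail loudly, just as
        # a not-yet-completed component would.
        if request not in self._recordings:
            raise LookupError(f"no recorded behavior for {request!r}")
        return self._recordings[request]


# Stand in for a partner service that is only available at inconvenient times.
partner = VirtualService()
partner.record(("GET", "/rates/USD"), {"rate": 1.0})
assert partner.handle(("GET", "/rates/USD")) == {"rate": 1.0}
```

Tests that depend on the partner service now talk to the virtual one instead, so development and testing proceed regardless of the real component's availability.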

 

Selenium is an excellent tool for automating test scripts. Anytime you are testing a given module or piece of functionality repeatedly, it's time to consider automating the test. Automated testing is especially useful for regression testing: regression testing a large application with many modules can be very time intensive, but automated test scripts execute in a fraction of the time. It's a tool that is well worth the investment.

In any development cycle, bugs and issues inevitably arise. To manage them across many developers and teams, consider using good issue-tracking software to record, track, and manage reported issues throughout the lifecycle of issue resolution. Jira and Remedy are both great solutions for issue tracking and have a long history of refinement.
Integration Testing Keystone
The Plutora platform is a keystone solution for any enterprise Agile or DevOps team. It captures test metrics into a central repository, where every detail and test result across the enterprise can be accessed. Powerful reporting and analytics tools then organize and feed real-time data to reports and dashboards allowing testers, stakeholders and decision makers to know progress and results at any given moment. Transparency and visibility into the test process provide a springboard for test teams to move quickly, make informed decisions, and improve efficiency.

 

Dan Packer

Dan is an Industry Specialist at Plutora. Dan got his first taste of programming in high school, coding games in Basic. Since then, he has been directly involved with nearly every aspect of the Development and Release lifecycle: coding, testing, project management, team management, architecture, database, web & graphics design, and much more. He has implemented development lifecycle methodologies for companies like Sears Financial, Novell, Sprint, Daimler-Benz Financial, Sabre, Centex and T-Mobile, to name a few. In addition to his enterprise work, he has founded multiple companies, and continues to work as a business and technology advisor on various domestic and international projects. In total, Dan has managed and orchestrated hundreds of deployments, development initiatives and thousands of iterative code enhancements.
