Plutora Blog - DevOps, IT Governance, Release Management
Come Talk to Us and Merck at Jenkins World 2018 – Booth 120
It’s that time again: Jenkins World is upon us. I love that the conference is being renamed DevOps World, as the new name spotlights a key point – this conference now covers a much wider area than discussions about an open source continuous integration server.
What Brings Us Together?
I studied electrical engineering in school but wrote xBase code to pay the bills. At some point during my junior year, I put all my focus onto software, as I saw it changing everything. I’d love to say that I had deep insight into the future, but the truth is that I just loved the logic of software and felt that my math classes had “jumped the shark” with div, grad, curl, and all of that other stuff.
As professionals in the field of software solutions, we are all here to go faster, deliver higher-quality applications, improve efficiency, and ensure we align those solutions with business needs. From “concept to cash,” software has truly changed the world – and continues to do so. The future of many companies truly depends upon their ability to deliver better solutions to their customers.
What does a CI/CD tool such as Jenkins have to do with that? A whole lot, it turns out. Jenkins sits at the forefront of the new development methods that involve culture, process, governance, metrics, and, most of all, going faster. Quite simply, it’s a key part of the automated delivery pipeline.
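To make “automated delivery pipeline” concrete, here is a minimal declarative Jenkinsfile sketch. The stage names, build commands, and deploy script are illustrative placeholders – not any particular team’s setup – but the structure (build, test, and deploy stages gated in sequence) is the pattern Jenkins automates:

```groovy
// Minimal declarative pipeline sketch. The shell commands and the
// deploy script are hypothetical placeholders for illustration only.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble'   // placeholder build command
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'       // unit tests gate the pipeline
            }
        }
        stage('Deploy') {
            when { branch 'main' }        // only deploy from the main branch
            steps {
                sh './deploy.sh staging'  // placeholder deploy script
            }
        }
    }
}
```

Checked into the repository as a Jenkinsfile, this turns every commit into a run of the full pipeline, with each stage failing fast before the next begins.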
What to Watch For?
Real World Experiences – Such as Merck
The real world is where theory meets reality. Governance and compliance meet the realm of automation. Culture meets new practices. These journeys are real and represent countless hours of thought, code and cultural shift.
On Monday, September 17th at 5:15, we’ll be joined by Keith Smola of Merck, who will discuss several key aspects of software delivery that have been part of their journey.
Some key questions he’ll discuss:
- Release orchestration at Merck
What does the process of driving software delivery look like? How do they interconnect the various toolchains and systems? How do they incorporate governance and regulatory compliance into the process? Merck has been around for a very long time, and their journey necessarily involves heavy compliance and regulatory oversight. They are making strides in improving delivery, and we’ll hear more about that along with their future plans.
- Tracking “idea to cash”
What role do metrics play in the delivery pipeline? How are they tracked and measured? What is the desired future state for how those metrics are gathered? How do they impact future delivery processes?
- Plutora as part of the DevOps toolchain
How does Plutora help in application delivery at Merck? How does it fit into the DevOps and legacy toolchains? What value does it provide?
Answers to these questions are part of their journey – make sure to come by and join the conversation.
Value Stream Management
If you’ve not heard of VSM, you’ve been missing out. Forrester recently released a New Wave on the topic. I expect Value Stream Mapping to be prevalent at Jenkins World this year, as it embodies the practice of measuring in order to improve efficiency and speed and ensure alignment with the business. Value Stream Management takes the mapping exercise further by incorporating more than just what-if scenarios: orchestration of the pipeline and management of the test environments – covering both manual and automated tasks – are included.
The point is that everyone is on this software delivery improvement journey. Getting 100% of our delivery pipelines automated, unified in architecture, and migrated to the cloud is a lofty and far-away goal. Our customers have been around for a long time – a number of them more than a century. With decades of M&A activity requiring systems to be integrated and supported, they likely won’t ever be 100% automated and cloud-based. That doesn’t mean there aren’t improvements still to be made. DevOps practices still apply to both mainframe and serverless architectures. Knowing where to focus the improvement effort – that is the key.
Test Environment Management
The unsung hero of every transformation is a resilient set of pre-production environments. I remember setting up my first CI pipeline, deploying my first unit tests against the code that was checked in, and watching my first green build. How frustrating it was to come in the next morning and find that the same unit tests and build had failed overnight – it was the same code! Investigation found that the build failed on roughly 30% of runs, and that the failures were due to environmental factors, not my code. A round of beefed-up memory and storage capacity and all was well again.

What a microcosm of what we’re dealing with today. The push to agile methodologies has placed ever-increasing pressure on ensuring proper configuration, availability, metrics, and test data. I’ve seen releases where more time was spent mucking around with the test environment than writing code. Proactively managing test environments and incorporating them into the delivery pipeline is truly one of the best investments any company delivering software can make – I have a number of customers whose ROI was less than 3 months.
The After Party with NADOG
Join us for the after party with NADOG! Since 2015, the North American DevOps Group has been free for all practitioners and their management. We are one of the sponsors providing food, drinks, materials, and entertainment.
NADOG started as a place to network with your local peers. Come out to the event to make professional connections and see what others are doing to overcome the same challenges. And make sure to introduce yourself to me as well – I’d like to hear YOUR story.