Complexity is an old enemy that just won’t go away quietly

GUEST NOTE: The reality of IT operations today means additional layers of complexity, but there are still opportunities to standardize and simplify processes

In the pursuit of efficiency, many organizations today want to have a story of standardization and automation to tell.

Both come from a desire to simplify operating environments, and in particular the monitoring of these environments.

After all, to quote Tony Robbins, “complexity is the enemy of execution”. In this writer’s view, complexity is also the enemy of reliability, security and progress.

What most of these one-liners fail to recognize (or, in Robbins’ case, where they cease to apply to technological contexts) is that some complexity is to be expected in today’s hybrid digital environments and workplaces.

Most organizations carry “technical debt” from past investments and decisions, and some of it is still likely to be fully functional and fit for purpose. Similarly, most organizations now have hybrid IT operations: a mixture of applications and infrastructure that they own or do not own, each with its own set of dependencies and interdependencies, and accessible through their own networks or through the cloud and the public internet. Just getting IT to work in 2022 means that some complexity is inevitable.

And so, to add our own phrase to the mix, “complexity is today’s reality and change is the only constant” for many organizations.

That being said, there are opportunities to simplify complex structures and automate change. Both are considered desirable and achievable end states by many organizations and IT shops.

Reducing tool bloat, for example, is a target for many organizations.

Application estates in many places are unwieldy, with hundreds of apps on the books.

One of the reasons for this is silos: different parts of the organization have historically done things differently from each other, resulting in the use of many different tools that all seem to do the same thing, just for different business units.

Growth through mergers and acquisitions has also exacerbated tool bloat, and many companies that have grown by acquisition find themselves on long journeys of simplification and standardization as efficiencies become necessary.

In addition to tool bloat, the need to expand visibility and reduce manual change management in monitoring IT environments is pushing more and more organizations toward automation. The most impactful automations are those applied to repetitive (or repeatable) processes. Running them in a more software-defined way reduces the cognitive load on IT teams and should give them time to focus on more strategic efforts.
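
As a rough illustration of the kind of repetitive process that lends itself to this treatment, here is a minimal sketch in Python; the service names and endpoints are hypothetical, chosen only to show the shape of the idea. It performs, on a schedule, the sort of routine health check an operator might otherwise run by hand.

    # Minimal sketch of automating a repetitive monitoring check.
    # Endpoint names, URLs and the interval are hypothetical.
    import time
    import urllib.request

    HEALTH_ENDPOINTS = {
        "orders-api": "https://orders.example.internal/health",
        "payments-api": "https://payments.example.internal/health",
    }

    def check_once() -> list[str]:
        """Return the names of services that fail their health check."""
        failing = []
        for name, url in HEALTH_ENDPOINTS.items():
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    if resp.status != 200:
                        failing.append(name)
            except OSError:  # covers URLError, timeouts, refused connections
                failing.append(name)
        return failing

    if __name__ == "__main__":
        while True:
            for service in check_once():
                print(f"ALERT: {service} failed its health check")
            time.sleep(60)  # repeat every minute instead of by hand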

Visibility is a prerequisite

The path from complexity to standardization or automation may not be immediately apparent.

To achieve this clarity, organizations must be able to establish visibility into their entire architecture and assess the performance of each part or tool against the overall function.

Only then will duplication or overlap among tools become evident, will optimization opportunities be correctly identified, and will a composite picture emerge that allows the organization to begin making difficult – but necessary – choices to standardize or automate.

Consider how an end-to-end business process or transaction is instrumented and monitored.

The first step to improving it would be to map and understand all the parts or components that make up the true end-to-end service delivery chain, across networks and environments both inside and outside the perimeter. Once this is known, it becomes easier to identify which parts are most critical.
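
To make that map concrete, the sketch below assumes a simple in-memory dependency table rather than any particular discovery tool, and the component names are invented. Counting everything that transitively depends on a component gives a rough proxy for how critical it is to the chain.

    # Hypothetical sketch: an end-to-end delivery chain as a dependency map.
    # Component names and attributes are illustrative, not from a real system.
    CHAIN = {
        "browser":      {"depends_on": ["cdn"],          "inside_perimeter": False},
        "cdn":          {"depends_on": ["web-frontend"], "inside_perimeter": False},
        "web-frontend": {"depends_on": ["orders-api"],   "inside_perimeter": True},
        "orders-api":   {"depends_on": ["database"],     "inside_perimeter": True},
        "database":     {"depends_on": [],               "inside_perimeter": True},
    }

    def blast_radius(component: str) -> set[str]:
        """Everything that transitively depends on a component:
        a rough proxy for how critical it is to the chain."""
        dependants = {c for c, info in CHAIN.items()
                      if component in info["depends_on"]}
        for d in set(dependants):
            dependants |= blast_radius(d)
        return dependants

    # The database turns out to be the most critical link:
    print(sorted(blast_radius("database")))
    # -> ['browser', 'cdn', 'orders-api', 'web-frontend']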

The exercise should also extend to the monitoring that is in place for all components. Each component, its role and its function, must be understood, so that if performance degrades, the operations team has visibility into the degradation. The exercise should also expose how effective the different tools or methods used for monitoring are, whether there is any overlap between them, and whether there are any gaps in visibility across the chain.
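
One way to check this in practice is a simple cross-reference of components against the tools that watch them. The sketch below is hypothetical (the tool and component names are invented), but the same tally surfaces both the overlaps and the gaps:

    # Hypothetical sketch: find monitoring overlap and gaps across the chain.
    # Tool and component names are invented for illustration.
    COVERAGE = {
        "NetMonA":   {"cdn", "web-frontend"},
        "APMSuiteB": {"web-frontend", "orders-api"},
        "DBWatchC":  {"database"},
    }
    COMPONENTS = {"browser", "cdn", "web-frontend", "orders-api", "database"}

    monitored = {}
    for tool, covered in COVERAGE.items():
        for component in covered:
            monitored.setdefault(component, []).append(tool)

    overlaps = {c: tools for c, tools in monitored.items() if len(tools) > 1}
    gaps = COMPONENTS - monitored.keys()

    print("Watched by more than one tool:", overlaps)  # web-frontend, twice
    print("Not watched at all:", gaps)                 # the browser side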

Once you have assessed the end-to-end transaction or process in this way, you can begin to make decisions about which metrics are most representative of the digital experience and whether they can be monitored using a single solution rather than several existing tools.

It’s a similar story with automation.

To make the best decision on how to implement automation, organizations need to understand the composition and functioning of the overall service delivery chain. This will expose what is and is not repeatable, show where optimization gains can be achieved, and indicate what is and is not a good candidate for automation.

Some parts of the process may be too business-critical to be automated: releasing a configuration to production, for example, may be best left with administrators, as a misconfiguration could take down all services at once. But components that would have less, or negligible, impact in the event of a problem may be good candidates for automation.
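
That judgement can even be expressed as a simple policy gate. The sketch below uses invented task names and impact labels; it simply keeps anything marked business-critical out of the automation queue and with a human:

    # Hypothetical sketch: gate automation candidacy on business impact.
    # Task names and impact labels are illustrative only.
    TASKS = {
        "rotate-log-files":        "low",
        "restart-test-instance":   "low",
        "refresh-dashboard-cache": "medium",
        "release-config-to-prod":  "critical",  # stays with administrators
    }

    AUTOMATABLE_IMPACT = {"low", "medium"}

    def automation_candidates(tasks: dict[str, str]) -> list[str]:
        """Tasks safe to hand to automation; critical ones need a human."""
        return [name for name, impact in tasks.items()
                if impact in AUTOMATABLE_IMPACT]

    print(automation_candidates(TASKS))
    # -> ['rotate-log-files', 'restart-test-instance', 'refresh-dashboard-cache']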

While some complexity will always be unavoidable, and change will remain our only known constant, there is still plenty of ground to be gained through tool consolidation and intelligent automation.

Sharon D. Cole