The IT industry grapples with complexity and security as Kubernetes adoption grows

The information technology industry has a complexity problem, and this is leading to deeper conversations among thought leaders about how to solve it.

The days of building applications on a single server with a monolithic architecture have given way to developing many microservices, packaging them into containers, and orchestrating production with Kubernetes across distributed clouds.

It’s no wonder that in the results of a global survey released by Pegasystems Inc. just two months ago, three out of four employees surveyed felt that the complexity of work had continued to increase and that they were overloaded with information, systems and processes. Almost half identified digital transformation as the cause.

Kubernetes has proven to be a great tool for driving modern IT infrastructure, but it has also played a big part in the design of overly complex systems. One of the tech industry’s most prominent thought leaders drew attention to this issue in a recent interview at DockerCon 2022, covered virtually by theCUBE, SiliconANGLE Media’s livestreaming studio.

“The world is going to crumble on its own complexity,” said developer advocate Kelsey Hightower in a conversation with Docker Inc. chief executive Scott Johnston. “The number of teams I meet, and I won’t name any names, say, ‘Kelsey, we’re going to show you our Kubernetes stack.’ Twenty minutes later, they’re on slide number 275. Who’s going to maintain this? Why are you doing this?”

A shift toward common interfaces

Hightower’s story highlights the need for standardized tooling within the Kubernetes developer community. As Kubernetes matured, it became a platform for building other platforms, and platform-as-a-service offerings such as Cloud Run, OpenShift, and Knative took over many operational management tasks for developers.

There has also been a push to create common interfaces within Kubernetes so that new capabilities can be adopted without requiring an implementation agreement across the entire open-source community. These include the Container Network Interface, the Container Runtime Interface, and custom resource definitions.
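Custom resource definitions, for example, let teams extend the Kubernetes API with their own object types without changing core code. A minimal sketch of the pattern follows; the `widgets.example.com` group and its fields are hypothetical:

```yaml
# Hypothetical CRD: registers a new "Widget" resource type with the API server.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer   # example field for the custom object
```

Once applied with `kubectl apply -f`, commands like `kubectl get widgets` work the same way they do for built-in resources, which is what makes CRDs a standard extension point rather than a fork.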

Despite the growing complexity of the IT industry, Hightower sees hope in the ability of the Kubernetes community to centralize around standardized tools.

“These contracts are important, and these standards are going to put complexity where it belongs,” Hightower said. “If you’re a developer, yes, the world is complex, but that doesn’t mean you have to learn all that complexity. When you standardize, you level the whole field and move much faster. It must happen.”

The challenge for many organizations is how to balance the demands of running a data-driven business with the complexity that comes with it. While some companies have simply dipped their toes into the container deployment waters, others have jumped headfirst into the pool.

A report on cloud operations from Canonical Ltd. revealed that Kubernetes users typically deploy two to five production clusters. The European Organization for Nuclear Research, known as CERN, the largest particle physics laboratory in the world, manages about 210 clusters. Then there’s Mercedes-Benz, which has taken a different approach entirely. The global automaker gave a presentation at KubeCon Europe in May describing how it uses more than 900 Kubernetes clusters.

The German automaker was an early adopter of Kubernetes. It started experimenting with the container orchestration tool in 2015, just a year after Google LLC open-sourced the technology.

“We started small as a grassroots initiative,” Andrea Berg, head of corporate communications at Mercedes-Benz North America Corp., said in comments provided to SiliconANGLE. “It was carried out with a ‘developer-to-developer’ mindset and has been increasingly successful. We’ve helped shift our company’s mindset toward cloud-native, free and open-source software.”

Mercedes-Benz Tech Innovation, the company’s subsidiary responsible for overseeing company-wide technology, has expanded its structure to support hundreds of application development teams. As the number of Kubernetes clusters grew, the company realized it needed a tool to manage them. It turned to Cluster API on OpenStack, a Kubernetes-native way to manage clusters across different cloud providers.
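With Cluster API, whole clusters become declarative Kubernetes objects reconciled by controllers in a management cluster. A minimal sketch of that pattern, with illustrative names (the provider resource version varies by release of the OpenStack provider):

```yaml
# Illustrative Cluster API object: a workload cluster declared as a resource
# in a management cluster and reconciled by the Cluster API controllers.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: team-a-prod          # hypothetical cluster name
  namespace: fleet           # hypothetical namespace
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:         # delegates provisioning to an infrastructure provider,
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha7
    kind: OpenStackCluster   # e.g., the OpenStack provider described in the talk
    name: team-a-prod
```

Because clusters are just resources, managing 900 of them reduces to the same `kubectl` and GitOps workflows teams already use for applications.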

The company also fostered a culture in which developers understood that once an application was built, there would be no separate operations desk to run it for them. Automation tools would drive DevOps.

“We realized that a single shared cluster would not meet our needs,” said Jens Erat, DevOps engineer at Mercedes-Benz, during a presentation at KubeCon Europe. “We had engineers with in-depth knowledge; we understood the technology and decided to build our own solution instead. You build it, you run it. There is an API for that.”

Knative eases the burden on developers

The push for a simpler approach to deploying Kubernetes in the enterprise received a boost in March when the Cloud Native Computing Foundation announced that it would accept Knative as an incubating project. Originally developed by Google, Knative is an open-source Kubernetes-based platform for managing serverless and event-driven applications.

The concept behind serverless technology is to package applications as functions, upload them to a platform, and have them run and scale automatically. Developers only need to deploy their applications; they don’t have to worry about where the code runs or how the underlying infrastructure handles it.
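In Knative terms, that deployment step reduces to declaring a single Service object. A minimal sketch, with a hypothetical container image:

```yaml
# Minimal Knative Service: Knative derives the route, revision, and autoscaler
# (including scale-to-zero) from this one declaration.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/example/hello:latest  # hypothetical image
          env:
            - name: TARGET
              value: "world"
```

Everything below this manifest, from request routing to scaling the workload down to zero between requests, is handled by the platform rather than the developer.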

A number of large companies have a vested interest in seeing Knative more widely used. Red Hat, IBM, VMware, and TriggerMesh have worked with Google to improve Knative’s ability to manage serverless and event-driven applications on the Kubernetes platform.

“We’re seeing a lot of interest,” said Roland Huss, principal software engineer at Red Hat Inc., in an interview with SiliconANGLE. “We heard before the move that many adopters were holding back from Knative because it wasn’t part of a neutral foundation. We continue to ramp up and really hope to attract more contributors.”

Knative’s road has been bumpy, revealing growing pains as the Kubernetes community has expanded. Google took some heat for initially deciding not to donate Knative before announcing a change of heart in December.

Ahmet Alp Balkan, one of the Google engineers who worked on various aspects of Knative until last year, wrote a blog post expressing concerns about how the serverless solution had been positioned within the developer community. Among Balkan’s concerns was the description of Knative as a set of building blocks rather than a finished platform.

“I think we’ve overestimated how many people on the planet want to build a platform layer as a Heroku-like service on top of Knative,” Balkan wrote. “Our message revolved around those ‘platform engineers’ or operators who could use Knative and build their UI/CLI experience on top of it. This was the target audience for the building blocks Knative had to offer. However, this turned out to be a very small and niche audience.”

The need for more security

Thought leaders in the Kubernetes community have also become more attentive to securing the container orchestration tool, and feedback from the user base has validated that focus.

In May, Red Hat released survey results that found that 93% of respondents had experienced at least one security incident in their container or Kubernetes environment. More than half of respondents had delayed or slowed deployment of applications due to security concerns. The report’s findings were given additional credibility in late June. Scanning tools used by cybersecurity research firm Cyble Inc. discovered 900,000 Kubernetes instances exposed online.

“True DevSecOps requires breaking down the silos between developers, operations, and security, including network security teams,” said Kirsten Newcomer, director of cloud strategy and DevSecOps at Red Hat, in an interview with SiliconANGLE at KubeCon Europe. “The Kubernetes paradigm requires involvement. It forces developers to get involved in things like network policy at the software-defined networking layer.”

There is also a growing list of open source tools to harden Kubernetes environments. KubeLinter is a static analysis tool that can identify misconfigurations in Kubernetes deployments. Security-Enhanced Linux, a default security feature implemented in Red Hat OpenShift, provides policy-based access control. And the CNCF Falco project acts as a kind of container security camera, detecting unusual behavior or configuration changes in real time. Falco has reportedly been downloaded over 45 million times.
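As an illustration of what such static checks look for, the deployment sketch below sets the kinds of fields KubeLinter flags when they are missing; the workload name and image are hypothetical:

```yaml
# Illustrative hardened Deployment: each commented field addresses a common
# misconfiguration that static analyzers such as KubeLinter report.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hardened-app                         # hypothetical workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hardened-app
  template:
    metadata:
      labels:
        app: hardened-app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0  # hypothetical image
          securityContext:
            runAsNonRoot: true                 # don't run as root in the container
            readOnlyRootFilesystem: true       # immutable root filesystem
            allowPrivilegeEscalation: false    # block setuid-style escalation
          resources:
            limits:                            # unset limits are a frequent lint finding
              cpu: "500m"
              memory: 256Mi
```

Running a linter over manifests like this in CI catches misconfigurations before they ever reach a cluster, which is where tools in this category complement runtime detectors such as Falco.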

With Kubernetes, it’s easy to get caught up in the metrics around adoption, security, and enterprise application deployments. Yet behind the increased reliance on containers is an important element that gets lost in the noise. Whether Kubernetes is complex or not, many people now depend on this technology to function.

Towards the end of his dialogue this spring with Docker’s Johnston, Hightower told a story about his previous work for a financial company that processed purchase transactions for families in need of government assistance. At one point, the transaction processor crashed and Hightower joined his colleagues in a “war room” as the programmers went through a series of painstaking steps to reboot the system and get the platform working.

“We just watched this screen, some things were turning green and some things were turning red, and the things turning red were declined payments,” Hightower recalled. “Each of those items that turned red on the dashboard represented someone with their whole family trying to buy groceries. Their only option was to leave all their groceries there. As a community, it reminds us that people always come before technology.”

Image: distelAPPArath/Pixabay


Sharon D. Cole