Red Hat on Edge Complexity

Image: Tomasz/Adobe Stock

Edge is complex. Once we get past the chilling enormity and earth-shattering reality of understanding this basic statement, we may be able to begin building frameworks, architectures, and services around the task before us. The Linux Foundation’s State Of The Edge report from last year put it succinctly: “The edge, with all its complexities, has become a rapidly evolving, powerful and demanding industry in its own right.”

Red Hat seems to have taken a stoic appreciation of the complex edge management role that awaits all enterprises now moving their IT stacks to straddle this space. The company says it sees edge computing as an opportunity to “extend the open hybrid cloud” to all the data sources and end users that populate our planet.

Pointing to edge terminals as divergent as those found on the International Space Station and your local neighborhood pharmacy, Red Hat now aims to clarify and validate the parts of its own platform that address specific edge workload challenges.

At the tip of the edge

The mission is this: while the edge and the cloud are intimately connected, we need to enable compute decisions outside of the data center, at the edge of the edge.

“Organizations are looking to edge computing as a way to optimize performance, cost, and efficiency to support a variety of use cases in industries ranging from smart city infrastructure and patient monitoring to gaming and everything in between,” said Erica Langhi, Senior Solutions Architect at Red Hat.

SEE: Don’t Curb Your Excitement: Trends and Challenges in Edge Computing (TechRepublic)

Clearly, the concept of edge computing presents a new way of seeing where and how information is accessed and processed to create faster, more reliable, and more secure applications. Langhi points out that while many software application developers may be familiar with the concept of decentralization in the broader network sense, there are two key considerations for an edge developer to focus on.

“The first is about data consistency,” Langhi said. “The more dispersed the edge data is, the more consistent it needs to be. If multiple users try to access or modify the same data at the same time, everything should be synchronized. Edge developers should consider messaging and data streaming capabilities as a powerful foundation to support data consistency to build edge-native data transport services, data aggregation, and integrated edge applications.”
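Langhi's consistency point can be illustrated with a minimal reconciliation sketch. Last-write-wins is only one of many strategies (stream-based replication, as she suggests, is another), and every name here (`EdgeRecord`, `merge`) is hypothetical, not a Red Hat API:

```python
from dataclasses import dataclass

# Hypothetical sketch: last-write-wins reconciliation for a record
# replicated across two edge nodes that modified it independently.
@dataclass
class EdgeRecord:
    key: str
    value: str
    updated_at: float  # wall-clock timestamp set by the writing node

def merge(local: EdgeRecord, remote: EdgeRecord) -> EdgeRecord:
    """Keep the most recent write when two replicas diverge."""
    return remote if remote.updated_at > local.updated_at else local

a = EdgeRecord("door/1", "open", updated_at=100.0)
b = EdgeRecord("door/1", "closed", updated_at=105.0)
print(merge(a, b).value)  # closed
```

Note the assumption of loosely synchronized clocks; real edge systems often prefer logical clocks or conflict-free replicated data types to avoid it.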

Sparse Edge Requirements

This need to highlight the intricacies of edge environments stems from the fact that this is a different kind of computing. No customer presents a “requirements specification” document or user interface preferences at this level; instead, we work with more granular technology constructs at the machine level.

The second key consideration for edge developers is security and governance.

“Operating on a large data surface means the attack surface is now extended beyond the data center with data at rest and in motion,” Langhi explained. “Edge developers can adopt encryption techniques to help protect data in these scenarios. With increased network complexity as thousands of sensors or devices are connected, edge developers must seek to implement automated, consistent, scalable, and policy-based network configurations to support security.”
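One small facet of protecting data in motion is making tampering detectable between a sensor and its gateway. The sketch below uses a standard-library HMAC for that; a real deployment would layer this under TLS with proper key provisioning, and the function names and payload are illustrative only:

```python
import hashlib
import hmac
import os

# Illustrative sketch: authenticating sensor readings in motion so a
# gateway can detect tampering. The shared secret would normally be
# provisioned securely to both device and gateway.
KEY = os.urandom(32)

def sign_reading(payload: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag the device attaches to each reading."""
    return hmac.new(KEY, payload, hashlib.sha256).digest()

def verify_reading(payload: bytes, tag: bytes) -> bool:
    """Gateway-side check, using a constant-time comparison."""
    return hmac.compare_digest(sign_reading(payload), tag)

msg = b'{"sensor": "temp-7", "value": 21.5}'
tag = sign_reading(msg)
print(verify_reading(msg, tag))         # True
print(verify_reading(msg + b"x", tag))  # False
```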

Finally, she says, by selecting an immutable operating system, developers can reduce the attack surface, helping organizations deal with security threats more effectively.

But what really changes the game from traditional software development to edge development infrastructure is the variety of target devices and their integrity. That’s the view of Markus Eisele in his role as developer strategist at Red Hat.

“While developers typically think of frameworks and architects think of APIs and how to tie everything together, a distributed system that has compute units at the edge requires a different approach,” Eisele said.

What is needed is a complete and secure supply chain. It starts with integrated development environments – Eisele and his team point to Red Hat OpenShift Dev Spaces, a zero-configuration development environment that uses Kubernetes and containers – which are hosted on secure infrastructure to help developers create binaries for a variety of target platforms and compute units.

Building on binaries

“Ideally, the automation at work here goes well beyond a successful build, to having tested and signed binaries on verified base images,” Eisele said. “These scenarios can become very challenging from a governance perspective, but should be repeatable and minimally invasive to the inner and outer loop cycles for developers. There is even less room for error, especially when thinking about the security of the generated artifacts and how it all comes together while still allowing developers to be productive.”

Eisele’s inner and outer loop reference points to the complexity at work here. The inner loop is the individual developer workflow, where code can be tested and changed quickly. The outer loop is the point at which code is committed to a version control system or enters a software pipeline closer to the point of deployment into production. The notion of software artifacts referenced above covers the whole range of elements a developer can use and/or create to build code: documentation and annotation notes, data models, databases, other forms of reference material, and the source code itself.
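The “tested and signed binaries” idea from Eisele's outer loop can be sketched very simply: a build pipeline records each artifact's digest, and an edge node refuses anything that does not match. Real supply chains use cryptographic signatures and attestation rather than a bare digest manifest, and the names below (`publish`, `verify`) are hypothetical:

```python
import hashlib

# Hypothetical sketch: a build pipeline publishes each artifact's
# SHA-256 digest to a manifest; a deployment target re-hashes what it
# receives and rejects any mismatch before running it.
manifest: dict[str, str] = {}

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def publish(name: str, data: bytes) -> None:
    """Pipeline side: record the trusted digest at build time."""
    manifest[name] = digest(data)

def verify(name: str, data: bytes) -> bool:
    """Edge side: accept the artifact only if its digest matches."""
    return manifest.get(name) == digest(data)

good = b"\x7fELF...binary contents..."
publish("app-v1.bin", good)
print(verify("app-v1.bin", good))         # True
print(verify("app-v1.bin", b"tampered"))  # False
```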

SEE: Recruitment kit: Back-end developer (TechRepublic Premium)

What we know for sure is that unlike data centers and the cloud, which have been around for decades, edge architectures are still evolving, and at something closer to an exponential rate.

Avoiding purpose-built design

“The design decisions that architects and developers make today will have a lasting impact on future capabilities,” said Ishu Verma, edge computing technical evangelist at Red Hat. “Some edge requirements are unique for every industry, but it’s important that design decisions are not purpose-built solely for the edge, as this can limit an organization’s future agility and scalability.”

Red Hat’s edge-centric engineers insist that a better approach is to build solutions that can work on any infrastructure — cloud, on-premises, and edge — as well as across industries. The consensus here seems to be firmly skewed toward choosing technologies like containers, Kubernetes, and lightweight application services that can help establish future-ready flexibility.

“Common elements of edge applications across multiple use cases include modularity, segregation, and immutability, which makes containers a good fit,” said Verma. “Applications will need to be deployed on many different edge tiers, each with unique resource characteristics. Combined with microservices, containers representing instances of functions can be grown or shrunk based on resources or underlying conditions to meet customer needs at the edge.”
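Verma's grow-or-shrink point is, in Kubernetes terms, what the Horizontal Pod Autoscaler does. A minimal sketch of that scaling decision, assuming a hypothetical controller that observes per-replica load against a target and respects an edge tier's resource ceiling:

```python
import math

# Illustrative sketch of replica scaling, mirroring the shape of the
# Kubernetes HPA formula: desired = ceil(current * observed / target),
# clamped to the edge tier's capacity. The controller itself is
# hypothetical; a real cluster would delegate this to the HPA.
def desired_replicas(current: int, load_per_replica: float,
                     target_load: float, max_replicas: int) -> int:
    wanted = math.ceil(current * load_per_replica / target_load)
    return max(1, min(wanted, max_replicas))

# Two replicas each running at 90% load against a 60% target,
# on a tier with room for at most four replicas:
print(desired_replicas(current=2, load_per_replica=0.9,
                       target_load=0.6, max_replicas=4))  # 3
```

The clamp is the edge-specific part: a constrained tier simply cannot grow past its hardware, so overload beyond `max_replicas` has to be shed or routed to another tier.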

Edge, but on a large scale

All of these challenges await us, then. The message is not to panic, but the task gets harder still when software application engineering for edge environments must scale safely. With the edge at scale comes the challenge of managing thousands of edge devices deployed in many different locations.

“Interoperability is key to the edge at scale, as the same application must be able to run anywhere without being refactored to fit a framework required by an infrastructure or cloud provider,” said Salim Khodri, EMEA edge go-to-market specialist at Red Hat.

Khodri’s comments line up with the fact that developers will want to know how they can leverage edge benefits without changing the way they develop, deploy, and maintain applications. In other words, they want to understand how they can accelerate the adoption of edge computing and combat the complexity of distributed deployment by making the programming experience at the edge as consistent as possible using their existing skills.

“Consistent tools and modern application development best practices, including CI/CD pipeline integration, open APIs, and native Kubernetes tools, can help address these challenges,” Khodri explained. “This is to provide industry-leading application portability and interoperability capabilities in a multi-vendor environment, as well as application lifecycle management processes and tools at the distributed edge.”

It would be difficult to count the main points of advice here on one hand; two hands would be a challenge, and it might also require the use of some toes. The watchwords are perhaps open systems, containers and microservices, configuration, automation and, of course, data.

The distributed edge may start from the DNA of the data center and maintain its intimate relationship with the backbone of the cloud-native computing stack, but it is, essentially, a disconnected pairing.

Sharon D. Cole