Verge.io Unveils Shared, Virtualized GPU Computing to Reduce Complexity and Cost

ANN ARBOR, Mich.–(BUSINESS WIRE)–Verge.io, the company providing an easier way to virtualize data centers, has added important new features to its Verge-OS software that bring users the performance of GPUs as virtualized, shared resources. This creates a cost-effective, simple, and flexible way to run GPU-based machine learning, remote desktop, and other compute-intensive workloads in an agile, scalable, and secure Verge-OS virtual data center.

Verge-OS abstracts compute, network, and storage from commodity servers and creates pools of raw resources that are easy to run and manage. The result is feature-rich infrastructure for environments and workloads such as clustered HPC in universities; ultra-converged and hyper-converged enterprises; DevOps and Test/Dev; compliant medical and healthcare; remote and edge computing, including VDI; and xSPs offering hosted services, including private clouds.

Current methods of deploying system-wide GPUs are complex and expensive, especially for remote users. Rather than provisioning GPUs for the entire organization, Verge.io allows users and applications with access to a virtual data center to share the computing resources of a single server equipped with a GPU. Users and administrators can “port” an installed GPU to a virtual data center simply by creating a virtual machine that has access to that GPU and its resources.

Alternatively, Verge.io can handle GPU virtualization and serve vGPUs to virtual data centers. This allows organizations to easily manage vGPUs on the same platform as all other shared resources.

According to Darren Pulsipher, Chief Solution Architect of Public Sector at Intel, “The market is looking for simplicity, and Verge-OS is like an ‘Easy Button’ for creating a virtual cloud that is so much faster and easier to configure than a cloud. With Verge-OS, my customers can migrate and manage their data centers anywhere and upgrade their hardware without any downtime.”

“The ability to deploy a GPU in a virtualized, converged environment and access that performance as needed, even remotely, radically reduces hardware investment while simplifying management,” said Yan Ness, CEO of Verge.io. “Our users increasingly need GPU performance, from scientific research to machine learning, so vGPU and GPU passthrough are easy ways to share and pool GPU resources just as they do with the rest of their compute resources.”

Verge-OS is ultra-lightweight software (less than 300,000 lines of code) that is easy to install and scale on low-cost commodity hardware and is self-managing based on AI/ML. A single license replaces separate hypervisor, networking, storage, data protection, and management tools to simplify operations and reduce the size of complex technology stacks.

Verge-OS-based secure virtual data centers include all enterprise data services, such as global deduplication, disaster recovery, continuous data protection, snapshots, long-distance synchronization, and automatic failover. They are ideal for creating honeypots, sandboxes, cyber ranges, isolated compute environments, and secure compliance enclaves that meet regulations such as HIPAA, CUI, SOX, NIST, and PCI. Nested multitenancy gives service providers, departmental enterprises, and campuses the ability to assign resources and services to groups and subgroups.

Currently, Verge.io supports NVIDIA Tesla and Ampere cards; additional licenses must be purchased for vGPU capacity.

For a full list of improvements, please visit https://updates.verge.io/release.html

About Verge.io

Verge.io offers an easier way to virtualize data centers and end the complexity of IT infrastructure. The company’s Verge-OS software is the first and only fully integrated virtual cloud software stack for building, deploying, and managing virtual data centers. Verge-OS offers significant capital savings, increased operational efficiency, reduced risk, and rapid scalability. For more information, visit www.verge.io or simply call 855-855-8300.

Follow us: LinkedIn and Twitter

Sharon D. Cole