
Maximum GPU utilization with data center switching and congestion control
HPE Juniper Networking’s QFX5240 switch delivers consistent, predictable RoCEv2 performance, while Broadcom Tomahawk 5 switch silicon and Thor 2 NICs add advanced load balancing and scheduling to sustain high GPU bandwidth. Together they ensure lossless RDMA transport and stable throughput under the heavy traffic generated by AI workloads. HPE Juniper Validated Designs (rigorously vetted, proven architectures with configuration guidance), reduced CPU overhead on Thor 2 NICs, and simple repeatability on Vultr unlock predictable performance, easy management, and faster time-to-production.
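For readers curious what "lossless RDMA transport" looks like in practice, the sketch below shows the general shape of a lossless RoCEv2 class-of-service setup on a Junos-based switch: priority-based flow control (PFC) on the RoCE traffic class plus ECN marking for congestion notification. This is an illustrative fragment only, not configuration from an HPE Juniper Validated Design; the profile names, queue number, and code point are assumptions, and thresholds are omitted.

```
/* Illustrative Junos-style CoS fragment for lossless RoCEv2 (names assumed) */
class-of-service {
    /* Enable PFC for the 802.1p code point carrying RoCE traffic */
    congestion-notification-profile roce-cnp {
        input {
            ieee-802.1 {
                code-point 011 {
                    pfc;
                }
            }
        }
    }
    /* Map RoCE traffic to a no-loss forwarding class (queue number assumed) */
    forwarding-classes {
        class roce-no-loss queue-num 3 no-loss;
    }
    /* Mark congestion with ECN instead of dropping, so NICs can back off */
    schedulers {
        roce-sched {
            explicit-congestion-notification;
        }
    }
}
```

In a real deployment these pieces are tied to specific interfaces, buffer thresholds, and DCQCN parameters on the NICs; the validated designs referenced above supply that guidance.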
Get started with the world's largest privately-held cloud infrastructure company
AI-Ready Ethernet Fabric for GPU Clusters
Vultr, HPE Juniper Networking, and Broadcom’s joint solution provides the optimized network and infrastructure layer for large-scale AI workloads.
Cluster-scale AI workloads need high-capacity GPUs, but they also need networks that can move data as fast as those GPUs can process it. When the network can't keep up, it becomes a bottleneck and workloads stall or fail.
Vultr’s partnership with HPE Juniper Networking and Broadcom addresses that common pain point: HPE Juniper Networking’s rail-optimized backend fabric, powered by Broadcom silicon and running on Vultr’s composable infrastructure, delivers high AI throughput efficiently and affordably.