ACE-AI™

Modern networking platform built for Distributed AI

ACE-AI Advantage

ACE-AI delivers a unified fabric across the network for Distributed AI, from Datacenter to Edge to Multi-cloud

AI-Datacenter networking for Training models

ACE-AI employs IP Clos and Virtual Distributed Router (VDR) architectures for scalable GPU connectivity. It ensures high-performance, lossless connectivity through RoCEv2 support, Priority Flow Control (PFC), and Adaptive Routing, while maintaining low latency and high availability.
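The Adaptive Routing idea can be contrasted with classic ECMP in a few lines: static ECMP hashes a flow to a fixed uplink regardless of load, while an adaptive policy steers new flows toward the least-loaded spine uplink. This is an illustrative sketch only, not ArcOS code; the uplink names and load figures are hypothetical.

```python
import hashlib

# Hypothetical leaf switch with four spine uplinks (illustrative names).
UPLINKS = ["spine1", "spine2", "spine3", "spine4"]

def static_ecmp(flow_id: str) -> str:
    """Classic ECMP: hash the flow identifier to a fixed uplink, load-blind."""
    digest = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    return UPLINKS[digest % len(UPLINKS)]

def adaptive_pick(link_load: dict) -> str:
    """Adaptive routing: place the next flow on the least-loaded uplink."""
    return min(UPLINKS, key=lambda u: link_load[u])

# Made-up per-uplink utilization figures.
load = {"spine1": 80, "spine2": 10, "spine3": 45, "spine4": 70}
print(static_ecmp("10.0.0.1:5000->10.0.1.2:4791"))  # same answer every time
print(adaptive_pick(load))                           # follows current load
```

The point of the contrast: hash-based placement can pin a large RoCEv2 flow onto an already-congested link, while load-aware placement spreads elephant flows across the Clos fabric.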

Edge networking for Inferencing models

ACE-AI supports SmartNICs such as NVIDIA BlueField-3, enhancing inferencing capabilities at the edge. This enables security, traffic engineering, and efficient multi-cloud networking for smooth model operations.

Hybrid and Multi-cloud connectivity

ACE-AI provides seamless access to AI workloads across locations. Its Egress Cost Control (ECC) reduces the costs associated with large transfers of AI data, optimizing resource use across clouds.
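The economics behind egress cost control can be illustrated with simple arithmetic. The dataset size and per-gigabyte rates below are made-up placeholders, not actual cloud pricing or Arrcus figures; the sketch only shows why steering bulk AI transfers onto a cheaper path matters at scale.

```python
# Hypothetical figures: a training dataset replicated across clouds.
DATASET_GB = 50_000
EGRESS_RATE_PER_GB = 0.09    # assumed default internet-egress rate ($/GB)
PEERED_RATE_PER_GB = 0.02    # assumed optimized / peered-path rate ($/GB)

naive = DATASET_GB * EGRESS_RATE_PER_GB
optimized = DATASET_GB * PEERED_RATE_PER_GB
print(f"naive: ${naive:,.0f}, optimized: ${optimized:,.0f}, "
      f"saved: ${naive - optimized:,.0f}")
```

Even at these toy rates, a single cross-cloud replication of the dataset differs by thousands of dollars, and training pipelines repeat such transfers continually.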

AI Workloads Overview

Distributed AI offers significant computational efficiency, scalability, security, and latency benefits.

AI workloads are increasingly distributed. Common patterns include:

Distributed Model Training: AI/ML models are trained on multiple nodes within the network, enhancing efficiency and performance for large, complex models.

Federated Learning: AI/ML models are trained on data distributed across the network and across multiple device types, including smartphones, tablets, and wearables.

Inferencing at the Edge: inferencing models are deployed at the edge of the network, closest to end users, reducing latency and improving application performance.

Key requirements for networks supporting distributed AI include high-performance, lossless connectivity; predictable latency; high availability and resiliency with zero-impact failover; and fabric-wide visibility.
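The Federated Learning pattern described above can be sketched as federated averaging (FedAvg): each device trains on its own private data and only model weights, never raw data, are aggregated centrally. The devices, data, and linear model below are toy values invented for illustration.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear regression on a device's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # hypothetical ground-truth weights

# Five edge devices, each holding data that never leaves the device.
devices = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.01 * rng.normal(size=20)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(200):                    # communication rounds
    updates = [local_step(w, X, y) for X, y in devices]
    w = np.mean(updates, axis=0)        # server averages the weight updates
print(np.round(w, 2))                   # converges toward true_w
```

Only the two-element weight vector crosses the network each round, which is the property that makes the pattern attractive for privacy-sensitive edge devices.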

Learn more about ACE-AI™

ACE-AI Solution Brief

ArcOS™ Data Plane Adaptation Layer (DPAL)

Securely configure, operate, and monitor network devices

Distributed Data Center Solution Brief

Building Next Generation Distributed Data Centers for 5G and AI

ArcOS™ Datasheet

Get started today by taking a free TestDrive

© 2024 Arrcus Inc.

The hyperscale networking software company


2077 Gateway Place Suite 400 San Jose, CA, 95110
