AMD Instinct Series accelerators, delivered in the Cirrascale AI Innovation Cloud, provide leadership performance that is well suited to power even the most demanding AI and HPC workloads.
We've partnered with AMD to offer their AMD Instinct Series accelerators in the cloud for customers to test, utilize and fully deploy. These accelerators provide exceptional compute performance, large memory density, high bandwidth memory, and support for specialized data formats. AMD Instinct accelerators are built on AMD CDNA™ architecture, which features Matrix Core Technologies and supports a broad range of precision capabilities.
AMD ROCm™ is an open software stack that includes drivers, development tools, and APIs, enabling GPU programming from low-level kernels to end-user applications. ROCm is optimized for Generative AI and HPC applications and makes it easy to migrate existing code.
ROCm enables AI and HPC application development across a broad range of demanding workloads.
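A quick way to confirm the software stack on an instance is a short sanity check. The sketch below is a minimal illustration, assuming a ROCm build of PyTorch is installed; on ROCm builds, devices are reported through the familiar torch.cuda interface, which maps to HIP underneath.

    # Minimal sanity check, assuming a ROCm build of PyTorch on the instance.
    import torch

    print("HIP runtime:", torch.version.hip)          # set on ROCm builds, None otherwise
    print("GPUs visible:", torch.cuda.device_count())
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))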
Cirrascale-hosted AMD Instinct Series accelerators with advanced peer-to-peer I/O connectivity through a maximum of eight AMD Infinity Fabric™ links deliver up to 800 GB/s of I/O bandwidth. With a cache-coherent solution using optimized AMD EPYC™ CPUs and Instinct accelerators, Infinity Fabric unlocks the promise of unified computing, enabling a quick and simple on-ramp for CPU code to accelerated platforms.
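As a rough, non-Cirrascale-specific illustration of that peer-to-peer connectivity, the link status between device pairs can be queried at runtime; the sketch below assumes a multi-GPU instance with a ROCm build of PyTorch, and the ROCm tooling (for example rocm-smi) can report the full link topology in more detail.

    # Rough peer-to-peer visibility check, assuming a multi-GPU ROCm PyTorch instance.
    import torch

    n = torch.cuda.device_count()
    for a in range(n):
        for b in range(n):
            if a != b and torch.cuda.can_device_access_peer(a, b):
                print(f"GPU {a} can access GPU {b} directly")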
AMD Instinct™ MI350 Series GPUs are built on the cutting-edge fourth-generation AMD CDNA™ architecture, setting new standards for GenAI and HPC in the cloud. With up to 288 GB of HBM3E memory, the MI350 Series delivers exceptional performance for training massive AI models, high-speed inference, and complex scientific workloads. Expanded datatype support, including FP16, FP8, and next-gen FP6 and FP4, maximizes throughput and energy efficiency for advanced AI models.
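As a back-of-the-envelope illustration of what that memory capacity and datatype range mean for model sizing, the sketch below estimates how many parameters fit in 288 GB at each precision. It deliberately ignores activations, KV cache, and optimizer state, so real-world capacity is lower.

    # Rough parameter-capacity estimate per precision; ignores activations,
    # KV cache, and optimizer state, so practical limits are lower.
    HBM_GB = 288
    bytes_per_param = {"FP16": 2.0, "FP8": 1.0, "FP6": 0.75, "FP4": 0.5}
    for fmt, nbytes in bytes_per_param.items():
        params_billions = HBM_GB / nbytes
        print(f"{fmt}: ~{params_billions:.0f}B parameters in {HBM_GB} GB")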
Cirrascale has announced the upcoming availability of the MI350 Series GPUs in its AI Innovation Cloud. Be sure to sign up to preview these instances when available.
AMD Instinct™ MI300X accelerators are uniquely well-suited to power even the most demanding AI and HPC workloads, offering exceptional compute performance, large memory density, high bandwidth memory, and support for specialized data formats.
AMD Instinct MI300X accelerators are built on AMD CDNA™ 3 architecture, which offers Matrix Core Technologies and support for a broad range of precision capabilities—from the highly efficient INT8 and FP8 (including sparsity support for AI) to the most demanding FP64 for HPC.
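A minimal sketch of exercising that precision range from PyTorch on a ROCm instance is shown below: the same matrix multiply is run in several standard dtypes, from half precision for AI throughput up to FP64 for HPC-style accuracy. Which formats are hardware-accelerated (and how sparsity and FP8 are exposed) depends on the accelerator generation and the installed ROCm and framework versions, so treat this as illustrative only.

    # Same matmul at several precisions; acceleration per dtype depends on the
    # GPU generation and the installed ROCm / PyTorch stack.
    import torch

    dev = "cuda"  # the CUDA device namespace maps to HIP on ROCm builds
    a = torch.randn(4096, 4096, device=dev)
    b = torch.randn(4096, 4096, device=dev)

    for dtype in (torch.float16, torch.bfloat16, torch.float32, torch.float64):
        c = a.to(dtype) @ b.to(dtype)
        print(dtype, c.dtype, tuple(c.shape))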
The AMD Instinct™ MI325X GPU accelerators set a new standard in performance for Gen AI models and data centers. Built on the 3rd Gen AMD CDNA™ architecture, these accelerators are designed to deliver exceptional performance and efficiency for demanding AI tasks such as training expansive models and inference.
Each accelerator is equipped with an industry-leading 256 GB of next-gen HBM3E memory capacity and 6 TB/s of bandwidth. Combined with the processing power and datatype support required for AI deployments, the MI325X accelerators deliver the compute performance needed for any AI solution.
The AMD Instinct MI250 accelerator brings customers the compute engine selected for the first U.S. Exascale supercomputer.
AMD Instinct MI250 accelerators are built on AMD CDNA™ 2 architecture, which offers Matrix Core Technologies and support for a broad range of precision capabilities—from the highly efficient INT8 and FP16 to the most demanding FP64 for HPC.
Pricing
Cirrascale Cloud Services has one of the largest selections of GPU accelerators available in the cloud.
The above represents our most popular instances, but check out our pricing page for more instance types.
Not seeing what you need? Contact us for a specialized cloud quote for the configuration you need.
Ready to take advantage of our flat-rate monthly billing, no ingress/egress data fees, and fast multi-tiered storage?
Get Started