PEAK:AIO, the data infrastructure pioneer redefining AI-first data acceleration, today unveiled the first dedicated solution to unify KVCache acceleration and GPU memory expansion for large-scale AI workloads, including inference, agentic systems, and model creation.
As AI workloads evolve beyond static prompts into dynamic context streams, model creation pipelines, and long-running agents, infrastructure must evolve, too.
“Whether you’re deploying agents that think across sessions or scaling toward million-token context windows, where memory demands can exceed 500GB per model, this appliance makes it possible by treating token history as memory, not storage,” said Eyal Lemberger, Chief AI Strategist and Co-Founder of PEAK:AIO. “It’s time for memory to scale like compute has.”
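The 500GB figure tracks with simple arithmetic: a transformer’s key/value cache grows linearly with context length. A back-of-the-envelope sketch in Python, using assumed, purely illustrative model dimensions rather than anything PEAK:AIO has published:

```python
# Rough KV-cache sizing for a hypothetical dense 70B-class model.
# All dimensions below are illustrative assumptions, not vendor figures.
layers = 80          # transformer layers
kv_heads = 64        # key/value heads (dense attention, no GQA)
head_dim = 128       # dimension per head
elem_bytes = 2       # fp16/bf16

# Two tensors (K and V) are cached per layer, per token.
bytes_per_token = 2 * layers * kv_heads * head_dim * elem_bytes
print(f"{bytes_per_token / 1024:.0f} KiB per token")      # 2560 KiB

tokens = 1_000_000   # million-token context window
total_bytes = bytes_per_token * tokens
print(f"{total_bytes / 1024**4:.2f} TiB for 1M tokens")   # ~2.38 TiB
# Grouped-query attention (e.g., 8 KV heads) cuts this ~8x, to ~300 GiB;
# either way, long contexts land in the hundreds-of-GB-per-model range.
```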
As transformer models grow in size and context, AI pipelines face two critical limitations: KVCache inefficiency and GPU memory saturation. Until now, vendors have retrofitted legacy storage stacks or overextended NVMe to delay the inevitable. PEAK:AIO’s new 1U Token Memory Feature changes that by building for memory, not data.
The First Token-Centric Architecture Built for Scalable AI
Powered by CXL memory and integrated with Gen5 NVMe and GPUDirect RDMA, PEAK:AIO’s feature delivers up to 150 GB/sec of sustained throughput with sub-5-microsecond latency. It enables:
- KVCache reuse across sessions, models, and nodes
- Context-window expansion for longer LLM history
- GPU memory offload via true CXL tiering
- Ultra-low-latency access using RDMA over NVMe-oF
This is the first feature to treat token memory as infrastructure rather than storage, allowing teams to cache token history, attention maps, and streaming data at memory-class latency.
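To make the reuse pattern concrete, here is a minimal sketch of what treating token memory as shared infrastructure enables: caching K/V blocks keyed by a hash of their token prefix, so another session, model instance, or node can skip prefill for a shared prefix. The TokenMemoryTier class and its put/get calls are hypothetical illustrations, not PEAK:AIO’s API, and a plain dict stands in for the CXL/NVMe-oF tier:

```python
import hashlib
from typing import Optional

class TokenMemoryTier:
    """Hypothetical interface to an external token-memory tier.

    A real deployment would back this with CXL memory and NVMe-oF;
    an in-process dict is used here purely to illustrate the pattern.
    """
    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}  # prefix hash -> K/V blocks

    @staticmethod
    def _key(token_ids: list[int]) -> str:
        return hashlib.sha256(str(token_ids).encode()).hexdigest()

    def put(self, token_ids: list[int], kv_blocks: bytes) -> None:
        self._store[self._key(token_ids)] = kv_blocks

    def get(self, token_ids: list[int]) -> Optional[bytes]:
        return self._store.get(self._key(token_ids))

# Session A computes and publishes the KV cache for a shared system prompt.
tier = TokenMemoryTier()
prompt_tokens = [101, 7592, 2088, 102]            # illustrative token IDs
tier.put(prompt_tokens, b"<serialized K/V tensors>")

# Session B (or another node) reuses it instead of re-running prefill.
cached = tier.get(prompt_tokens)
if cached is not None:
    pass  # load blocks into GPU memory and decode from the cached prefix
```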
Unlike passive NVMe-based storage, PEAK:AIO’s architecture aligns directly with NVIDIA’s KVCache reuse and memory-reclaim models. This provides plug-in support for teams building on TensorRT-LLM or Triton, accelerating inference with minimal integration effort. By harnessing true CXL memory-class performance, it delivers what others cannot: token memory that behaves like RAM, not data.
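For teams already on TensorRT-LLM, the hook this alludes to is KV-cache block reuse. A minimal sketch, assuming TensorRT-LLM’s Python LLM API and its KvCacheConfig option (the model name is illustrative, and option names may vary by release):

```python
# Sketch: enabling KV-cache block reuse in TensorRT-LLM's Python API so
# requests sharing a prefix are served from cached blocks. Consult the
# TensorRT-LLM docs for the exact options in your installed version.
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import KvCacheConfig

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example model
    kv_cache_config=KvCacheConfig(enable_block_reuse=True),
)

params = SamplingParams(max_tokens=64)

# Two requests sharing a long prefix: the second benefits from reuse.
shared_prefix = "You are a helpful assistant. " * 50
outputs = llm.generate(
    [shared_prefix + "Summarize CXL.", shared_prefix + "Define NVMe-oF."],
    params,
)
for out in outputs:
    print(out.outputs[0].text)
```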
“While others are bending file systems to act like memory, we built infrastructure that behaves like memory, because that’s what modern AI needs,” continued Lemberger. “At scale, it isn’t about saving data; it’s about keeping every token accessible in microseconds. That is a memory problem, and we solved it by embracing the latest silicon layer.”
The fully software-defined solution runs on off-the-shelf servers and is expected to enter production by Q3. To discuss early access, technical consultation, or how PEAK:AIO can support AI infrastructure needs, contact PEAK:AIO.