
Hewlett Packard Enterprise Deepens Integration with NVIDIA on AI Factory Portfolio

By Editorial Team | May 19, 2025 | 7 Mins Read


  • HPE Private Cloud AI, co-developed with NVIDIA, will support feature branch model updates from NVIDIA AI Enterprise and the NVIDIA Enterprise AI Factory validated design.

  • HPE Alletra Storage MP X10000 adds an SDK for the NVIDIA AI Data Platform to streamline unstructured data pipelines for ingestion, inferencing, training, and continuous learning.

  • HPE AI servers rank No. 1 in over 50 industry benchmarks, and the HPE ProLiant Compute DL380a Gen12 will be available to order with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs starting June 4.

  • HPE OpsRamp Software expands its accelerated compute optimization tools to support NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.

Hewlett Packard Enterprise announced enhancements to the NVIDIA AI Computing by HPE portfolio that support the entire AI lifecycle and meet the distinct needs of enterprises, service providers, sovereigns, and research and discovery organizations. These updates deepen the integration with NVIDIA AI Enterprise, expanding support for HPE Private Cloud AI with accelerated compute and launching an HPE Alletra Storage MP X10000 software development kit (SDK) for the NVIDIA AI Data Platform. HPE is also releasing compute and software offerings with the NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPU and the NVIDIA Enterprise AI Factory validated design.


“Our strong collaboration with NVIDIA continues to drive transformative outcomes for our shared customers,” said Antonio Neri, president and CEO of HPE. “By co-engineering cutting-edge AI technologies elevated by HPE’s robust solutions, we are empowering businesses to harness the full potential of these advancements throughout their organization, no matter where they are on their AI journey. Together, we are meeting the demands of today while paving the way for an AI-driven future.”

“Enterprises can build the most advanced NVIDIA AI factories with HPE systems to ready their IT infrastructure for the era of generative and agentic AI,” said Jensen Huang, founder and CEO of NVIDIA. “Together, NVIDIA and HPE are laying the foundation for businesses to harness intelligence as a new industrial resource that scales from the data center to the cloud and the edge.”

HPE Private Cloud AI adds feature branch support for NVIDIA AI Enterprise

HPE Private Cloud AI, a turnkey, cloud-based AI factory co-developed with NVIDIA, includes a dedicated developer solution that helps customers proliferate unified AI strategies across the enterprise, enabling more valuable workloads and significantly reducing risk. To further support AI developers, HPE Private Cloud AI will support feature branch model updates from NVIDIA AI Enterprise, which include AI frameworks, NVIDIA NIM microservices for pre-trained models, and SDKs. Feature branch model support will allow developers to test and validate software features and optimizations for AI workloads. Together with the existing support for production branch models, which include built-in guardrails, HPE Private Cloud AI will enable businesses of every size to build developer systems and scale to production-ready agentic and generative AI (GenAI) applications while adopting a safe, multi-layered approach across the enterprise.

HPE Private Cloud AI, a full-stack solution for agentic and GenAI workloads, will support the NVIDIA Enterprise AI Factory validated design.

HPE’s newest storage solution supports the NVIDIA AI Data Platform

HPE Alletra Storage MP X10000 will introduce an SDK that works with the NVIDIA AI Data Platform reference design. Connecting HPE’s newest data platform with NVIDIA’s customizable reference design will offer customers accelerated performance and intelligent pipeline orchestration to enable agentic AI. As part of HPE’s growing data intelligence strategy, the new X10000 SDK allows context-rich, AI-ready data to be integrated directly into the NVIDIA AI ecosystem. This empowers enterprises to streamline unstructured data pipelines for ingestion, inference, training, and continuous learning across NVIDIA-accelerated infrastructure. Major benefits of the SDK integration include:

  • Unlocking data value through flexible inline data processing, vector indexing, metadata enrichment, and data management.
  • Driving efficiency with remote direct memory access (RDMA) transfers between GPU memory, system memory, and the X10000 to accelerate the data path with the NVIDIA AI Data Platform.
  • Right-sizing deployments with modular, composable building blocks of the X10000, enabling customers to scale capacity and performance independently to match workload requirements.
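To make the pipeline stages above concrete (ingestion, metadata enrichment, vector indexing, retrieval), here is a rough, generic sketch in plain Python. It is not the X10000 SDK, whose API is not shown in this announcement; every name below is hypothetical, and the bag-of-words "embedding" stands in for a real embedding model.

```python
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Ingestion step: split raw unstructured text into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(chunk_text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would call a model."""
    return Counter(chunk_text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorIndex:
    """Minimal in-memory index: ingest() enriches each chunk with metadata,
    query() ranks chunks by similarity to the query."""
    def __init__(self):
        self.entries = []  # (embedding, metadata) pairs

    def ingest(self, doc_id: str, text: str) -> None:
        for n, c in enumerate(chunk(text)):
            self.entries.append((embed(c), {"doc": doc_id, "chunk": n, "text": c}))

    def query(self, q: str, k: int = 3) -> list[dict]:
        qv = embed(q)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[0]), reverse=True)
        return [meta for _, meta in ranked[:k]]

index = VectorIndex()
index.ingest("report-1", "GPU clusters accelerate training. Storage feeds the GPUs with data.")
hits = index.query("GPU data")
print(hits[0]["doc"])  # prints "report-1"
```

In a production deployment of the kind described here, the indexing and retrieval steps would run against accelerated infrastructure, with RDMA moving chunk data between storage and GPU memory instead of Python lists.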

Customers will be able to use raw enterprise data to inform agentic AI applications and tools by seamlessly unifying the storage and intelligence layers through RDMA transfers. HPE is working with NVIDIA to enable a new era of real-time, intelligent data access for customers from the edge to the core to the cloud.


Further updates about this integration will be announced at HPE Discover Las Vegas 2025.

Industry-leading AI server levels up with NVIDIA RTX PRO 6000 Blackwell support

HPE ProLiant Compute DL380a Gen12 servers featuring NVIDIA H100 NVL, H200 NVL, and L40S GPUs topped the latest round of MLPerf Inference: Datacenter v5.0 benchmarks in 10 tests, including GPT-J, Llama2-70B, ResNet50, and RetinaNet. This industry-leading AI server will soon be available with up to 10 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, delivering enhanced capabilities and exceptional performance for enterprise AI workloads, including agentic multimodal AI inference, physical AI, model fine-tuning, and design, graphics, and video applications. Key features include:

  • Advanced cooling options: The HPE ProLiant Compute DL380a Gen12 is available in both air-cooled and direct liquid-cooled (DLC) configurations, backed by HPE’s industry-leading liquid cooling expertise, to maintain optimal performance under heavy workloads.
  • Enhanced security: HPE Integrated Lights Out (iLO) 7, embedded in the HPE ProLiant Compute Gen12 portfolio, features built-in safeguards based on Silicon Root of Trust and enables the first servers with post-quantum cryptography readiness that meet the requirements for FIPS 140-3 Level 3 certification, a high-level cryptographic security standard.
  • Operations management: HPE Compute Ops Management provides secure, automated lifecycle management for server environments, with proactive alerts and predictive AI-driven insights that improve energy efficiency and global system health.

Two additional servers topped MLPerf Inference v5.0 benchmarks, providing third-party validation of HPE’s leadership in AI innovation and showcasing the advanced capabilities of the HPE AI Factory. Together with the HPE ProLiant Compute DL380a Gen12, these systems lead in more than 50 scenarios. Highlights include:

  • The HPE ProLiant Compute DL384 Gen12 server, featuring the dual-socket NVIDIA GH200 NVL2, ranked first in four tests, including Llama2-70B and Mixtral-8x7B.
  • The HPE Cray XD670 server, with 8 NVIDIA H200 SXM GPUs, achieved the top ranking in 30 different scenarios, including large language model (LLM) and computer vision tasks.

Advancing AI infrastructure with new accelerated compute optimization

HPE OpsRamp Software is expanding its AI infrastructure optimization capabilities to support the upcoming NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs for AI workloads. This software-as-a-service (SaaS) offering from HPE will help enterprise IT teams streamline operations as they deploy, monitor, and optimize distributed AI infrastructure across hybrid environments. HPE OpsRamp enables full-stack AI workload-to-infrastructure observability, workflow automation, and AI-powered analytics and event management. Deep integration with NVIDIA infrastructure, including NVIDIA accelerated computing, NVIDIA BlueField, NVIDIA Quantum InfiniBand and Spectrum-X Ethernet networking, and NVIDIA Base Command Manager, provides granular metrics to monitor the performance and resilience of AI infrastructure.

HPE OpsRamp gives IT teams the ability to:

  • Track the overall health and performance of AI infrastructure by monitoring GPU temperature, utilization, memory usage, power consumption, clock speeds, and fan speeds.
  • Optimize job scheduling and resources by monitoring GPU and CPU utilization across clusters.
  • Automate responses to certain events, for example, reducing clock speed or powering down a GPU to prevent damage.
  • Predict future resource needs and optimize resource allocation by analyzing historical performance and utilization data.
  • Monitor power consumption and resource utilization to optimize costs for large AI deployments.
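The automated-response behavior in the list above (reduce clock speed or power down a GPU on a dangerous event) amounts to a threshold policy over telemetry. As a minimal sketch, with hypothetical thresholds and field names (real policies and the OpsRamp rule engine are site- and product-specific):

```python
from dataclasses import dataclass

@dataclass
class GpuSample:
    """One telemetry reading; fields mirror the metrics listed above."""
    gpu_id: int
    temperature_c: float
    utilization_pct: float
    power_w: float

# Hypothetical thresholds chosen for illustration only.
THROTTLE_TEMP_C = 85.0
SHUTDOWN_TEMP_C = 95.0

def respond(sample: GpuSample) -> str:
    """Map a telemetry sample to an automated action: power down to
    prevent thermal damage, or reduce clock speed to shed heat."""
    if sample.temperature_c >= SHUTDOWN_TEMP_C:
        return "power_down"    # protect the hardware
    if sample.temperature_c >= THROTTLE_TEMP_C:
        return "reduce_clock"  # shed heat while keeping the job alive
    return "ok"

print(respond(GpuSample(0, 97.0, 99.0, 640.0)))  # power_down
print(respond(GpuSample(1, 88.0, 90.0, 500.0)))  # reduce_clock
print(respond(GpuSample(2, 60.0, 40.0, 250.0)))  # ok
```

A monitoring service would feed samples like these from GPU telemetry (e.g., NVML-style temperature and utilization queries) and act on the returned decision.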

Availability

  • HPE Private Cloud AI will add feature branch support for NVIDIA AI Enterprise by summer.
  • The HPE Alletra Storage MP X10000 SDK and direct memory access to NVIDIA accelerated computing infrastructure will be available starting summer 2025.
  • HPE ProLiant Compute DL380a Gen12 with NVIDIA RTX PRO 6000 Server Edition will be available to order starting June 4, 2025.
  • HPE OpsRamp Software support for NVIDIA RTX PRO 6000 Server Edition will be timed to market.
