Nebius unveiled Nebius AI Cloud 3.5, adding important new capabilities to its full-stack cloud platform that reduce operational friction and let AI builders prototype, test, and ship products faster.
The introduction of serverless capabilities gives developers the ability to launch workloads almost instantly, eliminating the need for AI teams to spend significant time configuring infrastructure before they can run experiments, or train and serve models in production. Infrastructure configuration and runtime management are handled by the Nebius platform, letting developers focus on building applications instead of managing environments.
Alongside serverless capabilities, Nebius is expanding its GPU offering with the NVIDIA RTX PRO 6000 Blackwell Server Edition for a wide range of workloads, including AI inference, industrial robotics, physical AI simulations, visual computing, and drug discovery.
Version 3.5 of Nebius AI Cloud, "Aether," also introduces Nebius's Data Transfer Service, which reduces data management overhead for teams working across environments by simplifying data migration and replication between external S3-compatible storage systems and Nebius cloud regions.
Configuration setup for Managed Soperator, Nebius's fully managed Slurm-on-Kubernetes solution, has also been overhauled to offer self-service users more options and granularity when creating a Slurm cluster. Managed Kubernetes observability has also been updated to give teams additional cluster-level control.
The AI application marketplace has also been redesigned to help users more quickly access the tools, models, and applications their workflows require.
Other updates in Aether 3.5 include improved user management and role-based permissions, making it easier for organizations to manage access across teams. New public APIs for billing data streamline the export process for finance and operations teams.
All the new features in the Aether 3.5 release are available now on the global Nebius AI Cloud infrastructure, with the serverless service accessible in public preview. The NVIDIA RTX PRO 6000 Blackwell Server Edition is available today.
Nebius AI Cloud Aether 3.5 at a glance
Serverless AI
- Elastic, pay-as-you-go compute accelerated by NVIDIA
- Simplified access to AI workloads without managing infrastructure
- Designed for prototyping, experimentation, and model inference evaluation
NVIDIA RTX PRO 6000 Blackwell Server Edition
- GPU option designed for a wide range of workloads, including AI inference, industrial robotics, physical AI simulations, visual computing, engineering research, and drug discovery
- Enables cost-efficient AI inference and simulation-heavy workloads
Data Transfer Service
- User-friendly tool for data transfer and replication across Nebius regions and S3-compatible object storage services
Managed Soperator
- An updated cluster configuration wizard for Nebius's fully managed Slurm-on-Kubernetes solution
Platform enhancements
- Updated navigation for the AI/ML application marketplace
- Improved disk encryption, boot image management, and Kubernetes-level observability
- Expanded controls for user management and role-based permissions
- Public API for exporting billing data in standardized formats
