Deep Learning

Zhejiang University Researchers Propose Fuyou: A Low-Cost Deep Learning Training Framework that Enables Efficient 100B Huge Model Fine-Tuning on a Low-End Server with a Low-End GPU and Limited CPU Memory Capacity

March 16, 2024 · 4 min read


The advent of large language models (LLMs) has sparked a revolution in natural language processing, captivating the world with advanced capabilities that stem from the massive number of parameters these models use. LLMs, epitomized by the transformative power of dense transformer models, have not only broken records in accuracy but have also become indispensable assets in knowledge-management tasks. Recently, the size of dense transformer models has grown from 1.5B (GPT-2) to 540B (PaLM), an unprecedented evolution in linguistic capability.

While the potential of LLMs is undeniable, a critical challenge arises from their immense parameter sizes, which overwhelm even the most powerful GPUs, currently peaking at 80GB of memory. When conducting stochastic gradient descent-based optimization, GPU memory must accommodate these massive parameters together with their associated optimizer states. To host such a huge model, one can aggregate device memory from multiple GPUs, and it takes 32 NVIDIA A100 GPUs to fit a 100-billion-parameter model for fine-tuning. However, this approach introduces prohibitive costs for most academic researchers, who rarely have the budget for many high-end GPU servers.
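A back-of-the-envelope estimate (ours, not from the paper) shows why fine-tuning at this scale overwhelms a single GPU. Assuming a standard mixed-precision Adam layout of roughly 16 bytes of state per parameter (fp16 weight and gradient plus fp32 master weight, momentum, and variance):

```python
# Rough memory estimate for 100B-parameter mixed-precision Adam fine-tuning.
# The 16 bytes/param figure is an assumption (fp16 weight + fp16 gradient
# + fp32 master weight, momentum, and variance), not a number from the paper.
PARAMS = 100e9
BYTES_PER_PARAM = 16
GPU_MEMORY = 80e9             # one NVIDIA A100-80GB

total_bytes = PARAMS * BYTES_PER_PARAM
gpus_for_states = total_bytes / GPU_MEMORY

print(f"model + optimizer states: {total_bytes / 1e12:.1f} TB")   # 1.6 TB
print(f"A100-80GB GPUs for states alone: {gpus_for_states:.0f}")  # 20
```

Activations, temporary buffers, and fragmentation push the practical requirement higher, consistent with the 32 A100s cited above.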

Researchers from Zhejiang University proposed Fuyou, a low-cost training framework that enables efficient 100B huge model fine-tuning on a low-end server with a low-end GPU and limited CPU memory capacity. It is implemented on PyTorch, a popular deep learning framework. Compared with other systems such as ZeRO-Infinity, Fuyou can fine-tune GPT-3 175B on a consumer RTX 4090 GPU with high GPU utilization, whereas ZeRO-Infinity fails to fine-tune it at all.

The key idea is to treat SSD-CPU communication as a first-class optimization dimension, strategically harmonizing computation and data swapping to unlock the full potential of GPU utilization. This unfolds through three innovations:

  • A synchronous out-of-core CPU optimizer that overlaps with backward propagation to maximize GPU utilization.
  • A GPU-CPU-SSD fully pipelined activation swapping mechanism that allows fine-tuning of significantly larger models.
  • An automatic activation swapping management scheme that determines the optimal amount of activations to swap in order to minimize epoch time.
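The first idea above can be sketched in miniature. The following is a schematic illustration (not Fuyou's actual code) of overlapping a CPU-side optimizer update with an ongoing backward pass: as each layer's gradient becomes ready it is handed to a background CPU thread, which applies the weight update while the rest of the backward computation continues. Layer names, the SGD rule, and the toy gradients are all our own placeholders.

```python
import threading
import queue

def cpu_optimizer_worker(grad_queue, weights, lr=0.1):
    """Consume per-layer gradients and apply SGD updates out of band,
    so the update overlaps with the remaining backward computation."""
    while True:
        item = grad_queue.get()
        if item is None:              # sentinel: backward pass finished
            break
        layer, grad = item
        weights[layer] -= lr * grad   # out-of-core update on the CPU

weights = {f"layer{i}": 1.0 for i in range(4)}
grads = queue.Queue()
worker = threading.Thread(target=cpu_optimizer_worker, args=(grads, weights))
worker.start()

# Simulated backward pass: gradients are produced layer by layer
# (last layer first) and enqueued immediately, rather than waiting
# for the whole pass to finish before the optimizer step begins.
for i in reversed(range(4)):
    grads.put((f"layer{i}", 0.5))

grads.put(None)
worker.join()
print(weights)   # each weight: 1.0 - 0.1 * 0.5 = 0.95
```

In the real system the producer is GPU backward propagation and the consumer streams optimizer states between CPU memory and SSD, but the overlap structure is the same.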

In model fine-tuning, Fuyou delivers exceptional performance whether on a cutting-edge A100-80GB or an RTX 4090 in a commodity server. When fine-tuning a GPT-3 175B model, Fuyou achieves 87 TFLOPS on the 4090 and 172 TFLOPS on the A100-80GB. It also reaches up to 3.47× the TFLOPS of ZeRO-Infinity when fine-tuning a GPT-3 13B model. To show how cheap SSDs can enhance training throughput, the cost-effectiveness of Fuyou is compared with Megatron-LM on DGX-2 nodes using tensor parallelism. Throughput is compared against the total price of GPUs and SSDs in a server, where Fuyou achieves at most 1.70× the cost-effectiveness of Megatron-LM.
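The cost-effectiveness metric used in that comparison is simply throughput divided by the combined price of GPUs and SSDs. A minimal sketch with entirely made-up placeholder prices (so the resulting ratio will not match the paper's reported 1.70×):

```python
# Cost-effectiveness = training throughput / (GPU price + SSD price).
# All dollar figures below are hypothetical placeholders for illustration.
def cost_effectiveness(throughput_tflops, gpu_cost, ssd_cost):
    return throughput_tflops / (gpu_cost + ssd_cost)

# e.g. a consumer GPU plus cheap SSDs vs. a pricier data-center GPU alone
consumer = cost_effectiveness(87, gpu_cost=1600, ssd_cost=400)
datacenter = cost_effectiveness(172, gpu_cost=15000, ssd_cost=0)

print(f"relative cost-effectiveness: {consumer / datacenter:.2f}x")
```

The point of the metric is that adding inexpensive SSDs raises the denominator only slightly while substantially raising achievable throughput on cheap hardware.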

In conclusion, this paper proposed Fuyou, a low-cost training framework that enables efficient 100B huge model fine-tuning on a low-end server with a low-end GPU and limited CPU memory capacity, implemented on PyTorch. It achieves 87 and 172 TFLOPS when fine-tuning GPT-3 175B on an RTX 4090 and an A100-80GB, respectively. It also reaches up to 3.42× and 6.73× the TFLOPS of ZeRO-Infinity and Colossal-AI when fine-tuning GPT-3 13B, and achieves at most 1.70× the cost-effectiveness of Megatron-LM.


Check out the Paper. All credit for this research goes to the researchers of this project.




Sajjad Ansari is a final-year undergraduate from IIT Kharagpur. As a tech enthusiast, he delves into the practical applications of AI with a focus on understanding the impact of AI technologies and their real-world implications. He aims to articulate complex AI concepts in a clear and accessible manner.

