Deep Learning

A Step-by-Step Coding Tutorial on NVIDIA PhysicsNeMo: Darcy Flow, FNOs, PINNs, Surrogate Models, and Inference Benchmarking

By Editorial Team · April 13, 2026 (Updated: April 13, 2026) · 4 Mins Read


# Imports used below (defined in the earlier sections of this tutorial;
# repeated here so that Sections 4-5 can run on their own)
from typing import Tuple

import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


print("\n" + "="*80)
print("SECTION 4: DATA VISUALIZATION")
print("="*80)


def visualize_darcy_samples(
    permeability: np.ndarray,
    pressure: np.ndarray,
    n_samples: int = 3
):
    """Visualize Darcy flow samples."""
    fig, axes = plt.subplots(n_samples, 2, figsize=(10, 4 * n_samples))

    for i in range(n_samples):
        im1 = axes[i, 0].imshow(permeability[i], cmap='viridis', origin='lower')
        axes[i, 0].set_title(f'Permeability Field (Sample {i+1})')
        axes[i, 0].set_xlabel('x')
        axes[i, 0].set_ylabel('y')
        plt.colorbar(im1, ax=axes[i, 0], label="k(x,y)")

        im2 = axes[i, 1].imshow(pressure[i], cmap='hot', origin='lower')
        axes[i, 1].set_title(f'Pressure Field (Sample {i+1})')
        axes[i, 1].set_xlabel('x')
        axes[i, 1].set_ylabel('y')
        plt.colorbar(im2, ax=axes[i, 1], label="u(x,y)")

    plt.tight_layout()
    plt.savefig('darcy_samples.png', dpi=150, bbox_inches="tight")
    plt.show()
    print("✓ Saved visualization to 'darcy_samples.png'")


# perm_train and press_train come from the data-generation section earlier
# in the tutorial
visualize_darcy_samples(perm_train[:3], press_train[:3])


print("\n" + "="*80)
print("SECTION 5: FOURIER NEURAL OPERATOR (FNO)")
print("="*80)


"""
The Fourier Neural Operator (FNO) learns mappings between function spaces
by parameterizing the integral kernel in Fourier space.

Key insight: convolution in physical space = multiplication in Fourier space.

An FNO layer consists of:
1. FFT to transform to the frequency domain
2. Multiplication with learnable weights (keeping only low-frequency modes)
3. Inverse FFT to transform back
4. A residual connection with a local linear transformation
"""
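The key insight above can be checked numerically. The following standalone snippet (an illustration added here, not part of the original tutorial; plain NumPy, 1D for simplicity) verifies the convolution theorem that the FNO layer exploits: circular convolution in physical space equals pointwise multiplication of the two spectra in Fourier space.

```python
import numpy as np

# Convolution theorem demo: (x * k)[n] computed directly vs. via the FFT.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # "physical space" signal
k = rng.standard_normal(64)   # convolution kernel

# Direct circular convolution, straight from the definition
direct = np.array(
    [sum(x[(n - m) % 64] * k[m] for m in range(64)) for n in range(64)]
)

# Via FFT: transform, multiply spectra pointwise, inverse-transform
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real

print(np.allclose(direct, via_fft))  # True
```

The FNO replaces the fixed spectrum of `k` with learnable complex weights, which is why a single spectral layer can represent a global convolution at the cost of one FFT pair.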


class SpectralConv2d(nn.Module):
    """
    2D spectral convolution layer for the FNO.

    Performs convolution in Fourier space by:
    1. Computing the FFT of the input
    2. Multiplying with complex learnable weights
    3. Computing the inverse FFT
    """

    def __init__(
        self,
        in_channels: int,
        out_channels: int,
        modes1: int,
        modes2: int
    ):
        super().__init__()

        self.in_channels = in_channels
        self.out_channels = out_channels
        self.modes1 = modes1
        self.modes2 = modes2

        self.scale = 1 / (in_channels * out_channels)

        self.weights1 = nn.Parameter(
            self.scale * torch.rand(in_channels, out_channels, modes1, modes2, dtype=torch.cfloat)
        )
        self.weights2 = nn.Parameter(
            self.scale * torch.rand(in_channels, out_channels, modes1, modes2, dtype=torch.cfloat)
        )

    def compl_mul2d(self, x: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        """Complex multiplication for a batch of 2D tensors."""
        return torch.einsum("bixy,ioxy->boxy", x, weights)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch_size = x.shape[0]

        x_ft = torch.fft.rfft2(x)

        out_ft = torch.zeros(
            batch_size, self.out_channels, x.size(-2), x.size(-1) // 2 + 1,
            dtype=torch.cfloat, device=x.device
        )

        # Keep only the lowest modes1 x modes2 frequencies (positive and
        # negative row frequencies; rfft2 stores only non-negative columns)
        out_ft[:, :, :self.modes1, :self.modes2] = \
            self.compl_mul2d(x_ft[:, :, :self.modes1, :self.modes2], self.weights1)
        out_ft[:, :, -self.modes1:, :self.modes2] = \
            self.compl_mul2d(x_ft[:, :, -self.modes1:, :self.modes2], self.weights2)

        x = torch.fft.irfft2(out_ft, s=(x.size(-2), x.size(-1)))

        return x
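To build intuition for the mode truncation in `SpectralConv2d`, here is a small NumPy-only sketch (added for illustration, not tutorial code): zeroing all but the lowest modes, with identity weights instead of learned ones, is simply a low-pass filter. The filtered field keeps the input's resolution but loses its high-frequency energy.

```python
import numpy as np

# Low-pass truncation mirroring SpectralConv2d's mode selection
# (full FFT used here for clarity; "weights" are identity).
rng = np.random.default_rng(1)
field = rng.standard_normal((32, 32))
modes = 8

ft = np.fft.fft2(field)
mask = np.zeros(ft.shape, dtype=bool)
for rows in (slice(None, modes), slice(-modes, None)):      # low +/- row freqs
    for cols in (slice(None, modes), slice(-modes, None)):  # low +/- col freqs
        mask[rows, cols] = True

smooth = np.fft.ifft2(np.where(mask, ft, 0)).real

print(smooth.shape == field.shape)           # True: resolution unchanged
print(np.sum(smooth**2) < np.sum(field**2))  # True: high-frequency energy removed
```

This is why `modes1`/`modes2` control the trade-off between expressiveness and parameter count: the layer can only reshape the frequencies it keeps.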




class FNOBlock(nn.Module):
    """
    FNO block combining spectral convolution with a local linear transform.

    output = σ(SpectralConv(x) + LocalLinear(x))
    """

    def __init__(
        self,
        channels: int,
        modes1: int,
        modes2: int,
        activation: str = "gelu"
    ):
        super().__init__()

        self.spectral_conv = SpectralConv2d(channels, channels, modes1, modes2)
        self.local_linear = nn.Conv2d(channels, channels, 1)

        self.activation = nn.GELU() if activation == 'gelu' else nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.activation(self.spectral_conv(x) + self.local_linear(x))




class FourierNeuralOperator2D(nn.Module):
    """
    Full 2D Fourier Neural Operator for learning operators.

    Architecture:
    1. Lift the input to a higher-dimensional channel space
    2. Apply several FNO blocks (spectral convolutions + residuals)
    3. Project back to the output space

    This learns the mapping k(x,y) -> u(x,y) for Darcy flow.
    """

    def __init__(
        self,
        in_channels: int = 1,
        out_channels: int = 1,
        modes1: int = 12,
        modes2: int = 12,
        width: int = 32,
        n_layers: int = 4,
        padding: int = 9
    ):
        super().__init__()

        self.modes1 = modes1
        self.modes2 = modes2
        self.width = width
        self.padding = padding

        self.fc0 = nn.Linear(in_channels + 2, width)

        self.fno_blocks = nn.ModuleList([
            FNOBlock(width, modes1, modes2) for _ in range(n_layers)
        ])

        self.fc1 = nn.Linear(width, 128)
        self.fc2 = nn.Linear(128, out_channels)

    def get_grid(self, shape: Tuple, device: torch.device) -> torch.Tensor:
        """Create normalized grid coordinates."""
        batch_size, size_x, size_y = shape[0], shape[2], shape[3]

        gridx = torch.linspace(0, 1, size_x, device=device)
        gridy = torch.linspace(0, 1, size_y, device=device)
        gridx, gridy = torch.meshgrid(gridx, gridy, indexing='ij')

        grid = torch.stack([gridx, gridy], dim=-1)
        grid = grid.unsqueeze(0).repeat(batch_size, 1, 1, 1)

        return grid

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        grid = self.get_grid(x.shape, x.device)

        # Concatenate grid coordinates as extra input channels
        x = x.permute(0, 2, 3, 1)
        x = torch.cat([x, grid], dim=-1)

        # Lift to the channel space
        x = self.fc0(x)
        x = x.permute(0, 3, 1, 2)

        if self.padding > 0:
            x = F.pad(x, [0, self.padding, 0, self.padding])

        for block in self.fno_blocks:
            x = block(x)

        if self.padding > 0:
            x = x[..., :-self.padding, :-self.padding]

        # Project back to the output space
        x = x.permute(0, 2, 3, 1)
        x = F.gelu(self.fc1(x))
        x = self.fc2(x)
        x = x.permute(0, 3, 1, 2)

        return x




print("\nCreating Fourier Neural Operator model...")
fno_model = FourierNeuralOperator2D(
    in_channels=1,
    out_channels=1,
    modes1=8,
    modes2=8,
    width=32,
    n_layers=4,
    padding=5
).to(device)


n_params = sum(p.numel() for p in fno_model.parameters() if p.requires_grad)
print(f"✓ FNO model created with {n_params:,} trainable parameters")
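As a sanity check on the printed parameter count, the total can be derived by hand from the architecture above. The back-of-envelope tally below is my own sketch (it assumes `numel()` counts each `cfloat` weight once, which PyTorch does for complex tensors):

```python
# Analytic parameter count for FourierNeuralOperator2D(in_channels=1,
# out_channels=1, modes1=8, modes2=8, width=32, n_layers=4).
width, modes, n_layers, in_ch, out_ch = 32, 8, 4, 1, 1

fc0 = (in_ch + 2) * width + width              # lift layer (+2 for grid coords)
spectral = 2 * width * width * modes * modes   # two complex weight tensors per block
local = width * width + width                  # 1x1 conv weights + bias
fc1 = width * 128 + 128                        # projection hidden layer
fc2 = 128 * out_ch + out_ch                    # output layer

total = fc0 + n_layers * (spectral + local) + fc1 + fc2
print(total)  # 532993
```

Note that almost all of the budget sits in the spectral weights, which is why the retained mode counts `modes1`/`modes2` dominate model size.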


