Deep Learning

An End-to-End Coding Guide to NVIDIA KVPress for Long-Context LLM Inference, KV Cache Compression, and Memory-Efficient Generation

By Editorial Team | April 10, 2026 | 9 Mins Read


In this tutorial, we take a detailed, practical approach to exploring NVIDIA's KVPress and understanding how it can make long-context language model inference more efficient. We begin by setting up the full environment, installing the required libraries, loading a compact Instruct model, and preparing a simple workflow that runs in Colab while still demonstrating the real value of KV cache compression. As we move through the implementation, we create a synthetic long-context corpus, define targeted extraction questions, and run several inference experiments to directly compare standard generation with different KVPress strategies. By the end of the tutorial, we will have built a stronger intuition for how long-context optimization works in practice, how different press methods affect performance, and how this kind of workflow can be adapted for real-world retrieval, document analysis, and memory-sensitive LLM applications.

import os, sys, subprocess, textwrap, time, gc, json, math, random, warnings, inspect
warnings.filterwarnings("ignore")


def run(cmd):
    print("\n[RUN]", " ".join(cmd))
    subprocess.check_call(cmd)


run([sys.executable, "-m", "pip", "install", "-q", "--upgrade", "pip"])
run([sys.executable, "-m", "pip", "install", "-q", "torch", "transformers", "accelerate", "bitsandbytes", "sentencepiece", "kvpress==0.4.0"])


try:
    from google.colab import userdata
    hf_token = userdata.get("HF_TOKEN")
except Exception:
    hf_token = os.environ.get("HF_TOKEN", "")


if not hf_token:
    try:
        import getpass
        hf_token = getpass.getpass("Enter your Hugging Face token (leave empty if model is public and accessible): ").strip()
    except Exception:
        hf_token = ""


if hf_token:
   os.environ["HF_TOKEN"] = hf_token
   os.environ["HUGGINGFACEHUB_API_TOKEN"] = hf_token


import torch
import transformers
import kvpress


from transformers import pipeline, BitsAndBytesConfig
from kvpress import ExpectedAttentionPress, KnormPress


print("Python:", sys.version.split()[0])
print("Torch:", torch.__version__)
print("Transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
   print("GPU:", torch.cuda.get_device_name(0))


MODEL_ID = "Qwen/Qwen2.5-1.5B-Instruct"
MAX_NEW_TOKENS = 96
SEED = 42
random.seed(SEED)
torch.manual_seed(SEED)

We set up the Colab environment and install all required libraries to run the KVPress workflow successfully. We securely collect the Hugging Face token, set environment variables, and import the core modules needed for model loading, pipeline execution, and compression experiments. We also print the runtime and hardware details so we clearly understand the setup in which we perform the tutorial.
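To see why cache compression matters at these context lengths, here is a quick back-of-the-envelope estimate of KV cache size. This is a sketch: the layer, head, and dimension numbers below are illustrative assumptions for a small grouped-query-attention model (not values read from the Qwen config), and the halving simply mirrors what a press with `compression_ratio=0.5` targets.

```python
# Rough KV cache size: 2 tensors (K and V) per layer, each of shape
# [num_kv_heads, seq_len, head_dim], stored in fp16 (2 bytes per element).
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

# Illustrative (assumed) numbers for a small GQA model at a 32k-token prompt.
full = kv_cache_bytes(num_layers=28, num_kv_heads=2, head_dim=128, seq_len=32_000)
half = full // 2  # roughly what a compression_ratio=0.5 press aims to keep
print(f"full cache: {full / 1024**2:.1f} MiB, compressed: {half / 1024**2:.1f} MiB")
```

Even for a 1.5B-parameter model, the cache grows linearly with sequence length, which is exactly the cost KVPress attacks.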

if torch.cuda.is_available():
   torch.cuda.empty_cache()
   quantization_config = BitsAndBytesConfig(
       load_in_4bit=True,
       bnb_4bit_compute_dtype=torch.float16,
       bnb_4bit_quant_type="nf4",
       bnb_4bit_use_double_quant=True,
   )
   pipe = pipeline(
       "kv-press-text-generation",
        model=MODEL_ID,
       device_map="auto",
       token=hf_token if hf_token else None,
       model_kwargs={
           "quantization_config": quantization_config,
           "attn_implementation": "sdpa",
       },
   )
else:
   pipe = pipeline(
       "kv-press-text-generation",
        model=MODEL_ID,
       device_map="auto",
       torch_dtype=torch.float32,
       token=hf_token if hf_token else None,
       model_kwargs={
           "attn_implementation": "sdpa",
       },
   )


def cuda_mem():
   if not torch.cuda.is_available():
       return {"allocated_gb": None, "reserved_gb": None, "peak_gb": None}
    return {
        "allocated_gb": round(torch.cuda.memory_allocated() / 1024**3, 3),
        "reserved_gb": round(torch.cuda.memory_reserved() / 1024**3, 3),
        "peak_gb": round(torch.cuda.max_memory_allocated() / 1024**3, 3),
    }


def reset_peak():
   if torch.cuda.is_available():
       torch.cuda.reset_peak_memory_stats()


def extract_answer(x):
    if isinstance(x, list) and len(x) > 0:
        x = x[0]
    if isinstance(x, dict):
        for k in ["answer", "generated_text", "text", "output_text"]:
            if k in x:
                return x[k]
        return json.dumps(x, indent=2, ensure_ascii=False)
    return str(x)


def generate_once(context, query, press=None, label="run"):
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    reset_peak()
    start = time.time()
    out = pipe(
        context,
        question=query,
        press=press,
        max_new_tokens=MAX_NEW_TOKENS,
        do_sample=False,
        temperature=None,
        return_full_text=False,
    )
    elapsed = time.time() - start
    answer = extract_answer(out)
    stats = cuda_mem()
    result = {
        "label": label,
        "elapsed_sec": round(elapsed, 2),
        "allocated_gb": stats["allocated_gb"],
        "reserved_gb": stats["reserved_gb"],
        "peak_gb": stats["peak_gb"],
        "answer": answer.strip(),
    }
    return result

We initialize the kv-press-text-generation pipeline and configure it differently depending on whether GPU support is available. We define helper functions that measure CUDA memory usage, reset peak memory, extract answers from model outputs, and run a single generation pass cleanly. This part provides the reusable execution logic that powers the rest of the tutorial and enables us to compare baseline inference with KV cache compression.
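As a quick sanity check of the output-handling logic, the extract_answer helper can be exercised on its own, away from the pipeline, against the two result shapes a text-generation pipeline commonly returns (a list of dicts, or a plain dict). The snippet below restates the helper so it runs standalone:

```python
import json

# Standalone copy of the tutorial's extract_answer helper.
def extract_answer(x):
    if isinstance(x, list) and len(x) > 0:
        x = x[0]
    if isinstance(x, dict):
        for k in ["answer", "generated_text", "text", "output_text"]:
            if k in x:
                return x[k]
        return json.dumps(x, indent=2, ensure_ascii=False)
    return str(x)

print(extract_answer([{"answer": "SAFFRON-17"}]))   # list-of-dicts shape -> SAFFRON-17
print(extract_answer({"generated_text": "ok"}))     # plain dict shape -> ok
print(extract_answer("raw string passthrough"))     # anything else -> str(x)
```

The key-priority list means a dict carrying both "answer" and "generated_text" yields the "answer" field, which is the field the KVPress pipeline populates.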

company_records = [
   {"company": "Arcturus Dynamics", "hq": "Bengaluru", "founded": 2017, "focus": "warehouse robotics"},
   {"company": "BlueMesa Energy", "hq": "Muscat", "founded": 2014, "focus": "grid analytics"},
   {"company": "CinderPeak Health", "hq": "Pune", "founded": 2019, "focus": "clinical imaging AI"},
   {"company": "DeltaForge Marine", "hq": "Kochi", "founded": 2012, "focus": "autonomous vessel telemetry"},
   {"company": "EonCircuit Labs", "hq": "Hyderabad", "founded": 2020, "focus": "edge silicon tooling"},
   {"company": "Frostline Aero", "hq": "Jaipur", "founded": 2016, "focus": "drone inspection"},
]


needle_facts = [
   "PROJECT NEEDLE 1: The internal codename for the confidential pilot program is SAFFRON-17.",
   "PROJECT NEEDLE 2: The audit escalation owner is Meera Vashisht.",
   "PROJECT NEEDLE 3: The approved deployment region for the first production rollout is Oman North.",
   "PROJECT NEEDLE 4: The emergency rollback phrase is amber lantern.",
   "PROJECT NEEDLE 5: The signed commercial start date is 17 September 2026.",
]


background_block = """
Long-context systems often contain repeated operational notes, historical records, policy sections, and noisy retrieval artifacts.
The goal of this demo is to create a realistically long prompt where only a few details matter for downstream answering.
KV cache compression reduces memory usage by pruning cached key-value pairs while preserving answer quality.
"""


policy_block = """
Operational policy summary:
1. Safety overrides throughput when sensor confidence falls below threshold.
2. Logs should preserve region, timestamp, device class, and operator approval state.
3. Field trials may contain duplicated annexes, OCR-style artifacts, and repeated compliance summaries.
4. A long-context model must ignore irrelevant repetition and retrieve the exact details that matter.
"""


records_text = []
for i in range(120):
    rec = company_records[i % len(company_records)]
    records_text.append(
        f"Record {i+1}: {rec['company']} is headquartered in {rec['hq']}, founded in {rec['founded']}, and focuses on {rec['focus']}. "
        f"Quarterly memo {i+1}: retention remained stable, operator training progressed, and the compliance appendix was reattached for review."
    )


needle_insert_positions = {18, 41, 73, 96, 111}
full_corpus = []
for i, para in enumerate(records_text):
   full_corpus.append(background_block.strip())
   full_corpus.append(policy_block.strip())
   full_corpus.append(para)
   if i in needle_insert_positions:
       full_corpus.append(needle_facts[len([x for x in needle_insert_positions if x <= i]) - 1])

We create a synthetic long-context dataset to test the KVPress setup in a controlled yet realistic way. We define company records, insert important hidden facts at different positions, and mix them with repeated background and policy blocks, making the prompt long and noisy. This helps us simulate the conditions in which memory-efficient inference matters and the model must retrieve only the truly relevant details.
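The needle-selection expression inside the assembly loop is easy to misread, so here is a self-contained check of that arithmetic: for paragraph index i, the needle chosen is the count of insert positions less than or equal to i, minus one, which walks through the five needles in ascending order of position.

```python
needle_insert_positions = {18, 41, 73, 96, 111}

# Reproduce the index expression from the corpus-assembly loop for each
# insertion point and record which needle it selects.
mapping = {}
for i in sorted(needle_insert_positions):
    mapping[i] = len([x for x in needle_insert_positions if x <= i]) - 1

print(mapping)  # {18: 0, 41: 1, 73: 2, 96: 3, 111: 4}
```

Each insertion position maps to a distinct needle index, so all five facts land in the corpus exactly once.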

context = "\n\n".join(full_corpus)


query = textwrap.dedent("""
Answer using only the provided context.
Give a compact JSON object with exactly these keys:
commercial_start_date
deployment_region
audit_owner
rollback_phrase
pilot_codename
""").strip()


print("\nContext characters:", len(context))
print("Approx words:", len(context.split()))


experiments = []


baseline = generate_once(context, query, press=None, label="baseline_no_compression")
experiments.append(baseline)


presses = [
   ("expected_attention_0.7", ExpectedAttentionPress(compression_ratio=0.7)),
   ("expected_attention_0.5", ExpectedAttentionPress(compression_ratio=0.5)),
   ("knorm_0.5", KnormPress(compression_ratio=0.5)),
]


for label, press in presses:
    try:
        result = generate_once(context, query, press=press, label=label)
        experiments.append(result)
    except Exception as e:
        experiments.append({
            "label": label,
            "elapsed_sec": None,
            "allocated_gb": None,
            "reserved_gb": None,
            "peak_gb": None,
            "answer": f"FAILED: {type(e).__name__}: {e}"
        })


try:
    from kvpress import DecodingPress
    sig = inspect.signature(DecodingPress)
    kwargs = {"base_press": KnormPress()}
    if "compression_interval" in sig.parameters:
        kwargs["compression_interval"] = 10
    elif "compression_steps" in sig.parameters:
        kwargs["compression_steps"] = 10
    if "target_size" in sig.parameters:
        kwargs["target_size"] = 512
    elif "token_buffer_size" in sig.parameters:
        kwargs["token_buffer_size"] = 512
    if "hidden_states_buffer_size" in sig.parameters:
        kwargs["hidden_states_buffer_size"] = 0
    decoding_press = DecodingPress(**kwargs)
    decoding_result = generate_once(context, query, press=decoding_press, label="decoding_knorm")
    experiments.append(decoding_result)
except Exception as e:
    experiments.append({
        "label": "decoding_knorm",
        "elapsed_sec": None,
        "allocated_gb": None,
        "reserved_gb": None,
        "peak_gb": None,
        "answer": f"SKIPPED_OR_FAILED: {type(e).__name__}: {e}"
    })

We assemble the final context, define the structured extraction question, and launch the core set of inference experiments. We first run the baseline without compression, then apply several press strategies to observe how different compression ratios affect the results. We also conduct a decoding-oriented compression experiment, which extends the tutorial beyond prefilling and provides a broader view of the KVPress framework.
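Beyond eyeballing the answers, it can help to score them automatically. The following is a small scoring sketch that is not part of the original tutorial: it checks how many ground-truth needle values (taken from the needle_facts defined earlier) appear verbatim in a model answer and reports recall.

```python
import json

# Ground-truth values copied from the needle facts planted in the corpus.
ground_truth = {
    "commercial_start_date": "17 September 2026",
    "deployment_region": "Oman North",
    "audit_owner": "Meera Vashisht",
    "rollback_phrase": "amber lantern",
    "pilot_codename": "SAFFRON-17",
}

def needle_recall(answer_text):
    # Case-insensitive substring match: crude but press-agnostic.
    hits = sum(1 for v in ground_truth.values() if v.lower() in answer_text.lower())
    return hits / len(ground_truth)

# A hypothetical partial answer that recovered two of the five needles.
sample = json.dumps({"pilot_codename": "SAFFRON-17", "rollback_phrase": "amber lantern"})
print(needle_recall(sample))  # 0.4
```

Applying needle_recall to each entry in experiments gives a single quality number to set against the memory and latency deltas.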

print("\n" + "=" * 120)
print("RESULTS")
print("=" * 120)


for r in experiments:
    print(f"\n[{r['label']}]")
    print("elapsed_sec:", r["elapsed_sec"])
    print("allocated_gb:", r["allocated_gb"])
    print("reserved_gb:", r["reserved_gb"])
    print("peak_gb:", r["peak_gb"])
    print("answer:")
    print(r["answer"])


print("\n" + "=" * 120)
print("SIMPLE SUMMARY")
print("=" * 120)


def safe_float(x):
    try:
        return float(x)
    except Exception:
        return None


base_peak = safe_float(baseline["peak_gb"]) if baseline.get("peak_gb") is not None else None
base_time = safe_float(baseline["elapsed_sec"]) if baseline.get("elapsed_sec") is not None else None


for r in experiments[1:]:
    peak = safe_float(r["peak_gb"])
    t = safe_float(r["elapsed_sec"])
    peak_delta = None if base_peak is None or peak is None else round(base_peak - peak, 3)
    time_delta = None if base_time is None or t is None else round(base_time - t, 2)
    print({
        "label": r["label"],
        "peak_gb_saved_vs_baseline": peak_delta,
        "time_sec_saved_vs_baseline": time_delta,
        "answer_preview": r["answer"][:180].replace("\n", " ")
    })


print("\n" + "=" * 120)
print("OPTIONAL NEXT STEPS")
print("=" * 120)
print("1. Swap MODEL_ID to a stronger long-context instruct model that fits your GPU.")
print("2. Increase context length by duplicating records_text more times.")
print("3. Try other presses from kvpress, such as SnapKVPress, StreamingLLMPress, QFilterPress, or ChunkKVPress.")
print("4. Replace the synthetic corpus with your own long PDF/text chunks and keep the same evaluation loop.")

We print all experiment outputs in a readable format and summarize the runtime and memory differences relative to the baseline. We calculate simple comparison metrics to quickly see how much memory or time each compression strategy saves. We then conclude with suggested next steps to extend the tutorial to stronger models, longer contexts, additional press methods, and real-world document workloads.

In conclusion, we developed a strong practical understanding of how NVIDIA's KVPress can be used to optimize long-context inference in a realistic Colab-based setting. We did more than simply run a model: we built an end-to-end workflow that installs the framework, loads the pipeline correctly, constructs a meaningful long-context input, applies multiple compression presses, and evaluates the results in terms of answer quality, runtime, and memory behavior. By comparing baseline generation with compressed KV-cache generation, we clearly observed the trade-offs involved. We gained useful intuition about when these methods can help reduce resource pressure without severely harming output fidelity. We also explored the framework's flexibility by testing different press configurations and including an optional decoding-oriented compression path, providing a broader view of how KVPress can be used beyond a single static example.


Check out the Codes and Notebook here.
