
How to Update LLM Weights with No Downtime


Imagine renovating the foundation of a towering skyscraper without asking its occupants to leave or pause their work. That's exactly what MoonshotAI's Checkpoint Engine does for AI models. It allows large language models to update their brains, the weights, while still running, so there's no downtime. This lets developers improve their AI quickly and efficiently, even on models with over a trillion parameters running on thousands of GPUs. It's fast, reliable, and designed to keep AI systems running smoothly while evolving in real time, making it an essential tool for cutting-edge AI applications. This article covers what it is, how it works, and why it matters for the future of large-scale AI systems.

What is Moonshot AI's Checkpoint Engine?

Moonshot AI's Checkpoint Engine is a specialized middleware designed to update the weights of large language models (LLMs) in real time during inference, without interrupting ongoing operations. This capability is essential in reinforcement learning scenarios where model weights need to be updated frequently. The Checkpoint Engine currently integrates seamlessly with the vLLM inference framework and offers optimized performance through pipelining and memory management techniques. It also provides features like reusing weights from existing instances to reduce overhead when scaling out.

Architecture

The core of the Checkpoint Engine is the ParameterServer class, which handles the weight update logic and orchestrates the data flow through three stages:

  1. H2D (Host to Device): Moves updated weights from CPU memory or storage to GPU memory, using optimized transfer pipelines.
  2. Broadcast: Distributes the weights across all inference engine instances efficiently, leveraging CUDA IPC buffers for shared-memory communication.
  3. Reload: Each inference engine then selectively reloads the relevant weight shards from the broadcast data according to its sharding pattern.

This three-stage pipeline overlaps communication and copying for speed.

When GPU memory is limited, the system can fall back to serial execution to maintain reliability.
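To make the overlap concrete, below is a minimal PyTorch sketch of the double-buffered pipeline idea: while one bucket of weights is consumed, the next is already being copied host-to-device on a separate CUDA stream. This is not Checkpoint Engine's actual implementation; the broadcast and reload stages are stood in for by a simple clone, all names are illustrative, and equally sized buckets are assumed.

Code:

import torch

def pipelined_h2d(weight_buckets):
    """Copy each bucket to the GPU on one stream while the previous bucket
    is consumed (broadcast + reload in the real engine) on another."""
    if not torch.cuda.is_available():
        # Serial fallback, analogous to the engine's low-memory mode.
        return [b.clone() for b in weight_buckets]

    copy_stream, comm_stream = torch.cuda.Stream(), torch.cuda.Stream()
    # Two reusable device buffers: one receives the next H2D copy while
    # the other is still being consumed on the comm stream.
    bufs = [torch.empty_like(weight_buckets[0], device="cuda") for _ in range(2)]
    free = [torch.cuda.Event() for _ in range(2)]  # buffer safe to reuse
    out = []
    for i, bucket in enumerate(weight_buckets):
        j = i % 2
        with torch.cuda.stream(copy_stream):
            copy_stream.wait_event(free[j])           # don't clobber in-flight data
            bufs[j].copy_(bucket, non_blocking=True)  # stage 1: H2D
        comm_stream.wait_stream(copy_stream)
        with torch.cuda.stream(comm_stream):
            # Stages 2-3 stand-in: the real engine broadcasts the buffer via
            # CUDA IPC and each inference engine reloads its own shard.
            out.append(bufs[j].clone())
            free[j].record(comm_stream)
    torch.cuda.synchronize()
    return out

# Pinned host memory is what lets the async H2D copy actually overlap.
pin = torch.cuda.is_available()
buckets = [torch.randn(256, 256).pin_memory() if pin else torch.randn(256, 256)
           for _ in range(4)]
print(len(pipelined_h2d(buckets)), "buckets transferred")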

Methods Used

The Checkpoint Engine uses two main methods to update model weights during inference.

  1. Broadcast Method: This is the fastest and the default approach, ideal when a large number of inference instances need to be updated simultaneously. It broadcasts the updated weights from CPU memory to all inference GPUs synchronously, ensuring all instances stay perfectly in sync with minimal delay.
  2. P2P (Peer-to-Peer) Method: This is used when inference instances are added or removed dynamically at runtime. It avoids disrupting existing inference workloads by sending weights directly from CPUs in existing instances to GPUs in new instances via a peer-to-peer transfer system, allowing smooth and flexible updates. A sketch contrasting the two communication patterns follows this list.
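The sketch below reproduces the two communication shapes with plain torch.distributed calls on the CPU-friendly gloo backend. Checkpoint Engine itself uses CUDA IPC buffers for broadcast and the mooncake transfer engine for P2P, so this only illustrates the patterns, not the library's code. Save it as, say, patterns.py and launch it with torchrun --nproc-per-node 2 patterns.py.

Code:

import torch
import torch.distributed as dist

# torchrun sets RANK/WORLD_SIZE/MASTER_ADDR, so the default env:// init works.
dist.init_process_group("gloo")
rank = dist.get_rank()

# Broadcast pattern: every rank receives the same updated weights from
# rank 0, keeping all instances perfectly in sync (the default, fast path).
weights = torch.arange(4.0) if rank == 0 else torch.zeros(4)
dist.broadcast(weights, src=0)

# P2P pattern: rank 0 sends weights only to a newly joined instance
# (rank 1 here), leaving every other rank's ongoing work undisturbed.
if rank == 0:
    dist.send(weights, dst=1)
elif rank == 1:
    incoming = torch.zeros(4)
    dist.recv(incoming, src=0)

print(f"rank {rank} now holds {weights.tolist()}")
dist.destroy_process_group()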

How It Works

The Checkpoint Engine orchestrates the entire transfer process. It first gathers the necessary metadata to create a plan, including deciding the right bucket size for data transfer. Then it executes the transfer, controlling the inference engine through a ZeroMQ socket to maximize performance. It organizes data transfer into pipelines with overlapped communication and copying, enabling fast and efficient weight updates even under heavy workload.
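As a rough illustration of that planning step, here is a simplified sketch that greedily packs named weight tensors into fixed-size buckets. The threshold, names, and greedy policy are assumptions for demonstration; the engine's real planner decides bucket size from the metadata it gathers.

Code:

import torch

def plan_buckets(state_dict, bucket_bytes=512 << 20):
    """Greedily pack named tensors into buckets of at most bucket_bytes,
    so each transfer is large enough to use the link efficiently without
    exhausting spare GPU memory."""
    buckets, current, size = [], [], 0
    for name, tensor in state_dict.items():
        nbytes = tensor.numel() * tensor.element_size()
        if current and size + nbytes > bucket_bytes:
            buckets.append(current)
            current, size = [], 0
        current.append(name)
        size += nbytes
    if current:
        buckets.append(current)
    return buckets

# Toy example: eight fake fp32 "layers" (0.25 MiB each) in 1 MiB buckets.
fake = {f"layer{i}.weight": torch.empty(256, 256) for i in range(8)}
for i, names in enumerate(plan_buckets(fake, bucket_bytes=1 << 20)):
    print(f"bucket {i}: {names}")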

Through this combination of methods and architecture, the Checkpoint Engine enables live weight updates for LLMs across thousands of GPUs with minimal latency and service disruption.

Installation and Usage

Installation

To use the fastest broadcast implementation:

Code:

pip install checkpoint-engine

To use the flexible P2P implementation:

Code:

pip install 'checkpoint-engine[p2p]'

This will install mooncake-transfer-engine to support RDMA transfer between different ranks.

Example Use Case

Step 1:

Prepare an H800 or H20 machine with 8 GPUs and the latest vLLM. Be sure to include the /collective_rpc API endpoint commit (available in the main branch), since checkpoint-engine uses this endpoint to update weights.

Step 2:

Install checkpoint-engine:

Code:

uv pip install 'checkpoint-engine[p2p]'

Step 3:

For our use case, we'll use Qwen/Qwen3-235B-A22B-Instruct-2507 as the test model.

Code:

hf download Qwen/Qwen3-235B-A22B-Instruct-2507 --local-dir /opt/models/Qwen/Qwen3-235B-A22B-Instruct-2507/

Step 4:

Start vLLM in dev mode with --load-format dummy. Make sure to set --worker-extension-cls checkpoint_engine.worker.VllmColocateWorkerExtension.

Code:

VLLM_SERVER_DEV_MODE=1 python3 -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 19730 --trust-remote-code \
    --tensor-parallel-size=8 --max-model-len 4096 --load-format dummy \
    --served-model-name checkpoint-engine-demo --model /opt/models/Qwen/Qwen3-235B-A22B-Instruct-2507/ \
    --worker-extension-cls checkpoint_engine.worker.VllmColocateWorkerExtension

Then update the weights through checkpoint-engine. There is no need to wait for vLLM to be ready; use the command below.

Code:

torchrun --nproc-per-node 8 examples/update.py --update-method all --checkpoint-path /opt/models/Qwen/Qwen3-235B-A22B-Instruct-2507/

To reuse weights from existing instances

New checkpoint-engine instances can join existing instances and reuse their weights, using the steps below:

Step 1: Start the existing instance with --save-metas-file global_metas.pkl to save global metas to a file.

Step 2: Use --sleep-time 300 to make sure the instance stays alive.

Code:

torchrun --nproc-per-node 8 examples/update.py --checkpoint-path $MODEL_PATH \
    --sleep-time 300 --save-metas-file global_metas.pkl

Step 3: After a checkpoint is registered, new instances can obtain a copy of the checkpoint by setting --load-metas-file global_metas.pkl.

Code:

torchrun --nproc-per-node 8 examples/update.py --load-metas-file global_metas.pkl

FP8 quantization

Currently, FP8 quantization does not work in vLLM when updating weights. Checkpoint Engine provides a simple patch in patches/vllm_fp8.patch to handle the weight update correctly. Note that this patch has only been tested with DeepSeek-V3.1 and Kimi-K2, so there may be compatibility issues with other models.

Test

Run a simple correctness test for checkpoint_engine:

Code:

torchrun --nproc-per-node 8 tests/test_update.py

Benchmark

Model | Device Setup | Metadata Gathering | Update (Broadcast) | Update (P2P)
GLM-4.5-Air (BF16) | 8x H800 TP8 | 0.17 s | 3.94 s (1.42 GiB) | 8.83 s (4.77 GiB)
Qwen3-235B-A22B-Instruct-2507 (BF16) | 8x H800 TP8 | 0.46 s | 6.75 s (2.69 GiB) | 16.47 s (4.05 GiB)
DeepSeek-V3.1 (FP8) | 16x H20 TP16 | 1.44 s | 12.22 s (2.38 GiB) | 25.77 s (3.61 GiB)
Kimi-K2-Instruct (FP8) | 16x H20 TP16 | 1.81 s | 15.45 s (2.93 GiB) | 36.24 s (4.46 GiB)
DeepSeek-V3.1 (FP8) | 256x H20 TP16 | 1.40 s | 13.88 s (2.54 GiB) | 33.30 s (3.86 GiB)
Kimi-K2-Instruct (FP8) | 256x H20 TP16 | 1.88 s | 21.50 s (2.99 GiB) | 34.49 s (4.57 GiB)

Insights

Here are a few observations I have made:

  1. The broadcast method generally gives the fastest update time; it is optimized for synchronous weight updates across many inference instances.
  2. The P2P method takes longer but allows dynamic updates when instances join or leave at runtime.
  3. These benchmarks show the scalability of Checkpoint Engine, which handles trillion-parameter models efficiently on clusters ranging from 8 to 256 GPUs.

Limitations of Checkpoint Engine

While Checkpoint Engine is a powerful solution for live weight updates in LLMs, it currently has some limitations.

  • Works Best with vLLM for Now: The engine is mainly tested with the vLLM framework. If you hope to use it with other AI frameworks or custom setups, you may need extra work to get it running smoothly.
  • Pipeline Still Improving: The ideal seamless pipeline that perfectly overlaps data movement is not fully finished yet, which means there is still room to make updates even faster.
  • P2P Updates Could Be Smoother: The peer-to-peer method funnels data through one main node before sharing it with the others, which can slow things down when you have a lot of GPUs.
  • Needs Extra GPU Memory: The broadcast system uses additional GPU memory to speed things up. On machines with less memory, it falls back to a slower, less efficient serial process.
  • Limited Support for FP8 Models: If you work with the newer FP8-quantized models, you will need experimental patches, and even then only a couple of tested models are known to work well.

Conclusion

Moonshot AI's Checkpoint Engine is a game-changer for updating massive AI models without stopping them. It keeps everything running smoothly, even while the model's "brain" is getting smarter in real time. While it still has a few areas to improve, the potential is huge. If you are working with large AI systems, this tool is definitely worth watching. It is helping make the future of AI faster and more efficient, without any downtime.

Regularly Requested Questions

Q1. What problem does Checkpoint Engine solve?

A. It lets large language models update their weights in real time during inference without downtime, so AI systems stay online while improving.

Q2. Which frameworks does Checkpoint Engine support?

A. Right now, it is primarily integrated and tested with the vLLM inference framework.

Q3. What is the difference between the Broadcast and P2P methods?

A. Broadcast is faster for synchronized updates across many GPUs, while P2P allows flexible updates when instances join or leave.

I'm a Data Science Trainee at Analytics Vidhya, passionately working on the development of advanced AI solutions such as Generative AI applications, Large Language Models, and cutting-edge AI tools that push the boundaries of technology. My role also involves creating engaging educational content for Analytics Vidhya's YouTube channels, developing comprehensive courses that cover the full spectrum of machine learning to generative AI, and authoring technical blogs that connect foundational concepts with the latest innovations in AI. Through this, I aim to contribute to building intelligent systems and share knowledge that inspires and empowers the AI community.

