
Moonshot AI Researchers Introduce Seer: An Online Context Learning System for Fast Synchronous Reinforcement Learning (RL) Rollouts


How do you keep reinforcement learning for large reasoning models from stalling on a few very long, very slow rollouts while GPUs sit underused? A team of researchers from Moonshot AI and Tsinghua University introduces ‘Seer’, a new online context learning system that targets a specific systems bottleneck in reinforcement learning for large language models. In synchronous on-policy setups, the rollout phase dominates the cost of each iteration. Seer restructures this phase and reports rollout throughput gains of 74% to 97% and tail latency reductions of 75% to 93% compared with a strong synchronous baseline called veRL.

Paper: https://arxiv.org/pdf/2511.14617

Why is synchronous rollout slow for reasoning models?

Modern reasoning RL workloads use long chain-of-thought style outputs. In the Seer experiments, the researchers apply GRPO to three different models: Moonlight, Qwen2-VL 72B and Kimi K2. These workloads run on 32 compute nodes with 8 H800 GPUs per node. The three tasks use 32, 128 and 256 GPUs respectively, with 400, 600 and 800 prompts per iteration and 8 or 16 responses per prompt.

Maximum generation length is large. Moonlight is configured for 65,536 tokens, Qwen2-VL 72B for 40,960 tokens and Kimi K2 for 98,304 tokens. A single long chain-of-thought request can grow from a few hundred megabytes of KVCache to tens of gigabytes as decoding progresses. This memory growth forces instances to reduce concurrency or to preempt requests, which triggers expensive re-decoding.
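
To make this growth concrete, the back-of-the-envelope sketch below estimates KVCache size as decoding progresses. The layer count, KV head count, head dimension and fp16 cache are illustrative assumptions, not configurations reported in the paper.

    # Back-of-the-envelope KVCache sizing for one long chain-of-thought request.
    # All model dimensions below are illustrative assumptions, not Seer's configs.
    def kvcache_bytes(seq_len, n_layers=64, n_kv_heads=8, head_dim=128, dtype_bytes=2):
        # Each layer stores one key and one value vector per KV head, per token.
        per_token = n_layers * n_kv_heads * head_dim * 2 * dtype_bytes
        return seq_len * per_token

    prompt_len = 2_000   # a few hundred MB of KVCache at the start of decoding
    max_gen = 98_304     # Kimi K2's configured maximum generation length

    print(f"prompt only:   {kvcache_bytes(prompt_len) / 1e9:.1f} GB")
    print(f"fully decoded: {kvcache_bytes(prompt_len + max_gen) / 1e9:.1f} GB")

Under these assumptions, the cache grows from roughly 0.5 GB at the prompt to about 26 GB at the maximum length, the order of magnitude the research team describes.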

The research team defines tail requests as the last 10% of requests to finish in a rollout. For Moonlight and Qwen2-VL 72B, this tail alone can consume up to 50% of the total rollout time in the baseline system. Rollout already dominates iteration time, so this tail effect directly slows RL.


Seer architecture on top of Mooncake and vLLM

Seer keeps the RL algorithm identical to synchronous veRL. Each training iteration uses only data from the current rollout iteration, so the system preserves on-policy behavior. The training phase uses Megatron for distributed optimization. The rollout phase uses an in-house implementation of vLLM as the inference engine.

To support aggressive request scheduling, Seer relies on a Global KVCache Pool built on the Mooncake disaggregated KVCache architecture used in production for Kimi. Mooncake provides a two-tier DRAM and SSD KV cache store shared across inference nodes, which allows Seer to migrate requests without recomputing prefills.
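
As a rough mental model, the pool behaves like a shared two-tier store keyed by request, so any instance can resume a request from its cached prefix. The sketch below is a deliberately simplified stand-in; the class and method names are hypothetical and do not correspond to Mooncake's real API.

    # Hypothetical sketch of a shared two-tier KV cache store; the interface
    # is invented for illustration and is not Mooncake's actual API.
    class GlobalKVCachePool:
        def __init__(self):
            self.dram = {}   # hot tier: request_id -> KV blocks
            self.ssd = {}    # cold tier, spilled to under DRAM pressure

        def put(self, request_id, kv_blocks):
            self.dram[request_id] = kv_blocks

        def get(self, request_id):
            # Any inference node can fetch the cached prefix, so a migrated
            # request resumes decoding without re-running its prefill.
            return self.dram.get(request_id) or self.ssd.get(request_id)

        def spill(self, request_id):
            # Demote a cold entry from DRAM to SSD.
            self.ssd[request_id] = self.dram.pop(request_id)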

On top of this substrate, Seer introduces three key mechanisms:

  1. Divided Rollout
  2. Context-Aware Scheduling
  3. Adaptive Grouped Speculative Decoding

These are orchestrated by a Request Buffer, a Context Manager and an Inference Engine Pool connected to the Global KVCache Pool.


Divided Rollout: fine-grained scheduling and migration

Typical synchronous rollout assigns whole GRPO groups to inference instances. A group is a set of requests that share one prompt. Once assigned, a group stays on the same instance until all responses finish. Due to large variance in output lengths, this leads to load imbalance and long-running stragglers.

Seer breaks groups down in two steps. It first decomposes each group into individual requests. It then divides each request into multiple chunks based on generation length. When the scheduler dispatches a request from the Request Buffer, it sets a small max tokens value such as 8,000 tokens for that chunk. After each chunk, the request is re-enqueued until it reaches an end-of-sequence token or its original max tokens limit.
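
A minimal sketch of this chunked re-enqueue loop, under stated assumptions, is shown below. The 8,000-token chunk size comes from the article's example, while the request fields, the `engine.generate` call and the pool interface are hypothetical stand-ins rather than Seer's real APIs.

    from collections import deque

    CHUNK_TOKENS = 8_000  # per-chunk max tokens, as in the article's example

    def divided_rollout(requests, engine, pool):
        # `engine` and `pool` are hypothetical stand-ins for an inference
        # engine and the Global KVCache Pool.
        buffer = deque(requests)  # the Request Buffer
        while buffer:
            req = buffer.popleft()
            budget = min(CHUNK_TOKENS, req.max_tokens - req.generated)
            # The KV cache lives in the global pool, so this chunk can run on
            # any instance without re-computing the prefill.
            out = engine.generate(req, kv=pool.get(req.id), max_tokens=budget)
            req.generated += out.num_tokens
            pool.put(req.id, out.kv_blocks)
            if not out.hit_eos and req.generated < req.max_tokens:
                buffer.append(req)  # re-enqueue for the next chunk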

Because the KVCache is stored in the Global KVCache Pool, divided requests can move between instances at chunk boundaries without re-running the prefill. The scheduler maintains a concurrency level that keeps memory utilization high while avoiding preemption. This reduces waste and smooths KVCache utilization across the iteration.

Context-Aware Scheduling using group length statistics

The research team observes that different requests in the same group tend to have correlated output lengths. Seer uses this structure as online context. For each prompt group, it designates one request as the speculative request. The scheduler keeps speculative requests in a high-priority queue and serves them with a smallest-first policy based on tokens generated so far. Short requests complete quickly and exit. Long requests remain and identify groups that are potential tail candidates.

The Context Manager maintains a length estimate for each group. It updates this estimate to the maximum generated length among completed requests in the group. If no request has finished, it uses the original max tokens as a conservative bound. Once speculative requests are in flight or done, Seer schedules the remaining requests with an approximate longest-first policy at the group level. This design achieves throughput and tail behavior close to an oracle scheduler that knows all output lengths in advance.
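
The sketch below condenses this policy under stated assumptions; the request and group attributes are invented for illustration, and a real scheduler would interleave these decisions with dispatch and migration.

    def group_length_estimate(group):
        # Context Manager rule described in the article: use the longest
        # completed response as the estimate, else the conservative max tokens.
        done = [r.generated for r in group.requests if r.finished]
        return max(done) if done else group.max_tokens

    def next_request(groups, speculative_queue):
        if speculative_queue:
            # Smallest-first over speculative requests: short ones exit
            # quickly, long survivors flag their groups as tail candidates.
            return min(speculative_queue, key=lambda r: r.generated)
        # Approximate longest-first at the group level for the remainder.
        pending = [g for g in groups if g.has_pending()]
        longest = max(pending, key=group_length_estimate)
        return longest.pop_pending_request()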


Adaptive Grouped Speculative Decoding

Seer adds Adaptive Grouped Speculative Decoding on top of the previous two components to accelerate decoding, especially for long requests in the tail. It introduces a Distributed Grouped Draft Server, or DGDS. DGDS maintains a Compressed Suffix Tree for each group and aggregates token sequences from all requests in that group. Instances asynchronously append generated tokens to DGDS, periodically fetch updated suffix trees and perform local speculative decoding based on the shared pattern statistics.
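
The toy sketch below illustrates this flow with a simple n-gram suffix table standing in for the paper's Compressed Suffix Tree; the class and method names are hypothetical.

    from collections import defaultdict

    class GroupDraftServer:
        # Toy stand-in for DGDS: instances append generated tokens per group,
        # then draft likely continuations from the shared statistics.
        def __init__(self, context=4):
            self.context = context
            # group_id -> suffix tuple -> {next_token: count}
            self.stats = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))

        def append(self, group_id, tokens):
            # Called asynchronously as responses in the group grow.
            for i in range(len(tokens) - self.context):
                suffix = tuple(tokens[i : i + self.context])
                self.stats[group_id][suffix][tokens[i + self.context]] += 1

        def draft(self, group_id, tail, depth):
            # Greedily extend the current tail with the most frequent next
            # token seen in the group; sibling responses to the same prompt
            # tend to share long patterns, which keeps acceptance high.
            out, tail = [], list(tail)
            for _ in range(depth):
                counts = self.stats[group_id].get(tuple(tail[-self.context:]))
                if not counts:
                    break
                token = max(counts, key=counts.get)
                out.append(token)
                tail.append(token)
            return out  # draft tokens verified in one batched forward pass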

The system adjusts the draft length and the number of draft paths according to model architecture, batch size and measured acceptance length. For dense and Mixture-of-Experts models, it precomputes different speculation thresholds and uses them to bound the draft depth for each batch. In late tail stages, concurrency is low, so Seer increases the draft depth and enables multi-path drafting to raise the number of accepted tokens per step.
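
An adaptive policy in this spirit might look like the heuristic below; the thresholds are invented for illustration, since the paper's precomputed per-architecture thresholds are not given in this article.

    def draft_params(batch_size, mean_accepted, max_depth=16):
        # Illustrative heuristic only; Seer precomputes real thresholds per
        # model architecture, which are not reproduced here.
        if batch_size > 32:
            # High concurrency: verification compute is the bottleneck,
            # so keep drafts shallow and single-path.
            return {"depth": 4, "paths": 1}
        # Late tail stage: few requests and idle compute, so draft deeper and
        # enable multi-path drafting when recent acceptance lengths are high.
        depth = min(max_depth, max(4, int(2 * mean_accepted)))
        paths = 4 if mean_accepted >= 4 else 1
        return {"depth": depth, "paths": paths}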

Ablation results show that divided rollout alone yields up to a 35% throughput improvement over the baseline. Adding Context-Aware Scheduling increases this to up to 47% over baseline. Enabling grouped speculative decoding raises the total speedup to 77% to 87% over the baseline in the evaluated iteration.

End-to-end impact on RL training

The research team evaluates Seer on three RL tasks built on Moonlight, Qwen2-VL 72B and Kimi K2. They run 10 rollout iterations per task and measure output tokens per second and completion time for each rollout. Seer improves rollout throughput by 74% to 97% across these workloads relative to veRL with the same RL algorithm and vLLM-based inference engine.

Tail latency is reduced by 75% to 93%. For memory-constrained tasks, the baseline system spends up to half of its time on the last 10% of requests. Seer removes most of this tail by combining divided rollout, Context-Aware Scheduling and Adaptive Grouped Speculative Decoding on top of the Mooncake-based Global KVCache Pool.

Key Takeaways

  • Rollout bottleneck: Seer targets the rollout phase of synchronous RL, which accounts for about 63% to 87% of iteration time and is dominated by long tail requests and KV cache fragmentation.
  • Three core mechanisms: Seer combines divided rollout, context-aware scheduling and adaptive grouped speculative decoding to exploit the output length and pattern similarity among GRPO responses that share a prompt.
  • Fine-grained scheduling on a global KV cache: requests are split into chunks and migrated across a Mooncake-style Global KVCache Pool, which preserves synchronous on-policy RL while keeping GPU memory utilization high and reducing preemptions.
  • Online context for tail latency reduction: group-level length statistics from speculative requests drive context-aware scheduling that approximates an oracle longest-first scheduler and sharply reduces the time spent on the last 10% of requests.
  • Measured end-to-end gains: on production-grade RL workloads with Moonlight, Qwen2-VL 72B and Kimi K2, Seer improves rollout throughput by 74% to 97% and reduces long-tail latency by 75% to 93% relative to a state-of-the-art synchronous vLLM-based baseline.

Seer is an important systems contribution because it optimizes the rollout phase in synchronous RL without changing the underlying GRPO algorithm, so it preserves on-policy guarantees and reproducibility while fixing a real infrastructure bottleneck. The combination of divided rollout, context-aware scheduling and adaptive grouped speculative decoding offers a practical template for other RL stacks that rely on long chain-of-thought reasoning models and large KVCache footprints. Overall, Seer shows that online context learning at the systems level is now as important as model architecture for scaling reasoning RL efficiently.




Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.
