
Agent0: A Fully Autonomous AI Framework that Evolves High-Performing Agents without External Data through Multi-Step Co-Evolution


Large language models need large human datasets, so what happens if a model must create its own curriculum and teach itself to use tools? A team of researchers from UNC-Chapel Hill, Salesforce Research and Stanford University introduce 'Agent0', a fully autonomous framework that evolves high-performing agents without external data through multi-step co-evolution and seamless tool integration.

Agent0 targets mathematical and general reasoning. It shows that careful task generation and tool-integrated rollouts can push a base model beyond its original capabilities across ten benchmarks.

https://arxiv.org/pdf/2511.16043

Two agents from one base model

Agent0 starts from a base policy π_base, for example Qwen3 4B Base or Qwen3 8B Base. It clones this policy into:

  • a Curriculum Agent πθ that generates tasks,
  • an Executor Agent πϕ that solves these tasks with a Python tool.

Training proceeds in iterations with two stages per iteration:

  1. Curriculum evolution: The curriculum agent generates a batch of tasks. For each task, the executor samples multiple responses. A composite reward measures how uncertain the executor is, how often it uses the tool and how diverse the batch is. πθ is updated with Group Relative Policy Optimization (GRPO) using this reward.
  2. Executor evolution: The trained curriculum agent is frozen. It generates a large pool of tasks. Agent0 filters this pool to keep only tasks near the executor's capability frontier, then trains the executor on those tasks using an ambiguity-aware RL objective called Ambiguity Dynamic Policy Optimization (ADPO).

This loop creates a feedback cycle. As the executor becomes stronger by using the code interpreter, the curriculum must generate more complex, tool-reliant problems to keep its reward high.
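
The outer loop can be summarized in a short sketch. The helper callables passed in (task generation, frontier filtering, GRPO and ADPO training steps) are hypothetical placeholders for the paper's components, not its actual API:

import copy

def co_evolve(base_policy, train_curriculum_grpo, generate_tasks,
              filter_frontier, train_executor_adpo,
              num_iterations=3, pool_size=4096, band=(0.3, 0.8)):
    # Sketch of Agent0's two-stage co-evolution loop; the helper callables
    # are supplied by the caller and stand in for the paper's components.
    curriculum = copy.deepcopy(base_policy)  # Curriculum Agent, pi_theta
    executor = copy.deepcopy(base_policy)    # Executor Agent, pi_phi

    for _ in range(num_iterations):
        # Stage 1: curriculum evolution. The frozen executor's uncertainty,
        # tool usage and batch diversity define the GRPO reward.
        curriculum = train_curriculum_grpo(curriculum, frozen_executor=executor)

        # Stage 2: executor evolution. The trained curriculum is frozen,
        # a frontier dataset is filtered from a large task pool, and the
        # executor is updated with ADPO on tool-integrated rollouts.
        pool = generate_tasks(curriculum, n=pool_size)
        frontier = filter_frontier(pool, executor, band=band)
        executor = train_executor_adpo(executor, frontier)

    return executor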


How the curriculum agent scores tasks

The curriculum reward combines three signals:

Uncertainty reward: For each generated task x, the executor samples k responses and majority-votes a pseudo answer. Self-consistency p̂(x) is the fraction of responses that agree with this majority. The reward is maximal when p̂ is near 0.5 and low when tasks are too easy or too hard. This encourages tasks that are challenging but still solvable for the current executor.
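
A minimal sketch of the self-consistency computation and one reward shape with the stated property (maximal at p̂ = 0.5); the exact functional form in the paper may differ:

from collections import Counter

def self_consistency(answers):
    # Fraction of sampled executor answers that agree with the majority vote.
    majority_answer, majority_count = Counter(answers).most_common(1)[0]
    return majority_count / len(answers), majority_answer

def uncertainty_reward(p_hat):
    # Peaks at p_hat = 0.5 and falls to 0 at p_hat = 0 or 1 (assumed shape).
    return 1.0 - 2.0 * abs(p_hat - 0.5)

# Example: 8 sampled answers for one generated task x.
p_hat, pseudo_label = self_consistency(["42", "42", "41", "42", "40", "42", "41", "42"])
print(p_hat, pseudo_label, uncertainty_reward(p_hat))  # 0.625 42 0.75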

Tool use reward: The executor can trigger a sandboxed code interpreter using python tags and receives results tagged as output. Agent0 counts the number of tool calls in a trajectory and gives a scaled, capped reward, with the cap C set to 4 in experiments. This favors tasks that actually require tool calls rather than pure mental arithmetic.
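
A small sketch of a scaled, capped tool-call count; the <python> tag format and the linear scaling are assumptions, only the cap C = 4 comes from the article:

import re

def tool_use_reward(trajectory_text, cap=4):
    # Count tool calls in the trajectory and scale, capping at C = 4.
    num_calls = len(re.findall(r"<python>", trajectory_text))
    return min(num_calls, cap) / cap

print(tool_use_reward("<python>print(2 + 2)</python> <output>4</output>"))  # 0.25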

Repetition penalty: Within each curriculum batch, Agent0 measures pairwise similarity between tasks using a BLEU-based distance. Tasks are clustered, and a penalty term increases with cluster size. This discourages the curriculum from producing many near-duplicates.
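
One way to implement this, assuming a greedy clustering over a BLEU-based distance (the threshold and penalty scaling are illustrative guesses, not the paper's exact procedure; requires nltk):

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_distance(task_a, task_b):
    # 1 - BLEU between two task statements; larger means more different.
    smooth = SmoothingFunction().method1
    return 1.0 - sentence_bleu([task_a.split()], task_b.split(), smoothing_function=smooth)

def repetition_penalties(tasks, dist_threshold=0.5):
    # Greedily cluster near-duplicate tasks, then penalize each task in
    # proportion to the size of its cluster.
    clusters = []
    for i, task in enumerate(tasks):
        for cluster in clusters:
            if bleu_distance(tasks[cluster[0]], task) < dist_threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    penalties = [0.0] * len(tasks)
    for cluster in clusters:
        for i in cluster:
            penalties[i] = (len(cluster) - 1) / len(tasks)
    return penalties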

A composite reward multiplies a format check with a weighted sum of the uncertainty and tool rewards, minus the repetition penalty. This composite value feeds into GRPO to update πθ.
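
Read literally, the composite reward has roughly this shape; the weights and the exact scope of the format gate are assumptions:

def composite_curriculum_reward(format_ok, r_uncertainty, r_tool, rep_penalty,
                                w_unc=1.0, w_tool=0.5):
    # The format check gates a weighted sum of uncertainty and tool rewards,
    # minus the repetition penalty; weights are illustrative.
    return float(format_ok) * (w_unc * r_uncertainty + w_tool * r_tool) - rep_penalty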

How the executor learns from noisy self-labels

The executor is also trained with GRPO, but on multi-turn, tool-integrated trajectories and pseudo labels instead of ground-truth answers.

Frontier dataset construction: After curriculum training in an iteration, the frozen curriculum generates a large candidate pool. For each task, Agent0 computes self-consistency p̂(x) with the current executor and keeps only tasks where p̂ lies in an informative band, for example between 0.3 and 0.8. This defines a challenging frontier dataset that avoids trivial or impossible problems.
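
A sketch of the frontier filter, where executor_sample_fn is a hypothetical callable returning k final answers sampled from the current executor:

from collections import Counter

def build_frontier(tasks, executor_sample_fn, k=8, band=(0.3, 0.8)):
    # Keep only tasks whose self-consistency falls in the informative band.
    frontier = []
    for task in tasks:
        answers = executor_sample_fn(task, k)
        p_hat = Counter(answers).most_common(1)[0][1] / len(answers)
        if band[0] <= p_hat <= band[1]:
            frontier.append((task, p_hat))
    return frontier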

Multi-turn tool-integrated rollouts: For each frontier task, the executor generates a trajectory that can interleave:

  • natural-language reasoning tokens,
  • python code segments,
  • tool feedback tagged as output.

Generation pauses when a tool call appears, the code is executed in a sandboxed interpreter built on VeRL Tool, and generation resumes conditioned on the result. The trajectory terminates when the model produces a final answer inside \boxed{...} tags.
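
A rollout sketch under these assumptions: generate_fn continues the text until it closes a tool call or emits a final answer, run_python_fn executes code in a sandbox and returns its output, and the tag names mirror the article's description rather than VeRL Tool's exact format:

import re

def tool_integrated_rollout(generate_fn, run_python_fn, prompt, max_turns=8):
    text = prompt
    for _ in range(max_turns):
        text = generate_fn(text)
        if r"\boxed{" in text:                # final answer produced, stop
            break
        match = re.search(r"<python>(.*?)</python>\s*$", text, re.DOTALL)
        if match:                             # pause, run the tool, resume
            result = run_python_fn(match.group(1))
            text += f"\n<output>{result}</output>\n"
    return text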

A majority vote across the sampled trajectories defines a pseudo label and a terminal reward for each trajectory.

ADPO, ambiguity-aware RL: Standard GRPO treats all samples equally, which is unstable when labels come from majority voting on ambiguous tasks. ADPO modifies GRPO in two ways, using p̂ as an ambiguity signal; a minimal sketch follows the list below.

  • It scales the normalized advantage with a factor that increases with self-consistency, so trajectories from low-confidence tasks contribute less.
  • It sets a dynamic upper clipping bound for the importance ratio, which depends on self-consistency. Empirical analysis shows that a fixed upper clip mainly affects low-probability tokens. ADPO relaxes this bound adaptively, which improves exploration on uncertain tasks, as visualized by the up-clipped token probability statistics.
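
In code, the two mechanisms could look roughly like this; using p̂ directly as the advantage scale and a linear relaxation of the upper clip are assumptions about the functional forms, not the paper's exact equations:

def adpo_advantage_scale(advantage, p_hat):
    # Down-weight trajectories from ambiguous tasks: scale the normalized
    # GRPO advantage by a factor that grows with self-consistency.
    return p_hat * advantage

def adpo_upper_clip(p_hat, eps_high=0.2, delta=0.2):
    # Dynamic upper clipping bound for the importance ratio: relaxed (raised)
    # on uncertain tasks so low-probability tokens can be explored more.
    return 1.0 + eps_high + delta * (1.0 - p_hat)

def clipped_term(ratio, advantage, p_hat, eps_low=0.2):
    # GRPO/PPO-style clipped surrogate using the dynamic upper bound.
    scaled_adv = adpo_advantage_scale(advantage, p_hat)
    clipped_ratio = max(min(ratio, adpo_upper_clip(p_hat)), 1.0 - eps_low)
    return min(ratio * scaled_adv, clipped_ratio * scaled_adv)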

Results on mathematical and general reasoning

Agent0 is implemented on top of VeRL and evaluated on Qwen3 4B Base and Qwen3 8B Base. It uses a sandboxed Python interpreter as the only external tool.

The research team evaluates on ten benchmarks:

  • Mathematical reasoning: AMC, Minerva, MATH, GSM8K, OlympiadBench, AIME24, AIME25.
  • General reasoning: SuperGPQA, MMLU-Pro, BBEH.

They report pass@1 for most datasets and mean@32 for the AMC and AIME tasks.

For Qwen3 8B Base, Agent0 reaches:

  • a math average of 58.2 versus 49.2 for the base model,
  • an overall general-reasoning average of 42.1 versus 34.5 for the base model.

Agent0 also improves over strong data-free baselines such as R-Zero, Absolute Zero, SPIRAL and Socratic-Zero, both with and without tools. On Qwen3 8B, it surpasses R-Zero by 6.4 percentage points and Absolute Zero by 10.6 points on the overall average. It also beats Socratic-Zero, which relies on external OpenAI APIs.

Across three co-evolution iterations, average math performance on Qwen3 8B increases from 55.1 to 58.2, and general reasoning also improves per iteration. This indicates stable self-improvement rather than collapse.

Qualitative examples show that curriculum tasks evolve from basic geometry questions to complex constraint-satisfaction problems, while executor trajectories mix reasoning text with Python calls to reach correct answers.

Key Takeaways

  1. Fully data-free co-evolution: Agent0 eliminates external datasets and human annotations. Two agents, a curriculum agent and an executor agent, are initialized from the same base LLM and co-evolve solely through reinforcement learning and a Python tool.
  2. Frontier curriculum from self-uncertainty: The curriculum agent uses the executor's self-consistency and tool usage to score tasks. It learns to generate frontier tasks that are neither trivial nor impossible, and that explicitly require tool-integrated reasoning.
  3. ADPO stabilizes RL with pseudo labels: The executor is trained with Ambiguity Dynamic Policy Optimization. ADPO down-weights highly ambiguous tasks and adapts the clipping range based on self-consistency, which keeps GRPO-style updates stable when rewards come from majority-vote pseudo labels.
  4. Consistent gains on math and general reasoning: On Qwen3 8B Base, Agent0 improves the math benchmark average from 49.2 to 58.2 and general reasoning from 34.5 to 42.1, corresponding to relative gains of about 18% and 24%.
  5. Outperforms prior zero-data frameworks: Across ten benchmarks, Agent0 surpasses earlier self-evolving methods such as R-Zero, Absolute Zero, SPIRAL and Socratic-Zero, including those that already use tools or external APIs. This shows that the co-evolution plus tool-integration design is a meaningful step beyond earlier single-round self-play approaches.

Editorial Notes

Agent0 is an important step toward practical, data-free reinforcement learning for tool-integrated reasoning. It shows that a base LLM can act as both Curriculum Agent and Executor Agent, and that GRPO with ADPO and VeRL Tool can drive stable improvement from majority-vote pseudo labels. The method also demonstrates that tool-integrated co-evolution can outperform prior zero-data frameworks such as R-Zero and Absolute Zero on strong Qwen3 baselines. Agent0 makes a strong case that self-evolving, tool-integrated LLM agents are becoming a realistic training paradigm.


Check out the Paper and the Repo for more details.


Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.
