
Top LLM Inference Providers Compared


TL;DR

In this post, we explore how leading inference providers perform on the GPT-OSS-120B model using benchmarks from Artificial Analysis. You'll learn what matters most when evaluating inference platforms, including throughput, time to first token, and cost efficiency. We compare Vertex AI, Azure, AWS, Databricks, Clarifai, Together AI, Fireworks, Nebius, CompactifAI, and Hyperbolic on their performance and deployment efficiency.

Introduction

Large language models (LLMs) like GPT-OSS-120B, an open-weight 120-billion-parameter mixture-of-experts model, are designed for advanced reasoning and multi-step generation. Reasoning workloads consume tokens rapidly and place high demands on compute, so deploying these models in production requires inference infrastructure that delivers low latency, high throughput, and low cost.

Differences in hardware, software optimizations, and resource allocation strategies can lead to large variations in latency, efficiency, and cost. These differences directly affect real-world applications such as reasoning agents, document understanding systems, or copilots, where even small delays can impact overall responsiveness and throughput.

To evaluate these differences objectively, independent benchmarks have become essential. Instead of relying on internal performance claims, open and data-driven evaluations now offer a more transparent way to assess how different platforms perform under real workloads.

In this post, we compare leading GPU-based inference providers using the GPT-OSS-120B model as a reference benchmark. We examine how each platform performs across key inference metrics such as throughput, time to first token, and cost efficiency, and how these trade-offs affect performance and scalability for reasoning-heavy workloads.

Before diving into the results, let's take a quick look at Artificial Analysis and how their benchmarking framework works.

Artificial Analysis Benchmarks

Artificial Analysis (AA) is an independent benchmarking initiative that runs standardized tests across inference providers to measure how models like GPT-OSS-120B perform in real conditions. Their evaluations focus on realistic workloads involving long contexts, streaming outputs, and reasoning-heavy prompts rather than short, synthetic samples.

You can explore the full GPT-OSS-120B benchmark results here.

Artificial Analysis evaluates a range of performance metrics, but here we focus on the three key factors that matter when choosing an inference platform for GPT-OSS-120B: time to first token, throughput, and cost per million tokens.

  • Time to First Token (TTFT)
    The time between sending a prompt and receiving the model's first token. Lower TTFT means output starts streaming sooner, which is critical for interactive applications and multi-step reasoning where delays can disrupt the flow.
  • Throughput (tokens per second)
    The rate at which tokens are generated once streaming begins. Higher throughput shortens total completion time for long outputs and allows more concurrent requests, directly affecting scalability for large-context or multi-turn workloads.
  • Cost per million tokens (blended price)
    A combined metric that accounts for both input and output token pricing. It gives a clear view of operational costs for extended contexts and streaming workloads, helping teams plan for predictable expenses. (See the sketch after this list for how these metrics can be measured and computed.)
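To make these definitions concrete, here is a minimal sketch of how TTFT and throughput can be measured against any OpenAI-compatible streaming endpoint, and how a blended price can be computed. The base URL, API key, and model name are placeholders rather than any specific provider's values, and the 3:1 input-to-output weighting in the price helper is an assumption reflecting a common reporting convention, not a figure taken from the benchmarks above.

```python
# Minimal sketch: measuring TTFT and throughput from a streaming response,
# and computing a blended price. Endpoint, key, and model are placeholders.
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

def measure_stream(prompt: str, model: str = "gpt-oss-120b"):
    start = time.perf_counter()
    first_token_at = None
    n_chunks = 0  # streamed content chunks, a rough proxy for tokens
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if first_token_at is None:
                first_token_at = time.perf_counter()  # first content arrives
            n_chunks += 1
    end = time.perf_counter()
    ttft = first_token_at - start                   # time to first token (s)
    throughput = n_chunks / (end - first_token_at)  # tokens/s once streaming
    return ttft, throughput

def blended_price(input_usd_per_1m: float, output_usd_per_1m: float,
                  ratio: float = 3.0) -> float:
    """Blended $/1M tokens, assuming `ratio` input tokens per output token."""
    return (ratio * input_usd_per_1m + output_usd_per_1m) / (ratio + 1)
```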

Benchmark Methodology

  • Prompt Size: Benchmarks covered in this blog use a 1,000-token input prompt run by Artificial Analysis, reflecting a typical real-world scenario such as a chatbot query or reasoning-heavy instruction. Benchmarks for significantly longer prompts are also available and can be explored for reference here.
  • Median Measurements: The reported values represent the median (p50) over the past 72 hours, capturing sustained performance trends rather than single-point spikes or dips (a toy illustration follows this list). For the most up-to-date benchmark results, visit the Artificial Analysis GPT-OSS-120B model providers page here.
  • Metrics Focus: This summary highlights time to first token (TTFT), throughput, and blended price to provide a practical view for workload planning. Other metrics, such as end-to-end response time, latency by input token count, and time to first answer token, are also measured by Artificial Analysis but aren't included in this overview.
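As a toy illustration of why the median is used, the hypothetical readings below show how p50 shrugs off a one-off spike that would distort an average:

```python
# Toy illustration: p50 (median) summarizes sustained performance and is
# robust to one-off spikes. The TTFT readings below are hypothetical.
from statistics import mean, median

ttft_samples = [0.31, 0.35, 0.29, 0.90, 0.33]  # hypothetical TTFT readings (s)
print(f"mean TTFT: {mean(ttft_samples):.2f} s")    # 0.44 s, skewed by the spike
print(f"p50  TTFT: {median(ttft_samples):.2f} s")  # 0.33 s, the typical case
```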

With this methodology in mind, we can now compare how different GPU-based platforms perform on GPT-OSS-120B and what these results imply for reasoning-heavy workloads.

Provider Comparison (GPT-OSS-120B)

Clarifai

  • Time to First Token: 0.32 s

  • Throughput: 544 tokens/s

  • Blended Price: $0.16 per 1M tokens

  • Notes: Extremely high throughput; low latency; cost-efficient; strong choice for reasoning-heavy workloads.

Key Features:

  • GPU fractioning and autoscaling options for efficient compute utilization
  • Local runners to execute models locally on your own hardware for testing and development
  • On-prem, VPC, and multi-site deployment options
  • Control Center for monitoring and managing usage and performance

Google Vertex AI

  • Time to First Token: 0.40 s

  • Throughput: 392 tokens/s

  • Blended Price: $0.26 per 1M tokens

  • Notes: Moderate latency and throughput; suitable for general-purpose reasoning workloads.

Key Features:

  • Integrated AI tools (AutoML, training, deployment, monitoring)

  • Scalable cloud infrastructure for batch and online inference

  • Enterprise-grade security and compliance

Microsoft Azure

  • Time to First Token: 0.48 s

  • Throughput: 348 tokens/s

  • Blended Price: $0.26 per 1M tokens

  • Notes: Slightly higher latency; balanced performance and cost for standard workloads.

Key Features:

  • Comprehensive AI services (ML, cognitive services, custom bots)

  • Deep integration with the Microsoft ecosystem

  • Global enterprise-grade infrastructure

Hyperbolic

  • Time to First Token: 0.52 s

  • Throughput: 395 tokens/s

  • Blended Price: $0.30 per 1M tokens

  • Notes: Higher price than peers; good throughput for reasoning-heavy tasks.

AWS

  • Time to First Token: 0.64 s

  • Throughput: 252 tokens/s

  • Blended Price: $0.26 per 1M tokens

  • Notes: Lower throughput and higher latency; suitable for less time-sensitive workloads.

Key Features:

  • Broad AI/ML service portfolio (Bedrock, SageMaker)

  • Global cloud infrastructure

  • Enterprise-grade security and compliance

Databricks

  • Time to First Token: 0.36 s

  • Throughput: 195 tokens/s

  • Blended Price: $0.26 per 1M tokens

  • Notes: Lower throughput; acceptable latency; better for batch or background tasks.

Key Features:

  • Unified analytics platform (Spark + ML + notebooks)

  • Collaborative workspace for teams

  • Scalable compute for large ML/AI workloads

Together AI

  • Time to First Token: 0.25 s

  • Throughput: 248 tokens/s

  • Blended Price: $0.26 per 1M tokens

  • Notes: Very low latency; moderate throughput; good for real-time reasoning-heavy applications.

Key Features:

  • Real-time inference and training

  • Cloud/VPC-based deployment orchestration

  • Flexible and secure platform

Fireworks AI

  • Time to First Token: 0.44 s

  • Throughput: 482 tokens/s

  • Blended Price: $0.26 per 1M tokens

  • Notes: High throughput and balanced latency; suitable for interactive applications.

CompactifAI

  • Time to First Token: 0.29 s

  • Throughput: 186 tokens/s

  • Blended Price: $0.10 per 1M tokens

  • Notes: Low cost; lower throughput; best for cost-sensitive workloads with smaller concurrency needs.

Key Features:

  • Efficient, compressed models for cost savings

  • Simplified deployment on AWS

  • Optimized for high-throughput batch inference

Nebius Base

  • Time to First Token: 0.66 s

  • Throughput: 165 tokens/s

  • Blended Price: $0.26 per 1M tokens

  • Notes: Significantly lower throughput and higher latency; may struggle with reasoning-heavy or interactive workloads.

Key Features:

  • Basic AI service endpoints

  • Standard cloud infrastructure

  • Suitable for steady-demand workloads

Best Providers Based on Cost and Throughput

Choosing the right inference provider for GPT-OSS-120B requires weighing time to first token, throughput, and cost against your workload. Platforms like Clarifai offer high throughput, low latency, and competitive pricing, making them well-suited for reasoning-heavy or interactive tasks. Other providers, such as CompactifAI, prioritize lower cost but come with reduced throughput, which may be more suitable for cost-sensitive or batch-oriented workloads. The optimal choice depends on which trade-offs matter most for your applications.

Best for Cost

  • CompactifAI: Lowest blended price at $0.10 per 1M tokens; best for cost-sensitive batch workloads.

  • Clarifai: $0.16 per 1M tokens combined with the highest throughput in this comparison, the strongest price-performance balance.

Best for Throughput

  • Clarifai: Highest throughput at 544 tokens/s with low first-token latency.

  • Fireworks AI: Strong throughput at 482 tokens/s and moderate latency.

  • Hyperbolic: Good throughput at 395 tokens/s; higher price but viable for heavy workloads.

Performance and Flexibility

Along with price and throughput, flexibility is essential for real-world workloads. Teams often need control over scaling behavior, GPU utilization, and deployment environments to manage cost and efficiency.

Clarifai, for example, supports fractional GPU usage, autoscaling, and local runners, features that can improve efficiency and reduce infrastructure overhead.

These capabilities extend beyond GPT-OSS-120B. With the Clarifai Reasoning Engine, custom or open-weight reasoning models can run with consistent performance and reliability. The engine also adapts to workload patterns over time, gradually improving speed for repetitive tasks without sacrificing accuracy.
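As a hedged sketch of what this looks like in practice, the snippet below streams a GPT-OSS-120B completion through Clarifai's OpenAI-compatible endpoint. The base URL follows Clarifai's published convention, but the exact model path is an assumption; confirm both on the model's page in your Clarifai account.

```python
# Hedged sketch: streaming a GPT-OSS-120B completion through Clarifai's
# OpenAI-compatible endpoint. The model path below is an assumption; check
# the model's page in your Clarifai account for the authoritative value.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.clarifai.com/v2/ext/openai/v1",  # assumed endpoint
    api_key="YOUR_CLARIFAI_PAT",  # personal access token
)

stream = client.chat.completions.create(
    model="https://clarifai.com/openai/chat-completion/models/gpt-oss-120b",  # assumed
    messages=[{"role": "user", "content": "Explain the trade-off between TTFT and throughput."}],
    stream=True,  # streaming surfaces the low time-to-first-token benefit
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```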

Benchmark Summary

So far, we've compared providers based on throughput, latency, and cost using the Artificial Analysis benchmarks. To see how these trade-offs play out in practice, here's a visual summary of the results across the different providers. These charts come directly from Artificial Analysis.

The first chart highlights output speed vs. price, while the second compares latency vs. output speed.

Output Speed vs. Price (8 Oct 25)

Latency vs. Output Speed (8 Oct 25)

Below is a detailed comparison table summarizing the key metrics for GPT-OSS-120B inference across providers.

Provider            Throughput (tokens/s)   Time to First Token (s)   Blended Price ($/1M tokens)
Clarifai            544                      0.32                      0.16
Google Vertex AI    392                      0.40                      0.26
Microsoft Azure     348                      0.48                      0.26
Hyperbolic          395                      0.52                      0.30
AWS                 252                      0.64                      0.26
Databricks          195                      0.36                      0.26
Together AI         248                      0.25                      0.26
Fireworks AI        482                      0.44                      0.26
CompactifAI         186                      0.29                      0.10
Nebius Base         165                      0.66                      0.26
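A useful way to read this table is to estimate end-to-end response time as TTFT plus output tokens divided by throughput. The back-of-envelope sketch below applies that formula, along with each provider's blended price, to a hypothetical 1,000-token response; it is a rough model, not a substitute for the live Artificial Analysis measurements.

```python
# Back-of-envelope reading of the table above: estimated end-to-end time for a
# response is roughly TTFT + output_tokens / throughput, and cost is
# approximated at the blended rate. A rough model, not a live benchmark.
providers = {
    # name: (throughput tokens/s, TTFT s, blended $ per 1M tokens)
    "Clarifai":         (544, 0.32, 0.16),
    "Fireworks AI":     (482, 0.44, 0.26),
    "Hyperbolic":       (395, 0.52, 0.30),
    "Google Vertex AI": (392, 0.40, 0.26),
    "Microsoft Azure":  (348, 0.48, 0.26),
    "AWS":              (252, 0.64, 0.26),
    "Together AI":      (248, 0.25, 0.26),
    "Databricks":       (195, 0.36, 0.26),
    "CompactifAI":      (186, 0.29, 0.10),
    "Nebius Base":      (165, 0.66, 0.26),
}

OUTPUT_TOKENS = 1_000  # hypothetical response length
for name, (tps, ttft, price) in providers.items():
    est_seconds = ttft + OUTPUT_TOKENS / tps
    est_cost = price * OUTPUT_TOKENS / 1_000_000  # approx. $ at the blended rate
    print(f"{name:<18} ~{est_seconds:4.1f} s  ~${est_cost:.6f} per response")
```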

Conclusion

Choosing an inference provider for GPT-OSS-120B involves balancing throughput, latency, and cost. Each provider handles these trade-offs differently, and the best choice depends on the specific workload and performance requirements.

Providers with high throughput excel at reasoning-heavy or interactive tasks, while those with lower median throughput may be better suited to batch or background processing where speed is less critical. Latency also plays a key role: low time to first token improves responsiveness for real-time applications, while slightly higher latency may be acceptable for less time-sensitive tasks.

Cost considerations remain important. Some providers offer strong performance at low blended prices, while others trade efficiency for price. Benchmarks covering throughput, time to first token, and blended price provide a clear basis for understanding these trade-offs.

Ultimately, the right provider depends on the engineering problem, workload characteristics, and which trade-offs matter most for the application.

 

Learn more about Clarifai's reasoning engine

The Fastest AI Inference and Reasoning on GPUs.

Verified by Artificial Analysis

 


