This blog post focuses on new features and improvements. For a comprehensive list, including bug fixes, please see the release notes.
A new Python-based approach to model upload and inference
We have completely revamped the way models are uploaded and used for inference with a new Python-based approach that prioritizes simplicity, speed, and developer experience.
Built with a Python-first, user-centric design, this flexible approach simplifies the process of working with models. It lets users focus more on building and iterating, and less on navigating API mechanics. The new approach streamlines inference, accelerates development, and significantly improves overall usability.
Model Upload
The Clarifai Python SDK now makes it even easier to upload custom models. Whether you are using a pre-trained model from Hugging Face or OpenAI, or one you have developed from scratch, integration is seamless. Once uploaded, your model can immediately take advantage of Clarifai's robust platform features.
After upload, your model is automatically deployed and ready to use. You can evaluate it, connect it with other models and agent operators in a workflow, or serve inference requests directly.
As part of this release, we have significantly simplified how you define the model.py file for custom model uploads. The new ModelClass pattern lets you implement predict, generate, and streaming methods without extra abstraction or boilerplate. You can get started in just a few lines of code.
Here's a quick example: a simple method that appends "Hello World" to any input text, with built-in support for different types of streaming responses. Check out the full documentation here.
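The snippet below is a minimal sketch of what such a model.py could look like. It assumes the ModelClass base class and method decorator exposed by recent versions of the Clarifai Python SDK; refer to the linked documentation for the exact imports and signatures.

```python
# model.py -- a minimal sketch of the new ModelClass pattern.
# Assumes the ModelClass base class and @ModelClass.method decorator
# from recent Clarifai Python SDK releases; check the docs for exact imports.
from typing import Iterator

from clarifai.runners.models.model_class import ModelClass


class HelloWorldModel(ModelClass):

    @ModelClass.method
    def predict(self, text1: str = "") -> str:
        # Unary predict: return the input with "Hello World" appended.
        return text1 + " Hello World"

    @ModelClass.method
    def generate(self, text1: str = "") -> Iterator[str]:
        # Server-side streaming: yield the response word by word.
        for word in (text1 + " Hello World").split():
            yield word + " "

    @ModelClass.method
    def stream(self, input_iterator: Iterator[str]) -> Iterator[str]:
        # Bidirectional streaming: append "Hello World" to each streamed input.
        for text in input_iterator:
            yield text + " Hello World"
```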
Inference
The new inference approach offers an efficient, scalable, and simplified way to run predictions with your models.
Designed with a Python-first, developer-friendly focus, it reduces complexity so you can spend more time building and iterating, and less time dealing with low-level API details.
Below is an example of how to make a client-side predict call that corresponds to the predict method defined in the previous section. Check out the docs here.
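The following is a minimal sketch under a few assumptions: the model URL is a placeholder for wherever you uploaded the model above, the personal access token comes from a CLARIFAI_PAT environment variable, and the keyword-argument calls mirror the method signatures defined in model.py, as described in the docs.

```python
# Client-side calls matching the ModelClass methods sketched above.
# The model URL is a placeholder; replace it with your own model's URL.
import os

from clarifai.client import Model

# Assumes CLARIFAI_PAT is set in the environment.
model = Model(
    url="https://clarifai.com/your-user-id/your-app-id/models/hello-world-model",
    pat=os.environ["CLARIFAI_PAT"],
)

# Unary call: mirrors the predict method signature (text1 -> str).
print(model.predict(text1="Hi"))  # e.g. "Hi Hello World"

# Streaming call: mirrors the generate method, yielding partial outputs.
for chunk in model.generate(text1="Hi"):
    print(chunk, end="")
```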
New Published Models
- Published Llama-4-Scout-17B-16E-Instruct, a powerful model in the Llama 4 series featuring 17 billion parameters and 16 experts for advanced instruction tuning. It supports a native 10 million-token context window (currently 8k supported on Clarifai), making it ideal for multi-document analysis, complex codebase understanding, and personalized, intelligent workflows.
- Published Qwen3-30B-A3B-GGUF, the latest addition to the Qwen series. This new release features both dense and mixture-of-experts (MoE) models, with significant improvements in reasoning, instruction-following, agent-based tasks, and multilingual capabilities. Qwen3-30B-A3B outperforms larger models like QwQ-32B, using fewer active parameters while maintaining strong performance across coding and reasoning benchmarks.
- Published OpenAI's latest o3 model, a powerful and well-rounded LLM that sets a new standard for performance across math, science, coding, and visual reasoning tasks. It is built for complex, multi-step thinking and excels at technical problem-solving, interpreting visual data such as charts and diagrams, high-stakes decision-making, and creative ideation.
- Published o4-mini, a smaller model optimized for fast, cost-efficient reasoning. Despite its compact size, o4-mini delivers impressive accuracy on math and coding benchmarks like AIME 2025. It is ideal for use cases that require strong reasoning capabilities while keeping latency and cost low. Both models are also available in the Playground; try them out here, or call them from the SDK as in the sketch after this list.
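As a quick illustration, here is a hedged sketch of prompting one of these published models with the SDK's predict_by_bytes helper. The model URL shown is an assumption based on Clarifai's usual openai/chat-completion app layout, so copy the exact URL from the model's page.

```python
# A minimal sketch of prompting a published model (o4-mini here).
# The model URL is assumed; take the exact path from the model page on Clarifai.
import os

from clarifai.client import Model

model = Model(
    url="https://clarifai.com/openai/chat-completion/models/o4-mini",
    pat=os.environ["CLARIFAI_PAT"],
)

response = model.predict_by_bytes(
    b"Summarize the difference between o3 and o4-mini in one sentence.",
    input_type="text",
)
print(response.outputs[0].data.text.raw)
```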
Enhanced the Playground experience
- Added automatic mode detection based on the selected model: the Playground now intelligently switches between Chat and Vision modes for predictions.
- Improved model search and identification for a faster, more accurate selection experience.
- Introduced a Personal Access Token (PAT) dropdown, enabling users to easily insert their PAT keys into code snippets.
- Implemented a dynamic pricing display that updates based on the selected deployment.
- The selected deployment ID is now automatically injected into the inference code.
Enhanced the Control Center
Improved the Community platform
- Revamped the Explore page with refreshed visual designs, a featured models showcase, and categorized use cases such as LLMs and VLMs.
- Updated the individual model viewer page with an improved UI, direct access to the Playground, deployment listings, and more enhancements.
Additional Changes
- The Home page is now accessible to all users, with sections that require login automatically hidden for logged-out visitors. A new “Recent Activity” section shows users their most recent actions and operations. We also made improvements to usability, performance, and the overall user experience.
- New organization accounts now start on the Community plan by default, instead of inheriting the user's personal plan. This change applies to users on the Community, Essential, and Professional plans; Enterprise users are not affected. The “Member Since” column now shows when a member joined the organization, and Settings pages are hidden from users without the required permissions.
- The billing section has been redesigned for a more intuitive credit card management experience. We added validation to prevent duplicate card entries and support for setting or changing the default credit card.
- The Python SDK now supports Pythonic models for a more native experience. We fixed failing tests to improve stability. The CLI is now ~20x faster for most operations and includes config contexts, improved error messages, and corrected return arguments in the model builder. Learn more here.
Ready to start building?
With this Python-first release, uploading and running inference on your custom models is now faster, simpler, and more intuitive than ever. Whether you are integrating a pre-trained model or deploying one you have built from scratch, the Clarifai Python SDK gives you the tools to move from prototype to production with minimal overhead.
Explore the documentation and start building today.