

Gemini 2.5 Pro and Flash are now generally available, and Gemini 2.5 Flash-Lite is in preview
According to Google, no changes have been made to Pro and Flash since the last preview, other than the pricing for Flash, which is different. When these models were first announced, there was separate thinking and non-thinking pricing, but Google said that separation led to confusion among developers.
The new pricing for 2.5 Flash is the same for both thinking and non-thinking modes. The prices are now $0.30/1 million input tokens for text, image, and video; $1.00/1 million input tokens for audio; and $2.50/1 million output tokens for all. This represents an increase in input cost and a decrease in output cost.
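As a rough illustration of how the unified rates work out per request, here is a minimal cost-estimate sketch in Python that uses only the per-token prices quoted above; the token counts in the example are made up.

```python
# Rough per-request cost estimate for Gemini 2.5 Flash using the unified rates
# above (USD per 1 million tokens). Example token counts are hypothetical.
RATE_INPUT_TEXT_IMAGE_VIDEO = 0.30
RATE_INPUT_AUDIO = 1.00
RATE_OUTPUT = 2.50

def estimate_cost_usd(input_tokens: int, output_tokens: int, audio_input_tokens: int = 0) -> float:
    """Estimate the cost of a single request in US dollars."""
    return (
        input_tokens * RATE_INPUT_TEXT_IMAGE_VIDEO
        + audio_input_tokens * RATE_INPUT_AUDIO
        + output_tokens * RATE_OUTPUT
    ) / 1_000_000

# Example: 10,000 text input tokens and 2,000 output tokens -> $0.0080
print(f"${estimate_cost_usd(10_000, 2_000):.4f}")
```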
Google also launched a preview of Gemini 2.5 Flash-Lite, which has the lowest latency and cost among the 2.5 models. The company positions it as a cost-effective upgrade from 1.5 and 2.0 Flash, with better performance across most evaluations, lower time to first token, and higher tokens-per-second decode.
Gemini 2.5 Flash-Lite also allows users to control the thinking budget via an API parameter. Because the model is designed for cost and speed efficiency, thinking is turned off by default.
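As a sketch of what setting a thinking budget might look like with the Google GenAI Python SDK, under the assumption that the parameter behaves as described above; the model identifier used here is an assumption for the preview and may differ.

```python
from google import genai
from google.genai import types

# Assumes the GEMINI_API_KEY environment variable is set.
client = genai.Client()

# Thinking is off by default on Flash-Lite; setting a thinking_budget opts in
# to a capped amount of reasoning tokens for harder requests.
# "gemini-2.5-flash-lite" is an assumed model identifier for the preview.
response = client.models.generate_content(
    model="gemini-2.5-flash-lite",
    contents="Outline a migration plan from REST to gRPC for a small service.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=1024)  # 0 keeps thinking off
    ),
)

print(response.text)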
GitHub Copilot Spaces arrive
GitHub Copilot Spaces allow developers to bundle the context Copilot should read into a reusable space, which can include things like code, docs, transcripts, or sample queries.
Once the space is created, every chat, completion, or command Copilot works from will be grounded in that knowledge, enabling it to produce “answers that feel like they came from your team’s resident expert instead of a generic model,” GitHub explained.
Copilot Spaces will be free during the public preview and won’t count against Copilot seat entitlements when the base model is used.
OpenAI improves prompting in the API
The company has made it easier to reuse, share, save, and manage prompts in the API by making prompts an API primitive.
Prompts can be reused across the Playground, API, Evals, and Stored Completions. The Prompt object can also be referenced in the Responses API and OpenAI’s SDKs.
Additionally, the Playground now has a button that can optimize a prompt for use in the API.
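As a rough sketch of what referencing a saved Prompt object from the Responses API might look like with the OpenAI Python SDK; the prompt ID, version, and variable name below are hypothetical placeholders, not values from OpenAI.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Reference a saved prompt by ID rather than pasting the prompt text inline.
# "pmpt_example_id" and the "customer_name" variable are placeholders for a
# prompt created and versioned in the Playground, which also supplies the
# model and message template.
response = client.responses.create(
    prompt={
        "id": "pmpt_example_id",
        "version": "2",
        "variables": {"customer_name": "Ada"},
    },
)

print(response.output_text)
```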
“By unifying prompts across our surfaces, we hope these changes will help you refine and reuse prompts better—and more promptly,” OpenAI wrote in a post.
Syncfusion releases Code Studio
Code Studio is an AI-powered code editor that differs from other offerings on the market by having the LLM draw on Syncfusion’s library of over 1,900 pre-tested UI components rather than generating code from scratch.
It offers four different assistance modes: Autocomplete, Chat, Edit, and Agent. It works with models from OpenAI, Anthropic, Google, Mistral, and Cohere, as well as self-hosted models. It also comes with governance capabilities like role-based access, audit logging, and an admin console that provides usage insights.
“Code Studio began as an in-house tool and today writes up to a third of our code,” said Daniel Jebaraj, CEO of Syncfusion. “We created a secure, model-agnostic assistant so enterprises can plug it into their stack, tap our proven UI components, and ship cleaner features in less time.”
AI Alliance splits into two new non-profits
The AI Alliance is a collaborative effort among over 180 organizations across research, academia, and industry, including Carnegie Mellon University, Hugging Face, IBM, and Meta. It has now been incorporated into a 501(c)(3) research and education lab and a 501(c)(6) AI technology and advocacy organization.
The research and education lab will focus on “managing and supporting scientific and open-source projects that enable open community experimentation and learning, leading to better, more capable, and accessible open-source and open data foundations for AI.”
The technology and advocacy organization will focus on “global engagement on open-source AI advocacy and policy, driving technology development, industry standards and best practices.”
Digital.ai introduces Quick Protect Agent
Quick Protect Agent is a mobile application security agent that follows the recommendations of OWASP MASVS, an industry standard for mobile app security. Examples of OWASP MASVS protections include obfuscation, anti-tampering, and anti-analysis.
“With Quick Protect Agent, we are expanding application security to a broader audience, enabling organizations both large and small to add powerful protections in just a few clicks,” said Derek Holt, CEO of Digital.ai. “In today’s AI world, all apps are at risk, and by democratizing our app hardening capabilities, we are enabling the protection of more applications across a broader set of industries. With 83% of applications under constant attack – the continued innovation within our core offerings, including the launch of our new Quick Protect Agent, could not be coming at a more critical time.”
IBM launches new integration to help unify AI security and governance
IBM is integrating its watsonx.governance and Guardium AI Security solutions so that companies can manage both from a single tool. The integrated solution will be able to validate against 12 different compliance frameworks, including the EU AI Act and ISO 42001.
Guardium AI Security is being updated to detect new AI use cases in cloud environments, code repositories, and embedded systems. It can then automatically trigger the appropriate governance workflows from watsonx.governance.
“AI agents are set to revolutionize enterprise productivity, but the very benefits of AI agents may also present a challenge,” said Ritika Gunnar, general manager of data and AI at IBM. “When these autonomous systems aren’t properly governed or secured, they can carry steep consequences.”
Secure Code Warrior introduces AI Security Rules
This new ruleset provides developers with guidance for using AI coding assistants securely. It enables them to establish guardrails that discourage the AI from risky patterns, such as unsafe eval usage, insecure authentication flows, or failure to use parameterized queries.
They can be adapted for use with a variety of coding assistants, including GitHub Copilot, Cline, Roo, Cursor, Aider, and Windsurf.
The rules can be used as-is or adapted to a company’s tech stack or workflow so that AI-generated output aligns better across projects and contributors, as illustrated in the sketch below.
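The ruleset itself isn’t reproduced here, but as an illustration of one pattern it targets, the following sketch contrasts string-built SQL with the parameterized form such guardrails would steer an assistant toward (illustrative Python, not taken from Secure Code Warrior’s rules).

```python
import sqlite3

def find_user_risky(conn: sqlite3.Connection, username: str):
    # The kind of pattern the guardrails discourage: concatenating user input
    # into SQL leaves the query open to injection.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = '" + username + "'"
    ).fetchone()

def find_user_parameterized(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver binds the value safely, which is the
    # behavior the rules push AI-generated code toward.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchone()
```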
“These guardrails add a meaningful layer of defense, especially when developers are moving fast, multitasking, or find themselves trusting AI tools a little too much,” said Pieter Danhieux, co-founder and CEO of Secure Code Warrior. “We’ve kept our rules clear, concise and strictly focused on security practices that work across a wide range of environments, intentionally avoiding language- or framework-specific guidance. Our vision is a future where security is seamlessly integrated into the developer workflow, regardless of how code is written. This is just the beginning.”
SingleStore adds new capabilities for deploying AI
The company has improved the overall data integration experience by allowing customers to use SingleStore Flow within Helios to move data from Snowflake, Postgres, SQL Server, Oracle, and MySQL to SingleStore.
It also improved the integration with Apache Iceberg by adding a speed layer on top of Iceberg to improve data exchange speeds.
Other new features include the ability for Aura Container Service to host Cloud Functions and Inference APIs, integration with GitHub, Notebooks scheduling and versioning, an updated billing forecasting UI, and easier pipeline monitoring and sequences.