

OpenAI releases research preview of GPT-5.3-Codex-Spark for ChatGPT Pro users
GPT-5.3-Codex-Spark is a lightweight version of the company's coding model, GPT-5.3-Codex, that is optimized to run on ultra-low-latency hardware and can deliver over 1,000 tokens per second.
It is the first outcome of the company's recently announced partnership with Cerebras to add 750MW of ultra-low-latency AI compute. OpenAI says it is being released as a research preview to give developers the opportunity to start benefiting from this partnership while the company continues ramping up Cerebras in its data centers.
“Codex-Spark is our first model designed specifically for working with Codex in real time: making targeted edits, reshaping logic, or refining interfaces and seeing results instantly. With Codex-Spark, Codex now supports both long-running, ambitious tasks and getting work done in the moment. We hope to learn from how developers use it and incorporate feedback as we continue to expand access,” OpenAI wrote in a post.
GitHub begins technical preview for Agentic Workflows
GitHub Agentic Workflows allow developers to describe the outcome they want in plain Markdown, add it as an automated workflow to their repository, and have it executed by a coding agent in GitHub Actions.
Agentic Workflows run as standard GitHub Actions workflows with additional guardrails for sandboxing, permissions, control, and review. Additionally, they support a variety of coding agent engines, including Copilot CLI, Claude Code, and OpenAI Codex.
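As a rough illustration, a workflow of this kind might look something like the sketch below: a Markdown file with YAML frontmatter declaring the trigger, permissions, and agent engine, followed by plain-language instructions for the agent. The field names and layout here are illustrative assumptions, not GitHub's confirmed schema; consult the technical preview documentation for the supported format.

```markdown
---
# Illustrative frontmatter; actual field names may differ in the technical preview
on:
  issues:
    types: [opened]
permissions:
  contents: read
  issues: write
engine: copilot
---

# Triage new issues

When a new issue is opened, read its title and body, summarize the problem
in a comment, and suggest up to three existing labels that fit it.
```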
“Using GitHub Agentic Workflows makes entirely new classes of repository automation and software engineering possible, in a way that fits naturally with how developer teams already work on GitHub,” GitHub wrote in a post.
Google adds Automated Reviews to Conductor in the Gemini CLI
Conductor is a Gemini CLI extension that helps bring more development context into the terminal, and the new Automated Review feature generates a post-implementation report after the agent completes its coding tasks. Findings are categorized by severity (low, medium, and high) so that developers can prioritize where to iterate first.
“This level of detail ensures that ‘agentic’ development doesn’t mean ‘unsupervised’ development. Instead, it creates a workflow where the AI provides the labor and the developer provides the high-level architectural oversight, backed by automated verification,” Google wrote in a blog post.
Additionally, Google announced that Gemini CLI extensions will now be able to define settings that the user will be prompted to provide when installing an extension. By providing things like API keys, base URLs, and project identifiers upfront, users will hopefully have fewer configuration errors to troubleshoot when working with the Gemini CLI, the company explained.
Google upgrades Gemini 3 Deep Think mode
According to Google, the upgraded model features improvements across math and programming reasoning, as well as specific scientific domains like chemistry and physics. It is available to Google AI Ultra subscribers in the Gemini app, and to select researchers, engineers, and enterprises in the Gemini API for the first time.
“We updated Gemini 3 Deep Think in close partnership with scientists and researchers to tackle tough research challenges, where problems often lack clear guardrails or a single correct answer and data is often messy or incomplete. By blending deep scientific knowledge with everyday engineering utility, Deep Think moves beyond abstract theory to drive practical applications,” Google wrote in a post.
GitHub Copilot testing for .NET is now generally available
Now available in Visual Studio 2026 v18.3, the testing feature allows developers to generate unit tests within their IDE. The company added new capabilities to coincide with this GA release, including deeper IDE integration, more natural prompting, and new ways to invoke the testing experience.
The team plans to focus next on handling more advanced testing requests, which may involve addressing requirements like allowing developers to clarify intent, confirm assumptions, and review proposed plans before generating tests.
“General availability is an important milestone, but it’s not the end of the journey. We continue to run user studies and gather feedback to understand how developers use GitHub Copilot testing for .NET in real-world scenarios, especially as requests grow in size and complexity,” Microsoft wrote in a blog post.
Anthropic raises $30 billion in Series G funding
This latest funding round was led by GIC and Coatue; co-led by D. E. Shaw Ventures, Dragoneer, Founders Fund, ICONIQ, and MGX; and had participation from 30 other investors. As of this round, Anthropic is now valued at $380 billion post-money.
In its announcement, Anthropic also revealed that its run-rate revenue is now $14 billion, and that it has grown by 10x annually since its first funding round three years ago.
“Whether it’s entrepreneurs, startups, or the world’s largest enterprises, the message from our customers is the same: Claude is increasingly becoming essential to how businesses work,” said Krishna Rao, chief financial officer of Anthropic. “This fundraising reflects the incredible demand we’re seeing from these customers, and we will use this funding to continue building the enterprise-grade products and models they’ve come to depend on.”
