This week we unveiled a wave of purpose-built datacenters and infrastructure investments we are making around the globe to support the worldwide adoption of cutting-edge AI workloads and cloud services.
Today in Wisconsin we launched Fairwater, our newest US AI datacenter, the largest and most sophisticated AI factory we have built yet. In addition to our Fairwater datacenter in Wisconsin, we also have multiple identical Fairwater datacenters under construction in other locations across the US.
In Narvik, Norway, Microsoft announced plans with nScale and Aker JV to develop a new hyperscale AI datacenter.
In Loughton, UK, we announced a partnership with nScale to build the UK's largest supercomputer to support services in the UK.
These AI datacenters are significant capital projects, representing tens of billions of dollars of investment and hundreds of thousands of cutting-edge AI chips, and will seamlessly connect to our global Microsoft Cloud of more than 400 datacenters in 70 regions around the world. Through innovation that lets us link these AI datacenters into a distributed network, we multiply their efficiency and compute exponentially to further democratize access to AI services globally.
So what is an AI datacenter?
The AI datacenter: the new factory of the AI era

An AI datacenter is a unique, purpose-built facility designed specifically for AI training as well as for running large-scale artificial intelligence models and applications. Microsoft's AI datacenters power OpenAI, Microsoft AI, our Copilot capabilities and many other leading AI workloads.
The new Fairwater AI datacenter in Wisconsin stands as a remarkable feat of engineering, covering 315 acres and housing three massive buildings with a combined 1.2 million square feet under roof. Constructing the facility required 46.6 miles of deep foundation piles, 26.5 million pounds of structural steel, 120 miles of medium-voltage underground cable and 72.6 miles of mechanical piping.
Unlike typical cloud datacenters, which are optimized to run many smaller, independent workloads such as hosting websites, email or business applications, this datacenter is built to work as one massive AI supercomputer using a single flat network interconnecting hundreds of thousands of the latest NVIDIA GPUs. In fact, it will deliver 10x the performance of the world's fastest supercomputer today, enabling AI training and inference workloads at a level never before seen.
The role of our AI datacenters – powering frontier AI
Effective AI models rely on thousands of computers working together, powered by GPUs or specialized AI accelerators, to process massive concurrent mathematical computations. They are interconnected with extremely fast networks so they can share results instantly, and all of this is supported by massive storage systems that hold the data (such as text, images or video) broken down into tokens, the small units of information the AI learns from. The goal is to keep these chips busy all the time, because if the data or the network cannot keep up, everything slows down.
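To make the idea of tokens concrete, here is a minimal, illustrative Python sketch of how raw text can be mapped to the integer token IDs a model trains on. It uses a toy word-level vocabulary purely for illustration; production systems use far more sophisticated subword tokenizers, and nothing here reflects Microsoft's actual data pipeline.

```python
# Toy word-level tokenizer: each distinct word gets an integer ID.
# Real training pipelines use subword tokenizers, but the principle is the
# same: text in, token IDs out, and those IDs are what the GPUs consume.
from collections import defaultdict

def build_vocab(corpus):
    vocab = defaultdict(lambda: len(vocab))  # next unseen word gets the next ID
    for line in corpus:
        for word in line.split():
            _ = vocab[word]
    return dict(vocab)

def tokenize(line, vocab):
    return [vocab[w] for w in line.split() if w in vocab]

corpus = ["the quick brown fox", "the lazy dog"]
vocab = build_vocab(corpus)               # {'the': 0, 'quick': 1, ..., 'dog': 5}
print(tokenize("the quick dog", vocab))   # [0, 1, 5]
```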
The AI training itself is a cycle: the model processes tokens in sequence, makes predictions about the next one, checks them against the right answers and adjusts itself. This repeats trillions of times until the system gets better at whatever it is being trained to do. Think of it like a professional football team's practice. Each GPU is a player running a drill, the tokens are the plays being executed step by step, and the network is the coaching staff, shouting instructions and keeping everyone in sync. The team repeats plays over and over, correcting mistakes until it can execute them perfectly. By the end, the AI model, like the team, has mastered its playbook and is ready to perform under real game conditions.
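That predict-check-adjust cycle can be summarized in a few lines of code. The sketch below is a minimal, hypothetical next-token training loop written in PyTorch, with a deliberately tiny model and random stand-in data; it is not Microsoft's or OpenAI's training code, but it shows the loop these datacenters run at vastly larger scale.

```python
# Minimal next-token training loop (illustrative only): predict the next
# token, measure the error, adjust the weights, repeat.
import torch
import torch.nn as nn

vocab_size, embed_dim, seq_len, batch = 1000, 64, 32, 8

model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),  # token IDs -> vectors
    nn.Linear(embed_dim, vocab_size),     # vectors -> scores for the next token
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                                          # real runs: trillions of tokens
    tokens = torch.randint(0, vocab_size, (batch, seq_len + 1))  # random stand-in data
    inputs, targets = tokens[:, :-1], tokens[:, 1:]              # predict the token one step ahead
    logits = model(inputs)                                       # predictions at every position
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()    # work out how to adjust every weight
    optimizer.step()   # apply the adjustment, then do it all again
```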
AI infrastructure at frontier scale
Purpose-built infrastructure is critical to being able to power AI efficiently. To compute the token math at the trillion-parameter scale of leading AI models, the core of the AI datacenter is made up of dedicated AI accelerators (such as GPUs) mounted on server boards alongside CPUs, memory and storage. A single server hosts multiple GPU accelerators, connected for high-bandwidth communication. These servers are then installed into a rack, with top-of-rack (ToR) switches providing low-latency networking between them. Every rack in the datacenter is interconnected, creating a tightly coupled cluster. From the outside, this architecture looks like many independent servers, but at scale it functions as a single supercomputer where hundreds of thousands of accelerators can train a single model in parallel.
This datacenter runs a single, massive cluster of interconnected NVIDIA GB200 servers, with millions of compute cores and exabytes of storage, all engineered for the most demanding AI workloads. Azure was the first cloud provider to bring online the NVIDIA GB200 server, rack and full datacenter clusters. Each rack packs 72 NVIDIA Blackwell GPUs, tied together in a single NVLink domain that delivers 1.8 terabytes per second of GPU-to-GPU bandwidth and gives every GPU access to 14 terabytes of pooled memory. Rather than behaving like dozens of separate chips, the rack operates as a single, giant accelerator, capable of processing an astonishing 865,000 tokens per second, the highest throughput of any cloud platform available today. The Norway and UK AI datacenters will use similar clusters and take advantage of NVIDIA's next AI chip design (GB300), which offers even more pooled memory per rack.
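For a sense of what those rack-level numbers mean per GPU, here is a quick back-of-envelope calculation using only the figures quoted above; the per-GPU averages are simple divisions for illustration, not NVIDIA or Microsoft specifications.

```python
# Back-of-envelope division of the rack-level figures above across 72 GPUs.
gpus_per_rack = 72
pooled_memory_tb = 14
rack_tokens_per_sec = 865_000

tokens_per_gpu = rack_tokens_per_sec / gpus_per_rack           # ~12,000 tokens/sec per GPU
memory_per_gpu_gb = pooled_memory_tb * 1024 / gpus_per_rack    # ~199 GB of pooled memory per GPU

print(f"{tokens_per_gpu:,.0f} tokens/sec per GPU")
print(f"{memory_per_gpu_gb:,.0f} GB pooled memory per GPU")
```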
The challenge in building at supercomputing scale, particularly as AI training requirements continue to demand breakthrough scales of computing, is getting the networking topology exactly right. To ensure low-latency communication across multiple layers in a cloud environment, Microsoft needed to extend performance beyond a single rack. For the latest NVIDIA GB200 and GB300 deployments globally, at the rack level these GPUs communicate over NVLink and NVSwitch at terabytes per second, collapsing memory and bandwidth constraints. Then, to connect multiple racks into a pod, Azure uses both InfiniBand and Ethernet fabrics that deliver 800 Gbps in a full fat-tree, non-blocking architecture, ensuring that every GPU can talk to every other GPU at full line rate without congestion. Across the datacenter, multiple pods of racks are interconnected to reduce hop counts and enable tens of thousands of GPUs to function as one global-scale supercomputer.
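To see why per-link bandwidth and a non-blocking fabric matter, here is a rough, hypothetical estimate of the time to synchronize gradients across a pod. It uses the standard ring all-reduce cost model together with an assumed 70-billion-parameter model, bf16 gradients and a 1,024-GPU pod; only the 800 Gbps figure comes from the text, and none of this describes Azure's actual collective algorithms or topology.

```python
# Rough estimate: time for one gradient synchronization over an 800 Gbps link,
# using the standard ring all-reduce cost model. Model size, data type and
# pod size are assumptions chosen purely for illustration.
link_gbps = 800
link_bytes_per_sec = link_gbps / 8 * 1e9        # 800 Gbps ≈ 100 GB/s per link

params = 70e9                                   # assumed 70B-parameter model
bytes_per_param = 2                             # assumed bf16 gradients
gradient_bytes = params * bytes_per_param       # 140 GB of gradients per step

n_gpus = 1024                                   # assumed pod size
traffic_per_gpu = 2 * (n_gpus - 1) / n_gpus * gradient_bytes  # ring all-reduce volume

print(f"~{traffic_per_gpu / link_bytes_per_sec:.1f} s per sync at full line rate")
# A non-blocking fat tree matters because every GPU keeps that full line rate
# as the cluster grows, instead of contending for oversubscribed uplinks.
```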
When laid out in a traditional datacenter hall, the physical distance between racks introduces latency into the system. To address this, the racks in the Wisconsin AI datacenter are arranged in a two-story configuration, so in addition to being networked to adjacent racks, they are networked to additional racks above or below them.
This layered approach sets Azure apart. Microsoft Azure was not just the first cloud to bring GB200 online at rack and datacenter scale; we are doing it at massive scale with customers today. By co-engineering the full stack with the best from our industry partners, coupled with our own purpose-built systems, Microsoft has built the most powerful, tightly coupled AI supercomputer in the world, purpose-built for frontier models.

Addressing the environmental impact: closed-loop liquid cooling at facility scale
Traditional air cooling cannot handle the density of modern AI hardware. Our datacenters use advanced liquid cooling systems: integrated pipes circulate cold liquid directly into servers, extracting heat efficiently. The closed-loop recirculation ensures zero water waste, with water needed only to fill the system once, after which it is continually reused.
By designing purpose-built AI datacenters, we were able to build liquid cooling infrastructure directly into the facility, giving us greater rack density in the datacenter. Fairwater is supported by the second-largest water-cooled chiller plant on the planet and will continuously circulate water in its closed-loop cooling system. The hot water is piped out to cooling "fins" on each side of the datacenter, where 172 20-foot fans chill it and recirculate it back into the datacenter. This system keeps the AI datacenter running efficiently, even at peak loads.

Over 90% of our datacenter capacity uses this system, requiring water only once during construction and continually reusing it with no evaporation losses. The remaining 10% of traditional servers use outdoor air for cooling, switching to water only on the hottest days, a design that dramatically reduces water usage compared to traditional datacenters.
We are also using liquid cooling to support AI workloads in many of our existing datacenters; this liquid cooling is done with Heat Exchanger Units (HXUs) that also operate with zero operational water use.
Storage and compute: Built for AI speed
Modern datacenters can contain exabytes of storage and millions of CPU compute cores. To support the AI infrastructure cluster, an entirely separate datacenter infrastructure is needed to store and process the data used and generated by the AI cluster. To give you a sense of the scale: the Wisconsin AI datacenter's storage systems are five football fields in length!

We reengineered Azure Storage for the most demanding AI workloads across these massive datacenter deployments, delivering true supercomputing scale. Each Azure Blob Storage account can sustain over 2 million read/write transactions per second, and with millions of accounts available, we can elastically scale to meet virtually any data requirement.
Behind this capability is a fundamentally rearchitected storage foundation that aggregates capacity and bandwidth across thousands of storage nodes and hundreds of thousands of drives. This enables scaling to exabytes of storage while eliminating the need for manual sharding and simplifying operations for even the largest AI and analytics workloads.
Key innovations such as BlobFuse2 deliver high-throughput, low-latency access for GPU node-local training, ensuring that compute resources are never idle and that massive AI training datasets are always available when needed. Multiprotocol support allows seamless integration with diverse data pipelines, while deep integration with analytics engines and AI tools accelerates data preparation and deployment.
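As a concrete illustration of the filesystem-style access BlobFuse2 provides, the sketch below streams training shards from a Blob Storage container that has been mounted as a local path (the mount itself is set up separately with the blobfuse2 CLI). The mount point and shard naming here are hypothetical, not taken from any Microsoft example.

```python
# Minimal sketch: once a container is mounted via BlobFuse2, training code
# can stream shards like ordinary local files and keep the GPUs fed.
from pathlib import Path

MOUNT = Path("/mnt/training-data")        # hypothetical BlobFuse2 mount point

def iter_shards(pattern: str = "shard-*.bin", chunk_bytes: int = 8 << 20):
    """Stream each shard in fixed-size chunks so reads can overlap with compute."""
    for shard in sorted(MOUNT.glob(pattern)):
        with shard.open("rb") as f:
            while chunk := f.read(chunk_bytes):
                yield shard.name, chunk

for name, chunk in iter_shards():
    # Hand the bytes to the tokenizer / data loader feeding the GPUs.
    print(f"{name}: read {len(chunk)} bytes")
```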
Automatic scaling dynamically allocates resources as demand grows. Combined with advanced security, resiliency and cost-effective tiered storage, Azure's storage platform sets the pace for next-generation workloads, delivering the performance, scalability and reliability required.
AI WAN: Connecting multiple datacenters into an even larger AI supercomputer
These new AI datacenters are part of a global network of Azure AI datacenters, interconnected through our Wide Area Network (WAN). This is not just about one building; it is about a distributed, resilient and scalable system that operates as a single, powerful AI machine. Our AI WAN is built with room to grow at AI-native bandwidth scales, enabling large-scale distributed training across multiple, geographically diverse Azure regions and allowing customers to harness the power of a giant AI supercomputer.
This is a fundamental shift in how we think about AI supercomputers. Instead of being limited by the walls of a single facility, we are building a distributed system where compute, storage and networking resources are seamlessly pooled and orchestrated across datacenter regions. This means greater resiliency, scalability and flexibility for customers.
Bringing it all together
To meet the critical needs of the biggest AI challenges, we needed to redesign every layer of our cloud infrastructure stack. This is not just about isolated breakthroughs, but about composing multiple new approaches across silicon, servers, networks and datacenters, leading to advances where software and hardware are optimized as one purpose-built system.
Microsoft's Wisconsin datacenter will play a critical role in the future of AI, built on real technology, real investment and real community impact. As we connect this facility with other regional datacenters, and as every layer of our infrastructure is harmonized as a complete system, we are unleashing a new era of cloud-powered intelligence, secure, adaptive and ready for what's next.
To learn more about Microsoft's datacenter innovations, check out the virtual datacenter tour at datacenters.microsoft.com.
Scott Guthrie is responsible for hyperscale cloud computing solutions and services including Azure, Microsoft's cloud computing platform, generative AI solutions, data platforms, and information and cybersecurity. These platforms and services help organizations worldwide solve urgent challenges and drive long-term transformation.