Supermicro announces largest U.S. campus in San Jose to scale rack-level, liquid-cooled AI systems

Supermicro is making a sizable bet on U.S.-based AI infrastructure. On April 27, 2026, the company announced what it called its largest U.S. location: a new Data Center Building Block Solutions campus near its headquarters in San Jose, California. The site spans roughly 32.8 acres and more than 714,000 square feet, becoming Supermicro’s fourth Bay Area location and expanding its regional footprint to nearly 4 million square feet.
The facility is designed to support the full operational chain for the company’s growing AI infrastructure portfolio, including advanced system design, domestic manufacturing, testing, service and global distribution. It also includes 10 MW of on-campus power capacity, reflecting how power- and cooling-intensive it has become to integrate and validate AI racks before they ever reach customer sites.
Supermicro is using the project to underscore a strategic shift. The company is leaning into its Data Center Building Block Solutions (DCBBS) approach, framing itself less as a traditional server maker and more as a provider of pre-engineered, rack-scale, liquid-cooled AI infrastructure intended to shorten the time from GPU allocation to production deployment.
That emphasis reflects broader industry moves from server-level integration to rack and cluster deployment, as well as the growing importance of liquid cooling as power densities rise. Charles Liang, Supermicro’s president and CEO, said the San Jose DCBBS campus represents a direct investment in American innovation and manufacturing leadership.
He said the expansion deepens the company’s roots in Silicon Valley by creating high-quality professional roles, and described the project as a way to advance domestic innovation, solution value and production capacity while improving Time-to-Online (TTO) and build-out efficiency for next-generation AI infrastructure.
The new campus extends Supermicro’s longstanding modular “building block” model beyond individual servers and onto the data center floor. Historically, the company’s strength has been speed: rapidly combining motherboards, chassis, power, processors, GPUs, storage, networking and cooling into workload-specific systems.
In the AI era, the task has become integrating scarce GPUs, high-speed networking, liquid cooling, power distribution and software validation into deployable, rack-scale systems. According to Supermicro, the San Jose campus will function as an AI infrastructure staging and validation environment, where liquid-cooled racks can be assembled, tested and shipped as integrated systems rather than as discrete components.
The company says the facility enables closer collaboration with major customers and suppliers, reduces shipping time and keeps engineering and manufacturing teams closely aligned. The operational focus mirrors how AI infrastructure is now tested and validated: full-rack burn-in, coolant loop testing, leak detection, network verification, power sequencing and thermal performance checks under realistic conditions.
In doing so, Supermicro aims to address a growing chokepoint in AI infrastructure: the capacity to integrate and validate complete systems, not just access to GPUs.
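The validation workflow described above (full-rack burn-in, coolant loop testing, leak detection, network verification, power sequencing and thermal checks) can be pictured as an ordered, fail-fast pipeline. The following is a minimal illustrative sketch only; the `Rack` class, stage names as identifiers, and the harness logic are hypothetical and not a Supermicro API.

```python
from dataclasses import dataclass, field

@dataclass
class Rack:
    # Hypothetical record of one rack moving through integration.
    rack_id: str
    results: dict = field(default_factory=dict)

# Ordered stages mirroring the checks named in the article.
STAGES = [
    "full_rack_burn_in",
    "coolant_loop_test",
    "leak_detection",
    "network_verification",
    "power_sequencing",
    "thermal_performance",
]

def run_stage(rack: Rack, stage: str) -> bool:
    # Placeholder: a real harness would drive instruments and read
    # telemetry here; this sketch simply records a passing result.
    rack.results[stage] = "pass"
    return True

def validate_rack(rack: Rack) -> bool:
    # Stages run strictly in order; the first failure stops the
    # pipeline so a rack never ships partially validated.
    for stage in STAGES:
        if not run_stage(rack, stage):
            return False
    return True

rack = Rack("SJ-001")
print(validate_rack(rack))   # True
print(len(rack.results))     # 6 — one result per stage
```

The fail-fast ordering is the point of an on-campus staging environment: a leak or power-sequencing fault is caught before network and thermal testing, and well before the integrated rack leaves the facility.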
