
Hewlett Packard Enterprise (HPE) has expanded its Nvidia AI Computing by HPE portfolio with new offerings designed to enable secure, scalable AI factories and advanced data centre networking.
The expansion includes the opening of an AI Factory Lab in Grenoble, France. The facility is intended to let enterprises test and validate workloads on sovereign, air-cooled infrastructure operating entirely within the European Union (EU).

The lab supports compliance with EU data sovereignty requirements and is available to global customers seeking to evaluate their AI deployments in a regionally compliant setting.
The Grenoble lab is equipped with HPE servers, HPE Juniper Networking PTX and MX Series routers, Nvidia accelerated computing hardware, Nvidia Spectrum-X Ethernet networking, HPE Alletra storage, and the government-ready edition of Nvidia AI Enterprise software.
This allows customers to assess performance on EU-based infrastructure, addressing regulatory compliance for distributed AI workloads.
The Grenoble-based AI Factory Lab is scheduled to open in the second quarter of 2026.
HPE also announced a collaboration with Carbon3.ai to establish the Private AI Lab in London.
This environment uses the HPE Private Cloud AI platform, the Nvidia AI Enterprise suite, and Nvidia hardware to support enterprise adoption of AI applications in the UK.
Nvidia founder and CEO Jensen Huang said: “We are transforming the data centre into an AI factory, a manufacturing plant for the new industrial revolution, and by deploying the full stack of NVIDIA accelerated computing and Spectrum-X Ethernet networking with HPE, we are creating the template for sovereign AI.
“The new AI Factory Lab provides a foundry where customers can turn data into value, securely and at scale.”
In response to requirements around operational sovereignty in Europe, HPE Private Cloud AI now offers additional graphics processing unit (GPU) configurations using NVIDIA RTX PRO 6000 Blackwell Server Edition and Hopper GPUs.
Integration of STIG-hardened and FIPS-enabled Nvidia AI Enterprise in isolated environments supports security for compliance-driven workflows.
The platform adopts Nvidia Multi-Instance GPU (MIG) technology to provide fractionalisation capabilities aimed at optimising resource utilisation, allowing a single physical GPU to be partitioned into isolated slices.
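For readers unfamiliar with MIG, the minimal sketch below shows what that fractionalisation looks like at the driver level. It assumes the NVIDIA Management Library Python bindings (nvidia-ml-py/pynvml) and a MIG-enabled GPU; it is purely illustrative and is not part of any HPE Private Cloud AI interface.

```python
# Illustrative only: enumerate MIG partitions on a MIG-enabled GPU using the
# NVIDIA Management Library Python bindings (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)            # first physical GPU
    current, _pending = pynvml.nvmlDeviceGetMigMode(gpu)  # raises NVMLError if MIG is unsupported
    if current != pynvml.NVML_DEVICE_MIG_ENABLE:
        print("MIG is not enabled on this GPU")
    else:
        # Each MIG device is an isolated slice of the GPU with its own memory and compute.
        for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
            except pynvml.NVMLError:
                continue  # this slot has no MIG device configured
            mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
            print(f"MIG device {i}: {mem.total / 2**30:.1f} GiB dedicated memory")
finally:
    pynvml.nvmlShutdown()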
New Datacenter Ops Agents from World Wide Technology (WWT), Nvidia, and HPE are being introduced to automate management tasks across agentic AI and hybrid cloud environments.
HPE’s sovereign AI factory solutions are now delivered with system architectures designed for country-specific regulatory compliance. These reference designs incorporate security controls necessary for audit support and regulated industry alignment.
For datacentre networking, HPE has integrated the Nvidia Spectrum-X Ethernet platform with BlueField-3 data processing units (DPUs), extending high-performance connectivity both within and between datacentres and clouds.
These networking capabilities are further extended by HPE Juniper Networking’s MX and PTX routing platforms, which provide low-latency connections across geographically distributed clusters.
On the storage side, HPE will ship the Alletra Storage MP X10000 Data Intelligence Nodes from January 2026.
This storage architecture introduces inline analytics by embedding Nvidia accelerated computing directly in the data path.
Running the Nvidia AI Enterprise stack in line with the reference design for the Nvidia AI Data Platform, these nodes analyse incoming data in real time to support automated pattern inference for downstream AI pipelines.
HPE has launched the Nvidia GB200 NVL4, now available for enterprise deployment. Each system integrates two Grace CPUs and four Blackwell GPUs per node, supporting up to 136 GPUs per rack.
On security integration, CrowdStrike has been named the endpoint security provider for HPE Private Cloud AI deployments across hybrid environments.
This builds on CrowdStrike’s existing partnerships with both HPE and Nvidia around securing accelerated large language model applications.
HPE president and CEO Antonio Neri said: “HPE and Nvidia continue to provide the foundation for secure AI factories at any scale, with new innovations that deliver a greater range of performance for more diverse workloads than ever before.”
Fortanix technology will also be used alongside Nvidia Confidential Computing on HPE Private Cloud AI platforms to secure agentic workloads in highly regulated or sovereign use cases.
Sovereign AI factory architectures are also available now, while orders for the Alletra Storage MP X10000 Data Intelligence Nodes will open in January 2026.

