Nvidia is opening up its high-performance AI systems to third-party custom chips and AI accelerators, in a bid to extend its dominance by being everywhere that AI heavy lifting is needed.
This means hyperscale data centre operators and large businesses can enjoy some of Nvidia’s much-sought-after capabilities for AI training or agentic AI inferencing, while retaining the hardware they have already invested in over the years.
Until now, NVLink, the high-speed switching and interconnect technology that has enabled Nvidia to link multiple graphics processing units (GPUs) and CPUs for top-notch AI performance, has been limited to Nvidia’s own systems.
Now, Nvidia’s partners – MediaTek, Marvell, Alchip Technologies, Astera Labs, Synopsys and Cadence – will help make third-party semi-custom chips that incorporate some of Nvidia’s groundbreaking technologies in their silicon.
The new technology, called NVLink Fusion, also promises to let chipmakers such as Fujitsu and Qualcomm Technologies couple their custom CPUs with Nvidia GPUs in a rack-scale architecture to boost AI performance.

How much of a performance boost third-party systems will see remains an open question, of course, but Nvidia seems optimistic it can expand its footprint.
“NVLink Fusion opens Nvidia’s AI platform and rich ecosystem for partners to build specialised AI infrastructures,” said chief executive officer Jensen Huang today, on the eve of the Computex show in Taipei.
In other words, users can get up to speed without going all-in on a costly Nvidia stack in the data centre. Or, as Huang said several times at an Nvidia event today: he is grateful if you buy a full Nvidia kit, but happy, too, if you buy at least some Nvidia.
Besides large data centres, Nvidia has also set its sights on businesses and AI developers by pushing out an RTX Pro server and a lunchbox-sized DGX Spark desktop computer that run workloads on a smaller scale.
Shown off by Huang today at a keynote event, they target parts of the market not served by Nvidia’s most powerful Blackwell GPUs meant for large data centres.
They promise AI performance for users who may not have access to Nvidia-powered AI data centres, or who prefer to build their own AI infrastructure. They also fill a gap in Nvidia’s current line-up, which is often sold to large hyperscale cloud providers.

The RTX Pro server will pack up to six Nvidia Blackwell GPUs for tasks such as multimodal AI inference, scientific computing, graphics and video applications. Plus, it promises to help businesses run complex AI agents, which will need the additional horsepower of a powerful GPU.
The server, said Huang, will also run regular enterprise workloads such as hypervisors for virtual machines and even stream Citrix virtual desktops. “Even Crysis works on here,” he quipped, referring to a PC game that was notoriously hard to run.
Cisco, Dell Technologies, Hewlett Packard Enterprise and Lenovo will offer full-stack systems with the Nvidia RTX Pro servers to run Nvidia’s AI Enterprise software.
Nvidia’s data centre partners, such as Asus, Compal, Foxconn, Gigabyte, MSI, Pegatron, Quanta Cloud Technology and Supermicro, will also offer the RTX Pro servers, which are expected later this year.

Another key technology that Nvidia announced today was the petite DGX Spark desktop. Delivering 1 petaflop of AI compute, it is pitched as a “personal AI cloud” for AI developers, scientists and researchers to run tests quickly without having to spin up cloud resources.
To do so, it packs in an Nvidia GB10 Grace Blackwell chip, fifth-generation Tensor Cores and 128GB of unified memory in a chassis no bigger than a regular mini PC. Just as importantly, users can export the models they work on to Nvidia’s DGX Cloud or any accelerated cloud or data centre infrastructure.
There is also a more powerful desktop version called the DGX Station. Again aimed at individual users, this “personal” AI workhorse comes packed with an Nvidia GB300 Grace Blackwell Ultra Desktop chip, which offers up to 20 petaflops of AI performance, and 784GB of unified system memory.
The DGX Station can serve as an individual desktop for one user running advanced AI models using local data, or as an on-demand, centralised compute node for multiple users, according to Nvidia.

As in previous years, the Taiwan-born Huang repeatedly stressed the importance of Taiwanese electronics manufacturers.
The island’s ecosystem, he said, is crucial in the quest to build so-called AI factories, which are expected to generate “intelligence” as a utility, like electricity and data before it.
Speaking for almost two hours today, he announced a tie-up with the Taiwanese government to build an AI supercomputer and even revealed plans for a new Nvidia campus in Taipei, pending approval from the city government.
Calling on Taiwanese residents to contact the city mayor, he pitched Nvidia as a key player in the AI race. For a popular homecoming AI hero whose every step in Taipei is closely followed by local media each time he visits, it is hard to see him not getting his way.